f:\12000 essays\sciences (985)\Astronomy\Apollo 13 again.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Apollo 13 (AS-508)
Houston, we have a problem.
The Apollo 13 mission was launched at 2:13 p.m. EST on April 11, 1970, from launch
complex 39A at Kennedy Space Center. The space vehicle crew consisted of James A. Lovell, Jr.,
commander; John L. Swigert, Jr., command module pilot; and Fred W.
Haise, Jr., lunar module pilot.
The Apollo 13 mission was planned as a lunar landing mission but
was aborted en route to the moon after about 56 hours of flight due to the loss
of service module cryogenic oxygen and the consequent loss of the capability to
generate electrical power, to provide oxygen, and to produce water.
Spacecraft systems performance was nominal until the fans in cryogenic oxygen tank 2 were
turned on at 55:53:18 ground elapsed time (GET). About 2 seconds after energizing the fan circuit,
a short was indicated in the current from fuel cell 3, which was supplying power to cryogenic
oxygen tank 2 fans. Within several additional seconds, two other shorted conditions occurred.
Electrical shorts in the fan circuit ignited the wire insulation, causing temperature and
pressure to increase within cryogenic oxygen tank 2. When pressure reached the cryogenic
oxygen tank 2 relief valve full-flow conditions of 1008 psi, the pressure began decreasing
for about 9 seconds, at which time the relief valve probably reseated, causing the pressure
to rise again momentarily. About a quarter of a second later, a vibration disturbance was
noted on the command module accelerometers.
The next series of events occurred within a fraction of a second between the
accelerometer disturbances and the data loss. Heat caused a tank line inside the
vacuum jacket to burst, pressurizing the annulus and, in turn, causing the blow-out plug on the
vacuum jacket to rupture. Some mechanism in bay 4 combined with the oxygen buildup in
that bay to cause a rapid pressure rise which resulted in separation of the outer panel. The
panel struck one of the dishes of the high-gain antenna. The panel separation shock closed
the fuel cell 1 and 3 oxygen reactant shut-off valves and several propellant and helium
isolation valves in the reaction control system. Data were lost for about 1.8 seconds as the
high-gain antenna, having been struck and damaged by the panel, switched from narrow beam
to wide beam.
As a result of these occurrences, the CM was powered down and the LM was configured
to supply the necessary power and other consumables.
The CSM was powered down at approximately 58:40 GET. The surge tank and
repressurization package were isolated with approximately 860 psi residual pressure (approx. 6.5
lbs of oxygen total). The primary water glycol system was left with radiators bypassed.
All LM systems performed satisfactorily in providing the necessary power and
environmental control to the spacecraft. The requirement for lithium hydroxide to remove carbon
dioxide from the spacecraft atmosphere was met by a combination of the CM and LM cartridges
since the LM cartridges alone would not satisfy the total requirement. The crew, with direction
from Mission Control, built an adapter for the CM cartridges to accept LM hoses.
The service module was jettisoned at approximately 138 hours
GET, and the crew observed and photographed the bay-4 area where the
cryogenic tank anomaly had occurred. At this time, the crew remarked
that the outer skin covering for bay-4 had been severely damaged, with a
large portion missing. The LM was jettisoned about 1 hour before entry,
which was performed nominally using the primary guidance and navigation system.
f:\12000 essays\sciences (985)\Astronomy\Apollo and Challenger Disasters.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
This paper is going to compare the Apollo 1 and the Challenger disasters. Both missions
ended in unfortunate disasters, caused by a series of oversights and misjudgments.
How did this loss of life occur in such a high-tech environment?
Apollo 1
On January 27, 1967, the three astronauts of Apollo 1 were doing a test
countdown on the launch pad. Gus Grissom was in charge. His crewmates were Edward H.
White, the first American to walk in space, and Roger B. Chaffee, a naval officer going up
for the first time. 182 feet below, RCA technician Gary Propst was seated in front of a
bank of television monitors, listening to the crew radio channel and watching the various
televisions for important activity.
Inside Apollo 1 there was a metal door with a sharp edge. Each time the door
was opened and shut, it scraped against an environmental control unit wire. The repeated
abrasion had exposed two tiny sections of wire. A spark alone would not cause a fire, but
just below the cuts in the cable was a length of aluminum tubing, which took a ninety-
degree turn. There were hundreds of these turns in the whole capsule. The aluminum
tubing carried a glycol cooling fluid, which is not flammable, but when exposed to air it
turns to flammable fumes. The capsule was filled with pure oxygen in an effort to allow
the astronauts to work more efficiently. Pure oxygen also turns normally non-flammable items
into highly flammable ones. Raschel netting, which was highly flammable in the pure oxygen
environment, was near the exposed sections of the wire.
At 6:31:04 p.m. the Raschel netting burst into an open flame. A second after the
netting burst into flames, the first message came over the crew's radio channel: "Fire,"
Grissom said. Two seconds later, Chaffee said clearly, "We've got a fire in the cockpit."
His tone was businesslike (Murray 191).
There was no camera in the cabin, but a remote-control camera, if zoomed in on
the porthole, could provide a partial, shadowy view of the interior of the spacecraft. There
was a lot of motion, Propst explained, as White seemed to fumble with something and
then quickly pull his arms back, then reach out again. Another pair of arms came into
view from the left, Grissom's, as the flames spread from the far left-hand corner of the
spacecraft toward the porthole (Murray 192). The crew struggled for about 30 seconds
after their suits failed, and then died of asphyxiation, not the heat. To get out of the
capsule, the astronauts had to remove three separate hatches; at least 90 seconds was required
to open all three.
The Saturn IB rocket contained no fuel, so the chance of fire was given little thought,
and there were no fire crews or doctors standing by. Many people were listening to the
crew's radio channel and would have responded, but they were caught off guard, and the first
mention of fire was not clearly heard by anyone.
Challenger
On January 28, 1986, the space shuttle Challenger was ready to launch. The lead-up
to the launch had not been without its share of problems: cold weather,
icicles, and brittle, faulty O-rings were the main concerns. It was later revealed that the deep
doubts of some engineers had not been passed on by their superiors to the shuttle director,
Mr. Moore.
Something was unusual about that morning in Florida: it was uncommonly cold.
The night before, the temperature had dropped to twenty-two degrees Fahrenheit. Icicles
hung from the launch pad, and it was said that they could have broken off and damaged
the space shuttle's heat tiles. It was the coldest day on which a shuttle launch had
ever been attempted.
Cold weather had made the rubber O-ring seals so brittle that they no longer sealed
the joint properly. People feared a reduction in the efficiency of the O-ring seals on the
solid rocket boosters. Level 1 authorities at NASA had received enough information
about faulty O-rings by August 1985 that they should have ordered discontinuation of
flights.
The shuttle rocketed away from the icicle-laden launch pad, carrying a New
Hampshire schoolteacher, NASA's first citizen in space. What followed was the worst accident
in the nearly 25-year history of NASA's manned space program. At 11:38 a.m. Cape time, main
engine ignition was followed by clouds of smoke and flame from the solid-fuel rocket boosters.
Unknown to anyone in the cabin or on the ground, a jet of flame from the right-hand
booster rocket was playing against the giant orange fuel tank. Seventy-three seconds
after lift-off the Challenger suddenly disappeared amid a cataclysmic explosion which
ripped the fuel tank from nose to tail (Timothy 441). The explosion occurred as Challenger
was 10.35 miles high and 8.05 miles downrange from the cape, speeding toward space at
1,977 mph. Lost along with the $1.2 billion spacecraft was a $100 million satellite that
was to have become an important part of NASA's communications network (Associated
Press 217). Pictures revealed that even after the enormous explosion the
cockpit remained somewhat intact, but the aerodynamic pressure exerted on the human
passengers would have killed anyone who survived the explosion. The remains of the
shuttle were spread over miles of ocean; just over half were recovered.
In comparison, both disasters were preventable. Both disasters had a main
explosion or malfunction, but even if there had been survivors they would have died because
there was no means of escape. The Challenger disaster stemmed mainly from a lot of people
wanting to get better jobs and more money, or simply to get on the good side of someone.
Apollo 1 had many problems which should have been caught.
Conclusion
Apollo 1 had many deficiencies: loose, shoddy wiring, excessive use of
combustible materials in spite of a 100 percent oxygen atmosphere, inadequate provisions
for rescue, and a three-layer, ninety-plus-second hatch. The Challenger had faulty O-rings,
icicles, and bad management which threatened to bring the entire American astronaut
program to an end. Over a billion dollars was lost altogether.
Both disasters could have been prevented if the time, effort, and funding had been
spent. Many people involved in both disasters were either lazy or greedy.
Works Cited
Biel, Timothy L. The Challenger. San Diego: Lucent Books, Inc. 1990.
Murray, Charles A. Apollo, the Race to the Moon. New York: Simon and Schuster,
1989.
Appel, Fred and Wolleck, James. The Marshall Cavendish Illustrated Encyclopedia of
Discovery and Expedition. Vol. 16. New York: Marshall Cavendish, 1990.
Bond, Peter. Heroes in Space. New York: Basil Blackwell Ltd, 1987.
Associated Press. Moments in Space. New York: Gallery Books, 1986.
Encarta. "Challenger Disaster." Encyclopedia CD-ROM. Funk and Wagnalls
Corporation, 1983.
Burton, Jonathon. "The Haunting Legacy of the Challenger." Scholastic Update.
December 4, 1992: 10-11.
f:\12000 essays\sciences (985)\Astronomy\Aquarius viewing and history.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aquarius can be found in the SE sky in autumn, especially October. A dark night is especially helpful because many faint stars make up Aquarius; it is hard enough to see a shape in Aquarius, and darkness helps the fainter stars stand out. Up and to the west of Aquarius, Pegasus can be found. Down and to the east of Aquarius, Capricorn can be found.
Aquarius portrays a man or boy spilling water from an urn. Aquarius is identified with Ganymede, a beautiful young shepherd who was abducted by Zeus and taken to Mount Olympus to be the cup bearer for the gods.
Stars:
Sadalmelik: Arabic for "lucky one of the king". It lies just off the celestial equator.
Sadalsuud: It means "luckiest of the lucky" in Arabic. It is the brightest star in the constellation.
Sadachbia: Arabic for "lucky star of hidden things" or " lucky star of the tents." This makes up part of the asterism sometimes called the tent, but is usually called the urn referring to Aquarius.
Skat or Scheat: It comes from the Arabic word for shin and it dates back to the translation of Ptolemy's Almagest.
Albali: The name comes from the Arabic, which means "swallower"; no one really knows why the star got this name
Situla: This name comes from Latin and means "well bucket". Situla was the original Arabic name for the entire constellation Aquarius.
There are three star clusters contained in Aquarius. M2, which was discovered in 1764, is one that can be seen with a small telescope. A larger telescope is needed to make out the individual stars. M72 is another cluster that is located southeast of Albali and isn't far from the Saturn Nebula. NGC 7492 is the third cluster and is located east of Skat.
Aquarius also has two nebulae in it. One is the Saturn Nebula, NGC 7009, so called because it resembles the ringed planet Saturn. A very large telescope is needed to see its rings. It was discovered in 1782 by William Herschel. In a small telescope it appears as a faint disk of fuzzy light. It lies southeast of Albali near the cluster M72. Its central star can be spotted with a large telescope. The other nebula, NGC 7293, southwest of Skat, is the well-known Helix Nebula. It is brighter than NGC 7009, the Saturn Nebula, but has a fainter central star.
There are five galaxies in Aquarius but they only appear as fuzzy patches in amateur telescopes. They are: NGC 7184, NGC 7606, NGC 7721, NGC 7723, and NGC 7727.
f:\12000 essays\sciences (985)\Astronomy\Are We Alone .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Outline
Thesis: We once believed that Earth is the only planet in the Universe
that supports life. Today there is overwhelming evidence that
not only suggests, but supports the very real possibility that we
may share the Universe with other intelligent beings.
I. Things in the Sky
A. The First Documented Sighting
B. The Fever Spreads
1. Pilot Encounters
2. The Lights in the Sky
II. Dents in the Earth
III. Unexplained Phenomena
A. The Writing on the Wall
B. Geodes
IV. What About Religion?
A. The Christian Bible
B. The Ancient Greeks
C. The American Indian
V. Conclusion
We are not Alone.
On June 24th, 1947, while searching for the remains of a downed
Marine C-46 transport lost somewhere in the Mount Rainier area, a young
Idahoan businessman named Kenneth Arnold spotted something that would
change his life forever. Just north of his position, flying at an altitude of
9,500 feet and at an unprecedented airspeed of 1,700 mph, were nine
circular aircraft flying in formation. According to his estimate the aircraft
were approximately the size of a DC-4 airliner (Jackson 4).
This account was the first sighting ever to receive a great deal of media
attention. This sighting gave birth to the phrase "flying saucer," coined by a
reporter named Bill Begrette. Although not the first UFO sighting in history,
Kenneth Arnold's account is considered to be the first documented UFO
sighting.
The following day Mr. Arnold discovered that, in addition to his
sighting, there had been several others in the Mount Rainier area that same day
(Jackson 6).
When most of us think of UFO sightings we picture an unemployed, half-
crazed, alcoholic hick living in a trailer park in the middle of small-town USA.
Oftentimes this description, although a little exaggerated, seems to fit
fairly well. In the past, when the average person spotted a UFO, he or she was quickly
discounted as a kook or con artist in search of either attention or monetary
reward. It wasn't until more reputable figures in our society began to come
forward that we started looking at this issue a little more seriously.
An article written in 1957, entitled "Strange lights over Grenada," by
Aime' Michel describes just such an account:
At 10:35 p.m. on September the 4th, 1957 Cpt Ferreira ordered his
wing to abandon a planned exercise and execute a 50 degree turn to
port. Ferreira was attempting to get a closer look at what he
described as a brilliant, pulsating light hanging low over the horizon.
When the turn was completed he noticed that the object had turned
too. It was still directly off to his left. There was absolutely no doubt
that the orange light was shadowing the F-84s. For another 10
minutes, it followed the jets without changing direction or
appearance. The pilots watched as four small yellow discs broke away
from the large red object and took up a formation on either side of it.
All at once the large luminous disc shot vertically upward while the
smaller discs shot straight towards the F-84s. In an instant the flat
disc sped overhead in a hazy blur and vanished. When Cpt Ferreira
was questioned by Portuguese Air Force investigators he was quoted
as saying: "Please don't come out with the old explanation that we
were being chased by the planet Venus, weather balloons, or freak
atmospheric conditions. What we saw up there was real and
intelligently controlled. And it scared the hell out of us." (32)
This is only one of literally hundreds of pilot accounts that have been
documented and cross verified by other sources. To date the Portuguese
Government has taken no official position as to what the luminous discs
were.
The United States has had more than its fair share of unexplained aerial
objects. In February of 1960 the N.A.A.D.S. (North American Air Defense
System) spotted a satellite of unknown origin orbiting the Earth. They knew
that it wasn't a Soviet satellite because it was orbiting perpendicular to the
trajectory produced by a Soviet launch. It also had a mass estimated at 15
metric tons, no evidence of booster rockets, and traveled at a speed three times
faster than any known satellite. The satellite orbited for two weeks and
disappeared without a trace. Before its disappearance, the object, which
appeared to give off a red glow, was photographed over New York several
times (Jackson 19).
Lights in the sky aren't the only evidence that suggests we may have
cosmic company. In the book "A History of UFO Crashes," the author Kevin
D. Randle gives detailed accounts of numerous UFO crashes in history.
Perhaps the most famous of these crashes occurred on July 4th, 1947 in
Roswell, New Mexico. The crash at Roswell was witnessed from
afar by over a hundred people. Until just recently, no one who was involved
in the recovery operation was talking, but thanks to continued pressure from
UFO enthusiasts our government has begun to declassify much of its UFO-
related material. Perhaps more startling than the government documents
are the accounts given by local police and members of the recovery team.
According to one unnamed witness, a member of the Roswell recovery team:
The crash site was littered with pieces of aircraft. Something about the
size of a fighter plane had crashed, the metal was unlike anything I'd
ever seen before. I picked up a piece the size of a car fender with one
hand; it couldn't have weighed more than a quarter of a pound, and no matter
how hard I tried I couldn't even get it to bend. (10)
In my opinion the most fascinating piece of evidence to come out of
the Roswell crash is the alien autopsy film. Apparently there were more than
bits and pieces of spaceship recovered at Roswell. There is an Air Force video
account of an autopsy being performed on a life form that doesn't share the
common characteristics of organ development found in life forms on this
planet. The film is silent and labeled "Autopsy, Roswell, July 1947" (Randle
17).
As difficult as the Roswell evidence is to explain or discount, it pales in
comparison to the physical evidence left by our ancestors. An illustration
taken from a Nuremberg broadsheet tells how men and women "saw a very
frightful spectacle." At sunrise on April 14th, 1561, "globes, crosses and tubes
began to fight one another"; the event continued for about an hour.
Afterward they fell to the ground in flames; minutes later a "black, spear-like
object appeared." In a Basel broadsheet dated August 7th, 1566, large black
and white globes are seen over Basel, Switzerland. Both events occurred in a
time period when there should have been nothing more than birds and bees filling
our skies. They were each considered to be divine warnings at the time (Gould
95-96).
Ancient physical evidence isn't limited to newspaper illustrations and
sketches on cave walls. Perhaps the most astounding and unexplainable pieces
of physical evidence are a pair of geodes. Both are believed to be
approximately 1,800 years old, and when carefully examined they were identified as
electrical cells. One of the cells, which was discovered in Iraq, was tested and
produced 2 volts of electricity. The other, which was discovered by a pair of
Arizona rock hounds, was damaged when the sedimentary encrustation was
being removed and therefore couldn't be tested (Montgomery 221).
Since the dawn of time man has told stories of heavenly and demonic
beings coming to rule, teach, torment, seduce and provide salvation. Every
culture has myths of ancient gods who strode through the heavens. The
American Indians had the kachinas, who taught them to farm and saved them
from numerous cataclysms. Greece had Zeus, who threw lightning bolts from
his fingertips, and Apollo, who crossed the sky in his golden chariot. The
Christians have Ecclesiastes, who encountered the "ant people" and rode
through the skies with them from Babylon to Israel. Across the entire globe
we find drawings on cave walls that resemble men in space suits and objects
that greatly resemble flying saucers. The sacred artwork of the Hopi Indians
is without a doubt a representation of the waves produced by modern-
day oscilloscopes (Montgomery 225-237). The Hopis are also native to the
area where one of the electrical cells was found. It could be that these things
are no more than mere coincidence, but I doubt it. Man in his arrogance is
reluctant to believe that we may share God's vast, glorious universe with other
beings of intelligence. We sometimes fail to realize that if the Earth were a
day old, the race of man would only have been here for 13 minutes. If you
couple that with the fact that there are black holes and white dwarfs
millions of years older than our sun, it increases the improbability that we are
the only ones out here. In the preceding text I have presented a limited
sampling of the volumes of evidence available. I will close this paper with
a quote from Ecclesiastes 1:9, "there is no new thing under the sun," and that
includes intelligent life.
Works Cited
Ecclesiastes. Holy Bible. Nashville, Tennessee: Thomas Nelson, 1976.
Gould, Robert. Oddities. New York: Bell PC, 1965.
Jackson, Robert. UFO's: Sightings of Strange Phenomena in the Skies.
New Jersey: Chartwell Books, 1995.
Michel, Aime'. "Strange lights over Grenada." Fate Magazine. Aug. 1957:
29-32.
Montgomery, Ruth. Aliens Among Us. New York: G.P. Putnam's Sons, 1985.
Randle, Kevin. A History of UFO Crashes. Avon Books, 1995.
f:\12000 essays\sciences (985)\Astronomy\Aristotle.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aristotle
Aristotle was a Greek philosopher, born in 384 B.C. He studied under another philosopher, Plato, and later tutored Alexander the Great at the Macedonian court. In 335 B.C. he opened a school in the Athenian Lyceum. During the anti-Macedonian agitation after Alexander's death, Aristotle fled to Chalcis, where he later died in 322 B.C. His extant writings, largely in the form of lecture notes made by his students, include the Organon (treatises on logic); Physics; Metaphysics; De Anima (On the Soul); the Nicomachean Ethics and Eudemian Ethics; Politics; De Poetica; Rhetoric; and works on biology and physics. Aristotle held philosophy to be the discerning, through the use of systematic logic as expressed in syllogisms, of the self-evident, changeless first principles that form the basis of all knowledge. He taught that knowledge of a thing requires an inquiry into causality and that the "final cause"--the purpose or function of the thing--is primary.
This is a direct quote from his works (translated):
"The highest good for the individual is the complete exercise of the specifically human function of rationality. In contrast to the Platonic belief that a concrete reality partakes of a from but does not embody it with the exception of the Prime Mover (God), form has no separate existence but is immanent in matter."
Aristotle's work was lost following the decline of the Roman Empire but was reintroduced to the West through the work of Arab and Jewish scholars, becoming the basis of medieval scholasticism.
In my opinion Aristotle was one of the greatest and most important philosophers and scientists in the world's history.
f:\12000 essays\sciences (985)\Astronomy\Asteroid Defense.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Asteroid Defense
When it comes to developing a way to defend the entire planet from destruction, I am all for it. A large asteroid or comet hitting the earth is not a common occurrence, but it has happened many times before, and when it happens again the impact may wipe out all life, including humans. If our government did develop an anti-asteroid defense system, it would have to protect not only our country but the whole planet.
If we had such technology we would first have to be very sure it would work.
We wouldn't want to shoot a nuclear weapon at an asteroid just to have it break into multiple pieces and have those pieces raining down on Earth. One of the most important parts of defending our planet would be to find and chart every asteroid that could threaten us. That would be a very tedious and never-ending job, but it is necessary for the defense system to work. It would do us humans no good to have some sort of defense against asteroids if we don't know when they will strike.
So after thinking about an anti-asteroid defense system, I think that our government should look into constructing one. When one thinks about what an asteroid could do to our planet, it is usually a very scary thought. In the past we have been very lucky with where asteroids have hit our earth. Back in 1908, in the Tunguska region of Siberia, an object from space hit and devastated miles of forest. If that same object had hit New York, it would probably have been like a 20-megaton bomb going off in Times Square. That would have completely altered history. What makes it worse is that it is thought that only a small comet hit Tunguska. What if a huge comet had hit there? These examples are very good reasons why I think that humankind needs to come up with a way to stop asteroids or any other type of object that could kill off all life on earth.
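To give a rough sense of the energies involved, the short sketch below (not from the essay) estimates the kinetic energy of a Tunguska-class impactor and converts it to megatons of TNT. The size, density, and speed values are assumed purely for illustration.

# Illustrative sketch: rough impact energy of a small asteroid or comet.
# All input values below are assumptions, not figures from the essay.
import math

def impact_energy_megatons(diameter_m, density_kg_m3, velocity_m_s):
    # Kinetic energy (1/2 * m * v^2) of a spherical impactor, in megatons of TNT.
    radius = diameter_m / 2.0
    volume = (4.0 / 3.0) * math.pi * radius ** 3      # m^3
    mass = density_kg_m3 * volume                     # kg
    energy_joules = 0.5 * mass * velocity_m_s ** 2    # J
    return energy_joules / 4.184e15                   # 1 megaton TNT = 4.184e15 J

# Assumed parameters: a roughly 60 m stony body arriving at 20 km/s
print(round(impact_energy_megatons(60, 2000, 20000), 1), "megatons")   # about 10.8

Because the energy scales with the square of the velocity and the cube of the diameter, modest changes in those assumptions easily reach the essay's 20-megaton comparison.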
f:\12000 essays\sciences (985)\Astronomy\Astrology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
WHAT IS ASTROLOGY
By Lauren Kaplan
Astrology is the study of planetary influences and their effect on the world and
everything in it. Astrology is usually limited to human beings--their nature and their
affairs--although a chart can be drawn up for just about any event. The horoscope is a
blueprint or pattern of the solar system cast for a particular moment in time. It is from
this that the astrologer bases the interpretation or delineation as indicated by the nature
of the sun, moon, and planets.
The natal horoscope is a chart drawn at the moment of birth to see and understand the
nature and makeup of the soul of the newborn as it takes residence in the physical
vehicle or body. The human soul is a focal point of cosmic energy, and the pattern of the
heavens, as charted in the horoscope, is the means by which the soul comes to know itself and its
destiny.
Astrology points the way to soul development and growth. The soul's strengths and
weaknesses are noted in the horoscope. Life is an opportunity given to the soul for further
enhancement.
Because the heavens are in constant motion, and because this motion is quite ordered
and exact, it is possible to project the positions of the sun, moon, and planets for any
given time. Astrologers use this information to draw up a horoscope and forecast the
"influences" that will affect the soul at that time. Astrologers usually do not predict
actual events in the future. They can only say what might happen, or could happen, but
not what will happen--much like a weather forecast; although many psychics do make
predictions, and astrology is the tool they use to focus their abilities.
Another common feature of astrology is the comparison of birth charts to ascertain the
compatibility of two people. This is a straightforward method of overlaying one
chart upon the other. The aspects or angles formed by the planets are then analyzed to
determine how the energy fields of each person blend together. Some couples form more
harmonious bonds than others; less harmonious bonds offer a greater challenge for peace
and happiness.
Over the years astrologers have developed numerous techniques for expanding their
"art" to include a multitude of services that can only be evaluated upon the merit and
usefulness of that technique. Astrology can only offer so much; it is imperative that the
individual soul strive to attain that which is rightfully theirs in this short life, and to
regard any advice with great care.
f:\12000 essays\sciences (985)\Astronomy\Atlantis & Mir.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Last year something amazing took place, but it wasn't in a laboratory. It wasn't under the ocean, and it wasn't on the land. It was in space. The docking of the United States's Space Shuttle Atlantis and the Russian Space Station Mir was an important step towards international cooperation in space.
This is not the first time the U.S. has been in contact with Russia in space matters. In 1975, an American Apollo capsule and a Russian Soyuz module were locked in orbital tango 245 miles above the earth, both crews tense with feelings of peace and rivalry at the same time. In the past few years, American astronauts have used the Russian training facility at the Cosmodrome to prepare for work on the Mir space station. We couldn't give up the hope of peace and freedom.
Launch Day had arrived; June 29, 1995. During launch and ascent, there wasn't much to think about except the thrill of the ride. Then the tension rose. In less than thirty-six hours, they would have to close a gap of four thousand miles at an altitude of 245 miles while traveling at 17,500 miles per hour, only to have to move within three inches and two degrees of a quickly moving, very fragile object in space. One small thrust of a poorly aligned engine could cost one of four space-worthy shuttles and the world's first and only long-term space station.
Docking was soon over, and was followed by the cosmonauts of Mir greeting the astronauts of Atlantis. Gifts of flowers, candy and fruit were given by the Americans, who in return, following a Russian tradition, received gifts of bread and salt. Knowing the importance of the mission, the Russians, as well as the Americans, made sure the good nature of the mission was plain to see. After an exchange of Mir crew members, Atlantis released the docking clamps and returned home.
This mission was important to the future of the conquest of space, for the American scientists need help with the design of the planned International Space Station Freedom, and the Russians need help financing their space program. Both countries need the help of other countries and space agencies, like Canada, Japan, and the European Space Agency. With everyone working together, the future is more easily reached. As once said by United States President William J. Clinton, "We need to build a bridge to the future." This mission is one of the supports that will hold up that bridge.
f:\12000 essays\sciences (985)\Astronomy\Beta Pictoris Planets Life or what .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BETA PICTORIS: PLANETS? LIFE? OR WHAT?
JARA
ASTRONOMY 102 SEC 013
The ultimate question is: Is there a possibility that life might exist on a planet in the Beta Pictoris system? First, one must ask, Are there planets in the Beta Pictoris system? However, that question would be impossible to answer if one did not answer the most basic questions first: Where do planets come from? And do the key elements and situations needed to form planets exist in the Beta Pictoris system?
To understand where planets come from, one has to first look at where the planets in our solar system came from. Does, or did, our star, the sun, have a circumstellar disk around it? The answer is believed to be yes.
Scientists believe that a newly formed star is immediately surrounded by a relatively dense cloud of gas and dust. In 1965, A. Poveda stated, "That new stars are likely to be obscured by this envelope of gas and dust (1)." In 1967, Davidson and Harwit agreed with Poveda and then termed this occurrence, the "cocoon nebula" (1). Other authors have referred to this occurrence as, a "placental nebula" (1), noting that it sustains the growth of planetary bodies.
For a long time, even before there was the term cocoon nebula, planetary scientists knew that a cocoon nebula must have surrounded the sun, long ago, in order for our solar system to form and take on its current motions (1).
In 1755, a German, named Immanuel Kant, reasoned that "gravity would make circumsolar cloud contract and that rotation would flatten it (1)." Thus, the cloud would assume the general shape of a rotating disk, explaining the fact that the planets, in our solar system, revolve in a disk-shaped distribution.
This idea, about the disk-shaped nebula that formed around the early sun, came to be known as the nebular hypothesis (1). Then, in 1796, a French mathematician named Laplace proposed that the rotating disk continued to cool and contract, forming planetary bodies (1). Also, when investigating the evolution of stars, it was proposed "that a star forms as a central condensation in an extended nebula... The outer part remains behind as the cocoon nebula (1)". During the same study it was also indicated that under various conditions, such as rotation, turbulence, etc., the nucleus of the forming star may divide into two or more bodies orbiting each other (1). This may be the explanation as to why more than half of all star systems are binary or multiple, rather than single stars like our sun.
This same fragmentation may also form bodies too small to become stars. However, they could form into large planets, about the same size as Jupiter (1).
In 1966, Low and Smith calculated that the dust must be orbiting the star at a distance of many tens of astronomical units in order for planets to form (1). Others have reasoned that the cocoon nebula must contain silicate and/or ice particles (planet-forming materials) in order for planetary bodies to be present (1). Still others have concluded that planets form during the early life of a star (1).
After determining that planets are formed in a circumstellar disk surrounding a star, we must ask ourselves, Does Beta Pictoris have a circumstellar disk around it?
Beta Pictoris was found to have a circumstellar disk in 1983. It was first detected by the Infrared Astronomy Satellite. The disk is seen to extend to more than 400 astronomical units from the star (2). The orbits of most of the particles are inclined 5 degrees or less to the plane of the system (2). These minimal orbital inclinations are typical of the major planets in our own solar system. The evidence that the circumstellar material around Beta Pictoris takes the form of a highly flattened disk, rather than a spherical shell, implies an almost certain association with planet formation (2). The disk material itself is believed to be a potential source for planet accretion (2). This retention of nearly coplanar orbits in the Beta Pictoris disk is a qualitative argument in support of its being a relatively young system (2). Some astronomers believe that we are witnessing planet formation in progress.
Lagage and Pantin found that the inner region of the disk surrounding Beta Pictoris is clear of dust, a prime indicator that there is evidence of one or more planetary bodies (3).
The depletion zone extends to about 15 AU from the star, about the same size as our solar system, and has an average particle density only one tenth that of the area just outside this zone (3).
Lagage and Pantin believe that the inner zone may have been swept clean by the gravitational pull of a planet orbiting around Beta Pictoris (3). A planet would gravitationally deflect the particles out of the inner zone. This planet, which is only believed to exist, may also be deflecting comets into the star, as indicated by the presence of highly variable absorption lines in the spectrum of Beta Pictoris (3).
The infrared image by Lagage and Pantin also provides evidence that the edge-on disk is not symmetrical around the star (3). This suggests a more intimate relationship between the asymmetry and the properties of the inner disk. As the orbital timescale for particles is relatively short (less than 100 years), one would expect that the irregularities in the disk would have been smoothed out by now (3), unless there were something stirring it up, such as a planet (3).
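As a rough check on the "less than 100 years" orbital timescale quoted above, the sketch below (not from the paper) applies Kepler's third law, with the period in years, the orbital distance in astronomical units, and the stellar mass in solar masses. The mass used for Beta Pictoris (about 1.75 solar masses) is an assumed value, not one given in this essay.

# Illustrative sketch: orbital period of disk particles via Kepler's third law,
# P [years] = sqrt(a^3 [AU] / M [solar masses]).
import math

def orbital_period_years(semi_major_axis_au, stellar_mass_solar):
    return math.sqrt(semi_major_axis_au ** 3 / stellar_mass_solar)

BETA_PIC_MASS = 1.75   # solar masses (assumed value, not from the essay)

# Particles at the edge of the roughly 15 AU depletion zone mentioned in the text
print(round(orbital_period_years(15, BETA_PIC_MASS), 1), "years")   # about 44

An orbit of roughly 44 years at 15 AU is indeed well under 100 years, which supports the argument that irregularities in the disk should have been smoothed out long ago unless something, such as a planet, keeps stirring them up.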
If there is a planet orbiting Beta Pictoris, its orbit is probably eccentric, as are most of the planetary orbits in our solar system (4). A planet with even a moderately eccentric orbit would generate the asymmetry that has been noted in the dust disk surrounding Beta Pictoris (4).
The Hubble Space Telescope, using the high-resolution spectrograph, found that the disk surrounding beta Pictoris consists of two parts: an outer ring of small, solid particles, and an inner ring of diffuse gas within a few hundred miles of the star (5).
Albert Boggess, an astronomer at NASA's Goddard Space Flight Center, suspects that the gas comes from the ring of solid particles (5). If he is correct, then the gas may be a sign that planets are being born there. The gas could result from collisions of solid particles in the outer ring as they accrete into planets that are still too small to see against the brightness of the star itself (5). During the collisions some of the particles would be vaporized and drawn toward the star. The planets in our own solar system are believed to have formed through countless numbers of such collisions (5).
Boggess also believes that Beta Pictoris is very similar to a very early phase of our own solar system (5).
Additional evidence from the Hubble also suggests that Beta Pictoris might be following in our footsteps. The gaseous inner ring appears to contain clumps of material spiraling toward the star (5). These clumps may be comets, diverted from their normal paths by close calls with protoplanets (5). This also fits with current ideas about the evolution of our own solar system. Gases from comet impacts may have been the source of the Earth's atmosphere and oceans (5).
Wetherill argues that life on Earth is reliant upon the existence of Jupiter and Saturn, because they cleansed our solar system of most of the planetesimals (comets) that would otherwise be striking the Earth (6). In order for a planet to survive long enough for life to begin, gas giants like Jupiter and Saturn must exist to get rid of the hazardous comets.
No one can say for sure whether or not there are planets in the Beta Pictoris system. However, it is definitely a possibility. There is a circumstellar disk surrounding Beta Pictoris. It is a highly flattened disk, as was the disk that once surrounded the Sun. The disk contains the necessary elements for planet formation. The star is a young one. The inner zone of the disk is clear. All of these things point to the probable formation of planets. Richard Terrile, from the Jet Propulsion Laboratory, says, "It's hard not to form planets from material like this" (7).
Whether or not there could be life on one of these planets is not easy to say; no one can really even speculate. I believe that it is possible, if all the variables come together in just the right way. I am not so 'earthnocentric' as to assume that the earth is the only planet in the Universe that can sustain life. Whether or not a planet in the Beta Pictoris system has what it takes, who knows; we can only wait and watch.
BIBLIOGRAPHY
(1) Moons And Planets, third edition; William K.
Hartman; Wadsworth Publishing company;
California; 1993.
(2) A Circumstellar Disk Around Beta Pictoris; Science;
volume 226; pages 1421-1424.
(3) Footprints in The Dust; Charles M. Telesco;
Nature; volume 369; pages 610-611.
(4) Dust Depletion In The Inner Disk Of Beta Pictoris
As A Possible Indicator Of Planets; P. O. Lagage
and E. Pantin; Nature; volume 369; pages 628-
630.
(5) Birth Of A Solar System?; Tim Folger; Discover;
volume 13; page 27.
(6) Inhibition Of Giant-planet formation By Rapid Gas
Depletion Around Young Stars; B. Zucherman,
T. Forveille, and J. H. Kastner; Nature; volume
373; pages 494-496.
(7) A Planet Around Beta Pictoris?; Sky and Telescope;
Volume 88; page 10.
ADDITIONAL BIBLIOGRAPHY
A Closer Look At Beta Pictoris; Astronomy;
volume 21; Page 18.
Birth Announcements; Scientific American;
volume 256; pages 60+.
Faraway Planets; Science Digest; volume 94;
page 47.
Protoplanetary nebula around Beta Pictoris;
Astronomy; volume 13; page 60.
f:\12000 essays\sciences (985)\Astronomy\Black Hole.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Into the Depths of a Black Hole
Every day we look out upon the night sky, wondering and dreaming of what lies beyond our planet. The universe that we live in is so diverse and unique, and it interests us to learn about all the variance that lies beyond our grasp. Within this marvel of wonders our universe holds a mystery that is very difficult to understand because of the complications that arise when trying to examine and explore the principles of space. That mystery happens to be that of the ever-clandestine black hole. This essay will hopefully give you the knowledge and understanding of the concepts, properties, and processes involved with the space phenomenon of the black hole. It will describe how a black hole is generally formed, how it functions, and the effects it has on the universe.
In order to understand what exactly a black hole is, we must first take a look at the basis for the cause of a black hole. All black holes are formed from the gravitational collapse of a star, usually one having a great, massive core. A star is created when huge, gigantic gas clouds bind together due to attractive forces and form a hot core, combined from all the energy of the two gas clouds. The energy produced is so great when they first collide that a nuclear reaction occurs and the gases within the star start to burn continuously. Hydrogen gas is usually the first type of gas consumed in a star, and then other elements such as carbon, oxygen, and helium are consumed. This chain reaction fuels the star for millions or billions of years depending upon the amount of gas there is.
The star manages to avoid collapsing at this point because of the equilibrium it achieves. The gravitational pull from the core of the star is equal to the gravitational pull of the gases forming a type of orbit; however, when this equality is broken the star can go into several different stages. Usually if the star is small in mass, most of the gases will be consumed while some escape. This occurs because there is not a tremendous gravitational pull upon those gases, and therefore the star weakens and becomes smaller. It is then referred to as a White Dwarf. If the star has a larger mass, however, then it may possibly supernova, meaning that the nuclear fusion within the star simply goes out of control, causing the star to explode. After exploding, a fraction of the star is usually left (if it has not turned into pure gas), and that fraction of the star is known as a neutron star.
A black hole is one of the last options that a star may take. If the core of the star is very massive (approximately 6-8 solar masses; one solar mass being equal to the sun's mass), then it is most likely that when the star's gases are almost consumed those gases will collapse inward, forced into the core by the gravitational force laid upon them. After a black hole is created, the gravitational force continues to pull in space debris and other types of matter to add to the mass of the core, making the hole stronger and more powerful. Most black holes tend to be in a constant spinning motion. This motion absorbs various matter and spins it within the ring (known as the Event Horizon) that is formed around the black hole. The matter stays within the Event Horizon until it has spun into the centre, where it is concentrated within the core, adding to the mass. Such spinning black holes are known as Kerr black holes.
Most black holes orbit around stars, due to the fact that they once were a star, and this may cause some problems for the neighbouring stars. If a black hole gets powerful enough it may actually pull a star into it and disrupt the orbits of many other stars. The black hole could then grow even stronger (from the star's mass) so as to possibly absorb another.
When a black hole absorbs a star, the star is first pulled into the Ergosphere, which sweeps all the matter into the Event Horizon, named for its flat horizontal appearance and because this happens to be the place where almost all the action within the black hole occurs. When the star has passed into the Event Horizon, the light that the star emits is bent within the current and therefore cannot be seen in space. At this exact point in time, high amounts of radiation are given off that, with the proper equipment, can be detected and seen as an image of a black hole. Through this technique astronomers now believe that they have found a black hole known as Cygnus X-1. This supposed black hole has a huge star orbiting around it; therefore we assume there must be a black hole that it is in orbit with.
The first scientists to really take an in-depth look at black holes and the collapsing of stars were a professor, Robert Oppenheimer, and his student Hartland Snyder, in the early nineteen hundreds. They concluded, on the basis of Einstein's theory of relativity, that if the speed of light was the utmost speed attainable by any massive object, then nothing could escape a black hole once in its clutches. **(1)
The name "black hole" was chosen because of the fact that light could not escape from the gravitational pull of the core, thus making the black hole impossible for humans to see without using technological advancements for measuring such things as radiation. The second part of the name, "hole," was chosen due to the fact that the actual hole is where everything is absorbed and where the centre core resides. This core is the main part of the black hole where the mass is concentrated, and it appears purely black on all readings, even through the use of radiation detection devices.
Just recently a major discovery was made with the help of a device known as the Hubble Telescope. This telescope has just recently found what many astronomers believe to be a black hole, after being focused on a star orbiting empty space. Several pictures were sent back to Earth from the telescope showing many computer-enhanced images of various radiation fluctuations and other diverse types of readings that could be taken from the area in which the black hole is suspected to be. Several diagrams were made showing how astronomers believe that if somehow you were to survive through the centre of the black hole, there would be enough gravitational force to possibly warp you to another end of the universe or possibly to another universe. The creative ideas that can be hypothesized from this discovery are endless.
Although our universe is filled with many unexplained, glorious phenomena, it is our duty to continue exploring them and to continue learning, but in the process we must not take any of it for granted. As you have read, black holes are a major topic within our universe, and they contain so much curiosity that they could possibly hold unlimited uses. Black holes are a sensation that astronomers are still very puzzled by. It seems that as we get closer to solving their existence and functions, we just end up with more and more questions.
Although these questions just lead us into more and more unanswered problems, we seek and find refuge in them, dreaming that maybe one day, one far-off distant day, we will understand all the conceptions and we will be able to use the universe to our advantage and go where only our dreams could take us.
Dave May 343, 1992/12/04
References for "Into the Depths of a Black Hole":
**(1): Parker, Barry. Colliding Galaxies. p. 96
f:\12000 essays\sciences (985)\Astronomy\Black Holes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Into the Depths of A Black Hole
Every day we look out upon the night sky, wondering and dreaming of what lies beyond our planet. The universe that we live in is so diverse and unique that it interests us to learn about all the variance that lies beyond our grasp. Within this marvel of wonders, our universe holds a mystery that is very difficult to understand, because of the complications that arise when trying to examine and explore the principles of space. That mystery happens to be that of black holes. As you read about them, you can finally come to appreciate the phenomenon we all know as the black hole.
In order to understand what exactly a black hole is, we must first take a look at the basis for the cause of a black hole. All black holes are formed from the gravitational collapse of a star, usually one having a great, massive core. A star is created when huge, gigantic gas clouds bind
together due to attractive forces and form a hot core, combined from all the energy of the two gas clouds. The energy produced is so great when they first collide that a nuclear reaction occurs, and the gases within the star start to burn continuously. Hydrogen gas is usually the first type of
gas consumed in a star, and then other elements such as carbon, oxygen, and helium are consumed.
This chain reaction fuels the star for millions, or billions, of years depending upon the amount of gases there are. The star manages to avoid collapsing at this point because of the equilibrium it achieves. The gravitational pull from the core of the star is equal to the gravitational pull of the gases forming a type of orbit; however, when this equality is broken the star can go into several different stages. Usually if the star is small in mass, most of the gases will be consumed while some escape. This occurs because there is not a tremendous gravitational pull upon those gases and therefore the star weakens and becomes smaller. It is then
referred to as a 'White Dwarf'. If the star has a larger mass, however, then it may possibly supernova, meaning that the nuclear fusion within the star simply goes out of control, causing the star to explode. After exploding, a fraction of the star is usually left (if it has not turned into pure gas), and that fraction of the star is known as a neutron star. Black holes are one of the last options that a star may take. If the core of the star is very massive (approximately 6-8 solar masses; one solar mass being equal to the sun's mass), then it is most likely that when the star's gases are almost consumed those gases will collapse inward, forced into the core by the gravitational force laid upon them.
After a black hole is created, the gravitational force continues to pull in space debris and other types of matter to add to the mass of the core, making the hole stronger and more powerful. Most black holes tend to be in a constant spinning motion. This motion absorbs various matter and spins it within the ring (known as the Event Horizon) that is formed around the black hole. The matter stays within the Event Horizon until it has spun into the center, where it is concentrated within the core, adding to the mass. Such spinning black holes are known as Kerr black holes. Most black holes orbit around stars due to the fact that they once were a star, and
this may cause some problems for the neighboring stars. If a black hole gets powerful enough it may actually pull a star into it and disrupt the orbits of many other stars. The black hole could then grow even stronger (from the star's mass) so as to possibly absorb another.
When a black hole absorbs a star, the star is first pulled into the Ergosphere, which sweeps all the matter into the Event Horizon, named for its flat horizontal appearance and because this happens to be the place where almost all the action within the black hole occurs. When the star has
passed into the Event Horizon, the light that the star emits is bent within the current and therefore cannot be seen in space. At this exact point in time, high amounts of radiation are given off that, with the proper equipment, can be detected and seen as an image of a black hole. Through
this technique astronomers now believe that they have found a black hole known as Cygnus X-1. This supposed black hole has a huge star orbiting
around it; therefore we assume there must be a black hole that it is in orbit with.
The first scientists to really take an in-depth look at black holes and the collapsing of stars were the professor Robert Oppenheimer and his student Hartland Snyder, in the late 1930s. They concluded, on the basis of Einstein's theory of relativity, that if the speed of light was the greatest speed attainable by any massive object, then nothing could escape a black hole once in its clutches.
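To put a number on that idea: general relativity gives a critical radius, the Schwarzschild radius r = 2GM/c^2, inside which the escape velocity would exceed the speed of light. The short Python sketch below is only an illustration of that formula; the ten-solar-mass example is an assumed figure, not one taken from this essay.

# Illustrative sketch: Schwarzschild radius r = 2*G*M / c**2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # mass of the sun, kg

def schwarzschild_radius_km(mass_kg):
    # radius of the event horizon for a non-spinning mass, in kilometres
    return 2 * G * mass_kg / c**2 / 1000.0

# Example: a hypothetical collapsed core of 10 solar masses
print(schwarzschild_radius_km(10 * M_sun))   # roughly 30 km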
The name "black hole" was chosen because light cannot escape the gravitational pull of the core, making a black hole impossible for humans to see without technology for measuring things such as radiation. The second part of the name, "hole," refers to the fact that the hole itself is where everything is absorbed and where the central core resides. This core is the main part of the black hole, where the mass is concentrated, and it appears purely black on all readings, even with radiation detection devices.
Just recently a major discovery was made with the help of the Hubble Space Telescope. The telescope found what many astronomers believe to be a black hole after being focused on a star orbiting apparently empty space. Several computer-enhanced pictures were sent back to Earth showing various radiation fluctuations and other readings from the area where the black hole is suspected to be. Several diagrams were made showing how, if you could somehow survive a trip through the center of a black hole, there might be enough gravitational force to warp you to another end of the universe, or possibly to another universe. The creative ideas that can be hypothesized from this discovery are endless.
Although our universe is filled with many unexplained, glorious phenomena, it is our duty to continue exploring them and to continue learning, but in the process we must not take any of it for granted.
As you have read, black holes are a major topic within our universe, and they arouse so much curiosity that they could possibly hold unlimited uses. Black holes are a phenomenon that astronomers are still very puzzled by. It seems that as we get closer to understanding their existence and functions, we just end up with more and more questions. Although these questions lead us into more and more unanswered problems, we find a kind of refuge in them, dreaming that maybe one day, one far-off distant day, we will understand it all and be able to use the universe to our advantage and go where only our dreams could take us.
f:\12000 essays\sciences (985)\Astronomy\Comet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comets
Have you ever looked up in the sky and seen a little ball creeping by? If so, did you wonder what it was? That little ball is called a comet. Comets are small, fragile, and irregularly shaped. Most are composed of frozen gases, though some also contain non-volatile grains. They usually follow very strict paths around the sun. Comets become most visible when they pass close to the sun, whether viewed with the naked eye or with telescopes. When a comet gets near the sun it becomes very visible because the sun's radiation starts to sublime its volatile gases, which, in turn, blow away small bits of the little solid material the comet has.
Another feature of a comet is a long tail. This is caused by materials breaking off and expanding. They expand into an enormous escaping atmosphere called the coma. This becomes at least the size of our planet. With the comet going so fast, these materials are forced behind the comet, forming a long tail of dust and gas.
Comets are cold bodies. We see them only because the gases they are composed of glow in the sunlight. All comets are regular members of the solar system family. They are bound by gravity to a strict path around the sun. Scientists believe that all comets were formed of material, originally in the outer part of the solar system, which did not become incorporated into planets. This material dates from when the planets were just starting to form. This makes comets an extremely interesting topic to scientists who are studying the history of the solar system.
In comparison to planets, comets are very small. They can be anywhere from 750 meters (or less) to 20 kilometers in diameter. However, lately, scientists have been finding proof that there are comets 300 kilometers in diameter or greater.
Comets are still compared to the planets, though. Planets usually follow the shape of a sphere. Most planets are fat at the equator. Comets come in all different shapes and sizes. Most evidence that science has revealed says that comets are extremely fragile. A comet is so poorly structured that it is like a loose snowball--it can be pulled apart with one's own bare hands.
Comets have very unusual orbital periods, and their orbits are very oblong. When comets reach their aphelion they are usually near Jupiter or sometimes even Neptune. Other comets, however, come from even farther out in the solar system. No matter what, if a comet passes Jupiter, it is strongly attracted to it. Sometimes Jupiter's massive gravitational pull makes comets slam into planets.
Comets' nuclei look like dirty snowballs. They are solid, consisting of ice and gas. Most nuclei also contain rock, actually small grains of rock somewhat like rock here on Earth. A nucleus appears to be black in color because it is made up of carbon compounds and sometimes free carbon. Since comet nuclei are so small, they are difficult to study from Earth.
An interesting feature of a comet that few people know is that even though a comet appears to have a single tail, it actually has two. One tail is a dust tail and the other is an ion tail.
Although comets are very old, the oldest comet on record is Comet Halley. There are Chinese records of this comet dating as far back as 240 B.C. Sir Edmund Halley predicted in 1705 that a comet which had appeared in 1531, 1607, and 1682 would return in 1758. (Unfortunately, Halley died before the comet returned, so he never got to see it.) The comet was named Comet Halley in his honor. A sighting of the comet was confirmed on Christmas Day 1758.
Halley predicted the date on which the comet would return using Kepler's laws of planetary motion, which state the following (the third law is applied in a short sketch after the list):
1. All orbits are ellipses with the sun at one focus.
2. A line between a planet and the Sun sweeps out an equal area during any fixed interval of time (i.e. planets move quickly when they are close to the sun)
3. (orbital period in years)^2 = (orbital semi-major axis in AU)^3
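As a rough illustration of the third relation, the short Python sketch below converts an orbital period in years into a semi-major axis in astronomical units; the 76-year figure for Comet Halley is a rounded value assumed for the example.

# Kepler's third law for bodies orbiting the sun:
# (period in years)**2 = (semi-major axis in AU)**3
def semi_major_axis_au(period_years):
    return period_years ** (2.0 / 3.0)

# Comet Halley returns roughly every 76 years (assumed round figure)
print(semi_major_axis_au(76))   # about 17.9 AU, between the orbits of Saturn and Uranus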
A comet that has been discovered more recently is the Hale-Bopp comet. It is scheduled to appear in April 1997.
Alan Hale is a native New Mexican. Hale is a professional astronomer who specializes in studying sunlike stars and searching for other planetary systems. He has been studying comets since 1970. Here is how he discovered the comet:
"During my normal study of comets, it is my practice to observe comets once a week, on the average, and measure their brightnesses. On the night of July 22--the first clear night here in a week and a half--I planned to observe two comets. I finished with the first one--Periodic Comet Clark--shortly before midnight, and had about an hour and a half to wait before the second one--Periodic Comet D'Arrest--rose high enough in the east to get a good look at. I decided to pass the time by observing some deep-sky objects in Sagittarius, and when I turned my telescope (a Meade DS-16) to M70, I immediately noticed a fuzzy object in the field that hadn't been there when I had looked at M70 two seeks earlier. After verifying that I was indeed looking at M70, and not one of the many other globular clusters in that part of the sky, I checked the various deep-sky catalogues, then ran the comment-identification program at the IAU Central Bureau's computer in Cambridge, Massachusetts. I sent an e-mail to Brian Marsden and Dan Green at the Central Bureau at that time informing them of a possible comet; later, when I had verified that the object had moved against the background stars, I sent them an additional e-mail. I continued to follow the comet for a total of about 3 hours, until it set behind trees in the southwest, and then was able to e-mail a detailed report, complete with two positions."
After he discovered the comet he said "I love this irony -- I've spent over 400 hours of my life looking for comets, and I haven't found anything, and now, suddenly, when I'm not looking for one, I get one dumped right in my lap. I had obtained an observation of P/Clark earlier, and I needed to wait an hour or so before P/d' Arrest got high enough to look at, and I was just passing time til' then and I decided to look at some deep-sky objects in Sagittarius. When I turned to M70, I saw a fuzzy object in the same field, and almost immediately expected a comet, since I had been looking at M70 last month, and *knew* there wasn't any objects there."
It all started for Bopp on July 22nd, 1995 on the exact night that Alan Hale saw the comet. In fact, they both saw the comet within 5 minutes of each other. Alan Hale was the first person to see it however. Here is the story of Thomas Bopp.
"On the night of July 22, some friends and I headed out into the desert for a dark moon observation session. The site, which is west of Stanfield, Arizona, and a few miles of interstate 8 is about 90 miles southwest of my home.
My friend Jim Stevens had brought his 17-1/2" Dobsonian. We started the evening observing some of the Messier objects such as the Veil and the North American Nebulae in Cygnus, when Jim said "Let's look at some of the globulars in Sagittarius." We started our tour with M22 and M28, observing at 50X and then 180X. Around 11:00 local time, we had M70 in the field when Jim went to the charts to determine the next object of investigation. I continued watching M70 slowly drift across the field; when it reached a point 3/4 of the way across, a slight glow appeared on the eastern edge. I repositioned the scope to center on the new object but was unable to resolve it. I called to Jim and asked him if he knew what it might be; after visual inspection he stated he was not familiar with it but would check the charts. After determining the general position of the object he was unable to find it on Sky Atlas 2000.0 or Uranometria.
The moment Jim said "we might have something" excitement began to grow among our group and I breathed a silent prayer thanking God for his wondrous creation. My friend Kevin Gill then took a position from his digital setting circles and estimated a magnitude.
At 11:15 I said that we needed to check the object for motion and should watch it for an hour. The group observed it change position against the star field over that period and at 12:25 I decided to drive home and report our finding.
Arriving at home, initial attempts to send a telegram were unsuccessful due to an incomplete address I had. After searching my library I was able to locate the correct address and confirmation was requested.
At 8:25 A.M. July 23rd, 1995, Daniel Green of the Harvard Smithsonian Astrophysical Observatory telephoned and said, "Congratulations Tom, I believe you discovered a new comet." And that was one of the happiest moments of my life."
Thomas Bopp lives in Glendale, Arizona (a small suburb just outside Phoenix). He is the supervisor for a construction material company in Phoenix. Bopp is an enthusiastic observer of deep-sky objects. The site where Bopp saw the comet is known as Vekol Ranch.
Since they discovered the comet within minutes of each other the comet was named the Hale-Bopp Comet.
Nobody knows the exact orbital period of the comet, but it is believed to be a little over 3000 years. It has passed through our solar system before (that is, it is not a new comet from the Oort Cloud).
On April 1, 1997, the comet is expected to reach its closest point to the sun. At this time it will also be most visible, because sunlight reflects off the tail of the comet.
It will come .914 astronomical units from the sun. This is not all that close to the sun considering the fact that some comets have run into the sun and others have skimmed the surface of it.
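Taking the rough 3,000-year period quoted above at face value, the same third-law relation gives a feel for how far out the comet travels; the Python sketch below is a back-of-the-envelope estimate, not a published orbit.

# Rough orbit size for Hale-Bopp from an assumed ~3000-year period
period_years = 3000.0
perihelion_au = 0.914                            # closest approach to the sun, from the text above

semi_major_axis = period_years ** (2.0 / 3.0)    # Kepler's third law, about 208 AU
aphelion = 2 * semi_major_axis - perihelion_au   # farthest point, about 415 AU

print(semi_major_axis, aphelion)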
Although the comet will be closest to the sun on April 1, it will be closest to the earth on March 23, 1997. Some people have been saying that the comet will hit earth and cause human extinction, just like the dinosaurs. The fact is, however, THE COMET WILL NOT HIT EARTH. The closest it will come is 120 million miles away from the earth.
Some people are saying that the comet is going to be huge, and others say it will be small. We will never know, though, because we cannot see the nucleus of a comet. The part of the comet we see is the tail. The tail of a comet can be over 10,000 kilometers long.
In all, comets, the history of comets, and the comets waiting to be discovered are very interesting. I think that one day we will get to see the nucleus of a comet and be able to watch comets form in the Oort Cloud.
KC
f:\12000 essays\sciences (985)\Astronomy\Copernicus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Copernicus' work on planetary motion stood on a very high mathematical level for his time. His
theory explained how all the celestial bodies move around the Sun. It took Nicolaus 30 years of mathematical research to form a theory about planetary motion. The three most popular instruments which Copernicus used were the quadrant, the armilla, and the triquetrum. All furnish some measure of the position of a heavenly body. It took an endless amount of mathematical calculation to come up with Copernicus' theory. He had to find out how fast the Earth moves around the Sun and how far away the Sun is. He also had to calculate the length of the Earth's orbit. People use math in every walk of life. In our day everything is related to math, and Copernicus used his knowledge of mathematics to provide humankind with an important discovery.
Only a small number of people are interested in Copernicus' work. If it wasn't for Copernicus' love of scientific truth, people would not know that the Earth revolves around the Sun, and not the other way around. That's why we should live with the knowledge that someone spent 30 years researching and finding the truth, and that man was Nicolaus Copernicus.
f:\12000 essays\sciences (985)\Astronomy\ET and Egypt .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Did the early Egyptians have help in building the pyramids?" All over the world remain fantastic objects, vestiges of people or forces which the theories of archaeology, history, and religion cannot explain. There is something inconsistent about our archaeology. They have found electric batteries many thousands of years old. They have found strange beings in perfect space-suits with platinum fasteners. They have also found numbers with fifteen digits-something not registered by any computer. How did the early men acquire the ability to do this? \par \tab Some say all these questions can be answered through the evidence found in ancient wall paintings and carvings, and the sculpture and buildings found in many different parts of the world. All over Europe and South America there is evidence left behind by the ancient people of these great civilizations.\par \tab First, a look at whether there is or could be intelligent life on other planets. It is conceivable !
that w
e world citizens of the twentieth century are not the only living beings of our kind in the cosmos. Because no aliens\par from another planet is on display in a museum for us to visit, the answer, "our earth is the only planet with human beings," still seems to be legitimate and convincing. But that is a very narrow-minded way to look at things. The idea that life can flourish only under terrestrial conditions has been made obsolete by research. It is a mistake to believe that life cannot exist without water and oxygen. Even on our own earth there are forms of life that need no oxygen. They are called anaerobic bacteria. A given amount of oxygen acts like poison on them. Why should there not be higher forms of life that do not need oxygen?\par We are still convinced that our earth is the center of everything, although it has been proved that the earth is an ordinary star of insignificant size-30,000 light-years from the center of the Milky Way. The human race is cer!
tainly more willing to accept the possibility of extraterrestria
l contact now than it was, say, half a century ago. So if there is evidence shown the extraterrestrials did have an influence on ancient civilizations, we should be able to look at it and make a intelligent decision for ourselves.\par \tab Much evidence is found on the walls of ancient buildings and temples. The walls of tombs and even caves have the signature of something other than human. In Anannhet, Tassili there are rock paintings 8,000 years old with strong figures. These figures are flying above a spherical object with a hatch like lid and two protrustions, l that seem to be spitting fire or smoke. Also, on these rock paintings there is a painting of a creature with antenna-like excrescence's on his arms and thighs. He has a helmet with slits for eyes nose, and mouth. There is a naked woman next to him. Also, in the Libyan Desert there are Stone Age Cave paintings of floating people, creatures. How do cavemen, or how would they think of floating men? They did!
n't ev
en have a spoken language. On another Tassili Mountains there is a man that seems to be wearing a close fitting spacesuit like that in modern times. A disc was found named the "genetic disc". It was named this because on each side on the disc there were carvings of the life from conception to full growth the disc is dated around 12,000 B>C> This is very amazing since prehistoric inhabitants of Colombia or anywhere else for that matter didn't have microscopes and therefore it would have been almost impossible to know of spermatozoa. So where did they get this knowledge Many wall paintings and carvings are just this. They show men in modern day astronaut suits and wings. There are carvings of flying machines in many places all over the world. Did these ancient people just think of these creatures and very modern objects? One must remember most ancient civilizations depicted everyday life on their walls and things that really happened to them. So why would they draw s!
pace suits and flying men and objects.\par \tab There are also m
any sculptures that have certain characteristics that are unusual to the normal art of the civilizations and its people. Sculptures of half human half animal creatures are found in many parts of the world. Also on Lake Maracabo, Venezzuela a female figure with four faces and huge slanty eyes was found. Some archaeologists even think that the statuettes of the pregnant women even represent something. They think that the abnormal huge shape of the women shows that they had to be carrying something more then normal embryo. Is it that or just the depiction of the pregnant women to these ancient people. There are legends that say giant people invade Malta. That they impregnated their women and that is why the women were so huge, they had huge babies inside them.\par \pard \tab \tab Kebra Negast tells use about wombs split at birth\tab because the fetuses had grown too big. A Sumerian \tab cuneiform inscription from Nippur says that Enlil, god \tab of the air, violated t!
he chi
ld of earth, Ninlil. \tab Ninlil beseeched the profligate: "...my vagina is too \tab small, it does not understand intercourse. My lips are \tab to small, they do not understand how to kiss..."\par \tab \tab I do not venture to speculate whether Enlil was \tab an extraterrestrial or a first generation descendant\par \tab but it does emerge clearly from the Sumerian text that \tab his body and its parts were too big for the normal-\tab sized maiden, Ninlil.\par \par \pard\sl480 \tab So does the prove that there were extraterrestrials? No, but it does give one more piece of evidence to support that theory.\par \tab Next, let's look at the buildings and the architecture of the ancient civilizations. First, in Sacsayhuaman, Peru there are huge steps that are made with such accuracy and are so large there is no credible explanation for them. There are also monoliths that look as if they had been pre-\par cast like modern concrete. Thrones for giants? These are huge. Did !
giant men sit in them? There water conduits are cut out solid p
ieces of exact measurements. They have polished insides and outside surfaces, with smooth cross sections. In Tiahuanaco there are blocks that have holes and ridges in them as if there were clamps that held the two stones together. There are also massive stones that have been cut in La Paz, Bolivia, presicely and with such sharp edges they couldn't have been made by the stone axes or wooden wedges used in the time this was carved. A ball made of one solid piece of stone stands in San Jose, Costa Rica, as a decoration. This ball is dated several thousand years ago. It stands with a diameter of seven feet, one inch. The surface is very smooth and the ball is a perfect sphere. All of these structures are amazing in that, it is unexplainable how people of these ancient civilizations could have made them with the resources they had to work with. However, I think one of the most amazing of all the ancient structures are the pyramids of ancient Egypt. These pyramids are s!
o awes
ome in size that it is very hard to believe that any human being, or even several hundred human beings together could build such a mammoth structure. It might be more convincing if they made the pyramids out of small blocks. It would take a long time, but they could do it. Instead, they were made out of huge blocks of stone and carried from far off places. There are many theories on how these pyramids were built, but all theories have been disproven or at least quite far-fetched. There are many structures that cannot be explained. So should we look to the stars? There may be an answer.\par \tab From the sky there is another facit to the theories of extraterrestrials on earth. From an airplane, one can see an ape, 260 feet high included in a geometrical system of lines drawn with an extreme accuracy that would have been inconceivable without a knowledge of surveying. There are also pictures scratched on the hillsides near Nazca that show figures several yards high, wi!
th radiating crowns, similar to the aureoles in Christian painti
ngs. In Peru there are worshipping figures in rock drawing, they have zigzag lines that are attributes on the gods, according to Peruvian tradition. How could these be made? They are high sophisticated designs that are to large to do while on the ground, without a way to see it. \par \tab We have only looked at a very small portion of the evidence. There are book after book that give evidence to support these theories and to explain how it all fits together. Are there any answers? Really all there is , is the evidence that one archaeologist or another has said was done or was the work of extraterrestrials. One must look at the evidence. All the wall paintings and sculptures, all the amazing structures with no real reason or explanation. To reallly look at thie subject, one will need to open their mind and forget about all the traditional rules and decisions and maybe see that the possibilities are endless.\par }
f:\12000 essays\sciences (985)\Astronomy\Evolution of Satalites.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The satellite is probably the most useful invention since the wheel. Satellites have the capability to let you talk with someone across the nation or let you close a business deal through video communication. Almost everything today is heading towards the use of satellites, such as telephones. AT&T has used communications satellites ever since the late 1950s. TVs and radios are also turning to the use of satellites. RCA and Sony have released satellite dishes for radio and television services. New technology also allows the military to use satellites as a weapon. The new ION cannon is a satellite that can shoot a particle beam anywhere on earth and create an earthquake. The military can also use satellites' imaging capabilities, which allow you to zoom in on someone's nose hairs all the way from space.
Robert Goddard was one of the most integral inventors behind the satellite. He was born on October 5, 1882. He earned his master's and doctoral degrees in physics at Clark University. He conducted research on improving solid-propellant rockets. He is known best for firing the world's first successful liquid-propellant rocket on March 16, 1926. This was a simple pressure-fed rocket that burned gasoline and liquid oxygen. It traveled only 56 m (184 ft) but proved to the world that the principle was valid. Goddard died on August 10, 1945. Goddard did not work in isolation; his experiments paralleled the ideas of a Russian theorist named Konstantin Tsiolkovsky. Tsiolkovsky was born on September 7, 1857. As a child Tsiolkovsky educated himself and rose to become a high school teacher of mathematics in the small town of Kaluga, 145 km (90 mi) south of Moscow. In his early years Tsiolkovsky caught scarlet fever and became 80% deaf. Together, the theoretical work of the Russian Konstantin Tsiolkovsky and the experimental work of the American Robert Goddard confirmed that a satellite might be launched by means of a rocket.
I chose the satellite to research because many things such as computers, TVs, and telephones use satellites, and I thought it would be a good idea to figure out how they work and the history behind them before we come to rely on them even more. I also picked the satellite because I think that my life would be different without it. For instance, the Internet or World Wide Web would run very slowly or would cease to exist altogether. We wouldn't be able to talk to people across the world, because telephone wires would have to travel across the Atlantic, and if they did, the reception would be horrible. We wouldn't know what the weather will be like on earth, or what the stars and planets are like in space. We wouldn't be able to watch live television premieres across the country, because all of those are broadcast via satellite.
A satellite is a secondary object that revolves in a closed orbit around a planet or the sun, but an artificial satellite is made to revolve around the earth for scientific research, earth applications, or military reconnaissance. All artificial satellites have certain features in common. They include radar for altitude measurements, sensors such as optical devices in observation satellites, receivers and transmitters in communication satellites, and stable radio-signal sources in navigation satellites. Solar cells generate power from the sun, and storage batteries are used for the periods when the satellite is blocked from the sun by the Earth. These batteries in turn are recharged by the solar cells. The Russians launched Sputnik 1 on October 4, 1957, as the first satellite ever to be in space. The United States followed by launching Explorer 1 on January 31, 1958. In the years that followed, more than 3,500 satellites had been launched by the end of 1986. A physicist once said that "if you added up all the radio waves sent and received by satellites, it wouldn't equal the energy of a snowflake hitting the ground." Satellites used to be built and tested on the ground, placed into a rocket, and launched into space, where they were released and placed into orbit. The rocket would then become space junk, and the owner of the satellite would lose a tremendous amount of money. Now that NASA has created a space shuttle, several satellites can be launched simultaneously from the shuttle, and the shuttle can then land for reuse and financial purposes. The space shuttles also have the capability to retrieve a satellite from orbit and bring it down to earth for repairs or destruction. Once the satellite is released from the space shuttle, the antenna on the satellite will receive a signal from earth that will activate its rockets to move it into orbit. Once in orbit, the antenna will receive another signal telling the satellite to erect its solar panels. Then the control center on earth will upload a program to the satellite telling it to use its sensors to maintain a natural orbit with earth. The satellite will then pick a target point on earth and stay above that point for the remainder of its life. Once a satellite shuts down, the program uploaded to the satellite will tell it to fold up its solar panels and remain in its orbit. Several days after the shutdown, a space shuttle will pick up the satellite for repairs or replacement of new cells.
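One way to see how a satellite can stay above a single target point is to work out the altitude of a geostationary orbit, where the orbital period matches the Earth's rotation. The Python sketch below is a simplified illustration using standard constants; none of the numbers come from the essay itself.

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24       # mass of the Earth, kg
R_earth = 6.371e6        # mean radius of the Earth, m
T = 86164.0              # one sidereal day, s

# From T**2 = 4*pi**2 * r**3 / (G*M), solve for the orbital radius r
r = (G * M_earth * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - R_earth) / 1000.0

print(round(altitude_km))   # roughly 35,800 km above the surface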
As you can see, the satellite is a very complicated piece of technology, but its capabilities are endless. By the end of the year 2000, there will be an estimated 7,000 satellites in orbit! That's a satellite for every 36,000 people. Satellites are becoming more and more useful as technology advances. Computers are turning towards the Internet, telephones are turning towards video communication, and televisions are looking for better cable services. So as long as satellites orbit the earth, you might as well take advantage of them now, before it's too late.
f:\12000 essays\sciences (985)\Astronomy\Fusion .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For centuries, humankind has looked at the stars, and for just as many years humankind has tried to explain the existence of those very same stars. Were they holes in an enormous canvas that covered the earth? Were they fire-flies that could only be seen when Apollo had parked his chariot for the night? There seemed to be as many explanations for the stars as there were stars themselves. Then one day an individual named Galileo Galilei made an astounding discovery: the stars were replicas of our own sun, only so far away that they seemed as large as pin pricks to the naked eye. This in turn gave rise to many more questions. What keeps the stars burning? Have they always been glowing, or are they born like humans, and thus will they die? The answers to all these questions can be summed up in two words: stellar fusion. Therefore one can begin to understand the stars by understanding what fusion is, how it affects the life of a star, and what happens to a star when fusion can no longer occur.
The first question one must ask is, "What is fusion?" One simple way of explaining it is taking two balls of clay and mashing them into one, creating a new, larger particle from the two. Now replace those balls of clay with sub-atomic particles, and when they meld, they release an enormous amount of energy. This is fusion. There are currently three known variations of fusion: the proton-proton reaction (Figure 1.1), the carbon cycle (Figure 1.2), and the triple-alpha process (Figure 1.3). In the proton-proton reaction, a proton (the positively charged nucleus of a hydrogen atom) is forced so close to another proton (within a tenth of a trillionth of an inch) that a short-range nuclear force known as the strong force takes over and forces the two protons to bond together (1). One proton then decays into a neutron (a particle with the same mass as a proton, but with no charge), a positron (a positively charged particle with almost no mass), and a neutrino (a particle with almost no mass, and no charge). The neutrino and positron then radiate off, releasing heat energy. The remaining particle is known as a deuteron, or the nucleus of the hydrogen isotope deuterium. This deuteron is then fused with another proton, creating a helium isotope (2). Then two helium isotopes fuse, creating a helium nucleus and releasing two protons, which facilitate the chain reaction (3). This final split is so violent that one-half of the total fusion energy is carried away by the two free protons. The second fusion variation, the carbon cycle, starts with a carbon nucleus being fused with a lone proton (1). This creates a nitrogen isotope. One proton then decays into its primaries -- a neutron, positron, and neutrino. The positron and neutrino separate from the nucleus as another proton fuses with the cluster. This creates a nitrogen nucleus which is then fused with yet another proton, forming an oxygen isotope (2). One proton then decays again as still another proton is forced into the nucleus (3). This final fusion splits into a nitrogen and a carbon nucleus; the nitrogen carries away the majority of the fusion heat, while the carbon goes back into the cycle. The triple-alpha process, the last known variety, is perhaps one of the simplest fusion reactions to understand. In this process, two helium nuclei fuse together to form a beryllium nucleus (four protons and four neutrons) (1). Almost immediately after this, another helium nucleus is forced into the cluster, creating a carbon nucleus of six protons and six neutrons (2). In this reaction, all of the heat given off is short-wavelength gamma rays, one of the most penetrating forms of radiation. Which variety of fusion occurs depends on the size and age of the star, since these determine the core temperature, and the core temperature determines the corresponding variety of stellar fusion.
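A rough sense of how much energy such reactions release comes from the mass that disappears when four hydrogen nuclei end up as one helium-4 nucleus (E = mc^2). The Python sketch below uses standard atomic masses and is meant only as an illustration of that bookkeeping, not as a model of any particular chain described above.

# Energy released when four hydrogen nuclei end up as one helium-4 nucleus
u_to_MeV = 931.494          # energy equivalent of one atomic mass unit, in MeV
m_hydrogen = 1.007825       # atomic mass of hydrogen-1, in u
m_helium4 = 4.002602        # atomic mass of helium-4, in u

mass_defect = 4 * m_hydrogen - m_helium4    # about 0.0287 u vanishes as energy
energy_MeV = mass_defect * u_to_MeV         # about 26.7 MeV per helium nucleus formed

print(round(energy_MeV, 1))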
Now that fusion has been explained, one can learn how it occurs in the different star types. All stellar bodies start off as protostars, or concentrations of combusting gases found within large clouds of dust and various gases. These protostars, under their own gravity, collapse inward until the core has been heated and compressed enough to begin proton-proton fusion reactions. After that starts, a star's mass will determine how long it lasts and what kinds of reactions it will go through. Generally, there are three classes of stars which can form: dwarfs, sun-class stars, and giants. Dwarfs begin as protostars of low size and mass (most protostars fall under this category). These stars, which have on average less than one-third the mass of our sun, go through very basic existences. One variety is the red dwarf, which has at least one-third the mass of the sun. Because of its low mass, the red dwarf is predicted to last thousands of billions of years. The gravitational pressure of the star will cause the proton-proton reaction to occur in its core, but after all the hydrogen has been fused into helium, the star lacks the pressure to begin the triple-alpha process. It is predicted that it will then contract into an inert, compressed ball of gas known as a black dwarf. Another variety of dwarf is the brown dwarf, which is so light (less than one-tenth the mass of the sun) that it lacks the pressure to even begin the proton-proton reaction, and becomes a black dwarf within just a few hundred million years, its nuclear fuels unexpended. Sun-class stars are massive enough to move past the hurdle that the dwarfs encounter and continue on the fusion chain. With a mass of two to five times that of the sun, the cores of these stars rise to several million degrees Kelvin, bringing the surface temperature to approximately 6,000 degrees. After ten billion years, the inert helium in the core has compressed and the released heat ignites a hydrogen shell around the core. The energy given off by the combustion causes the star's size to double. The star continues to grow into a super-giant, raising the core temperature so high that in what's known as a helium flash, the helium core fuses into carbon. The series of these reactions causes varying shells of helium, hydrogen, and fusing hydrogen until the lack of pressure to fuse carbon ends the fusion in the core, its gaseous surroundings dissipating, leaving a highly compressed and hot ball of carbon known as a white dwarf. Giants, the largest of all stars, have the shortest and most complex lives of any of the stars. These bright blue monstrosities begin from protostars which are hundreds of times the size of our sun. Within only a hundred million years, the proton-proton reaction at the core ends. The star is now six times the sun's size, and almost four times as hot. Once the core has changed to helium, the heat from its compression causes the star to double in size. The star now makes its final journey into oblivion.
Most stars end their lives by lacking the pressure to continue fusion and calmly fade into inert masses. This is not the case with giant-class stars. After a mere 9 or 10 million years, all of the hydrogen atoms in the core have fused into helium (Figure 2.1). This causes a temporary pause in the fusion in the core, allowing gravity to compress it. This compression raises the core temperature to 170 million degrees Kelvin (from 40 million degrees during the proton-proton reaction phase). This energy is transferred to the hydrogen envelope surrounding the core, expanding it to a thousand times the diameter of our sun. After this, most of the events of importance happen in the core. With one million years to go, the collapse of the star raises the core temperature enough to halt the collapse and fuse its core into carbon and oxygen while fusing the outer shell into helium (Figure 2.2). It remains this way for almost a million years. With a thousand years to go, most of the helium in the core is gone. This again pauses fusion, and collapse continues. The periods of collapse and fusion get increasingly shorter as time goes on. Once the collapse raises the temperature to 700 million degrees Kelvin, the carbon/oxygen core begins to fuse into neon and magnesium, creating layers around the core that continue to fuse hydrogen into helium, and helium into carbon (Figure 2.3). With a mere seven years to go and a core temperature of 1.5 billion degrees, the neon atoms in the core begin to fuse into more oxygen and magnesium, giving the star an onion-like appearance, each layer being denser toward the center (Figure 2.4). With one year to go, the core temperature reaches two billion degrees, fusing the oxygen core into sulfur and silicon (Figure 2.5). Only a few days to go, and the core temperature soars to three billion degrees, fusing the core into tightly compressed iron, which has a mass of almost 1.44 solar masses (the mass of our sun is one solar mass) (Figure 2.6). Since iron cannot fuse into anything further, the core continues to collapse under its own gravity. With a tenth of a second to go, the iron core is collapsing at approximately 45,000 miles a second, packing the earth-sized core into a sphere only ten miles across. The iron atoms become so compressed that the nuclei melt together, creating enough heat to fill the core with neutrinos. The core has now reached maximum crunch, meaning it can no longer contract (Figure 2.7). The repulsive force in the core becomes so strong that it overpowers the gravitational force, and the core recoils and projects matter in a shock wave that bursts through all the outside layers. Almost one hundred percent of the energy is released as neutrinos, the first outwardly noticeable sign of the death of the star. The shock wave dissipates all of the surrounding layers, leaving a small dense sphere composed of neutrons which is known as a neutron star. This final explosion can be seen for thousands of years. Most remnants stay neutron stars, but if the core had more than three solar masses, its gravity continues to collapse it, condensing the star into a singularity, or point of infinite mass and density. The gravity of this singularity is so great that even light cannot escape. This is what is known as a black hole.
Through examining the above circumstances, one can now understand what stellar fusion is, and how a star is directly connected to it. And yet one must take the information with a grain of salt. Scientists have only determined these facts from the information they now have. Every day new things are discovered that may discredit all we believe to be fact. One can only hope that one day we as a people can learn enough to prove once and for all the exact nature of the universe.
Stellar Fusion
~ The Cosmic Ballet ~
Dylan Richards
Chemistry 30
Mr. Hartley
October 20, 1996
Bibliography
Time - Life Editors, Voyage Through the Universe - Stars. Time - Life Books Inc., 1990
f:\12000 essays\sciences (985)\Astronomy\Galileo vs Darwin.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Whereas Galileo spent his last days under house arrest and was formally condemned by the Church for his scientific views, the elder Darwin was widely respected by the Anglican Church and was buried at the Westminster Abbey, an honor reserved for only the most illustrious personages of Great Britain. The reason for the two scientists' very different fortunes is simple: Galileo couldn't prove the Copernican hypothesis but Darwin was able to demonstrate the truth of his theory of evolution."
In the world of Galileo, proof was what was needed for a scientific hypothesis to succeed. In the world of Darwin, proof was not needed for a scientific hypothesis to succeed. There were many differences in the worlds of these two great scientists that led to the reactions to their respective scientific hypotheses. It was not their beliefs alone that shaped their peers' views; it was also the way each of them conducted himself. Galileo worked his way to become a scientist respected by most of the intellectual community, but he was also despised by many because of his background and because of his attitude towards others. Darwin was well respected by the intellectual community; the difference lay in the way each man conducted himself when dealing with his hypotheses.
Galileo's Hypothesis
Galileo was well known for standing up for what he believed in and pushing the things he wanted. He was constantly writing letters and engaging in debates over the issue of Copernicanism. Galileo's entire reason for doing this was to make the scientific and religious communities accept that Copernicanism was actually a fact and that there was enough proof to believe it.
One example of the way Galileo strongly pushed his beliefs on others was the way he preached Copernicanism to the people of Rome in 1615. While he was in Rome trying to improve the church's opinion of him, Galileo was also debating the Aristotelians over this issue and beating them at their own game. Because of his natural talent for debate and his intelligence, he was able to outwit the Aristotelians in these debates. Galileo acted this way because he believed that he was in a position to make these kinds of statements without getting on the wrong side of the church. These actions show that Galileo was very insistent upon his ideas and upon what other people and the church thought of them. By preaching and debating, Galileo was trying to force his ideas into acceptance by the church. His methods were highly unusual; he tried to force acceptance of his ideas because he believed his findings were conclusive and carried enough proof for people to accept. But when the actual facts are looked at, it is very easy to come to the conclusion that there was no definitive proof that the earth and other planets orbit around the sun. Galileo, in his papers and speeches, tried to hide his lack of proof by focusing only on what he knew. His actions on the hypothesis are the complete opposite of the ideas of modern science. In modern science a hypothesis is always being tested, as Galileo was doing, but it is never stated as truth until conclusive proof is demonstrated. While Galileo is considered one of the founders of modern science, it is very easy to see that some of his actions were not very scientific. The fact that Galileo was trying to have people accept his hypothesis as a fact is one of the major problems with Galileo's fight for Copernicanism.
Galileo's other major problem with his fight for Copernicanism was that he was too cocky and believed that since his ideas made the most sense, people should give up the old paradigm and believe Copernicanism to be true. If Galileo had been more cautious about preaching his ideas and had spent more time trying to prove his hypothesis scientifically, he would most likely have been better off in his later years. He would have avoided much of the controversy that surrounded him from the church and might have gotten off with just another warning.
Since Galileo was the man he was, he could not end his fight after his first warning by the church; he had to persist and write another book on the theory. In his book Dialogue on the Great Systems of the World, Galileo tried to work around the church's ban on his belief in the Copernican system by presenting it as merely an opinion of his. But this, as most people realized, was just another attempt to show the benefits of the Copernican system. It was because of Galileo's inability to give in and accept defeat that he wrote this book on Copernicanism. Galileo had also thought that he had the church on his side this time, since he was friends with the pope, but the pope was cajoled into believing that Galileo had insulted him with the book and started an investigation to determine whether Galileo had broken the decree of 1616. It took trickery and deception to bring Galileo down in his second conflict with the church, but it was his pompous attitude that caused him to come to this end. If Galileo hadn't been so intent on proving others wrong and had just worked on his theories, he would have been much better off.
Darwin's Hypothesis
Charles Darwin is the man well known for bringing about the theory of Darwinism and natural selection. Darwin became a very successful scientist in his time and convinced many people that his theory was scientifically sound. By the time Darwin died he had a large group of followers who believed his ideas to be true. The reason for Darwin's success was that he was very good at convincing people of his ideas without overpowering them with arguments. Unlike Galileo, Darwin preferred to tell his ideas to his close friends and allow them to spread the ideas to others.
Darwin worked for years on his theory without telling more than a few people about it. He did this because he was worried about what other people would think of him, and he didn't want to release an idea into the scientific community that was only partially thought through. So for years he worked on his ideas alone and kept track of all the work he did. It took Wallace's paper to get Darwin to speed up his own work and finally publish an extract of his own at the same time as Wallace's paper was published. Once Darwin realized that someone might beat him to the punch, he began to work faster and soon published his book known as The Origin of Species.
Darwin's methods of obtaining support for his newly public theory were radically different from Galileo's. Instead of trying to convince everyone that he was right, Darwin concentrated on his research and left the preaching of his theories to his friends. Darwin's close friends Hooker and Huxley were major players in the spread of Darwin's theory. While it was his own theory, his contributions to promoting it were revising his books and continuing his work further. This method of promoting his work worked extremely well for Darwin. While Darwin was seen as the figurehead of the movement, he was not seen as one of its key pushers. By doing this Darwin was able to remain behind the scenes and continue his work to improve his theories. Darwin spent most of his time in his house outside of London furthering his research and remaining in contact with the efforts to popularize his theory, but he did not actively participate in the way Galileo did.
By staying somewhat behind the scenes and not overpowering people with his radical theory, Darwin was able to gain a large number of supporters in the biological community, who in turn spread his theory more and more. It was through Darwin's connections that the vast majority of people learned about evolution. Darwin was not trying to force his theory onto others as Galileo was; instead he was just presenting it as his theory, which he was still working on. It was because of this that people weren't threatened by it in the way they were by Galileo's theory. That is not to say that people were not threatened by Darwin's theory; they certainly were. Darwin's theory went against a lot more in the Bible than Copernicanism did. While Copernicanism only went against a few obscure passages, evolution went against much of the story of Genesis. This fact caused many people to be hostile to the theories, but Darwin still remained in better standing with the church than Galileo ever was. The main reason for this is that in the time Darwin lived there were more people educated in the ways of science, which had progressed far beyond where it stood in the days of Galileo. These new scientific people were more open to ideas than the more religious people of Galileo's time. If Darwin's ideas had been introduced in Galileo's time, they might have been met with the same reaction as the actions taken against Galileo, if not a stronger one. In this manner Darwin was lucky to live in the time he did, when he could present a book for publication which did not have to be subject to the approval of the Catholic Church.
Darwin and Galileo were very different men who are both remembered as great scientists of their times. While Galileo was condemned for his efforts, Darwin was remembered as a hero. This was because of their different methods of presenting their ideas. Galileo was a fighter who would not back down from a fight until he was pitted against the Vatican and faced with excommunication. His tactics caused many people to despise him in his time, which led to the ban of his book on Copernicanism. Darwin, by contrast, preferred to work in his home and have others fight his battles for him. It was because of Darwin's passive promotion of his book that he made very few enemies when compared to Galileo. If Galileo had been more like Darwin, he might have been better off at the time of his death.
While much of the Copernican theory is known as fact now, there is still a debate over Darwinism. This is because it has yet to be proven definitively. Darwin was still honored for his contribution of this theory because he presented it in a scientific manner and did not impose his opinion on others.
f:\12000 essays\sciences (985)\Astronomy\galileo.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Galileo Galilei
"founder of modern experimental science"
Galileo Galilei was one of the most remarkable scientists ever. He discovered many new ideas and theories and introduced them to mankind. Galileo helped society as an Italian astronomer and physicist, but how did he come to be such a great and well-known scientist? It took hard work and patience....
Galileo was born during the renaissance in Pisa, Italy on February 15, 1564. He was raised by his mom, Giulia Ammanati, and his dad, Vincenzo Galilei. His family had enough money for school, but they were not rich. When he was about seven years old, his family moved to Florence where he started his education. In 1581, his father sent him to the University of Pisa because he thought his son should be a doctor. For four years, he studied medicine and the different theories of the scientist Aristotle. He was not interested in medicine, but soon he became interested in math. In 1585, he convinced his father to let him leave the school without a degree.
Galileo was a math tutor for the next four years in Florence. He spent a lot of those four years studying the scientific thoughts and philosophies of Aristotle. He also invented an instrument that could find the specific gravity of objects. This instrument, called a hydrostatic balance, worked by weighing the objects in water.
Galileo returned to Pisa in 1589 and became a professor of math. He taught courses in astronomy at the University of Pisa, based on Ptolemy's theory that the sun and all of the planets move around the earth. Teaching these courses gave him a deeper understanding of astronomy.
In 1592, the University of Padua gave him a professorship in math. He stayed at that school for eighteen years. There he learned and came to believe Nicolaus Copernicus's theory that all of the planets move around the sun, made a mechanical tool called a sector, explained the tides based on the Copernican theory of the motion of the earth, found that the Milky Way was made up of many stars, and told people that machines cannot create power, they can only change it.
In 1602, still at Padua, Galileo did research on motion. The Aristotelian theory of motion went against the theory that the earth moves. Because of this, Galileo worked on forming a theory that would show that the earth does move. He formed a theory that all pendulums swing at the same rate no matter what size the arc is by watching a chandelier swing at the cathedral at Pisa. He timed it with his pulse and found out that the chandelier took the same amount of time to complete each swing, no matter how wide the arc was.
f:\12000 essays\sciences (985)\Astronomy\Gamma Rays.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Gamma rays are waves on the electromagnetic spectrum that have a
wavelength of about 10^-11 meters or shorter. Gamma rays are produced in labs
through the process of nuclear collision and also through the artificial
radioactivity that accompanies these interactions. The high-energy nuclei
needed for the collisions are accelerated by devices such as the
cyclotron and the synchrotron.
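Since the paragraph above pins gamma rays to wavelengths of roughly 10^-11 meters and shorter, it can help to translate that wavelength into photon energy with E = hc / wavelength. The Python sketch below is a simple unit conversion for illustration; the example wavelength is the one quoted above.

# Photon energy for a given wavelength: E = h*c / wavelength
h = 6.626e-34            # Planck's constant, J*s
c = 2.998e8              # speed of light, m/s
eV = 1.602e-19           # joules per electron volt

def photon_energy_keV(wavelength_m):
    return h * c / wavelength_m / eV / 1000.0

print(round(photon_energy_keV(1e-11)))   # roughly 124 keV, at the soft end of the gamma range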
There are also many uses for gamma rays in medicine. Gamma rays
are used in medicine to treat certain types of cancers and tumors by killing
the diseased cells. Gamma rays passing through the tissue of the body produce
ionization in the tissue, which is how they can harm the cells in our body.
The rays can also be used to detect brain and cardiovascular abnormalities.
These are some of the many uses of gamma rays in medicine.
Gamma rays are also used a great deal in modern-day industry.
Gamma rays can be used to examine metallic castings or welds in oil
pipelines for weak points. The rays pass through the metal and darken a
photographic film at places opposite the weak points. In industry, gamma rays
are thus used for detecting internal defects in metal castings and in welded
structures. Gamma rays are also used to kill pests and bugs in food. Gamma
rays are also used in nuclear reactors and atomic bombs.
Gamma rays are often used in the food industry, where radioisotopes
are used to preserve foods. Although the radioactive source never comes in
contact with the food, the radiation kills various organisms such as bacteria,
yeast, and insects. Gamma rays are sometimes used in science. They are used
to detect beryllium, and they also played a very important role in the
development of the atomic bomb.
Gamma rays can be very dangerous to use or be in contact with.
Gamma rays bombard our bodies constantly. They come from the naturally
radioactive materials in rocks and soil. We take some of these materials into
our bodies from the air we breathe and the water we drink. Gamma rays
passing through our bodies produce ionization in the tissue. High levels of
gamma radiation can produce enough ionization in the tissue to cause skin cancer.
There are many ways in which we can protect ourselves from these
harmful effects. Protection from gamma rays can be obtained using a sheet of
iron that is 1/2 inch thick, although this kind of shielding will block only
about 50% of 1-million-electron-volt gamma rays. We can also protect
ourselves from gamma rays with about 4 inches of water. Lead provides the
most protection from gamma rays for a given thickness; a few inches of lead
will absorb nearly all of the gamma-ray exposure.
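The shielding figures above can be recast in terms of half-value layers: each layer of a given thickness cuts the gamma intensity roughly in half, so the protection grows exponentially with thickness. The Python sketch below uses approximate, illustrative half-value thicknesses for gamma rays of about 1 million electron volts (they are assumptions, not values from this essay); the iron case comes out close to the 50% figure quoted above.

# Fraction of gamma rays transmitted through a shield, using half-value layers
# Approximate half-value layers for ~1 MeV gamma rays (illustrative values only):
half_value_cm = {"lead": 0.9, "iron": 1.5, "water": 10.0}

def transmitted_fraction(material, thickness_cm):
    # every half-value layer of material cuts the intensity roughly in half
    return 0.5 ** (thickness_cm / half_value_cm[material])

print(transmitted_fraction("iron", 1.27))    # about half gets through 1/2 inch of iron
print(transmitted_fraction("water", 10.16))  # about half gets through 4 inches of water
print(transmitted_fraction("lead", 5.0))     # only about 2% gets through 5 cm of lead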
Many gamma rays also come from outer space. In a few major bursts,
the sun produces gamma rays with energies up to one million electron volts.
The interaction of high-energy electrons, protons, and nuclei in the sun emits
the rays. Gamma rays can also come from the other stars in space, through
the creation and death of the stars along with the creation of solar flares.
Astronomers have studied gamma rays to gain a better understanding of
astronomical processes. Gamma rays are a form of electromagnetic radiation
similar to X-rays, and they can carry millions of electron volts. As gamma
rays pass through matter, they lose energy, but at the same time they knock
electrons loose from atoms, which ionizes them. Uranium and other
naturally occurring radioactive elements, which emit alpha and beta particles
from their nuclei while transforming into new elements, also emit gamma
rays.
Long before experiments detected gamma rays emitted by cosmic sources,
scientists had known that the universe should be producing such photons. Hard
work by several brilliant scientists had shown that a number of different
processes occurring in the universe would result in gamma-ray emission. These
processes include cosmic-ray interactions with interstellar gases, supernova
explosions, and interactions of energetic electrons with magnetic fields. In the
1960s we finally developed the ability to actually detect these emissions, and
we have been looking at them ever since.
f:\12000 essays\sciences (985)\Astronomy\Hale Bopp.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As I am sure all of you know, we have recently been able to see a new but not permanent addition to the night sky. This addition is known as Hale-Bopp, a comet that is about 122 million miles (about 1.3 times the distance from the earth to the sun) from the earth and is approximately 25 miles wide. Hale-Bopp was discovered on July 23, 1995 by two scientists, Alan Hale in New Mexico and Thomas Bopp in Arizona. This is the first comet discovery for both of them, although Alan Hale is one of the top visual comet observers in the world, having seen about 200 comet apparitions. That is one of the reasons they put his name first.
Alan Hale comments, "I love the irony -- I've spent over 400 hours of my life looking for comets, and haven't found anything, and now, suddenly, when I'm not looking for one, I get one dumped in my lap. I had obtained an observation of P/Clark earlier, and needed to wait an hour or so before P/d'Arrest got high enough to look at, and was just passing the time til then, and I decided to look at some deep-sky objects in Sagittarius. When I turned to M70, I saw a fuzzy object in the same field, and almost immediately suspected a comet, since I had been looking at M70 last month, and *knew* there wasn't any other objects there."
Thomas Bopp explains his story like this, "On the night of July 22, 1995 some friends and I headed out into the desert for a dark of the moon observing session. The site, which is west of Stanfield, AZ and a few mile south of Interstate 8 is about 90 miles southwest from my home.
My friend Jim Stevens had brought his 17-1/2" Dobsonian. We started the evening observing some of the Messier objects such as the Veil and North American Nebulae in Cygnus, when Jim said " Let's look at some of the globulars in Sagittarius." We started our tour with M22 and M28, observing at 50X and then at 180X. Around 11:00 local time, we had M-70 in the field when Jim went to the charts to determine the next object of investigation. I continued watching M-70 slowly drift across the field, when it reached a point 3/4 of the way across a slight glow appeared on the eastern edge. I repositioned the scope to center on the new object but was unable to resolve it. I called to Jim and asked him if he knew what it might be, after a visual inspection he stated he wasn't familiar with it but would check the charts. After determining the general position of the object he was unable to find it on either Sky Atlas 2000.0 or Uranometria.
The moment Jim said "we might have something" excitement began to grow among our group and I breathed a silent prayer thanking God for his wondrous creation. My friend, Kevin Gill then took a position from his digital setting circles and estimated a magnitude.
At 11:15 I said that we needed to check the object for motion and should watch it for an hour. The group observed it change position against the star field over that period and at 12:25 I decided to drive home and report our finding.
Arriving at home, my initial attempts to send the telegram were unsuccessful due to an incomplete address. After searching my library I was able to locate the correct address, and confirmation was requested.
At 8:25 AM July 23, 1995 Daniel Green of the Harvard Smithsonian Astrophysical Observatory telephoned and said, "Congratulations Tom, I believe you discovered a new comet." and that was one of the most exciting moments of my life.
The comet is visible in the evening: look about 40 degrees west of north and about 20 degrees above the horizon at about 8:00 p.m., where it will be the brightest object in the northwest sky. The comet is traveling at about 28 km per second. About 4,200 years have passed since its last appearance, and because of gravitational tugs by the planets, especially Jupiter, the next appearance will come in about 2,380 years, around the year 4377. Hale-Bopp has been through our solar system before, which means, perhaps surprisingly, that it is not a new comet fresh from the Oort Cloud. Its orbit is very long and stretched out, and the comet is part of our solar system, in orbit around our Sun. Sadly, this excitement will end in October, when Hale-Bopp will disappear to the naked eye.
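For a comet orbiting the Sun, the period and the size of the orbit are linked by Kepler's third law: the semi-major axis in astronomical units equals the period in years raised to the 2/3 power. The sketch below only illustrates what the periods quoted above imply; the perihelion distance of about 0.9 AU is an assumed value used for the illustration, not a figure from this article.

    def semimajor_axis_au(period_years):
        # Kepler's third law for objects orbiting the Sun: a^3 = P^2 (AU and years).
        return period_years ** (2.0 / 3.0)

    for period in (4200, 2380):                 # inbound and outbound periods quoted above
        a = semimajor_axis_au(period)
        aphelion = 2 * a - 0.9                  # assuming a perihelion of about 0.9 AU
        print(period, "years ->", round(a), "AU semi-major axis,", round(aphelion), "AU aphelion")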
(Special thanks to Kevin Gill of the Black Mountain Observatory for Alan Hale's and Thomas Bopp's quotes.)
f:\12000 essays\sciences (985)\Astronomy\Little Green Men or Just Little Microscopic Organisms .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Little Green Men or
Just Little Microscopic Organisms?
The question of life on Mars is a puzzle that has plagued many minds throughout the world. Life on Mars, though, is a reality. When you think of Martians, you think of little green men who are planning to invade Earth and destroy all human life, right? Well, some do and some do not. Believing that there are little green men on Mars is just a fantasy -- or is it? The kind of life that may have lived there is the kind you would never consider giving the name "Martian" to: small organisms such as microbes or bacteria.
Evidence of this was found in a meteorite containing intact fossils of microscopic organisms. Two highly regarded chemistry professors from Stanford, Claude Maechling and Richard Zare, dissected three meteorites that were about 2 to 8 millimeters long and found traces of a big mumbo-jumbo compound: polycyclic aromatic hydrocarbons. That pretty much means that there once was a warmer climate and maybe even lakes or oceans. Life on Mars is now a real idea.
The climate of Mars about 3.8 billion years ago was much like that of the young Earth. Microbes and bacteria probably sprouted everywhere in the warm and wet climate. Now, though, we see only a cold red planet, probably the result of an asteroid collision that set back the evolution of Mars and left it a harsh planet. A Viking spacecraft which landed on Mars in 1976 found that the planet was bathed in ultraviolet radiation, "intense enough so it would probably fry any microbe we know on this planet," says Jack Farmer, an Ames researcher who calls himself an "exopaleontologist" -- a searcher for fossils on other worlds. The redness of Mars is due to the chemical assault known as oxidation, which turns iron compounds into rust, and it would surely kill anything that sticks its head up.
"So why do you still believe that there is life on Mars?" you say. Life on Mars is not located on the ultraviolet radiation oxidized surface. The microbes are found below it, probably located in the boiling hot springs, or in frozen time capsules. Life here on Earth are located in some strange places so why wouldn't the Martian microbes be found in strange places if they were trying to survive? Scientists have found bacteria here on Earth that were living inside rocks where they got all of their nourishment from the rocks and from some water. Martians probably do the same thing.
The Marsokhod, which is Russian for "Mars rover," is a six-wheeled vehicle about the size of a golf cart, with an arm for carrying a camera or other instruments; it is planned to launch in 1998. The rover might actually find the truth about whether there was once life, and whether there is still life, on Mars.
Who knows -- what if the ancient microbes or bacteria have evolved into little green men who are planning to invade Earth and destroy all human life? What if there is a whole colony of Martians in underground tunnels all over Mars? How did we evolve? From microscopic microbes, right? They may have evolved, too. When I read all of this I am reminded of a quote from a character in Jurassic Park named Ian Malcolm, who said, "Life finds a way."
Bibliography
Chui, Glennda. "Life on Mars II". [http://www.sjmercury.com/news/nation/mars.htm.] December 19, 1995.
Davidson, Keay. "New Signs That There Was Life On Mars." San Francisco Examiner. March 16, 1995. Pg. A2. SIRS Physical Science, Electronic Only 1995. Art.104. SIRS Researcher CD-ROM, CD-ROM. SIRS. Fall 1996.
f:\12000 essays\sciences (985)\Astronomy\Mercury.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Laolee Xiong
Period 2
2/26/97
MERCURY
Mercury is the closest planet to the sun. Its average distance from the sun is approximately fifty-eight million km and its diameter is 4,875 km, making it the second smallest planet in our solar system. Its volume and mass are about 1/18 that of the earth, and its average density is approximately equal to that of the earth. Mercury's magnetic field is one hundred times weaker than Earth's. Mercury has the shortest revolution of all the planets in our solar system, circling the sun in about eighty-eight days. Radar observations of the planet show that its period of rotation is 58.7 days, or two-thirds of its period of revolution. That means that Mercury completes about one and one-half rotations in each of its years.
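The "one and one-half rotations per year" figure refers to rotations measured against the stars. The time from one sunrise to the next on Mercury (its solar day) follows from combining the rotation and the revolution; here is a small sketch using only the two periods quoted above.

    rotation_days = 58.7      # rotation period quoted above
    revolution_days = 88.0    # orbital period quoted above

    # For a planet spinning in the same direction it orbits,
    # 1 / solar_day = 1 / rotation - 1 / revolution.
    solar_day = 1.0 / (1.0 / rotation_days - 1.0 / revolution_days)

    print(round(solar_day))                     # ~176 Earth days from one sunrise to the next
    print(round(solar_day / revolution_days))   # ~2 -> one Mercury solar day lasts about two Mercury years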
Mercury doesn't have an atmosphere, but it does have a thin layer of helium. The helium is actually solar wind that is trapped by Mercury's weak gravity. Scientists think that collisions with protoplanets early in the history of the solar system may have stripped away lighter materials, making Mercury a very dense planet with an iron core extending outwards 4/5 of the way to the surface.
Mercury bears a strong resemblance to our moon because it has a lot of craters. The craters, which cover seventy-five percent of Mercury's surface, were formed by huge rocks that smashed into the planet's surface. The largest crater, called the Caloris Basin, is 1,400 km in diameter and was flooded by lava. Mercury also has many cliffs that can be over 300 miles long and two miles high. The rest of the planet's surface is smooth and may have been formed by lava flowing out of cracks in the surface.
Temperatures on Mercury vary greatly because of its closeness to the sun. The surface temperature on the sunlit side is about 430 degrees Celsius, while the dark side may drop to about -170 degrees Celsius.
Mercury was a difficult planet to study before the invention of the telescope, and even then it could only be seen in the morning and evening. The Mariner 10 spacecraft was built in the 1970s to observe Mercury; it passed the planet twice in 1974 and once in 1975 and took hundreds of pictures. After its mission ended, Mariner 10 was left in orbit around the sun.
Mercury has no known moons, and it experiences a double sunrise at perihelion (the point in its orbit closest to the sun). Mercury also has the widest temperature range of all the planets, roughly 600 degrees Celsius between its coldest and hottest points. But even with all this information, scientists still don't know that much about Mercury.
f:\12000 essays\sciences (985)\Astronomy\My theory of the universe.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
My Theory of the Universe
You are about to be transported to a very strange world; read on if you dare! The planet you are on is a giant disco ball, rotating clockwise. (Earth) This disco ball is in a place that has two stories. It is on the first-story ceiling, but the ceiling is made of glass so the disco ball can be seen from both floors. The walls of the room are black. There is a big yellow spotlight in one corner of the room that slowly moves up and down. (Sun) There is another spotlight in the opposite corner, but this one is white and has a rotating filter to block some of the light out. (Moon) This filter starts at one end of the light, works its way across, and then works its way back. (Phases of the Moon) There are many spots on the walls of the room that are just reflections off of the disco ball. (Stars) These spots seem to form different patterns on the walls and move along with the disco ball, but not always at the same rate. (Constellations) There are two very shy people in the room who sometimes leave the room. When they are in the room they stand by the walls and always wink for some reason, so all we can see of them is one of their eyes. (Mercury & Venus) Then there are three very weird people in the room who are always in the middle of the room doing the Waltz. They do this somehow by themselves, and they too are always winking. They are sometimes on the first floor and sometimes on the second floor. (Mars, Jupiter, and Saturn) There is also one guy dressed in a white polyester leisure suit, gold chains, and rings, dancing to "Stayin' Alive." (Comet) It seems as though everyone in the universe hears their own music. He is only in the room for a little bit; he makes his way across the room and then leaves, so he must get tired really quickly! Sometimes when he is dancing, or even when you cannot see him dancing (he must have forgotten his gold chains), he somehow loses a ring, because we can see it fly across the sky. (Shooting star) Then there are two people, one directly below the ball and one directly above the ball, when they are there. They both dress in silver lamé (shiny stuff) that makes all the different colors that we can see. (Northern/Southern Lights) Occasionally some stupid guy stands in front of one of the spotlights and blocks it out. He only does this until someone kicks him out, and someone always does. (Eclipse) Then sometimes we receive things from this party on our ball; we think it is the people's dandruff that has been floating in the room and settles on our ball. (Meteorites) There is also a place in the room where a large number of people are standing, and you can see all their eyes reflecting off the ball. Finally there is one guy who stands in the room and holds up a lighter so the band will continue to play. He only does this when he thinks the band is going to stop, and when they start playing again he stops and vanishes from sight. (Supernova)
f:\12000 essays\sciences (985)\Astronomy\Neptune.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Neptune is the outermost of the gas giant planets. It has an equatorial diameter of 49,500 kilometers (30,760 miles) and is the eighth planet from the sun. If Neptune were hollow, it could contain nearly 60 Earths. Neptune orbits the Sun every 165 years. It has eight known moons, six of which were found by Voyager 2. A day on Neptune is 16 hours and 6.7 minutes. Neptune was discovered on September 23, 1846 by Johann Gottfried Galle of the Berlin Observatory. Neptune got its name from the Roman god of the sea.
Much of what is known today about Neptune was discovered by the U.S. Voyager 2 spacecraft during its 1989 flyby of Neptune. Compared to Earth, Neptune has 3.9 times the diameter, is 30 times as far from the sun, is 17 times as massive, and has 0.3 times the density.
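The "nearly 60 Earths" figure and the "3.9 times the diameter" figure are really the same statement, since volume grows with the cube of the diameter; a one-line check:

    diameter_ratio = 3.9            # Neptune's diameter compared with Earth's, as quoted above
    print(diameter_ratio ** 3)      # ~59 -> nearly 60 Earth volumes would fit inside a hollow Neptune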
Neptune travels around the Sun in an elliptical orbit at an average distance of 4.504 billion km (2.799 billion miles). Neptune consists largely of hydrogen and helium, and it has no apparent solid surface. The inner two-thirds of Neptune is composed of a mixture of molten rock, water, liquid ammonia, and methane. The outer third is a mixture of heated gases made up of hydrogen, helium, water, and methane. The atmospheric composition is 85% hydrogen, 13% helium, and 2% methane. The planet's atmosphere, particularly the outer layers, contains substantial amounts of methane gas, and absorption of red light by this atmospheric methane is responsible for Neptune's deep blue color.
Neptune is a dynamic planet with several large, dark spots reminiscent of Jupiter's hurricane-like storms. The largest spot, known as the Great Dark Spot, is about the size of the earth and is similar to the Great Red Spot on Jupiter.
Neptune receives less than half as much sunlight as Uranus, but heat escaping from its interior makes Neptune slightly warmer than Uranus. The heat liberated may also be responsible for Neptune's stormier atmosphere, which exhibits the fastest winds seen on any planet in the solar system. Most of the winds there blow westward, opposite to the rotation of the planet. Near the Great Dark Spot, winds blow up to 2,000 kilometers (1,200 miles) an hour. Voyager 2 found that the winds averaged about 300 meters per second (700 miles/hour) in the planet's atmosphere.
Long bright clouds, similar to cirrus clouds on Earth, were seen high in Neptune's atmosphere. At low northern latitudes, Voyager captured images of cloud streaks casting their shadows on cloud decks below.
Feathery white clouds fill the boundary between the dark and light blue regions on the Great Dark Spot. The pinwheel shape of both the dark boundary and the white cirrus suggests that the storm system rotates counterclockwise. Periodic small scale patterns in the white cloud, possibly waves, are short lived and do not persist from one Neptunian rotation to the next. (Courtesy NASA/JPL)
Until the Voyager 2 encounter in 1989, the rings surrounding Neptune were thought to be arcs. We now know that the rings completely circle the planet, but the thickness of each ring varies along its length. Neptune has a set of four rings which are narrow and very faint. The rings are made up of dust particles thought to have been made by tiny meteorites smashing into Neptune's moons. From ground based telescopes the rings appear to be arcs but from Voyager 2 the arcs turned out to be bright spots or clumps in the ring system. The exact cause of the bright clumps is unknown.
The magnetic field of Neptune, like that of Uranus, is highly tilted at 47 degrees from the rotation axis and offset at least 13,500 kilometers or 8,500 miles from the physical center. Comparing the magnetic fields of the two planets, scientists think the extreme orientation may be characteristic of flows in the interior of the planet and not the result of that planet's sideways orientation or of any possible field reversals within the planet.
Neptune also has eight known satellites. Only two of these, Triton and Nereid, had been observed prior to the Voyager 2 flyby. Triton is the largest of the eight satellites and is almost as big as the Earth's Moon. The other Neptunian satellites range in diameter from 58 to 416 km (36 to 258 miles). Apart from Triton, the moons of Neptune are irregularly shaped and have very dark surfaces.
Triton is the largest moon of Neptune, with a diameter of 2,700 kilometers (1,680 miles). It was discovered by William Lassell, a British astronomer, in 1846 scarcely a month after Neptune was discovered. Triton is colder than any other measured object in the Solar System with a surface temperature of -235° C (-391° F). It has an extremely thin atmosphere. Nitrogen ice particles might form thin clouds a few kilometers above the surface. The atmospheric pressure at Triton's surface is about 14 microbars, 1/70,000th the surface pressure on Earth.
Triton is the only large satellite in the solar system to circle a planet in a retrograde direction -- in a direction opposite to the rotation of the planet. It also has a density of about 2.066 grams per cubic centimeter (the density of water is 1.0 gram per cubic centimeter). This means Triton contains more rock in its interior than the icy satellites of Saturn and Uranus do. The relatively high density and the retrograde orbit have led some scientists to suggest that Triton may have been captured by Neptune as it traveled through space several billion years ago. If that is the case, tidal heating could have melted Triton in its originally eccentric orbit, and the satellite might even have been liquid for as long as one billion years after its capture by Neptune.
Triton is scarred by enormous cracks. Voyager 2 images showed active geyser-like eruptions spewing nitrogen gas and dark dust particles several kilometers into the atmosphere.
f:\12000 essays\sciences (985)\Astronomy\Roswell Incident.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Roswell Incident
Edward O'Brien
March 13, 1997
CP-11 Period 6
Outline
Thesis: The Roswell Incident, which enlightened our minds to our capacity for accepting the unknown, has remained one of the most controversial issues today.
I. Introduction to Extraterrestrials
A. Standpoints on Extraterrestrials
1. Society's
a. Past
b. Present
2. Government's
a. Past
b. Present
II. Roswell, New Mexico
A. What Exactly Happened
1. Who Discovered the Wreckage
2. Discoveries
3. Bodies
B. Testing in Roswell, New Mexico
1. Military Testing in Roswell
III. The Cover-up
A. Wreckage
1. UFO
2. Bodies
B. The Weather Balloon
1. The Balloon
a. Composite of the Balloon
C. Witnesses
1. The Nurses at Roswell, AAF
2. The Yearbook
IV. Alien Autopsy
A. Bodies
B. Authenticity
V. Conclusion
A. Controversy Continues
B. Final Thoughts
The Roswell Incident
The Roswell Incident, which enlightened our minds to our capacity for accepting the unknown, has remained one of the most controversial issues today. In Roswell, New Mexico, in 1947, a strange occurrence arose: an alien craft from outer space crashed in an open field. The issue lay still for almost thirty years, until the thought of a government cover-up arose.
Society's opinions have changed over the years. Before the 1990s, people despised the thought of sharing the universe with other intelligent life forms. Now people are interested in this mysterious phenomenon. Many credit the change to movies and television: by watching them, people take in the subject at a level they can understand. Not only do these movies entertain, they inform people about the little information we have obtained from the government.
The thought of government cover-ups has long been discussed. The government has always, in the past, tried to keep any sign of aliens, whether pictures from space or crashes on earth, at a low or nonexistent profile. Only recently has the government been pressed to the point where it actually gave us clues to alien existence. It has in some ways been believed that the government has worked in partnership with popular movie directors to produce alien movies to ease the thought that we may not be alone. Movies such as "The Arrival" and the ever popular "Independence Day" are good examples of convincing alien movies. If this is true, they did a good job, because statistics state that 75% of people today believe that there is some kind of intelligent life besides ourselves in the universe. That is very convincing compared to the 20% who believed 25 years ago. "New opinions are always suspected, and usually opposed, without any other reason but because they are not already common." (MacGowan 261)
A local New Mexico rancher, Mac Brazel, while riding out in the morning to check his sheep after a long night of thunderstorms, discovered a considerable amount of debris. It had created a gouge several hundred feet long and was scattered over a large area. Some of the debris had strange physical properties. He took some of it to show his neighbors and then his son, and soon after that he notified the sheriff. The sheriff then contacted the authorities at Roswell Army Air Field. The area was closed off, and the debris was eventually flown by B-29 and C-54 aircraft to Wright Field in Dayton, Ohio. A New York Daily News article calls it "...either conclusive proof extraterrestrials have indeed visited earth, or one of the most elaborate hoaxes ever perpetrated on the public....." (Dominquez). Besides the wreckage that was found, there were three objects which were highly debated: three bodies, two found dead and the other to die within a couple of weeks. Whether or not the bodies were actually found is determined only by the few witnesses who claim to have seen them, a few of whom turned out to be highly respected military officers. Some people say that the bodies were human bodies that had been exposed to radiation. This radiation could have come from the nuclear weapons the Roswell Army Air Field had been testing, since at the time its squadron was the only one with authorization to handle nuclear weapons. This theory was discounted by most, on the grounds that this kind of deformation would have caused a human being to die before such damage could occur. Albert Einstein once said: "....I am convinced that there is an absolute truth. If there can't be absolute truth, there cannot be a relative truth." (MacGowan 289) The government has been blamed for covering up this whole event. It is claimed to have shipped the wreckage off to Dayton, Ohio, to avoid publicity, which is understandable as a way to prevent a worldwide panic. The bodies, however, were not so lucky, and they have not become public yet.
The government has always said, and will always say, that the wreckage found was a secret spy balloon. The people who saw the wreckage and believe that is what it was describe it as a bundle of tinfoil, broken wood beams, and rubber remnants of a balloon. Most discount this explanation, asking why the government would be experimenting with balloons when it was exploring the characteristics of jet fighters. Yes, the wreckage did seem like tin foil at first, until you held the material: bend it, twist it, do anything you could dream up, and it would still return to its original shape. They tried to burn it and shoot through it and were unable to destroy it. It is thought that the government used this to its benefit and studied the material's properties for use on future planes, but this has not yet been proven, since no known planes are this indestructible.
Glenn Davis, a respected businessman in Roswell, was called by a friend, General Exon, who asked how to seal and preserve bodies that had been exposed to the alien materials. Davis did not know the answer to this question, since he did not know much about the incident. Later that night he made a trip to the base hospital; outside the back entrance he spotted two military vans with the rear doors open, from which large pieces of wreckage protruded. Once inside he encountered a young nurse whom he knew, and at the same instant he was noticed by military police, who escorted him from the building.
The next day he met up with the nurse at a coffee shop. She explained that she had been called upon to help two doctors dissect two small bodies, which she thought to be alien. She drew a diagram on a napkin showing an outline of their features. That meeting was to be their last, as she was transferred to England a few days later. Today the nurse would be sixty-nine years old, and investigators are trying to locate her. Five nurses are pictured in the 1947 Roswell Air Field yearbook; the files of all five are strangely missing from military records.
A couple of years ago the Fox television network broadcast a show called "Alien Autopsy." This show was mainly about what happened to the bodies recovered from the Roswell crash site. Bodies were shown being dissected on video tape.
The whole process of the dissection went like this: one of the two people first examined the body. Then, with a surgical knife, he cut open the body from the neck down to the belly and took out the organs. As for the head, he found that the eyes of this body were covered with two black membranes, which were easily removed with tweezers. Lastly, he cut open the skull and took out the brain. The other person seemed to be responsible for recording the whole process.
People at first thought this whole film was a joke, until the bodies were opened up. Steven Spielberg even stated: "There is no way these bodies could be fake, due to the high complexity of the inner organs; not only would it have been impossible then, it is also practically impossible to replicate them now, and if it were possible it would cost us millions of dollars to replicate, or even try to replicate, them." The Kodak film company has done its own testing on the actual film on which the footage was shot and verified that the film stock was made in the 1940s. In the film, close-up shots of the bodies are blurred. This was thought to have been done purposely to hide detail, but in fact cameras in the 1940s could not focus on objects that were too close to them. There are even people who claim to be eyewitnesses of the UFO crash in Roswell and strongly believe that the body that appears in this film was one of the three found in the incident.
Even though there is a lot of information that points to the program being real, there is a lot that points to its falsehood. The government wanted to keep this very low-key; it still insisted that the crash in Roswell was a military balloon. However, when a congressman wanted the government to open all the documents related to the Roswell Incident, the reply was that all of the documents had been destroyed! What did the government want to cover up? If it wanted to hide the fact that aliens came to earth, why did it let the show air to the public? Of course, if the film is false, it doesn't matter.
There are also some differences between the body in the broadcast and the descriptions from the witnesses. The most obvious is that the body in the movie has six fingers and six toes, while the witnesses from Roswell said the bodies had only four. It is also said that the UFO exploded in the air; if that were the case, then the bodies would have been severely burned, yet in "Alien Autopsy" the only injury on the body is a cut on the right leg. The dissection room in which the footage was taken was much too simple, and the people in the movie are wearing normal dissection clothes; if it were a true alien, why not try to protect yourselves from catching some out-of-this-world disease? Lastly, people say that the bodies are too similar to human bodies, and that they might just be abnormal humans. Like many people do, we assume aliens may look similar to us. By chance there may be aliens that resemble us, or we may resemble them.
For obvious reasons, it is necessary that the military services and the intelligence agencies impose a certain amount of secrecy. In recent decades, however, many observers say "that the use of government secrecy has become excessive. Secrecy is tantamount to power and, like power, lends itself to abuse. Behind the shield of secrecy, it is possible for an agency or service to avoid scrutiny and essentially operate outside the law. Accountability to the taxpayers, and to the Congress, can be conveniently avoided." Perhaps this is a major reason the U.S. annual "black budget" has climbed to a staggering $25 billion a year. Secrecy, like power, is not readily relinquished.
As we all know, we will never know the true story of what happened in Roswell, New Mexico in 1947, but it is up to us to decide for ourselves.
f:\12000 essays\sciences (985)\Astronomy\Solar Oven.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SOLAR OVEN
Why it will work:
It will work because the box will be raised off the cold ground, separated from
it by four (4) wood pegs. The black, non-glossy paint will absorb heat rather
than reflect it. The double panel of glass will allow sunlight to pass through
but not allow the heat to escape. The charcoal cubes and shavings will absorb
the heat in the box and store the excess heat, allowing the stored heat to
continuously warm the oven. The tin foil will reflect light into the oven and so
help heat it up. The polyurethane material will insulate the oven and trap the
heat inside. The mirrors will direct extra sunlight into the box and help the
heat stay inside.
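As a rough illustration of why a box this small can get hot at all, the heating power can be estimated from the sunlit area times the solar flux. The numbers below are assumed values for a shoe-box-sized oven on a clear day, not measurements from this project.

    solar_flux = 1000.0          # watts per square meter of full sunlight (assumed)
    window_area = 0.30 * 0.20    # square meters, roughly the top of a shoe box (assumed)
    absorptivity = 0.9           # flat black paint absorbs most of the light reaching it (assumed)

    power_in = solar_flux * window_area * absorptivity
    print(power_in)              # ~54 watts of heating that the glass, foil, and insulation try to keep in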
Materials used:
Shoe Box
Tin Foil
Sheets of Glass
Charcoal Cubes and Shavings
Scissors
Glue
Knives
Polyurethane
Non-Glossy, Black Spray Paint
Four (4) Wood Pegs
Mirrors
Sealer
f:\12000 essays\sciences (985)\Astronomy\Star Mars.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Star Mars
Since the boom in space technology about 30 years ago, man has found the means to expand his existence beyond many once thought "unbreakable barriers." Together with this development in space technology came a large quantity of information and discoveries about the composition of the universe, and scientific questions seemed to jump out in equal number. The question that captures the eye of the media today, causing a bitter controversy, is probably the easiest to understand, considering the complex astronomy jargon: Is life possible on Mars? The fact is we still don't know. "Some of the early arguments we now know to be almost certainly erroneous, but even the most recent pieces of evidence do not unambiguously demonstrate the existence of life on Mars." (Sagan and Shklovskii 273)
Some scientists believe man should look up into the sky searching for new habitats for future generations, since humankind today seems to be going backwards in many aspects of the earth's ecology. The first attempt would be to study the moon; the second, our neighbor planet. Unfortunately, our current technology provides only limited, though useful, information about the red planet because of the vast distance between us.
While people such as Steven Spielberg and George Lucas try to convince us with hundred-million-dollar movies that we are not alone, engineers and geologists like those on the NASA-Stanford University team pursue, on the basis of real evidence, the idea of possible life on Mars. However, the burden of proof is sometimes too heavy even for real evidence. The tough debate started in August 1996, when scientists from the NASA-SU team announced that a meteorite found in Antarctica contained evidence of past life on the red planet. They supported their conclusion with the organic molecules, carbonates, and minerals found inside the rock, which are basic components of living things. This announcement astonished the world, but not the critics, who skeptically offered opposing explanations for each of the components discovered. The main objection came from critics like Allan Treiman, who argued that "These scientists have lowered the standards of evidence rather than raised them, which is what you would expect for a claim this extraordinary." (qtd. in Begley and Rogers 58) The problem arises because those kinds of minerals and organic molecules found in the meteorite, which fell from Mars about 13,000 years ago, can also be formed in nonbiological processes such as exposure to very high temperatures.
For us, the common magazine readers, it is difficult to deal with these two positions: the final acceptance of extraterrestrial life, which is the strong motivation of the NASA-SU team, or the final submission to the fact that we stand as the only life form here and everywhere. This assumption is kind of complicated as well considering the enormous size of the Solar System; moreover, we know that our System resembles a grain of sand in the unimaginable vastness of the universe.
I strongly believe in the scientific method: the experiment conducted to reach the solution to a problem using true information, gathered and analyzed in an objective way to minimize the possibility of error or bias. I like to see irrefutable proof on the table, not just strong claims from highly renowned people. Scientists have made numerous mistakes in the past and will continue to do so, even though our technology is becoming more accurate year by year. I find some weaknesses in certain points cited by planetary geologist David McKay. He admits that "The evidence is somewhat circumstantial, but there is enough to support the hypothesis of ancient life on Mars." (qtd. in Begley and Rogers 57) Enough to support the hypothesis, but not the thesis, I would say.
The debate continues today, and new information will be revealed to the world next April at Houston's Johnson Space Center, when the Mars workshop opens. The media will have to wait until then to put the story in the eye of the hurricane again.
f:\12000 essays\sciences (985)\Astronomy\StarLab.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In preparation for my experiment, the first thing I did was gather all of my
materials needed for the experiment and place them by the back door of my house,
because the experiment was going to take place in my backyard. Secondly, I took
a nap: since I was performing the experiment by myself, I wanted to be well
rested for the 6 hr. span that I would be outside. Thirdly, I ate something; I
did not want to have to interrupt the experiment by having to run to the
refrigerator, because I could miss a couple of shooting stars within the time it
took me to get a sandwich. My domain was the eastern sky, with the focal point
being "The Little Dipper," which was clearly visible.
I started my experiment at precisely 11:00 PM, Fri. Nov. 15, Tag Heuer time, and
ended the experiment at 5:00 AM, Sat. Nov. 16. From 11:00 PM to 12:00 AM no
"shooting stars" were observed in the sky. From the period of 12:00 AM to 1:00
AM I observed 1 "shooting star," at 12:48 AM. Note: the positions of the
"shooting stars" and their paths are reflected on the star map that I compiled.
The first 2 hours of the experiment were very discouraging due to the lack of
activity. From 1:00 AM to 2:00 AM activity started to pick up, with the sighting
of 3 "shooting stars" at 1:04 AM, 1:12 AM, and 1:52 AM. Between the hours of
2:00 AM and 3:00 AM, I saw 3 "shooting stars": 1 at 2:02 AM, 1 at 2:36 AM, and
another at 2:53 AM with a somewhat longer visible path than the others. By 3:00
AM I had noticed that the "shooting stars" had no apparent origin or specific
path pattern. During the period of 3:00 AM to 4:00 AM I started to see more
"shooting stars" closer together in time, at 3:16 AM, 3:27 AM, 3:34 AM, 3:36 AM,
and 3:49 AM. After logging the activity between 3:00 AM and 4:00 AM I noticed
that in the 3:00 AM to 3:15 AM time slot there was not a "shooting star"
sighted. That observation posed the question: "When I went to the bathroom, did
I miss any shooting stars within that 8 min. window?"
In the last hr. of the experiment, 4:00 AM to 5:00 AM, I sighted 7 "shooting
stars," all in different regions of my domain. The times of the sightings were
4:00 AM, 4:05 AM, 4:11 AM, 4:17 AM, 4:31 AM, 4:45 AM, and 4:56 AM. I concluded
that the 4:00 - 5:00 AM time frame was the most productive, and also that the
earlier in the morning, or the closer to dawn, the greater the number of
"shooting stars" observed. The ending number of "shooting stars" tallied during
the period of 11:00 PM - 5:00 AM, Nov. 15 - Nov. 16, was 20.
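As a check on the hourly pattern, the sighting times listed above can be binned by hour. This little tally assumes the list of timestamps in the log is complete.

    # Sighting times copied from the log above (24-hour clock, night of Nov. 15-16).
    sightings = ["00:48", "01:04", "01:12", "01:52", "02:02", "02:36", "02:53",
                 "03:16", "03:27", "03:34", "03:36", "03:49",
                 "04:00", "04:05", "04:11", "04:17", "04:31", "04:45", "04:56"]

    counts = {}
    for t in sightings:
        hour = int(t.split(":")[0])
        counts[hour] = counts.get(hour, 0) + 1

    for hour in sorted(counts):
        print("%02d:00-%02d:00  %d shooting stars" % (hour, hour + 1, counts[hour]))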
Questions to be answered:
Question #1: Why would you expect to see most meteors later in the night, toward
dawn?
Answer #1: The earth's orbital motion and rotational motion are moving in the
same direction on the dawn side, which causes that part of the earth to move
faster through space. Because that part of the earth is moving faster, it is
carried through more debris at that time of night, or early morning.
Question #2: Was there any preferred direction of travel for the meteors you
saw?
Answer #2: No, there was no preferred direction of travel, which made me
conclude that they were sporadic "shooting stars," not part of a meteor shower.
Question #3: Did the meteors seem to originate from any particular plane in the
sky?
Answer #3: Fainter meteors, called shooting stars or falling stars, usually
occur singly and sporadically. At intervals, however, hundreds of such meteors
occur simultaneously and appear to come from a fixed point. These swarms are
called meteor showers and are named after the constellation in which they seem
to have their point of origin. Some appear annually on the same days of each
year and are called periodic showers; others occur infrequently at varying
intervals.
f:\12000 essays\sciences (985)\Astronomy\Stars.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
STARS
Magnitudes
The magnitude scale was invented by an ancient Greek astronomer named Hipparchus in about 150 BC. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest. Modern astronomy has extended this system to stars brighter than Hipparchus' 1st magnitude stars and ones much, much fainter than 6.
As it turns out, the eye senses brightness logarithmically, so each increase of 5 magnitudes corresponds to a decrease in brightness by a factor of 100. The absolute magnitude is the magnitude a star would have if viewed from a distance of 10 parsecs, or some 32.6 light years. On a list of the brightest stars, Deneb must be intrinsically very bright to make the list from its great distance, while Rigel, of nearly the same absolute magnitude but closer, stands even higher on it. Note that most of these distances are really nearby on a cosmic scale, and that they are generally uncertain by at least 20%. All stars are variable to some extent; those which are visibly variable are marked with a "v" in such lists.
What are apparent and absolute magnitudes? Apparent magnitude is how bright stars appear to us in the sky. The scale is somewhat arbitrary, as explained above, but a magnitude difference of 5 has been set to exactly a factor of 100 in intensity. Absolute magnitude is how bright a star would appear from a standard distance, arbitrarily set at 10 parsecs, or about 32.6 light years. Stars can be as bright as absolute magnitude -8 and as faint as absolute magnitude +16 or fainter. There are thus (a very few) stars more than 100 times brighter than Sirius, while hardly any are known fainter than Wolf 359.
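The two definitions above can be written as short formulas: a difference of 5 magnitudes is exactly a factor of 100 in brightness, and the absolute magnitude is the apparent magnitude corrected to the standard distance of 10 parsecs. In the sketch below the Deneb numbers (apparent magnitude about 1.25 at roughly 800 parsecs) are approximate example values, not figures taken from this essay.

    import math

    def brightness_ratio(fainter_mag, brighter_mag):
        # How many times brighter the lower-magnitude star is.
        return 100.0 ** ((fainter_mag - brighter_mag) / 5.0)

    def absolute_magnitude(apparent_mag, distance_parsecs):
        # Magnitude the star would have at the standard distance of 10 parsecs.
        return apparent_mag - 5.0 * math.log10(distance_parsecs / 10.0)

    print(brightness_ratio(6, 1))           # 100 -> a 1st-magnitude star is 100x brighter than a 6th
    print(absolute_magnitude(1.25, 800))    # about -8.3, roughly Deneb (approximate figures)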
Star, large celestial body composed of gravitationally contained hot gases emitting electromagnetic radiation, especially light, as a result of nuclear reactions inside the star. The sun is a star. With the sole exception of the sun, the stars appear to be fixed, maintaining the same pattern in the skies year after year. In fact the stars are in rapid motion, but their distances are so great that their relative changes in position become apparent only over the centuries.
The number of stars visible to the naked eye from earth has been estimated to total 8000, of which 4000 are visible from the northern hemisphere and 4000 from the southern hemisphere. At any one time in either hemisphere, only about 2000 stars are visible. The other 2000 are located in the daytime sky and are obscured by the much brighter light of the sun. Astronomers have calculated that the stars in the Milky Way, the galaxy to which the sun belongs, number in the hundreds of billions. The Milky Way, in turn, is only one of several hundred million such galaxies within the viewing range of the larger modern telescopes. The individual stars visible in the sky are simply those that lie closest to the solar system in the Milky Way.
The star nearest to our solar system is Proxima Centauri, part of a triple star system that is about 40 trillion km (about 25 trillion mi) from earth. In terms of the speed of light, the common standard used by astronomers for expressing distance, this triple-star system is about 4.29 light-years distant; light traveling at about 300,000 km per sec (about 186,000 mi per sec) takes more than four years and three months to travel from this system to earth.
Physical Description
The sun is a typical star, with a visible surface called a photosphere, an overlying atmosphere of hot gases, and above them a more diffuse corona and an outflowing stream of particles called the solar (stellar) wind. Cooler areas of the photosphere, such as the sunspots on the sun, are likely present on other typical stars; their existence on some large nearby stars has been inferred by a technique called speckle interferometry. The internal structure of the sun and other stars cannot be directly observed, but studies indicate convection currents and layers of increasing density and temperature until the core is reached where thermonuclear reactions take place. Stars consist mainly of hydrogen and helium, with varying amounts of heavier elements.
The largest stars known are supergiants with diameters that are more than 400 times that of the sun, whereas the small stars known as white dwarfs have diameters that may be only 0.01 times that of the sun. Giant stars are usually diffuse, however, and may be only 40 times more massive than the sun, whereas white dwarfs are extremely dense and may have masses about 0.1 times that of the sun despite their small size. Supermassive stars that could be 1,000 times more massive than the sun are suspected to exist, and, at the lower range, hot balls of gas may exist that are too small to initiate nuclear reactions. One possible such brown dwarf was first observed in 1987, and others have been detected since then.
Star brightness is described in terms of magnitude. The brightest stars may be as much as 1,000,000 times brighter than the sun; white dwarfs are about 1000 times less bright.
f:\12000 essays\sciences (985)\Astronomy\STEPHEN J HAWKING.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Stephen J. Hawking
by Rachel Finck
Stephen Hawking was born in January of 1942 in Oxford, England. He
grew up near London and was educated at Oxford, from which he received his
BA in 1962, and Cambridge, where he received his doctorate in theoretical
physics. Stephen Hawking is a brilliant and highly productive researcher, and,
since 1979, he has held the Lucasian professorship in mathematics at
Cambridge, the very chair once held by Isaac Newton. Although still relatively
young, Hawking is already being compared to such great intellects as Newton
and Albert Einstein. Yet it should be noted that since the early 1960s he has been
the victim of a progressive and incurable motor neurone disease, ALS, that now
confines him to a wheelchair. This affliction prevents Hawking from reading,
writing, or calculating in a direct and simple way. The bulk of his work, involving
studying, publishing, lecturing, and worldwide travel, is carried on with the help of
colleagues, friends, and his wife. Of his illness, Hawking has said that it has
enhanced his career by giving him the freedom to think about physics and the
Universe.
Stephen Hawking has written many essays involving the unified theory, which is a
theory summarizing the whole of the physical world; a theory that would stand as
a complete, consistent account of the physical interactions and would describe
all possible observations. Our attempts at modeling physical reality normally
consist of two parts: a) a set of local laws that are obeyed by the various
physical quantities, formulated in terms of differential equations, and b) sets
of boundary conditions that tell us the state of some regions of the universe at
a certain time and what effects propagate into them subsequently from the rest
of the universe. Presently, physicists are still trying to unify two separate
theories to describe everything in the universe: the general theory of
relativity and quantum mechanics.
Albert Einstein formulated the general theory of relativity almost
single-handedly in 1915. First, in 1905, he developed the special theory of
relativity, which deals with the fact that observers moving at different speeds
measure different time intervals yet measure the same value for the speed of
light, regardless of their velocity. In 1915, he developed the general theory of
relativity, which treats gravity as a distortion of space-time, and not just a
force within it.
Einstein's original equations predicted that the universe was either expanding
or contracting. Einstein's equations also showed that mass and energy are always
positive, which is why gravity always attracts bodies toward each other, and
that space-time can be curved back onto itself like the surface of the earth. It
was then asked whether matter could curve a region in on itself so much that it
could cut itself off from the rest of the universe. Such a region would become
what is known as a black hole. Nothing could escape it, although objects could
fall in; to get out, an object would have to move faster than the speed of
light, and that is not allowed by the general theory of relativity. In 1965,
Hawking, along with Roger Penrose, proved a number of theorems showing that
where space-time is curved in on itself in this way there would be
singularities, places where space-time has a beginning or an end.
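The statement that nothing can escape has a simple quantitative form: a mass M becomes a black hole if it is squeezed inside its Schwarzschild radius, r = 2GM/c^2, the radius at which the escape speed reaches the speed of light. The sketch below uses standard textbook constants; it is an illustration, not a calculation taken from Hawking's essays.

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s

    def schwarzschild_radius(mass_kg):
        # Radius inside which the escape speed would exceed the speed of light.
        return 2.0 * G * mass_kg / c ** 2

    print(schwarzschild_radius(1.989e30))    # ~2950 m: the sun squeezed into a radius of about 3 km
    print(schwarzschild_radius(5.972e24))    # ~0.009 m: the earth squeezed into about 9 mm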
"The fact that Einstein's general theory of relativity turned out to predict
singularities led to a crisis in physics. (Hawking)" The equations of general
relativity cannot be defined as a singularity. This means that general relativity
cannot predict how the universe should begin at the big bang. Thus, it is not a
complete theory. It must be paired with quantum mechanics.
In 1905, Einstein also wrote about the photoelectric effect, which he theorized
could be explained if light came not in continuously variable amounts but in
packets of a certain size. A few years earlier, the idea of energy coming in
quanta had been introduced by Max Planck.
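Planck's "packets" have an energy fixed by the light's frequency, E = hf, or equivalently E = hc/wavelength. A quick sketch for visible light; the 550-nanometer wavelength is just an example value, not one discussed in the essay.

    h = 6.626e-34        # Planck's constant, joule-seconds
    c = 2.998e8          # speed of light, m/s
    eV = 1.602e-19       # joules in one electron volt

    wavelength = 550e-9  # green light, metres (example value)
    energy = h * c / wavelength
    print(energy / eV)   # ~2.3 electron volts per photon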
The full implications of the photoelectric effect were not realized until 1925,
when Werner Heisenberg pointed out that it made it impossible to measure the
position of a particle exactly. To see where a particle is, you have to shine a
light on it, and as Einstein showed, you have to use at least one quantum of
light. This whole packet of light disturbs the particle and causes it to move at
some speed in some direction different from its state before the light was
shined on it. In this way, the more accurately you want to measure the position
of the particle, the greater the energy of the packet you have to use, and thus
the more you disturb the particle. This dilemma is called the Heisenberg
uncertainty principle.
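Heisenberg's principle can be written as (uncertainty in position) x (uncertainty in momentum) >= hbar/2. The sketch below shows what that lower bound means for an electron confined to roughly the size of an atom; the 0.1-nanometer confinement is an example value, not a number from the essay.

    hbar = 1.055e-34           # reduced Planck constant, joule-seconds
    electron_mass = 9.109e-31  # kilograms

    delta_x = 1e-10                     # confine the electron to ~0.1 nm, about an atomic diameter (example)
    delta_p = hbar / (2.0 * delta_x)    # smallest allowed momentum uncertainty
    delta_v = delta_p / electron_mass   # corresponding spread in speed

    print(delta_p)   # ~5e-25 kg*m/s
    print(delta_v)   # ~6e5 m/s: pinning down the position forces a large spread in speed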
Einstein's general theory of relativity is a classical theory because it does
not take into account the uncertainty principle. One therefore has to find a new
theory that combines general relativity and the uncertainty principle. In most
situations, the difference between the general theory of relativity and the new
theory is very small. However, the singularity theorems that Hawking proved show
that space-time will become highly curved on very small scales, and the effects
of the uncertainty principle will then become very important.
The problem that Einstein had with quantum mechanics is that he used the
commonsense notion that a particle has a definite history and a definite
location. But it must be taken into account that a particle has an infinite set
of histories. A famous thought experiment called Schrödinger's cat helps to
illustrate this concept. Let's say that a cat is placed in a sealed box and a
gun is pointed at it. The gun will only go off if a radioactive nucleus decays,
and there is exactly a 50% chance of this happening. Later on, before the box is
opened, there are two possibilities of what happened to the cat: the gun did not
go off and the cat is alive, or the gun did go off and the cat is dead. Before
the box is opened, the cat is both alive and dead at the same time. The cat has
two separate histories.
Another way to think of this was put forth by the physicist Richard Feynman. He
proposed that a system doesn't just have a single history in space-time; it has
every possible history. "Consider, for example, a particle at point A at a
certain time. Normally, one would assume that the particle would move in a
straight line away from A. However, according to the sum over histories, it can
move on any path that starts at A. (Hawking)" It's like what happens when you
place a drop of ink on blotting paper, and it diffuses along every path away
from its point of origin.
In 1973, Stephen Hawking began investigating what effect the uncertainty
principle would have on a particle in the curved space-time near a black hole.
He found that the black hole would not be completely black: the uncertainty
principle would allow particles to leak out of the black hole at a steady rate.
Although the discovery came as a complete surprise, "It ought to have been
obvious. The Feynman sum over histories says that particles can take any path
through space-time. Thus it is possible for a particle to travel faster than
light. (Hawking)"
In 1983, Stephen Hawking proposed that the sum over histories for the
universe should not be taken over histories in real time. Rather, it should be
taken over histories in imaginary time that were closed in on themselves, like the
surface of the earth. Because these histories didn't have any singularities or any
beginning or end, what happened to them would be determined entirely by the
laws of physics. This means, what happened in imaginary time could be
calculated. "And if you know the history of the universe in imaginary time, you
can calculate how it behaves in real time. In this way, you could hope to get a
complete unified theory, one that would predict everything in the universe.
(Hawking)"
Imaginary time is a concept in which Hawking, as a physicist, has made a
particular advance. It seems obvious that the universe has a unique history, yet
since the discovery of quantum mechanics, we have to consider the universe as
having every possible history. To grasp the concept of imaginary time, think of
real time as horizontal line. Early times are on the left, and late times are on the
right. Then think of lines going 90° from the horizontal line of real time. These
lines, which are at right angles to real time, represent imaginary time. The
importance of imaginary time lies in the fact that the universe is curved in on
itself, leading to singularities. At the singularities, the equations of physics cannot
be defined, and thus one cannot predict what will happen. But the imaginary time
direction is at right angles to real time. This means that it behaves in a similar
way to the three directions that correspond to moving in space. Then, the
curvature of space can lead to the three directions and the imaginary time
direction meeting up around the back. These would form a closed surface, like
the surface of the earth. As a physicist, Stephen Hawking has made much progress
in bringing imaginary time into the way the field of physics thinks.
f:\12000 essays\sciences (985)\Astronomy\The Beginning of Time.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE BEGINNING OF TIME
There was a period in history when the beginning of the world in which we
live was expressed through legends and myths; now, through the use of
increasingly advanced scientific equipment, we can see that the universe is more
vast and complex than ever imagined. The purpose of this paper is to bring
light to some of the modern beliefs regarding the origin of the universe by
answering a series of questions. What are the commonly accepted theories of
the evolution of the universe? What is meant by the "Big Bang Theory" and how
does it work? And how did our planet and solar system develop from the Big
Bang? This paper will use scientific data to base the evolution of our universe
around the Big Bang.
At the present time there are two theories which are used to explain the
creation of the universe. The first theory is the famous Big Bang Theory,
which will be detailed later. The second is the Steady State Theory.
(Weinberg, 1977)
The latter hypothesis was created to replace the common belief that the
universe was completely static. The expansion of the universe was discovered
in 1929, when Edwin Hubble found that every galaxy in the universe was
moving away from every other galaxy; this meant that the universe was expanding.
Hubble found the movement of the galaxies by using a phenomenon known as
the Doppler effect. This effect causes bodies moving away from an observer to
have a "red-shifted" spectrum (the light spectrum of the body is shifted
closer to red) and bodies moving towards an observer to be "blue-shifted."
(Hawking, 1988)
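The red shift can be put into numbers: for modest speeds the fractional shift in wavelength equals v/c, and Hubble's law says the recession speed grows in proportion to distance, v = H0 x d. The sketch below assumes a Hubble constant of about 70 km/s per megaparsec, a modern round value rather than one taken from the cited sources.

    c = 3.0e5     # speed of light, km/s
    H0 = 70.0     # Hubble constant, km/s per megaparsec (assumed round value)

    def recession_velocity(distance_mpc):
        # Hubble's law: farther galaxies recede faster.
        return H0 * distance_mpc

    def redshift(velocity_km_s):
        # Fractional wavelength shift for speeds well below the speed of light.
        return velocity_km_s / c

    v = recession_velocity(100.0)   # a galaxy 100 megaparsecs away
    print(v)                        # 7000 km/s
    print(redshift(v))              # ~0.023 -> its spectral lines shift about 2.3% toward the red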
The expansion was traced backwards through time to discover that all the
galaxies had originated from the same point. It was later proposed that all
matter had spawned from that "center of the universe" discovered by Hubble, by
means of some sort of enigmatic portal: matter would collect outside this
singularity and form every moon, planet, and star known today.
The Steady State Theory was very attractive because it featured a
universe with no beginning or end. The theory meant, however, that scientists
had to abandon the laws of conservation of mass and energy. It seemed plausible
that the aforementioned laws of physics could break down at a certain point, but
more and more evidence gathered against the Steady State Theory, leading to
unending modifications to it, until finally the theory was dropped completely
with the discovery of the smooth microwave background radiation (radiation so
ancient it had shifted right out of the visible spectrum into microwave
radiation). A smooth background to the universe suggested that it was once hot
and uniform - the ideal conditions for the Big Bang. (Weinberg, 1977)
The Big Bang was almost exactly what it sounds like - a giant explosion.
During this explosion all the materials in the universe today (matter, energy and
even time) were expelled into a vacuum about 12 billion years ago. The
combined mass of the universe was compressed into a point of zero volume
(therefore infinite density). It is impossible to predict what the universe would
physically have been like because the density of the universe (infinity) cannot be
plugged into any physical equation. (Weinberg, 1977)
The history of the universe can, however, be traced back to a moment
10^-33 seconds after the Big Bang. At this moment the universe is filled with a sea
of various exotic particles along with electrons, photons, and neutrinos (and their
respective anti-particles). At this time there are also a small number of protons
and neutrons. The protons and neutrons are, in this very dense soup,
participating in sub-atomic reactions. The two most important of these reactions
are:
Antineutrino + Proton ----> Positron (anti-electron) + Neutron
Neutrino + Neutron ----> Electron + Proton
In effect the protons are becoming neutrons and vice-versa. The energies are
so great that simple atoms being formed fall apart immediately after coming
together. (Silk, 1994)
As the universe expands and loses energy, the electrons and positrons
begin to collide, effectively annihilating one another, leaving only energy in the
form of photons and neutrinos. Approximately fourteen seconds after the Big
Bang, simple atoms such as deuterium (heavy hydrogen) and helium are formed.
About three minutes after the moment of creation, the universe has
sufficiently cooled to allow formations of helium and other light elements.
(Weinberg, 1977)
As the cosmic background radiation shows, the early universe was
uniformly smooth. A change had to have occurred, otherwise no celestial
objects would have formed and as the particles lost energy, they would simply
decompose into simpler particles. Something had to have caused the particles
to group together and form larger entities. (Silk, 1994)
Gravity comes to mind, but at this point the largest particle is a helium
atom, which, due to its small size, exerts very little gravitational pull.
(Weinberg, 1977)
The most respectable theory is the "cosmic string theory," which states that our
four-dimensional space (three spatial dimensions plus time) is made up of knots
in seven- or eight-dimensional 'strings'. These strings are extremely massive (each
metre of string would weigh about 10^21 kg). This would require that the universe was
not a complete vacuum prior to the Big Bang, because space itself would be
made up of cosmic strings. (Kitchen, 1990)
The cosmic strings, while being extremely heavy, are also very tight, so
tight that if a string were not either a circle (connected to itself in a loop) or of
infinite length, it would pull itself together into nothing. A string can also
disconnect and reattach with other strings that intersect it. (Kitchen, 1990)
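To give a feel for the figure quoted above, a short back-of-the-envelope sketch (in Python) converts the quoted mass per metre into the mass of a hypothetical loop. The loop circumference is an assumed value chosen only for illustration; it is not taken from Kitchen.

# Hypothetical sketch: mass of a cosmic string loop, using the 10^21 kg/m figure quoted above.
MASS_PER_METRE = 1e21   # kg per metre of string, as quoted from Kitchen (1990)
SOLAR_MASS = 1.989e30   # kg

def loop_mass_kg(circumference_m):
    return MASS_PER_METRE * circumference_m

loop = loop_mass_kg(1e9)  # an assumed loop about a million kilometres around
print(f"loop mass ~ {loop:.1e} kg, about {loop / SOLAR_MASS:.2f} solar masses")

Even a loop of such modest size, astronomically speaking, would carry roughly the mass of a star, which is why such loops could plausibly act as gravitational seeds for galaxies.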
Now a universe can be pictured with an infinite number of 'cosmic strings'
interacting with each other even before the Big Bang. After the material was
dispersed via the Big Bang, particles were attracted to the cosmic strings
(namely loops, since the mass would be more centralized). These cosmic string
loops could be the basis for the formation of a galaxy. The small particles
would be attracted by the strong gravitational field of the loops, thus creating a
hub for the creation of a galaxy. After many years the loops would decay
because of their strong emission of gravitational radiation, leaving behind enough
collected matter to form a fully functioning galaxy. (Kitchen, 1990)
In clouds of dust and gas (mainly hydrogen) at the center of the galaxy,
pressure and temperature build, causing an increase in density and gravity. The
heavier particles fall to an orbital cloud of the young star while the lighter
elements close in on the core. This increase in gravity causes a further increase
in pressure, until the center of the star has the conditions ideal for nuclear
fusion. This process occurs at the very core of the star and converts hydrogen
into helium at an alarming rate. A star is born. (Silk, 1994)
The outer cloud of the star may also harbor some heavenly
bodies, usually planets or other stars. The clouds of dust collect in the same way in
a planet, except that the temperatures never quite reach the point where nuclear
reactions take place. (Silk, 1994)
By means of commonly accepted theories in the field of astrophysics, the
origin of the universe, from the Big Bang to the formation of a planet, has been
detailed. Thanks to new technology introduced in the past fifty
years, and thanks to intellectual minds capable of supporting that technology,
more has been learned about our world than was ever imagined possible. Yet even
with all of the advances thought feasible put to use, we are still far from knowing
the absolute truth. Surely the early astronomers thought that they were correct
in their theories, but most ended up being dead wrong. We cannot assume that
all of our current theories are correct because, although we may know more, we
will never know all.
BIBLIOGRAPHY
Hawking, S. W. (1988). A Brief History Of Time. New York: Bantam.
Kitchen, C. R. (1990). Journeys To The End Of The Universe. Bristol: Adam
Hilger.
Silk, Joseph. (1994). A Short History Of The Universe. New York: Scientific
American Library.
Weinberg, Steven. (1977). The First Three Minutes. New York: Basic Books, Inc.
f:\12000 essays\sciences (985)\Astronomy\The Big Bang comparisive of two major theories.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Big Bang
How the universe began, and whether and when it will end, has always been a mystery.
Astronomers construct hypotheses called cosmological models that try to find the answer.
There are two types of models: Big Bang and Steady State. However, based on much
observational evidence, the Big Bang theory can best explain the creation of the
universe.
The Big Bang model postulates that about 15 to 20 billion years ago, the universe
violently exploded into being, in an event called the Big Bang. Before the Big Bang, all
of the matter and radiation of our present universe were packed together in the primeval
fireball - an extremely hot dense state from which the universe rapidly expanded. The
Big Bang was the start of time and space. The matter and radiation of that early stage
rapidly expanded and cooled. Several million years later, it condensed into galaxies. The
universe has continued to expand, and the galaxies have continued moving away from
each other ever since. Today the universe is still expanding, as astronomers have
observed.
The Steady State model says that the universe does not evolve or change in time. There
was no beginning in the past, nor will there be change in the future. This model assumes
the perfect cosmological principle. This principle says that the universe is the same
everywhere on the large scale, at all times. It maintains the same average density of
matter forever.
There is observational evidence that suggests the Big Bang model is more
reasonable than the Steady State model. The first is the redshifts of distant galaxies. Redshift
is a Doppler effect: if a galaxy is moving away, the spectral lines of that
galaxy will be observed shifted toward the red end. The faster the galaxy moves, the more
shift it has. If the galaxy is moving closer, the spectral lines will show a blue shift. If the
galaxy is not moving, there is no shift at all. However, as astronomers have observed, the
more distant a galaxy is from Earth, the more redshift it shows in its spectrum.
This means the farther away a galaxy is, the faster it moves. Therefore, the universe is
expanding, and the Big Bang model seems more reasonable than the Steady State model.
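The distance-velocity relation described in this paragraph is usually written as Hubble's law, v = H0 * d. A minimal Python sketch, assuming a commonly quoted modern value of the Hubble constant and a few made-up distances, shows how recession velocity grows with distance:

# Hypothetical sketch of Hubble's law: the farther the galaxy, the faster it recedes.
H0 = 70.0  # Hubble constant, km/s per megaparsec (an assumed round value)

def recession_velocity_km_s(distance_mpc):
    return H0 * distance_mpc

for d in (10, 100, 1000):  # distances in megaparsecs, illustrative only
    print(f"{d:5d} Mpc -> {recession_velocity_km_s(d):7.0f} km/s")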
The second observational evidence is the radiation produced by the Big Bang. The Big
Bang model predicts that the universe should still be filled with a small remnant of
radiation left over from the original violent explosion of the primeval fireball in the past.
The primeval fireball would have sent strong shortwave radiation in all directions into
space. In time, that radiation would spread out, cool, and fill the expanding universe
uniformly. By now it would strike Earth as microwave radiation. In 1965 physicists
Arno Penzias and Robert Wilson detected microwave radiation coming equally from all
directions in the sky, day and night, all year. And so it appears that astronomers have
detected the fireball radiation that was produced by the Big Bang. This casts serious
doubt on the Steady State model, which cannot explain the existence of this
radiation and therefore cannot adequately explain the beginning of the universe.
Since the Big Bang model is the better model, the existence and the future of the
universe can also be explained. Around 15 to 20 billion years ago, time began. The
points that were to become the universe exploded in the primeval fireball called the Big
Bang. The exact nature of this explosion may never be known. However, recent
theoretical breakthroughs, based on the principles of quantum theory, have suggested that
space, and the matter within it, masks an infinitesimal realm of utter chaos, where events
happen randomly, in a state called quantum weirdness.
Before the universe began, this chaos was all there was. At some time, a portion of this
randomness happened to form a bubble, with a temperature in excess of 10 to the power
of 34 degrees Kelvin. Being that hot, it naturally exploded. For an extremely brief
period, billionths of billionths of a second, it inflated. At the end of this period of
inflation, the universe may have had a diameter of a few centimetres. The temperature had
cooled enough for particles of matter and antimatter to form, and they instantly destroyed
each other, producing fire and a thin haze of matter, apparently because slightly more
matter than antimatter was formed. The fireball, and the smoke of its burning, was the
universe at an age of a trillionth of a second.
The temperature of the expanding fireball dropped rapidly, cooling to a few billion
degrees in a few minutes. Matter continued to condense out of energy, first protons and
neutrons, then electrons, and finally neutrinos. After about an hour, the temperature had
dropped below a billion degrees, and protons and neutrons combined and formed
hydrogen, deuterium, and helium. In a billion years, this cloud of energy, atoms, and
neutrinos had cooled enough for galaxies to form. The expanding cloud cooled still
further, until today its temperature is a couple of degrees above absolute zero.
In the future, the universe may end up in two possible situations. From the initial Big
Bang, the universe attained a speed of expansion. If that speed is greater than the
universe's own escape velocity, then the universe will not stop its expansion. Such a
universe is said to be open. If the velocity of expansion is slower than the escape
velocity, the universe will eventually reach the limit of its outward thrust, just like a ball
thrown in the air comes to the top of its arc, slows, stops, and starts to fall. The crash of
the long fall may be the Big Bang to the beginning of another universe, as the fireball
formed at the end of the contraction leaps outward in another great expansion. Such a
universe is said to be closed, and pulsating.
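The open-versus-closed question sketched above is, in effect, a comparison between the universe's actual density and a critical density set by the expansion rate, roughly 3*H0^2 / (8*pi*G). The following Python sketch assumes an illustrative Hubble constant and an assumed sample density; neither number comes from the sources this essay draws on.

import math

# Hypothetical sketch: is the universe dense enough to recollapse?
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22  # 70 km/s/Mpc converted to 1/s

critical_density = 3 * H0**2 / (8 * math.pi * G)  # kg per cubic metre
print(f"critical density ~ {critical_density:.1e} kg/m^3")

sample_density = 2.0e-27     # an assumed average matter density, kg/m^3
if sample_density > critical_density:
    print("denser than critical: closed, will eventually recollapse")
else:
    print("below critical: open, expands forever")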
If the universe has achieved escape velocity, it will continue to expand forever. The
stars will redden and die, the universe will be like a limitless empty haze, expanding
infinitely into the darkness. This space will become even emptier, as the fundamental
particles of matter age, and decay through time. As the years stretch on into infinity,
nothing will remain. A few primitive particles such as positrons and electrons will be
orbiting each other at distances of hundreds of astronomical units. These particles will
spiral slowly toward each other until touching, and they will vanish in the last flash of
light. After all, the Big Bang model is only an assumption. No one knows for sure
exactly how the universe began or how it will end. Man will never know the exact truth
about the roots of our universe; however, the Big Bang model is the most logical and
reasonable theory to explain the universe in modern science.
f:\12000 essays\sciences (985)\Astronomy\The Big Bang Theory.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How the universe began, and whether and when it will end,
has always been a mystery. Astronomers construct hypotheses called
cosmological models that try to find the answer. There are two
types of models: Big Bang and Steady State. However, based on
much observational evidence, the Big Bang theory can best
explain the creation of the universe.
The Big Bang model postulates that about 15 to 20 billion
years ago, the universe violently exploded into being, in an
event called the Big Bang. Before the Big Bang, all of the
matter and radiation of our present universe were packed together
in the primeval fireball--an extremely hot dense state from which
the universe rapidly expanded.1 The Big Bang was the start of
time and space. The matter and radiation of that early stage
rapidly expanded and cooled. Several million years later, it
condensed into galaxies. The universe has continued to expand,
and the galaxies have continued moving away from each other ever
since. Today the universe is still expanding, as astronomers
have observed.
The Steady State model says that the universe does not
evolve or change in time. There was no beginning in the past,
nor will there be change in the future. This model assumes the
perfect cosmological principle. This principle says that the
universe is the same everywhere on the large scale, at all
times.2 It maintains the same average density of matter forever.
There is observational evidence that suggests the Big Bang
model is more reasonable than the Steady State model. The first
is the redshifts of distant galaxies. Redshift is a Doppler
effect which states that if a galaxy is moving away, the spectral
line of that galaxy observed will have a shift to the red end.
The faster the galaxy moves, the more shift it has. If the
galaxy is moving closer, the spectral line will show a blue
shift. If the galaxy is not moving, there is no shift at all.
However, as astronomers have observed, the more distant a galaxy
is from Earth, the more redshift it shows in its spectrum.
This means the farther away a galaxy is, the faster it moves.
Therefore, the universe is expanding, and the Big Bang model
seems more reasonable than the Steady State model.
The second observational evidence is the radiation produced
by the Big Bang. The Big Bang model predicts that the universe
should still be filled with a small remnant of radiation left
over from the original violent explosion of the primeval fireball
in the past. The primeval fireball would have sent strong
shortwave radiation in all directions into space. In time, that
radiation would spread out, cool, and fill the expanding universe
uniformly. By now it would strike Earth as microwave radiation.
In 1965 physicists Arno Penzias and Robert Wilson detected
microwave radiation coming equally from all directions in the
sky, day and night, all year.3 And so it appears that
astronomers have detected the fireball radiation that was
produced by the Big Bang. This casts serious doubt on the Steady
State model. The Steady State model cannot explain the existence of
this radiation, so it cannot adequately explain the beginning of
the universe.
Since the Big Bang model is the better model, the existence
and the future of the universe can also be explained. Around 15
to 20 billion years ago, time began. The points that were to
become the universe exploded in the primeval fireball called the
Big Bang. The exact nature of this explosion may never be known.
However, recent theoretical breakthroughs, based on the
principles of quantum theory, have suggested that space, and the
matter within it, masks an infinitesimal realm of utter chaos,
where events happen randomly, in a state called quantum
weirdness.4
Before the universe began, this chaos was all there was. At
some time, a portion of this randomness happened to form a
bubble, with a temperature in excess of 10 to the power of 34
degrees Kelvin. Being that hot, naturally it expanded. For an
extremely brief and short period, billionths of billionths of a
second, it inflated. At the end of the period of inflation, the
universe may have had a diameter of a few centimetres. The
temperature had cooled enough for particles of matter and
antimatter to form, and they instantly destroyed each other,
producing fire and a thin haze of matter, apparently because
slightly more matter than antimatter was formed.5 The fireball,
and the smoke of its burning, was the universe at an age of a
trillionth of a second.
The temperature of the expanding fireball dropped rapidly,
cooling to a few billion degrees in a few minutes. Matter
continued to condense out of energy, first protons and neutrons,
then electrons, and finally neutrinos. After about an hour, the
temperature had dropped below a billion degrees, and protons and
neutrons combined and formed hydrogen, deuterium, and helium. In a
billion years, this cloud of energy, atoms, and neutrinos had
cooled enough for galaxies to form. The expanding cloud cooled
still further, until today its temperature is a couple of degrees
above absolute zero.
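The cooling described in this paragraph follows a simple rule in the standard picture: the radiation temperature falls in proportion to how much the universe has stretched, so the temperature at redshift z is roughly T_now * (1 + z). A small Python sketch, using today's measured microwave background temperature and a few illustrative redshifts (not figures from the sources cited here):

# Hypothetical sketch of how the background temperature scales with expansion.
T_NOW = 2.73  # kelvin, today's microwave background temperature

def temperature_at_redshift(z):
    return T_NOW * (1 + z)

for z in (0, 10, 1000):  # illustrative redshifts
    print(f"z = {z:4d}: T ~ {temperature_at_redshift(z):7.1f} K")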
In the future, the universe may end up in two possible
situations. From the initial Big Bang, the universe attained a
speed of expansion. If that speed is greater than the universe's
own escape velocity, then the universe will not stop its
expansion. Such a universe is said to be open. If the velocity
of expansion is slower than the escape velocity, the universe
will eventually reach the limit of its outward thrust, just like
a ball thrown in the air comes to the top of its arc, slows,
stops, and starts to fall. The crash of the long fall may be the
Big Bang to the beginning of another universe, as the fireball
formed at the end of the contraction leaps outward in another
great expansion.6 Such a universe is said to be closed, and
pulsating.
If the universe has achieved escape velocity, it will
continue to expand forever. The stars will redden and die, the
universe will be like a limitless empty haze, expanding
infinitely into the darkness. This space will become even
emptier, as the fundamental particles of matter age, and decay
through time. As the years stretch on into infinity, nothing
will remain. A few primitive particles such as positrons and
electrons will be orbiting each other at distances of hundreds of
astronomical units.7 These particles will spiral slowly toward
each other until touching, and they will vanish in the last flash
of light. After all, the Big Bang model is only an assumption.
No one knows for sure exactly how the universe began or how
it will end. However, the Big Bang model is the most logical and
reasonable theory to explain the universe in modern science.
ENDNOTES
1. Dinah L. Mache, Astronomy, New York: John Wiley & Sons,
Inc., 1987. p. 128.
2. Ibid., p. 130.
3. Joseph Silk, The Big Bang, New York: W.H. Freeman and
Company, 1989. p. 60.
4. Terry Holt, The Universe Next Door, New York: Charles
Scribner's Sons, 1985. p. 326.
5. Ibid., p. 327.
6. Charles J. Caes, Cosmology, The Search For The Order Of
The Universe, USA: Tab Books Inc., 1986. p. 72.
7. John Gribbin, In Search Of The Big Bang, New York: Bantam
Books, 1986. p. 273.
BIBLIOGRAPHY
Boslough, John. Stephen Hawking's Universe. New York: Cambridge
University Press, 1980.
Caes, Charles J. Cosmology, The Search For The Order Of The
Universe. USA: Tab Books Inc., 1986.
Gribbin, John. In Search Of The Big Bang. New York: Bantam
Books, 1986.
Holt, Terry. The Universe Next Door. New York: Charles
Scribner's Sons, 1985.
Kaufmann, William J., III. Astronomy: The Structure Of The
Universe. New York: Macmillan Publishing Co., Inc., 1977.
Mache, Dinah L. Astronomy. New York: John Wiley & Sons, Inc.,
1987.
Silk, Joseph. The Big Bang. New York: W.H. Freeman and Company,
1989.
f:\12000 essays\sciences (985)\Astronomy\The Creation of the Universe.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Creation of the Universe
This paper will go over the creation of the universe. There are many theories about this
issue. I will briefly summarize a few of them, and then give whatever evidence is available for or
against each.
There are many theories regarding the creation of the universe, for example, there is an
ancient Egyptian legend that says that Osiris Khepera created himself out of a dark, boundless ocean
called "Nu". Then out of this ocean, he created the universe. I will be writing about these theories:
The Big Bang theory is what most people believe, also there is a theory called "Steady State", which
is the opposite of the Big Bang theory. There is the theory of an "Oscillating Universe", which is sort
of a compromise between the Big Bang theory, and the Steady State theory. There is also the
religious theory, in which God created everything.
There are a lot of different theories regarding the creation of the universe. It is a very
controversial topic, because most theories don't follow the story of Genesis in the bible. There is the
"Big Bang" theory, the "Steady State" theory, and the religious theory. The theory that best explains
creation is the Big Bang theory,
f:\12000 essays\sciences (985)\Astronomy\The Future of NASA.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Future of NASA
One hundred years from now, NASA's space program will not be so far
advanced that people will be able to beam around the Universe or travel through
time. However, unless something goes terribly wrong with the world, it is
expected to advance tremendously. New, high-tech designs for rockets will
make them more environmentally safe. Rockets will also be recycled and
reused. Systems for retrieving the parts of rockets that today are left behind in space
will be created. Astronauts will be well on their way to exploring Mars from a
hands-on perspective. Because of the overpopulated Earth, scientists may even
be considering ways to alter life on Mars, so that people would be able to live
there some day.
Some products developed in NASA's space program that we now
incorporate in our daily lives include the vacuum cleaner, pacemaker, pens that
can write upside-down, and the zero-gravity training system. The vacuum
cleaner was originally a great tool for astronauts in outer space. It is now a very
helpful tool for cleaning our homes. The pacemaker is a form of life-support on
spacecraft, helping astronauts' hearts pump while they are outside of the
Earth's atmosphere. It is used, on Earth, for those whose hearts have problems
with pumping blood. Pens that write upside-down are used in space, where
there is no gravity and writing with pens would otherwise be impossible. They
are convenient tools on Earth when we are trying to write on vertical surfaces. A
zero-gravity training system is used to help astronauts become more comfortable
with the conditions in space. It is used in places such as Sportsland, for kids to
twirl around in.
In the future, telephones with picture screens, much like those used to
see astronauts in space with, will become common on Earth. Rooms with no
gravity may become a part of amusement parks. More solar-powered energy
sources will also be available. Space Internet may be created, so that
astronauts and anyone else who happens to be in space can upload pictures
and chat with the rest of the world while they are actually in outer space. In
general, there is a bright future in store for NASA, with new and advanced
technology waiting around the bend of the twenty-first century.
f:\12000 essays\sciences (985)\Astronomy\The life of Nicolaus Copernicus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nicolaus Copernicus
His Life:
Throughout history people have always looked up at the sky and wondered about the universe. Some just wonder while others attempt to solve this mystery. One of the people who had endeavored to solve it was Nicolaus Copernicus.
Copernicus was born in the present-day town of Torun, Poland in February of 1473. While still a young boy, Copernicus was put in the custody of his uncle when his father died. His uncle made sure that his nephew got the best education he could obtain. This is how Copernicus was able to enter the University of Krakow, which was well known for its mathematics and astronomy programs. After finishing in Krakow, he was inspired to further his education by going to the University of Bologna in Italy. While there, he roomed with Domenico Maria de Novara, the mathematics professor. In 1500, Copernicus lectured in Rome and in the next year obtained permission to study medicine at Padua. Before returning to Poland, he received a doctorate in canon law from the University of Ferrara.
Copernicus lived with his uncle in his bishopric palace. While he stayed there he published his first book, which was a translation of letters written by the 7th century writer Theophylactus of Simocatta. After that he wrote an astronomical discourse that laid the foundation of his heliocentric theory: the theory that the sun is the center of our solar system. However, it was 400 years before it was published.
After leaving his uncle, he wrote a treatise on money and began the work for which he is most famous, On the Revolutions of the Celestial Spheres, which took him almost 15 years to write. It is ironic that the work to which he devoted a good part of his life would not be published until he was on his deathbed.
His Theory:
To understand the contribution Copernicus made to the astronomical community, you first need to understand the theory that had been accepted at the time of Copernicus.
The question of the arrangement of the planets arose about 4000 BC. At this time the Mesopotamians believed that the earth was at the center of the universe and that the other heavenly bodies moved around the earth. This belief came to be known as the geocentric theory. They believed this, but they had no scientific proof to support it.
It was not until the 2nd century that the famous astronomer Ptolemy gave an explanation for the movement of the stars across the sky, and the geocentric theory began to become credible.
That was the theory that existed at the time of Copernicus. Copernicus was not the first one to come up with the idea of a sun-centered (heliocentric) universe. Long before Ptolemy theorized about the movement of the stars, there was a man by the name of Aristarchus of Samos. He was the first one to propose the idea of a sun-centered universe.
The stipulations of Copernicus's theory are:
· The earth rotates on its axis daily and rotates around the sun yearly
· The other planets circle the sun
· As the earth rotates it wobbles like a top
· The stars are stationary
· The greater the radius of a planet's orbit, the more time it takes to make one complete circuit around the sun
All these concepts seem totally logical to us; however, most 16th century readers were not ready to accept that the earth rotated around the sun. It may seem odd, but the calculations that Copernicus made were not much more accurate than his predecessors'. Even so, most of his theory was accepted, while the more radical parts were omitted.
The one concept that was not liked was that the earth moved around the sun. To deal with this dilemma, Tycho Brahe met Copernicus and Ptolemy halfway by keeping the earth stationary while the other planets orbited the sun, which in turn circled the earth.
The rotating earth idea was not revived until the English philosopher Isaac Newton started explaining celestial mechanics.
f:\12000 essays\sciences (985)\Astronomy\The Mercury Program.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Project Mercury, the first manned U.S. space project, became an official NASA program
on October 7, 1958. The Mercury Program was given two main but broad objectives: 1. to
investigate man's ability to survive and perform in the space environment and 2. to develop basic
space technology and hardware for manned space flight programs to come.
NASA also had to find astronauts to fly the spacecraft. In 1959 NASA asked the U.S.
military for a list of their members who met certain qualifications. All applicants were required to
have had extensive jet aircraft flight experience and engineering training. The applicants could be
no more than five feet eleven inches tall, due to the limited amount of cabin space that the Mercury
modules provided. All who met these requirements were also required to undergo numerous
intense physical and psychological evaluations. Finally, out of a field of 500 people who met the
experience, training, and height requirements, NASA selected seven to become U.S. astronauts.
Their names: Lieutenant M. Scott Carpenter; Air Force Captains L. Gordon Cooper, Jr., Virgil
"Gus" Grissom, and Donald K. "Deke" Slayton; Marine Lieutenant Colonel John H. Glenn, Jr.;
and Navy Lieutenant commanders Walter M. Schirra, Jr., and Alan B. Shepard, Jr. Of these, all
flew in Project Mercury except Deke Slayton who was grounded for medical reasons. He later
became an American crewmember of the Apollo-Soyuz Test Project.
The Mercury module was a bell shaped craft. Its base measured exactly 74.5 inches wide
and it was nine feet tall. For its boosters NASA chose two U.S. military rockets: the Army's
Redstone, which provided 78,000 pounds of thrust, was used for suborbital flights, and the Air
Force Atlas, providing 360,000 pounds of thrust, was used for orbital flights. The Mercury craft
was fastened to the top of the booster for launch. Upon reaching the limits of Earth's atmosphere,
the boosters were released from the module and fell into open ocean.
The first Mercury launch was performed on May 5, 1961. The ship, Freedom 7, was the
first U.S. craft used for manned space flight. Astronaut Alan Shepard, Jr. remained in suborbital
flight for 15 minutes and 22 seconds, reaching an altitude of about 116 miles.
The second and final suborbital mission of the Mercury Project was launched on July 21,
1961. Gus Grissom navigated his ship, Liberty Bell 7, through flight for just 15 seconds longer
than the previous mission.
The next Mercury flight was accomplished using an Atlas booster. On February 20, 1962
it fired up and launched John Glenn, Jr., inside Friendship 7, into orbit. Glenn orbited Earth three
times and when he returned the country celebrated.
Just three months later, on May 24, Scott Carpenter also orbited Earth three times in Aurora
7.
On October 3, 1962 Walter Schirra, Jr. entered Earth's orbit in his ship, Sigma 7. He
completed six orbits and then made the first splashdown in the Atlantic Ocean. All previous
splashdowns and recoveries were performed in the Pacific.
The final Mercury mission was the longest. Launched into orbit on May 15, 1963, Faith 7,
with Gordon Cooper, Jr. inside, went around Earth 22 times in 34 and a half hours. On May 16 it
too splashed down in the Atlantic Ocean where it was recovered, successfully ending the Mercury
Project.
The Mercury Project, five years and $392.6 million after it began, came to a close.
The entire project was highly successful, achieving both of its goals. It paved the way for the next
generation of NASA spacecraft: Gemini.
f:\12000 essays\sciences (985)\Astronomy\The Roswell Incident.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE ROSWELL INCIDENT
--------------------
Forty-seven years ago an incident occurred in the southwestern desert of the United States that could have significant implications for all mankind. It involved the recovery by the U.S. Military of material alleged to be of extraterrestrial origin. The event was announced by the Army Air Force on July 8, 1947 through a press release carried by newspapers throughout the country. It was subsequently denied by what is now believed to be a cover story claiming the material was nothing more than a weather balloon. It has remained veiled in government secrecy ever since.
The press release announcing the unusual event was issued by the Commander of the 509th Bomb Group at Roswell Army Air Field, Colonel William Blanchard, who later went on to become a four-star general and Vice Chief of Staff of the United States Air Force. That the weather balloon story was a cover-up has been confirmed by individuals directly involved, including the late General Thomas DuBose, who took the telephone call from Washington, D.C. ordering the cover-up. Numerous other credible military and civilian witnesses have testified that the original press release was correct and that the Roswell wreckage was of extraterrestrial origin. One such individual was Major Jesse Marcel, the Intelligence Officer of the 509th Bomb Group and one of the first military officers at the scene.
On January 12, 1994, United States Congressman Steven Schiff of Albuquerque, New Mexico, announced to the press that he had been stonewalled by the Defense Department when requesting information on the 1947 Roswell event on behalf of constituents and witnesses. Indicating that he was seeking further investigation into the matter, Congressman Schiff called the Defense Department's lack of response "astounding" and concluded it was apparently "another government cover-up."
History has shown that unsubstantiated official assurances or denials by government are often meaningless. Nevertheless, there is a logical and straightforward way to ensure that the truth about Roswell will emerge: an Executive Order declassifying any information regarding the existence of UFOs or extraterrestrial intelligence. Because this is a unique issue of universal concern, such an action would be appropriate and warranted. To provide positive assurance for all potential witnesses, it would need to be clearly stated and written into law. Such a measure is essentially what presidential candidate Jimmy Carter promised and then failed to deliver to the American people eighteen years ago in 1976.
If, as is officially claimed, no information on Roswell, UFOs, or extraterrestrial intelligence is being withheld, an Executive Order declassifying it would be a mere formality, as there would be nothing to disclose. The Order would, however, have the positive effect of setting the record straight once and for all. Years of controversy and suspicion would be ended, both in the eyes of the United States' own citizens and in the eyes of the world.
If, on the other hand, the Roswell witnesses are telling the truth and information on extraterrestrial intelligence does exist, it is not something to which a privileged few in the United States Government should have exclusive rights. It is knowledge of profound importance to which all people throughout the world should have an inalienable right. Its release would unquestionably be universally acknowledged as an historic act of honesty and goodwill.
I support the request, as outlined above, for an Executive Order declassifying any U.S. Government information regarding the existence of UFOs or extraterrestrial intelligence. Whether such information exists or whether it does not, I feel that the people of the world have a right to know the truth about this issue and that it is time to put an end to the controversy surrounding it.
THE ROSWELL INCIDENT FILM
-------------------------
This film was taken by a high security government photographer, in the summer of 1947, when the most thoroughly documented and witnessed crash of a flying saucer occurred in a remote desert of New Mexico. (see the book, The Truth about the UFO Crash at Roswell, by Randle and Schmitt)
After filming the amazing events, including the crash site and two autopsies, the cameraman turned over 300 minutes of 16mm black and white film to the Pentagon. He still had 90 minutes of film left to develop at his private lab. Incredibly, the Pentagon never retrieved these remaining reels from him. He ended up taking them home with him in 1952, when he went on to civilian work. He secretly kept the film reels in his house, under his bed, for over forty years.
The footage was sold by the cameraman (now 80 years old), last November, to London producer Ray Santilli, who is preparing to release this important film to the public in the near future. A number of U.S. Senators and Representatives recently saw the autopsy footage, and it appears that an investigation is in progress. We may soon know how much has been covered up by the military all these years.
It seems that those who have said we are not alone in the Universe were right.
So far, this does not appear to be a hoax. If it is a hoax, it is an incredibly elaborate one, costing a fortune. Prepare yourself for a shock. Below, you'll find several still frames from the film. Just a taste of things to come.
Above, the being, apparently non-human, and dead, is ready for the autopsy to proceed, at a Dallas, Texas operating theater, in 1947. The being appears to be about 5 feet tall. The right hand is severed. That may be a crash-related injury, along with the burned or damaged thigh. Note the way the head flares out toward the back. It is huge. Note also the fact that this humanoid being has six digits on each limb, bizarre hips, odd musculature, and other anomalies. At first glance, many people believe that it is a female, but given the absence of a navel or even nipples, that may be jumping to conclusions. There are many creatures whose males have no external genitals.
Above, the surgeon, who is dressed in a full biohazard suit, uses a scalpel to make the first incisions. Note the black eyes, the lack of teeth, the muzzle around the mouth, the lack of hair, and the very low, strangely shaped ears.
In this still frame from the film, the surgeon is removing the thin black membrane that covered the left eye, using tweezers. The right eye still has its covering.
This frame shows the damaged thigh tissues, and the right hand. Note how thin the thumb is, and how long the pinkie finger is.
In this frame from the film, the left hand is being examined. You can clearly see five fingers, plus a very delicate thumb. The six toes of the feet are visible as well. Note that, unlike humans in general, the big toe is longer than any of the other toes. And there are no visible indications of toe or fingernails.
f:\12000 essays\sciences (985)\Astronomy\The Search for Black Holes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Search for Black Holes:
Both As A Concept And An Understanding
For ages people have been determined to explain everything. Our search for explanation rests only when there is a lack of questions. Our skies hold infinite quandaries, so the quest for answers will, as a result, also be infinite. Since its inception, astronomy as a science has speculated heavily upon discovery, and only came to concrete conclusions later with closer inspection. Explanations of the skies which at one time seemed reasonable are now laughed at as egotistical ventures. Time has shown that as better instrumentation was developed, more accurate understanding was attained. Now it seems, as we advance on scientific frontiers, that the new quest of the heavens is to find and explain the phenomenon known as a black hole.
The goal of this paper is to explain how the concept of a black hole came about, and to give some insight into how black holes are formed and might be tracked down in our more technologically advanced future. Gaining an understanding of a black hole allows for a greater understanding of the concept of spacetime and may give us a grasp of both science fiction and science fact. Hopefully, all of this will become clear by the close of this essay.
A black hole is probably one of the most misunderstood ideas among people outside of the astronomical and physical communities. Before an understanding of how it is formed can take place, a bit of an introduction to stars is necessary. This will shed light (no pun intended) on the black hole philosophy.
A star is an enormous fireball, fueled by a nuclear reaction at its core which produces massive amounts of heat and pressure. It is formed when two or more enormous gaseous clouds come together to form the core, converting, through that impact, huge amounts of energy from the two clouds. The clouds come together with a great enough force that a nuclear reaction ensues. This type of energy is created by fusion, wherein atoms are forced together to form a new one. In turn, heat in excess of millions of degrees Fahrenheit is produced.
This activity goes on for eons until the point at which the nuclear fuel is exhausted. Here is where things get interesting. For the entire life of the star, the nuclear reaction at its core produced an enormous outward force. Interestingly enough, an exactly equal force, namely gravity, was pushing inward toward the center. The equilibrium of the two forces allowed the star to maintain its shape and not break away nor collapse.
Eventually, the fuel for the star runs out, and at this point the outward force is overpowered by the gravitational force, and the object caves in on itself. This is a gigantic implosion. Depending on the original and final mass of the star, several things might occur. A usual result of such an implosion is a star known as a white dwarf. This star has been pressed together to form a much denser object. It is said that a teaspoon of matter off a white dwarf would weigh 2-4 tons. Upon the first discovery of a white dwarf, a debate arose as to how far a star can collapse, and in the 1920's two leading astrophysicists, Subrahmanyan Chandrasekhar and Sir Arthur Eddington, came to different conclusions. Chandrasekhar looked at the relation of the mass to the radius of the star, and concluded that there is an upper limit beyond which collapse would result in something called a neutron star. This limit of 1.4 solar masses was an accurate measurement, and in 1983 the Nobel committee recognized his work and awarded him its prize in Physics. The white dwarf is dense, but not as dense as the next order of imploded star, known as a neutron star. Often, as the nuclear fuel is burned out, the star will shed its matter in an explosion called a supernova. When this occurs the star loses an enormous amount of mass, but that which is left behind, if greater than 1.4 solar masses, is a densely packed ball of neutrons. This star is so much denser that a teaspoon of its matter would weigh somewhere in the area of 5 million tons in Earth's gravity. The magnitude of such a dense body is unimaginable. But even a neutron star isn't the extreme when it comes to a star's collapse. That brings us to the focus of this paper. It is thought that when a star is massive enough, anywhere in the area of 3-3.5 solar masses or larger, the collapse will produce something far more extreme. In fact, the density of this new object is speculated to be infinite. Such an entity is what we call a black hole.
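The "teaspoon" figures quoted above can be turned into rough densities with a little arithmetic. The Python sketch below assumes a teaspoon of about 5 millilitres and reads "ton" as a metric tonne; both are assumptions made only to illustrate the scale of the numbers in this paragraph.

# Hypothetical sketch: bulk density implied by the quoted teaspoon masses.
TEASPOON_M3 = 5e-6  # one teaspoon is roughly 5 millilitres
TON_KG = 1000.0     # reading "ton" as a metric tonne

def implied_density_kg_m3(teaspoon_mass_tons):
    return teaspoon_mass_tons * TON_KG / TEASPOON_M3

print(f"white dwarf (~3 tons per teaspoon):          ~{implied_density_kg_m3(3):.1e} kg/m^3")
print(f"neutron star (~5 million tons per teaspoon): ~{implied_density_kg_m3(5e6):.1e} kg/m^3")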
After a black hole is created, the gravitational force continues to pull in space debris and all other types of matter. This continuous addition makes the hole stronger, more powerful, and obviously more massive.
The simplest three-dimensional geometry for a black hole is a sphere. This type of black hole is called a Schwarzschild black hole. Karl Schwarzschild was a German astrophysicist who worked out the critical radius for a given mass at which it would become a black hole. This calculation showed that at a specific point matter would collapse to an infinitely dense state. This is known as a singularity. Here, too, the pull of gravity is infinitely strong, and space and time can no longer be thought of in conventional ways. At the singularity, the laws defined by Newton and Einstein no longer hold true, and a "mysterious" world of quantum gravity takes over. In the Schwarzschild black hole, the event horizon, or skin of the black hole, is the boundary beyond which nothing can escape the gravitational pull.
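Schwarzschild's critical radius mentioned above has a compact form, r_s = 2*G*M/c^2. The short Python sketch below evaluates it for a few assumed masses, chosen only for illustration (including the roughly 3-3.5 solar mass threshold discussed earlier); it is not a calculation taken from this essay's sources.

# Hypothetical sketch: Schwarzschild radius r_s = 2*G*M/c^2 for a few masses.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / C**2

for solar_masses in (1.0, 3.5, 1e6):
    r = schwarzschild_radius_m(solar_masses * SOLAR_MASS)
    print(f"{solar_masses:>9} solar masses -> r_s ~ {r / 1000:.1f} km")

A one solar mass object would have to be squeezed inside about 3 km for light to be unable to escape, which is why only the most massive collapsed stars are candidates for black holes.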
Most black holes would tend to be in a constant spinning motion, because of the original spin of the star. This motion absorbs various matter and spins it within the ring that is formed around the black hole. This ring is the singularity. The matter stays within the event horizon until it has spun into the center, where it is concentrated within the core, adding to the mass. Such spinning black holes are known as Kerr black holes. Roy P. Kerr, a New Zealand mathematician, happened upon the solution to the Einstein equations for black holes with angular momentum. This black hole is very similar to the previous one. There are, however, some differences which make it more viable for real, existing ones. The singularity in this hole is more time-like, while the other is more space-like. With this subtle difference, objects would be able to enter the black hole from regions away from the equator of the event horizon and not be destroyed.
The reason it is called a black hole is that any light inside of the singularity would be pulled back by the infinite gravity so that none of it could escape. As a result, anything passing beyond the event horizon would disappear from sight forever, making the black hole impossible for humans to see without using technologically advanced instruments for measuring such things as radiation. The second part of the name, referring to the "hole," is due to the fact that the actual hole is where everything is absorbed and where the center core presides. This core is the main part of the black hole where the mass is concentrated, and it appears purely black on all readings, even through the use of radiation detection devices.
The first scientists to really take an in-depth look at black holes and the collapsing of stars were the professor Robert Oppenheimer and his student Hartland Snyder, in the late 1930s. They concluded, on the basis of Einstein's theory of relativity, that if the speed of light was the upper speed limit for any massive object, then nothing could escape a black hole once in its clutches.
It should be noted, all of this information is speculation. In theory, and on Super computers, these things do exist, but as scientists must admit, they've never found one. So the question arises, how can we see black holes? Well, there are several approaches to this question. Obviously, as realized from a previous paragraph, by seeing, it isn't necessarily meant to be a visual representation. So we're left with two approaches. The first deals with X-ray detection. In this precision measuring system, scientists would look for areas that would create enormous shifts in energy levels. Such shifts would result from gases that are sucked into the black hole. The enormous jolt in gravitation would heat the gases by millions of degrees. Such a rise could be evidence of a black hole. The other means of detection lies in another theory altogether. The concept of gravitational waves could point to black holes, and researchers are developing ways to read them.
Gravitational waves are predicted by Einstein's General Theory of Relativity. They are perturbations in the curvature of spacetime. Sir Arthur Eddington was a strong supporter of Einstein, but was skeptical of gravity waves and is reported to have said, "Gravitational waves propagate at the speed of thought." But what they are is important to the theory. Gravitational waves are enormous ripples emanating from the core of the black hole and other large masses, said to travel at the speed of light, not through spacetime but rather as the backbone of spacetime itself. These ripples pass straight through matter, and their strength weakens as they get farther from the source. The ripples would be similar to those from a stone dropped in water, with larger ones toward the center and fainter ones along the outer circumference. The only problem is that these ripples are so minute that detecting them would require instrumentation far beyond our present capabilities. Because they are unaffected by matter, they carry a pure signal, unlike X-rays, which are diffused and distorted. In simulations the black hole creates a unique frequency known as its natural mode of vibration. This fingerprint will undoubtedly point to a black hole, if it is ever seen.
Just recently a major discovery was made with the help of the Hubble Space Telescope. The telescope found what many astronomers believe to be a black hole, after being focused on a star orbiting an apparently empty region of space. Several pictures were sent back to Earth from the telescope showing computer-enhanced images of various radiation fluctuations and other readings taken from the area in which the black hole is suspected to be.
Because a black hole floats wherever the star collapsed, it can vastly affect the surrounding area, which might have other stars in it. It could also absorb a star and wipe it out of existence. When a black hole absorbs a star, the star is first pulled into the ergosphere, the region between the event horizon and the singularity, which sweeps all the matter toward the event horizon, named for its flat horizontal appearance and critical properties where all transitions take place. The black hole doesn't just pull the star in like a vacuum; rather, it creates what is known as an accretion disk, a vortex-like phenomenon where the star's material appears to go down the drain of the black hole. When the star passes into the event horizon, the light that the star ordinarily gives off builds inside the ergosphere of the black hole but doesn't escape. At this exact point in time, high amounts of radiation are given off, and with the proper equipment this radiation can be detected and seen as an image of emptiness or, as preferred, a black hole. Through this technique astronomers now believe that they have found a black hole known as Cygnus X-1. This suspected black hole has a huge star orbiting around it, so we assume there must be a black hole that the star is in orbit with.
Science fiction has used the black hole to come up with several movies and fantastical events related to this massive beast. Tales of time travel and of parallel universes lie beyond the hole. Passing the event horizon could send you on that fantastical trip. Some think there would be enough gravitational force to possibly warp you to another end of the universe, or possibly to a completely different one. The theories about what could lie beyond a black hole are endless. The real quest is to first find one. So the question remains: do they exist?
Black holes may well exist, but unfortunately for the scientific community, their life is so far restricted to formulas and supercomputers. But, and there is a but, the scientific community is relentless in its quest to build better means of tracking. Already the advances in hyper-sensitive equipment are showing some good signs, and the accuracy will only get better.
f:\12000 essays\sciences (985)\Astronomy\The Tragic Challanger Explosion.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Tragic Challenger Explosion
Space Travel. It is a sense of national pride for many Americans. If you ask anyone who
was alive at the time, they could probably tell you exactly where they were when they
heard that Neil Armstrong was the first person to walk on the Moon. But all of the success
in our space programs is overshadowed by tragedy. On January 28, 1986, one of the worst
disasters in our space program's history occurred. Many people were watching at the
moment because it was the highly televised space mission where, for the first time, a
civilian was a member of the crew that was to be shot into space. This civilian was the
winner of the "Teacher in Space" contest, Christa McAuliffe. The disaster: the explosion
of the Space Shuttle Challenger. (Compton's 1) Many people thought that disaster couldn't
strike because a civilian was on board. But as the whole nation found out, nobody is
immortal. By examining this further, we will look at the lives of the seven who died in this dumbfounding calamity, examine exactly what went wrong during this fateful mission, and consider the outcome of this sorrowful occurrence.
First, who exactly were the astronauts who died on the Challenger? Sharon Christa Corrigan McAuliffe, born in 1948, was the famous winner of the teacher-in-space program, a high school teacher in Concord, N.H., a wife, and a mother of two children. She touched the lives of all those she knew and taught. As a school official in Concord said after her death, "To us, she seemed average. But she turned out to be remarkable. She handled success so beautifully." She also wanted everyone to learn more, including herself. Demonstrating her aspirations after entering the space program, she said, "What are we doing here? We're reaching for the stars." Also, after reflecting on her position, she said in August 1985, "I touch the future, I teach (Gray 32)."
Francis R. (Dick) Scobee, born in 1948, was a tremendous enthusiast for aviation and the
space program. At 18 years old, he enlisted in the Air Force. While working as a
mechanic in the service, he put himself through night school, eventually earning a degree in
aerospace engineering that helped him become an officer and a pilot. He loved flying.
Scobee once observed, "You know, it's a real crime to be paid for a job that I have so much
fun doing." On one of his space missions, he carried a banner made for him by students at
Auburn High, his old high school. It read "TROJANS FLY HIGH WITH SCOBEE."
School officials announced after the tragic explosion that the banner would be put on
display to remind others at Auburn High that seemingly ordinary students, too, can fly
high. (Gray 33)
Judith Resnik, born 1949, had a Ph.D. in electrical engineering. She was very ambitious
and loved everything. She once said, "I want to do everything there is to be done." Being
chosen for the space program gave her the opportunity to meet a few self-described
personal goals:
"To learn a lot about quite a number of different technologies; to be able to use them
somehow, to do something that required a concerted team effort and, finally, a great
individual effort (Gray 33)."
She had said once, when asked, about the dangers of the space program, "I think something
is only dangerous if you are not prepared for it or if you don't have control over it or if you
can't think through how to get yourself out of a problem." For Resnik, danger was simply
another unknown to be mastered.
Ronald McNair, born in 1950, was the second black man in space. He was truly
remarkable growing up in his segregated South Carolina school. He was remembered by
those he knew as "one who was always looking to the clouds." Jesse Jackson, one of his college classmates at N.C. Agricultural and Technical State University, said McNair saw
participation in the space program as "the highest way he could contribute to the system
that gave him so much." McNair did think much of the space program. He once said, "The
true courage of space flight comes from enduring . . . persevering and believing in oneself
(page 34)."
Michael Smith, born in 1945, always had his head in the clouds. At the age of 16, he
soloed in a single-engine Aeronca. After the U.S. put its first astronaut into space in 1961,
Smith decided that was where he wanted to be. His older brother said, "In high school he
paid a lot of attention to academics because he knew that was the best way to get in." He
also thought much of the space program. He once said, "Everybody looks at flying the
shuttle as something dangerous. But it's not. It's a good program, and something the
country should be proud of (Gray 34)."
Ellison Onizuka, born in 1946, became an instant hero to both the Hawaiians and the
Japanese Americans because he was the first member of either group to fly in space. He
was one who was always fascinated by the vastness of outer space and spent a lot of time
studying it. When he was young, he spent much of his time examining the universe through
a telescope at Honolulu's Bishop Museum. He also said before the Challenger launch, "I'll
be looking at Halley's comet. They tell me I'll have one of the best views around (Gray
35)." His family always looked favorably upon his achievement. After the tragedy, his
mother remembered that "Ellison always had it in his mind to become an astronaut, but was
too embarrassed to tell anyone. When he was growing up, there were no Asian astronauts,
no black astronauts, just white ones (Gray 35)." Ellison will be forever remembered as
being the first Japanese American in space.
Finally, the last member of the seven-person crew was Gregory Jarvis, born in 1944. Gregory
was very dedicated to the space program. Despite being bumped off two previous flights,
he finally got his chance. Unfortunately, his only flight was that of the Challenger. It is
very saddening to see seven bright lives vanish in a ball of fire, but it is said that the
explosion was so rapid that the crew did not realize their coming fate. (Gray 35) Perhaps
we can all take comfort in the fact that their last vision was that of the stars.
Now, many people haven't heard exactly what went wrong to cause such an explosion.
(Dumoulin, 1-2) The Challenger finally launched after five days of delays. On January 28,
1986, the morning of the launch, there was ice at Kennedy Space Center. After an
inspection crew gave the go-ahead, the launch was underway. Just after liftoff at .678
seconds into the flight, photographic data show a strong puff of gray smoke was spurting
from the vicinity of the aft field joint on the right solid rocket booster. Computer graphic
analysis of film from pad cameras indicated the initial smoke came from the 270 to 310-
degree sector of the circumference of the aft field joint of the right solid rocket booster.
This area of the solid booster faces the External Tank. The vaporized material streaming
from the joint indicated there was not complete sealing action within the joint. Eight more
distinctive puffs of increasingly blacker smoke were recorded between .836 and 2.500
seconds. The smoke appeared to puff upwards from the joint. While each smoke puff was
being left behind by the upward flight of the Shuttle, the next fresh puff could be seen near
the level of the joint. The multiple smoke puffs in this sequence occurred at about four
times per second, approximating the frequency of the structural load dynamics and resultant
joint flexing. As the Shuttle increased its upward velocity, it flew past the emerging and
expanding smoke puffs. The last smoke was seen above the field joint at 2.733 seconds.
The black color and dense composition of the smoke puffs suggest that the grease, joint
insulation and rubber O-rings in the joint seal were being burned and eroded by the hot
propellant gases. At approximately 37 seconds, Challenger encountered the first of several
high-altitude wind shear conditions, which lasted until about 64 seconds. The wind shear
created forces on the vehicle with relatively large fluctuations. These were immediately
sensed and countered by the guidance, navigation and control system. The steering system
(thrust vector control) of the solid rocket booster responded to all commands and wind
shear effects. The wind shear caused the steering system to be more active than on any
previous flight. Both the Shuttle main engines and the solid rockets operated at reduced
thrust approaching and passing through the area of maximum dynamic pressure of 720
pounds per square foot. Main engines had been throttled up to 104 percent thrust and the
solid rocket boosters were increasing their thrust when the first flickering flame appeared
on the right solid rocket booster in the area of the aft field joint. This first very small flame
was detected on image enhanced film at 58.788 seconds into the flight. It appeared to
originate at about 305 degrees around the booster circumference at or near the aft field
joint. One film frame later from the same camera, the flame was visible without image
enhancement.
It grew into a continuous, well-defined plume at 59.262 seconds. At about the same time
(60 seconds), telemetry showed a pressure differential between the chamber pressures in
the right and left boosters. The right booster chamber pressure was lower, confirming the
growing leak in the area of the field joint. As the flame plume increased in size, it was
deflected rearward by the aerodynamic slipstream and circumferentially by the protruding
structure of the upper ring attaching the booster to the External Tank. These deflections
directed the flame plume onto the surface of the External Tank. This sequence of flame
spreading is confirmed by analysis of the recovered wreckage. The growing flame also
impinged on the strut attaching the solid rocket booster to the External Tank. The first
visual indication that swirling flame from the right solid rocket booster breached the
External Tank was at 64.660 seconds when there was an abrupt change in the shape and
color of the plume. This indicated that it was mixing with leaking hydrogen from the
External Tank. Telemetered changes in the hydrogen tank pressurization confirmed the leak.
Within 45 milliseconds of the breach of the External Tank, a bright sustained glow
developed on the black-tiled underside of the Challenger between it and the External Tank.
Beginning at about 72 seconds, a series of events occurred extremely rapidly that
terminated the flight. Telemetered data indicate a wide variety of flight system actions that
support the visual evidence of the photos as the Shuttle struggled futilely against the forces
that were destroying it. At about 72.20 seconds the lower strut linking the solid rocket
booster and the External Tank was severed or pulled away from the weakened hydrogen
tank permitting the right solid rocket booster to rotate around the upper attachment strut.
This rotation is indicated by divergent yaw and pitch rates between the left and right solid
rocket boosters. At 73.124 seconds, a circumferential white vapor pattern was observed
blooming from the side of the External Tank bottom dome. This was the beginning of the
structural failure of the hydrogen tank that culminated in the entire aft dome dropping away.
This released massive amounts of liquid hydrogen from the tank and created a sudden
forward thrust of about 2.8 million pounds, pushing the hydrogen tank upward into the
intertank structure. At about the same time, the rotating right solid rocket booster impacted
the intertank structure and the lower part of the liquid oxygen tank. These structures failed
at 73.137 seconds as evidenced by the white vapors appearing in the intertank region.
Within milliseconds there was massive, almost explosive, burning of the hydrogen
streaming from the failed tank bottom and liquid oxygen breach in the area of the intertank.
At this point in its trajectory, while traveling at a Mach number of 1.92 at an altitude of
46,000 feet, the Challenger was totally enveloped in the explosive burn. The Challenger's
reaction control system ruptured and a hypergolic burn of its propellants occurred as it
exited the oxygen-hydrogen flames. The reddish brown colors of the hypergolic fuel burn
are visible on the edge of the main fireball. The Orbiter, under severe aerodynamic loads,
broke into several large sections which emerged from the fireball. Separate sections that
can be identified on film include the main engine/tail section with the engines still burning,
one wing of the Orbiter, and the forward fuselage trailing a mass of umbilical lines pulled
loose from the payload bay. The explosion 73 seconds after liftoff claimed both crew and vehicle. The cause of the explosion was determined to be an O-ring failure in the right solid rocket booster; cold weather was a contributing factor.
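As an aside, the report's figure of Mach 1.92 at 46,000 feet can be converted to an approximate true airspeed. The Python sketch below is a rough, illustrative calculation only; the 216.65 K air temperature is an assumed standard-atmosphere value above the tropopause, not a figure taken from the report.

    # Illustrative sketch: converting Mach 1.92 at ~46,000 ft into airspeed,
    # assuming the standard-atmosphere stratospheric temperature of 216.65 K.
    import math

    GAMMA = 1.4              # ratio of specific heats for air
    R_AIR = 287.05           # specific gas constant for air, J/(kg*K)
    T_STRATOSPHERE = 216.65  # K, standard atmosphere above ~36,000 ft

    def mach_to_speed(mach: float, temperature_k: float) -> float:
        """Return true airspeed in m/s for a given Mach number and air temperature."""
        speed_of_sound = math.sqrt(GAMMA * R_AIR * temperature_k)
        return mach * speed_of_sound

    if __name__ == "__main__":
        v = mach_to_speed(1.92, T_STRATOSPHERE)
        print(f"Mach 1.92 at ~46,000 ft is roughly {v:.0f} m/s ({v * 2.23694:.0f} mph)")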
Finally, what was the outcome of this terrible disaster? (Compton's, page 1) The shuttle program was suspended until the exact cause could be found; the next shuttle launch did not happen until September 1988. After many hours of investigating and finding out exactly what caused the disaster, many changes were made to the structural design of the space shuttle, and launches are no longer allowed when the temperature is that low. The explosion also delayed the now famous Hubble Telescope program (Church 38). We have seen the tremendous photographs the telescope has sent to Earth; it's a shame they couldn't have been received sooner.
From a media standpoint, this disaster really changed the way television was used to report
major disasters. It may seem fairly common when Special Reports interrupt normal
programming, but in 1986, it was pretty unusual. In fact, ABC switchboards alone fielded
more than 1,200 complaints from people who wanted to watch soap operas rather than an
all-day report about the Challenger and the late breaking news related to it (Zoglin 42).
Television definitely had a tremendous impact on reporting this story. ABC Anchorman
Peter Jennings said, "We all shared in this experience in an instantaneous way because of
television. I can't recall any time or crisis in history when television has had such an
impact. (Zoglin 42)"
The disaster even affected President Reagan's State of the Union address. When asked
about the State of the Union speech, Reagan replied, "There could be no speech without
mentioning this, but you can't stop governing the nation because of a tragedy of this kind
(Magnuson 29)."
In conclusion, it is a sad tragedy that such negligence led to this disaster. If we learn from our mistakes, then hopefully this sort of disaster won't happen again.
Works Cited
"Space Shuttle Missions: Challenger." Compton's Encyclopedia of American History on
CD-ROM. Compton's New Media, Inc., 1994.
Morrow, Lance. "A Nation Mourns." Time 10 February 1986: 23.
Magnuson, Ed. "A Nation Mourns." Time 10 February 1986: 24-31.
Gray, Paul. "Seven Who Flew for All of Us." Time 10 February 1986: 32-35.
Friedrich, Otto. "Looking for What Went Wrong." Time 10 February 1986: 36-37.
Church, George J. "Putting the Future on Hold." Time 10 February 1986: 38-41.
Zoglin, Richard. "Covering the Awful Unexpected." Time 10 February 1986: 42-45.
Murphy, Jamie. "It Was Not the First Time." Time 10 February 1986: 45.
Dumoulin, Jim. "51-L" [Online] Available http://www.ksc.nasa.gov/shuttle/missions/51-
l/mission-51-l.html, October 5, 1996.
Annotated Bibliography
"Space Shuttle Missions: Challenger." Compton's Encyclopedia of American History
on CD-ROM. Compton's New Media, Inc., 1994.
This article gave a nice overview of the incident, but didn't really get detailed. It
helped me get a picture of what happened and what caused the failure. This is a secondary
source.
Morrow, Lance. "A Nation Mourns." Time 10 February 1986: 23.
This article gave a nice portrayal of what people felt while watching the launch on
television. This is a secondary source.
Magnuson, Ed. "A Nation Mourns." Time 10 February 1986: 24-31.
This article gave a good look at the National perspective of things after the
explosion. It also gave a good account of the memorial service. This is a secondary
source.
Gray, Paul. "Seven Who Flew for All of Us." Time 10 February 1986: 32-35.
This article gave me most of my report. It gave a nice description of the seven
astronauts that died on the shuttle. This is a secondary source.
Friedrich, Otto. "Looking for What Went Wrong." Time 10 February 1986: 36-37.
This article gave an account of the theories that appeared afterwards about why the
shuttle exploded. It also told about the NASA press conference held afterwards. This is a
secondary source.
Church, George J. "Putting the Future on Hold." Time 10 February 1986: 38-41.
This article told about the setbacks to the space program that the explosion would
cause. It mainly told about the Hubble space telescope. This is a secondary source.
Zoglin, Richard. "Covering the Awful Unexpected." Time 10 February 1986: 42-45.
This article went to the media's perspective of covering the accident. It told about
how the three major networks (ABC, CBS, NBC) spent their time covering the disaster.
This is a secondary source.
Murphy, Jamie. "It Was Not the First Time." Time 10 February 1986: 45.
This article told about previous disasters in the space programs of the United States
and Russia. This is a secondary source.
Dumoulin, Jim. "51-L" [Online] Available
http://www.ksc.nasa.gov/shuttle/missions/51-l/mission-51-l.html, October 5, 1996.
This article from NASA also contributed a lot to my report. It is the official report
about the Challenger explosion. This is a primary source.
f:\12000 essays\sciences (985)\Astronomy\UFOs Are Real.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Little Green Men", "Martians", "Outer Limits" ! That is what people think about
when aliens and UFOs come to mind. Aliens have been around, as far as we can see,
since 1561. The question is now asked, How come because they [UFOs] have been
sighted, encountered, and taken hostage; Why have we been kept in the dark by our
governments? "Since UFOs were considered a security risk, the report on these sightings
was originally classified as secret (Kadrey 22)." The name 'flying saucer' was 'coined' by
a Air Force pilot in 1947, when he stated that he had saw something that looked like a
'flying saucer'. "The government knew about UFOs and have been tracking them since
1947, which is to be believed when the 'Age of UFOs' started (Stacey 55). So, through the
next pages the theory that UFOs are real because of evidence the government has covered
up, the number of sightings, and the uncountable number of abductions, will become a
reality.
The first reason to be addressed is the secrecy and government coverup of UFOs.
"National Security Agency, or NSA, an acronym often assumed by insiders to
mean 'NEVER SAY ANYTHING' , (Stacey 40) " has been blamed for millions of UFO
governmental cover-ups around the nation. "Our problem is with government secrecy,
because it widens the gap between citizens and the government, making it much more
difficult to participate in the democratic process (Stacey 40)," says Steven Aftergood
while addressing UFO secrecy. The UFO enigma, otherwise known as the "'Cosmic Watergate': the ongoing cover-up of the government's knowledge about extraterrestrial UFOs and their terrestrial activities (Stacey 36)," is believed to have started during the Nixon administration, which is still under a lot of scrutiny. The Nixon administration also established the Freedom of Information Act [FOIA] in the 1970s; it opened the door to a lot of truth and more coverups. "I don't think they would do a 300-page report on everything they detect," says UFO researcher Joe Stefula. The "military would far rather have people blame such things on flying saucers than them (Brookesmith 14)."
Several secret UFO projects have proven the existence of UFOs. One of the better known was 'Project Blue Book', which was organized by the Air Force. "The panel employed a number of scientists, including physicists, engineers, meteorologists, and an astronomer. It had three main goals: 1) To explain all reported sightings of UFOs. 2) To decide if UFOs posed a threat to the national security of the United States. 3) And to determine whether UFOs were using any advanced technology that the United States could use (Kadrey 25)." Two other, less well-known projects were also organized, 'Project Moon Dust' and 'Project Blue Fly'. Projects Moon Dust and Blue Fly were "efforts aimed at retrieving man-made space objects that re-enter the atmosphere and crash (Kadrey 38)." Even with all this evidence, the authorities often "dismiss such experiences with desperate 'rational explanations' ranging from 'spots before the eyes' to 'fireflies' and that old stand-by 'hallucinations' (Brookesmith 21)."
Not all UFOs, however, are UFOs to the government. Such things as weather balloons, pie plates, and Venus have been proven to cause people to see things, but that still does not account for what is out there or for the numerous living witnesses of extraterrestrials. The UFO enigma is "the most 'well-kept and explosive secret in world history' (Stacey 36)."
The next issue to be covered in proving the existence of UFOs is sightings of UFOs.
'Flying saucers', 'fireballs', or 'unidentified flying objects'. Whatever the name, people seem to know exactly what you're saying. A private pilot named Kenneth Arnold "spotted nine weird, crescent-shaped disks flashing through the air as he was flying
over the Cascade Mountains in Washington State, USA. Accounts of Arnold's experience
were flashed around the world by the news media, and triggered a wave of 'flying saucer'
reports across the United States. By 4 July 1947, there had been sightings in every state
but Georgia and West Virginia; by 16 July 1947, the United States Army Air Forces had
received over eight-hundred fifty reports. (Brookesmith 19)" Then the 'Age of UFOs' was
born on 24 June 1947. Although "many other UFO sightings turned out to be such objects as weather balloons, satellites, aircraft lights, or formations of birds (Kadrey 24)," this still does not account for all of the other sightings that do not turn out to be such things.
"Its only when 'Venus' suddenly executes an abrupt right-angle or divides into two
smaller lights that streak away at high speed that we find our attention attracted and
realize we may, in fact, be in the middle of a UFO sighting (Huyghe 50)." "During World
War II, especially after 1943, there were many reports of balls of light, varying in color
and in size from several feet to a few inches across, flying in pursuit of warplanes.
Because these 'fireballs' never attacked aircraft, Allied pilots assumed they were enemy
inventions - either reconnaissance drones or psychological - warfare weapons. When
captured enemy aircrew were interrogated it became clear that both German and
Japanese fliers had also been pursued by 'Foo Fighters'. The same phenomenon was also
reported during the Korean War, between 1950 and 1953 (Brookesmith 17)." Although "many UFO sightings have dated back to the year 1561 (Jueneman 25)," many people still believe that something is out there, and it seems the Egyptians may have as well; it has been suggested that the Egyptians may have thought 'aliens' to be 'gods'.
It's 6:05 A.M. on the morning of 6 February 1966 at Nederland, Texas. Three witnesses observed a famous UFO event for approximately five minutes. The primary witness described the event: "the neighborhood has lit up in a red glow. My first thought was that a police car was parked nearby or a fire truck. I called to my wife that something must be wrong in the neighborhood and to come and see. Suddenly I realized the light was coming from overhead [Fig. 1]. I looked up and saw the outline of an object moving
out past the pitch of my roof, approximately two-hundred fifty to five-hundred feet high.
The red glow was coming from beneath the object, about center. It appeared as a stream
of light coming from inside a hole (Huyghe "UFO Crime Lab" 60)." Many UFOs are
described such as this. Is it a UFO or something else? Remember, a UFO is only a UFO if
not explained.
"On the same day, but at 11:50 A.M., five technicians, including a Colonel and a
Major, were watching two P-82 and an A-26 aircraft conduct an ejection-seat experiment
at 20, 000 feet (6100 meters). They viewed a circular object, a color of aluminum, and at
first "resembled a 'parachute canopy' come into view. The ejection-seat canopy opened
30 seconds later. The UFO, clearly nearer to the ground, descended three times faster
than the parachute, rotating or oscillating. The men noted no smoke, flame, propellar
arcs, engine noise, or other means of propulsion. The UFO reacheaced ground level and
then rose again (Brookesmith 19)," this clearly states either UFOs are more more
advanced or are not UFOs, but secret military project 'screwups', that would raither have
people like us blame things on extraterrestrials; just so they don't look stupid.
The next topic to be covered is alien abductions and the government's encounters with them.
"During the period of 1969-1971, MJ-12 representing the U.S. Government made
a deal with these creatures, called EBEs (Extraterrestrial Biological Entities, named by
Detlev Bronk, original MJ-12 member and 6th President of Johns Hopkins University).
The 'deal' was that in exchange for 'technology' that they would provide to us, we agreed
to 'ignore' the abductions that were going on and suppress information on the cattle
mutilations. The EBEs assured MJ-12 that the abductions (usually lasting about 2 hours)
were merely the ongoing monitoring of developing civilizations.
In fact, the purposes for the abductions turned out to be:
(1) The insertion of a 3mm spherical device through the nasal cavity of the
abductee into the brain; the device is used for the biological monitoring, tracking, and
control of the abductee.
(2) Implementation of Posthypnotic Suggestion to carry out a specific activity
during a specific time period, the actuation of which will occur within the next 2 to 5
years.
(3) Termination of some people so that they could function as living sources for
biological material and substances.
(4) Termination of individuals who represent a threat to the continuation of their
activity.
(5) Effect genetic engineering experiments.
(6) Impregnation of human females and early termination of pregnancies to
secure the crossbreed infant.
The U.S. Government was not initially aware of the far reaching consequences of
their 'deal'. They were led to believe that the abductions were essentially benign and since
they figured the abductions would probably go on anyway whether they agreed or not,
they merely insisted that a current list of abductees be submitted, on a periodic basis, to
MJ-12 and the National Security Council. Does this sound incredible? An actual list of
abductees sent to the National Security Council? Read on, because I have news for you.
The EBEs have a genetic disorder in that their digestive system is atrophied and
not functional. Some speculate that they were involved in some type of accident or
nuclear war, or possibly on the back side of an evolutionary genetic curve. In order to
sustain themselves they use an enzyme or hormonal secretion obtained from the tissue
that they extract from humans and animals. (Note: Cows and Humans are genetically
similar. In the event of a national disaster, cow's blood can be used by humans.)
The secretions obtained are then mixed with hydrogen peroxide and applied on
the skin by spreading or dipping parts of their bodies in the solution. The body absorbs
the solution, then excretes the waste back through the skin. The cattle mutilations that were prevalent throughout the period from 1973 to 1983, and publicly noted through newspaper and magazine stories and a documentary produced by Linda Howe for the Denver CBS affiliate KMGH-TV, were for the collection of these tissues by the aliens. The mutilations included genitals taken, rectums cored out to the colon, eyes,
tongue, and throat all surgically removed with extreme precision. In some cases the
incisions were made by cutting between the cells, a process we are not yet capable of
performing in the field. In many of the mutilations there was no blood found at all in the
carcass, yet there was no vascular collapse of the internal organs.
The various parts of the body are taken to various underground laboratories, one
of which is known to be near the small New Mexico town of Dulce. This jointly occupied
(CIA-Alien) facility has been described as enormous, with huge tiled walls that 'go on
forever'. Witnesses have reported huge vats filled with amber liquid with parts of human
bodies being stirred inside.
During the period between 1979 and 1983 it became increasingly obvious to MJ-
12 that things were not going as planned. It became known that many more people (in the
thousands) were being abducted than were listed on the official abduction lists. In
addition it became obvious that some, not all, but some of the nation's missing children
had been used for secretions and other parts required by the aliens.
By 1984, MJ-12 must have been in stark terror at the mistake they had made in
dealing with the EBEs. They had subtly promoted Close Encounters of the Third Kind
and E.T. to get the public used to 'odd looking' aliens that were compassionate,
benevolent and very much our 'space brothers'. MJ-12 'sold' the EBEs to the public, and
were now faced with the fact that quite the opposite was true. In addition, a plan was
formulated in 1968 to make the public aware of the existence of aliens on earth over the
next 20 years, to be culminated with several documentaries to be released during the 1985-1987 period of time. These documentaries would explain the history and intentions of the
EBEs. The discovery of the 'Grand Deception' put the entire plans, hopes and dreams of
MJ-12 into utter confusion and panic.
Meeting at the 'Country Club', a remote lodge with private golf course,
comfortable sleeping and working quarters, and its own private airstrip built by and
exclusively for the members of MJ-12, there was a factional fight over what to do next. Part of
MJ-12 wanted to confess the whole scheme and shambles it had become to the public,
beg their forgiveness and ask for their support. The other part (and majority ) of MJ-12
argued that there was no way they could do that, that the situation was untenable and
there was no use in exciting the public with the 'horrible truth' and that the best plan was
to continue the development of a weapon that could be used against the EBEs under the
guise of 'SDI', the Strategic Defense Initiative, which had nothing whatsoever to do with a defense against inbound Russian nuclear missiles. (Brookesmith 112 and 113)"
This did not stop the EBEs, however. A woman named Myrna Hansen and her six-year-old son are a good example of the EBEs' power.
"Driving home on a road near Cimarron one evening in the spring of 1980, 28-year-old Myrna Hansen and her six-year-old saw five UFOs descending into a cow pasture.
Until hypnotically regressed, she had confused memories of a close encounter, and could
not account for a period of 'missing time' of some four hours. She was regressed in
sessions held between 11 May and 3 June by Dr. Leo Sprinkle, in the company of Dr.
Paul Bennewitz, an electronics engineer who was also an investigator for the Aerial
Phenomena Research Organization.
According to Ms. Hansen's accounts under hypnosis, two white-suited figures
emerged from one of the UFOs and mutilated one of the cows in the field, while it was
alive, with an 18-inch (45 cm)-long knife. She remonstrated with them, and she and her
son were captured and taken to separate ships. She continued to resist but was undressed
and given a physical examination, including a (censor) probe that reportedly later
produced a severe infection.
The procedure was interrupted by what appeared to be a tall, jaundiced human, who apologized and ordered the aliens punished. He then took Ms. Hansen on a tour of this and possibly some other UFOs. The last seems to have taken flight, as she was next led out into a landscape that at one point she believed she recognized as being west of Las Cruces, and at another she had the impression was near Roswell. Here she was taken into
an underground base, where she managed to escape briefly. She found herself in a room
full of what appeared to be water tanks, and was horrified to discover they were vats in
which were floating human body parts, including an arm that had a hand attached to it.
Ms. Hansen was then dragged out of this area, and she and her son were both put
through a further painful process involving loud noises and blinding lights, before being
taken back aboard the UFO and flown (with her car also aboard) back to the site of the
abduction. (Brookesmith 108) "
In conclusion, I believe that UFOs, aliens, EBEs, or whatever you call them, do hold the key to our future and to the existence of mankind. Although secret pacts with our governments may not get us exactly what we want, we can fight back. Let's step back for a second and ask ourselves a question: if we were in the same predicament, wouldn't we want to stay healthy too, and take what we need? If we look at the aliens' actions from their perspective, what they are doing is exactly what we are doing to ourselves by destroying the ozone layer. So why can't they do the same to us? I feel the thesis has been made: government coverups do pose a threat to the truth and to the national security of our country, UFO sightings and abductions do pose a threat at this time, and we must be ready at all odds to expose the government and the aliens.
The End.
f:\12000 essays\sciences (985)\Astronomy\What are NEOs .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
NEAR EARTH OBJECTS
What are NEOs? Where do they come from? Do they pose any real threat to Earth? Can they provide viable space resources? All of these questions are now under investigation by planetary scientists. There are two highly recognized research programs that I will discuss with you. The Spaceguard program, on the asteroid and comet impact hazard, is sponsored and run by the NASA Ames Space Science Division. The University of Arizona runs the Spacewatch program under the direction of Dr. Tom Gehrels.
NEOs can be either asteroids or comets. Ninety percent of the information that I came across discussed asteroids; therefore, I will concentrate on asteroids alone. I'm not fully knowledgeable on the subject, but I did learn a great deal.
What are NEOs? The "Webster's New World Dictionary" states, "Any of the small planets between Mars and Jupiter". The "Funk and Wagnalls Encyclopedia from Infopedia" states, "One of the many small or minor planets that are members of the solar system and that move in elliptical orbits primarily between the orbits of Mars and Jupiter".
Where do they come from? The NEOs are small objects (<7 miles) with a range of compositions spanning all common asteroid types. They are derived from a mixture of main-belt collisional fragments and burned-out short-period comets. According to Dr. Tom Gehrels of the University of Arizona Spacewatch program, "The total number of NEOs over 100 meters is estimated to be about 100,000, with 150 or so currently known".
Do they pose any real threat to Earth? The Earth orbits the Sun in a sort of cosmic shooting gallery, subject to impacts from asteroids. It is only fairly recently that we have come to appreciate that these asteroid impacts pose a significant hazard to life and property. Although the annual probability of the Earth being struck by a large asteroid is extremely small, the consequences of such a collision are so devastating that it is prudent to assess the nature of the threat and prepare to deal with it.
Studies have shown that the risk from a cosmic impact increases with the size of the projectile. The greatest risk is associated with objects large enough to perturb the Earth's climate on a global scale by injecting large quantities of dust into the stratosphere. Such an event could depress temperatures around the globe, leading to massive loss of food crops and possible breakdown of society. Global catastrophes are qualitatively different from other more common hazards that we face (except nuclear war), because of their potential effect on the entire planet and its population.
Various studies have suggested that the minimum mass of an impacting body able to produce such global consequences is several tens of billions of tons, resulting in a groundburst explosion with energy in the vicinity of a million megatons of TNT. The corresponding diameter for Earth-crossing asteroids is between 1 3/5 and 3 1/4 miles. Smaller objects (down to 32 feet in diameter) can cause severe local damage but pose no global threat.
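The "million megatons" figure can be reproduced with a back-of-the-envelope kinetic energy calculation. The Python sketch below is illustrative only; the diameter, density, and impact speed are assumed round numbers chosen to fall within the size range quoted above, not values taken from the Spaceguard studies.

    # Illustrative sketch: kinetic energy of a spherical impactor in megatons of TNT.
    # The input values (2.6 km diameter, 3000 kg/m^3, 20 km/s) are assumptions.
    import math

    MEGATON_TNT_J = 4.184e15  # joules per megaton of TNT

    def impact_energy_megatons(diameter_m: float, density_kg_m3: float, speed_m_s: float) -> float:
        """Kinetic energy of a spherical impactor, expressed in megatons of TNT."""
        radius = diameter_m / 2
        mass = density_kg_m3 * (4 / 3) * math.pi * radius**3
        energy_joules = 0.5 * mass * speed_m_s**2
        return energy_joules / MEGATON_TNT_J

    if __name__ == "__main__":
        e = impact_energy_megatons(diameter_m=2600, density_kg_m3=3000, speed_m_s=20000)
        print(f"Roughly {e / 1e6:.1f} million megatons of TNT")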
According to Spaceguard, "Of the approximately 2,000 Earth-crossing asteroids estimated to exist, fewer than 200 have actually been discovered. At present no asteroid is known to be on a collision course with the Earth." David Morrison of the NASA Spaceguard Research Center states, "The chances of a collision within the next century with an object 1 3/5 mile in diameter or more are very small (less than 1 in 100). But such a collision is possible and could happen at any time. If we did have sufficient warning, however, the incoming object could be deflected or destroyed." Cosmic impacts are the only known natural disaster that could be avoided entirely by the appropriate application of space technology.
The Spacewatch telescope, located on Kitt Peak, is used to survey for moving objects, including asteroids whose orbits approach or cross the orbit of the Earth. Among these are asteroids that may someday be used as sources of raw materials. Spacewatch uses a Charge-Coupled Device (CCD) and an automated computer program to discover NEOs.
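In essence, such an automated search compares successive CCD frames of the same star field and flags anything that has shifted between exposures. The Python sketch below illustrates that idea only; it is not Spacewatch's actual software, and the source positions and pixel threshold are made up for the example.

    # Illustrative sketch: flag candidate moving objects between two CCD frames.
    # Positions and the motion threshold are made-up example values.
    def find_moving_objects(frame1, frame2, min_motion=2.0):
        """Return positions from frame2 with no stationary counterpart in frame1.

        frame1, frame2: lists of (x, y) source positions in pixels from two
        exposures of the same field; min_motion is the smallest shift (pixels)
        that counts as a candidate mover.
        """
        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

        movers = []
        for source in frame2:
            nearest = min((dist(ref, source) for ref in frame1), default=float("inf"))
            if nearest >= min_motion:  # nothing sat at this spot in the first frame
                movers.append(source)
        return movers

    if __name__ == "__main__":
        stars_t0 = [(10.0, 10.0), (40.2, 55.1), (80.0, 20.0)]
        stars_t1 = [(10.1, 10.0), (40.2, 55.0), (83.5, 22.4)]  # last source has drifted
        print(find_moving_objects(stars_t0, stars_t1))         # -> [(83.5, 22.4)]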
The Spacewatch Observatory has already detected one of the smallest asteroids known, as well as one that passed very close to Earth, the Apollo asteroid 1991 BA. The semi-automatic Spacewatch system at the University of Arizona has considerably increased the discovery rate, and will have profound consequences for the utility of NEOs as near-Earth space resources.
f:\12000 essays\sciences (985)\Astronomy\Your Bones in Space.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Your Bones in Space
ASTRONOMY AND SPACE SCIENCE SIG
Hypogravitational Osteoporosis: A Review of Literature. By Lambert Titus Parker. May 19, 1987. (GEnie Spaceport)
Osteoporosis: a condition characterized by an absolute decrease in the amount of bone present to a level below which it is capable of maintaining the structural integrity of the skeleton.
To state the obvious, human beings have evolved under Earth's gravity ("1G"). Our musculoskeletal system has developed to help us navigate in this gravitational field, endowed with the ability to adapt as needed to various stresses, strains and available energy requirements. The system consists of bone, a highly specialized and dynamic supporting tissue that provides vertebrates with their rigid infrastructure. Bone consists of specialized connective tissue cells called osteocytes and a matrix of organic fibers held together by an organic cement, which gives bone its tenacity, elasticity and resilience. It also has an inorganic component, located in the cement between the fibers, consisting of calcium phosphate [85%], calcium carbonate [10%] and others [5%], which gives it hardness and rigidity. Other than providing the rigid infrastructure, bone protects vital organs like the brain, serves as a complex lever system, acts as a storage area for calcium (which is vital for human metabolism), houses the bone marrow within its mid cavity and, to top it all, is capable of changing its architecture and mass in response to outside and inner stress.
It is this dynamic remodeling of bone which is of primary interest in microgravity. To appreciate how dynamic the system is, note that a bone remodeling unit [a coupled phenomenon of bone re-absorption and bone formation] is initiated, and another finished, about every ten seconds in a healthy adult. This dynamic system responds to mechanical stress, or the lack of it, by increasing or decreasing bone mass and density as demanded of the system: a person dealing with increased mechanical stress will respond with increased bone mass and density, while a person who leads a sedentary life will have decreased bone mass and density, but still the right amount to support his or her structure against the mechanical stresses he or she experiences. Hormones also play a major role, as seen in postmenopausal osteoporosis in females (lack of estrogens), in which the rate of bone formation is usually normal while the rate of bone re-absorption is increased.
The skeletal system, whose mass represents a dynamic homeostasis under 1G weight bearing, reacts to any extended period in microgravity, which requires practically no weight bearing, by decreasing its mass through the bone/calcium regulatory system. After all, why carry all that extra mass and use all that energy to maintain what is not needed? Logically, the greatest loss, or demineralization, occurs in the weight-bearing bones of the leg [os calcis] and spine. Bone loss has been estimated by calcium-balance studies and excretion studies. Increased urinary excretion of calcium, hydroxyproline and phosphorus has been noted in the first 8 to 10 days of microgravity, suggestive of increased bone re-absorption. A rapid increase of urinary calcium has been noted after takeoff, with a plateau reached by day 30.
In contrast, there was a steady increase of mean fecal calcium throughout the stay in microgravity, which was not reduced until day 20 after return to 1G, while urinary calcium content usually returned to preflight levels by day 10 after return to 1G. There is also significant evidence, derived primarily from rodent studies, that seems to suggest decreased bone formation as a factor in hypogravitational osteoporosis. Boy Frame, M.D., a member of NASA's Life Science Advisory Committee [LSAC], postulated that "the initial pathologic event after the astronauts enter zero gravity occurs in the bone itself, and that changes in mineral homeostasis and the calcitropic hormones are secondary to this. It appears that zero gravity in some ways stimulate bone re-absorption, possibly through altered bioelectrical fields or altered distribution of tension and pressure on bone cells themselves. It is possible that gravitational and muscular strains on the skeletal system cause friction between bone crystals which creates bioelectrical fields. This bioelectrical effect in some way may stimulate bone cells and affect bone remodeling."
In the early missions, X-ray densitometry was used to measure the weight-bearing bones pre- and post-flight. In the later Apollo, Skylab and Spacelab missions, photon absorptiometry (a more sensitive indicator of bone mineral content) was utilized. The results of these studies indicated that bone mass [mineral content] loss was in the range of 3.2% to 8% on flights longer than two weeks, varying directly with the length of the stay in microgravity. The accuracy of these measurements has been questioned, since the margin of error for these measurements is 3 to 7%, a range close to the estimated bone loss.
Whatever the mechanism of hypogravitational osteoporosis, it is one of the more serious biomedical hazards of a prolonged stay in microgravity. Many forms of weight-loading exercise have been tried by the astronauts and cosmonauts to reduce space-related osteoporosis. Although isometric exercises have not been effective, use of the Bungee space suit has shown some results. However, use of the Bungee space suit [made in such a way that every body motion is resisted by springs and elastic bands, inducing stress and strain on the muscles and skeletal system] for the 6 to 8 hours a day necessary to achieve the desired effect is cumbersome, requires significant workload and reduces efficiency, and is thereby impractical for long-term use other than proving a theoretical principle in preventing hypogravitational osteoporosis.
Skylab experience has shown us that, in spite of space-related osteoporosis, humans can function in microgravity for six to nine months and return to Earth's gravity. However, since adults may rebuild only two-thirds of the skeletal mass lost, even 0.3% of calcium loss per month, though small in relation to the total skeletal mass, becomes significant when a Mars mission of 18 months is contemplated. Because the loss is only partly recovered, even short durations can cause additive effects. This problem becomes even greater in females, who are already prone to hormonal osteoporosis on Earth. So far several studies are under way with no significant results. Much study has yet to be done, and multiple experiments were scheduled on the Spacelab Life Science [SLS] shuttle missions prior to the Challenger tragedy.
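The quoted 0.3% calcium loss per month can be projected over an 18-month Mars mission with a short calculation. The Python sketch below is illustrative only and assumes the loss rate stays constant for the whole mission, which is not something the studies cited above claim.

    # Illustrative sketch: projecting cumulative calcium loss from a constant
    # monthly loss rate (an assumption), both linearly and compounded.
    def projected_loss(monthly_loss_fraction, months):
        """Return (linear, compounded) fractional calcium loss after `months`."""
        linear = monthly_loss_fraction * months
        compounded = 1.0 - (1.0 - monthly_loss_fraction) ** months
        return linear, compounded

    if __name__ == "__main__":
        linear, compounded = projected_loss(0.003, 18)
        print(f"Linear estimate: {linear:.1%}, compounded: {compounded:.1%}")  # ~5.4% vs ~5.3%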
Members of the LSAC had recommended that bone biopsies be performed for essential studies of bone histomorphometric changes in order to understand hypogravitational osteoporosis. In the past, astronauts with the Right Stuff had been resistant to and distrustful of medical experiments, but with scientific personnel with life science training we should be able to obtain valid hard data. [It is of interest that on the SLS mission, two of the mission specialists were to have been physicians, one a physiologist and one a veterinarian.]
After all is said, the problem is easily resolved by the creation of artificial gravity in rotating structures. However, if the structure is not large enough, the problem of the Coriolis effect must be faced.
To put the problem of space-related osteoporosis in perspective, we should review our definition of osteoporosis: a condition characterized by an absolute decrease in the amount of bone present to a level below which it is capable of maintaining the structural integrity of the skeleton. In microgravity, locomotion consists mostly of swimming actions, with stress being exerted on the upper extremities rather than the lower limbs, resulting in a reduction of the weight-bearing bones of the lower extremities and spine, which there are NOT needed for maintaining the structural integrity of the skeleton. So in microgravity the skeletal system adapts in a marvelous manner, and a problem arises only when this microgravity-adapted person needs to return to a higher gravitational field. The problem is really a problem of re-adaptation to Earth's gravity.
To the groups wanting to justify space-related research: medical expense due to osteoporosis in elderly women is close to 4 billion dollars a year, and significant work in this field alone could justify all space life science work. It is the opinion of many that the problem of osteoporosis on Earth and in hypogravity will be solved or contained, and once large rotating structures are built the problem will become academic.
For completeness' sake: Dr. Graveline, at the School of Aerospace Medicine, raised a litter of mice on an animal centrifuge simulating 2G and compared them with litter mates raised in 1G. "They were Herculean in their build, and unusually strong...." reported Dr. Graveline. X-ray studies also showed the 2G mice to have a skeletal density far greater than that of their 1G litter mates.
f:\12000 essays\sciences (985)\Biology\A Look at the Human Genome Project.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A Massive Project for the Benefit of Mankind:
A Look at the Human Genome Project
Scientists are taking medical technology to new heights as they race to map all of the genes, nearly 100,000 of them, in the 23 pairs of chromosomes of the human body. Along the way, they hope to understand the basis of, and maybe even develop methods of treating, certain genetic diseases, such as Alzheimer's and muscular dystrophy. They plan to do this by identifying the DNA sequence of an abnormal gene in which a disease originates and comparing it with the data of a normal or healthy gene. The entire research project is entitled "The Human Genome Project."
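The comparison described above, matching a disease-linked gene sequence against a healthy one, can be illustrated in a few lines of code. The Python sketch below is a toy example with made-up sequences; it is not the Human Genome Project's actual method, and it assumes the two sequences are already aligned and of equal length.

    # Illustrative sketch: report positions where a sample sequence differs from
    # a reference. The sequences are invented for the example.
    def find_differences(reference, sample):
        """Return (position, reference_base, sample_base) for every mismatch."""
        if len(reference) != len(sample):
            raise ValueError("this simple sketch assumes pre-aligned sequences of equal length")
        return [(i, r, s) for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

    if __name__ == "__main__":
        healthy = "ATGGCCATTGTAATGGGCCGC"
        mutant  = "ATGGCCATTGTCATGGGCCGC"
        print(find_differences(healthy, mutant))  # -> [(11, 'A', 'C')]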
"The Human Genome Project" is a large scale project being conducted by more than 200 laboratories, with even more researchers and labs having joined in. Most of the labs and researchers are located in France and the United States. The project started in 1990 and was slated to take 15 years and cost $3 billion in U.S. money for the entire project coming to roughly $200 million per year. Federal funding for the project is nearly 60% of the annual need. This has created some funding problems for the project. There also have been technological advances and discoveries that have helped to speed up the project. This automation may help to reduce the cost and help the project to meet its objectives ahead of schedule. The project was estimated to have detailed maps of all of the chromosomes and know the location of most of the human Genes by 1996.
Researchers have successfully located the gene and DNA sequence for Huntington's disease on chromosome 4 and have created a genetic test to determine if a person carries this gene. "The child of a person with Huntington's has a 50% chance of inheriting the gene, which inevitably leads to the disease." Once an individual acquires the gene, it is only a matter of time before they acquire the disease. Because the medical costs of treating such persons through a terminal illness are extremely high, insurance companies who want to stay in business see this genetic test, and others like it, as an opportunity to screen prospective clients for the probability of such diseases. Some people feel that this information gives insurance companies an unfair advantage over those covered by medical insurance, and point out that release of genetic information to insurance companies puts the person who is screened at a severe disadvantage, as well as violates the patient's right to privacy. If this genetic information is not safeguarded as confidential for the patient's and doctor's knowledge alone, then the patient can be labeled as undesirable and may not be able to receive insurance coverage at any price. This also brings up other ethical questions. "Does genetic testing constitute an invasion of privacy, and would it stigmatize those found to have serious inborn deficiencies? Would prenatal testing lead to more abortions? Should anyone be tested before the age of consent?"
Obviously, many genetic advancements are to come of this research. One biotechnology that will benefit from genetic testing is genetic engineering. It too, may have many social implications depending on what is created from such experimentation.
Gene therapy is one "spin-off" that has greatly benefited from gene mapping. It utilizes genetic engineering to treat genetic disorders by "introducing genes into existing cells to prevent or cure diseases". Most of the methods are still in the experimental stages and have yet to be approved by the FDA. One example is a proposed treatment for a brain tumor. Scientists would take a herpes gene and splice it into a nonvirulent virus. Viruses and liposomes have an uncanny ability to navigate through cell membranes. The virus is then placed into a laboratory animal to reproduce itself and, after reproduction, is injected into the human's brain tumor. The virus is supposed to invade the tumor cells. Thus, the herpes enzyme will render the tumor vulnerable to drugs used to cure herpes, killing the tumor, the virus, and the animal's cells used to manufacture the virus.
With this and other ideas springing out of the "medicine cabinet," many researchers are optimistic about the results of their research. There is also a direct correlation between the sequencing of genes and the production of effective drugs for diseases which may involve different strands of defective genes, such as Alzheimer's. Locating these genes would be crucial to synthesizing a product to affect that specific location in the gene. The director of the gene-therapy program at the University of Southern California, Dr. W. French Anderson, states, "Twenty years from now, gene therapy will have revolutionized medicine. Virtually every disease will have it as one of its treatments." Such an impact on medicine would take much longer to occur with "hit and miss" tactics, rather than methodically mapping out the blueprint for the body.
So whether we, as a society, want to go forward in this research slowly or with blazing speed, scientists will go forward and do what they set out to do. The fact that this research will benefit humanity is resounding; we just need to remember to handle our findings in a manner that benefits all of society, not just those on top of the economic food chain. Also, people should be able to decide for themselves whether they can handle knowing what their genetic flaws are. Sometimes knowing you will eventually be afflicted by a disease can be as emotionally devastating as actually having the disease.
Some states have already enacted laws guarding the rights of individuals who are genetically tested. The problem is that most only cover certain procedures and not all of the testing. However we govern such testing, we have to realize, will be inefficient by most standards, as government always is in complicated situations. I feel that if genetic information is to be public knowledge, then every country using this genetic concept should provide "blanket insurance" coverage for everyone at the same rate. This would be the only fair action with the common person's interest in mind; although it is a socialist concept, people would not be discriminated against and everyone would be on a level playing field. Since I don't see a comprehensive health care plan on our horizon, we should consider excluding personal genetic information from insurance companies, the government, etc., except for the actual treatment of the patient, which was the original reason these tests were created. The reason I feel genetic information should be totally excluded from insurance companies is this: once genetic testing becomes widely available, it would be easy for an insurance company to require people to submit to a genetic test before they could be covered. If the person applying is found to be unfit, it could go on his or her insurance "medical report," much like a "credit report," which would blacklist that person from ever getting coverage. Obviously there is a need for governmental laws to prevent this from happening. No one can control what genes they will get, and just because you have "bad" genes doesn't mean you are a "bad" person; thus no one should be discriminated against due to these "weaknesses." I personally feel that the Human Genome Project is a great undertaking intended for the benefit of mankind. There are many advances that have been made in treatments, as well as the creation of various machines that automate the process of gene mapping, machines that may also be used to automate the study of other organisms. I just don't trust the motives of the insurance companies, who could unduly benefit from such testing. I feel that the individual's right to privacy should remain paramount, and that there should be laws set in motion to prohibit a person from being discriminated against because of genetic predisposition.
Bibliography
Bloch, Hanna; Dan Cray and Christine Sadlowski: "Keys to the Kingdom" and "Do You Want to Know If the News Is Bad," Time Special Issue (vol. 148 No. 14, Fall 1996) pp. 24-29.
The Concise Columbia Encyclopedia is licensed from Columbia University Press. Copyright (c) 1995 by Columbia University Press. All rights reserved.
Duby, Jean-Jacques: "Genetic Discrimination," Science (vol. 270, Nov. 24, 1995) pp. 1282-1283.
Holmes, Bob: "Blueprint for Brewer's Yeast," New Scientist (vol. 150, Apr. 27, 1996) p. 11.
Hudson, Kathy L.: "Genetic Discrimination and Health Insurance: an Urgent Need for Reform," Science (vol. 270, Oct. 20, 1995) pp. 391-393.
Hutton, Richard: "Bio-Revolution: DNA and the Ethics of Manmade Life," New York: New American Library.
Lewis, John: "Automation System Quickens Gene Mapping," Design News (vol. 51, July 8, 1996)
Pennisi, Elizabeth: "New Gene Forges Link Between Fragile Site and Many Cancers," Science (May 3, 1996) p. 649.
f:\12000 essays\sciences (985)\Biology\A Study of Inheritable Traits in Fruit Flies.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTRODUCTION
The Drosophila melanogaster, more commonly known as the fruit fly, is a popular species used in genetic experiments. In fact, Thomas Hunt Morgan began using Drosophila in the early 1900's to study genes and their relation to certain chromosomes (Biology 263). Scientists have located over 500 genes on the four chromosomes of the fly. There are many advantages to using Drosophila for these types of studies. Drosophila melanogaster can lay hundreds of eggs after just one mating, and has a generation time of two weeks at 21°C (Genetics: Drosophila Crosses 9). Another reason for using fruit flies is that they mature rather quickly and don't require very much space. Drosophila melanogaster has a life cycle of four specific stages. The first stage is the egg, which is about .5mm long. In the 24 hours when the fly is in the egg stage, numerous cleavage nuclei form. Next, the egg hatches to reveal the larva. During this stage, growth and molting occur. Once growth is complete, the Drosophila enters the pupal stage, where it develops into an adult through metamorphosis. Upon reaching adulthood, the flies are ready to mate and produce the next generation of Drosophila melanogaster.
During this experiment, monohybrid and dihybrid crosses were conducted with Drosophila melanogaster. Our objective was to examine how traits are inherited from one generation to the next. We collected the data from the crosses and analyzed them in relation to the expected results.
MATERIALS AND METHODS
For the monohybrid cross in this experiment, we used an F1 generation, which resulted from the mating of a male fly homozygous for wild-type eyes with a female fly homozygous for sepia eyes. Males and females are distinguished by differences in body shape and size. Males have a darker and rounder abdomen in comparison to females, whose abdomens are more pointed. Another difference occurs on the forelegs of the flies: males have small bumps called sex combs. At week 0, after being anesthetized with Fly-Nap, three males and three females were identified under a dissecting microscope and placed in a plastic vial with a foam stopper at the end. The vial remained on its side until the flies regained consciousness so that they didn't get trapped by the culture medium at the bottom. We allowed the Drosophila to incubate and reproduce for a week.
After one week, the vial contained many larvae in addition to the F1 generation flies. Next, we removed the F1 generation flies to prevent breeding between the two generations. Acting as Dr. Kevorkian, we gave the F1 generation a lethal dose of the seemingly harmless anesthetic, Fly-Nap. A trumpet solo of "Taps" played in our minds as we said goodbye and placed them in the fly morgue. We allowed the F2 larval generation to incubate for two weeks. The experiment called for one week of incubation, but Easter fell during that week, which interfered with our lab time. After the two weeks, the F2 flies were also terminally anesthetized. Only, before saying goodbye, we separated the flies according to sex and eye color (wild-type, red, or mutant, sepia), recording the results in Table 1.
The same method was used in the dihybrid cross, except that two traits were observed instead of one. The traits were eye color (wild-type, red, or mutant, sepia) and wing formation (wild-type, full, or mutant, vestigial). The F1 generation for the dihybrid cross came from a cross between a male homozygous wild-type for eyes and wings and a female homozygous for sepia eyes and vestigial wings. The results of this cross were recorded and appear in Table 2.
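The expected phenotype ratios for these crosses can be worked out by enumerating every equally likely pairing of gametes, Punnett-square style. The short Python sketch below illustrates this; it is not part of the lab procedure, and its function names are invented for clarity, but the allele symbols follow the +/se and +/vg notation used above.

from itertools import product
from collections import Counter

def cross(gametes1, gametes2, classify):
    """Count offspring phenotypes over all equally likely gamete pairings."""
    counts = Counter()
    for g1, g2 in product(gametes1, gametes2):
        counts[classify(g1, g2)] += 1
    return counts

# Monohybrid: each F1 parent (+/se) contributes a "+" or "se" gamete with equal chance.
mono = cross(["+", "se"], ["+", "se"],
             lambda a, b: "red (wild-type) eyes" if "+" in (a, b) else "sepia eyes")
print(dict(mono))   # {'red (wild-type) eyes': 3, 'sepia eyes': 1} -- the 3:1 ratio

# Dihybrid: each F1 parent (+/se ; +/vg) makes four gamete types; independent
# assortment of the two genes yields the 9:3:3:1 phenotypic ratio.
gametes = [(eye, wing) for eye in ("+", "se") for wing in ("+", "vg")]
def phenotype(g1, g2):
    eyes = "red" if "+" in (g1[0], g2[0]) else "sepia"
    wings = "full" if "+" in (g1[1], g2[1]) else "vestigial"
    return eyes + " eyes, " + wings + " wings"
print(dict(cross(gametes, gametes, phenotype)))
# {'red eyes, full wings': 9, 'red eyes, vestigial wings': 3,
#  'sepia eyes, full wings': 3, 'sepia eyes, vestigial wings': 1}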
RESULTS
The monohybrid cross of Drosophila melanogaster produced 25,893 flies for all of the sections combined. Of those flies, 75.9% had wild-type (red) eyes, and 24.1% had mutant (sepia) eyes. Overall, more females were produced than males.
TABLE 1: F1 Generation Monohybrid Cross of Drosophila melanogaster (+se x +se)
                            CLASS RESULTS                RESULTS FROM ALL CLASSES
PHENOTYPE            NUMBER   PERCENT   RATIO      NUMBER   PERCENT   RATIO
MALES
  WILD-TYPE EYES        562     74.8%     3.0       8,960     75.4%     3.1
  SEPIA EYES            189     25.2%     1         2,923     24.6%     1
FEMALES
  WILD-TYPE EYES        806     75.6%     3.1      10,685     76.3%     3.2
  SEPIA EYES            260     24.4%     1         3,325     23.7%     1
BOTH SEXES
  WILD-TYPE EYES      1,368     75.3%     3.0      19,645     75.9%     3.1
  SEPIA EYES            449     24.7%     1         6,248     24.1%     1
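The percent and ratio columns can be reproduced directly from the raw counts. The Python lines below are a quick arithmetic check (my own reconstruction of how the columns are computed, not part of the lab materials), using the "both sexes, results from all classes" row of Table 1: the percent is each count over the total, and the ratio is each count divided by the sepia count.

# Reconstruction of the percent and ratio columns in Table 1, using the
# pooled "both sexes, results from all classes" counts.
wild_type = 19645
sepia = 6248
total = wild_type + sepia

for name, count in [("wild-type", wild_type), ("sepia", sepia)]:
    percent = 100 * count / total
    ratio = count / sepia            # ratio is expressed relative to the sepia class
    print("%s: %.1f%%  ratio %.1f" % (name, percent, ratio))
# wild-type: 75.9%  ratio 3.1
# sepia: 24.1%  ratio 1.0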
The dihybrid cross produced a total of 26,623 flies for all of the sections combined. Of these, 54.9% had wild-type (red) eyes and wild-type (full) wings, 17.7% had wild-type eyes and vestigial wings, 21.3% had sepia eyes and full wings, and 6.1% had sepia eyes and vestigial wings. Again, the number of females produced exceeded the number of males.
TABLE 2: F1 Generation Dihybrid Cross of Drosophila melanogaster (+vg+se x +vg+se)
                                       CLASS RESULTS                RESULTS FROM ALL CLASSES
PHENOTYPE                       NUMBER   PERCENT   RATIO      NUMBER   PERCENT   RATIO
MALES
  WILD-TYPE EYES, WILD-TYPE WINGS  244     47.8%     6.3       6,987     54.4%     8.6
  WILD-TYPE EYES, VESTIGIAL WINGS  132     25.9%     3.4       2,315     18.0%     2.9
  SEPIA EYES, WILD-TYPE WINGS       95     18.6%     2.4       2,727     21.2%     3.4
  SEPIA EYES, VESTIGIAL WINGS       39      7.6%     1           808      6.4%     1
FEMALES
  WILD-TYPE EYES, WILD-TYPE WINGS  281     51.1%     7.0       7,615     55.2%     9.3
  WILD-TYPE EYES, VESTIGIAL WINGS  100     18.2%     2.5       2,397     17.4%     2.9
  SEPIA EYES, WILD-TYPE WINGS      129     23.5%     3.2       2,953     21.4%     3.6
  SEPIA EYES, VESTIGIAL WINGS       40      7.3%     1           821      6.0%     1
BOTH SEXES
  WILD-TYPE EYES, WILD-TYPE WINGS  525     49.5%     6.6      14,602     54.9%     9.0
  WILD-TYPE EYES, VESTIGIAL WINGS  232     21.9%     2.9       4,712     17.7%     2.9
  SEPIA EYES, WILD-TYPE WINGS      224     21.1%     2.8       5,680     21.3%     3.5
  SEPIA EYES, VESTIGIAL WINGS       79      7.5%     1         1,629      6.1%     1
DISCUSSION
The results from the monohybrid cross, both for my class and for all sections, were very close to the expected results. "Theoretically, there should be three red-eyed flies for every one sepia-eyed fly. We call this a 3:1 phenotypic ratio" (So What's a Monohybrid Cross Anyway? 2). As indicated in Table 1, the data come within one or two tenths of the 3:1 ratio. Therefore, the monohybrid cross was very accurate. However, the results from the dihybrid cross were not quite as accurate. Mendel hypothesized and demonstrated that a dihybrid cross should produce a 9:3:3:1 ratio (Biology 245). In our experiment, the results from my class (both sexes) were not very close to this ratio; as Table 2 shows, the ratio was 6.6:2.9:2.8:1. The data obtained from all classes were somewhat closer: all sections together (both sexes) produced a ratio of 9.0:2.9:3.5:1.
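A chi-square goodness-of-fit test makes the comparison with the expected ratios more precise. The sketch below is an addition of mine rather than part of the original lab analysis; it tests the pooled "both sexes, results from all classes" counts from Tables 1 and 2 against the 3:1 and 9:3:3:1 expectations.

# Chi-square goodness-of-fit: observed counts vs. Mendelian expected ratios.
def chi_square(observed, expected_ratio):
    total = sum(observed)
    ratio_sum = sum(expected_ratio)
    expected = [total * r / ratio_sum for r in expected_ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Monohybrid (wild-type, sepia) against 3:1.
print(chi_square([19645, 6248], [3, 1]))                    # about 10.5, 1 degree of freedom

# Dihybrid (red/full, red/vestigial, sepia/full, sepia/vestigial) against 9:3:3:1.
print(chi_square([14602, 4712, 5680, 1629], [9, 3, 3, 1]))  # about 120, 3 degrees of freedom

With samples this large, even small percentage deviations produce chi-square values above the usual critical values, which is consistent with the dihybrid results straying further from 9:3:3:1 than the monohybrid results stray from 3:1.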
There are many reasons that our results did not match the expected ratios. For example, when transferring flies from one vial to another, a few flies got away, which could have had a small effect on the numbers. Another factor also arose while transferring flies: a number of flies became embedded in the culture medium, and we were forced to leave them there so that we didn't loosen the medium. The largest source of error in the "my class" column came from the amount of time we allowed the flies to reproduce. Since Easter vacation occurred during our lab period, our second-generation flies were permitted to stay together for two weeks instead of one. This may have resulted in the F2 generation flies mating with their own offspring, thus throwing off the ratio.
I feel more certain about the results in the "all classes" column since many more trials were performed and more flies were used. In any experiment, the more trials one conducts, the closer the results tend to come to the expected values. This makes sense when comparing the results from my class with the results from all classes combined. The numbers of flies used in each column make the difference in sample size evident: 1,060 flies were produced in my class, whereas 26,623 flies were produced in all classes.
In the monohybrid cross, the eye-color ratio for females was consistent with the ratio for males. This implies that the gene for eye color is not sex-linked. Through research, I found that in Drosophila melanogaster chromosome one is the sex chromosome. The eye-color gene is not on chromosome one, but rather on chromosome three. Therefore, eye color in Drosophila is not sex-linked (Genetics: Drosophila Crosses).
In each column, the number of females produced outweighed the number of males. This may imply that the X chromosome is dominant over the Y chromosome. This would cause the X chromosome to mix with another X chromosome, producing a female, more often than it would mix with the Y chromosome, which would produce a male.
As a follow-up to the experiment, I would perform many more trials than each person did for this experiment. Also, more flies could be placed in each vial so that even more offspring would be included in the data. I would also be sure to remove the flies after just one week to reduce breeding between generations.
This experiment made Mendel's findings more concrete and realistic in my mind. It made the information more than meaningless numbers. The experiment also made me realize how readily basic biological ideas can be tested. Our results agree with Mendel's discoveries. The only drawback to our learning was the massacre of over 26,000 fruit flies.
REFERENCES
Campbell, Neil A. Biology: Fourth Edition. Menlo Park: Benjamin/Cummings, 1996.
"Genetics: Drosophila Crosses." Lab Handouts, General Biology Lab, 1996.
"So What's a Monohybrid Cross Anyway?" Lab Handouts, General Biology Lab, 1996.
f:\12000 essays\sciences (985)\Biology\Abstract from Where Do We Draw the Line.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Abstract from: Cloning: Where Do We Draw the Line?
The first attempt at cloning was conducted in 1952 on a group of frogs. The experiment was a partial success. The frog cells were cloned into other living frogs; however, only one in every thousand developed normally, and all of those were sterile. The rest of the frogs that survived grew to abnormally large sizes. In 1993, Jerry Hall, scientist and director of the in-vitro lab at George Washington University, and his associate Robert Stillman reported the first ever successful cloning of human embryos. It was the discovery of in-vitro fertilization in the 1940's that began the pursuit to ease the suffering of infertile couples. After years of research, scientists learned that "in a typical in-vitro procedure, doctors will insert three to five embryos in hopes that, at most, one or two will implant" (Elmer-Dewitt 38), and that "a woman with only one embryo has about a 10% to 20% chance of getting pregnant through in-vitro fertilization. If that embryo could be cloned and turned into three or four, the chances of a successful pregnancy would increase significantly" (Elmer-Dewitt 38).
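The quoted reasoning can be illustrated with a little probability. Assuming each implanted embryo has an independent chance of implanting (a simplifying assumption of mine, not something the article states), the chance that at least one of n embryos implants is 1 - (1 - p)^n, as in the short Python sketch below.

# Chance of at least one embryo implanting, assuming each has an independent
# per-embryo success probability (an illustrative simplification).
def chance_of_pregnancy(per_embryo_chance, n_embryos):
    return 1 - (1 - per_embryo_chance) ** n_embryos

for n in (1, 3, 4):
    print(n, round(chance_of_pregnancy(0.15, n), 2))
# 1 0.15   3 0.39   4 0.48  -- using 15%, the midpoint of the 10-20% figure quoted above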
The experiment the scientists performed is the equivalent of a mother producing twins. The process has been practiced and almost perfected in livestock for the past ten years, and some scientists believe it seems only logical that it would be the next step in in-vitro fertilization. The procedure was remarkably simple. Hall and Stillman "selected embryos that were abnormal because they came from eggs that had been fertilized by more than one sperm" (Elmer-Dewitt 38); because the embryos were defective, it would have been impossible for the scientists to actually clone another person. They did, however, split the embryos into separate cells, as a result creating separate and identical clones. They began experimenting on seventeen of the defective embryos, and "when one of those single-celled embryos divided into two cells... the scientists quickly separated the cells, creating two different embryos with the same genetic information" (Elmer-Dewitt 38). The cells are coated with a protective covering "called a zona pellucida, that is essential to development" (Elmer-Dewitt 38), which was stripped away and replaced with a gel-like substance made from seaweed that Hall had been experimenting with. The scientists were able to produce forty-eight clones, all of which died within six days. Other scientists have been quoted as saying that although the experiment is fairly uncomplicated, it had not been tried before because of the moral and ethical issues surrounding an experiment such as this one. Some people believe that aiding infertile couples is the only true benefit of cloning human embryos, and fear that if the research is continued it could get out of hand. Other advantages that have been suggested include freezing human embryos for later use, in the event that a child should get sick or die. If parents have had their child's embryo cloned and frozen and their child dies at an early age of crib death, they could have one of the frozen embryos thawed and implanted into the womb. Nine months later, the mother would give birth to a child identical to the one they had lost. Or, if a four-year-old child develops leukemia and requires a bone marrow transplant, a couple could implant a previously frozen embryo cloned from their first child and produce an identical twin as a guarantee of a perfect match. The parents would therefore have identical twins that were four years apart. The disadvantages are endless. If this type of technique were exploited and misused, we could be heading down "a tunnel of madness" (Elmer-Dewitt 37). "Researchers have developed DNA-analysis techniques to screen embryos for... disorders, but the procedures require snipping cells off embryos, a process that sometimes kills them" (Elmer-Dewitt 39). It is expected that the idea of throwing away an embryo because it is disease-ridden will throw pro-life activists into a frenzy (Elmer-Dewitt 39). It is one thing to exercise the freedom of choice to abort an unwanted child for whatever reason, but to throw one away because of prior knowledge that it carries a disease is, in my opinion, unethical. These kinds of possibilities are producing moral and ethical debates among ethicists the world over. Most countries have set regulations concerning the cloning of human embryos, and in some countries it is an offense punishable by law and incarceration. Between the medical contributions and the ethical questions surrounding the cloning of human embryos, it is unlikely that we will have the opportunity to discover whether further research into Hall and Stillman's experiment could actually produce human beings.
References
Elmer-Dewitt, Philip. "Cloning: Where Do We Draw the Line?" Time Magazine. November 8th, 1993: 37-42.
f:\12000 essays\sciences (985)\Biology\Acid Rain Essay.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Within this past century, acidity in the air and acid rain have come to be recognized as leading threats to our planet's environment. No longer limited by geographic boundaries, acid-causing emissions are causing problems all over the world. Some laws have been passed which limit the amount of pollutants released into the air, but tougher legislation must be implemented before this problem can be overcome.
Acid rain is produced when automobiles, smelters, power plants, and other industrial facilities burn fossil fuels such as gasoline, coal, and fuel oils. When combusted, these non-renewable resources release pollutants such as sulfur, carbon, and nitrogen oxides into the air. These oxides combine with the humidity in the air and form sulfuric, nitric, and carbonic acid. This acidic solution eventually condenses in the air and comes back down to the earth in some form of precipitation (snow, rain, hail).
Upon returning to the earth, the acidic precipitation can have serious repercussions for both the environment and human structures. On average, acid rain is about nine times more acidic than rain water and has been recorded as low as 2.5 on the pH scale (forty times more acidic than water). Acid deposition kills fish and soil bacteria, as well as aquatic and terrestrial plants. The acid also leaches minerals such as aluminum from the soil and releases them into bodies of water such as streams, lakes, and ponds. These bodies of water develop highly concentrated levels of these substances, which can seriously harm the aquatic life forms in the area. Areas without any alkaline deposits in the soil to neutralize some of the acid are hurt the most by this destructive force, which damages crops and trees and can even kill an entire pond or lake. Acid rain is also a strong destructive force against man-made structures, reacting with marble, plastics, and rubber.
The problem of acid rain stems mostly from northern countries such as the United States, Canada, and many countries of Eastern and Western Europe, as well as Japan. The consequences of acid precipitation have been most apparent in Norway, Sweden, and Canada; however, because of tall smokestacks, many pollutants rise high into the atmosphere, where air currents can pick them up and carry them into an entirely different country. This cross-border issue is causing global concern, as it is no longer simply one country's problem.
This concern is well illustrated in North America, where pollution emissions from Canada and the U.S. cross into each other's territory. For example, coal-powered electric generating stations in the Midwestern U.S. appear to be the cause of a severe acid rain problem in eastern Canada.
Acid rain is of strong concern worldwide, and something must be done to reduce, or hopefully end, the problem. The acid kills nearly all forms of life, and tens of thousands of lakes have already been destroyed by acid rain. Some of the great monuments of the world, such as the cathedrals of Europe and the Coliseum in Rome, are beginning to be eaten away by the acidic rainfall. Several laws have been passed, such as the Clean Air Act of 1970 and the Clean Air Act of 1990. Both laws have helped in the reduction of acid rain, but much more still needs to be done. The second law calls for a 50% decrease in sulfur dioxide and nitrogen oxide emissions, a &0% decrease in carbon monoxide, and a 20% decrease in other emissions. Also in 1990, the California Air Resources Board introduced the strictest vehicle emission controls in the world. Many other northeastern states came up with similar controls, but California's were the toughest, giving the state until 2003 to decrease hydrocarbon emissions of new cars by 70% and to make sure that at least 10% of all cars produced no harmful emissions. On March 13, 1991, Canada and the U.S. agreed to the Air Quality Accord, which includes a 40% decrease in annual sulfur dioxide emissions by the U.S. from the 1980 level by the year 2000. On December 11 of the same year, Canada announced the Green Plan, which contains goals such as a 50% reduction of sulfur dioxide emissions in Eastern Canada beyond 1994 and an extension of the acid rain control program to emissions in Western Canada.
Many countries feel that the cost of reducing acid rain is too high, but Canada's progress in reducing sulfur dioxide emissions is proof that a country's economy can coexist with environmental protection. Canada has been able to reduce sulfur dioxide emissions from 6.9 million tonnes in 1970 to 3.7 million tonnes in 1990. That is close to a 50% reduction. This is a significant decrease, but more still needs to be done, even if it is a financial burden. The money spent on reducing and eliminating air pollutants will be more than compensated for by the money saved on the damage caused by acid rain: lower costs for repairing man-made structures and less damage to lakes and plant life. California's vehicle emission controls of 1990 are a step in the right direction, since their goals include not only reducing but also eliminating the contaminating emissions released by cars. Equivalent, if not stricter, standards should be set up internationally, not only for cars but for all contributors to the acid rain problem. Factories should be required to install scrubbers or other technological devices to reduce air pollution. As the world superpower, the United States must take the first step, get all countries involved, and ensure that goals are met. Laws on minimum pollution-reduction requirements should be set realistically for the whole world, to help ensure that everyone is doing their part to solve this global issue.
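As a quick check of the figures cited above (my own arithmetic, not from the essay's sources), the drop in Canada's sulfur dioxide emissions works out as follows:

# Percentage reduction in Canada's sulfur dioxide emissions, 1970 to 1990.
before, after = 6.9, 3.7        # million tonnes
reduction = 100 * (before - after) / before
print("%.1f%% reduction" % reduction)   # 46.4%, close to the 50% figure cited above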
f:\12000 essays\sciences (985)\Biology\Acid Rain.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THOUGHTS ON ACID RAIN
Acid rain is a serious problem with disastrous effects, and it grows worse each day. Many people believe that this issue is too small to deal with right now, but it should be met head-on and solved before it is too late. In the following paragraphs I will discuss the impact acid rain has on wildlife and how it is damaging our atmosphere.
CAUSES
Acid rain is a cancer eating into the face of Eastern Canada and the Northeastern United States. In Canada, the main sources of sulphuric acid are non-ferrous smelters and power generation. On both sides of the border, cars and trucks are the main sources of nitric acid (about 40% of the total), while power generating plants and industrial, commercial, and residential fuel combustion together contribute most of the rest. In the air, the sulphur dioxide and nitrogen oxides can be transformed into sulphuric acid and nitric acid, and air currents can send them thousands of kilometres from the source. When the acids fall to the earth in any form, they have a large impact on the growth and preservation of certain wildlife.
NO DEFENSE
In areas of Ontario, mainly southern regions near the Great Lakes, substances such as limestone or other natural buffers can neutralize acids entering a body of water, thereby protecting it. However, in the large areas of Ontario that lie on the Precambrian Shield, with quartzite- or granite-based geology and little topsoil, there is not enough buffering capacity to neutralize even small amounts of acid falling on the soil and the lakes. Therefore, over time, the environment shifts from an alkaline to an acidic one. This is why many lakes in the Muskoka, Haliburton, Algonquin, Parry Sound and Manitoulin districts could lose their fisheries if sulphur emissions are not reduced substantially.
ACID
The mean pH of rainfall in Ontario's Muskoka-Haliburton lake country ranges between 3.95 and 4.38, about 40 times more acidic than normal rainfall, while storms in Pennsylvania have produced rainfall with a pH of 2.8, almost the same rating as vinegar.
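Because the pH scale is logarithmic, a difference in pH translates into a power of ten in acidity, which is where figures like "40 times more acidic" come from. The sketch below assumes a baseline of about pH 5.6 for normal, unpolluted rainfall; that baseline is my assumption, since the essay does not state its reference value.

# Relative acidity from a pH difference: each unit of pH is a factor of ten
# in hydrogen-ion concentration.
def times_more_acidic(sample_ph, reference_ph=5.6):
    return 10 ** (reference_ph - sample_ph)

print(round(times_more_acidic(4.0)))    # about 40, matching the Muskoka-Haliburton figure
print(round(times_more_acidic(2.8)))    # about 630 for the Pennsylvania storms mentioned above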
Already 140 Ontario lakes are completely dead or dying. An additional 48 000 are sensitive and vulnerable to acid rain due
to the surrounding concentrated acidic soils.
ACID RAIN CONSISTS OF....?
Canada does not have as many people, power plants, or automobiles as the United States, and yet acid rain there has become so severe that Canadian government officials have called it the most pressing environmental issue facing the nation. But it is important to bear in mind that acid rain is only one segment of the widespread pollution of the atmosphere facing the world. Each year the global atmosphere is on the receiving end of 20 billion tons of carbon dioxide, 130 million tons of sulfur dioxide, 97 million tons of hydrocarbons, 53 million tons of nitrogen oxides, more than three million tons of arsenic, cadmium, lead, mercury, nickel, zinc, and other toxic metals, and a host of synthetic organic compounds ranging from polychlorinated biphenyls (PCBs) to toxaphene and other pesticides, a number of which may be capable of causing cancer, birth defects, or genetic imbalances.
COST OF ACID RAIN
Interactions between pollutants can cause further problems. In addition to contributing to acid rain, nitrogen oxides can react with hydrocarbons to produce ozone, a major air pollutant responsible in the United States for annual losses of $2 billion to $4.5 billion worth of wheat, corn, soybeans, and peanuts. A wide range of interactions, many of them unknown, can occur with toxic metals.
In Canada, Ontario alone has lost the fish in an estimated 4,000 lakes, and provincial authorities calculate that Ontario stands to lose the fish in 48,500 more lakes within the next twenty years if acid rain continues at the present rate. Ontario is not alone: on Nova Scotia's easternmost shores, almost every river flowing to the Atlantic Ocean is poisoned with acid, further threatening a fishing industry worth $2 million a year.
THE DYING
Acid rain is killing more than lakes. It can scar the leaves of hardwood forests, wither ferns and lichens, accelerate the death of coniferous needles, sterilize seeds, and weaken forests to a state that is vulnerable to disease, infestation, and decay. In the soil, the acid neutralizes chemicals vital for growth, strips others from the soil and carries them to the lakes, and literally retards the respiration of the soil. The rate of forest growth in the White Mountains of New Hampshire declined 18% between 1956 and 1965, a period of increasingly intense acidic rainfall.
Acid rain no longer falls exclusively on the lakes, forests, and thin soils of the Northeast; it now covers half the continent.
EFFECTS
There is evidence that the rain is destroying the productivity of the once-rich soils themselves, like an overdose of chemical fertilizer or a gigantic drenching of vinegar. The damage from such overdosing may not be repairable or reversible. On some croplands, tomatoes grow to only half their full weight, and the leaves of radishes wither. Naturally, it rains on cities too, eating away stone monuments and concrete structures and corroding the pipes which channel the water away to the lakes, and the cycle is repeated. Paints, including automobile finishes, have their life reduced because the pollution in the atmosphere speeds up the corrosion process. In some communities the drinking water is laced with toxic metals freed from metal pipes by the acidity. As if urban skies were not already grey enough, typical visibility along the Eastern seaboard has declined from 10 to 4 miles as acid rain turns into smog. There are also now indications that the components of acid rain are a health risk, linked to human respiratory disease.
PREVENTION
The acidification of water supplies can also result in increased concentrations of metals such as lead, copper, and zinc leached from plumbing, which could have adverse health effects. After any period of non-use, such as at summer cottages or ski chalets, the taps should be run for at least 60 seconds to flush out any excess debris.
STATISTICS
Although there is very little data, the evidence indicates that in the last twenty to thirty years the acidity of rain has increased in many parts of the United States. Presently, the United States annually discharges more than 26 million tons of sulfur dioxide into the atmosphere. Just three states, Ohio, Indiana, and Illinois, are responsible for nearly a quarter of this total. Overall, two-thirds of the sulfur dioxide released into the atmosphere over the United States comes from coal-fired and oil-fired plants. Industrial boilers, smelters, and refineries contribute 26%; commercial institutions and residences 5%; and transportation 3%. The outlook for future emissions of sulfur dioxide is not a bright one. Between now and the year 2000, United States utilities are expected to double the amount of coal they burn. The United States currently pumps some 23 million tons of nitrogen oxides into the atmosphere in the course of a year.
Transportation sources account for 40%; power plants, 30%; industrial sources, 25%; and commercial institutions and residences, 5%. What makes these figures particularly disturbing is that nitrogen oxide emissions have tripled in the last thirty years.
Acid rain is a very real and very threatening problem. Action by one government is not enough. In order for things to get done, we need to find a way to work together toward at least a reduction in the contaminants contributing to acid rain. Although there have been steps in the right direction, the government should be cracking down on factories that do not use the best filtering systems when incinerating or that give off other dangerous fumes. I would like to pose this question to you, the public: WOULD YOU RATHER PAY A LITTLE NOW OR A LOT LATER?
f:\12000 essays\sciences (985)\Biology\Acquired Immune Difficiency Syndrome.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acquired Immune Deficiency Syndrome
AIDS is a life and death issue. To have the AIDS disease is at present a sentence of
slow but inevitable death. I've already lost one friend to AIDS. I may soon lose others.
My own sexual behavior and that of many of my friends has been profoundly altered
by it. In my part of the country, one man in 10 may already be carrying the AIDS virus.
While the figures may currently be less in much of the rest of the country, this is
changing rapidly. There currently is neither a cure, nor even an effective treatment, and no
vaccine either. But there are things that have been PROVEN immensely effective in slowing
the spread of this hideously lethal disease. In this essay I hope to present this
information.
History and Overview
AIDS stands for Acquired Immune Deficiency Syndrome. It is caused by a virus. The
disease originated somewhere in Africa about 20 years ago. There it first appeared as a
mysterious ailment afflicting primarily heterosexuals of both sexes. It probably was
spread especially fast by primarily female prostitutes there. AIDS has already become a
crisis of STAGGERING proportions in parts of Africa. In Zaire, it is estimated that over
twenty percent of the adults currently carry the virus. That figure is increasing. And what
occurred there will, if no cure is found, most likely occur here among heterosexual folks.
AIDS was first seen as a disease of gay males in this country. This was a result of
the fact that gay males in this culture in the days before AIDS had an average of 200 to
400 new sexual contacts per year. This figure was much higher than common practice
among heterosexual (straight) men or women. In addition, it turned out that rectal sex
was a particularly effective way to transmit the disease, and rectal sex is a
common practice among gay males. For these reasons, the disease spread in the gay male
population of this country immensely more quickly than in other populations. It came to
be thought of as a "gay disease". Because the disease is spread primarily by exposure of
one's blood to infected blood or semen, I.V. drug addicts who shared needles also soon
were identified as an affected group. As the AIDS epidemic began to affect
increasingly large fractions of those two populations (gay males and IV drug abusers),
many of the rest of this society looked on smugly, for both populations tended to be
despised by the "mainstream" of society here.
But AIDS is also spread by heterosexual sex. In addition, it is spread by blood
transfusions. New born babies can acquire the disease from infected mothers during
pregnancy. Gradually more and more "mainstream" folks got the disease. Most recently, a
member of congress died of the disease. Finally, even the national news media began to
join in the task of educating the public to the notion that AIDS can affect everyone.
Basic medical research began to provide a few bits of information, and some
help. The virus causing the disease was isolated and identified. The AIDS virus turned out
to be a very unusual sort of virus. Its genetic material was not DNA, but RNA. When it
infected human cells, it had its RNA direct the synthesis of viral DNA. While RNA
viruses are not that uncommon, very few RNA viruses reproduce by setting up the flow
of information from RNA to DNA. Such reverse or "retro" flow of information does not
occur at all in any DNA virus or in any other living thing. Hence, the virus was said to
belong to the rare group of viruses called "retroviruses". Research provided the means to
test donated blood for the presence of the antibodies to the virus, astronomically reducing
the chance of one's getting AIDS from a blood transfusion. This was one of the first real
breakthroughs. The same discoveries that allowed us to make our blood bank blood supply
far safer also allowed us to be able to tell (in most cases) whether one has been exposed to
the AIDS virus using a simple blood test.
f:\12000 essays\sciences (985)\Biology\ADD.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chris Brown
English 102: section 6
May 3, 1996
ATTENTION DEFICIT DISORDER
Approximately 3-5% of all American children have an
Attention Deficit Disorder (ADD). ADD is a leading cause of
school failure and under-achievement. ADD characteristics often
arise in early childhood. As many as 50% of children with ADD
are never diagnosed. Boys significantly outnumber girls, though
girls with ADD are more likely to go undiagnosed. "ADD is not
an attention disorder, but a disorder of impulse control" (seminar
notes, Barkley).
Characteristics of Attention Deficit Disorder can include:
fidgeting with hands or feet, difficulty remaining seated,
difficulty awaiting turns in games, difficulty following through
on instructions, shifting from one uncompleted task to another,
difficulty playing quietly, interrupting conversations and
intruding into other children's games, appearing not to listen to
what is being said, and doing dangerous things without thinking
about the consequences.
Most scientists now believe that a brain dysfunction or
abnormality in brain chemistry could be to blame for the
symptoms of Attention Deficit Disorder. The frontal lobes of the
brain are thought to be most responsible for the regulation of
behavior and attention. They receive information from the lower
brain, which regulates arousal and screens incoming messages
from within and outside of the body. The limbic system , a group
of related nervous system structures located in the midbrain and
linked to emotions and feelings, also sends messages to the frontal
lobes. Finally, the frontal lobes are suspected to be the site of
working memory, the place where information about the
immediate environment is considered for memory storage,
planning, and future-directed behavior. Scientists believe the
activity in the frontal lobes is depressed in people with ADD.
Studies show a decrease in the ability of the ADD brain to use
glucose, the body's main source of energy, leading to slower and
less efficient activity. Neurotransmitters provide the connection
between one nerve cell and another. In essence, neurotransmitters
allow electrical impulses to pass across synapses from one neuron
to another. It is now suspected that people with Attention Deficit
Disorder have a chemical imbalance of a class of neurotransmitters
called catecholamines. Dopamine helps to form a pathway
between the motor center of the midbrain and the frontal lobes, as
well as a pathway between the limbic system and the frontal lobes.
Without enough dopamine and related neurotransmitters, such as
serotonin and norepinephrine, the frontal lobes are under-
stimulated and thus unable to perform their complex functions
efficiently.
Attention Deficit Disorder is strongly considered to be genetically
inherited; however, not all cases of ADD may be genetically
linked. Studies have shown that 20-30% of all hyperactive
children have at least one parent with ADD. The environment is a
big influence on a child during pregnancy and after. Some studies
show that a small percentage of ADD cases were influenced by
smoking, drinking alcohol, and using drugs during pregnancy.
Exposure to toxins, such as lead, may also alter the brain
chemistry and function.
If you suspect that you are suffering from Attention Deficit
Disorder, you will need to discuss it with your medical doctor. In
most cases the doctor will recommend that you visit a psychologist
for an evaluation. The psychologist is professionally trained in
human behavior and will be able to provide counseling and testing
in areas related to mental health. The psychologist is not able to
prescribe medication to help you, but may send you to a
psychiatrist to prescribe and monitor medication. A neurologist
may be consulted in order to rule out neurological conditions
causing your symptoms. Your doctor will gather information about
your past and present difficulties, medical history, current
psychological makeup, and educational and behavioral functioning.
Depending on your symptoms, your diagnosis may be categorized
as ADD, inattentive type ADD, or hyperactive/impulsive type
ADD. After your diagnosis you may learn that you are also
suffering from a learning disability, depression, or substance
abuse, which is often associated with ADD.
There is no cure for Attention Deficit Disorder. "Along with
increasing awareness of the problem, a better understanding of its
causes and treatment has developed" (Wender 3). There is
medication for ADD which will only alleviate the symptoms. The
medication will not permanently restore the chemical balance.
Approximately 70% of adults with ADD find that their symptoms
significantly improve after they take medication prescribed by
their doctors. The patient is able to concentrate on difficult and
time-consuming tasks, stop impulsive behavior , and tame the
restless twitches that have been experienced in the past. Some
ADD patients' psychological and behavioral problems are not
solved by medication alone and require additional therapy or
training.
There are two types of drugs that work to balance the
neurotransmitters and have been found to be most effective in
treating ADD. Stimulants are drugs that stimulate or activate brain
activity. Stimulants work by increasing the amount of dopamine
either produced in the brain or used by the frontal lobes of the
brain. There are several different stimulants that may work to
alleviate the symptoms of ADD, including methylphenidate
(Ritalin), dextroamphetamine (Dexedrine), and pemoline (Cylert).
Stimulants are by far the most effective medications in the
treatment of ADD. Some patients respond well to antidepressants.
Antidepressants also stimulate brain activity in the frontal lobes,
but they affect the production and use of other chemicals, usually
norepinephrine and serotonin. The antidepressants considered
most useful for ADD include imipramine (Tofranil), desipramine
(Norpramin), bupropion ( Wellbutrin), and fluoxetine
hydrochloride (Prozac).
All stimulants have the same set of side effects. Some
patients complain of feeling nauseous or headachy at the outset of
treatment, but find that these side effects pass within a few days.
Others find that their appetites are suppressed and/or that they
have difficulty sleeping. If the stimulant dosage is too high, the
patient may experience feelings of nervousness, agitation, and
anxiety. In rare cases, increased heart rate and high blood pressure
can result with the use of stimulants, especially if the patient has
an underlying predisposition toward hypertension.
Ritalin is the most widely prescribed drug used to treat ADD
in both children and adults. Ritalin appears to work by stimulating
the production of the neurotransmitter dopamine. The benefits of
Ritalin include improved concentration and reduced distractibility
and disorganization.
Dextroamphetamine is another stimulant medication that
appears to have a slightly different pharmacological action than
Ritalin. Both work to boost the amount of available dopamine.
Dextroamphetamine, however, blocks the reuptake of the
neurotransmitter while Ritalin increases its production (334 Kelly,
Ramundo, Press).
All the drugs used to treat ADD have the same goal: to
provide the brain with the raw materials it needs to concentrate
over a sustained period of time, control impulses, and regulate
motor activity. The drug or combination of drugs that works best
for you depends on the individual's brain chemistry and
constellation of symptoms. The process of finding the right drug
can be tricky for each individual. Physicians are not able to
accurately predict how any one individual will respond to various
doses or types of Attention Deficit Disorder medication.
Medication is rarely enough for the patient. Most Attention
Deficit Disorder patients require therapy to provide guidance. Adult
patients have the burden of the past that often hinders their
progress. The patient then needs help with the relief of
disappointment, frustration, and the nagging sense of self-doubt that
often weighs upon the ADD patient. Some ADD patients suffer
from low-grade depression or anxiety, others from a dependence on
alcohol or drugs, and most from low self-esteem and feelings of
helplessness.
Therapy also helps the ADD patient fully understand the
disorder and how it controls the patient's life. The knowledge of
ADD will make the patient and parents more capable of changing
the behaviors or circumstances disliked and enhancing strengths
and assets. A second and most crucial part of the education
process involves informing those around you about the disorder
and its effects. Family members, friends, employers, and
colleagues have been playing roles in the drama called ADD
without ever being aware of it. Explaining how the disorder may
affect the relationships around the patient will help repair any past
damage as well as pave the way to a stable future.
Attention Deficit Disorder is difficult for any family. ADD
challenges the relationships and the issues of daily family life.
Getting a family household to function smoothly is challenging for
any family, with or without the presence of ADD. Adults and
children suffering from Attention Deficit Disorder have trouble
establishing and maintaining physical order, coordinating
schedules and activities, and accepting and meeting
responsibilities. Parents with children suffering with ADD have to
learn how to deal with the obstacles that they will have while
raising their child.
Adults dealing with ADD often have chronic employment
problems, impulsive spending, and erratic bookkeeping and bill
paying. Raising healthy, well-adjusted children requires patience,
sound judgment, good humor, and discipline, which is difficult for
a parent with ADD. The presence of ADD often hinders the
development of intimate relationships for a variety of reasons.
Although many adults with ADD enjoy successful, satisfying
marriages, the disorder almost always adds a certain amount of
extra tension and pressure to the union. The non-ADD spouse
bears an additional burden of responsibility for keeping the
household running smoothly and meeting the needs of the
children, the spouse with ADD, and, if he or she has time, his or
her own priorities.
Parenting a child who has ADD can be an exhausting and, at
times, frustrating experience. Parents play a key role in managing
the disability. They usually need specialized training in behavior
management and benefit greatly from parent support groups.
Parents often find that approaches to parenting that work well with
children who do not have ADD, do not work as well with children
who have ADD.
Parents often feel helpless, frustrated and exhausted. Too
often, family members become angry and withdraw from each
other. If untreated, the situation only worsens. Parent training
can be one of the most important and effective interventions for a
child with ADD. Effective training will teach parents how to
apply strategies to manage their child's behavior and improve their
relationship with their child.
Without consistent structure and clearly defined expectations
and limits, children with ADD can become quite confused about
the behaviors that are expected of them.
Making and keeping friends is a difficult task for children
with ADD. A variety of behavioral excesses and deficits common
to these children get in the way of friendships. They may talk too
much, dominate activities, intrude in others' games, or quit a
game before it's done. They may be unable to pay attention to
what another child is saying, not respond when someone else tries
to initiate an activity, or exhibit inappropriate behavior.
I decided to write my research paper on Attention Deficit
Disorder because my four-year old step-brother has recently been
diagnosed with the disorder. I hope that my relationship with my
brother can become closer now that I have a better understanding
of what he is suffering from.
f:\12000 essays\sciences (985)\Biology\Aids And You.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AIDS and YOU
Introduction:
AIDS is a life and death issue. To have the AIDS disease
is at present a sentence of slow but inevitable death. I've
already lost one friend to AIDS. I may soon lose others. My own
sexual behavior and that of many of my friends has been
profoundly altered by it. In my part of the country, one man in
10 may already be carrying the AIDS virus. While the figures may
currently be less in much of the rest of the country, this is
changing rapidly. There currently is neither a cure, nor even an
effective treatment, and no vaccine either. But there are things
that have been PROVEN immensely effective in slowing the spread
of this hideously lethal disease. In this essay I hope to
present this information.
History and Overview:
AIDS stands for Acquired Immune Deficiency Syndrome. It is
caused by a virus.
The disease originated somewhere in Africa about 20 years
ago. There it first appeared as a mysterious ailment afflicting
primarily heterosexuals of both sexes. It probably was spread
especially fast by primarily female prostitutes there. AIDS has
already become a crisis of STAGGERING proportions in parts of
Africa. In Zaire, it is estimated that over twenty percent of
the adults currently carry the virus. That figure is increasing.
And what occurred there will, if no cure is found, most likely
occur here among heterosexual folks.
AIDS was first seen as a disease of gay males in this
country. This was a result of the fact that gay males in this
culture in the days before AIDS had an average of 200 to 400 new
sexual contacts per year. This figure was much higher than
common practice among heterosexual (straight) men or women. In
addition, it turned out that rectal sex was a particularly
effective way to transmit the disease, and rectal sex is a
common practice among gay males. For these reasons, the disease
spread in the gay male population of this country immensely more
quickly than in other populations. It came to be thought of as
a "gay disease". Because the disease is spread primarily by
exposure of one's blood to infected blood or semen, I.V. drug
addicts who shared needles also soon were identified as an
affected group. As the AIDS epidemic began to affect
increasingly large fractions of those two populations (gay males
and IV drug abusers), many of the rest of this society looked on
smugly, for both populations tended to be despised by the
"mainstream" of society here.
But AIDS is also spread by heterosexual sex. In addition,
it is spread by blood transfusions. New born babies can acquire
the disease from infected mothers during pregnancy. Gradually
more and more "mainstream" folks got the disease. Most recently,
a member of congress died of the disease. Finally, even the
national news media began to join in the task of educating the
public to the notion that AIDS can affect everyone.
Basic medical research began to provide a few bits of
information, and some help. The virus causing the disease was
isolated and identified. The AIDS virus turned out to be a very
unusual sort of virus. Its genetic material was not DNA, but
RNA. When it infected human cells, it had its RNA direct the
synthesis of viral DNA. While RNA viruses are not that uncommon,
very few RNA viruses reproduce by setting up the flow of
information from RNA to DNA. Such reverse or "retro" flow of
information does not occur at all in any DNA virus or any other
living things. Hence, the virus was said to belong to the rare
group of viruses called "retroviruses". Research provided the
means to test donated blood for the presence of the antibodies
to the virus, astronomically reducing the chance of one's getting
AIDS from a blood transfusion. This was one of the first real
breakthroughs. The same discoveries that allowed us to make our
blood bank blood supply far safer also allowed us to be able to
tell (in most cases) whether one has been exposed to the AIDS
virus using a simple blood test.
The Types of AIDS Infection:
When the AIDS virus gets into a person's body, the results
can be broken down into three general types of situations: AIDS
disease, ARC, and asymptomatic seropositive condition.
The AIDS disease is characterized by having one's immune
system devastated by the AIDS virus. One is said to have the
*disease* if one contracts particular varieties (Pneumocystis,
for example) of pneumonia, or one of several particular
varieties of otherwise rare cancers (Kaposi's Sarcoma, for
example). This *disease* is inevitably fatal. Death occurs often
after many weeks or months of expensive and painful hospital
care. Most folks with the disease can transmit it to others by
sexual contact or other exposure of an uninfected person's blood
to the blood or semen of the infected person.
There is also a condition referred to as ARC ("Aids
Related Complex"). In this situation, one is infected with the
AIDS virus and one's immune system is compromised, but not so
much so that one gets the (ultimately lethal) cancers or
pneumonias of the AIDS disease. One tends to be plagued by
frequent colds, enlarged lymph nodes, and the like. This
condition can go on for years. One is likely to be able to
infect others if one has ARC. Unfortunately, all those with ARC
are currently felt to eventually progress to the full-blown
AIDS disease.
There are, however, many folks who have NO obvious signs
of disease whatsoever, but when their blood serum is tested
they show positive evidence of having been exposed to the virus.
This is based on the fact that antibodies to the AIDS
virus are found in their blood. Such "asymptomatic but
seropositive" folks may or may not carry enough virus to be
infectious. Most sadly, though, current research and experience
with the disease would seem to indicate that EVENTUALLY nearly
all folks who are seropositive will develop the full-blown AIDS
disease. There is one ray of hope here: It may in some cases
take up to 15 years or more between one's becoming seropositive
for the AIDS virus and one's developing the disease. Thus, all
those millions (soon to be tens and hundreds of millions) who
are now seropositive for AIDS are under a sentence of death, but
a sentence that may not be carried out for one or two decades in
a significant fraction of cases. Medical research holds the
possibility of commuting that sentence, or reversing it.
There is one other fact that needs to be mentioned here
because it is highly significant in determining recommendations
for safe sexual conduct which will be discussed below:
Currently, it is felt that after exposure to the virus, most
folks will turn seropositive for it (develop a positive blood
test for it) within four months. It is currently felt that if
you are sexually exposed to a person with AIDS and do not become
seropositive within six months after that exposure, you will
never become seropositive as a result of that exposure.
Just to confuse the issue a little, there are a few folks
whose blood shows NO antibodies to the virus, but from whom live
virus has been cultured. Thus, if one is seronegative, it is not
absolute proof that one has not been exposed to the virus. This category of
folks is very hard to test for, and currently felt to be quite
rare. Some even speculate that such folks may be rare examples
of those who are immune to the effects of the virus, but this
remains speculation. It is not known if such folks can also
transmit the virus.
Transmission of AIDS:
The AIDS virus is extremely fragile, and is killed by
exposure to mild detergents or to household bleach, among other things.
AIDS itself may be transmitted by actual virus particles, or by
the transmission of living human CELLS that contain AIDS viral
DNA already grafted onto the human DNA. Or both. Which of these
two mechanisms is the main one is not known as I write this
essay. But the fact remains that it is VERY hard to catch AIDS
unless one engages in certain specific activities.
What will NOT transmit AIDS?
Casual contact (shaking hands, hugging, sharing tools)
cannot transmit AIDS. Although live virus has been recovered
from saliva of AIDS patients, the techniques used to do this
involved concentrating the virus to extents many thousands of
times greater than occurs in normal human contact, such as
kissing (including "deep" or "French" kissing). Thus, there
remains no solid evidence that even "deep" kissing can transmit
AIDS. Similarly, there is no evidence that sharing food or
eating utensils with an AIDS patient can transmit the virus. The
same is true for transmission by sneezing or coughing. There just
is no current evidence that the disease can be transmitted that
way. The same may be true even for BITING, though here there may be
some increased (though still remote) chance of transmitting the
disease.
The above is very important. It means that there is NO
medical reason WHATSOEVER to recommend that AIDS sufferers or
AIDS antibody-positive folks be quarantined. Such
recommendations are motivated either by ignorance or by sinister
desires to set up concentration camps. Combined with the fact
that the disease is already well established in this country,
the above also means that there is no rational medical basis for
immigration laws preventing visits by AIDS sufferers or
antibody-positive persons.
The above also means that friends and family and coworkers
of AIDS patients and seropositive persons have nothing to fear
from such casual contact. There is no reason to not show your
love or concern for a friend with AIDS by embracing the person.
Indeed, there appears still to be NO rational basis for
excluding AIDS sufferers from food preparation activity. Even if
an AIDS sufferer cuts his or her finger and bleeds into the salad
or soup, most of the cells and virus will die, in most cases,
before the food is consumed. In addition, it is extremely
difficult to be infected by AIDS through anything you
eat.
AIDS cannot be transmitted by the act of GIVING blood to a
blood bank. All equipment used for such blood donation is
sterile, and is used just once, and then discarded.
How is AIDS transmitted?
Sexual activity is one of the primary ways AIDS is
transmitted. AIDS is passed on particularly when the blood or
semen of an infected person comes into contact with the blood of
an uninfected person. Sex involving penetration of the
penis into either the vagina of a woman or the rectum of either
a woman or a man has a very high risk of transmitting the
disease. It is felt to be about four times MORE likely for an
infected male to transmit AIDS to an uninfected woman in the
course of vaginal sex than for an infected woman to
transmit AIDS to an uninfected male. This probably relates to
the greater area of moist tissue in a woman's vagina, and to the
relative likelihood of microscopic tears occurring in that tissue
during sex. But the bottom line is that AIDS can be transmitted
in EITHER direction in the case of heterosexual sex.
Transmission among lesbians (homosexual females) is rare.
Oral sex is an extremely common form of sexual activity
among both gay and straight folks. Such activity involves
contact of infected semen or vaginal secretions with the mouth,
esophagus (the tube that connects the mouth with the stomach)
and the stomach. AIDS virus and infected cells most certainly
cannot survive the acid environment of the stomach. Yet, it is
still felt that there is a chance of catching the disease by
having oral sex with an infected person. The chance is probably
a lot smaller than in the case of vaginal or rectal sex, but is
still felt to be significant.
As mentioned above, AIDS is also transmitted among
intravenous drug users by the sharing of needles. Self-righteous
attitudes by the political "leaders" of this country at local,
state, and national levels have repeatedly prevented the very
rational approach of providing free access to sterile
intravenous equipment for IV drug users. This measure, when
taken promptly in Amsterdam, was proven to greatly and
SIGNIFICANTLY slow the spread of the virus in that population.
The best that rational medical workers have succeeded in doing
here in San Francisco is to distribute educational leaflets and
cartoons to the I.V. drug abusing population instructing them in
the necessity of their rinsing their "works" with household bleach before
reusing the same needle in another person. Note that even if you
don't care what happens to I.V. drug abusers, the increase in the
number of folks carrying the virus ultimately endangers ALL
living persons. Thus, the issue is NOT what you morally think of
I.V. drug addicts, but one of what is the most rational way to
slow the spread of AIDS in all populations.
Testing of donated blood for AIDS has massively reduced the
chance of catching AIDS from blood transfusions. But a very
small risk still remains. To further reduce that risk, efforts
have been made to use "autotransfusions" in cases of "elective
surgery" (surgery that can be planned months in advance).
Autotransfusion involves the patient storing their own blood a
couple of weeks prior to their own surgery, to be used during
the surgery if needed. Similarly, setting up donations of blood
from friends and family known to be antibody negative and at low
risk for AIDS prior to scheduled surgery can further decrease
the already small risks from transfusion.
AIDS and SEX: What are the rational options?
The "sexual revolution" of the 1960's has been stopped
dead in its tracks by the AIDS epidemic. The danger of
contracting AIDS is so real now that it has massively affected
the behavior of both gay and straight folks who formerly had
elected to lead an active sexual life that included numerous new
sexual contacts.
Abstinence
The safest option regarding AIDS and sex is total
abstinence from all sexual contact. For those who prefer to
indulge in sexual contact, this is often far too great a
sacrifice. But it IS an option to be considered.
Safe Sex
For those who wish to have sexual contact with folks on a
relatively casual basis, there have been devised rules for "safe
sex". These rules are very strict, and will be found quite
objectionable by most of us who have previously enjoyed
unrestricted sex. But to violate these rules is to risk
unusually horrible death. Once one gets used to them, though, the
rules for "safe sex" do allow for quite acceptable sexual
enjoyment in most cases.
For those who wish to indulge in penetration of the vagina
or rectum by a penis: The penis MUST be sheathed in a condom or
"rubber". This must be done "religiously", and NO exceptions are
allowed. A condom must be used by a man even when he is
receiving oral sex. Cunnilingus (oral stimulation of a woman's
genitals by the mouth of a lover) is NOT considered to be safe
sex. Safe sex includes mutual masturbation and the stimulation
of one's genitals by another's hand (provided there are no cuts in
the skin on that hand). But manual stimulation of another's
genitals is NOT safe if one has cuts on one's hands, unless one
is wearing a glove.
Note that even when one is conscientiously following the
recommendations for safe sex, accidents can happen. Condoms can
break. One may have small cuts or tears in one's skin that one is
unaware of. Thus, following rules for "safe sex" does NOT
guarantee that one will not get AIDS. It does, however, greatly
reduce the chances. There are many examples of sexually active
couples where one member has AIDS disease and the other remains
seronegative even after many months of safe sex with the
diseased person. It is particularly encouraging to note that,
due to education programs among San Francisco gay males, the
incidence of new cases of AIDS infection among that high risk
group has dropped massively. Between practice of safe sex and a
significant reduction in the number of casual sexual contacts,
the spread of AIDS is being massively slowed in that group.
Similar responsible action MUST be taken by straight folks to
further slow the spread of AIDS, to give our researchers time to
find the means to fight it.
Monogamy
For those who would have sexual activity, the safest
approach in this age of AIDS is monogamous sex. Specifically,
both parties in a couple must commit themselves to not having
sex with anyone else. At that time they should take AIDS
antibody tests. If the tests are negative for both, they must
practice safe sex until more than six months have passed since
either member last had sexual contact with anyone else.
At that time the AIDS blood test is repeated. If both tests
remain negative six months after one's last sexual contact with
any other party, current feeling is that it is now safe to have
"unprotected" sex. Note that this approach is recommended
especially for those who wish to have children, to prevent the
chance of a child being born infected with AIDS, acquired
from an infected mother. Note also that this approach can be
used by groups of three or more people, but it must be adhered
to VERY strictly.
What to AVOID:
Unscrupulous folks have begun to sell the idea that one
should pay to take an AIDS antibody test, then carry an ID card
that certifies one as AIDS antibody negative, as a ticket to
being acceptable in a singles bar. This is criminal greed and
stupidity. First, one can turn antibody positive at any time.
Even WEEKLY testing will not pick this change up soon enough to
prevent folks certified as "negative" from turning positive
between tests. Much worse, such cards are either directly or
implicitly promoted as a SUBSTITUTE for "safe sex" practices.
This can only hasten the spread of the disease.
If you want to learn your antibody status, be sure to do
so ANONYMOUSLY. Do NOT get the test done by any agency that
requires your real name, address, or any other identifying
information. Fortunately, in San Francisco, there is a public
place to get AIDS antibody testing where you may identify
yourself only as a number. Though that place has a three-month-long
waiting list for testing, there are other private clinics where
one may have the test done for cash, and may leave any false
name one wishes. The reason I suggest this is that currently
there are some very inappropriate reactions by government and
business to folks known to be antibody positive. Protect
yourself from such potential persecution by preventing your
antibody status from being a matter of record. That information
is for you, your lover(s), and (if need be) your physician. And
for NO one else.
There currently is NO treatment for AIDS (this includes
AZT) that shows significant promise.
In Conclusion:
It is my own strongly held view, and that of the medical
and research community world wide, that the AIDS epidemic is a
serious problem, with the potential to become the worst plague
this species has ever known. This is SERIOUS business. VASTLY
greater sums should be spent on searching for treatments and
vaccines. On the other hand, we feel strongly that this is
"merely" a disease, not an act by a supernatural power. And
while it does not seem likely we will find either a cure or a
vaccine in the foreseeable future, it may be that truly effective
treatments that can indefinitely prolong the life of AIDS
victims may be found in the next few years. When science and
technology do finally fully conquer AIDS, we can go back to
deciding what sort and how much sex to have with whoever we
choose on the basis of our own personal choice, and not by the
coercion of a speck of proteins and RNA. May that time come
soon. In the meantime, we must all do what we can to slow the
spread of this killer.
f:\12000 essays\sciences (985)\Biology\AIDS Aquired Immune Deficiency Syndrome.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AIDS:
Acquired Immune Deficiency Syndrome
I am doing a report on AIDS. I don't know
much about AIDS, but I will tell you what I know. I
know that it is transmitted by sexual contact, blood,
needles, and from mother to child during or before birth.
I also know it affects the immune system directly. It
is caused by the virus HIV, and there is currently no
cure for either AIDS or HIV, but serious research is
being done on both.
I call it the "Generation X Disease" because it
mostly affects my sex-oriented generation. It is
currently one of the leading causes of death in the
USA and the death rate is increasing drastically! It
is the perfect virus of the nineties because how it is
transmitted is like what the nineties are about... sex
and drugs (what a great generation, huh). Everyone's
doing drugs and having pre-marital sex at a young,
irresponsible age. It's getting so bad I am beginning
to believe that I am the only normal person left.
You used to be able to get an HIV test only at
your doctor's, but now there are home tests that are
confidential. You go to the store, pick one up, prick
your finger, then mail in the blood sample. In about
two or three weeks you call in, enter your special PIN
number, and they give you your results. I think this is
cool, but there have to be some drawbacks, like samples
getting messed up or mixed up in the mail, or other
mistakes like that, such as if you really don't have the
disease but you get the reading of someone else who does
have HIV.
That's about all I know about this horrible
disease, so I'll move on to what I found when I
researched this topic. I went on the Internet to find
some of my information, and I used different books
the librarian recommended I read on AIDS and HIV.
AIDS appears to be constantly changing its
genetic structure, which makes it very hard to find a
cure for it, and very hard for the body to make
antibodies. This makes development of a vaccine
that is able to raise protective antibodies to all virus
strains a difficult task. I also found out that they
have made a lot of progress toward finding a cure
because they know so much about it now.
The only known chemical that is effective in
reducing reactions and symptoms is the drug
zidovudine, which was formerly called
azidothymidine (AZT) and was developed in
1987. It is indicated that few if any are likely to
survive the virus in the long run.
The AIDS epidemic is having a major impact
on many aspects of medicine and health care. The
U.S. Public Health Service estimated that the
cumulative lifetime cost of treating all persons with
AIDS in 1991 would be $5.3 billion, and this was
expected to reach $7 billion. People exposed to HIV are
having a lot of difficulty obtaining adequate health
insurance coverage. I found that yearly AZT
expenses, for example, can average around six
thousand dollars, although in 1989 the drug's
maker did offer to distribute AZT freely to HIV-
infected children. The yearly cost for DDI is around
two thousand dollars.
It is a big problem when legal action
might have to be taken. Pretty soon mandatory
testing for the disease will be a necessity, not a right,
and most people are too scared or too embarrassed to
ask their partner if they are infected.
The first case of AIDS was identified in 1979
in New York. Workers at the National Cancer
Institute developed tests for AIDS, enabling them
to follow the transmission of the disease and to
study its origin and mechanism.
It is thought that AIDS originated in
Africa, because the virus is known to infect some African
monkeys and a lot of cases have been reported there.
In 1990 the WHO (World Health Organization)
announced that there were 203,599 reported cases of
AIDS; by 1996 the number was in the millions! This shows
how fast this disease spreads and how it affects
everyone.
f:\12000 essays\sciences (985)\Biology\AIDS HIV.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
Being one of the most fatal viruses in the nation, AIDS (Acquired
Immunodeficiency Syndrome) is now a serious public health concern in most major
U.S. cities and in countries worldwide. Since 1986 there have been impressive
advances in understanding of the AIDS virus, its mechanisms, and its routes of
transmission. Even though researchers have put in countless hours and millions of
dollars, this has not led to a drug that can cure infection with the virus or to a vaccine
that can prevent it. With AIDS being a leading cause of death among adults,
individuals are now taking more precautions with sexual intercourse, and medical
facilities are screening blood more thoroughly. Even though HIV (Human
Immunodeficiency Virus) can be transmitted through sharing of non-sterile needles
and syringes, sexual intercourse, blood transfusion, and through most bodily fluids,
it is not transmitted through casual contact or by biting or blood-sucking insects.
Development of the AIDS Epidemic
The first cases of AIDS were reported in 1982, and epidemiologists at the Centers for
Disease Control immediately began tracking the disease backward in time as well
as forward. They determined that the first cases of AIDS in the United States
probably occurred in 1977.
By early 1982, 15 states, the District of Columbia, and 2 foreign countries
had reports of AIDS cases, however the total remained low: 158 men and 1 woman.
Surprisingly enough, more than 90 percent of the men were homosexual or bisexual.
Overall, more than 70 percent of AIDS victims are homosexual or bisexual
men, and less than 5 percent are heterosexual adults. Amazingly enough, by
December of 1983 there were 3,000 cases of AIDS that had been reported in adults
from 42 states, the District of Columbia, and Puerto Rico, and the disease had been
recognized in 20 other countries.
Recognizing the Extent of Infection
The health of the general homosexual population in the areas with the
largest numbers of cases of the new disease began to be examined much more closely by
researchers. For many years physicians had known that homosexual men who reported
large numbers of sexual partners had more episodes of venereal diseases and were
at higher risk of hepatitis B virus infection than the rest of the population, but
coincidentally with the appearance of AIDS, other debilitating problems began to
appear more frequently. The most common was swollen glands, often accompanied
by extreme fatigue, weight loss, fever, chronic diarrhea, decreased levels of blood
platelets, and fungal infections in the mouth. This condition was labeled ARC (AIDS-
Related Complex).
The isolation of HIV in 1983 and 1984 and the development of techniques to
produce large quantities of the virus paved the way for a battery of tests to
determine the relationship between AIDS and ARC and the magnitude of the
carrier problem. Using several different laboratory tests, scientists looked for
antibodies against HIV in the blood of AIDS and ARC patients. They found that
almost 100 percent of those with AIDS or ARC had the antibodies; they were
seropositive. In contrast, less than one percent of persons with no known risk factors
were seropositive.
Definition of AIDS
AIDS is defined as a disease, at least moderately predictive of defects in cell-
mediated immunity, occurring in a person with no known cause for diminished
resistance to that disease. Such diseases include Kaposi's Sarcoma, Pneumocystis
carinii pneumonia, and other serious opportunistic infections. After the discovery of
HIV and the development of HIV-antibody test, the case definition of AIDS was
updated to reflect the role of the virus in causing AIDS, but the scope of the
definition remained almost the same.
Transmission
HIV is primarily a sexually transmitted disease; it is transmitted by both
homosexual and bisexual activity and by heterosexual activity. The first recognized cases were
among homosexual and bisexual men. Numerous studies have shown that
men who have many sexual partners and those who practice receptive anal intercourse
are more likely to be infected with HIV than other homosexual men. Researchers
found a strong connection between HIV infection and rectal trauma, enemas before
sex, and physical signs of disruption of the tissue lining the rectum.
Homosexual women tend to have a very low incidence of venereal disease in
general, and AIDS is no exception. Female-to-female transmission is highly
uncommon, however it has been reported in one case and suggested in another. In
the reported case, traumatic sex practices apparently resulted in transmission of HIV
from a woman who had acquired the virus through IV drug abuse to her non-drug-
using sexual partner.
The first heterosexual (male-to-female and female-to-male) transmission
was reported in 1983. In 1985, 1.7 percent of the adult cases of AIDS reported
to the CDC (Center for Disease Control) were acquired through heterosexual
activity; projections suggest that by 1991 the proportion will rise to 5 percent.
Heterosexual contact is the only transmission category in which women outnumber
men with AIDS. Heterosexual contact accounts for 29 percent of AIDS cases
among women in the United States, but for only 2 percent of cases among men.
Estimates of the risk of HIV transmission in unprotected intercourse with a person
known to be infected with HIV are 1 in 500 for a single sexual encounter and 2 in 3
for 500 sexual encounters. The use of a condom reduces these odds to 1 in 5,000
for a single encounter and to 1 in 11 for 500 encounters.
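The cumulative figures quoted above follow from simple compounding of the single-encounter risk. The short calculation below is only a sketch; it assumes, purely for illustration, that each encounter is an independent event with a fixed per-encounter probability, which the underlying studies may not assume.

# Sketch: how the per-encounter risks quoted above compound over 500
# encounters, assuming (for illustration only) that encounters are
# independent events with a fixed per-encounter probability.

def cumulative_risk(per_encounter_risk, encounters):
    """Probability of at least one transmission over repeated encounters."""
    return 1.0 - (1.0 - per_encounter_risk) ** encounters

print(cumulative_risk(1 / 500, 500))    # about 0.63, close to the quoted 2 in 3
print(cumulative_risk(1 / 5000, 500))   # about 0.10, close to the quoted 1 in 11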
Routes NOT Involved in Transmission of HIV
A study of more than 400 family members of adult and pediatric AIDS
patients demonstrated that the virus is not transmitted by any daily activity related to
living with or caring for an AIDS patient. This basically means that personal
interactions typical in family relationships, such as kissing on the cheek, kissing on
the lips, and hugging, have not resulted in transmission of the virus.
Patterns
There are three different geographic patterns of AIDS transmission. The first
one is characteristic of industrialized nations with large numbers of reported AIDS
cases, such as the United States, Canada, countries in Western Europe, Australia,
New Zealand, and parts of Latin America. In these areas most AIDS cases have been
attributed to homosexual or bisexual activity and intravenous drug abuse. The
second pattern is seen in areas of central, eastern, and southern Africa and in some
Caribbean countries. Unlike pattern one most AIDS cases in these areas occur
among heterosexuals, and the male-to-female ratio approaches 1 to 1. The third
pattern of transmission occurs in regions of Eastern Europe, the Middle East, Asia,
and most of the Pacific. It is believed that HIV was introduced to these areas in the
early to mid-1980s.
Any study associated with AIDS must begin with the understanding that AIDS
is only one outcome of infection with HIV-1. People infected with the virus may be
completely asymptomatic; they may have mildly debilitating symptoms; or they may
have life-threatening conditions caused by progressive destruction of the immune
system, the brain, or both.
One of the first signs of HIV-1 infection in some patients is an acute flu-like
disease. The condition lasts from a few days to several weeks and is associated with
fever, sweats, exhaustion, loss of appetite, nausea, headaches, sore throat, diarrhea,
swollen glands, and a rash on the torso.
Some of the symptoms of the acute illness may result from HIV-1 invasion of
the central nervous system. In some cases the clinical findings have correlated with
the presence of HIV-1 in the cerebrospinal fluid. Symptoms disappear along with
the rash and other signs of acute viral disease. When the blood test for HIV-1
antibodies became available, researchers demonstrated that lymphadenopathy was a
frequent consequence of infection with the virus. Scientists do not know what causes
the wasting syndrome, but some experts believe that it might result from the
abnormal regulation of proteins called monokines.
Between 5 and 10 percent of patients with AIDS and HIV-related conditions
have bouts of acute aseptic meningitis. About two-thirds of AIDS patients have a
degenerative brain disease called subacute encephalitis. HIV infection has also
been associated with degeneration of the spinal cord and abnormalities of the
peripheral nervous system. Symptoms include progressive loss of coordination and
weakness. Involvement of the peripheral nervous system may result in shooting
pains in the limbs or in numbness and partial paralysis.
HIV destroys the body's defense capabilities, opening the body to whatever
disease-producing agents are present in the environment. The diagnosis of
secondary infection in AIDS patients and others with HIV infection is complicated
because some of the standard diagnostic tests may not work. Often such tests detect
the immune response to a disease-producing microorganism rather than the
organism itself.
The most common life threatening opportunistic infection in AIDS patients is
Pneumocystis carinii pneumonia, a parasitic infection previously seen almost
exclusively in cancer and transplant patients receiving immunosuppressive drugs.
The first signs of disorder are moderate to severe difficulty in breathing, dry cough,
and fever.
Infection
Infection with HIV is a 2-step process consisting of binding and fusion. The
larger protein, glycoprotein 120, is responsible for the binding activity. Its target is a
receptor molecule called CD4, found on the surface of some human cells. The tight
complex formed by glycoprotein 120 and the CD4 receptor brings the viral envelope
very close to the membrane of the target cell. This allows the smaller envelope protein,
glycoprotein 41, to initiate a fusion reaction. The envelope of the virus actually fuses
with the cell membrane, allowing the viral core direct access to the inner
mechanisms of the human cell. Once the viral core is inside the cell, the viral RNA
genome is reverse transcribed into DNA and then integrated into the genome of the
host cell.
Cells infected with HIV carry envelope proteins lodged in their membrane.
These cell-bound proteins can bind to CD4 receptors on uninfected cells. Fusion of
the two cell membranes allows partially formed viral particles to move from the
infected cell to the uninfected cell. Thus, HIV theoretically could spread through
the body without leaving host cells.
Cell Death
HIV infects many different cell types, but it preferentially kills the T4
lymphocyte. There have been suggestions that T4 cells are more vulnerable to HIV-
induced cell death than other cells because they have a higher concentration of
CD4 receptors. There is speculation that cell death occurs when viral envelope
proteins lodged in the membrane of an infected cell bind to CD4 receptors
embedded in the same membrane. Multiple self fusion reactions could destabilize
the cell membrane and kill the cell.
The massive depletion of T4 cells involves the cell-to-cell fusion reaction
described above. A single infected cell with a high concentration of viral envelope
proteins on its surface can bind to hundreds of uninfected T4 cells. The fused cells
form giant, multinucleated structures called syncytia, which are extremely unstable
and die within a day. One cell with a productive viral infection can cause the death
of up to 500 normal cells. Cell death might be related to the presence of free-
floating viral envelope proteins in the bloodstream. These could bind to uninfected
T4 cells, leading to their elimination by the immune system. Other autoimmune
mechanisms also may play a role in T-cell depletion.
HIV infection also may directly or indirectly suppress the production of new
T4 cells. Direct suppression would occur if HIV damaged T precursor cells in the
bone marrow. Indirect suppression would result if HIV interfered with the
production of specific growth factors. On the other hand, infected cells may secrete
a toxin that shortens the lifespan of T4 cells or other cells required for their survival.
Immune System
The immune response to HIV infection does not appear to halt the
progression of disease. Part of the explanation for this failure probably relates to the
structure of the envelope proteins. The most effective way to stop HIV infection
would be to block the binding reaction between the glycoprotein120 and the CD4
receptor. However, antibodies from infected patients rarely do this. Scientists
speculate that 2 or 3 regions of the glycoprotein120 molecule involved in the
binding reaction may form a recessed pocket. The inability of antibodies to get
inside such a pocket could explain the lack of protective immune response.
The envelope proteins also are heavily coated with sugar residues. The
human immune system does not recognize the sugar residues as foreign because
they are products of the host cell rather than the virus. The sugar residues form a
protective barrier around sections of the glycoprotein120 that might otherwise elicit
a strong immune response.
Regulatory Genes
Recent studies indicate that HIV's unusual regulatory genes
contribute to its ability to evade the immune system. In the simplest retroviruses
the replication rate is controlled by interactions between the host cell and elements
in the viral LTR. The virus itself has no way of regulating when, where, or how much
virus is produced. In contrast, the human immunodeficiency viruses have elaborate
regulatory control mechanisms in the form of specific genes. Some of the genes
permit explosive replication; others appear to inhibit production of virus.
Mechanisms that suppress the production of certain viral proteins, such as the
envelope proteins, may allow HIV to hide inside infected cells for long periods
without eliciting antibodies or other host immune responses.
Conclusion
As stated above in the last few pages, AIDS is a leading cause of death in
homosexual and bisexual adult men. However, those statistics were from 1986; 11
years later the disease has grown further, not just among homosexual and bisexual men but also
through heterosexual intercourse. At this point in time there is no cure, nor is
there a vaccine. However, there are ways to prevent HIV; some of those ways
are abstinence, condoms, and not sharing needles used for IV drugs. Public concern is
higher than it was 10 years ago, but that is because people are starting to realize that
not everyone is immune to it; as of right now the only ones thought to be immune to the HIV virus
are baboons.
f:\12000 essays\sciences (985)\Biology\Air Pollution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CFish
Mr. Nollen
Biology 2B
8 May, 1996
Air Pollution
The Problem
Air pollution is the contamination of the atmosphere by gaseous, liquid, or solid wastes
or by-products that can endanger human health and the health and welfare
of plants and animals, or can attack materials, reduce visibility, or produce
undesirable odors. Among air pollutants emitted by natural sources, only
the radioactive gas radon is recognized as a major health threat. A
byproduct of the radioactive decay of uranium minerals in certain kinds of
rock, radon seeps into the basements of homes built on these rocks.
According to recent estimates by the U.S. government, 20 percent of the
homes in the U.S. harbor radon concentrations that are high enough to pose
a risk of lung cancer.
Each year industrially developed countries generate billions of tons of
pollutants. The level is usually given in terms of atmospheric
concentrations or, for gases, in terms of parts per million, that is, the number of
pollutant molecules per million air molecules. Many come from directly
identifiable sources; sulfur dioxide, for example, comes from electric power
plants burning coal or oil. Others are formed through the action of sunlight
on previously emitted reactive materials. For example, ozone, a dangerous
pollutant in smog, is produced by the interaction of hydrocarbons and
nitrogen oxides under the influence of sunlight. Ozone has also caused
serious crop damage. On the other hand, the discovery in the 1980s that air
pollutants such as fluorocarbons are causing a loss of ozone from the earth's
protective ozone layer has caused the phasing out of these materials.
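As a side note on the parts-per-million figures used above, a gas concentration in ppm can be converted to a mass concentration. The sketch below assumes standard conditions of about 25 degrees C and 1 atmosphere, where one mole of gas occupies roughly 24.45 liters; these constants and the sulfur dioxide example are illustrative assumptions, not figures from this report.

# Sketch: convert a gas concentration in parts per million (ppm) to
# milligrams per cubic meter. Assumes 25 C and 1 atm, where one mole of
# an ideal gas occupies about 24.45 liters (an assumed constant).

MOLAR_VOLUME_L_PER_MOL = 24.45

def ppm_to_mg_per_m3(ppm, molar_mass_g_per_mol):
    return ppm * molar_mass_g_per_mol / MOLAR_VOLUME_L_PER_MOL

# Example: sulfur dioxide (SO2, molar mass about 64 g/mol) at 0.1 ppm
print(ppm_to_mg_per_m3(0.1, 64.07))   # roughly 0.26 mg per cubic meter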
Current information about the problem
The tall smokestacks used by industries and utilities do not remove
pollutants but simply boost them higher into the atmosphere, thereby
reducing their concentration at the site. These pollutants may then be
transported over large distances and produce adverse effects in areas far
from the site of the original emission. Sulfur dioxide and nitrogen oxide
emissions from the central and eastern U.S. are causing acid rain in New
York State, New England, and eastern Canada. The pH level, or relative
acidity, of many freshwater lakes in that region has been altered so
dramatically by this rain that entire fish populations have been destroyed.
Similar effects have been observed in Europe. Sulfur dioxide emissions and
the subsequent formation of sulfuric acid can also be responsible for the
attack on limestone and marble at large distances from the source.
The worldwide increase in the burning of coal and oil since the late
1940s has led to ever increasing concentrations of carbon dioxide. The
resulting "greenhouse effect", which allows solar energy to enter the
atmosphere but reduces the re-emission of infrared radiation from the earth,
could conceivably lead to a warming trend that might affect the global
climate and lead to a partial melting of the polar ice caps. Possibly an
increase in cloud cover or absorption of excess carbon dioxide by the
oceans
would check the greenhouse effect before it reached the stage of polar
melting. Nevertheless, research reports released in the U.S. in the 1980s
indicate that the greenhouse effect is definitely under way and that the
nations of the world should be taking immediate steps to deal with it.
History
In the U.S. the Clean Air Act of 1967 as amended in 1970, 1977, and
1990 is the legal basis for air-pollution control throughout the U.S. The
Environmental Protection Agency has primary responsibility for carrying
out the requirements of the act, which specifies that air-quality standards be
established for hazardous substances. These standards are in the form of
concentration levels that are believed to be low enough to protect public
health. Source emission standards are also specified to limit the discharge
of pollutants into the air so that air-quality standards will be achieved. The
act was also designed to prevent significant deterioration of air quality in
areas where the air is currently cleaner than the standards require. The
amendments of 1990 identify ozone, carbon monoxide, particulate matter,
acid rain, and air toxins as major air pollution problems. On the
international scene, 49 countries agreed in March 1985 on a United Nations
convention to protect the ozone layer. The resulting "Montreal Protocol," negotiated in 1987 and
renegotiated in 1990, calls for the phaseout of certain chlorocarbons and
fluorocarbons by the year 2000 and provides aid to developing countries in
making this transition.
f:\12000 essays\sciences (985)\Biology\Albinism.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Albinism
The word "albinism" refers to a group of inherited conditions. People with albinism
have little or no pigment in their eyes, skin, or hair. They have inherited genes that do not
make the usual amounts of a pigment called melanin. One person in 17,000 has some type
of albinism. Albinism affects people from all races. Most children with albinism are born to
parents who have normal hair and eye color for their ethnic backgrounds. A gene involved in
albinism is found on chromosome 11, region q, loci 14-21.
Oculocutaneous albinism involves the eyes, hair, and skin. Ocular albinism involves
primarily the eye. People with ocular albinism may have slight lightening of hair and skin
colors as well, compared to other family members. At present researchers have found 10
different types of oculocutaneous albinism, and five types of ocular albinism. Newer
laboratory research studying DNA has shown that there are numerous types of changes in
the genes of those with albinism, including within families.
The most common types of oculocutaneous albinism are called "ty-negative" and
"ty-positive". Persons with ty-negative albinism have no melanin pigmentation, and more
difficulty with vision. Those with ty-positive albinism have very slight pigmentation, and
generally less severe visual difficulties. Tests were once done on the hair roots of individuals
with albinism to tell these types of albinism apart. However, these hair tests cannot
reliably identify the type of albinism, particularly in young children, whose pigment
systems are immature. Therefore hair tests are not helpful in predicting the extent of a
child's visual disability.
"Ty-Neg" (also called Type 1A) albinism results from a genetic defect in an
enzyme called tyrosinase. Tyrosinase helps the body to change the amino acid tyrosine into
pigment. The genetic defect that causes albinism in other types of albinism is unknown,
but it is speculated that it involves other enzymes used to make pigment.
Albinism is passed from parents to their children through genes. For nearly all
types of albinism both parents must carry an albinism gene to have a child with albinism.
Parents may have normal pigmentation but still carry the gene. When both parents carry
the gene, and neither parent has albinism, there is a one in four chance at each pregnancy
that the baby will be born with albinism. This type of inheritance is called autosomal
recessive inheritance.
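The one-in-four figure, and the incidence of about one person in 17,000 mentioned earlier, can be tied together with a short calculation. The sketch below assumes a single recessive gene in Hardy-Weinberg proportions, a simplification this essay does not itself make, so the numbers are illustrative only.

# Sketch: recessive-inheritance arithmetic for albinism, assuming a single
# autosomal recessive gene in Hardy-Weinberg proportions (an illustrative
# simplification; several different genes are actually involved).

import math

incidence = 1 / 17000                  # affected individuals, as quoted above
q = math.sqrt(incidence)               # estimated frequency of the albinism allele
carrier_frequency = 2 * (1 - q) * q    # unaffected carriers in the population

print(round(q, 4))                     # about 0.0077
print(round(1 / carrier_frequency))    # roughly 1 carrier in every 66 people

# Two carrier parents: each child has a 1/2 * 1/2 = 1/4 chance of inheriting
# both recessive copies, matching the one-in-four figure in the text.
print(0.5 * 0.5)                       # 0.25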
If a parent has a child with albinism, it means the parent must carry the albinism
gene. Until recently, unless a person had albinism or had a child with albinism, there was no
way of knowing whether he or she carried the gene for albinism. Recently a test has been
developed to identify carriers of the gene for ty-negative albinism and for other types in
which the tyrosinase enzyme does not function. The test uses a sample of blood to identify
the gene for the tyrosinase enzyme by its DNA code. A similar test can identify ty-
negative or similar albinism in unborn babies, by amniocentesis.
People with albinism lead largely normal lives. They play sports, have normal
intelligence, and can have babies. The main difference between people with albinism and
others is that they have little or no pigment in their skin, hair, and eyes.
f:\12000 essays\sciences (985)\Biology\Alzheimers Disease.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Alzheimer's Disease is a progressive and irreversible brain disease that destroys mental
and physical functioning in human beings, and invariably leads to death. It is the fourth
leading cause of adult death in the United States. Alzheimer's creates emotional and
financial catastrophe for many American families every year, but fortunately, a large
amount of progress is being made to combat Alzheimer's disease every year. To fully be able
to comprehend and combat Alzheimer's disease, one must know what it does to the brain,
the part of the human body it most greatly affects. Many Alzheimer's disease sufferers had
their brains examined. A large number of differences were present when comparing the
normal brain to the Alzheimer's brain. There was a loss of nerve cells from the Cerebral
Cortex in the Alzheimer's victim. Approximately ten percent of the neurons in this region
were lost. But a ten percent loss is relatively minor, and cannot account for the severe
impairment suffered by Alzheimer's victims. Neurofibrillary Tangles are also found in the
brains of Alzheimer's victims. They are found within the cell bodies of nerve cells in the
cerebral cortex, and take on the structure of a paired helix. Other diseases that have
"paired helixes" include Parkinson's disease, Down's Syndrome, and Dementia Pugilistica.
Scientists are not sure how the paired helixes are related in these very different
diseases. Neuritic Plaques are patches of clumped material lying outside the bodies of
nerve cells in the brain. They are mainly found in the cerebral cortex, but have also
been seen in other areas of the brain. At the core of each of these plaques is a substance
called amyloid, an abnormal protein not usually found in the brain. This amyloid core is
surrounded by cast off fragments of dead or dying nerve cells. The cell fragments include
dying mitochondria, presynaptic terminals, and paired helical filaments identical to
those that are neurofibrillary tangles. Many neuropathologists think that these plaques
are basically clusters of degenerating nerve cells. But they are still not sure of how and
why these fragments clustered together. Congophilic Angiopathy is the technical name that
neuropathologists have given to an abnormality found in the walls of blood vessels in the
brains of victims of Alzheimer's disease. These abnormal patches are similar to the
neuritic plaques that develop in Alzheimer's disease, in that amyloid has been found
within the blood-vessel walls wherever the patches occur. Another name for these patches
is cerebrovascular amyloid, meaning amyloid found in the blood vessels of the brain.
Acetylcholine is a substance that carries signals from one nerve cell to another. It is
known to be important to learning and memory. In the mid 1970s, scientists found that the
brains of those afflicted with Alzheimer's disease contained sixty to ninety percent less
of the enzyme choline acetyltransferase (CAT), which is responsible for producing
acetylcholine, than did the brains of healthy persons. This was a great milestone, as it
was the first functional change related to learning and memory, and not to different
structures. Somatostatin is another means by which cells in the brain communicate with each
other. The quantities of this chemical messenger, like those of CAT, are also greatly
decreased in the cerebral cortex and the hippocampus of persons with Alzheimer's disease,
almost to the same degree as CAT is lost. Although scientists have been able to identify
many of these, and other changes, they are not yet sure as to how, or why they take
place in Alzheimer's disease. One could say, that they have most of the pieces of the
puzzle; all that is left to do is find the missing piece and decipher the meaning. If
treatment is required for someone with Alzheimer's disease, the Alzheimer's Disease
and Related Disorders Association (ADRDA), a privately funded, national, non-profit
organization dedicated to easing the burden of Alzheimer's victims and their families and to
finding a cure, can be contacted. There are more than one hundred and sixty chapters
throughout the country, and over one thousand support groups that can be contacted for
help. ADRDA fights Alzheimer's on five fronts: (1) funding research; (2) educating and thus
increasing public awareness; (3) establishing chapters with support groups; (4) encouraging
federal and local legislation to help victims and their families; and (5) providing a service to
help victims and their families find the proper care they need.
f:\12000 essays\sciences (985)\Biology\Americas Zoos Entertainment to Conservation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
America's Zoos: Entertainment to Conservation
The children run ahead, squealing with delight. Their parents lag behind holding the children's brightly colored balloons and carrying the remnants of the half-eaten cotton candy. The family stops to let the children ride the minitrain and take pictures together under the tree. They walk hand-in-hand toward the exit, stopping first at the gift shop where they each splurge on a treat to remind them of the day's adventure. Although this may sound like a typical scene from the local amusement park, it's actually the city zoo. All but forgotten was the walk from cage to cage, watching the anxious animals pace back and forth in their closed-in prisons (Hope, 1994). Their cages feel cold and desolate. The concrete floor provides no warmth and the atmosphere is sterile. The animals do not appear very happy in this closed-in environment. Just who are these anxious animals? They are the common everyday animals any child could name: the bears, the tigers, the elephants and the monkeys. What about the rest of the world's unique creatures? Hundreds of species are in danger of becoming extinct, and conservation is needed. Extinction is a permanent issue. The treatment of all our animals and their rights is important as well. As concern for the world's animals becomes more prominent in the news, our zoos rise up to meet the challenge. Animals' rights and their treatment, regardless of species, have been brought to attention and positive movements made. While the number of endangered species grows, zoos attempt to do their part in conservation. Both in and out of the park, zoos and their scientists do their best to help these species. Efforts out in the field within the United States as well as other countries are currently in progress. The question lies in the worthiness of these efforts. Is the conservation successful? Are these efforts being done for the right reasons? Will zoos remain a form of family entertainment or will the enjoyment of the patrons become unimportant? While it is obvious that things are changing, the eventual goals might not be so clear. As the concern shifts from entertainment to conservation, the zoo's efforts are examined, both in the park and beyond, and their motives judged.
As cities became more and more urbanized, it was harder to still have first-hand contact with nature. Time schedules were busier and no one could really afford to spend an entire day to drive out to the countryside. City zoos took over that connection to nature, especially for the cityfolks. Afternoon visits to the zoo became a fun form of family entertainment (Arrandale, 1990). Even though the bars separated the two worlds, it allowed the people to see the animals. When this interaction began to take place, people examined these institutions for their concern for the animals. The intentions were obvious, to provide the public with the ability to be around these creatures, but were their methods ethical? Animals were displayed for the general public's enjoyment (Diamond, 1995). As one critically judges the physical environment of these animals, they can personally decide whether ethics were compromised. Some argued that the zoos provided a safe home and regular meals for the animals, and for this they should be happy. On the flip side, these creatures were caged and unable to thrive in the wild (Burke, 1990). Under observation, zoos are examined for the humanity with which they treat the animals. Animal welfare has become a concern within our country. This group is not to be confused with the animal rights movement. Without the use of violence, one of the animal welfare movement's goals is to improve the way these institutions, like the city zoos, provide for these animals (Burke, 1990). Honoring the conservation efforts, they simply want to make sure the animals are cared for with the highest levels of concern, both physically and nutritionally (Diamond, 1995). Human rights are established in the written form of laws, and these activists speak on behalf of the animals' rights (Burke, 1990). While some views, like fighting for the equality of animals and humans, might seem extreme, no one can deny that animals' rights need to remain an important issue when providing care at the zoos (Burke, 1990).
The days of a zoo simply providing a recreational place for a family to spend their afternoon together are over. The purpose of zoos has changed considerably since their formation. The switch from pure entertainment to education and conservation is a direct result of the growing number of endangered species. Programs are now continually being implemented to try to rebuild the numbers of diminishing species. Careful captive breeding is turning the city zoos into conservation and wildlife institutions (Hope, 1994). Zoos are the places many of these species are beginning to call home (Stevens, 1993). Home is becoming more and more comfortable. Concrete floors have disappeared, as well as the steel black bars. They are replaced with a more natural environment. Trying to simulate their old habitats, trees and landscape now encompass the exhibits (Arrandale, 1990). Barely visible moats or unbreakable glass barriers are all that separate the visitors from the animals. For infants born within the zoo, these environments help prepare them for the real world should they ever be released. Animals are no longer inmates in the prison of zoos. Not only has the appearance changed, but zoos have also added some unique members to their list of residents. The bears, tigers, elephants and monkeys can still be found, but so can more unusual animals. Many endangered species need what these zoos and their scientists can do for them. Providing a stable environment first, the scientists can then try to reestablish their numbers (Diamond, 1995). Measures are taken when there are still enough reproductive animals to strengthen the species. The earlier the captive breeding programs start, the more successful they will be. Without sufficient numbers, inbreeding may occur, which will eventually lead to more problems. So these programs are very important in the efforts to try to save these special species. One of the biggest benefits of captive breeding is that the general public gets the privilege of observing these species first-hand (Diamond, 1995). Perhaps one of the most important changes the zoos have made is their purpose. Rather than just providing entertainment, the goal is education through enjoyment. Teaching the public about these animals and their habitats can be extremely crucial in keeping these species afloat (Stevens, 1993). Zoos are making great strides in education, showing people the creatures and the environments whose destruction they might have indirectly helped cause (Arrandale, 1990). With this newfound knowledge, zoos hope the visitors will still have the enjoyment, but will also be more informed about the world.
As the people walk through the exit gates of the zoos, they probably will not realize how much of the zoos' efforts they will not see. So much goes on behind the scenes, unknown to the general public. Hopefully they are now educated about the endangered species being bred in captivity on the premises. The ultimate goal of this breeding is to strengthen the numbers of these animals, not only in zoos but in the wild as well. The species need to be placed back into their wild habitats as soon as the numbers become more stable. The reintroduction involves a slow adjustment period while the animals, many of which are born in these breeding institutes, adapt to their new surroundings. After they are released, scientists continue to subtly watch over these animals, while anticipating instinctual reproduction in the wild. Observations continue, data is collected and more breeding is planned. Studies are done to examine new species that may also be diminishing in numbers. Their environment and climate must be studied in order to learn enough about the surroundings they inhabit (Arrandale, 1990). This information will help scientists plan a conservation strategy to ensure these species do not become endangered. Field study is truly the first step in keeping these endangered species thriving and healthy on Earth. This field work is not limited to strictly examining species within the United States. Much work is being done between our nation's zoos and other countries. One particular scientist, George Schaller, is from the Bronx Zoo in New York (Stevens, 1993). The Bronx Zoo/ Wildlife Conservation Park, as it is now called, holds a prestigious title as a very effective conservation institution for endangered species (Hope, 1994). Dr. Schaller has helped the Chinese government reserve a 100,000-square-mile area in a region of Tibet for the sake of the country's treasured, but endangered, pandas (Stevens, 1993). In a completely opposite part of the world, a seven-nation effort is in effect to recreate the natural migration route of cougars from Belize to Panama (Stevens, 1993). This is just a small sampling of one zoo scientist's efforts to help save endangered species out in the field. When the number of zoo scientists who do conservation field work from all the different parks is considered, the work of zoos reaches far beyond the gates.
A family spends a late Saturday afternoon together, laughing and walking through the city zoo. The cotton candy and balloons remain, but the fun does not stop there. The monkeys live in the new "Rain Forest" exhibit, which is next to "Feline Mountain" where the tigers run free. The habitats teach the children about where these animals really live in the wild (Arrandale, 1990). The koala bears, which were endangered, are separated from the visitors by a stream and a gate fence. The family learns a lot about the endangered species being bred, and even about how they can help in trying to save these animals. The fun was still there, and the family got to truly witness nature's creatures first-hand. Even though it has become more of an educational tool, the entertainment remains. The children still squeal with delight and the afternoon is considered a success by all. Now, the goal of zoos reaches beyond the education factor. Although enjoyment is still a concern, education of the public and conservation of endangered species have become more prominent issues within the zoos. On the park's premises, complicated studies are taking place to preserve these species. The habitats are more like "home" and therefore more comfortable for the animals. Captive breeding is the product of careful planning to produce a strong new generation. Visitors are allowed to see these unique creatures, which is perhaps one of the most important parts of the changing roles. By witnessing first-hand how critical the survival of these species is, the patrons may realize how even they personally can help in their preservation. The interaction between animals and visitors relates the reality of nature to the public. Reaching one step farther, the scientists then move out into the real world. The captive-bred animals must be reintroduced to the wild. Scientists keep in contact in order to make sure the species stays strong, while simultaneously looking for more species that may need their help. United States zoo scientists travel to the ends of the Earth if there is a species in need. Work is currently being done in combination with other countries to prevent these animals from becoming extinct and allowing them to lead natural lives. As the concern shifts from entertainment to conservation, the zoo's efforts are examined, both in the park and beyond, and their motives judged. In my opinion, the intent of zoos has completely changed since their formation. Animals are no longer just prisoners in concrete cages for the public's enjoyment. They are respected and considered treasured individuals. Personally, I fondly remember visiting the zoo as a child. Though the parts I seem to remember most are the stops at the gift shop and the strange odor that lurked around some of the cages. Growing up in the city, this was pretty much my only chance to have contact with wildlife, limited as it was. Now, going back to visit these "old" zoos with new names and faces, I am able to gain so much more knowledge. Standing alone as one individual, I learn about the natural habitats of these animals, how close some are to extinction, efforts happening in other countries, and how I can personally help. If I am able to come to all of these realizations on my own, imagine how much more knowledgeable the public will be as a whole on these matters. Both education and species conservation are gained. Honestly, many members of the human population may not realize what life is truly like out in the wild.
Nature has been difficult for many animals, and these scientists are trying to rebuild what Mother Nature, in combination with the human race, has almost destroyed. The role has shifted, but I believe that the motives have also changed considerably. The concern of the patrons will always be a factor, but with so many people worried about the animals, they are not forgotten. Perhaps if the general public, meaning those who do not have the privilege of visiting these zoos, becomes more informed about the work, fewer questions will be raised about this transition. Personally, I cannot identify anyone who suffers in this arrangement. The animals' rights are looked after, the public becomes more aware and the endangered numbers of many species are strengthened. If the children still squeal, the animals are safe and measures are being taken to help Earth's creatures, I would consider the venture successful and applaud it as well.
f:\12000 essays\sciences (985)\Biology\Amyotrophic Lateral Sclerosis Lou Gherigs disease.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Amyotrophic Lateral Sclerosis
(Lou Gehrig's Disease)
Amyotrophic Lateral Sclerosis is a deadly disease of the nervous system. Also known as Lou Gehrig's disease, ALS affects 25,000 people in the U.S. today. One in 50,000 people will be affected in any one year. The average age at diagnosis of ALS is between 30 and 70, although there have been cases of teenagers contracting it. The average life span after diagnosis is three to 10 years, although 20 percent of those affected will outlive their prognosis by a number of years. ALS affects more men than women: approximately 60 percent of those affected are male, 40 percent female.
Little is known about the exact cause of ALS at this time, although it can be traced back to chromosome 21. The defect is inherited as an autosomal dominant trait. Other theories, such as metal poisoning, viral infections, even aging, have been considered. ALS attacks the motor neurons in your nervous system that control your muscles. Your motor neurons slowly deteriorate, causing your muscles to stop receiving information from your brain. Your muscles then become useless and begin to deteriorate.
Symptoms of ALS include:
Tripping and falling
Loss of motor control in hands and arms
Difficulty speaking and swallowing or difficulty breathing
Persistent fatigue
Twitching and cramping, sometimes severe
As ALS progresses, all voluntary muscles become useless. The patient cannot eat, breathe or communicate with others. Total life support may be the only thing keeping them alive. ALS can lead to total paralysis.
Although there is no cure, medications such as diazepam can help control spasms, muscle cramps and excess saliva. Diazepam can also help control muscle twitching. Physical therapy is important for patients with ALS to maintain flexibility in the joints and to prevent contractures, or fixations of muscles.
Diagnosis of ALS is difficult, since there is no clinical or laboratory test to identify it. Diagnosis is done through careful examination of a patient's history, neurological testing, and electromyograms.
Researchers have been studying whether a defective metabolism of glutamate, an amino acid, is detrimental to the nerve cells that control the muscles of ALS patients. Scientists are trying to determine whether they can prevent the toxic effects of glutamate. Other scientists are studying Threostat, which may increase the amino acid called glycine, which might neutralize the glutamate found in ALS patients.
ALS and muscular dystrophy are commonly confused due to their similar symptoms. The main difference is that ALS affects the nervous system, whereas muscular dystrophy affects the muscles.
Sources
"ALS." Internet site. Post date: June 1995.
Hopkins, Harold. "Amyotrophic Lateral Sclerosis." CD-ROM: Grolier Encyclopedia. 1995 ed.
Williams, D. B. "Amyotrophic Lateral Sclerosis." Mayo Clinic Proc. Jan. 1991.
Found on CD-ROM: The Family Doctor.
f:\12000 essays\sciences (985)\Biology\An essay about AIDSepidermiology causes pathology and symp.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. INTRODUCTION
In June 1981, the Centers for Disease Control of the United States reported that five young homosexual men in the Los Angeles area had contracted Pneumocystis carinii pneumonia (a kind of pneumonia that is particularly found in AIDS patients). Two of the patients had died. This report signalled the beginning of an epidemic of a viral disease characterized by immunosuppression associated with opportunistic infection (an infection caused by a microorganism that does not normally produce disease in humans; it occurs in persons with an abnormally functioning immune system), secondary neoplasms (any abnormal growth of new tissue, benign or malignant) and neurologic manifestations, which has come to be known as AIDS.
Though AIDS was first identified in the U.S.A., it has since been reported from more than 163 countries around the world, and an estimated 10 million people are infected worldwide. Worse still, the pool of HIV-infected persons in Africa is large and expanding.
2. RISK GROUPS AND MODES OF TRANSMISSION
Studies in the U.S.A. have identified five groups of adults at risk for developing AIDS. The case distribution in these groups is as follows:
(1). Homosexual or bisexual males constitute the largest group, about 60% of the reported cases. This includes 5% who were intravenous drug users as well.
(2). Intravenous drug users with no previous history of homosexuality compose the next largest group, about 23% of all patients.
(3). Hemophiliacs (people who have an inborn disease characterized by excessive bleeding and occurring only in males), especially those who received factor VIII concentrate before 1985, make up about 1% of all patients.
(4). Recipients of blood and blood components who are not hemophiliacs but who received transfusions of HIV-infected whole blood or blood components (e.g. platelets, plasma) account for 2%.
(5). Other high-risk groups: the remaining patients acquire the disease through heterosexual contacts with members of other high-risk groups. About 80% of children with AIDS have an HIV-infected parent and suffer from transplacental or perinatal transmission.
Thus, from the preceding discussion, it should be apparent that transmission of HIV occurs under conditions that facilitate the exchange of blood or body fluids containing the virus or virus-infected cells. Hence, the three major routes of transmission are sexual contact, parenteral routes (i.e. administration of a substance not through the digestive system) and the passage of the virus from infected mothers to their newborns, which occurs mainly by three routes: in the womb by transplacental spread, during delivery through an infected birth canal, and after birth by ingestion of breast milk.
3. CAUSES
There is little doubt that AIDS is caused by HIV-I, a human type C retrovirus (an RNA virus that contains the enzyme reverse transcriptase, used to copy its RNA genome into DNA) in the same family as the animal lentiviruses. It is also closely related to HIV-II, which causes a similar disease, primarily in Africa.
3.1 Biology of HIV-I (please refer to fig. 1)
HIV is a retrovirus that induces immunodeficiency by destruction of target T cells. Like most C-type retroviruses, it is spherical and contains an electron-dense core surrounded by a lipid envelope derived from the host cell membrane. The virus core contains four core proteins, including p24 and p18, two strands of genomic RNA and the enzyme reverse transcriptase. Studding the envelope are two glycoproteins, gp120 and gp41; the former is important in binding the host CD4+ molecule to cause viral infection. The proviral genome also contains several genes that are not present in other retroviruses. Genes such as tat and rev regulate HIV propagation and hence may be targets for therapy.
3.2 The Development of AIDS
There are two major targets of HIV: the immune system and the central nervous system (CNS). The effects of HIV infection on each of these will be discussed separately.
3.2.1 HIV infection of lymphocytes and monocytes -- the immune system (fig. 2 & fig. 3)
Central to the pathogenesis of AIDS is the depletion of CD4+ helper T cells. The CD4 antigen is the high-affinity receptor for the gp120 protein of HIV-I. After binding to the host cell, the virus is internalized and the genome undergoes reverse transcription; the proviral DNA is then integrated into the genome of the host. Transcription, translation and viral propagation may subsequently occur only with T-cell activation (e.g. antigenic stimulation). In the absence of T-cell activation, the infection enters a latent phase.
Infected monocytes and macrophages, in contrast, are refractory to the cell breakdown caused by the virus, and thus they act either as reservoirs for HIV or as vehicles for viral transport, especially to the central nervous system.
In addition to T-cell depletion, there are also qualitative defects in T-cell function, with a selective loss of T-cell memory early in the course of the disease.
3.2.2 Central nervous system involvement by HIV
The CNS is a major target in HIV infection. This occurs predominantly, if not exclusively, via monocytes. Infected monocytes circulate to the brain and are somehow activated either to release toxic cytokines directly or to recruit other nerve-damaging inflammatory cells.
4. NATURAL HISTORY OF HIV INFECTION (fig. 4)
Generally, the interactions of HIV with the host immune system can be divided into three phases. The early, acute phase is characterized by the presence of viremia (virus in the blood), a fall in CD4+ cells and a rise in CD8+ cells. Clinically, patients may have a self-limited acute illness with sore throat, nonspecific muscle pain and aseptic meningitis. Recovery occurs within 6-12 weeks. The middle, chronic phase, characterized by clinical latency with low-level viral replication and a gradual decline in CD4+ counts, may last from 7 to 10 years. Patients may develop persistent generalized lymph node enlargement with no constitutional symptoms. Toward the end of this phase, fever, rash, fatigue and viremia appear.
The final, crisis phase, characterized by a rapid decline in host defenses manifested by low CD4+ counts, is also recognized as full-blown AIDS, which includes weight loss, diarrhea, opportunistic infections, a spectrum of bacterial infections, secondary neoplasms and neurologic involvement. With AIDS, the 5-year mortality rate is 85%, and with longer intervals the rate approaches 100%. Anyone with HIV infection and a CD4+ T-cell count of less than 200 cells/µl may also be considered to have AIDS even if no clinical features are present.
f:\12000 essays\sciences (985)\Biology\Anabolic steroids.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Anabolic steroids are synthetic compounds formulated to be like the
male sex hormone testosterone. Many athletes, male and female alike,
use anabolic steroids, including body builders, weightlifters,
baseball players, football players, swimmers, and runners. They do so
because they mistakenly believe that they will gain strength and size.
In a male, testosterone is released by the Leydig cells in the testes.
Testosterone has two main functions: androgenic and anabolic. The
androgenic function is the development of male sex characteristics;
the anabolic function is the development of muscle tissue.
To treat patients who suffer from a natural lack of testosterone,
pharmacologists alter one form of testosterone slightly, increasing
the length of time the drug is active. Testosterone was first isolated
in 1935; soon forms of testosterone such as Dianabol, Durabolin,
Deca-Durabolin, and Winstrol were produced.
One of the main effects of anabolic steroids is to increase the number
of red blood cells and the amount of muscle tissue without producing
much of the androgenic effects of testosterone. There are only four
legal uses for steroids: treatment for certain forms of cancer,
pituitary dwarfism, and serious hormone disturbances.
There are two forms of anabolic steroids: those taken orally and those
injected. The immediate effects of both are mood swings of many
different kinds. In one study, physicians Ian Wilson, Arthur Prange,
Jr., and Patricio Lara found that four out of five men suffering from
depression, when given a steroid, suffered from delusions. A research
team from Great Britain found that a patient given steroids became
dizzy, disoriented, and incoherent. Physicians William Layman and
William Annitto reported a case of a young man diagnosed as
schizophrenic who took steroids to help with his weightlifting. After
taking these drugs he suffered severe depression and anxiety and had
trouble sleeping.
Most people who use steroids do not have side effects this severe.
Steroids cause changes in the electroencephalogram (an image of the
brain's electrical activity). Researchers believe that these changes
are responsible for some of the behavior changes in users of steroids,
such as increased hostility and aggressiveness.
Even though they are not supposed to, some of the masculinizing
effects of testosterone still show in users of anabolic steroids.
People who use them sometimes develop acne, a deepened voice, and
abnormal hair growth. Some of these side effects cannot be reversed.
Men who use steroids may become more or less interested in sex.
The most severe effect of anabolic steroids is on the liver: a
condition called peliosis hepatis, or blood-filled cysts in the liver.
If the cysts rupture they can cause liver failure, which can kill a
person. After stopping the use of steroids, though, these cysts may
become smaller and disappear. Steroids can also cause cancerous tumors
in the liver that can kill; these too may disappear if the drugs are
not used anymore. Steroids also increase the risk of getting
gallstones.
Use of steroids also affects your cardiovascular system. There are two
kinds of cholesterol in your body: high-density lipoprotein
cholesterol (HDL) and low-density lipoprotein cholesterol (LDL). HDL
is good for the body but LDL is very hazardous to your health. Using
steroids lowers the amount of HDL in the body in most users. The
lowering of the amount of HDL in the body puts the person at a higher
risk of developing high blood pressure and blood-clotting problems.
Some physicians also think that steroids cause abnormal fat deposits
in the body and change the way the body processes carbohydrates.
Men who use steroids can also develop serious reproductive system
problems. Steroids lower the amount of sperm in the semen, which makes
conception difficult or even impossible. A man's testes will decrease
in size, and the amount of natural testosterone and the hormones that
nourish them also decrease. In some users the prostate gland will get
larger. Most of the symptoms of drug use go away after the drug is not
used anymore, but some of them stay, such as abnormal tissue in the
liver, testes, and prostate gland.
There are also serious effects that take place in women. Besides the
masculinizing effects, steroids lower the amounts of female hormones,
for example estrogen and progesterone, which are essential to the
function of a woman's menstrual cycle. Some women report that they
stop menstruating completely.
Steroids are prohibited in any competition, whether international or
national. Even though this is true, many top athletes as well as
amateur athletes use steroids. Athletes first used drugs in
competition in 1954, when the physician for the U.S. weightlifting
team, John Ziegler, went with the team to the world championships in
Vienna, Austria. Ziegler met the Russian team doctor, who told him
that the Russians were using testosterone. Ziegler brought the news
back to the United States, and soon every weightlifter was using
steroids. Thirty years later it was said that four out of five
weightlifters were using steroids.
I noted earlier that there are only four legal uses for steroids, but
in a recent report it was said that only 20-30% of the steroids
produced were used for those purposes. Another study of where athletes
got these drugs said that 36% of them got them from a physician, 10%
got them from trainers, 9% from pharmacists, and 45% said that they
bought them illegally.
Now the big question: why do athletes use this drug? It is because of
the false belief that steroids will get you bigger and stronger than a
person who is simply dieting and training hard. Researchers have
concluded, though, that using steroids does not get the user any
bigger or stronger but only increases hostility and aggressiveness.
Some researchers say that the added aggressiveness and hostility may
make a person train and work harder.
Part of the body-size controversy is about the type of tissue growth
that the drugs promote. Every researcher agrees that steroids do make
users gain weight, but some researchers think the gain is in real
muscle tissue, while others think that it is in abnormal muscle tissue
and that the weight gained is due to water retained in the body. The
ACSM states that steroid users gain weight only in the lean mass
compartment of the body.
In May of 1986 the federal government set up a task force with the
U.S. Department of Justice, the Food and Drug Administration, and the
Federal Bureau of Investigation to prosecute anabolic steroid users as
well as dealers. The first athlete to go to prison on steroid charges
was former British track star David Jenkins. He was sentenced to seven
years and was fined $75,000 in 1987. By 1990 there were 125 legal
actions on steroid-related charges in twenty-seven different federal
districts. About eighty-five people received sentences totaling up to
eighty years of jail time on steroid-related charges. They were fined
$1.2 million, and the government has seized about $18 million worth of
anabolic steroids, including counterfeit, diverted and smuggled
supplies. The Food and Drug Administration has limited the number of
anabolic steroids that can be sold, and because of this it is
estimated that the black market makes about $400 million a year on
anabolic steroids alone.
Most companies that manufacture anabolic steroids have improved their
security efforts. Some of the steroids have actually been taken off
the market, which makes it even more difficult for black-market
dealers to obtain legally manufactured drugs. New ways for the dealers
to make and buy the drugs have been created to serve the demand for
them. The government now believes that most of the steroids now being
produced and sold pose an even greater health risk.
According to a spokesman for the Justice Department, anabolic steroid
offenses are "a high priority item" for the government. One of the
biggest concerns is steroids' attraction for younger people, because
they are exposed to drug dealers so much. Many of these kids buy drugs
that are unsterile and have been manufactured under filthy conditions.
So far in this report I have been talking about the bad things that
anabolic steroids have been associated with; now it is time to talk
about some of the things that they are good for. Anabolic steroids are
used for some medical applications, such as nutritional support for
chronically ill patients, patients with chronic anemias, patients
undergoing kidney dialysis, patients with osteoporosis, and patients
with a deficiency of testosterone.
Following trauma or major surgery, patients usually respond to the
stress with a catabolic response, which is attributed to increased
body demands for proteins and calories for wound healing and energy
production. If these demands by the body are not met, a significant
wasting of body tissues will occur. Preventing this wasting of body
tissue is vital to the patient, so doctors give patients anabolic
steroids at this stage until the patient gets back to his or her
normal state.
Throughout my research of this topic I read a lot of different stories
about anabolic steroids. I read from the researchers that anabolic
steroids show few if any effects at all, and I read from athletes that
there is a very large effect on muscle gain and endurance. I came
across only one book, though, that addressed this issue between
researchers and athletes. The book said that the American College of
Sports Medicine issued a report on the use and abuse of anabolic
steroids. It stated that for many people any benefits of anabolic
steroids are small and not worth the health risk. Yet almost all the
athletes who use anabolic steroids feel that the steroids had a great
effect and that they would not have been successful without them. The
big gap between researchers and athletes has caused a big controversy:
athletes say one thing and researchers say another. The researchers
have found a reason that may explain why anabolic steroid users see
effects that researchers say are not possible; they call it the
"placebo effect".
The placebo effect works by the power of suggestion: athletes believe
that the steroids will improve their performance, so it does. The
placebo effect is real; the performance is improved and the gains are
not imaginary.
f:\12000 essays\sciences (985)\Biology\Animal Cruelty.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The topic of animal cruelty is one of great importance to the world today. Why we humans have reserved the right to treat animals as lesser individuals is beyond me. Animals are fulfilling their part in the ecosystems and communities of the earth to the best extent that they are able. For example, a spider is being the best possible spider that it can be, spinning webs and working diligently at what it knows best, not bothering any creatures of the earth besides the ones which it needs to capture for food. We, as humans, have decided that we are a much more advanced life form and can basically treat anything else in this world in ways which we cannot imagine being treated. As a result, the threads holding our earth together tightly in the balance are slowly being unraveled, leading to ultimate self-destruction.
In 1988, 16,989 animals died in laboratories in the United Kingdom. This was due to such tests as acute and chronic toxicity experiments, where the animals are forced to consume substances such as perfume, make-up and other beauty products and are often literally poisoned, their systems overloaded by the substance in question. Another testing method is the Lethal Dose 50 percent test, or LD50. In this procedure at least half of the animals must die in order for the government to figure out how much of a substance a human can ingest without dying.
In one such test some animals were fed 4 lb. of lipstick and one ended up dying of intestinal obstruction. In another, 7 pints of melted eye shadow were fed to rats. In yet another, mice were wrapped in tin foil and grilled under ultraviolet light for a total of 96 hours to test a sun block cream. The results of the test were that the longer the mice stayed in the rays, the more sunburnt they got.
But that is not all. A wax product used in many cosmetics was dosed into animals by a stomach tube. The amount that they used is equivalent to feeding 1½ lb of the stuff to humans. The animals involved soon began salivating, bleeding from the nose and mouth, and had extreme diarrhoea. As the test progressed, the animals became emaciated and unkempt, had congestion in the lungs and kidneys and solid wax in the stomach.
The infamous Draize eye test cannot be forgotten either. Chemicals are instilled into the eyes of rabbits held in stocks, often for up to seven days. Because their eyes are physiologically different from ours, the rabbits cannot produce enough tears to wash the substance away, and it remains there for long periods of time. Unfortunately for them, rabbits are cheap and simple to maintain, and they also have large eyes.
In the acute inhalation test the animals are subjected to intense amounts of a certain substance or toxin in a small caged environment for four days to test the effects of chemical inhalants used in aerosol spray cans and other gaseous materials. The animals which actually survive the test are then killed to be examined. This is also done with tobacco products and alcohol.
Another instance involved the removal of infant rhesus monkeys from their mothers at birth; they were isolated or given crude substitutes to study the need for a maternal figure early in life. After 4 months some of the babies were able to integrate back into a normal monkey society; the ones isolated for a year or more had definite social problems.
To attempt to find out more about our sexuality, we of course turn to cats. After some nerve surgery, the cats involved became disoriented and lost interest in sexual activity.
There was also the dastardly one in which some silly scientist removed a cat's brain to see if it could still walk afterwards.
Vivisection is of course the live dissection of animals for scientific research, and is quite widespread in use today. Most of these procedures are performed without the use of anaesthetics.
The ironic thing regarding this entire situation is the fact that animals, for the most part, have a very different body chemistry than us, and, as a whole, are very different from us. So basically none of these tests has any relevance for us today. As well, there is a wide range of natural products already available to us which most scientists and doctors refuse to acknowledge because there is more money in animal research. This is sadistic and wrong. Some governments are pushing for mandatory animal testing on all products, even completely safe products like honey. This is quite unnecessary for the survival of humans. Not to mention the countless numbers of animals which have been injected with infectious diseases so that they can be researched.
There have been some breakthroughs in the use of human tissue culture in various experiments, but of course it is not as good as the real thing.
The destruction of so many animals with such harmful products is not exactly healthy for our earth either. There is really no safe way to dispose of a noxious dead body. How unsafe and unreal.
As well, there are more frightening prospects. All a company has to do to sell a "cruelty-free" product is to not have tested it within the past 5 years. This means that in reality products could be tested now and be on the market in 5 years. How frightening!
Once again, this is unusual and unnecessary with all of what the earth has already provided for us: the healing plants, most of which we are destroying with the clear-cutting and pollution problems experienced within the past century or so. What a huge power trip these scientists must be on to have thousands of animals' lives hanging in the balance at their command every day. What a complete act of superiority. With so many other options, don't you think that the situation would lessen or differ somewhat? No, of course not. Humans are always looking for the easy way out of situations, and if that means torturing innocent and helpless animals, then so be it. I am personally against this mode of action.
Bibliography:
Lorraine Kay,"Living without cruelty," Sidgwick and Jackson press, London, 1990.
J. J. McCoy,"Animals in research; Issues in conflict," Impact press, U.S.A., 1993.
Lynda Dickinson,"Victims of vanity," Summerhill press, Canada, 1989.
B.P. Robert Stephen Silverman,"Defending animals' rights is the right thing to do," S.p.i. books, U.S.A.,1992.
Kathy Snow Guillermo,"Monkey business," National press books, U.S.A., 1993.
f:\12000 essays\sciences (985)\Biology\animal rights 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
English 103
Paper #1 Animal Rights
An Argument For Animal Research
Medicine has come a very long way since the days
when men used to puncture holes into the skull to
release tension or evil spirits. In the last one
hundred years, for the sake of humanity, numerous
vaccinations have been developed, diseases and disorders
of all types have been prevented, surgical techniques
have been advanced, drugs have been developed to cure
ailments and the list continues endlessly. The
progress that has been achieved in knowledge as well as
safety in medical practice is correlated directly to
animal research. It is one argument to control animal
research so that needless deaths of animals do not
occur, but it is entirely different to argue that
animals have rights which supersede human subsistence.
"For most of the past decade, the animal-rights
movement hasn't merely opposed animal research; it has
tried to destroy it." (The Wall Street Journal,
"Animals and Sickness", Page 378.) Animal rights
advocates and activists generally have ethical
objections regarding treatment of animals during
experimentation, but the use of animals in research for
the benefit of all people is and always will be
justifiable.
Over 99 percent of all animal experiments are on
rats and mice developed expressly for laboratory use.
"Less than 1 percent of experiments involve cats, dogs,
farm animals, nonhuman primates, frogs, fish, and
birds." (Encyclopedia of Medicine, AMA, "Animal
Experimentation", Page 110.) Animal rights advocates
try to sway public opinion by showing grotesque
pictures of destroyed cats, dogs, farm animals,
dolphins, and monkeys which account for less than 1
percent of the experiments, yet it seems 99 percent of
their advertising and campaigning deal with this one
percent. At least the American public realizes even
those who portray ethical righteousness can be wrong.
For instance, "an American Medical Association (AMA)
poll found that 77 percent of adults think that using
animals in medical research is necessary." (The Wall
Street Journal, "Animals and Sickness", Page 378.)
It is a curious thing to see animal welfare groups
try to hinder animal research by threatening
researchers' lives and destroying years of data
collected. Animal rights groups are promoting even
more animal testing because the same tests will have to
be repeated to replace the lost data. At every major
medical research university there has been some form
of nuisance to deter animal testing, whether it was a
quiet riot or the endangering of researchers' lives.
Animal rights groups must realize research is done
out of necessity for human welfare. Whenever possible
alternatives to animal experiments are used. "The
development of modern research techniques, such as CAT
scans, PET scans, needle biopsies, and tissue cultures"
(Stephen Kaufman, M.D., Breakthroughs Don't Require
Torture, Page 380.) allows researchers to thoroughly
exhaust their options before testing on animals. In
this age where fiscal conservatism is a priority even
when human lives are concerned, researchers are doing
their part to conserve. It takes a lot of time, money,
and care to look after animals that are going to be
the subjects of tests. "No responsible scientist would
incur the substantial expense and devote the
considerable space required for housing and caring of
animals when other equally satisfactory models were
available." (Michael E. DeBakey, Holding Human Health
Hostage, Page 361.)
Contributions resulting from animal research are
too numerous to mention. All that can be said is that
without testing and research on animals, human lives
would have been lost, medical technology would have
been tremendously delayed, and future breakthroughs
would be nearly impossible. When we consider the
diseases that used to terrorize our society 100 or even
50 years ago, it's a blessing to realize animals are
similar to humans in biology, and that we can confirm
studies of medical and surgical methods before they are
carried out on people. Animal research saves lives.
f:\12000 essays\sciences (985)\Biology\Animal Rights.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ever since the Society for the Prevention of Cruelty to Animals was formed in England in 1824, there have been long-running debates on the topic of animal rights. The first societies were formed to protect and maintain humane treatment of work animals, such as cattle, horses and household pets. Towards the end of the 19th century more organizations were formed, this time to protest the use of animals in scientific experimentation. In today's society groups such as People for the Ethical Treatment of Animals (PETA) have continued these traditional fights as well as adding new agendas. These new agendas include hunting and fishing, and dissection of animals in science classes. This paper will discuss the pros and cons of animal experimentation and research, animals in the classroom, animal organizations and hunting. Along with these topics my personal opinion will be stated, both before and after researching the topic.
The rights of animals have always been important to me during my life. This is due to the fact that I have had a dog for a pet for as long as I can remember. On this topic I feel as though having domesticated animals in the home is fine as long as proper care is taken of them. As for more controversial issues like animal research and experimentation, my views vary. A few years ago I felt that any research or experimentation on animals was inhumane and unjust. However, after maturing and becoming more aware of the world, I now feel as though there are definite 'goods' that come from animal research that cannot come from doing tests on humans. This view is by no means one-sided. I also feel that there are some things being done to animals that just should not happen, such as the testing of cosmetics. In other areas of animal rights, like dissection in the classroom, I think that as long as the animals died naturally it is fine to use them to further a student's education, along with human cadavers. Of course, I hope that
animal dissection can become a thing of the past with the advent of new technologies. On the topic of hunting I have had first-hand experience. The deer population where I live grew out of control a few years ago, and as a last resort the town decided to have a hunt. It was very controlled and safe, and had a limit on how many deer could be killed. This sort of animal control is extreme and in my opinion should be avoided at all costs. However, the overpopulation of deer was causing health risks to the town, like the spread of Lyme disease, which made hunting a necessity. The rights of animals are watched over by organizations dating back to the early 1800's. This, I feel, is an important step in protecting animals, as long as the organizations protest within their legal rights. To sum up my opinion: animals do have certain rights, but if experiments, research, hunting and dissection provide positive increases in knowledge that further the existence of the world, they are necessary things that must be done.
Perhaps the biggest and most debated subject dealing with the rights of animals is the use of them in research and experimentation. "Very few people would object to the use of animals if human lives were saved as a consequence." (Minkoff, 26) However, the extremists who do object would do so on a few key points. Firstly, animals which are used are subjected to inhumane treatment. This consists of tests such as the LD50, which entails dosing animals with a chemical or drug until 50% of them die. Also, experimenters are subjecting them to wound experiments, radiation experiments and studies on the effects of chemical warfare. (PETA, 2) Organizations such as PETA are also opposed to cosmetic testing on animals due to experimenters spraying, injecting, and feeding cosmetics to animals, which cause labored breathing, blindness and death in some cases. These organizations argue that cosmetics have already been tested on animals in the past, so why continue doing the same tests?
Due to the protests of The Center for Alternatives to Animal Testing in 1981, Avon and Revlon have stopped using animals in their research. (Comptons, CD) Experiments and research on animals such as the LD50 test and cosmetic tests are, according to animal rights organizations, cruel and inhumane towards animals. They believe that animals have rights and are just as important to society as humans are; therefore if humans are not used for these experiments then animals shouldn't be either. Despite these objections to experimenting on animals, there are positive results that come from it.
Research on animals is important in understanding diseases and developing ways to prevent them. The polio vaccine, kidney transplants, and heart surgery techniques have all been developed with the help of animal research. Through increased efforts by the scientific community, effective treatments for diabetes, diphtheria, and other diseases have been developed with animal testing. (Bioethics, 148) There are many reasons given for why it is necessary to work with animals in research. First, scientists must be able to test medical treatments for effectiveness and drugs for toxicity before they are tested on humans. Also, new surgical techniques must be tested on living things with circulatory and pulmonary systems like ours before being used on humans. No "computer models, cell cultures, nor artificial substances can simulate flesh, muscle, blood, bones and organs." (Ampef, 2) If considered carefully, there is no alternative to animal research. It is impossible to explain or predict the course of many diseases without observing their effects on the entire living system.
In the classroom, it is argued, dissections must go on in order to further our knowledge. But what about computer programs like the virtual frog? The answer to this is simply that even with today's technologies these kinds of computer programs are not sophisticated enough to reproduce a living organism.
In researching the topic of animal rights my eyes have been opened to various different reasons to support and not to support animal rights. After serious consideration of both sides of the argument, my opinion is that animals should be used in research and experiments, excluding cosmetic experiments. In my opinion this type of animal use is fine as long as it results in positively advancing the human race. Despite this point of view, I also believe this research must produce these results in a humane manner. Animals do have rights and should not be used for unnecessary things such as hunting, which is purely taking advantage of animals because they cannot defend themselves, and no good comes from this sport. The only exception to this was stated earlier, in which hunting was used as a last resort to curb a possible health threat. Finally, my hope for the use of animals in the classroom is that someday there will be enough technological advances for computer programs that will enable them to simulate a real animal.
This actually goes for all animal testing: if we could simulate an animal or human on a computer, we would not have to subject anyone to testing. Animals do have the right not to be treated inhumanely, whether it be in the home, laboratory, classroom or field, yet as long as animals are being used to help benefit the world, animals in my opinion can be used in some respects.
Works cited:
· Compton's Interactive Encyclopedia. CD-ROM. Compton's New Media Inc. : 1994
· Encyclopedia of Bioethics, Simon and Schuster. New York: 1995
· Minkoff, Eli C., Biology Today: An Issues Approach. McGraw-Hill Companies, Inc., New York: 1996. (pp25 - 32)
· Miller, David Lee Winston. " The LD50 Test, A Failure of Extreme, but Measurable, Proportions."1997. Online. Available:
http://www.sunyit.edu/~millerd1/LD50.HTM
· "Without Animal Research." Americans for Medical Progress Educational Foundation. 1997. Online. Available:
http://www.ampef.org/research.htm
· "Animal Experimentation." People for the Ethical Treatment of Animals. 1997. Online. Available:
http://www.envirolink.org/arrs/peta/facts/exp/fsexp01.htm
f:\12000 essays\sciences (985)\Biology\Animals in The Research Lab.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The use of living animals is an important way to solve medical problems. Researchers continually seek other models to understand the human organism, study disease processes, and test new therapies. In seeking quicker and less expensive ways to obtain biological information that can be applied to human disease, scientists sometimes study simpler organisms such as bacteria and fruit flies. Researchers have spent many years learning how to sustain cells, tissues and organs from animals and humans outside the body in order to understand biological processes and develop new medical treatments. Computers allow scientists to analyze vast amounts of data and test new ideas. But, in the end, the results obtained must be verified in appropriate animal systems and, possibly as the final step, in clinical trials using human volunteers.
Before beginning a project, all research proposals involving animals must be reviewed and approved by a committee comprised of scientists, veterinarians, and private citizens.
Animal activist organizations believe that there are no moral grounds for the use of animals in research, and they have attempted to slow or halt the work of scientists. Some activist groups intimidate or harass individual scientists, conduct demonstrations, or sometimes commit acts of vandalism. There are a few health professionals who support the activist movement, but they truly stand apart from the vast majority of physicians and most Americans, who readily accept the fact that animal research is necessary for medical progress.
The use of living animals remains a very important way to solve medical problems. Almost every medical achievement of the last century has depended either directly or indirectly on research with animals. The knowledge gained from this research has extended human life and made it healthier.
In conclusion, I think that animals should be used in the lab because I would rather risk the lives of a couple of animals than the lives of a couple of thousand people.
f:\12000 essays\sciences (985)\Biology\ants.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
All About Ants (almost)
Among the many hundreds of thousands of astonishing organisms with which we must share this earth, there is one seemingly ordinary group of specimens which fascinates many people beyond all others. There is nothing too extraordinary in the proportions or appearance of ants, but it is their history and culture that induces a second look. These insects are about as different from us mammals as two organisms can be, yet of all the known animals their way of life appears closest to our human way of life. The similarities in the ways in which we organize our lives are astounding. Ants are doubtlessly the most successful of all the social insects of the Hymenoptera, an order also including wasps and bees.
The earliest known specimens are found entombed in the Scandinavian Baltic amber samples, which scientists date at upwards of 100 million years old (The Ant Colony '89). These primitive samples have evolved into the 5000 to 10000 species known today, which vary amongst themselves as widely as the numbers suggest (Social Insects '68). These remarkably adaptive creatures are found in some form on all continents and in all habitats but the extreme arctic regions. Their success is manifested in the claim that at any time there are at least 1 quadrillion living ants on earth (Groliers '93).
All species of ants are social. They live in organized communities or colonies, which may contain anywhere from a few hundred to more than 20 million individuals. These are organized into a complex system which may contain two or more castes and subcastes, which can be roughly organized into three groups: queens, males and workers.
The queen is much larger than the other ants, and has wings until mating. Her primary task is to lay eggs for the colony. Some colonies have one queen; others have up to 5000. Queens develop from ordinary fertilized eggs; nobody is exactly certain what causes these to develop into queens, but it is generally thought that the process comes from an altered diet in the larval and pupal stages and from a pheromone response, which will be discussed later. Queens have an extended life span of up to 25 years and can lay millions of eggs in that time (Ant Colony '89).
Male ants are winged as well; their sole purpose is to mate with the queens. For this reason they are the shortest-lived ants in the colony. Hatching in the spring, they mate in the summer and upon completion of this task promptly die. As in all Hymenoptera, they are formed from non-fertilized eggs (Social Insects '65).
The majority of the ants in the colony are wingless females who are generally non-reproductive. These "workers" must perform the tasks of sustaining the colony and all life therein. They are responsible for building, repairing, and defending the nest, and for caring for the queen and the brood. They also generate a source of nutrition and feed all the members of the colony. Some will perform a single task for their whole lives, while others change constantly.
In polymorphic species, where the workers vary in size, the worker subcastes are most distinguishable. Here there is found a larger or major worker, often referred to as a soldier. Her function is often associated with specialization such as guarding the colony, carrying heavy loads, or, in species where necessary, foraging for food, while the minima, or smaller workers, tend the larvae and queen.
Once or twice each year, commonly on a warm summer day, every ant colony becomes the source of great excitement. Well rested and cared-for young alates begin to make for the escapes and exits from deep within the colony. Large soldiers guard the door as the young winged members are escorted to the open by hordes of workers. Suddenly, yet unbeknownst to man, nature gives a signal. Soldiers retreat, and workers make space and assemble on the ground as the males and queens are hustled to the sky. Hastening into the air, they often meet with winged ants from other colonies with the same objective. For the first and only time in their lives they will mate, often in mid-air or settling on leaves and branches. Now the queen is equipped with a lifetime supply of sperm. After a brief hour or two of this nuptial flight they return to the ground. Males, having accomplished their duty, die, while the queen's task has only begun. She will return to her original colony, inhabit another established colony or form her own.
Not all queens will survive this lonely dangerous task.
Her first objective is to shed her wings, for she will never fly again. She breaks them off herself, or is aided by worker ants. If she is to form her own colony she goes about finding a spot. Depending on her species, any number of sites may be chosen. In the majority of cases a queen will tunnel a cell underground. She uses her jaws and forelegs to move the earth. Alone and unprotected, she seals herself into her new home. Then, following a variable gestation period, she lays her eggs. It may be nine months before the first workers hatch (A Closer Look '75). She must find food in this time when she is all alone busily laying eggs. Her body is able to break down her no-longer-needed wing muscles, from which she may gain nutrition. Often she must eat some of her eggs to survive (Groliers '93).
The first ants that hatch are workers. This first group is consistently smaller than workers to come. As you will find out, they did not receive the same nurturing that will become standard for the brood in a fully functioning ant colony. They instinctively venture out to find a way to feed their feeble mother. From now on, she will be cared for as true royalty, licked and fed by the nurse workers, her only job being to lay a lot of eggs.
Once she has been attended to, these busy workers will go about the task of enlarging and enhancing the anthill. First they will provide a place for the brood. Those that live in the earth tunnel chambers in the soil; these are logically referred to as nurseries. Here the eggs and smaller larvae are cared for.
Insect development consists of three stages, the first of which is the egg. Eggs are carried to nurseries as soon as they are laid. Each chamber differs in temperature and humidity. In order for the eggs to develop properly they must have a temperature of 77 degrees F (Colony '89). Nurses move the eggs from room to room. These chambers are often found in the deepest recesses of the colony. When the nurses lick them, the sticky ant saliva causes the eggs to cluster together for easier carrying. After 14 days this first stage is complete as the tiny larvae hatch (Colony '89). These larvae lack legs and eyes and hardly resemble adult ants. The helpless infants rely on the nurses to feed and clean them. This developmental stage requires a temperature of 82 degrees F with high humidity; as a result the larvae are stuck together and carried about just as the eggs are (Colony '89). They receive a special diet as well. For the next 8 to 20 days the larvae grow quickly (Colony '89). So quickly, in fact, that they will grow right out of their skin. "Bursting at the seams," they slither out of it as snakes do. When this has taken place four or five times they enter the third stage and pupate. The larvae excrete a white solution which quickly solidifies upon contact with the air. This is spun into a protective cocoon, which looks very much like a large egg. For an unknown reason, there are a number of larvae which go through pupation without a cocoon. Their colorless legs and antennae are pushed helplessly to their bodies, giving the same appearance as their counterparts within cocoons.
In a dry location of 86 degrees F, they finish up their childhood near the surface of the anthill, where they may be seen from the outside. After two to three weeks in the cocoon the transformation is complete. As the young ants gnaw a hole from the inside, the nurses are alerted to their condition and aid them in escaping. For the first few days the exoskeleton has not hardened, so the young ant's body is soft. Its chest (thorax) is light brown, its legs are pale, and its head and abdomen are gray. Still vulnerable, if they are in danger they are swept to safety by nurses.
The body of an ant is divided into three segments: the head, thorax, and abdomen. On the head are antennae, eyes, and mouth parts. The tiny feeler-like antennae are perhaps the Swiss Army knife of the insect world, as they enable the ant to touch, taste, smell and sense vibrations. These antennae are also used to help the ants communicate with each other.
All worker ants have two compound eyes. These sense organs are made of many lenses set close together, each lens seeing a tiny part of what the creature is looking at; the combined effect is a fragmented picture of the whole object. This means of vision is beneficial to ants because it enables them to see movement very easily. Males and queens do not, however, need such a complex system. They have three simple eyes on the top of their heads, called ocelli, which distinguish between light and dark (Groliers '93).
The two primary mouth parts are mandibles and maxillae. Mandibles are a moving jaw like apparatus. These are used for fighting, digging and carrying objects. The smaller maxillae reside behind the mandibles and chew food. On the front of the maxillae is a row of tiny hairs which operate like a comb to clean the legs and antennae.
The middle section is called the thorax; here the heart is located, as are three pairs of legs. The wings of unmated queens are attached here as well. Two tiny hooks on each leg enable the ant to climb vertically and upside down. Some use the front claws to tunnel underground. A tiny row of hairs on the front legs serves the same purpose as those on the maxillae.
There are two pieces which make up the abdomen: the waist-like petiole and an enlarged segment which is called a gaster. The petiole is made up of one or two movable segments with humps on top and connects the gaster to the thorax. An ant's gaster contains a crop and intestine. Some varieties may also contain a poison gland, filled with formic acid that can be sprayed at a moment's notice. This substance has proven very useful to people, as it may be used as an insecticide, antibiotic, preservative, and disinfectant. Ants were originally the sole industrial source, but it can now be artificially produced. Contact with minimal doses of the ants' product is not harmful to humans, but the mass doses of thousands of ants can suffocate a person (Colony '89).
Ants digest liquids only. Chewed food is moved to a pouch just below the mouth; contractions squeeze the juices out and they are swallowed. Solids are regurgitated, and liquids are stored in the crop. Now when the ant is hungry, food from the crop will travel through a small valve to the intestine, where it can nourish the body. The crop lies just within the gaster and has thin elastic walls. A full crop is large enough that this process can happen several times before the food supply is seriously depleted.
Due to the many specialized roles in the ant community, not all members are in charge of the important task of gathering food. As a result these gatherers must feed the other members of their community. The means employed to accomplish this task are unique and intriguing. A hungry ant uses its antennae and legs to tap and stroke a food gatherer on the head. Following this signal the two ants will put their mouths together and food is passed from the crop of the gatherer to the hungry member; this is called mutual feeding, or trophallaxis. An ant with a full crop can distribute food to 8-10 others in this way, and as these share their supplies one ant can feed up to 80 others (Groliers '93).
Ants have an elaborate system of communication, which includes visual, auditory, tactile, gustatory and olfactory signals (Groliers '93). While eating, many animals socialize and communicate. Few, however, are able to learn so much from their meals. Modern science has discovered the importance of this method of feeding. People used to believe that ants were able to work together as they do because they were highly intelligent insects; we now know that this is not the case. Although they are capable of learning, ants as individuals are not particularly intelligent at all. Secretions received with the shared food tell the ant what to do. These substances come from secretions the ants have picked up by licking the body of the queen and her brood. Nest mates constantly feed, lick and touch each other, so these secretions are passed all around the colony. These vital secretions act as memos in a large office building. Because each colony has its own individual scent, they help ants to identify each other by smell and touch. They tell an ant everything from what jobs need doing in the nest to communicating excitement and danger.
Special glands enable various ants to give off an alarm secretion, lay trails and attract sister workers to a new food source; this olfactory communication is made possible through the release of chemicals called pheromones. So it is not special intelligence which enables ants to communicate as they do but the passing of and ability to react to secretions, which keep up a bond between colony members and help them work together.
Across the many different species there are various specialized colonies and means of nesting. In the majority of cases ants live in the soil, in wood or in any number of natural cavities. Some nomadic army ants may form temporary nests, or bivouacs, consisting entirely of the ants themselves, a living suspended ball (A Closer Look '75). Other ants build "carton nests" of plant tissue. African weaver ants make their nests of living leaves bound by larval silk. Others form a symbiotic relationship with acacia trees, eating from the plants and guarding against other destructive insects and competitive vegetation.
Many ants also have specialized ways of obtaining food. Nomadic army ants raid and retrieve in groups; these large species live predominantly on other organisms. They forage en masse and are therefore able to overtake much larger prey.
Fungus-growing ants are highly specialized herbivores that "cultivate subterranean fungus gardens on fecal or plant-derived substrates." These ants live solely on fungus. "Leaf cutters" gather green leaves, which they chew and grow fungus on.
Harvester ants feed on seeds. Living in hot dry climates, they construct elaborate nests up to 2 m below the earth, devoting massive chambers entirely to the storage of seeds, which are often topped off with a layer of gravel and sand, much as the ancient Egyptians protected their grain supplies. Harvesters often husk collected seeds before storing them (Groliers '93).
Gatherers and herders gather plant liquids directly from wounds and nectaries. Others collect honeydew, a substance excreted by aphids, which feed on plant juices. The aphids are unable to digest many of the nutrients from these juices, which are beneficial to the diets of the ants. Thus, in exchange for protection from enemies, the "cow" allows the ant to feed off of its excretions.
Perhaps the most interesting, however, are the parasitic and slave-making ants. Two or more species may form joint nests in which the broods are separated, and the parasitic species obtains food from the host species. In another category, called mixed colonies, the broods are mixed and cared for as one. Some parasitic ants are permanent residents of the host colony and are so specialized that they have lost the worker caste; here slave making may result. Perhaps the most blatant exploitation of one species by another found in nature, aside from that practiced by humans, is that of the slave-making species. These raid other colonies and steal worker pupae, which they enslave to carry out the work of their colonies. Some species, such as the Amazon ants, are so specialized for capturing slaves that they cannot forage for food or care for their young. Without slaves they quickly perish.
Ants are often called the most fascinating insects of all. While they can be vastly destructive, stripping valuable trees bare in the tropics, and a general nuisance marching through kitchens and pantries, they are extremely helpful to man, as they help to clear the earth of pests like termites. Wood ants clear forests of millions of tree-destroying insects over a single summer. Ants have been here for approximately 53 million years; 56 percent of the genera represented in the extensive Baltic amber deposits are still living today, and they show no sign of dying out soon. In our great pursuit of knowledge it is my hope that we can derive something of value from studying the culture and life-style of the hardest-working organisms in the world. (With the exception, of course, of the Villanova biology teachers.)
f:\12000 essays\sciences (985)\Biology\Army Ants.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Anthony Palmieri
November 20, 1996
Contemporary Science Topics
Army Ants
As Lewis Thomas once wrote, "Ants are so much like human beings as to be an
embarrassment. They farm fungus, raise aphids as livestock, launch armies into war, use
chemical sprays to alarm and confuse enemies, and exchange information ceaselessly.
They do everything but watch television." I am going to focus this report on the part of the
quote, "..launch armies into war..," which sets up a metaphor between ants and the armies of today's
society. Ants have many tactics, so to speak, that are similar to those our armies use
when going to war.
Ants have many different roles in their society. One of the main roles of army ants,
or soldier ants, is to forage in masses for food. These masses of ants travel
together and are able to overcome and capture other social insects and large arthropods;
they may occasionally kill larger animals, but they do not eat them. As the need for food for
the larvae increases, food-gathering raids become more intense.
The hunting raids made by ants are carried out by "armies" of thousands of ants that
set out from the bivouac in various directions. They form two or three parties going out
simultaneously in different directions for 100 yards or more. In the U.S. Army we attack a
country in different areas to weaken the force we are attacking. We send out thousands of
troops in various directions and try to surround the location being attacked. For
instance, if there are several locations that need to be attacked to weaken the enemy,
such as a weapon storage site or an air force base, we send separate sets of troops to attack each
individual location. This is very similar to the way army ants set out on a hunting raid: they
will send out thousands of ants at once in two or three different directions.
When ants go out on their raids, a subgroup of doryline ants walks along the
margins of the trails as though protecting the smaller individuals in the center. These are
large soldiers that broaden the trail where it follows a narrow ledge of bark and twigs, or
smooth the path where it crosses a rough place, and they do this with their own bodies. They do
this because footing for the large ants is better along the margins than in the midst of the dense
mass of scurrying ants.
When the Army wants to invade or occupy a country, it usually sets up aircraft
carriers in the surrounding oceans and stations air forces in neighboring countries. It does
this to protect the inner forces of troops and to clear a route for them to attack. We used
this type of tactic during the Persian Gulf War, when we sent aircraft carriers into the Persian
Gulf and the Mediterranean and set up air forces and troops in the neighboring countries to
prepare an attack. We later launched sea and air attacks to weaken the forces in Iraq. We
need these forces surrounding the area to launch missions that destroy or damage the most powerful
targets, and then we send in the troops to take care of the rest, such as taking prisoners or
recovering any of our hostages.
When the ants are sent out from the bivouac, the leading ants have no odor trail to
follow. They often hesitate and hold back the advance, but the pressure built up from behind
forces one side of the front line to bulge forward. As this movement slows down because of
the relief of pressure behind it, a new bulge develops and extends forward in another part of
the front. The result is a series of advances of different parts of the front, which suggests
flanking movements. The presence of prey will accelerate the advance, but the advance
slows as the prey is dismembered. Insects that the ants come upon are attacked
very quickly by the mass of ants rushing upon them, and their pieces are brought back to the
bivouac for food and other sources of energy.
When the United States Army is sent out from the homeland or a base, it behaves very much
like the army ants. The Army sends a front line, as do the ants, which is usually made
up of tanks and armed troops. As they advance closer to the site of the attack, the front line
slows down until a backup force of a second and third line reaches the site. This is very similar
to the ants' flanking motion. The presence of the enemy accelerates the attack from the
masses of troops and tanks that were formed. When the enemy has been brought under control, the attack
slows down and any hostages or injured troops are brought back to the home base. The army
ants have a very similar tactic in the presence of prey.
Both the army ants and the United States Army have many of the same military
tactics when going into war. It is amazing that ants and humans have some of the same
styles of launching wars and yet we are very different in physical attributes.
f:\12000 essays\sciences (985)\Biology\australopithiceus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By: Chris Stewart
Date: December 15th 1996
Biology
Australopithecus
There are many types of the hominid called australopithecus, which means southern ape. These were small ape-like creatures (with a height between 107 cm and 152 cm) that showed evidence of walking upright. It is difficult to tell whether these beings are "humans" or "apes", since many of their characteristics are split between humans and apes. The many species of australopithecus include A. (Australopithecus) ramidus, A. anamensis, A. afarensis, A. africanus, A. aethiopicus, A. robustus, and A. boisei. The oldest known and identified species of australopithecus that roamed the earth was A. ramidus, which lived about 4.5 million years ago. Next came A. anamensis, A. afarensis, A. africanus,
A. aethiopicus, A. boisei and A. robustus. Australopithecus boisei roamed the earth as recently as 1.1 million years ago and was on earth at the same time as Homo habilis and Homo erectus. Most of the australopithecus fossils that have been discovered have been found in eastern Africa and have been dated between 4.5 million and 1.1 million years old. There have also been claims that australopithecus "man" lived in Australia, where fossils have reportedly been found. The first discovery of an australopithecus fossil was made in 1924. The body of australopithecus is smaller than that of humans, but bigger than that of chimpanzees. The brain of australopithecus, at about 475 cubic centimeters, is smaller than that of humans but bigger than that of chimpanzees, and it was not well developed in most areas, for example speech. The australopithecus species all had mostly the same features, with a low forehead, a "bony ridge" over the eyes, a flat nose and no chin. Their jaws stuck out and they had large back teeth. Even today, many more species of australopithecus are being discovered. Australopithecus anamensis was only named in August 1995, even though it is one of the oldest species of australopithecus (living between 4.2 and 3.9 million years ago).
f:\12000 essays\sciences (985)\Biology\Bacteria.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Antibiotic Resistance in Bacteria
For about 50 years, antibiotics have been the answer to many bacterial infections.
Antibiotics are chemical substances that are secreted by living things. Doctors prescribe these
medicines to cure many diseases. During World War II, they treated one of the biggest killers
of wartime - infected wounds. It was the beginning of the antibiotic era. But just as
antibiotics were being mass-produced, bacteria started to evolve and became resistant to these
medicines.
Antibiotic resistance can be the result of different things. One cause of resistance is
overuse of the drugs. There are people who believe that whenever they get sick, antibiotics are the answer.
But the more often a drug is used, the less effect it has, because
the bacteria have found a way to avoid the effects of that antibiotic. Another cause of
resistance is the improper use of drugs. When patients feel that the symptoms of their disease
have improved, they often stop taking the drug. Just because the symptoms have disappeared
does not mean the disease has gone away. Prescribed drugs should be taken until all the
medicine is gone so the infection is completely eliminated. If it is not, the
bacteria are simply given time to find a way to avoid the effects of the drug.
One antibiotic that will always have a lasting place in history is penicillin. This was
the first antibiotic ever to be discovered. Alexander Fleming was responsible for the
discovery in 1928. In his laboratory, he noticed clear spots in some of the bacterial colonies
he was growing. He realized that something had killed the bacteria in these
clear spots, and it turned out to be a fungal growth. He then discovered that this mold contained
a substance that killed bacteria: the antibiotic penicillin.
Penicillin became the most powerful germ-killer known at that time. Antibiotics kill
disease-causing bacteria by interfering with their processes. Penicillin kills bacteria by attaching
to their cell walls and destroying part of the wall; the cell wall breaks apart and the bacterium dies.
Four years after drug companies started to mass-produce penicillin in 1943, the
first signs of penicillin-resistant bacteria started to show up. The first bacterium to fight off
penicillin was Staphylococcus aureus. This bug is usually harmless but can cause
illnesses such as pneumonia. In 1967, another penicillin-resistant bacterium appeared:
pneumococcus, which broke out in a small village in Papua New Guinea. Other penicillin-resistant
bacteria that have formed are Enterococcus faecium and a new strain of gonorrhea.
Antibiotic resistance can arise through a mutation of the bacterium's DNA, or through DNA acquired from
another, drug-resistant bacterium by transformation. Penicillin-resistant bacteria can
alter their cell walls so that penicillin cannot attach to them. The bacteria can also produce
enzymes that take the antibiotic apart.
Because antibiotics became so successful, all other strategies for fighting bacterial diseases
were put aside. Now that the effectiveness of antibiotics is decreasing and antibiotic resistance is
increasing, new research on how to battle bacteria is starting.
Antibiotic resistance spreads fast, but efforts are being made to slow it. Improving
infection control, discovering new antibiotics, and taking drugs more appropriately are ways to
prevent resistant bacteria from spreading. In developing nations, approaches being taken to
control infections include hand washing by health care workers and identifying drug-resistant
infections quickly to keep them away from others. The World Health Organization has begun a
global computer program that reports any outbreaks of drug-resistant bacterial infections.
In the early 1900's, the discovery of penicillin began the antibiotic era. People thought
they had finally won the battle with bacteria. But now that antibiotic resistance is increasing
rapidly, new strategies must be developed to destroy these microbes. To many scientists the
antibiotic era is over.
Bibliography
Bylinsky, Gene. Sept. 5,1995. The new fight against killer microbes.
Fortune. p. 74-76.
Dixon, Bernard. March 17,1995. Return of the killer bugs.
New Statesman & Society. p. 29-32.
Levy, Stuart B. Jan. 15,1995. Dawn of the post-antibiotic era?
Patient Care. p. 84-86.
Lewis, Ricki. Sept. 1995. The rise of antibiotic-resistant infections.
FDA Consumer. p. 11-15.
Miller, Julie Ann. June 1995. Preparing for the postantibiotic era.
BioScience. p. 384-392.
f:\12000 essays\sciences (985)\Biology\Bacterial Growth.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Biology II
1996
The Effects of Antibiotics on Bacterial Growth
Bacteria are the most common and ancient microorganisms on earth. Most bacteria are microscopic, measuring 1 micron in length. However, colonies of bacteria grown in a laboratory petri dish can be seen with the unaided eye.
There are many divisions and classifications of bacteria that assist in identifying them. The first two types of bacteria are archaebacteria and eubacteria. Both groups have common ancestors dating to more than 3 billion years ago. Archaebacteria live in environments where, because of the high temperature, no other life can grow. These environments include hot springs and areas of volcanic activity. Archaebacteria contain unusual lipids and lack certain chemicals in their cell walls. Eubacteria are all other bacteria. Some of them are phototrophic, i.e. they use the sun's energy to make food through the process of photosynthesis.
Another classification of bacteria is according to their need of oxygen to live. Those who do require oxygen to live are considered aerobes. The bacteria who don't use oxygen to live are known as anaerobes.
The shape of specific bacteria provides for the next step in the identification process. Spherical bacteria are called cocci; the bacteria that have a rodlike shape are known as bacilli; corkscrew shaped bacteria are spirilla; and filamentous is the term for bacteria with a threadlike appearance.
Hans Christian Joachim Gram, a Danish microbiologist, developed a method for distinguishing bacteria by their different reaction to a stain. The process of applying Gram's stain is as follows: the bacteria are stained with a violet dye and treated with Gram's solution (1 part iodine, 2 parts potassium iodide, and 300 parts water). Ethyl alcohol is then applied to the medium; the bacteria will either preserve the blue color of the original dye or they will obtain a red hue. The blue colored bacteria are gram-positive; the red bacteria are identified as gram-negative.
Bacteria contain DNA (deoxyribonucleic acid) just like all cells. However, in bacteria the DNA is arranged in a circular fashion rather than in strands. Bacteria also contain ribosomes which, like in eukaryotic cells, provide for protein synthesis. In order for a bacterium to attach itself to a surface, it requires the aid of pili, or hairlike growths. Bacteria, just like sperm cells, have flagella which assist in movement. But, sperm cells only have one flagellum, whereas bacteria contain flagella at several locations throughout their body surface.
Although most bacteria are not harmful, a small fraction of them are responsible for many diseases. These bacterial pathogens have affected humans throughout history. The "plague", an infamous disease caused by bacteria, has killed millions of people. Likewise tuberculosis, a disease responsible for many deaths, is caused by bacterial pathogens taken into the body.
Bacteria affect everyone in their daily life because they are found nearly everywhere. They are found in the air, in food, in living things, in non-living things, and on every imaginable surface.
Escherichia coli is a disease-causing gram-negative bacillus. These bacteria are commonly found within the intestines of humans as well as other vertebrates. This widespread bacterium is known to cause urinary tract infections as well as diarrhea.
Micrococcus luteus is a gram-positive parasitic spherical bacterium which usually grows in grapelike clusters. This species is commonly found in milk and dairy products as well as on dust particles.
Bacillus cereus is a spore-forming, gram-positive, rod-shaped bacterium. Because it is known to survive cooking, it is a common cause of food poisoning and diarrhea.
Serratia marcescens is a usually anaerobic bacterium consisting of gram-negative rods. It feeds on decaying plant and animal material and is found in water, soil, milk, foods, and certain insects.
Because some bacteria are harmful to the body, certain measures can be taken in order to inhibit their growth and reproduction. The most common form of bacteria-fighting medicine is the antibiotic. Antibiotics carry out the action which their Greek origin suggests: anti meaning against, and bios meaning life. In the early part of the 20th century, a German chemist, Paul Ehrlich, began experimenting with organic compounds to combat harmful organisms without causing damage to the host. The results of his experimentation began the study and use of antibiotics to fight bacteria.
Antibiotics are classified in various ways. They can be arranged according to the specific action they have on the cell: for example, certain antibiotics attack the cell wall, others concentrate on the cell membrane, but most obstruct protein synthesis. Another way of indexing antibiotics is by their actual chemical structure.
Practically all antibiotics deal with the obstruction of synthesis of the cell wall, proteins, or nucleic acids. Some antibacterials interfere with the messenger RNA, consequently mixing up the bacterial genetic code.
Penicillins act by inhibiting the formation of the cell wall. This antibiotic works most effectively against gram-positive streptococci and staphylococci (e.g. Micrococcus luteus), as well as certain gram-negative bacteria. Penicillin is usually prescribed to treat syphilis, gonorrhea, meningitis, and anthrax.
Tetracycline inhibits protein synthesis in pathogenic organisms. This antibiotic is obtained from cultures of Streptomyces.
Streptomycin is an antibiotic agent obtained from Streptomyces griseus. It acts by limiting normal protein synthesis and is effective against E. coli, gram-negative bacilli, and many cocci.
Neomycin is an antibiotic derived from a strain of Streptomyces fradiae. Neomycin effectively destroys a wide range of bacteria.
Kanamycin is an antibiotic substance derived from Streptomyces kanamyceticus. Its antibacterial action is very similar to that of neomycin. Kanamycin works against many aerobic gram-positive and gram-negative bacteria, especially E. coli. Protracted use may result in auditory and other damage.
Erythromycin is an antibiotic produced by a strain of Streptomyces erythraeus. It works by inhibiting protein synthesis but not nucleic acid synthesis. Erythromycin has inhibitory effects on gram-negative cocci as well as some gram-positive bacteria.
Chloramphenicol is a clinically useful antibiotic for combating serious infections caused by certain bacteria when more hazardous means of solving the problem would otherwise be needed. In lab tests, this medicine has been shown to stop bacterial reproduction in a wide range of both gram-positive and gram-negative bacteria. The inhibition of cell reproduction caused by chloramphenicol takes place through interference with protein synthesis.
An experiment was conducted to determine which antibiotics are most effective in inhibiting bacterial growth. First, the different bacteria were plated on agar in petri dishes. Then, antibiotic discs were placed into the dishes. Each bacterium was exposed to every one of the antibiotics listed above. The bacteria used in the experiment were Bacillus cereus, Escherichia coli, Serratia marcescens, and Micrococcus luteus.
After a 24 hour incubation period, the results were measured. In order to determine which antibiotic had the most effect their zones of inhibition were recorded. The zone of inhibition refers to the distance from the disc to the outermost section around the disc where no bacterial growth was present. The results can be seen on the graph and data chart.
The following table shows the zone of inhibition of each antibiotic in each bacterial culture:

               Tetracycline  Chloramphenicol  Kanamycin  Neomycin  Penicillin  Streptomycin  Erythromycin
B. cereus      5.5           9                5          6.6       1           7             13
E. coli        7             4.2              5.5        4.5       no effect   4.6           no effect
S. marcescens  no effect     no effect        4.5        4         no effect   3             no effect
M. luteus      23            22               10         11        23.5        11.5          19
Analysis of the data shows that each antibiotic had a distinct effect on the growth of the different bacteria. The results of this experiment are important because they show how each bacterium reacts to different antibiotics. This is valuable information, since it is what assists physicians in prescribing certain medications to cure diseases caused by bacteria.
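To make the comparison above easier to repeat, here is a minimal Python sketch (not part of the original report) that stores the inhibition-zone table and reports the most effective antibiotic for each bacterium. "No effect" is treated as a zone of 0, and the figures carry whatever units were used when the zones were measured.

# Minimal sketch: tabulating the zone-of-inhibition results above and
# picking the most effective antibiotic for each bacterium.
# "no effect" is recorded as a zone of 0.
zones = {
    "B. cereus":     {"Tetracycline": 5.5, "Chloramphenicol": 9,   "Kanamycin": 5,   "Neomycin": 6.6,
                      "Penicillin": 1,     "Streptomycin": 7,      "Erythromycin": 13},
    "E. coli":       {"Tetracycline": 7,   "Chloramphenicol": 4.2, "Kanamycin": 5.5, "Neomycin": 4.5,
                      "Penicillin": 0,     "Streptomycin": 4.6,    "Erythromycin": 0},
    "S. marcescens": {"Tetracycline": 0,   "Chloramphenicol": 0,   "Kanamycin": 4.5, "Neomycin": 4,
                      "Penicillin": 0,     "Streptomycin": 3,      "Erythromycin": 0},
    "M. luteus":     {"Tetracycline": 23,  "Chloramphenicol": 22,  "Kanamycin": 10,  "Neomycin": 11,
                      "Penicillin": 23.5,  "Streptomycin": 11.5,   "Erythromycin": 19},
}

for bacterium, results in zones.items():
    best = max(results, key=results.get)          # antibiotic with the widest zone
    print(f"{bacterium}: most effective = {best} ({results[best]})")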
Bibliography
1) Encarta Encyclopedia 1994, CD-ROM.
2) McGraw-Hill Encyclopedia of Science and Technology, 1992.
3) Physicians' Desk Reference, 1996.
f:\12000 essays\sciences (985)\Biology\Bacteriological Rep .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE AMERICAN UNIVERSITY IN CAIRO
BIOLOGY DEPARTMENT
SCIENCE 453 : BIOLOGY FOR ENGINEERS
REPORT No.1
BACTERIA CLASSIFICATION BY GRAM STAINING
Presented By : Karim A. Zaklama
92-1509
Sci. 453-01
24/2/96
Objective:
To test a sample of laboratory-prepared bacteria and categorise it into Gram's gram-positive and gram-negative classes, and also, by viewing it under a high-powered microscope with oil immersion, to classify its shape and note any special characteristics.
Introduction:
Bacteria were categorised into two groups in 1884 by the Danish bacteriologist Hans Christian Gram, using a staining technique: bacteria that retain the Crystal Violet stain when treated with alcohol (avoiding de-coloration) are classed as gram positive, while bacteria that are de-coloured are classed as gram negative. The class can be read from the final colour of the bacteria: violet where gram positive, and pink, from the Safranin added following the de-colouring step, where gram negative.
Materials:
1. Bacteria Sample
2. Microscope Slide
3. Gram Staining Kit and Wash Bottles
a. Crystal Violet Solution
b. Iodine Solution
c. 95% Ethyl Alcohol
d. Safranin
e. Distilled Water
4. Bibulous Blotting Paper
5. Microscope
6. Oil
Procedure:
A. Preparation :
1. Bacteria is cultivated on agar jelly in an incubator at 25°C for 24 hours.
2. Obtain a microscope slide and with a toothpick, smear a thin coat of the bacteria sample onto the slide
3. Cover the smear with a drop of Crystal Violet and leave standing for 20 seconds
4. Wash off the stain with distilled water; drain and blot off the excess with bibulous paper.
5. Apply Gram's Iodine on the smear and leave to stand for 1 minute.
6. Drain the excess iodine and apply 95% Ethyl alcohol for a 20-second duration, or until the alcohol runs clear from the slide.
7. The smear should be rinsed for a few seconds with distilled water to stop the action of the alcohol.
8. Drain and blot off the excess with bibulous paper.
9. Introduce Safranin to the smear and leave standing for 20 seconds.
10. Wash off the stain with distilled water; drain and blot off the excess with bibulous paper.
11. Leave the slide to air dry.
B. Examination:
1. Place the slide under microscope on low powered lens.
2. Move the slide using the apparatus until the sample can be seen as a blur under the microscope.
3. Focus the lens to ensure that there is a sample directly under the lens.
4. Move to higher powered lens, repeat step 3.
5. Move to higher powered lens, repeat step 3
6. Move microscope aside and add Oil immersion, leave for a few seconds and re-examine the slide.
Note Shape and colour and any other observations.
Results and Observations:
It was evident by visual examination that the alcohol was de-colouring, or at least partially de-colouring, the bacteria.
The sample appeared a dark pink or close to violet by the naked eye; a microscope was needed to ensure results.
Under the low powered microscope shades of pink were noted.
Under the medium power, the shades were more clear but no shape could be made out.
Under the high powered microscope, clumps of pink rod-shaped (bacillus) bacterial cells could be observed.
Under oil immersion with the high powered lens, the cells could be seen more spaced out, giving a clearer indication of the pink colour and bacillus shape, and spores could be made out in the individual cells.
Conclusion:
The shape was noted as bacillus (rod-like) cells, a shape that occurs in both gram-negative and gram-positive bacteria, so shape alone does not decide the class.
The cells were stained pink by the Safranin, showing de-coloration of the crystal violet and indicating that the bacteria belong to the gram-negative class.
Under oil immersion the cells became more sparse, and under the high powered lens spores could be seen, as little bubbles, in the cells. This tells us that the bacteria were in their terminal state.
The presence of spores in the bacteria at this terminal state suggests that the culture could be an old one. Old cultures of gram-positive bacteria also tend to de-colour, though more slowly than gram-negative bacteria. Since the speed of de-coloration was not inspected very closely, no firmer conclusion could be reached; it is possible that this is an old culture of bacillus-shaped gram-positive bacteria.
Recommendation:
It is recommended that the same sample be tested again for de-coloration, focusing on de-coloration speed. Fast de-coloration would mean the sample is definitely gram negative; slow de-coloration would tell us it is gram positive.
For future samples it is recommended to incubate the bacteria for this specific test for only 16 hours, as recommended, to avoid the presence of old cultures, which are anomalous to this test.
f:\12000 essays\sciences (985)\Biology\Becoming an Ecologist is an Exciting Venture.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Becoming an Ecologist is an Exciting Venture
Because of the increasing changes in the environment, a career as an ecologist is an important venture, especially for an earth-science oriented person with a love for nature and animals. With the number of ecological disasters escalating every year there is an ever-increasing need for ecologists and people trained in ecology. Along with these disasters, hundreds of animals and plants are disappearing off the planet every day. There is also an increasing demand for people with the training to care for, rehabilitate and then return injured animals to the wild, which is a prime responsibility of an ecologist. Ecologists mainly study the ways in which mankind is destroying the natural ecosystems of the earth and how people can help to revive them. Louise Miller once said that ". . . the ecologist is the one that brings together the study of all natural systems- earth, air, water, plants, and animals. Connections between living organisms and effects of their interactions are ecologists' concerns. . . . .The balance of nature, wherever it occurs, is what you will investigate and analyze"(17).
Since a career as an ecologist is usually long term, there are certain characteristics a person should have in order to maintain a successful career. The most important of these characteristics is patience. Patience is important because, as an ecologist, a person will at some point in their career have to talk to people who know nothing about ecology. Patience also matters because, through the first years of being an ecologist, finding and possibly tagging animals and gathering research material will be the main
jobs of that person, and they will take a very long time to accomplish. For a person just starting out as an ecologist, manual dexterity is just as important as patience. Later on in a career as an ecologist, however, many other traits will have to surface. Some of these traits include a sense of professionalism, enthusiasm for the work at hand, deep concern for the world, curiosity and dedication. Skills common to most of the ecologists in the world today are creativity and problem solving, which are just as important as, and probably even more important than, the rest. A deep love for the surrounding environment is one of the best and most desirable characteristics a person who wants a job as an ecologist could have.
Pre-training occurs when a person volunteers their services to an organization so that they can gain the information they need to perform their job more efficiently. This information includes how to conduct research, how to track animals, and how to clean up disasters in new ways. Getting to see the animals and natural settings that a person will most likely be working with, or around, is also something they must become fluent in. Many organizations, from the scouts to the World Wildlife Foundation, can use the services of ready, able-bodied and able-minded people to help them conduct research and draw conclusions. Through these organizations a person can make valuable contacts that can help them get a job in the environmental field or earn a promotion later in their career.
Because of the changes in the world around us, a person in the field of ecology must stay focused on all of the upcoming and new technologies in the world today. A person needs to have a formal education of at least a bachelor's degree. A person will also need at least some experience in conducting research so they will be able to take advantage of certain opportunities in the future. While a person is still living at home, they can find new and inventive ways to
apply their knowledge of the environmental sciences. Planting and raising a garden, or designing a garbage disposal system that separates the different recyclables, are some of the things a person can do. A person can leave high school at the age of sixteen and get a job in some practical part of ecology, such as a greenhouse attendant or a tour guide at a local nature reserve. However, a person who continues their education will have a better chance of getting a job that they want much more. These are all reasons why Fanning says, "While still in high school take a well rounded program including biology, mathematics, physics, geology, chemistry, social sciences and humanities"(30). A college education is mandatory, with an emphasis on biology. That person should also round out their education with all of the other science classes they can take, along with social sciences and mathematics classes. To become an ecologist it is mandatory to have at least a bachelor's degree in an environmental science. However, a person with a master's or even a doctorate will be hired before a person with only a bachelor's degree. By combining pre-training and a college education a person builds a solid basis for their future career.
Ecologists mainly focus their attention on the ecosystems of the world and the impacts that man and energy development have on the environment. They also try to understand the links between organisms and their environments. As the Encyclopedia of Vocational Guidance says about ecologists, "...a primary concern of ecologists today is to study and attempt to find solutions for disruption in various ecosystems. Increasingly, an area of expertise is the reconstruction of ecosystems"(687). Many of the duties of a person in the field of ecology are so important that many things in this world could not be done without them. Most of the working conditions for a person who goes into
the field of ecology will mainly be outdoors with Mother Nature. Peter Newman once said that ". . .you will be pioneering what has become one of the most dynamic of the new career opportunities today"(18). Many state and federal agencies hire technicians to gather data from backwoods city landfills or even lake and ocean bottoms. An entry-level ecologist will usually be concentrating on hands-on work, going out to clean up and care for an area. However, through the seasons an entry-level ecologist may be forced to transfer frequently. After many years as an ecologist working outdoors, he or she may be stuck behind a desk for some time. Within a couple of years an ecologist may be responsible for taking care of an entire woodland. Some of the main job duties of an ecologist are caring for and raising animals, tagging animals, and collecting research materials.
There are many different jobs available for an ecologist, in the private, federal, and public sectors. Because salaries range from state to state and sector to sector, it is hard to give an exact amount per year. A salary for a person working for the federal government depends on that person's GS level. A person with at least a bachelor's degree in biology or a related subject will probably start at the GS-5 level, where the salary ranges from $16,973 to $22,067. A person at the highest GS level was last paid $97,317. The federal government's pay scale is slightly better than that of many of the states, yet lower than that in the private sector. In the state sector the starting pay is about $12,000 to $34,000, and a person usually ends up with a job paying about $60,000. All of this information came from the following three books: Careers for Nature Lovers and Other Outdoor Types, Opportunities in Environmental Careers, and The Encyclopedia of
Careers and Vocational Guidance.
f:\12000 essays\sciences (985)\Biology\bigbang.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It is always a mystery how the universe began and whether
and when it will end. Astronomers construct hypotheses called
cosmological models that try to find the answer. There are two
types of models: Big Bang and Steady State. However, through
many observational evidences, the Big Bang theory can best
explain the creation of the universe.
The Big Bang model postulates that about 15 to 20 billion
years ago, the universe violently exploded into being, in an
event called the Big Bang. Before the Big Bang, all of the
matter and radiation of our present universe were packed together
in the primeval fireball--an extremely hot dense state from which
the universe rapidly expanded.1 The Big Bang was the start of
time and space. The matter and radiation of that early stage
rapidly expanded and cooled. Several million years later, it
condensed into galaxies. The universe has continued to expand,
and the galaxies have continued moving away from each other ever
since. Today the universe is still expanding, as astronomers
have observed.
The Steady State model says that the universe does not
evolve or change in time. There was no beginning in the past,
nor will there be change in the future. This model assumes the
perfect cosmological principle. This principle says that the
universe is the same everywhere on the large scale, at all
times.2 It maintains the same average density of matter forever.
There is observational evidence that the Big Bang model is
more reasonable than the Steady State model. The first piece of
evidence is the redshifts of distant galaxies. Redshift is a
Doppler effect: if a galaxy is moving away, its observed spectral
lines are shifted toward the red end of the spectrum. The faster
the galaxy moves, the larger the shift. If the galaxy is moving
closer, the spectral lines show a blue shift, and if the galaxy
is not moving, there is no shift at all. As astronomers have
observed, the more distant a galaxy is from Earth, the more
redshift it shows in its spectrum. This means the further away a
galaxy is, the faster it moves.
Therefore, the universe is expanding, and the Big Bang model
seems more reasonable than the Steady State model.
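To illustrate the scale of this argument, the following short sketch (not from the essay) turns a measured redshift into a recession velocity with the Doppler approximation v = cz and then into a distance with Hubble's law d = v/H0. The value H0 = 70 km/s/Mpc is an assumed, commonly quoted figure, and the approximation only holds for small redshifts.

# Illustration of the redshift argument (not from the essay): a larger
# redshift z means a larger recession velocity, and Hubble's law turns
# that velocity into a distance.  H0 = 70 km/s/Mpc is an assumed value.
C_KM_S = 299_792.458      # speed of light in km/s
H0 = 70.0                 # assumed Hubble constant, km/s per Mpc

def recession_velocity(z):
    """Doppler approximation v = c*z, valid for small redshifts."""
    return C_KM_S * z

def distance_mpc(z):
    """Hubble's law: d = v / H0, in megaparsecs."""
    return recession_velocity(z) / H0

for z in (0.01, 0.05, 0.1):
    print(f"z = {z}: v ~ {recession_velocity(z):,.0f} km/s, d ~ {distance_mpc(z):,.0f} Mpc")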
The second observational evidence is the radiation produced
by the Big Bang. The Big Bang model predicts that the universe
should still be filled with a small remnant of radiation left
over from the original violent explosion of the primeval fireball
in the past. The primeval fireball would have sent strong
shortwave radiation in all directions into space. In time, that
radiation would spread out, cool, and fill the expanding universe
uniformly. By now it would strike Earth as microwave radiation.
In 1965 physicists Arno Penzias and Robert Wilson detected
microwave radiation coming equally from all directions in the
sky, day and night, all year.3 And so it appears that
astronomers have detected the fireball radiation that was
produced by the Big Bang. This casts serious doubt on the Steady
State model. The Steady State could not explain the existence of
this radiation, so the model cannot best explain the beginning of
the universe.
Since the Big Bang model is the better model, the existence
and the future of the universe can also be explained. Around 15
to 20 billion years ago, time began. The points that were to
become the universe exploded in the primeval fireball called the
Big Bang. The exact nature of this explosion may never be known.
However, recent theoretical breakthroughs, based on the
principles of quantum theory, have suggested that space, and the
matter within it, masks an infinitesimal realm of utter chaos,
where events happen randomly, in a state called quantum
weirdness.4
Before the universe began, this chaos was all there was. At
some time, a portion of this randomness happened to form a
bubble, with a temperature in excess of 10 to the power of 34
degrees Kelvin. Being that hot, naturally it expanded. For an
extremely brief and short period, billionths of billionths of a
second, it inflated. At the end of the period of inflation, the
universe may have had a diameter of a few centimetres. The
temperature had cooled enough for particles of matter and
antimatter to form, and they instantly destroyed each other,
producing fire and a thin haze of matter--apparently because
slightly more matter than antimatter was formed.5 The fireball,
and the smoke of its burning, was the universe at an age of a
trillionth of a second.
The temperature of the expanding fireball dropped rapidly,
cooling to a few billion degrees in a few minutes. Matter
continued to condense out of energy: first protons and neutrons,
then electrons, and finally neutrinos. After about an hour, the
temperature had dropped below a billion degrees, and protons and
neutrons combined and formed hydrogen, deuterium, and helium.
Within a billion years, this cloud of energy, atoms, and
neutrinos had cooled enough for galaxies to form. The expanding
cloud has cooled still further, until today its temperature is a
couple of degrees above absolute zero.
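The link between "a couple of degrees above absolute zero" and microwave radiation can be checked with Wien's displacement law, lambda_peak = b/T. The short sketch below (an illustration, not part of the essay) shows that a blackbody near 2.7 K radiates most strongly at about one millimetre, squarely in the microwave band that Penzias and Wilson detected.

# Illustration (not from the essay): Wien's displacement law relates a
# blackbody's temperature to the wavelength at which it radiates most
# strongly, lambda_peak = b / T.  At a couple of degrees above absolute
# zero the peak falls in the microwave band.
WIEN_B_MM_K = 2.898       # Wien's displacement constant, mm*K

def peak_wavelength_mm(temperature_k):
    return WIEN_B_MM_K / temperature_k

for t in (2.7, 10.0, 3000.0):
    print(f"T = {t} K -> peak wavelength ~ {peak_wavelength_mm(t):.3f} mm")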
In the future, the universe may end up in two possible
situations. From the initial Big Bang, the universe attained a
speed of expansion. If that speed is greater than the universe's
own escape velocity, then the universe will not stop its
expansion. Such a universe is said to be open. If the velocity
of expansion is slower than the escape velocity, the universe
will eventually reach the limit of its outward thrust, just like
a ball thrown in the air comes to the top of its arc, slows,
stops, and starts to fall. The crash of the long fall may be the
Big Bang to the beginning of another universe, as the fireball
formed at the end of the contraction leaps outward in another
great expansion.6 Such a universe is said to be closed, and
pulsating.
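The open-versus-closed question can also be stated as a density test: the universe eventually recollapses if its average density exceeds the critical density rho_c = 3H0^2/(8*pi*G), and expands forever if it does not. The sketch below (illustrative only; H0 = 70 km/s/Mpc is an assumed value) computes that critical density.

# Illustration (not from the essay): the critical density separating an
# "open" universe from a "closed" one, rho_c = 3 * H0^2 / (8 * pi * G).
import math

G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22               # metres per megaparsec
H0 = 70.0 * 1000 / MPC_IN_M       # assumed Hubble constant, converted to 1/s

rho_critical = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"critical density ~ {rho_critical:.2e} kg/m^3")   # roughly 9e-27 kg/m^3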
If the universe has achieved escape velocity, it will
continue to expand forever. The stars will redden and die, the
universe will be like a limitless empty haze, expanding
infinitely into the darkness. This space will become even
emptier, as the fundamental particles of matter age, and decay
through time. As the years stretch on into infinity, nothing
will remain but a few primitive systems, such as positrons and
electrons orbiting each other at distances of hundreds of
astronomical units.7 These particles will spiral slowly toward
each other until they touch, and they will vanish in a last flash
of light. After all, the Big Bang model is only an assumption.
No one knows for sure exactly how the universe began or how
it will end. However, the Big Bang model is the most logical and
reasonable theory to explain the universe in modern science.
ENDNOTES
1. Dinah L. Mache, Astronomy, New York: John Wiley & Sons,
Inc., 1987. p. 128.
2. Ibid., p. 130.
3. Joseph Silk, The Big Bang, New York: W.H. Freeman and
Company, 1989. p. 60.
4. Terry Holt, The Universe Next Door, New York: Charles
Scribner's Sons, 1985. p. 326.
5. Ibid., p. 327.
6. Charles J. Caes, Cosmology, The Search For The Order Of
The Universe, USA: Tab Books Inc., 1986. p. 72.
7. John Gribbin, In Search Of The Big Bang, New York: Bantam
Books, 1986. p. 273.
BIBLIOGRAPHY
Boslough, John. Stephen Hawking's Universe. New York: Cambridge
University Press, 1980.
Caes, Charles J. Cosmology, The Search For The Order Of The
Universe. USA: Tab Books Inc., 1986.
Gribbin, John. In Search Of The Big Bang. New York: Bantam
Books, 1986.
Holt, Terry. The Universe Next Door. New York: Charles
Scribner's Sons, 1985.
Kaufmann, William J., III. Astronomy: The Structure Of The
Universe. New York: Macmillan Publishing Co., Inc., 1977.
Mache, Dinah L. Astronomy. New York: John Wiley & Sons, Inc.,
1987.
Silk, Joseph. The Big Bang. New York: W.H. Freeman and Company,
1989.
f:\12000 essays\sciences (985)\Biology\Bioethics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BIOETHICS
Progress in the pharmacological, medical and biological sciences involves experimentation on all living species, including animals and humans. The effectiveness of medications, investigative procedures and treatments must at some point be tested on animals and human beings. Although tests are conducted much more frequently on lab animals, especially those most closely related to humans, such tests do not provide sufficient information on their own.
The history of medicine shows that there has always been a need for experimentation on human beings. Examples include the inoculation with smallpox, in 1721, of Newgate prisoners who had been condemned to death. In 1796, Edward Jenner, also studying smallpox, inoculated an eight-year-old boy with pus from a diseased cow. The list goes on, and such experiments continue even today.
Nowadays these experiments would be ethically and legally unacceptable. Nevertheless, there have been clear documented cases of abuse in recent times. An example of this is the experiments conducted by Nazi doctors on prisoners in the concentration camps during the Holocaust.
Does this mean that, since there is potential for abuse, all experimentation should be banned? That would condemn society to remain at its present level of knowledge (the status quo).
Bioethically speaking, how far can we go in the study of the human without crossing the line? The fundamental question is, since we are the ones drawing the line, where do we draw it?
The purpose of this essay is, first, to provide a clear sense of the present law on this issue; second, to review the problems raised by experimentation on animals and to show some different examples of bioethics; third, to present the biblical view of the matter; and finally, to bring the reader to his or her own clear conclusion, without imposing a biased opinion on the matter.
THE CURRENT STATE OF THE LAW
Biomedical experimentation on human subjects raises many complex legal problems that the law must deal with accordingly. For example, infringement on the rules subjects the researcher not only to criminal sanctions, but also civil sanctions (damages for harm caused), administrative sanctions (withdrawal of funds), or disciplinary sanctions (suspension from the researchers' professional association).
Since we are in Canada, there are two categories of law regulating experimentation. The first is federal and provincial legislation. The second consists of documents, codes of ethics and reports which, while not necessarily enforceable, strongly urge researchers experimenting on human subjects to observe certain standards of conduct.
A. FEDERAL AND PROVINCIAL LEGISLATION
The Canadian Charter of Rights and Freedoms governs here. Some of its provisions in effect make certain kinds of experiments illegal. "Any experimental activity which endangers the protected values is thereof illegal."~ Another is according to current case law, "treatment" may be broadly construed rather than being limited to therapy.~
Criminal sanctions dealing with offences against the person make it possible to penalize those causing harm to a subject who has not given valid consent to an experiment. That said, many experiments on humans are legal and performed every day, and no experiment is performed without a purpose. The most common case is during surgery, when patients give valid consent to have experiments conducted on them during the operation.
With respect to medications, citizens of Canada are given protection by the Food and Drug Act. These laws control the entry of new medications into the market. Although this may seem to contain no ethical procedures, it touches on the experimentation carried out prior to the release of a medication. Many animals are used in order to bring these medications to the market, and humans must also have been used during experimentation. According to the law, an experiment performed on a person to bring out a new medication may result in criminal sanction (homicide charges, damages for harm, suspension).
Here are a few examples given by the Charter of Rights and Freedoms.
The experiment should be so designed, and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study, that the anticipated results will justify the performance of the experiment.
The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.
*The voluntary consent of the human is absolutely essential.
B. ETHICAL DOCUMENTS
In 1977, a report on ethics was prepared by the Canada Council. It was responsible for setting out ethical guidelines for people to abide by. Although the report deals with ethics in biomedical studies, it places more emphasis on other issues:
ANIMAL RIGHTS
EXPERIMENTATION ON FETUSES
euthanasia, abortion, genetic engineering
Since the law states that most experimentation performed on animals and humans is unethical yet provides fruitful results, it should be left to the people to make the decision whether or not experimentation should continue and to what extent.
If we are considered to be a moral race, then should we be allowed to make the choice for anyone who cannot make the choice for themselves, just as a mother does for her own child? One who agrees with this statement most likely agrees that we should decide whether or not experimentation can be performed on a creature that has no developed morals or rights. One who disagrees with the aforementioned statement has no question in his or her mind that no experimentation should be performed if it results in harm to the subject, be it a rat or a human.
The essence of this is based on human morals. Since we cannot communicate with specimens other than humans (fetuses, animals, the mentally disabled), we do not know what moral standing these specimens should be granted, so we give them none. Is this fair? We limit ourselves to a certain amount of knowledge if experiments that are considered immoral are not performed. The real question is, again, where do we draw the line? Even if animals are not themselves direct objects of moral concern, there are nonetheless certain things that are not morally justifiable when done to animals.~ On this view, unnecessary cruelty towards animals is forbidden because of the psychological fact that people who brutalize animals will or may tend to behave cruelly towards other people.~
Again, two views can be taken from this point. One is that no experiment that one would not perform on a fellow human should be performed on any animal. The other view is that if an experiment provides positive results and is not cruel to the subject, then it should be allowed to be performed.
Although much abuse and infringement of animals' rights has occurred over the past century in this field of study, that should not stop us now from continued learning.
Here are some examples of abuse of animals and some issues involving bioethics. At the Department of Psychology at MIT, hamsters were blinded in a study showing that "blinding increases territorial aggression in male Syrian golden hamsters."~ At UCLA, monkeys were also blinded to study the effects of hallucinogens on them. In another example, lab rabbits were tested to see how they react to a companion's death. These examples are true and show how far some people will let their curiosity take them. They are not necessary, and such researchers should be suspended.
More examples of bioethics are things like abortion and euthanasia. Genetic engineering, organ transplants, prostheses and artificial insemination are just a few practices that are considered unethical by some and ethical by others. Even surrogate motherhood is considered unethical by some. To give a better taste of what opposing arguments on a bioethical topic look like, the artificial heart will be used as an example. One side says the artificial heart should be used: even though it does not promise the patient an easy life, it does promise life, and that is all patients want to hear, that they are going to live even just another year, month or week. The other side says that the artificial heart is not only unethical but also too expensive, and believes that what G-D giveth, G-D can taketh away. This brings us to the biblical view on the matter.
BIBLICAL VIEW
Often in these days it is said that the primary question is simply that of human survival. Many say that we live on borrowed, and probably brief, time. An "apocalyptic vision of a barren, radioactive, peopleless planet haunts the minds of young people......victims of instant cremation or inexorable, agonized death!"~ This statement refers to society's technological advancements, which are able to leave the world desolate and barren of people, plants and all living creatures.
What does this have to do with the study of bioethics? First, let us show how this relates to biblical text. The study of man has brought us to the possibility of complete desolation of an entire planet. Biblically, man should not interfere with what he has not produced or what does not belong to him. Even life does not belong to man himself, so the choice cannot be made by him to take the lives of others. This is where the study of bioethics comes in. Even if experiments provide fruitful results, they cannot be performed if they involve interfering with what is not rightfully one's own. This is like taking someone's life into your own hands; "playing god," as many say, is a sin. This applies especially to abortion, euthanasia, and any birth control. This leaves society with no room for advancement, yet for a believer in G-D the points sound valid ethically, if more religiously than scientifically. Many people in today's society feel that such a view will or may keep society from helping itself provide better lives. Biblical believers contradict this with a very strong belief in G-D.
Finally, the time has come to reach a conclusion, and the decision is yours to make. The purpose of this essay was not to make the decision for you; it was to show both sides of the argument clearly, without a biased opinion, and to let you, the reader, decide. Ladies and gentlemen, the choice is now yours...
f:\12000 essays\sciences (985)\Biology\Biology AT1 on the affect of hydogen peroxide on the liver.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aim: "To see what factors affect the decomposition of hydrogen
peroxide by the enzyme catalase which is found in the liver"
Introduction: Enzymes are biological catalysts. They speed up the chemical reactions which go on inside living things. Without them the reactions would be so slow that life would grind to a halt. Enzymes work as follows: when a substrate molecule bumps into a molecule of the right enzyme, it fits into a depression on the surface of the enzyme molecule. This depression is called the active site. The reaction then takes place, and the molecules of product leave the active site, freeing it for another substrate molecule.
Hydrogen peroxide is a waste product produced during respiration; it is also produced to kill off dangerous bacteria. The hydrogen peroxide is broken down, so that it can no longer be dangerous, into water (which is given off when you perspire) and oxygen (which can be given off when you exhale).
A toxin like hydrogen peroxide must be broken down because if it is kept in the body for too long it can react with cell walls and damage them or break them down.
The variables that affect the decomposition of hydrogen peroxide are:
1. The temperature of the liver
2. The surface area of the liver
3. The pH of the hydrogen peroxide
4. The concentration of the enzymes
The two variables I am going to look at are temperature and surface area.
Hypothesis: I think that when we test the surface area, the bigger the surface area the quicker the reaction will be. This is because more liver particles will be exposed to the hydrogen peroxide,
so the liver that is ground up will give the quickest reaction.
For temperature, the fastest reaction will be at forty degrees Celsius, because the enzymes in the liver stop working once the temperature goes over forty degrees Celsius. I know this from previous experiments that I have carried out. I think the rate of decomposition will double for every ten degrees Celsius that the liver is warmed up, but when the liver goes over forty degrees Celsius the rate of decomposition will slow down.
Therefore my hypothesis is that the rate of decomposition will be at its fastest with a temperature of forty degrees Celsius and with ground-up liver.
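As a rough way of seeing what this hypothesis predicts, the short sketch below (illustrative assumptions only, not measured data) doubles the relative rate for every ten degrees Celsius up to forty degrees and then assumes a sharp drop once the catalase has stopped working.

# Sketch of the temperature hypothesis only (illustrative, not measured data):
# the predicted rate doubles for every 10 degrees Celsius up to about 40 C,
# where the catalase is assumed to stop working and the rate collapses.
def predicted_relative_rate(temp_c, base_temp=20, cutoff=40):
    """Relative rate of decomposition, taking the rate at 20 C as 1."""
    if temp_c > cutoff:
        return 0.1                 # assumed sharp drop once the enzyme is denatured
    return 2 ** ((temp_c - base_temp) / 10)

for t in (20, 30, 40, 50):
    print(f"{t} C: predicted relative rate = {predicted_relative_rate(t):.1f}")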
Apparatus: Trough, bung, water, burette, hydrogen peroxide, liver
(1.4g), test tube (with delivery tube), beaker, Bunsen burner, thermometer, tripod , gauze, clamp and boss, stand.
Method:
1. Set up the apparatus as in the diagram.
2. Put 1.4g of liver in a test tube with delivery tube. Add sand to all experiments to have a fair test with the liver that is ground up (sand separates the liver easily).
3. Add 10cm³ of hydrogen peroxide to the liver. Replace the bung quickly so that no gas is lost.
4. After 10 seconds record the amount of water displaced in the burette. Record this amount on a table.
5. Repeat steps 1-4, heating the liver to 30°C, 40°C and 50°C.
6. Repeat steps 1-5 with liver weighing 1.4g cut in half, cut into quarters, and ground up.
7. Remember to wear goggles and use all the safety procedures.
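Step 4 gives one gas-volume reading per run; the rate of decomposition is then that volume divided by the ten-second collection time. The short Python sketch below shows how the readings could be tabulated; the volumes used are made-up placeholders, not real results from this investigation.

readings_cm3 = {          # 1.4 g of liver -> gas collected in 10 s (hypothetical values)
    "whole": 4.0,
    "halved": 6.5,
    "quartered": 9.0,
    "ground up": 14.0,
}
collection_time_s = 10.0

print("Condition     Gas (cm3)   Rate (cm3/s)")
for condition, volume in readings_cm3.items():
    rate = volume / collection_time_s   # rate of oxygen production
    print(f"{condition:<12}  {volume:>8.1f}   {rate:>11.2f}")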
f:\12000 essays\sciences (985)\Biology\bioluminescence in fungi.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTRODUCTION
What is Bioluminescence?
The main focus of the current paper is bioluminescent fungi, but the basic features of bioluminescence discussed are common to all bioluminescent organisms. Bioluminescence is simply light created by living organisms. Probably the most commonly known example of bioluminescence to North Americans is the firefly, which lights its abdomen during its mating season to communicate with potential mates. This bioluminescent ability occurs in 25 different phyla, many of which are totally unrelated and diverse, with the phylum Fungi included in the list (an illustration of a bioluminescent fungus is displayed in figure 1). One of the features of biological light that distinguishes it from other forms of light is that it is cold light. Unlike the light of a candle or a lightbulb, bioluminescent light is produced with very little heat radiation. This aspect of bioluminescence especially interested the early scientists who explored it. The light is the result of a biochemical reaction in which a compound called "luciferin" is oxidized, the reaction being catalyzed by an enzyme called "luciferase". The light generated by this biochemical reaction has been used by scientists as a bioindicator for tuberculosis as well as for heavy metals. Ongoing research involving bioluminescence is currently under way in the areas of evolution, ecology, histology, physiology, biochemistry, and biomedical applications.
History of Bioluminescent Fungi
The light of luminous wood was first noted in the early writings of Aristotle, around 382 B.C. (Johnson and Yata 1966; Newton 1952). The next mention of luminous wood in the literature occurred in 1667, when Robert Boyle noticed glowing earth and noted that heat was absent from the light. Many early scientists, such as Conrad Gesner, Francis Bacon, and Thomas Bartolin, observed and made note of luminous earth (Johnson and Yata 1966; Newton 1952). These early observers thought that the light was due to small insects or animal interactions. The first suggestion that the light of luminous wood was due to fungi came from a study of luminous timbers used as supports in mines, by Bishoff in 1823. This opened the way for further study by many other scientists, and by 1855 modern experimental work had begun with Fabre (Newton 1952). Fabre established the basic parameters of bioluminescent fungi, those being:
· The light was produced without heat
· The light ceased in a vacuum, in hydrogen, and in carbon dioxide
· The light was independent of humidity, temperature, and ambient light, and did not burn any brighter in pure oxygen
The work by Herring (1978) found that the luminescent parts of the fungus included the pileus (cap), the hymenium (gills) and the mycelial threads, in combination or separately (figure 2); the individual spores were also seen to be luminescent. Herring also stated that if the fruiting body (mushroom) was bioluminescent, then the mycelial threads were always luminescent as well, but not vice versa.
From the 1850s to the early part of the 20th century, the identification of the majority of fungal species exhibiting bioluminescent traits was completed. Research on bioluminescent fungi stagnated from the 1920s to the 1950s (Newton 1952 and Herring 1978), after which extensive research began into the mechanisms of bioluminescence, and it continues to the present.
The Process of Bioluminescence
Bioluminescence results from a particular biochemical reaction. It can be described as a chemiluminescent reaction, involving a direct conversion of chemical energy into light energy (Burr 1985, Patel 1997 and Herring 1978). The reaction involves the following elements:
· Enzymes (Luciferase) - biological catalysts that accelerate and control the rate of chemical reactions in cells.
· Photons - packets of light energy.
· ATP - adenosine triphosphate, the energy storing molecule of all living organisms.
· Substrate (Luciferin) - a specific molecule that undergoes a chemical change when acted on by an enzyme.
· Oxygen - the oxidizer for the reaction
A simplified formula of the bioluminescent reaction:
ATP (energy) + luciferin (substrate) + luciferase (enzyme) + O2 (oxidizer) → light (photons)
The bioluminescent reaction occurs in two basic stages:
1) The substrate (D-luciferin) combines with ATP and oxygen in a reaction controlled by the enzyme (luciferase). Luciferins and luciferases differ chemically in different organisms, but they all require molecular energy (ATP) for the reaction.
2) The chemical energy released in stage one excites a specific molecule (the luminescent molecule formed by the combination of luciferase and luciferin), raising its energy level. The excited molecule then decays, which is manifested as photon emission and produces the light. The light given off does not depend on light or other energy taken in by the organism; it is simply a byproduct of the chemical reaction and is therefore cold light.
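Because luciferin, luciferase and oxygen can be supplied in excess, the amount of light produced tracks the ATP available to drive the reaction, which is what later makes the reaction useful as a sensor. The following is a toy Python illustration of that proportionality, not a kinetic model from any of the cited sources; all of the values are arbitrary.

def relative_light_output(atp_umol, luciferin_present=True, oxygen_present=True):
    """Relative photon emission for a given amount of ATP (micromoles),
    assuming luciferin, luciferase and oxygen are not limiting."""
    if not (luciferin_present and oxygen_present):
        return 0.0                # no substrate or no oxidizer -> no light
    return atp_umol               # light roughly proportional to ATP supplied

for atp in (0.0, 0.5, 1.0, 2.0):
    print(f"ATP = {atp} umol -> relative light = {relative_light_output(atp):.1f}")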
Bioluminescence in fungi occurs intracellularly and has been noted down to the spore level (Burr 1985, Newton 1952 and Herring 1978). It may at times be mistaken for an extracellular source of light, but this is due to the diffusion of the light through the cells of the fungus. In the photograph in figure 1, it appears that the cap of the fungus is glowing, but on closer study it was observed that only the gill structures emit the light, and the cap (which is thin) transmits the light of the gills by diffusion (Herring 1978).
The energy of the photons varies with the frequency (color) of the light. Different types of substrates (luciferins) in different organisms produce different colors: marine organisms emit blue light, jellyfish emit green, fireflies emit greenish yellow, railroad worms emit red, and fungi emit a greenish blue light (Patel 1997).
Fungal Families Exhibiting Bioluminescence
The phylum Fungi is composed of the following 5 divisions (Newton 1952):
· Myxomycetes (slime molds)
· Schizomycestes (bacteria)
· Phycomycetes (moulds)
· Ascomycetes ( yeasts, sac fungi and some molds)
· Basidiomycetes (smuts, rusts, and mushrooms)
Of the above divisions, the majority of bioluminescence occurs in the Basidiomycetes; only one observation has been made involving the Ascomycetes, specifically in the ascomycete genus Xylaria (Harvey 1952). At present there are 42 confirmed bioluminescent basidiomycetes, which occur worldwide and bear no visual resemblance to one another other than the ability to be bioluminescent. Of these 42 confirmed species, 24 have been identified just in the past 20 years, so many more species may exhibit this trait but are yet to be found.
The two main genera that display bioluminescence are Pleurotus, which at present has 12 such species, occurring in continental Europe and Asia, and Mycena, which has 19 species identified to date with a worldwide distribution. In North America only 5 species of bioluminescent basidiomycetes have been reported. These include the honey mushroom, Armillaria mellea (illustrated in figure 3), the common mycena, Mycena galericulata (illustrated in figure 1), the jack o'lantern, Omphalotus olearius (pictured in figure 4), Panus stypticus and Clitocybe illudens.
The question of whether bioluminescent mushrooms are all poisonous was raised in discussions between my laboratory partner and myself. After examining the literature and a mushroom field guide, it was evident that there was no correlation between the edibility of a mushroom and its bioluminescence. Some mushrooms, such as the honey mushroom Armillaria mellea, were listed as excellent to eat, while the jack o'lantern, Omphalotus olearius, was listed as poisonous and causes severe gastrointestinal cramps. The edible merits of the common mycena were unknown, and while Panus stypticus was listed as poisonous, it was found to contain a clotting agent useful in stopping bleeding (Lincoff 1981, Newton 1952 and Herring 1978). As only a field guide to North American mushrooms was available, only the North American varieties were examined. If all 42 species of bioluminescent basidiomycetes had been included in the search, a correlation might have been found.
Bioluminescence Research Applications
Luminescence has a unique advantage for scientific studies, as it is the only biochemical process that has a visible indicator that can be measured. The light given off in the bioluminescent reaction can now be measured accurately with a luminometer. This ability to detect small amounts of light easily and accurately has led to the use of the bioluminescent reaction in scientific research involving biological processes (Johnson and Yata 1966, and Patel 1997). The following are two examples of applications that have been developed only recently.
The Tuberculosis Test
Testing for tuberculosis has long been a problem because of the long time it takes for the bacteria to grow to a size detectable by modern medicine. Typically, growing a culture of Mycobacterium tuberculosis large enough to determine the strain a particular patient has can take up to three months. This poses a problem because the patient often cannot wait for the diagnosis and must be given drugs that his strain may be resistant to. The situation is further complicated because there are 11 drugs used to combat TB, so picking the right one before determining the strain has only a 1 in 11 chance of success. Recently a way of incorporating bioluminescence into TB testing has been found that can sharply reduce the diagnosis time to as little as 2 days. The technique involves inserting the gene that codes for luciferase into the genome of the TB bacterial culture taken from the patient. The gene is introduced through a viral vector, and once it is incorporated the bacteria produce luciferase. When luciferin is added to the culture, light is produced. Since fewer than 10,000 bacteria are needed to produce enough luciferase to give off a detectable amount of light, the culture time is reduced to only 2-3 days. Since the luciferase-luciferin reaction requires ATP, the resistance of the strain in the culture can be tested by adding a drug and watching for light. This indicates which of the 11 drug therapies will be effective in treating the tuberculosis. By reducing the time needed to prescribe the correct drugs for treatment, this application of bioluminescence may someday help save some of the 3 million people killed each year by tuberculosis (Patel 1997).
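The drug-susceptibility step described above amounts to a simple decision rule: if the luciferase-tagged culture still glows after a drug is added, the bacteria survived and the strain resists that drug; if the light disappears, the drug works. Below is a rough Python sketch of that read-out logic with invented drug names, an invented threshold and invented luminometer readings, offered only as an illustration of the reasoning.

DETECTION_THRESHOLD = 100.0        # arbitrary luminometer units (assumed)

luminometer_readings = {           # hypothetical readings after drug exposure
    "drug A": 20.0,
    "drug B": 15.0,
    "drug C": 850.0,               # culture still glowing -> strain resists this drug
}

for drug, light in luminometer_readings.items():
    if light < DETECTION_THRESHOLD:
        print(f"{drug}: little light -> likely effective against this strain")
    else:
        print(f"{drug}: culture still luminescent -> strain appears resistant")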
Biosensors
Bioluminescence has also been used for several years as a biosensor for many substances. As seen in the tuberculosis example, bioluminescence can be used as a sensor for the presence of ATP, because ATP is needed in the light-producing reaction. Other techniques have been used for detecting ions of mercury and aluminum, among others, by using bacteria with light genes fused to their ion-resistance regulons. For example, if a bacterium that is resistant to Hg is in the presence of Hg, the genes coding for its Hg resistance will be activated. The activation of that gene also activates the luciferase gene fused to it, so the bacterium produces luciferase whenever Hg is present. Adding luciferin and testing for light production with a luminometer reveals the presence of the metal ion in the solution. This technique is especially useful in testing for pollutants in the water supply when concentrations are too low to detect by conventional means (Herring 1978, and Patel 1997).
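The metal-ion biosensor works with the opposite sign convention to the TB test: light above background means the resistance genes, and therefore the fused luciferase gene, have been switched on by the pollutant. A minimal Python sketch of that read-out follows; the background level, threshold factor and sample readings are invented for illustration.

BACKGROUND = 5.0   # baseline luminometer reading with no mercury present (assumed)

def mercury_detected(reading, background=BACKGROUND, factor=3.0):
    """Flag a sample as Hg-positive if its light exceeds background several-fold."""
    return reading > factor * background

samples = {"sample 1": 6.0, "sample 2": 120.0}   # hypothetical readings
for name, reading in samples.items():
    verdict = "Hg detected" if mercury_detected(reading) else "no Hg detected"
    print(f"{name}: reading {reading:.0f} -> {verdict}")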
Other areas that are currently using bioluminescence in scientific research include evolution, ecology, histology, physiology, biochemistry, biomedical applications, cytology and taxonomy. Any area that involves a living organism can utilize bioluminescent technology as a biosensor.
Conclusion
The glow generated by bioluminescent fungi has for centuries attracted the interest of philosophers and scientists, and it has benefited science by providing problems to solve: how does it work, and does it have a practical application? The answers discovered to date have benefited mankind by bettering our lives, especially with regard to the biomedical applications. Further research on bioluminescent fungi is being conducted on a worldwide scale, including in North America, Japan, and Europe. Future research may lead to new discoveries and uses for bioluminescent organisms such as the fungi.
References
Burr, G.J. 1985. Chemiluminescence and Bioluminescence. Marcel Dekker, Inc. New
York, U.S.A.
Johnson, F.H. and Yata, H. 1966. Bioluminescence in Progress. Princeton, New
Jersey: Princeton University Press.
Lincoff, G.H. 1981. The Audubon Society Field Guide to North American Mushrooms.
Knopf Inc. New York. U.S.A.
Newton, H.E. 1952. Bioluminescence. Academic Press. New York. U.S.A.
Herring, P.J. 1978. Bioluminescence in Action. Academic Press. New York. U.S.A.
Patel, P.Y. 1997. Bioluminescence in scientific research. Jan 10, 1997.
Http://www. Pranovp@umich.edu.
Wood, M.F. and Stevens, F. 1997. The Myko web page -Fungi Photos. Jan 10, 1997.
http://www.mycoweb.com/ba_index.html#A
WED. AM
GROUP
BIOLOGY 201
BIOLUMINESCENT FUNGI
DUE MARCH 7, 1997
f:\12000 essays\sciences (985)\Biology\Biome Broadcast.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LANCASTER / PENNSYLVANIA
This morning Darian, Danny, Laura, and I were bored, so we decided that we would all go on a hike at Blue Ridge Mountain. All of us went home, got our hiking equipment, and packed a lunch. We then met at my house, and I drove everyone up to Blue Ridge Mountain. We got there in half an hour; it was around 10:30 AM.
It was probably one of the most beautiful days we had had all year: around seventy to seventy-five degrees, with barely any humidity. Even though we have all four seasons and varied amounts of precipitation throughout the year, it usually feels as though it is either a scorching humid summer day or a freezing snowy winter day, as if we only had two seasons all year, summer or winter.
When we stepped out of the car we could see and hear birds singing. We could also smell, hear, and see the beautiful trees swaying in the gentle breeze.
Pictures of a robin and a cardinal that we saw while stepping out of the car.
We all got our gear out of the car and walked over to the trails. We had to decide which trail to take. We had three choices: the first trail was half a mile long, the second trail was two miles long, and the third trail was four miles long. Since it was such a beautiful day, we all decided to take the third trail, the four-mile one.
We started hiking around 11:00 AM. While we were hiking we heard wings flapping; we all turned and saw a robin fly toward the ground, pick up a worm, and feed it to her babies. Everyone thought that it was cute. After we watched the robin for a while, we continued hiking until 12:30 PM.
Everyone was hungry, so we decided to find a spot to eat our lunches. We found a perfect spot: it had a great view, a patch of beautiful dandelions, and a big beautiful maple tree to sit under.
A picture of the great view we had during lunch.
A picture of one of the many dandelions that were in the patch that we were sitting next to during lunch.
A picture of the maple tree we sat under while eating lunch.
We all sat down on the big blanket that we had brought along and ate our lunches. Laura went over, picked a dandelion, and smelled it. We talked for a while and admired how big and beautiful Blue Ridge Mountain was. We finished having lunch around 1:30 PM and then started hiking again. As we were hiking we heard a splashing noise, and we could not really identify what it was. We walked toward the noise and encountered some rough terrain. We finally found what we had been hearing: it was several bears trying to catch fish in a stream. One of the bears caught a fish, and all of the other bears ran over to see if they could get some, but it was too late because the bear had already devoured the fish.
A picture of the bears trying to catch fish.
We decided to move on and not interfere with the bears, because that is not a situation we wanted to be in. As we moved on we saw a rattlesnake eat a mouse. We stayed about fifty feet away, though, because if it were to bite one of us on top of this mountain, with nobody to help us, our chances of survival would be pretty low. As we walked on we saw a nest of baby rattlesnakes. We looked at them from a safe distance when, all of a sudden, the mother rattlesnake shot out from behind a rock. We all ran for our lives! We ran for a good five minutes before the rattlesnake stopped chasing us. Nobody was bitten, thank God.
A picture of the baby rattlesnakes that we saw.
But we had a big problem: we were totally lost! We had traveled pretty far off the trail while the rattlesnake was chasing us. We all decided to travel down the mountain, because that seemed the most logical way: we had hiked up the mountain, so if we hiked down the mountain we would probably make it back to the car.
We started to make our way down the mountain. As we were doing so, Laura tripped and rolled down the mountain about ten feet. We all ran over to her and discovered she had broken her leg. Darian and I went to find some branches so we could make a splint for Laura's leg. We found some branches and went back over to Danny and Laura. We assembled a splint and put it on Laura's leg. It was just strong enough for her to be able to walk.
So we continued making our way down the mountain and luckily we found a compass! We knew we had to travel north because that was the way the trail traveled. So we started to hike north.
We eventually made our way back to the trail. We saw some other people that were hiking. They saw we needed help and they gladly provided us with their assistance. They helped us get back to my car.
I got in my car and drove to the nearest phone. I called an ambulance and drove back to the mountain. We all waited for the ambulance to come. When the ambulance arrived they put Laura in the back and took her off to the hospital. All of us watched the ambulance drive off into the sunset.
A picture of the sunset by the Blue Ridge Mountain.
f:\12000 essays\sciences (985)\Biology\Biotechnology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Biotechnology
Over the past decade, biotechnology has advanced much to the advantage of many people. We have learned that with certain chemicals we are able to cut and paste the DNA of certain organisms, and alter them to meet our societal needs. But this can also affect modern medicine and the political, economic, and societal balances in our nation.
For medicine, biotechnology has been a blessing, helping people who suffer from a sex-linked trait known as hemophilia. Hemophilia is a condition in which the person may die of blood loss when cut or wounded. It is caused by the lack of a certain chemical known as Factor 9, which lets the blood clot so the patient can recover from wounds. With genetic engineering, scientists may now insert a gene into the patient's own DNA so that such healing becomes possible, which it has not been until now. I doubt that there have been any real disadvantages with this technology, since it works to heal the patient, but we really cannot predict what kind of medical misfits there will be in the future, using this life-saving technology to their own personal, perhaps evil, advantage.
Dealing with politics, bioengineering has opened a whole new door for the military, whose use of it may create ultimate destruction. The alteration of nature is unnatural and creates an imbalance in life. When we use this technology on the wrong side, we may all be burned. Biotechnology has the ability to alter which diseases we humans are susceptible to, and when scientists create something to eliminate immunities to diseases, it may result in the mass destruction of that evolved living being known as the human. This may sound tragic, but this is what biotechnology is all about: changing genes so that they may fit our societal and economic needs. The government has probably taken this germ warfare into consideration, and it may lead to an unhappy resolution. In this case, biotechnology has no advantages.
Socially, biotechnology is a breakthrough in science. Our new techniques of giving nature a hand will, in this case, pave the road of the genetic highway to come in the future. We are now able to create the perfect tomato, the largest, reddest apples, and the plumpest grapes. We can get more milk from each cow, and create new chemicals to heal people. This is a society in which we need not worry about a plague affecting our fruits and vegetation, nor our dairy and meat products. When we do not need to worry about factors like a drought or massive rain, we have a sociable balance between us and nature. Scientists have used biotechnology to an advantage here, and it seems laudable. Yet disadvantages may include harmful new substances being compounded together in the plants that may cause an allergic reaction in people. This is rare, and we should not really worry about it, but we should be open-minded and consider these things.
When it comes to the economy, people most desire foods that are worth the money they spend on them. Since we can now create the ultimate fruit, we should be able to produce it more quickly. And since consumers want more, we will then gain much capital, creating a fine economic balance. For example, suppose we are not making enough money because our cows cannot produce enough milk. Scientists and genetic engineers can now insert a gene which lets them get not only more milk, but the type of milk they want created. Now we are wasting less to create more of something, and this has indeed been shown to improve economic factors extensively.
Even though we have accomplished this in modern society, we are still young in the genetic age of science. We have yet to unleash the complete possibilities of biotechnology, like the creation of whole organisms, fully functional and self-sufficient. We have successfully generated an entire plant from a leaf, but compared to that, we humans are far more complicated and sophisticated. I myself maintain that biotechnology can be used to our advantage, but I sternly warn you, as Albert Einstein warned President Roosevelt about the discoveries of nuclear energy, that biotechnology can be a devastating new weapon, used to destroy humankind. I believe that the technology will flourish in society as the computer has, with people having their own genetic lab at home so that they may cure themselves of a disease. But biotechnology is more than that, in that it will pave the future technological highway; perhaps instead of robots, we will have cloned entire human beings to work for us. There are endless possibilities, and we must take caution with them.
f:\12000 essays\sciences (985)\Biology\Birds.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Birds are some of the most amazing animals on earth. Most have the ability to fly.
Some use ground travel. Some use claws, others use only their beaks. Birds come in many
varieties of colors and sizes.
Birds are warm-blooded, egg-laying creatures from the aves class. Along with the
obvious feathers and wings, birds have other adaptations for flying such as a wide keel on
the sternum, with large wing muscles attached, air spaces and sacs throughout the body
and bones, to decrease their weight, and they have various bone fusions and reductions to
strengthen and streamline their body.
There are more than 8700 species of birds. Their habitats range from the icy shores of Antarctica to the hottest parts of the tropics, and from mountains, deserts, plains, and forests to open oceans and densely urbanized areas.
The sizes of birds range from the tiny bee hummingbird, which has a total length of
two and a half inches, to the albatross, which has a wing span of eleven and a half feet.
The largest bird is a bird that cannot fly, the ostrich. Ostriches can stand almost eight feet
high and can weigh near 350 pounds. Other extinct birds have been measured to stand
over ten feet high.
The evolution of birds is still being argued. Most people believe that birds evolved from reptiles. Because of birds' mainly delicate bones, few fossils have been left behind for scientists to study. The earliest bird fossils come from Archaeopteryx. The fossils that have been discovered of Archaeopteryx include six partial skeletons and one single feather. Archaeopteryx, unlike modern birds, had teeth, a reptile-like tail, and three claws on each wing. Scientists think it could fly, but only weakly.
Approximately 85 species and 50 subspecies have become extinct in the last 300 years. Over half of these extinctions occurred in the 1800s; another thirty percent occurred in the 1900s. Over ninety percent of these extinctions were island forms, which are particularly vulnerable to human interference. Destruction of habitat is the biggest cause of extinction. Other causes are the introduction of predatory animals, and disease plays its part too.
The respiratory system in birds serves to transfer oxygen to the bird's bloodstream.
Unlike mammals, birds do not have sweat glands. So they cannot cool themselves through
perspiring. Air sacs throughout the body are connected to the lungs. As the bird breathes, the air sacs help cool the bird's organs. The average body temperature of birds is about
106° F.
Birds do not have any teeth. This means that birds must cut food up with their beaks or swallow it whole. On a bird's esophagus there is a bag-like swelling called the crop. Birds can store food there until there is room in the stomach for it. They can also store food there for their young. In most birds, the stomach has two parts. The first part is where digestive juices are added. The second part, called the gizzard, has thick, muscular walls for grinding up food. This replaces chewing. A lot of birds help the grinding process by swallowing coarse materials like gravel. The nutritious matter is absorbed in the small intestine. Then waste matter moves on to the large intestine. All waste is released from the bird's vent at the rear of the body.
The circulatory system distributes blood through the bird's body. The heart of a
large bird, like an ostrich, beats at approximately the same rate as a human's heart, about 70 times a minute. Small birds, like the hummingbird, have a heart rate of more than 1000 beats a minute! Arteries in birds carry blood from the heart to organs in the body. Veins
return blood to the bird's heart.
A bird's nervous system consists basically of nerves and a brain. Nerves carry messages from a bird's senses to the brain, and from the brain to the muscles; this is what allows the bird to react to things. In a bird's brain, the cerebellum is relatively larger than the cerebellum of a mammal. The cerebellum is what birds use to control balance and the muscles they use to fly.
Male birds have testes and the female birds have ovaries, just like in other
vertebrates. Most birds mate by pressing their vents together. Sperm cells quickly pass
into the female's vent and unite with one or more egg cells. The union produces a
fertilized egg, or a zygote. When the egg is laid, the zygote develops into an embryo as the
egg is incubated.
f:\12000 essays\sciences (985)\Biology\Birth defects.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No one is immune to birth defects, yet not everyone is equally susceptible. Birth defects are not merely a medical problem; they have profound effects on the social and psychological well-being of the child's family and friends.
In the normal course of fetal development, cells migrate to their appropriate destinations so that organs and limbs form where they should. Usually the genes perform flawlessly, but mistakes can and do occur. Some of the most common birth defects result from the interaction of one or two abnormal genes out of the 100,000 that make up who we are. This can be caused by the genes parents pass on, or by the effects of drugs and alcohol on the fetus of an unborn child.
Down's syndrome, the most common genetic disease, formerly known as mongolism, "occurs one in every six hundred births throughout the world" (Strom 102). It is caused by a chromosomal error in which there is an extra chromosome 21: instead of having two copies, as a normal individual does, there are three. These children's features include upslanted eyelids, depressed foreheads, hearing loss, dental problems, poor speech development, heart disease and intestinal problems for which surgery is required. Parents feel very helpless and guilty in many of these and similar situations, feeling as if they themselves are abnormal. However, most of these children can learn to walk, talk, dress themselves and eat. Special work programs are available that can help the child reach his or her educational level. These work programs also help take off many of the stresses facing parents; they no longer have to go it alone.
Tay-Sachs disease is another genetic disorder; it destroys nerve cells, causing mental retardation, loss of muscle control and death. Children who inherit an abnormal gene from both parents will inherit the disease. The carrier parents each have one normal gene and one defective gene, and carriers of Tay-Sachs disease have no symptoms. "If two carriers have children, each child has twenty-five percent chance of inheriting the defective gene (both parents)" (Strom 174). These children are unable to produce an enzyme that breaks down fats in the brain and nerve cells. The cells become clogged with fat, which prevents them from functioning normally. Within three to four years the child's body dies.
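The twenty-five percent figure quoted from Strom is just Mendelian arithmetic for an autosomal recessive disorder. The small Python sketch below enumerates the four equally likely allele combinations for two carrier parents; the allele labels are generic illustrations, not data from the source.

from itertools import product
from collections import Counter

parent1 = ("A", "a")   # carrier: one normal allele 'A', one defective allele 'a'
parent2 = ("A", "a")   # carrier

# Each parent passes on one allele at random; count the four equally likely outcomes.
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(offspring.values())

for genotype, count in sorted(offspring.items()):
    print(f"{genotype}: {count}/{total} = {count / total:.0%}")
# AA 25% (unaffected), Aa 50% (carrier), aa 25% (affected)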
Sandra vividly remembers how happy she was to have a baby brother and what a beautiful, healthy little boy he was at first. Then, at about six months of age,
her brother began to change. He stopped smiling, crawling and turning over,
and he lost his ability to grasp objects or to reach out. Over the next few years, he gradually became paralyzed and blind. Finally, he became so affected that he was completely unaware of anything or anyone around him. Then, just before his fourth birthday, he died. (Gravelle 56).
" About one in three hundred people carries this disease, but carriers are ten times more common among mid and eastern European Jews" (Gravelle 56). This devastating disease has a tremendous emotional effect on the parents. From day one they watch their beautiful healthy child grow up and live a normal life. Their child could live a normal life for three to four years without any symptoms. And then with no warning their normal way of life changes dramatically as they watch their child suffer a slow traumatizing death. Along with watching their child, they also have to face their new life. They now have to sent most of their time and money on the child, but how ? If they both take off work who will pay for all the doctor bills. If one takes off work who should it be? Physical breakdowns are a major component facing parents as the deal with all this added pressure. Their life will consist living around hospitals and live in nurses which many might get to help cope with the child. Their sex life chang
es. Most of the time parents feel dirty or diseased them selves causing intimacy to stop and from this, parents soon grow farther apart. Their are no winners in this battle, especially with no cure available.
Sickle cell anemia is a genetic disorder in which malformed red blood cells interfere with the supply of oxygen to parts of the body. Inadequate oxygen levels cause the cells to sickle into a crescent-like shape. As a result, the cells can no longer flow freely and thus begin to clog blood vessels. Inflammation of tissues and pain in the limbs, abdomen, lower back and head occur. The main organs severely affected are the lungs, bones, spleen, kidneys, heart and brain. The condition is inherited and acquired only at birth. At the present time, there is no treatment that can eliminate it.
Lorraine's first pregnancy seemed effortless, because she was only twenty-five and therefore not at high risk. Besides, there was no history of congenital defects in either Lorraine's or her husband's family. Thus, when their son Jeremy was born with a severe form of spina bifida, the couple was stunned. (Gravelle 54)
Spina bifida is a defect of the spinal column in which the infant's spine does not develop completely and fails to enclose the cord. The spinal cord may poke through the spine, forming a cyst or lump on the child's back. In Jeremy's case, the lower part of his spinal cord was affected, leaving his legs paralyzed. In a severe case of spina bifida, there is excess fluid surrounding the brain, which can lead to brain damage. "In spite of the fact that approximately 16,500 infants are born with spina bifida in the United States each year, researchers still do not know exactly what causes the condition" (Gravelle 55). Spina bifida is hereditary, and some other factors may be involved, such as drugs or alcohol and even the environment.
" True genetic disease are distinguished from diseases in which genetic factors play a part in the causation of the disorder, but are not totally responsible for the disease" Strom 117). Mutations causing birth defects are not the result of a single gene but, have some genetic components in their causation. Therefore certain birth defects are prone to occur repeatedly in families but, not to be considered purely genetic such as spina bifida.
Other birth disorders are caused by drug and alcohol abuse during pregnancy. When a woman uses drugs during pregnancy, she is not only damaging her own health, but also that of her unborn child. The most harmful drugs are those classified as narcotics (cocaine, heroin, etc.). Other harmful substances include alcohol, tobacco and caffeine. "A woman's inter-uterine environment is designed to protect the fetus from external injury and to assure proper nutrition. Fetal homeostasis is however heavily dependent on the maternal habitat and can easily be subjected to the harmful effects of drug and alcohol misuse" (Gardner 1).
In marked contrast, alcohol purchase and consumption carries few restrictions, and in terms of damage to the health of the developing fetus "it is by far the most harmful drug available" (Gardner 1). Conflicting evidence exists as to the link between alcohol consumption and fetal damage. Fetal Alcohol Syndrome describes a set of abnormalities occurring in babies where alcohol consumption has taken place. The cause of FAS appears "to be related to the effects of alcohol on the fetal central nervous system during the early stages of development. FAS has been recognised as the third most common cause of mental retardation, affecting 1 in 750 live births" (Gardner 6). The major characteristics of FAS fall into four categories:
a) Growth retardation: The average birth weight of 7 lb is reduced to 4 lb.
b) Facial features: The eyes may be small and the mid-face poorly formed, with a short upturned nose, flattened nasal bridge and prominent nostrils.
c) Neuro-developmental abnormalities: The IQ of affected individuals ranges greatly; however, the mean is about 70, and follow-up studies indicate that no great improvement is likely.
d) Congenital abnormalities: Health defects occur in up to 50 percent of cases and skeletal defects are common, predominantly fusion of the bones of the fingers, toes and arms.
(Gardner 6)
However, fetal harm cannot be attributed to alcohol alone when alcohol is involved. "The British studies cited earlier clearly indicate the need to consider other features such as nutritional level, stress, smoking and health; all may be factors" (Gardner 7).
Precise evidence has also related drugs to fetal and neonatal harm. Although the fetus is protected by the placenta, drugs can easily pass through to the fetus, which has little means of eliminating them.
The risks of using drugs while pregnant include "pre-natal mortality, low birth weight babies due to premature birth or growth retardation; an average of 2.7 to 3.2 percent of these births show signs of organ malfunction or growth retardation" (Gardner 4).
The increased figures with regard to both drug and alcohol use, combined with rising concern about the effects of substance abuse during pregnancy, highlight the need to provide a range of services and care, both pre- and post-natal, to support the family and the child. It is possible during pregnancy to implement a medically supervised withdrawal from most drugs. It is vital that care is given to aid slow withdrawal because, "although the mother may not be physically dependent, her fetus may be. If a woman decides not to withdraw from drugs (this should not be an option), a programme of methadone maintenance, which is ideal for high level long term users, can be suggested to reduce fetal distress" (Gardner 8). "For most adults, whether professional or lay, the sight of a tiny baby, sweating and twitching, vomiting and screaming inconsolably, arouses powerful emotional responses of anger and pity." (Gardner 1)
It is understandable that parents have a hard time coping with the emotions of seeing their child deformed, but families must learn to accept, adjust to and cope with the sorrows and frustrations engendered by the birth of their handicapped children. Parental acceptance means many different things. Parents have many different ways of accepting their child and many ways of hiding their true feelings of unacceptance. The two main ways of seeing how, and whether, parents accept their child are the clinical view and the interactionist view.
The clinical view concerns overcoming the internal guilt reaction. Many parents show signs of physical illness or nervous conditions, or display defence mechanisms such as denial, not accepting that their child is handicapped. Solnit and Stark (1961) suggested "that parents must mourn the loss of their anticipated healthy child before they can love their defective child" (Darling 50). They also suggest that the completion of mourning in such a case involves three stages of parental adjustment:
1) Disintegration: At this stage, parents are shocked, disorganized, and completely unable to face reality.
2) Adjustment: This phase involves chronic sorrow and partial acceptance. The defect is recognized, but prognosis may be denied.
3) Reintegration: Parents maturely acknowledge their child's limitations.
Several studies have attempted to measure differences in adjustment between parents of defective children and parents of normal children. It was found that "Mothers of retarded children were more depressed and had a lower sense of maternal competence. They also enjoy their children less than control group mothers. Similarly, fathers of retarded children experience greater stress than fathers of normal children" (Darling 53). Another factor is the age of the parents. Some physicians felt that older, more experienced parents would be able to adjust better. However, some also noted that older parents might be less accepting if they had waited a long time for the child and felt that they might not be able to have another.
I saw her for the first time when she was 10 days old... I think I was the
most petrified I'd ever been in my life, turning the corner and wondering what I would see... She was much more deformed than I had been told. At the time I thought,' Oh, my god, What have I done?' ( the mother of a
spina bifida child). (Gardner 20)
The Interactionist view consists of attitude. " Attitudes, such as acceptance or rejection of handicapped children, are socially determined" ( Darling 56). Rejection is learned through socialization in a stigmatizing society. From a very early age, we are exposed to negative attitudes towards those who deviate from society's norms of physical and mental development.
So a person growing up in a Hutterite community, for example, might learn to be more tolerant of the deviant than a child exposed only to the culture of the majority. "Because attitudes are acquired, they are subject to change. Socialization never ends; we constantly grow and mature. Thus negative attitudes towards the handicapped might well change in the course of caring for a handicapped child" (Darling 61).
The families who manage best were not those in the upper classes. These parents were ambitious for their children and never overcame their frustration and disappointment. The ideal parents were those who, while sufficiently intelligent to appreciate the needs of the child and to have insight into the difficulties, did not have great ambition, and so they did not constantly display their disappointment. They were perhaps rather fatalistic in their outlook. They looked upon the child as a gift for which to be thankful whatever the condition.
( Darling 54)
Most people have had experience with birth defects. Even people who think they have never encountered someone with a birth defect are likely to be wrong: "Since two hundred and fifty thousand babies with birth defects of varying severity are born in the United States each year" (Gravelle 6), it would be hard not to meet some of these people. In the past few decades, many strides have been taken toward understanding the causes of such diseases, in the hope of treatments and cures. Work has also been done on finding ways to help parents cope with their emotional devastation, and many accomplishments have been made. Parents are now finding ways to move past their anger and frustration and enjoy a loving relationship with their child. With a wider range of information available, and treatment for drug addicts, families can pull through. Caring for such a child is a tough emotional and physical battle, but the child should always be looked on as a gift; these children have much to offer.
Birth Defects
1995 05 18
Works Cited
Darling, Jon. Children Who Are Different. Toronto: The C.V. Mosby Company, 1982.
Gardner, Suzy. Substance Abuse During Pregnancy: Protecting The Foetus And New Born Child.
Norwich: UEA Norwich., 1992.
Gravelle, Karen. Understanding Birth Defects. U.S.A: Franklin Watts, 1990.
Strom, Charles. Heredity and Ability. U.S.A: Plenum Press, 1990.
f:\12000 essays\sciences (985)\Biology\Body Systems.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Body Systems
There are 10 body systems. One of them is the integumentary system (the skin). It is composed of hair, skin, nails, sense receptors and oil glands. Its functions are to protect the body from the outside, to regulate body temperature, to synthesize hormones and chemicals, and to serve as a sense organ.
Another one is the skeletal system (the bones). It is made of about 206 bones, which are divided into two categories: axial bones (the central body itself) and appendicular bones (arms and legs). We have joints too; these include ball-and-socket joints (like the hip and shoulder) and saddle joints (like the base of the thumb). This system's functions are movement, storage of minerals, blood formation, support of the body and protection of body parts.
The next one is the muscular system, which is composed of muscles. The muscles are divided into visceral (also called involuntary or smooth, found in organs such as the intestines), skeletal (also called voluntary or striated, found over the bones, like the biceps and triceps) and cardiac (the heart). Their functions are movement, maintaining body posture and tone, and producing body heat.
Now it is time for the nervous system. It is constructed of the brain, the spinal cord and the nerves (neurons). Its functions are communication (fast, with short duration), integration, and control.
The subsequent system is the endocrine system (also known as the ductless system). It is composed of many glands: the pituitary gland, below the brain (the master gland); the pineal gland, in the brain (called the "third eye" by some, because it is sensitive to light cycles); the hypothalamus, also in the brain (it works with the pituitary); the thyroid, in the neck (it controls the metabolism); the adrenal glands, on the kidneys (responsible for adrenaline); the pancreas, near the stomach (it produces insulin); and the ovaries in females and the testes in males (they produce estrogen and ova, and testosterone and sperm, respectively). Their functions are to secrete hormones into the blood, communication (slow, with long duration), integration and control.
f:\12000 essays\sciences (985)\Biology\Brain Transplant .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Medical technology has seemed to advance enough so that doctors are able to perform brain transplants. So far this procedure has only been successfully performed on animals, and now doctors hope to perform this procedure on humans. I believe brain transplants should not be performed at all, and especially not on humans because of the numerous problems and side effects that could arise.
Even though brain transplants can be performed successfully on animals, this does not mean they will be successful with humans. The human brain is much more complex than the brains of animals, so there will be many more complications during surgery. For example, the healthy brain that was removed could have been damaged in some way without the doctors knowing it. It would also be very difficult to attach a person's brain in a different body because of the millions of neurons that send and receive messages to and from all over the body. It would be almost impossible to reconnect every single neuron, and without them a person could not function normally. Many psychological effects are also possible because the human brain is so complex. Our brain makes us who we are, and with a different brain we would no longer be unique. A person with a different brain would seem to be a total stranger, and in many ways they would be. Hopefully these dangerous side effects will convince doctors not to perform this procedure on humans.
The advancement of technology can be very beneficial to everyone, but I do not believe that this medical technology of brain transplants will help anyone. We were all born with one brain and through childhood to adolescence our mind developed into who we are. No one should steal our identity from us, even if we are seriously injured, and change it to a completely new one. Also for the people who have died with healthy brains, that was their identity and it should not be given to anyone else.
Another problem with brain transplants is how doctors can decide what counts as a "healthy" or "normal" brain. An elderly person who has died would have an aged brain that would not be as efficient as a younger person's brain. Would doctors then have to find healthy brains of the same age as the person who needs one? This raises other factors such as intelligence, gender, or physical problems that a person might have had before death. Another problem is the period of time a brain can be kept "alive" after death and how it can be kept "alive" without damage. Overall, my feeling about this surgery is that it should not be done on humans until doctors have overcome all the problems and obstacles that stand in the way of making human brain transplants successful.
f:\12000 essays\sciences (985)\Biology\Bubonic Plague.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"BUBONIC PLAGUE"
The disease is called the bubonic plague. It is caused by a bacillus, the bacterium Yersinia pestis. It is called a plague because of its widespread fatality throughout history. The bubonic plague is transmitted from fleas to humans; you can contract the disease either by being bitten by the oriental rat flea, Xenopsylla cheopis, or by being exposed to plague-infected tissue.
The "Bubonic Plague" has an incubation period of 2 to 6 days. Within a week the body's temperature rises to 104 degrees F. Shortly after they show signs of a fever other symptoms come about which are delirium, mental disorganization, shivering, vomiting, headache, giddiness, intolerance to light, and a white coating on your tongue. The symptoms become worse as the disease spread through the bloodstream and lymphatic system. The later symptoms, as you begin to experience the last stages of the disease, are your back starts to hurt and painful swelling of your lymph nodes. Hard lumps filled with blood and puss called "boboes",from which the disease gets it name, form on various parts of your lymphatic system, such as your neck, inner thigh, groin, and armpits. This stage is the most painful. Blood vessels break and later the dry blood turns black underneath your skin.
There is a vaccine for the plague that lasts 6 months. For treatment, the preferred antibiotic is streptomycin; gentamicin, tetracyclines, and chloramphenicol are all good substitutes. Treatment must start within 15 hours of the first symptom or death is inevitable.
Some special characteristics of this disease are that you can be any age to contract it, it is most common in unsanitary conditions or where rodents are abundant, and if untreated, death will occur within 3 days. The most amazing fact about this disease is that it has killed over 75 million people over the centuries.
f:\12000 essays\sciences (985)\Biology\Can Geneics Cause Crime.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Can Genetics Cause Crime ?
Introduction to Criminal Justice System
Dr. Mike Carlie
Do genetic factors make one person more likely to perform violent acts? Many doctors and researchers in the field of genetics have searched for an answer to this question. During 1989-93 one such researcher, Dr. Sullivan, found some interesting points about genetics and crime.
While serving as the Bush administration's secretary of health and human services during 1989-1993, Sullivan was appalled by the epidemic of violent crime he saw taking place in American cities. According to Dr. Sullivan,
" more than 26,000 Americans were murdered,
and six million violent crimes were committed
with young men and minorities falling victim
most frequently".
Sullivan also reported that about one in every 27 black men, compared to one in every 205 white men, died violently; likewise, 1 in 117 black women met an untimely end, compared with only 1 in 496 white women killed by violent crime.
It is not surprising that young males commit most of the serious crimes. According to an article in Scientific American, only 12.5 percent of violent crime in the U.S. in 1992 was committed by females. What is surprising, according to W.W. Gibbs, the author of "Seeking the Criminal Element" in Scientific American (March 1995, pp. 100-107), is that a very small number of criminals are responsible for the majority of the violent crime.
Sullivan, who is now the president of the Morehouse School of Medicine in Atlanta, wanted to address the violence as a public health issue. In an interview after he left office in 1993, Dr. Sullivan explained that his rationale for this was the sharp increase in violent crime, and specifically homicide, among young males in large cities, which was higher than in any other social group in America at the time.
Dr. Sullivan then began to organize his department's research resources under the banner of what he called the "Violence Initiative", with the predominant aim of looking at unemployment, poverty, drug use and any other factors that might contribute to the likelihood of violence. Sullivan's research was directed primarily towards the psychological and sociological point of view; he worked mainly with the points mentioned above and only lightly with the biological aspects, such as race, gender, brain chemistry and genetic make-up.
Dr. Sullivan's research did find some links between aggressive behavior and disturbances in the level of a chemical called serotonin, which is directly related to certain genes, although there was no conclusive proof that the abnormal gene involved was completely responsible for increases in violence. Another study, in 1993, also found a link between genes and violence. An X-chromosome mutation discovered in a certain Dutch family was found to be associated with mild retardation and aggressive, sometimes violent criminal behavior. The mutation causes complete deficiency of the enzyme monoamine oxidase A (MAOA), which metabolizes the neurotransmitters serotonin, dopamine, and noradrenaline.
David Goldman, a geneticist at the National Institute on Alcohol Abuse and Alcoholism, points out,
" men who possess this abnormal
gene may typically engage in impulsive
aggression, but the time, place, type, and
seriousness of their crimes ( which include
exhibitionism, attempted rape, and arson)
have been diverse and unpredictable."
Although these are examples of gene-related violence, genetic information so far has been fairly unpredictable. Finding a defect such as the MAOA mutation is an exceedingly rare event. Also, according to Margret McCarthy of the University of Maryland School of Medicine, what matters is not whether someone possesses a gene, but whether that gene is expressed.
Although it seems that genetics is unlikely to tell us much of practical value about crime, other aspects of human biology may be more useful. Adrian Raine of the University of Southern California at Los Angeles showed brain scans comparing brain activity in 42 murderers with that of an equal number of normal controls. The murderers tended to have less prefrontal activity, which was consistent with Raine's hypothesis that a damaged prefrontal cortex can lead to impulsive aggressive behavior. But murderers, like the rest of us, are a heterogeneous group of people, and Raine cautioned strongly against regarding such scans as diagnostic: you cannot do brain scanning on everyone and tell whether they will commit murder. In short, applying this kind of research to crime control often raises ethical and political issues, and the same can be expected of genetic scanning and other aspects of biological research when they are related to controlling crime.
It is possible that genetic research may eventually contribute something to our knowledge of crime, and perhaps even to its control, but the contribution will most likely be indirect. Any effects of genetic disorders or other biological factors will most likely be attributed to other things, such as alcoholism and addictions, rather than genes being blamed for the violent behavior. Diana Fishbein of the US Department of Justice states that criminologists need to call for more research into behavioral disorders, attention disorders and certain other temperamental traits, like impulsivity, that might be more likely to turn up better results in the fight against crime.
Sources Cited
Gibbs, W.W. (1995, March) "Seeking the Criminal Element,"
Scientific American, pp 100-108.
Hallinan J. (1995, March 19th) "Prisons Becoming Major
Industry," the Huntsville Times, pp A19-20.
Internet Address Text: NYU@.crime.htm.com, Genetics and
Crime, By Wilson R.J. (1994),
f:\12000 essays\sciences (985)\Biology\cancer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cancer
Right now, cancer is one of the most feared diseases in the world. In the early 1990s almost 6 million new cancer cases developed and more than 4 million deaths from cancer occurred. More than one-fifth of all deaths were caused by cancer, and the American Cancer Society has predicted that about 33% of Americans will eventually develop the disease. This is a huge disease that is killing people all over the world.
The field of cancer study is called oncology. The government has spent billions of dollars on research into this fatal disease. Cancer is the most aggressive disease of a larger class known as neoplasms. Neoplasms do not fully obey the parts of the cell that control its growth and functions. These cells eventually become abnormal growths and can be recognized as abnormal tissue. These traits are passed down as the cell reproduces, thereby spreading the cancer.
Neoplasms are generally classified into two groups: malignant and benign. Malignant tumors, or abnormal tissue, grow more rapidly than benign tissue and they invade normal tissue. Benign tissue is structured similarly to normal tissue, while malignant tissue is abnormal and has an unstructured appearance. Of greater importance, benign tissue does not metastasize, or begin to grow in other sites, as malignant tumors do. Cancer always refers to malignant tumors, but a tumor is not necessarily cancer; a tumor is any living tissue that is distinguishable as abnormal tissue.
After a neoplasm forms, it can also change from a benign to a malignant state, causing the cells to grow at a more rapid rate. The development starts when the cell forms notable abnormalities in its chromosomes and then multiplies excessively. Metastasis usually follows and generally causes the death of the host.
There are many different cancers, which form on just about all parts of the body. In the US, skin cancer is the most common cancer, followed by prostate cancer in males and breast cancer in women. Leukemia is clearly the dominant cancer in children. The number one killing cancer in the world today is lung cancer, mostly caused by the smoking of cigarettes. Some researchers have stated that if Americans stopped smoking, lung-cancer deaths could largely disappear within two decades. Stomach cancer is the second most fatal cancer in males, and breast cancer, the leading cancer in women, is third.
The prevention of cancer depends upon what is known about its causes. Any agent that causes cancer is called a carcinogen. Carcinogens are generally classified into three groups: chemical, biological, and physical.
Chemicals that cause cancer have many different molecular structures and can be just about any type of chemical. Some substances that cause cancer are complex chemicals and gases, certain metals, drugs, hormones, substances in molds and plants, and many more. Many nitrosamines, or simple organic oxides of nitrogen, are carcinogenic. Nitrosamines and hydrocarbons are the carcinogens in cigarette smoke and increase the risk of lung cancer. Another chemical, vinyl chloride gas, has been found to be an agent of sarcoma of the blood vessels in the liver. Many drugs, as well as the alkylating agents used to treat cancer, are carcinogenic. Even though these chemicals kill cancerous cells by breaking their DNA, they can also induce cancer in normal cells. Some hormones created in humans can also cause cancer. High levels of estrogen, a female hormone, can increase the chance of cancer of the uterus in women. Aflatoxin B is a substance produced by the mold Aspergillus that causes a number of cancers, most commonly liver cancer. Many carcinogens have not yet been discovered, but research is being done to identify them and keep people aware and safe.
There are also many different biological agents which cause cancers. The most common biological agents are the oncogenic viruses that commonly bring about the formation of neoplasms in some smaller animals. Some of these viruses are also thought to cause human cancers, and one has even been proven to cause leukemia. These viruses can be split into DNA viruses and RNA viruses, depending on their genetic structure. The DNA viruses generally insert their genetic information straight into the cells of their host. RNA viruses, by contrast, require that the genetic information be transcribed into DNA by an enzyme, called reverse transcriptase, provided by the virus. All oncogenic viruses have one or more genes that are needed for the transformation of the infected cell into a neoplastic cell. These genes, called oncogenes, are best characterized in the genomes of oncogenic RNA viruses. It is now known that oncogenes are closely related to counterparts in the normal cells which the viruses infect. The oncogene, nonetheless, has a different structure and appears to be activated and expressed abnormally by one mechanism or another, leading to transformation of the cell. Some oncogenic viruses may activate the normal cellular counterparts of oncogenes, called c-oncogenes, thus causing the neoplastic transformation to occur. Similar changes may result from the action of chemicals or radiation or both.
Ultraviolet and high-energy radiation are also agents for some types of cancer. A link exists between exposure to the sun's ultraviolet rays and the occurrence of skin cancer in humans. Some cancers caused by radiation are leukemia, along with cancers of the thyroid, breast, stomach, uterus, and bone. Consequently, everyday diagnostic tools such as the X-ray are used very carefully so that a person is not overexposed. Sarcomas can also be physically induced when films or disks are implanted under the skin, as has been done in experimental animals. After the implant has been in the animal for many years, sarcomas usually develop around the implant. If the implant is pulverized or its structure is markedly altered before insertion, no sarcomas develop. This shows that it is the physical structure, rather than the chemical composition, that causes the cancer.
Even though it is unknown why cancer develops in some people and not in others, heredity seems to play a role in some forms of cancer. Because of this, family history may be significant in predicting and diagnosing cancer.
Prevention is the best way to avoid cancer. Since the majority of cancer is related to the environment, it would be best to identify and control these factors. There are two tests to discover carcinogens. The Ames test measures the ability of a substance to cause mutations in bacteria and is more than 90 percent effective. The alternative is animal testing. It is exhausting and expensive work but it is the only certain way to be sure if a particular agent is carcinogenic.
One of the most important ways to protect yourself is early detection. It is stressed that you get treatment as soon as possible. The American Cancer Society has come up with seven warning signs of cancer to help you identify cancer symptoms and get help early. They are:
1. a change in bowel or bladder functions;
2. a sore that does not heal;
3. unusual bleeding or discharge;
4. a thickening or lump on the body;
5. indigestion or trouble swallowing;
6. an obvious change in a wart or mole; and
7. a nagging cough or hoarseness.
If one or more of these symptoms are present, a person is advised to see a doctor immediately.
If treatment of a cancer is to be successful, the diagnosis must be made as early as possible. Though no test has been discovered to detect all cancers, there are tests to pick up some. A Pap test is used to search for cancer of the uterine cervix. Mammograms are used to detect breast cancer, and blood and stool tests are used to find colon cancer. There are many other tests besides these for different types of cancer. There are really three different types of cancer treatment. Surgical removal is the first; it is the most basic treatment and is used to remove the cancerous lesions from the body. The second is radiation therapy, which uses radiation to try to kill the tumor. One of the main problems with radiation is that it kills not only cancerous tissue but healthy tissue as well. The third way to treat cancer is chemotherapy, which is treatment by chemical agents. A complete cure from chemotherapy is not common, but it is often used to prolong the life of the patient. The desired effect of treatment in cancer patients is remission, where the cancer is gone from the body; if it does not return within 5 years, the patient is considered cured.
Right now the government is spending huge sums of money on cancer research. Researchers are continually finding better ways to treat and deal with cancer. Research has come very far and has helped many people fight cancer.
Thanks to all the work done, cancer is still a dreaded disease, but many forms are now curable, which gives many people faith in recovery.
Bibliography
Compton's Encyclopedia (1992). Cancer. Chicago: Compton's Learning Company.
Encyclopaedia Britannica (1992). Cancer. Chicago: Encyclopaedia Britannica, Inc.
American Cancer Society website (1996): http://www.cancer.org/acs.html
f:\12000 essays\sciences (985)\Biology\Carnivorous Plants.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In a world where plants are at the bottom of the food chain, some plant species have evolved ways to reverse the order we expect to find in nature. These insectivorous plants, as they are sometimes called, are the predators rather than the passive prey. Adaptations such as odoriferous lures and trapping mechanisms have made it possible for these photosynthesizers to capture, chemically break down, and digest insect prey (and in some cases even small animals). There is no reason to fear them, though. The majority are herbaceous perennials, usually only 4 to 6 inches high, and nothing like the plant in "Little Shop of Horrors".
Almost all carnivorous plants have a basically similar ecology, and several different species are often found growing almost side by side. They are most likely to be found in swamps, bogs, damp heaths, and muddy or sandy shores. Drosophyllum lusitanicum from Portugal and Morocco is the one exception; it grows on dry gravelly hills. Like other green plants, carnivorous plants contain the organic pigment chlorophyll. This pigment helps to mediate a chemical process called photosynthesis, which converts light energy into the chemical bond energy of carbohydrates, used for cellular energy, plant growth, and development. Water, carbon dioxide, nutrients, and minerals are also needed for survival. In wetlands, where stagnant water contains acidic compounds and chemicals from decaying organic matter, many plants have a difficult time obtaining necessary nutrients. It is in these nutrient-poor conditions that some plants evolved different ways of obtaining nutrients. The ability of carnivorous plants to digest nitrogen-rich animal protein enables these plants to survive in somewhat hostile environments.
The evolution of carnivorous plants is speculative due to the paucity of the fossil record. It is believed that plant carnivory may have evolved millions of years ago from plants whose leaves formed depressions that retained rain water. Small insects would sometimes fall into these water reservoirs and drown, eventually being decomposed by bacteria in the water. The nutrients from the insects would be absorbed by the leaf. The deeper the leaf depression the more insects that could be drowned. This would have created a distinct survival advantage allowing some plants to better compete in nutrient poor soil. As time passed, these plants would evolve more effective trapping mechanisms.
There are more than 500 known species of carnivorous plant, although some are now extinct. Classification is done using the standard binomial system and is based primarily on the floral characteristics of the plants, not the trapping mechanisms. They are divided into two groups based on corolla structure: Choripetalae and Sympetalae. The plants categorized as carnivorous belong to seven families, which are recognized by the suffix 'aceae', and fifteen genera. More than half of the species belong to the family Lentibulariaceae, which is marked by bilaterally symmetrical flowers with fused petals. The remainder of the species belong to six families marked by radially symmetrical flowers with separate petals. Classification is illustrated in the chart below, in addition to the geographic range, the number of species, and the type of trapping mechanism.
Family            Genus           Number of species  Geographic Distribution                        Type of Trap
Byblidaceae       Byblis          2                  Australia                                      Passive flypaper
Cephalotaceae     Cephalotus      1                  S.W. Australia                                 Passive pitfall
Dioncophyllaceae  Triphyophyllum  1                  West Africa                                    Passive flypaper
Droseraceae       Aldrovanda      1                  Europe, Asia, Africa, Australia                Active
                  Dionaea         1                  North & South Carolina                         Active steel
                  Drosera         120                Omnipresent                                    Passive flypaper
                  Drosophyllum    1                  Morocco, Portugal, Spain                       Passive flypaper
Nepenthaceae      Nepenthes       71                 East Indies                                    Passive pitfall
Sarraceniaceae    Darlingtonia    1                  California & Oregon, Western Canada            Passive pitfall
                  Heliamphora     6                  North and South America                        Passive pitfall
                  Sarracenia      9                  North America                                  Passive pitfall
Lentibulariaceae  Genlisea        14                 Tropical Africa and South America, Madagascar  Passive lobster
                  Pinguicula      50                 Northern Hemisphere and South America          Passive pitfall
                  Polypompholyx   2                  Australia                                      Active mousetrap
                  Utricularia     300                Omnipresent                                    Active mousetrap
In the above chart, it can be seen that there is a large number of different types of traps. The modified leaf traps of carnivorous plants can each be categorized as either active or passive. An active trap is one that employs rapid movement as an integral part of the trapping mechanism; a passive trap does not use rapid movement.
Active traps are categorized as "steel" or "mousetrap". Active steel type traps consists usually of two rectangular lobes that are hinged on one side. The two lobes move rapidly toward each other to entrap prey when stimulated. Active mousetraps are suction traps that use egg-shaped leaves or bladders that have an opening with a door on one side. When trigger hairs on the door are touched the leaf releases pressure and sucks the prey into the trap. In the aquatic species of the genus Utricularia, this is the most complex and rapidly acting trap; prey is sucked up into the bladders in 1/30 of a second.
There are three types of passive traps: "pitfall traps", "lobster traps", and "flypaper traps". Lobster traps are not completely passive; they employ slow-moving tentacles that are powered by cell growth, using two hairy spiral arms to guide the prey into the trap. Many plants capture prey by forming clever containers that creatures enter but cannot escape from. Passive pitfall traps, such as the ones employed by the butterworts (genus Pinguicula) and pitcher plants (Darlingtonia, Sarracenia, & Nepenthes), lure insects into a cylindrically shaped hollow vessel, often referred to as the pitcher, which serves as the plant's stomach. The insects get stuck in the digestive enzymes of the pitcher and die. Flypaper traps, such as the sundews (Drosophyllum & Drosera), produce a sticky mucilage that covers the upper surface of their leaves. Insects become mired in this, and the leaves then bend around or roll up to enclose the prey for digestion.
Within the carnivorous plant world there are some truly amazing plants. Of all the hundreds of species, Dionaea muscipula, the Venus fly trap, is probably the most dramatic. It is the only species in its genus and there are no other plants quite like it. Its hinged leaf lobes are capable of snapping shut on prey in less than half a second, eventually crushing the insect. Like many carnivorous plants, the Venus fly trap lures prey in with bait, which in this case is the smell of nectar. When an insect enters one of the bizarre traps it might bend one of the three stiff trigger hairs in the center of the leaf. When bent a couple of times in succession, these hairs activate the trap. The plant does not have muscle tissue; the process of closing instead involves electrical signals and changes in water pressure. The book The Nature of Life briefly describes the process of the Venus fly trap closing once triggered by saying that:
trigger cells at the foot of the hair are deformed, as if pried by a lever. Stimulated by the stress, trigger cells generate an electric signal that flows from cell to cell through the leaf. Specialized motor cells receive the signal, change shape, and cause the trap to close.
About ten days are needed for digestion, after which the leaf slowly opens up again, revealing only the indigestible chitin remains. The trap, not the plant itself, turns black and dies if the plant tries to digest fats, or eventually after three or four captures.
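As a rough illustration of the trigger logic described above (the trap closes only after its hairs are bent a couple of times in succession), here is a minimal Python sketch; the class name, the 20-second "memory" window, and the timing values are illustrative assumptions rather than facts taken from this essay.

# Toy model of the Venus fly trap trigger logic described above:
# the trap closes only if its trigger hairs are bent at least twice
# in quick succession. The 20-second memory window is an assumption
# for illustration only.
class FlyTrapLeaf:
    def __init__(self, memory_window=20.0):
        self.memory_window = memory_window  # seconds a first touch is "remembered"
        self.last_touch_time = None
        self.closed = False

    def touch(self, time_s):
        """Register one deflection of a trigger hair at time_s (seconds)."""
        if self.closed:
            return True
        if (self.last_touch_time is not None
                and time_s - self.last_touch_time <= self.memory_window):
            self.closed = True             # second touch in succession: trap snaps shut
        else:
            self.last_touch_time = time_s  # first touch: remember it and wait
        return self.closed

leaf = FlyTrapLeaf()
print(leaf.touch(0.0))   # False - a single touch alone does not close the trap
print(leaf.touch(5.0))   # True  - a second touch within the window closes it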
The largest carnivorous plants belong to the genus Nepenthes. The vines of these plants can be tens of meters long. This genus is also capable of catching some of the largest prey in its pitchers, including creatures as large as frogs and small rodents. Nepenthes are unique among carnivorous plants as the only dioecious genus, which means there are separate male and female plants. These plants are very endangered, and several species are extinct. Some species of Nepenthes are sold for hundreds of dollars to collectors and are involved in illegal overseas trade.
The growing of carnivorous plants has become very popular in recent years. Unfortunately, the endangered status of many species does not stop collectors from risking high fines and field-collecting them. This has had a serious impact on many species, but collectors are not the biggest problem facing carnivorous plants. In the USA and other developed countries, wetlands are considered useless and are being drained and developed. At present it is estimated that only 3-5% of carnivorous plant habitat remains in the US. Another problem is that fires are put out before they spread, even though many plants, such as the Venus fly trap, benefit from periodic burns. Habitat destruction from slash-and-burn agriculture, however, does not benefit any of the carnivorous plants and is also causing a great deal of the extinctions.
f:\12000 essays\sciences (985)\Biology\Cell Theory.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
History of the Cell Theory:
Cells are the make-up of all living things. Some cells are complete organisms, such as unicellular bacteria and protozoa. Other types of cells belong to multicellular organisms, such as nerve cells and muscle cells. Within the cell is genetic material, deoxyribonucleic acid (DNA), containing coded instructions for the behavior and reproduction of the cell. The cell was first discovered in 1665 by the English scientist Robert Hooke, who studied the dead cells of cork with a crude microscope. Robert Hooke was born on the Isle of Wight and educated at the University of Oxford. Hooke could not have discovered the cell without the microscope, which was further developed by Antoni van Leeuwenhoek, a Dutch maker of microscopes, around 1674. Leeuwenhoek was born in Delft, Holland, and had little or no scientific education. Leeuwenhoek also confirmed the discovery of capillary systems. Theodor Schwann, a German physiologist born in Neuss and educated at the universities of Bonn, Wurzburg, and Berlin, was involved in the study of the structure of plant and animal tissues. Along with Matthias Jakob Schleiden, a German botanist, Schwann proposed the cell theory.
The cell theory has three parts:
1. All organisms are composed of cells.
2. Cells are the basic units of structure and function in organisms
3. All cells come from preexisting cells.
The impact on science was very great due to the discovery of cells and the cell theory. Many or all things were affected by the discovery of cells; everything was looked upon in a different way. Some people still did not believe that all living organisms were made of tiny microscopic chambers called cells. Many people thought that those who believed in cells were insane; some even wanted to put Antoni van Leeuwenhoek in an insane asylum because he believed in cells. As you can see, the impact of cells on people was very great. Other things were discovered due to the discovery of cells, such as the discovery of atoms, the make-up of all things on the Earth.
Ryan Strehlein
Period 2
f:\12000 essays\sciences (985)\Biology\Cell.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CELLS
Cells are the basic unit of all life. Even though they are the smallest unit of life,
they are highly complex. Each cell has enough parts to practically survive on its own.
There are two types of cells: the plant cell and the animal cell. These two cells do not vary
greatly, but there are some major factors that separate them completely.
Animal cells are highly organized. The many parts that make up the cell work in
synch with each other. These parts are called organelles. The most important organelle in
the cell is the nucleus. The nucleus holds all of the blueprint information for the cell. The
DNA of a cell is found in the nucleus along with RNA. The nucleus is surrounded by two
membranes due to the need to be highly selective with materials that enter the cell's
nucleus. The cell itself is surrounded by a membrane. In between the membrane of the
nucleus and the cell membrane is cytoplasm. It is in the cytoplasm where all of the other
organelles are stored. There are six main organelles in the cytoplasm. First are the
mitochondria, which provide energy to the cell through ATP and respiration. Then there
is the endoplasmic reticulum which separates parts of the cell. Then there is the Golgi
apparatus which is used for sorting, storing, and secretion for the cell. Next are
lysosomes, which hydrolyze macromolecules. Then there are centrioles that play a major
role in cell division. And lastly there are vacuoles which have a variety of storage
functions.
The plant cell is similar in most ways. The only really big differences between the
plant cell and the animal cell are as follows. The first is the outer boundary. The plant
cell has a cell wall that acts as support for the cell, whereas the animal cell has a more
flexible, softer outer membrane. Also in the plant cell are chloroplasts, which are not in
the animal cell. Chloroplasts carry out photosynthesis, which is the plant's ability to make
its own food. This also accounts for the large central vacuole in the plant cell, which is used
for storage.
f:\12000 essays\sciences (985)\Biology\Cells Damn near everything there is to know about cells.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Biology
Cell Report
Damn near everything there is to know about cells:
There are many parts of a cell; they all have specific duties, and all are needed to continue the life of the cell. Some cells exist as single-celled organisms that perform all of the organism's metabolism within a single cell. Such single-celled organisms are called unicellular. Other organisms are made up of many cells, with their cells specialized to perform distinct metabolic functions. One cell within an organism may be adapted for movement, while another cell carries out digestion. The individual cells no longer carry out all life functions, but rather depend on each other. Many-celled organisms are called multicellular.
When a group of cells function together to perform an activity, they form a tissue. The cells of a human are organized into tissues such as muscle and nerve tissues. Plant tissues include those of the stem and root. Many cells in tissues are linked to each other at contact sites called cell junctions. Cell junctions help maintain differences in the internal environment between adjacent cells, help anchor cells together, and allow cells to communicate with one another by passing small molecules from one cell to another. Groups of two or more tissues that function together make up organs. An organ system is a group of organs that work together to carry out major life functions.
Eukaryotic Cell Structure:
Boundaries and Control:
Plasma Membrane - The plasma membrane is sometimes called the cell membrane, or the cellular membrane. It is the outermost part of the animal cell, and its purpose is to enclose the cell and change shape if needed. The cell membrane is capable of allowing materials to enter and exit the cell. Oxygen and nutrients enter, and waste products such as excess water leave. The plasma membrane helps maintain a chemical balance within the cell.
Cell wall - The cell wall is an added boundary to the cell. It is relatively inflexible, and surrounds the plasma membrane. The cell wall is much thicker than the plasma membrane and is made of different substances in different organisms. The cells of plants, fungi, almost all bacteria, and some protists have cell walls. Animal cells have no cell walls. Plant cells contain cellulose molecules, which form fibers. This fibrous cellulose of plants provides the bulk of the fiber in our diets. Chitin, a nitrogen-containing polysaccharide, makes up the cell walls of fungi.
Nucleus - The nucleus of the cell is the organelle that manages cell functions in a eukaryotic cell. The nucleus contains DNA, the master instructions for building proteins. DNA forms tangles of long strands called chromatin, which is packed into identifiable chromosomes when the cells are ready to reproduce. Also within the nucleus is the nucleolus, a region that produces tiny cell particles that are involved in protein synthesis. These particles, called ribosomes, are the sites where the cell assembles enzymes and other proteins according to the directions of the DNA.
Assembly, Transport, and Storage:
Cytoplasm - The material that lies outside the nucleus and surrounds the organelles is the cytoplasm, a clear fluid that is a bit thinner than toothpaste gel. It usually constitutes a little more than half the volume of a typical animal cell.
Endoplasmic Reticulum - The endoplasmic reticulum (ER) is a folded membrane that forms a network of interconnected compartments inside the cell. The ER membranes contain the enzymes for almost all of the cell's lipid synthesis, so they serve as the site of lipid synthesis in the cell. The ER functions as the cell's delivery system. Some parts of the ER are studded with ribosomes. In the cell, the sites of protein assembly are the ribosomes.
Golgi Apparatus - The Golgi apparatus's main purpose is to store materials. The Golgi apparatus is a series of closely stacked, flattened membrane sacs that receives newly synthesized proteins and lipids from the ER and distributes them to the plasma membrane and other cell organelles. Proteins are transferred from the ER to the Golgi apparatus in small, membrane-bound transport packages. These packages, called vesicles, have pinched off from the membrane of the ER and contain proteins. The Golgi apparatus modifies the proteins chemically, then repackages them in new vesicles for their final destination in the cell. They may be incorporated into the cell structures, expelled, or remain stored for later usage.
Vacuole - A vacuole is a sac of fluid surrounded by a membrane. Vacuoles often store food, enzymes, and other materials needed by cells, and some vacuoles store waste products. A plant cell has one large vacuole that stores water and other substances.
Lysosomes - In addition to the assembly and storage of macromolecules, cells also can disassemble things. Lysosomes, organelles that contain digestive enzymes, digest excess or worn out cell parts, food particles, and invading viruses or bacteria.
Mitochondria - Mitochondria are organelles in which food molecules are broken down to release energy. This energy is then stored in other molecules that can power cell reactions easily. A mitochondrion has an outer membrane and a highly folded inner membrane. As with ER, the folds of the inner membrane provide a large surface area in a small space. Energy is produced on the inner folds.
Chloroplasts - Chloroplasts transform light energy directly into usable chemical energy and store that energy in food molecules. These foods include sugars and starches. Chloroplasts contain the molecule chlorophyll, a green pigment that traps the energy from sunlight and gives plants their green color.
The chloroplast belongs to a group of plant organelles called plastids, which are used for storage. Some plastids store starches or lipids, whereas others contain pigments, molecules that give color.
Structures for Support and Locomotion:
The cytoskeleton is a network of thin, fibrous elements that act as a sort of scaffold to provide support for organelles. It also helps maintain cell shape in a manner similar to the way poles maintain the shape of a tent. The cytoskeleton is usually composed of microtubules and microfilaments. Microtubules are thin, hollow cylinders made of protein. Microfilaments are thin, solid protein fibers. Microtubules and microfilaments make up most of the cytoskeleton.
Cilia - Cilia are only contained in some cells. They are short, numerous, hairlike projections out of the plasma membrane. Cilia tend to occur in large numbers on a cell's surface, and their beating activity is usually coordinated.
Flagella - Flagella are longer projections that move with a whiplike motion. Cells that have flagella only have one or two per cell.
In single-celled organisms, cilia and flagella are the major means of locomotion. Sperm cells of animals and some plants move by means of flagella. Organisms that contain many cells, including humans, have cilia that move fluids over a cell's surface, rather than moving the cell itself.
f:\12000 essays\sciences (985)\Biology\Charles Darwin 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
More than a century after his death, and four generations
after the publication of his chief work, "The Origin of Species",
Charles Darwin may still be considered the most controversial
scientist in the world. His name is synonymous with the debate
that continues to swirl around the theory of evolution, a theory
that deeply shook the Western view of humanity and its place in
the world.
We tend to speak simply of the theory of evolution, leaving
off the explanatory phrase, "through natural selection." At most,
perhaps, the general public has heard of "survival of the
fittest," a poor phrase as far as I'm concerned, since fitness in
everyday usage is associated with physical conditioning and
athletic ability. "Survival of the most suited to its
environment" would be a more accurate, and convincing, expression
for this particular concept. But to most of us, "evolution" simply
means that human beings are descended from apes, a slight
misunderstanding, since both humans and modern apes are
descendants of a mutual ancestor that is now extinct. It was not
evolution itself but the theory of natural selection, and the evidence Darwin
collected to prove it to fellow scientists, peers, students, and,
most importantly, the masses of the public and the church, that was at
the heart of Darwin's contribution to biological science.
Charles Darwin did not invent the concept of evolution. A
number of prominent scientists and other thinkers during the
eighteenth century and the first half of the nineteenth century
(among them Charles Darwin's grandfather, Erasmus Darwin) had
offered detailed theories of evolution (Clark, 1984, pp. 24-25).
Therefore the idea of evolution went very far back in Western
history.
At that time this concept was referred to as the Great Chain
of Life and was conceived in the Middle Ages, based on a mixture
of classical and Biblical ideas. The ranking order ranged from
the "lowest" forms of life to "higher" living beings (such as the lion),
through the various classes of human beings from peasants to
nobles to popes, and upwards through the hierarchy of angels to
God.
This concept, in and of itself, has nothing to do with
evolution, in fact it seems to be anti-evolutionary, since every
member is fixed in its own place. This chain was created in a
time when the world was considered to be static rather than
dynamic and changing.
But the Newtonian revolution of the seventeenth century
replaced the old static world with a new world view in which
everything was naturally in motion. In the course of the
eighteenth century the notion of progress, of gradual but
relentless pursuit of betterment, began to take hold in western
thought. It was only natural that the ideas of change and of
progress should eventually be applied to the Great Chain of
Being. The natural implication of a "dynamic" chain of being was
a sort of tree of life, gradually sprouting upward from basic
primordial ooze, branching outward into all the varied species on
our fine planet, ending with, of course, eighteenth century Man.
This could be called evolutionary, but it does not offer a
theory of evolution, an order in which evolution took place. It
was no longer acceptable to say "God did it". Therefore, if
evolution was to ever become a science, a rational explanation
had to be offered.
Such an explanation was proposed by Jean-Baptiste Lamarck
toward the end of the eighteenth century, and Lamarck became best
known for his pre-Darwin theory of evolution. According to
Lamarck, the acquired characteristics of the parents could be
handed down to their offspring. Suppose, to take the most over
used example, that the first generations of giraffe had a neck of
ordinary length. Because the lower branches of the trees they fed
off were easily stripped, these early giraffes stretched out their
necks to reach higher branches. In doing so, they caused their
offspring to be born with slightly longer necks, until the
ultimate result was the giraffe of today.
This theory had virtues far beyond the necks of giraffes.
Taking this concept to its extreme, one would be under the
impression that past European forefathers had
passed on all their acquired traits to the younger generations
following them. The reasoning powers of the great philosophers and
the valour of Crusading knights should have been endowed in all
rather than a meagre few. According to this theory of evolution,
descendants could one day attain the heights Europeans had
already scaled.
Lamarckian evolution had only one crucial defect: it was
entirely untrue. One could cut off a rat's tail, but its
offspring would have normal tails. The rules of genetics were not
known in Lamarck's day, and were not known until long after
Darwin's, when the pioneering work of Mendel was rediscovered at
the turn of the twentieth century. But animal breeders had long
since discovered certain principles of breeding for desired
characteristics, and acquired characteristics played no part in
this process. Only through proper training could one find out if
a hunting dog had favourable qualities. But the training did not
create those characteristics in the dog's offspring.
Lamarckianism was now discredited, and the question of
evolution remained a mystery. Many scientists rejected evolution
and the Great Chain of Life, feeling that its concepts had no
place in biological science. The key was produced by the theorist
of the "dismal science" of economics, Thomas Malthus. Malthus
said that human (and animal) populations increased at a geometric
rate, whereas food supply increased only at an arithmetic rate.
Therefore population was continually outstripping food supply,
and was kept in check only by starvation, or by indirect acts
such as war and diseases.
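To make Malthus's point concrete, here is a minimal sketch in
Python (not part of the original argument); the starting values
and growth rates are purely illustrative assumptions, chosen only
to show a quantity that doubles outrunning one that grows by a
fixed increment.

# Illustrative comparison of geometric (population) versus arithmetic
# (food supply) growth. Starting values and rates are arbitrary
# assumptions for demonstration only.
population = 100        # grows geometrically: multiplied each generation
food_supply = 100       # grows arithmetically: a fixed amount added each generation

for generation in range(1, 9):
    population *= 2     # geometric growth (doubling)
    food_supply += 100  # arithmetic growth (constant increment)
    print(generation, population, food_supply)

# After a few generations the population (..., 1600, 3200, ...) dwarfs the
# food supply (..., 500, 600, ...), the imbalance Malthus argued is kept in
# check only by starvation, war, and disease.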
Malthusianism raised a very good question which is not easily
noticed. Which individuals survived in hard times, and which
died? Luck was probably the largest factor, but not the only one;
the strong, the courageous, or the
adaptable had a somewhat better chance of surviving than those
who lacked those characteristics. To the degree that strength,
drive, or adaptability were acquired characteristics, they would
have no effect on future generations since Lamarckianism had been
proven wrong. But to the degree that some individuals inherited
these characteristics, they were more likely to survive, to hand
down these same characteristics to their descendants. As the
lower branches of the ancient African trees were plucked bare,
the longer-necked ancestral giraffes were more likely to survive
than their shorter-necked cousins, and they handed down the
tendency toward long necks to their descendants. The modern,
long-necked giraffe thus evolved through countless generations of
natural selection.
A few people may have stumbled upon this idea before Darwin
did, but Darwin was the first to develop it. The development was
indeed more crucial to the ultimate acceptance than was the
insight alone. By itself, evolution by natural selection is an
amazing theory, and although this might explain a great deal, it
does not prove that it is true. Lamarckianism was amazing in its
time but it did not stand up close to scrutiny.
Before offering his insight to the world, Charles Darwin
determined that he would subject it to close scrutiny. He spent
the next two decades of his life collecting masses of evidence,
from the distribution of natural species to the experience of
pigeon breeders, to develop and support his argument. As far as
he was concerned, Darwin was nowhere near ready to present his
theory when, in 1858, Alfred Russel Wallace sent a paper to him.
Wallace's paper stated the very theory that Darwin had been
labouring on for two decades. Soon a joint paper was written and
published, and the theory of evolution through natural selection
was at least presented to the scientific world (Darwin and
Wallace, 1858). Two years later Darwin published his full theory
in The Origin of Species.
If Thomas Edison said that invention was one percent
inspiration and ninety-nine percent perspiration, Darwin showed
that the same was true of discovery. Evolution through natural
selection was a brilliant idea, and one that might be debated
endlessly.
The process through which one species evolved into a
distinctly different species was far too time consuming to be
directly demonstrated. All the naturalists had available, apart
from bones, was an understanding of the state of life as it is on
earth today. Essentially, The Origin of Species approached the
problem of evolution through two lines of argument and
interpretation, both rooted in the concept of inherited
variation. Darwin showed that the distinction between species was
not hard and fast. There are varieties of a given species, intricately
adapted to different conditions, that routinely interbreed along
the boundary between their home territories, but the mixed
varieties tend to remain confined to the boundary area, since
they are less adapted than either of the base varieties to their
own home territories. Speciation in nature did not require
dramatic jumps, but could emerge out of gradually widening
variation.
Darwin examined variation under domestication to demonstrate
that the deliberate selection of breeders could produce varieties
as markedly different from the root stock as the varieties found
in nature. Deliberate selection through breeding was obviously a
much faster and "efficient" process than natural selection
through differential rates of survival in the face of
environmental pressures, but the end result would be essentially
the same, the variety of a given species that was most adapted to
a given environment would gradually replace the root stock in
that environment.
The theory of evolution through natural selection would most
certainly have appeared even without Darwin; indeed, it would have
appeared at about the same time, since it was Wallace's independent
development of the theory that prompted his and Darwin's joint
paper. The idea of natural selection was circulating in the mid-
nineteenth century, just as the idea of evolution had been
circulating in the eighteenth.
But had the theory of evolution through natural selection
appeared only in outline form, it might have been many more years
in winning general acceptance. The collected evidence of The
Origin of Species was sufficient to persuade most biologists that
this was the key that they had been looking for. Quite a few
scientists held out, notably Louis Agassiz, but the younger
generation of students coming into the field seemed almost without
exception to adopt the theory. Within a few decades,
evolution through natural selection was a fundamental paradigm of
biological thought.
The development of biology through the century since that time
has not essentially altered the situation. A lot of changes have
been introduced, or at least debated. Once genetics was more
fully understood, it was realized that major steps in
speciation might owe more to favourable mutations than to
the regular process of variation. But the introduction of
mutation did not change the principle of natural selection.
Natural selection, as Darwin saw it, simply can not be
ignored. A largely barren earth is re-colonized by
the survivors' descendants, which must adapt through either
variation or mutation to fill the ecological niches left empty by
the prior extinctions, just as an area devastated by a forest
fire is filled by an evolution of new forms, not by the existing
ones from unburned areas. We may not be able to see the entire
history of evolution, but from our viewpoint we have hundreds of
examples of natural selection taking place all around us each and
every second of each and every day. Fortunately, Charles Darwin
(and maybe I should credit Alfred Russel Wallace) had the insight
and boldness to conceive and develop a theory so controversial to
his time and culture.
Chad Galloway
Clark, R.W. (1984). The Survival of Charles Darwin. New York:
Random House
Sproule, Anna (1990). Charles Darwin. Concord:Irwin
Warburton, Lois (1992). Human Origins-Tracing Humanity's
Evolution. San Diego:Lucent Books
Howell, F.C. (1980). Early Man. Virginia:Time-Life Books
Nouvelle, C (1885). The First People. Paris:Silver Burdett Co.
f:\12000 essays\sciences (985)\Biology\charles darwin.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Charles Darwin
"The Catholic church has absolutely no view on 'Darwin's Theory of Evolution' or 'Darwinism' what is commonly believed by the magistarium is that one should not necessarily take the Bible in a literal sense ..."
-An excerpt from Robert Richard's
The Meaning of Evolution.
Charles Darwin, a British naturalist, revolutionized biological and genetic studies with his new idea of "natural selection." His theory of evolution held that species had emerged from preexisting or "basic" forms. His liberal ideas in natural history aroused several disagreements among scientists and caused a division among them. In recognition of Darwin's theories, scientists today credit him as the first to explain some of the disagreements between geologists, such as why some rock layers lie higher than others in some areas but lower in other areas.
Early Years
Charles Darwin was born in Shrewsbury, England on February 12, 1809. He was the son of Robert Waring Darwin, a family doctor, and of Susannah Wedgwood Darwin, daughter of a porcelain manufacturer. His grandfather, in fact, was the English poet Erasmus Darwin. His early school training was at a small school house in Shrewsbury, after which his father sent him to Edinburgh University from 1825 to 1827 for medical studies. Darwin showed no interest in being a physician after witnessing several major operations performed without anesthesia. He was then sent to train as a pastor in the Church of England, studying at Christ's College, Cambridge, beginning in 1828.
He lost his interest in holy orders and became interested in something he had never pursued before: natural history. In 1831 he graduated from Cambridge with a B.A. He made many connections who became his allies in a "war" against the scientific community's beliefs about how evolution occurs.
Infact, one of his "connections" a professor and friend of his, Johns Stevens Henslow endorsed Darwin for an unpaid position as naturalist on a scientific five-year voyage on the H.M.S. Beagle. The ship took off on December 27, 1831, to explore and evaluate the western coast of South America and several islets of the coast of South America. Its Secondary mission was to set up Navigational posts along the coast line. Darwin was to learn of the biological and geological (of which he was not educated for!) Developments of the areas.
Research
Darwin, an uneducated (in geology, that is) but supposed "geologist," made two critical discoveries which later contributed to his theory of evolution. The first was that volcanoes and earthquakes changed the rock layers and their order. This discovery led to his second, a key piece in marine biology: that coral reefs were made by the clumping together of the skeletons of small animals; as more died and clumped together, they formed a large mass, the coral reef. His geological and biological discoveries led to his primary conclusion that things in nature change, geologically and biologically, over long periods of time. He published three books concerning these conclusions: Geological Observations on South America (1846), Coral Reefs (1842), and his most famous geological piece, Volcanic Islands (1844).
By 1856, Darwin's theories had been generally accepted among the scientific community, except for one: natural selection. This theory he had not yet unleashed, for it was far too complicated to be explained by word of mouth. He did tell his friends about it, and they arranged for him to meet with another individual, Alfred Russel Wallace, who had arrived at the same theory. Wallace had sent Darwin a letter outlining what he thought about natural selection. After two years of further research, the two went together to London's Linnean Society in 1858 to reveal what they thought. The full theory was printed on November 24, 1859, under the title "The Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life."
On the eastern seaboard of South America, Darwin researched many topics which would be of strong importance to the scientific community. When the Beagle reached the west coast and the offshore islands called the Galapagos, he studied and researched like never before. This is also the scene of his most well-known discovery - Darwin's finches.
He noticed that the ecosystems of the islands were very much the same: the climate, geography, and humidity were just alike. He noticed that there was a wide variety of avians (birds) on the islands. These birds, he noted, were similar in many aspects except their beaks. Some had long, slim beaks used for small seeds; others had short, large, powerful beaks used for crushing bigger seeds; and he noticed some with small, fine beaks used for obtaining small insects. He later concluded, from fossil evidence, that all of these birds had a common ancestor which then "evolved" into the different species we see today.
Conclusion
The impact of Charles Darwin will always be remembered. Under the influence of his spouse, and after keeping his new ideas to himself for years after arriving back in England, he finally recorded in a scientific journal what he found (explained above). His remarkable discoveries opened a new frontier in the scientific realm. He will always be remembered as a true pioneer in the theory of evolution.
Bibliography
Bowlby, John, Charles Darwin: A New Life; 1991.
Bowler, PJ., Charles Darwin: The Man and His Influence; 1990.
Keynes, R.D.,ed., Charles Darwin's Beagle Diary; 1988.
Moorhead, Alan, Darwin and the Beagle; 1969.
Richards, Robert, The Meaning of Evolution; 1993.
Andy Zerzan
Biology, Mr Herron
1st Hour Extra Credit
10/95
Charles Darwin:
His Life Story
of Discovery
f:\12000 essays\sciences (985)\Biology\Chaucers Knight.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Canterbury Tales
A Character Sketch of Chaucer's Knight
Geoffrey Chaucer's Canterbury Tales, written in approximately
1385, is a collection of twenty-four stories ostensibly told by various
people who are going on a religious pilgrimage to Canterbury Cathedral from
London, England. Prior to the actual tales, however, Chaucer offers the
reader a glimpse of fourteenth century life by way of what he refers to as
a General Prologue. In this prologue, Chaucer introduces all of the
characters who are involved in this imaginary journey and who will tell the
tales. Among the characters included in this introductory section is a
knight. Chaucer initially refers to the knight as "a most distinguished
man" (l. 43) and, indeed, his sketch of the knight is highly complimentary.
The knight, Chaucer tells us, "possessed/Fine horses, but he
was not gaily dressed" (ll. 69-70). Indeed, the knight is dressed in a
common shirt which is stained "where his armor had left mark" (l. 72).
That is, the knight is "just home from service" (l. 73) and is in such a
hurry to go on his pilgrimage that he has not even paused before beginning
it to change his clothes.
The knight has had a very busy life as his fighting career has
taken him to a great many places. He has seen military service in Egypt,
Lithuania, Prussia, Russia, Spain, North Africa, and Asia Minor where he
"was of [great] value in all eyes (l. 63). Even though he has had a very
successful and busy career, he is extremely humble: Chaucer maintains that
he is "modest as a maid" (l. 65). Moreover, he has never said a rude thing
to anyone in his entire life (cf., ll. 66-7).
Clearly, the knight possesses an outstanding character.
Chaucer gives to the knight one of the more flattering descriptions in the
General Prologue. The knight can do no wrong: he is an outstanding
warrior who has fought for the true faith--according to Chaucer--on three
continents. In the midst of all this contention, however, the knight
remains modest and polite. The knight is the embodiment of the chivalric
code: he is devout and courteous off the battlefield and is bold and
fearless on it.
In twentieth century America, we would like to think that we
have many people in our society who are like Chaucer's knight. During this
nation's altercation with Iraq in 1991, the concept of the modest but
effective soldier captured the imagination of the country. Indeed, the
nation's journalists in many ways attempted to make General H. Norman
Schwarzkopf a latter-day knight. The general was made to appear as a
fearless leader who really was a regular guy under the uniform.
It would be nice to think that a person such as the knight
could exist in the twentieth century. The fact of the matter is that it is
unlikely that people such as the knight existed even in the fourteenth
century. As he does with all of his characters, Chaucer is producing a
stereotype in creating the knight. As noted above, Chaucer, in describing
the knight, is describing a chivalric ideal. The history of the Middle
Ages demonstrates that this ideal rarely was manifested in actual conduct.
Nevertheless, in his description of the knight, Chaucer shows the reader
the possibility of the chivalric way of life.
f:\12000 essays\sciences (985)\Biology\Chimps v Humans Simlarities and Differences.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chimpanzee versus Humans
similarities & differences
Since the first days of human thought into our beginnings, chimpanzees have played a vital role in showing who we were. The chimpanzee, one of the great apes, makes its home in the forests of Central and West Africa. Their long arms and legs adapt them for living in regions such as lowland jungles and mountainous areas. Humans are classified in the order Primates and family Hominidae. Within this family are placed human beings and our nearest living relatives, the African apes, although in some classification schemes the apes are placed in the family Pongidae.
The defining characteristic of hominids is their ability to walk bipedally, using two feet and walking upright. This form of movement led to many adaptations within the hominid skeleton. There are notable changes in the spinal column, pelvis, and legs. The chimpanzee does have the ability to walk upright and sometimes does, but it spends most of the time walking on four limbs. It uses its arms as its front legs and walks on its knuckles. Our brain capacity is about twice as large as that of the chimp: humans have a brain capacity of 1300 to 1500 cc, while the chimp's is about 600 - 800 cc. It is thought by scientists that our brain size grew over time as we evolved to make complex tools and became increasingly sophisticated. The human skull is slightly different from that of our primate ancestors; these changes occurred over thousands of years of evolution. Over time the human skull and teeth have decreased in size from those of our ancestors. The chimp has much larger canine teeth and a protruding jaw line. A similarity, though, can be seen in how uniform the layout of the teeth is between humans and chimps; both possess canines, premolars, and molars.
Another notable similarity between us and the primates, specifically the chimps, is the use of an opposable digit, the thumb. The thumb allows for grasping tools and food. This proves to be a vital asset when chimps go out hunting for prey. Chimps also resemble humans socially and in their actions. After Jane Goodall showed the world what chimps were like, we saw that they act like us. The chimps stay together in packs ranging from two to fifty. The males guard and protect the pack from rival packs. In their hunting technique they were seen pushing the prey into a trap, something very smart on their part. All this sounds very human-like, but they were also seen making tools. Chimps used basic tools such as twigs to catch ants and eat them. They used rocks to smash nuts. How this is learned is a mystery; many think it is passed down through lineage. Chimps do lack forethought, which is thinking ahead before acting. The differences and similarities between chimpanzees and humans are very eye-opening, especially to someone who is seeing most of this information for the first time. Although I believe that there is a lot left to prove, I do think our direct ancestors were the primates, and there should be more research to establish even more similarities.
f:\12000 essays\sciences (985)\Biology\Circulatory System.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Circulatory System
The circulatory system in anatomy and physiology is the course taken by the blood through the arteries, capillaries, and veins and back to the heart. In humans and the higher vertebrates, the heart is made up of four chambers: the right and left auricles, or atria, and the right and left ventricles. The right side of the heart pumps oxygen-poor blood from the cells of the body back to the lungs for new oxygen; the left side of the heart receives blood rich in oxygen from the lungs and pumps it through the arteries to the various parts of the body. Circulation begins early in fetal life. It is estimated that a given portion of the blood completes its course of circulation in approximately 30 seconds.
In pulmonary circulation, the blood from the entire body is transported to the right auricle through two large veins: the superior vena cava and the inferior vena cava. When the right auricle contracts, it forces the blood through an opening into the right ventricle. Contraction of this ventricle drives the blood to the lungs. Blood is prevented from returning into the auricle by the tricuspid valve, which completely closes during contraction of the ventricle. In its passage through the lungs, the blood is oxygenated; it is then brought back to the heart by the four pulmonary veins, which enter the left auricle. When this chamber contracts, blood is forced into the left ventricle and then, by ventricular contraction, into the aorta. The bicuspid, or mitral, valve prevents the blood from flowing back into the auricle, and the semilunar valves at the beginning of the aorta stop it from flowing back into the ventricle. Similar valves are present in the pulmonary artery.
The aorta divides into a number of main branches, which in turn divide into smaller ones until the entire body is supplied by an elaborately branching series of blood vessels. The smallest arteries divide into a fine network of still more minute vessels, the capillaries, which have extremely thin walls; thus, the blood is enabled to come into close relation with the fluids and tissues of the body. In the capillaries, the blood performs three functions: it releases its oxygen to the tissues, it furnishes to the body cells the nutrients and other essential substances that it carries, and it takes up waste products from the tissues. The capillaries then unite to form small veins. The veins, in turn, unite with each other to form larger veins until the blood is finally collected into the superior and inferior venae cavae, from which it goes to the heart, completing the circuit.
In addition to the pulmonary and systemic circulations described above, a subsidiary to the venous system exists, known as portal circulation. A certain amount of blood from the intestine is collected into the portal vein and carried to the liver. There it enters into the open spaces called sinusoids, where it comes into direct contact with the liver cells. In the liver important changes occur in the blood, which is carrying the products of the digestion of food recently absorbed through the intestinal capillaries. The blood is collected a second time into veins, where it again joins the general circulation through the right auricle. In its passage through other organs, the blood is further modified.
Coronary circulation is the means by which the heart tissues themselves are supplied with nutrients and oxygen and are freed of wastes. Just beyond the semilunar valves, two coronary arteries branch from the aorta. These then break up into an elaborate capillary network in the heart muscle and valve tissue. Blood from the coronary capillary circulation enters several small veins, which then enter directly into the right auricle without first passing into the vena cava.
The action of the heart consists of successive alternate contraction and relaxation of the muscular walls of the auricles and ventricles. During the period of relaxation, the blood flows from the veins into the two auricles, gradually distending them. At the end of this period, the auricles are completely dilated; their muscular walls then contract, forcing almost the entire contents through the auriculoventricular openings into the ventricles. This action is sudden and occurs almost simultaneously in both auricles. The mass of blood in the veins makes it impossible for any blood to flow backward. The force of blood flowing into the ventricles is not powerful enough to open the semilunar valves, but it distends the ventricles, which are still in a condition of relaxation. The tricuspid and mitral valves open with the blood current and close readily at the beginning of ventricular contraction.
The ventricular systole immediately follows the auricular systole. The ventricular contraction is slower but far more forcible; the ventricular chambers are virtually emptied at each systole. The apex of the heart is thrown forward and upward with a slight rotary motion; this impulse, called the apex beat, can be detected between the fifth and sixth ribs. The heart is entirely at rest for a short time after the ventricular systole occurs. The entire cycle can be divided into three periods: in the first, the auricles contract; in the second, the ventricles contract; in the third, both the auricles and the ventricles remain at rest. In humans, with a normal heart rate of approximately 72 heartbeats per minute, the cardiac cycle has a duration of about 0.8 second. Auricular systole requires about 0.1 second; ventricular systole occupies approximately 0.3 second. Thus, the heart is completely at rest for about 0.4 second, or during perhaps half of each cardiac cycle.
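The timing figures above can be checked with a little arithmetic. The short Python sketch below only restates the essay's own numbers (72 beats per minute, 0.1 second of auricular systole, 0.3 second of ventricular systole); it is an illustration, not part of the original source.

# Arithmetic behind the cardiac-cycle timings quoted above (illustrative sketch only).
heart_rate = 72                      # beats per minute, as stated in the essay
cycle = 60.0 / heart_rate            # length of one cardiac cycle in seconds (~0.83 s)
auricular_systole = 0.1              # seconds
ventricular_systole = 0.3            # seconds
rest = cycle - auricular_systole - ventricular_systole
print(f"cycle length  : {cycle:.2f} s")
print(f"resting phase : {rest:.2f} s (about {rest / cycle:.0%} of each cycle)")

Running this gives a cycle of about 0.83 second and a resting phase of about 0.43 second, roughly half of each cycle, which agrees with the figures quoted above.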
With every beat, the heart emits two sounds, which are followed by a short pause. The first sound, coinciding with the ventricular systole, is dull and protracted. The second sound, made by the sudden closure of the semilunar valves, is shorter and much sharper. Diseases of the heart valves may change these sounds, and many factors, including exercise, cause wide variations in the heartbeat, even in healthy people. The normal heart rate of animals varies widely from species to species. At one extreme, the heart of a hibernating mammal may beat only a few times a minute; at the other, the hummingbird has a heart rate of 2000 heartbeats per minute.
When it enters the arteries at the moment of ventricular contraction, the blood stretches the walls of the arteries. During diastole, the distended arteries return to their normal diameter, in part because of the elasticity of connective tissue and in part because of the contraction of muscles in the arterial walls. This return to normal is important in maintaining a continuous flow of blood through the capillaries during the period while the heart is at rest. The expansion and contraction of the arterial walls that can be felt in all the arteries near the surface of the skin is called the pulse.
The rate and strength of the heartbeat are controlled by nerves through a series of reflexes that speed it up or slow it down. The impulse to contraction, however, is not dependent on external nervous stimuli, but arises in the heart muscle itself. A small bit of specialized tissue called the sinoauricular node, embedded in the wall of the right auricle, is responsible for initiating the heartbeat. The contraction then spreads over the auricles; in the septum between the auricles, it excites another node called the auriculoventricular node. The auriculoventricular bundle conducts the impulse from this node to the muscles of the ventricles, and in this way contraction and relaxation of the heart are coordinated. Each phase of the cardiac cycle is associated with the production of an electrical potential that can be recorded by electrical instruments to produce a reading known as an electrocardiogram.
Circulation of the blood in superficial capillaries can be observed under the microscope. The red blood cells can be seen moving along rapidly in the middle of the blood current, while the white cells advance more slowly along the walls of the capillaries. The capillaries present a far larger surface with which the blood comes in contact than do other blood vessels, and because they consequently offer the greatest resistance to the progress of the blood, they have a great influence on the circulation. Capillaries expand when temperature rises and help to cool the blood; they contract in cold and help preserve internal heat.
f:\12000 essays\sciences (985)\Biology\Clinical Chemistry Tests in Medicine.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Of the diagnostic methods available to veterinarians, the clinical chemistry test has developed into a valuable aid for localizing pathologic conditions. This test is actually a collection of specially selected individual tests. With just a small amount of whole blood or serum, many body systems can be analyzed. Some of the more common screenings give information about the function of the kidneys, liver, and pancreas and about muscle and bone disease. There are many blood chemistry tests available to doctors. This paper covers some of the more common tests.
Blood urea nitrogen (BUN) is an end-product of protein metabolism. Like most of the other molecules in the body, amino acids are constantly renewed. In the course of this turnover, they may undergo deamination, the removal of the amino group. Deamination, which takes place principally in the liver, results in the formation of ammonia. In the liver, the ammonia is quickly converted to urea, which is relatively nontoxic, and is then released into the bloodstream. In the blood, it is readily removed through the kidneys and excreted in the urine. Any disease or condition that reduces glomerular filtration or increases protein catabolism results in elevated BUN levels.
Creatinine is another indicator of kidney function. Creatinine is a waste product derived from creatine. It is freely filtered by the glomerulus, and blood levels are useful for estimating glomerular filtration rate. Muscle tissue contains phosphocreatine, which is converted to creatinine by a nonenzymatic process. This spontaneous degradation occurs at a rather consistent rate (Merck, 1991).
Causes of increases of both BUN and creatinine can be divided into three major categories: prerenal, renal, and postrenal. Prerenal causes include heart disease, hypoadrenocorticism and shock. Postrenal causes include urethral obstruction or lacerations of the ureter, bladder, or urethra. True renal disease from glomerular, tubular, or interstitial dysfunction raises BUN and creatinine levels when over 70% of the nephrons become nonfunctional (Sodikoff, 1995).
Glucose is a primary energy source for living organisms. The glucose level in blood is normally controlled to within narrow limits. Inadequate or excessive amounts of glucose or the inability to metabolize glucose can affect nearly every system in the body. Low blood glucose levels (hypoglycemia) may be caused by pancreatic tumors (over-production of insulin), starvation, hypoadrenocorticism, hypopituitarism, and severe exertion. Elevated blood glucose levels (hyperglycemia) can occur in diabetes mellitus, hyperthyroidism, hyperadrenocorticism, hyperpituitarism, anoxia (because of the instability of liver glycogen in oxygen deficiency), certain physiologic conditions (exposure to cold, digestion) and pancreatic necrosis (because the pancreas produces insulin which controls blood glucose levels).
Diabetes mellitus is caused by a deficiency in the secretion or action of insulin. During periods of low blood glucose, glucagon stimulates the breakdown of liver glycogen, inhibits glucose breakdown by glycolysis in the liver, and stimulates glucose synthesis by gluconeogenesis. This increases blood glucose. When glucose enters the bloodstream from the intestine after a carbohydrate-rich meal, the resulting increase in blood glucose causes increased insulin secretion and decreased glucagon secretion. Insulin stimulates glucose uptake by muscle tissue, where glucose is converted to glucose-6-phosphate. Insulin also activates glycogen synthase so that much of the glucose-6-phosphate is converted to glycogen. It also stimulates the storage of excess fuels as fat (Lehninger, 1993).
With insufficient insulin, glucose is not used by the tissues and accumulates in the blood. The accumulated glucose then spills into the urine. Additional amounts of water are retained in the urine because of the accumulated glucose, and polyuria (excessive urination) results. In order to prevent dehydration, more water than normal is consumed (polydipsia). In the absence of insulin, fatty acids released from adipose tissue are converted to ketone bodies (acetoacetic acid, B-hydroxybutyric acid, and acetone). Although ketone bodies can be used as energy sources, insulin deficiency impairs the ability of tissues to use ketone bodies, which accumulate in the blood. Because they are acids, ketones may exhaust the ability of the body to maintain normal pH. Ketones are excreted by the kidneys, drawing water with them into the urine. Ketones are also negatively charged and draw positively charged ions (sodium, potassium, calcium) with them into the urine. Some other results of diabetes mellitus are cataracts (because of abnormal glucose metabolism in the lens, which results in the accumulation of water), abnormal neutrophil function (resulting in greater susceptibility to infection), and an enlarged liver (due to fat accumulation) (Fraser, 1991).
Bilirubin is a bile pigment derived from the breakdown of heme by the reticuloendothelial system. The reticuloendothelial system filters out and destroys spent red blood cells yielding a free iron molecule and ultimately, bilirubin. Bilirubin binds to serum albumin, which restricts it from urinary excretion, and is transported to the liver. In the liver, bilirubin is changed into bilirubin diglucuronide, which is sufficiently water soluble to be secreted with other components of bile into the small intestine. Impaired liver function or blocked bile secretion causes bilirubin to leak into the blood, resulting in a yellowing of the skin and eyeballs (jaundice). Determination of bilirubin concentration in the blood is useful in diagnosing liver disease (Lehninger, 1993). Increased bilirubin can also be caused by hemolysis, bile duct obstruction, fever, and starvation (Bistner, 1995).
Two important serum lipids are cholesterol and triglycerides. Cholesterol is a precursor to bile salts and steroid hormones. The principal bile salts, taurocholic acid and glycocholic acid, are important in the digestion of food and the solubilization of ingested fats. The desmolase reaction converts cholesterol, in mitochondria, to pregnenolone, which is transported to the endoplasmic reticulum and converted to progesterone. This is the precursor to all other steroid hormones (Garrett, 1995).
Triglycerides are the main form in which lipids are stored and are the predominant type of dietary lipid. They are stored in specialized cells called adipocytes (fat cells) under the skin, in the abdominal cavity, and in the mammary glands. As stored fuels, triglycerides have an advantage over polysaccharides because they are unhydrated and lack the extra water weight of polysaccharides. Also, because the carbon atoms are more reduced than those of sugars, oxidation of triglycerides yields more than twice as much energy, gram for gram, as that of carbohydrates (Lehninger, 1993).
Hyperlipidemia refers to an abnormally high concentration of triglyceride and/or cholesterol in the blood. Primary hyperlipidemia is an inherited disorder of lipid metabolism. Secondary hyperlipidemias are usually associated with pancreatitis, diabetes mellitus, hypothyroidism, protein losing glomerulonephropathies, glucocorticosteroid administration, and a variety of liver abnormalities. Hypolipidemia is almost always a result of malnutrition (Barrie, 1995).
Alkaline phosphatase is present in high concentration in bone and liver. Bone remodeling (disease or repair) results in moderate elevations of serum alkaline phosphatase levels, and cholestasis (stagnation of bile flow) and bile duct obstruction result in dramatically increased serum alkaline phosphatase levels. The obstruction is usually intrahepatic, associated with swelling of hepatocytes and bile stasis. Elevated serum alkaline phosphatase and bilirubin levels suggest bile duct obstruction. Elevated serum alkaline phosphatase and normal bilirubin levels suggest hepatic congestion or swelling. Elevations also occur in rapidly growing young animals and in conditions causing bone formation (Bistner, 1995).
Aspartate aminotransferase (AST) is an enzyme normally found in the mitochondria of liver, heart, and skeletal muscle cells. In the event of heart or liver damage, AST leaks into the bloodstream and concentrations become elevated (Bistner, 1995). AST, along with alkaline phosphatase, is used to differentiate between liver and muscle damage in birds.
Alanine aminotransferase (ALT) is considered a liver-specific enzyme, although small amounts are present in the heart. ALT is generally located in the cytosol. Liver disease results in the releasing of the enzyme into the serum. Measurements of this enzyme are used in the diagnosis of certain types of liver diseases such as viral hepatitis and hepatic necrosis, and heart diseases. The ALT level remains elevated for more than a week after hepatic injury (Sodikoff, 1995).
Fibrinogen, albumin, and globulins constitute the major proteins of the blood plasma. Fibrinogen, which makes up about 0.3 percent of the total protein volume, is a soluble protein involved in the clotting process. The formation of blood clots is the result of a series of zymogen activations. Factors released by injured tissues or abnormal surfaces caused by injury initiate the clotting process. To create the clot, thrombin removes negatively charged peptides from fibrinogen, converting it to fibrin. The fibrin monomer has a different surface charge distribution than fibrinogen, so these monomers readily aggregate into ordered fibrous arrays. Platelets and plasma globulins release a fibrin-stabilizing factor, which creates cross-links in the fibrin net to stabilize the clot. The clot binds the wound until new tissue can be built (Garrett, 1995).
The alpha-, beta-, and gamma-globulins compose the globulins. Alpha-globulins transport lipids, hormones, and vitamins. Also included are a glycoprotein, ceruloplasmin, which carries copper, and haptoglobulins, which bind hemoglobin. Iron transport is related to the beta-globulins; the glycoprotein that binds the iron is transferrin (Lehninger, 1993). Gamma-globulins (immunoglobulins) are associated with antibody formation. There are five different classes of immunoglobulins. IgG is the major circulating antibody. It gives immune protection within the body and is small enough to cross the placenta, giving newborns temporary protection against infection. IgM also gives protection within the body but is too large to cross the placenta. IgA is normally found in mucous membranes, saliva, and milk. It provides external protection. IgD is thought to function during the development and maturation of the immune response. IgE makes up the smallest fraction of the immunoglobulins. It is responsible for allergic and hypersensitivity reactions.
Altered levels of alpha- and beta- globulins are rare, but immunoglobulin levels change in various conditions. Serum immunoglobulin levels can increase with viral or bacterial infection, parasitism, lymphosarcoma, and liver disease. Levels are decreased in immunodeficiency.
Albumin is a serum protein that affects osmotic pressure, binds many drugs, and transports fatty acids. Albumin is produced in the liver and is the most prevalent serum protein, making up 40 to 60 percent of the total protein. Serum albumin levels are decreased (hypoalbuminemia) by starvation, parasitism, chronic liver disease, and acute glomerulonephritis (Sodikoff, 1995). Albumin is a weak acid and hypoalbuminemia will tend to cause nonrespiratory alkalosis (de Morais, 1995). Serum albumin levels are often elevated in shock or severe dehydration.
Creatine Kinase (CK) is an enzyme that is most abundant in skeletal muscle, heart muscle, and nervous tissue. CK splits creatine phosphate in the presence of adenosine diphosphate (ADP) to yield creatine and adenosine triphosphate (ATP). During periods of active muscular contraction and glycolysis, this reaction proceeds predominantly in the direction of ATP synthesis. During recovery from exertion, CK is used to resynthesize creatine phosphate from creatine at the expense of ATP. After a heart attack, CK is the first enzyme to appear in the blood (Lehninger, 1993). CK values become elevated from muscle damage (from trauma), infarction, muscular dystrophies, or inflammation. Elevated CK values can also be seen following intramuscular injections of irritating substances. Muscle diseases may be associated with direct damage to muscle fibers or neurogenic diseases that result in secondary damage to muscle fibers. Greatly increased CK values are usually associated with heart muscle disease because of the large number of mitochondria in heart muscle cells (Bistner, 1995).
When active muscle tissue cannot be supplied with sufficient oxygen, it becomes anaerobic and produces pyruvate from glucose by glycolysis. Lactate dehydrogenase (LDH) catalyzes the regeneration of NAD+ from NADH so glycolysis can continue. The lactate produced is released into the blood. Heart tissue is aerobic and uses lactate as a fuel, converting it to pyruvate via LDH and using the pyruvate to fuel the citric acid cycle to obtain energy (Lehninger, 1993). Because of the ubiquitous origins of LDH, the total serum level is not reliable for diagnosis; but in normal serum, there are five isoenzymes of LDH which give more specific information. These isoenzymes can help differentiate between increases in LDH due to liver, muscle, kidney, or heart damage or hemolysis (Bistner, 1995).
Calcium is involved in many processes of the body, including neuromuscular excitability, muscle contraction, enzyme activity, hormone release, and blood coagulation. Calcium is also an important ion in that it affects the permeability of the nerve cell membrane to sodium. Without sufficient calcium, muscle spasms can occur due to erratic, spontaneous nervous impulses.
The majority of the calcium in the body is found in bone as phosphate and carbonate. In blood, calcium is available in two forms. The nondiffusible form is bound to protein (mainly albumin) and makes up about 45 percent of the measurable calcium. This bound form is inactive. The ionized forms of calcium are biologically active. If the circulating level falls, the bones are used as a source of calcium.
Primary control of blood calcium is dependent on parathyroid hormone, calcitonin, and the presence of vitamin D. Parathyroid hormone maintains blood calcium level by increasing its absorption in the intestines from food and reducing its excretion by the kidneys. Parathyroid hormone also stimulates the release of calcium into the blood stream from the bones. Hyperparathyroidism, caused by tumors of the parathyroid, causes the bones to lose too much calcium and become soft and fragile. Calcitonin produces a hypocalcemic effect by inhibiting the effect of parathyroid hormone and preventing calcium from leaving bones. Vitamin D stimulates calcium and phosphate absorption in the small intestine and increases calcium and phosphate utilization from bone. Hypercalcemia may be caused by abnormal calcium/phosphorus ratio, hyperparathyroidism, hypervitaminosis D, and hyperproteinemia. Hypocalcemia may be caused by hypoproteinemia, renal failure, or pancreatitis (Bistner, 1995).
Because approximately 98 percent of the total body potassium is found at the intracellular level, potassium is the major intracellular cation. This cation is filtered by the glomeruli in the kidneys and nearly completely reabsorbed by the proximal tubules. It is then excreted by the distal tubules. There is no renal threshold for potassium and it continues to be excreted in the urine even in low potassium states. Therefore, the body has no mechanism to prevent excessive loss of potassium (Schmidt-Nielsen, 1995).
Potassium plays a critical role in maintaining normal cellular and muscular function. Any imbalance of the body's potassium level, increased or decreased, may result in neuromuscular dysfunction, especially in the heart muscle. Serious, and sometimes fatal, arrhythmias may develop. A low serum potassium level, hypokalemia, occurs with major fluid loss in gastrointestinal disorders (i.e., vomiting, diarrhea), renal disease, diuretic therapy, diabetes mellitus, or mineralocorticoid dysfunction (i.e., Cushing's disease). An increased serum potassium level, hyperkalemia, occurs most often in urinary obstruction, anuria, or acute renal disease (Bistner, 1995).
Sodium and its related anions (i.e., chloride and bicarbonate) are primarily responsible for the osmotic attraction and retention of water in the extracellular fluid compartments. The endothelial membrane is freely permeable to these small electrolytes. Sodium is the most abundant extracellular cation, however, very little is present intracellularly. The main functions of sodium in the body include maintenance of membrane potentials and initiation of action potentials in excitable membranes. The sodium concentration also largely determines the extracellular osmolarity and volume. The differential concentration of sodium is the principal force for the movement of water across cellular membranes. In addition, sodium is involved in the absorption of glucose and some amino acids from the gastrointestinal tract (Lehninger, 1993). Sodium is ingested with food and water, and is lost from the body in urine, feces, and sweat. Most sodium secreted into the GI tract is reabsorbed. The excretion of sodium is regulated by the renin-angiotensin-aldosterone system (Schmidt-Nielsen, 1995).
Decreased serum sodium levels, hyponatremia, can be seen in adrenal insufficiency, inadequate sodium intake, renal insufficiency, vomiting or diarrhea, and uncontrolled diabetes mellitus. Hypernatremia may occur in dehydration, water deficit, hyperadrenocorticism, and central nervous system trauma or disease (Bistner, 1995).
Chloride is the major extracellular anion. Chloride and bicarbonate ions are important in the maintenance of acid-base balance. When chloride in the form of hydrochloric acid or ammonium chloride is lost, alkalosis follows; when chloride is retained or ingested, acidosis follows. Elevated serum chloride levels, hyperchloremia, can be seen in renal disease, dehydration, overtreatment with saline solution, and carbon dioxide deficit (as occurs from hyperventilation). Decreased serum chloride levels, hypochloremia, can be seen in diarrhea and vomiting, renal disease, overtreatment with certain diuretics, diabetic acidosis, hypoventilation (as occurs in pneumonia or emphysema), and adrenal insufficiency (de Morais, 1995).
As seen above, one to two milliliters of blood can give a clinician great insight into the way an animal's systems are functioning. With many more tests available and being developed every day, diagnosis becomes less invasive to the patient. The more information that is available to the doctor, the faster the diagnosis and recovery for the patient.
Bibliography
Barrie, Joan and Timothy D. G. Watson. "Hyperlipidemia."
Current Veterinary Therapy XII. Ed. John Bonagura.
Philadelphia: W. B. Saunders, 1995.
Bistner, Stephen I. Kirk and Bistner's Handbook of Veterinary
Procedures and Emergency Treatment. Philadelphia: W. B.
Saunders, 1995.
de Morais, HSA and William W. Muir. "Strong Ions and Acid-Base
Disorders." Current Veterinary Therapy XII. Ed. John
Bonagura. Philadelphia: W. B. Saunders, 1995.
Fraser, Clarence M., ed. The Merck Veterinary Manual, Seventh
Edition. Rahway, N. J.: Merck & Co., 1991.
Garrett, Reginald H. and Charles Grisham. Biochemistry. Fort
Worth: Saunders College Publishing, 1995.
Lehninger, Albert, David Nelson and Michael Cox. Principles of
Biochemistry. New York: Worth Publishers, 1993.
Schmidt-Nielsen, Knut. Animal Physiology: Adaptation and
Environment. New York: Cambridge University Press, 1995.
Sodikoff, Charles. Laboratory Profiles of Small Animal Diseases.
Santa Barbara: American Veterinary Publications, 1995.
f:\12000 essays\sciences (985)\Biology\cloning of animals.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cloning of Animals
On Sunday, February 23, 1997, Scottish researchers broke one of nature's greatest laws by cloning a lamb from a single cell of an adult ewe. This breakthrough opens the door to the possibility for the cloning of other mammals including humans.
This remarkable achievement is being looked at as a great advancement in animal agriculture. But it could also lead to serious ethical questions.
Researchers led by Ian Wilmut of the Roslin Institute in Midlothian, Scotland, showed that a fully differentiated cell from the mammary tissue of a ewe could be manipulated in such a way as to produce a genetically identical copy of the animal from which the DNA was acquired.
Scientists long believed that once a cell became differentiated, most of its approximately 100,000 genes shut off. Only a few genes remained active to allow the cell to perform its specific function in life. All efforts to reactivate the shut-off genes had failed. English researchers had come the closest by coaxing frog body cells to develop into tadpoles. The tadpoles, however, never matured into frogs.
The Scottish researchers failed many times with sheep cells before their success, but the task was eventually perfected and accomplished. This accomplishment has now made it possible to clone almost any mammal, including humans.
To the average person, exactly how the technique works is unclear. Scientists predicted that by making cells dormant and bringing them close to death, something happens to break the chemical locks (barriers) that keep most of the genes inactive. The mammary cell is inserted into an unfertilized sheep egg cell that has already had all of its own genetic material removed. Fusing the cells together tricks the egg into thinking that it has become fertilized.
After the cells are fused, researchers believe that the chemical machinery inside the egg cell goes to work to reprogram the mammary cell genes into starting over again, as if they had been brought together as sperm and egg. The cell divides and produces an embryo, a fetus, and a newborn that is identical to the animal from which it was cloned.
Although the United States government prohibits government funds being spent on human cloning research, and ethicists decry it, nevertheless, human cloning could be achieved, Neal First said. First is a professor of animal biotechnology and reproductive biology at the University of Wisconsin.
Overall, there is no apparent reason to clone humans. A duplicate body does not mean a duplicated mind. The clone's brain would be far different, for the clone would have to learn everything from its own experiences. Is cloning a human ethical? Should we try to clone humans?
I believe that nature will clone what it wants to clone. Researchers should be careful, for we know nothing of the stability of any animal that is cloned by scientists. We don't know if that animal will be dominant over the animal from which it was cloned or if it will turn hostile. From my point of view, having a "clone" is not all it is cracked up to be.
f:\12000 essays\sciences (985)\Biology\Cloning paper.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A few years ago, if you were to ask someone about the possibilities of cloning, they would most likely say it was impossible. This attitude toward cloning was widely held until recently, when scientists in Scotland cloned a sheep, and immediately afterward scientists in Oregon cloned a monkey (Fackelmann 276). One of the most major scientific breakthroughs of the century has occurred, and we are not ready for it. The scientific breakthrough of cloning has caused a great deal of controversy in the media and also in the government. The advantages of cloning are tremendous to the human race and cannot be ignored.
I believe that cloning humans is what the human race needs to advance. Humans would be stronger, smarter, and more perfect. Scientists could remove bad genes from the parents and replace them with good ones. If one of the parents had a bad gene or hereditary disease, it could be removed from the embryo and replaced with another "clean" gene. This process is called embryo screening; it is used to determine whether the child has received the defective gene. Several embryos could be cloned; the DNA from one of the embryos would then be removed, and standard genetic testing would be used to detect whether or not that embryo contained the genetic disease. If this cloned embryo contained a disease, then one of the other embryos could be used for implantation in a parent, which guarantees that the child would be free of genetic disease (Marshall 1025). For those who disagree with cloning, I am sure that if their child could be saved from a genetic disease they would reconsider.
Imagine if one of your friends or family members was in need of a liver or kidney. Most likely you would donate your own liver or kidney to save their life, but then you are one organ short. This happens a lot and seems to work fine. But if they needed a new heart, you might have trouble finding one. Not if you had a clone of yourself that could supply you with a new organ, or maybe even a relative's organ that was naturally stronger (Cloning 1117). Someone could replace their old organs with new ones and extend their life span. Thousands of lives could be saved if we had the technology and advanced science of cloning available. Even an organ accepted from a relative may fail; it has to be compatible with our body system. If it's your clone, then it's a perfect match.
Cancer is one of the largest killers and also one of the largest dilemmas scientists face today. Cancer research is possibly the most important reason for embryo cloning. Oncologists (people who study cancer) believe that embryonic study will advance understanding of the rapid cell growth of cancer. Cancer cells develop at approximately the same great speed as embryonic cells do. By studying embryonic cell growth, scientists may be able to determine how to stop it, and in turn stop cancer growth (Watson 66).
Whenever there is a draft for a war, people protest, hide, and even leave the country. Why should people be sent to fight for something they don't believe in, or, in my case, for a country they don't want to die for? We cannot dispense human lives as if they were candy. If we produced smart, strong, and loyal clones, we could have the perfect soldier. There would no longer be humans in the military, and there would be no worries about losing lives or family members. Clones made specifically with a sole mission to die for their country, these perfect soldiers would make up a perfect army.
Consider the case of a lost relative, more specifically parents losing their child. Parents could clone a child who had died, as a homage of love. The saying "there is nothing that you can do to bring him/her back" would become obsolete with the process of cloning. Of course parents could never have their child back exactly as he or she was, but they could definitely start over again. Parents could also clone the traits of a famous person, or favorable traits of someone else, and put them in their child's embryo, or have twins or even sextuplets if they wanted. Parents would be able to make more decisions about their child or children's traits.
The benefits of cloning do not stop at humans; they extend to animals. A lot of controversy is brought about when animals are used for laboratory testing. We should clone animals specifically for laboratory tests; these animals would not be depleting any populations, nor would they be taken away from their habitats, destroying any food chains.
Farmers would also benefit from cloning their select animals. This would give the consumer better meat quality and lower the price of meat. Governments in countries where famine is present could master cloning techniques and provide for the starving populations in their country. One of the most beneficial effects of cloning animals is that species whose populations are almost extinct could be replenished by cloning, animals such as the blue whale, the condor, and the Norfolk whale. These are only three examples of endangered species; there are practically hundreds of endangered species that could be saved with the process of cloning. We wouldn't be bothered by activists or any more "save the whales" foundations asking you to donate money. If the number of animals produced were controlled according to the food chains and habitats, these species could thrive once again.
We could grow plants that would be immune to bugs or pesticides. Farmers would have crops that would survive the winter and frost. We could eliminate the loss of crops that forces prices up in the supermarkets. We could grow crops that require less water or can grow in certain types of soil. This would be good for California, where water is scarce.
This is only the beginning of the benefits we can achieve from cloning: creating a stronger and more advanced human race. Diseases would be weeded and cleaned out of humans' genetic makeup. Of course there is a chance of this getting into the hands of a madman or someone who would use it for ill purposes. But we cannot let this amazing discovery be stopped by people who can only see the bad side of cloning. They must also see the vast benefits of cloning, how it can save lives and entire species from extinction. I believe that cloning is a part of our evolution; our ancestors evolved by using their hands and minds, by creating language and civilizations, and this advanced them. Cloning is what will advance our race more. Our bodies have stopped using and have disposed of unnecessary organs and body features which have proved to be useless. Diseases and deformities are useless, and cloning can aid the evolution of humans by cleansing our bodies of such ill and in some cases deadly burdens. There are too many advantages in cloning for us to ignore it.
f:\12000 essays\sciences (985)\Biology\Cloning Today.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A clone is a group of organisms that are genetically identical. Most clones result from asexual reproduction, a process in which a new organism develops from only one parent. One process of cloning, called nuclear transfer, replaces the nucleus of an immature egg with a nucleus from another cell. Most of the work with clones is done from cultures. An embryo has about thirty or forty usable cells, but a culture offers an almost endless supply. When the nucleus has been inserted into the egg cell, the cell is given an electric shock to initiate development; traditionally this is the sperm's role. In this paper we will discuss the advantages of different types of clones, such as their usefulness for research. We will also discuss the disadvantages and the different techniques that result from the cloning of different organisms.
First let's start with the history of cloning. The modern era of laboratory cloning began in 1958 when F. C. Steward cloned carrot plants from mature single cells placed in a nutrient culture containing hormones. The first cloning of animal cells took place in 1964. John B. Gurdon took the nuclei from tadpoles and injected them into unfertilized eggs. The nuclei containing the original parents' genetic information had been destroyed with ultraviolet light. When the eggs were incubated, Gurdon discovered that only 1% to 2% of the eggs had developed into fertile adult toads. The first successful cloning of a mammal was achieved nearly twenty years later. Scientists from Switzerland and the U.S. successfully cloned mice using a method similar to Gurdon's, but it required one extra step. After the nuclei were taken from the embryos of one type of mouse, they were transferred into the embryos of another mouse, which served as the surrogate mother. This mouse went through the birthing process to create the cloned mice. The cloning of cattle was achieved in 1988, when embryos from prize cows were transplanted to unfertilized cow eggs whose own nuclei had been removed. In 1993 the first human embryos were cloned using a technique that placed individual embryonic cells (blastomeres) in a nutrient culture where the cells then divided into 48 new embryos. These fertilized eggs did not develop to a stage that could be used for transplantation into a human uterus.
Cloning can do many good things for our wildlife and for our economy. The process of cloning can save us a lot of money. A crop that is imported to our country could instead be cloned here, which would also make the product cheaper. Cloning would also develop stronger plants, resistant to disease, parasites, and insect damage. With better plants, cloning could lead to more profit for farmers, and we could clone an abundance of trees, which would help the ecological health of our planet. Cloning is good for our wildlife because with cloning it is easier for us, as a nation and a world, to save many different types of endangered species. We would also be able to keep a type of animal from overpopulating its environment; we would be able to keep an animal within a controlled number. Another possibility for cloning would be the creation of new organs for someone who is in need of a transplant. The organ could be cloned from someone matching the person's type. This way people would not need to wait for someone to die to find a replacement organ. These ideas have not been put into effect yet, but that does not mean that they are far away in the future. The ideas for cloning are infinite; there is no telling what the possibilities can be. Edward Squires, an equine reproduction biologist at Colorado State, says, "You could blow your mind thinking about the possibilities." These are just a few of the awesome possibilities in the world of cloning.
Now we will discuss some of the disadvantages of cloning. Cloning of certain crops will increase the yield and quality. However, this will also increase the danger of a disease being able to destroy the entire crop. Cloning destroys the genetic diversity of life. When everything is the same genetically, it is more likely that the entire population will be wiped out by either disease or predator. Ian Wilmut, a researcher in Roslin, Scotland, says, "The more you interfere with reproduction, the more danger there is of things going wrong."
Is cloning ethical? That is a question that will be with us for a long time. Are there benefits of cloning? The answer to that is a resounding yes. Is there a bad side to cloning? This is another irrefutable affirmative. Should we clone? This is where things start to get a little shaky; the answer is more of a "yes, kind of." Most scientists agree that we ought to do more research on clones and even use some of the benefits that come through cloning. However, most scientists also agree that lines should be drawn. Where should we draw those lines? Everyone has an opinion in this category, and they are all different. The ability is there, at conception, to clone a human. Should this person be allowed to grow and be a genetic backup for the "real" person, so that if the "real" person were to need a transplant of some organ, there would be an exact copy ready and waiting? This is just one of the ethical questions that need to be answered. The question of cloning is no longer can we but should we.
f:\12000 essays\sciences (985)\Biology\Color Blindness.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Color blindness is the inability to distinguish particular colors. It is generally an inherited trait, but can result from a chemical imbalance or eye injury.
There are three primary colors. They are red, blue, and yellow. All other colors are the results of different combinations of primary colors. Special visual cells, called cones, are responsible for our ability to see color. People with normal vision have three different types of cones, each responsible for a different primary color.
The absence of particular cones causes the absence of particular colors. This can be one cause of color blindness. There are four types of color blindness. The rarest forms are monochromatism and a-typical monochromatism. People with monochromatic vision, or total color blindness, have no cones at all. As a result, they have no ability to see colors, and no hue discrimination whatsoever. Monochromatic vision is very similar to watching a black and white television program.
Somebody with a-typical monochromatic vision has just one type of cone, and can see only one color and various shades of that color. This form is even rarer than the "typical" monochromatism.
Another, more common, form of color blindness is called dichromatism. People with dichromatic vision tend to confuse red, green, and gray, but can easily distinguish blue and yellow. Some cannot even see the longest wavelengths of light -- the red end. Though it is rare, others cannot see the short wavelengths, near the violet end. These people tend to confuse blue, yellow, and gray, but not red and green.
Normal vision is called trichromatism. Most color blind people have a version of trichromatism called anomalous trichromatism. People with this condition can see the same colors as people with normal vision, but not as well. For example, many people with this common form of color blindness are "green-weak". This means they see green, but to see the same color normal people see when green and yellow are mixed, more green must be added. "Green-weak" and "red-weak" are the most prevalent forms of anomalous trichromatism. The "blue-weak" form is rare.
Color blindness can be tested in a variety of ways. The Hardy-Rand-Rittler and Ishihara tests indicate both the type and degree of color blindness. In these tests, a variety of shapes, letters, and numbers lie in a jumbled mess of dots. The dots vary in both color and intensity, which camouflages the shapes. A person's ability to detect such shapes directly corresponds with their degree of color blindness.
Other tests, such as the Holmgren yarn-matching test and the Farnsworth-Munsell 100-hue disk-matching test, measure one's ability to match colors. This can be useful when determining one's degree of anomalous trichromatism.
When color blindness is inherited (which is almost always the case), it is inherited through the X chromosome. Nearly eight percent of the male population is color blind, but only one out of two hundred females has this condition. The reason is that males have one X chromosome and females have two, and color blindness is recessive on the X chromosome. So, if the X chromosome of a male carries the color blindness gene, the male will be color blind. If one of the X chromosomes of a female carries the gene for color blindness, generally the other will not, so there is a dominant gene to take the place of the recessive one.
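A rough back-of-the-envelope check of these prevalence figures is possible. The short Python sketch below is only an illustration; it assumes simple Hardy-Weinberg proportions for an X-linked recessive trait, an assumption the essay itself does not state, and the 8 percent male prevalence is the essay's own figure.

# Back-of-the-envelope check of the prevalence figures above, assuming an
# X-linked recessive trait and Hardy-Weinberg proportions (an assumption,
# not something stated in the essay).
q = 0.08                          # allele frequency, roughly equal to male prevalence (one X per male)
female_prevalence = q ** 2        # a female must carry the gene on both X chromosomes
print(f"expected male prevalence  : {q:.1%}")
print(f"expected female prevalence: {female_prevalence:.2%} (about 1 in {round(1 / female_prevalence)})")

Under these assumptions the expected female prevalence works out to roughly 1 in 156, the same order of magnitude as the one-in-two-hundred figure quoted above.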
Currently, there is no cure for color blindness. There have been treatments that have failed, but none that have worked. One failed treatment for anomalous trichromatism was color tinted glasses. If you were green-weak, you would wear glasses with a tint of green. There were obvious problems. The first was that everything appeared in shades of your weak color: if you looked through green tinted glasses, your green vision was normal, but all other colors were off.
f:\12000 essays\sciences (985)\Biology\Creationism vs Evolution Through the eyes of Stephen Jay Go.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Creationism vs evolution: through the eyes of Stephen Jay Gould
It has been over 100 years since English naturalist Charles Darwin first told the world his revolutionary concept about how living things develop. Evolution through natural selection and adaptation was the basis of his argument, and it remains a debated subject to this day. Across this nation, a "return" to "traditional" values has also brought the return of age-old debated topics. One issue that truly separates Americans is the issue of creation versus evolution. Since the 19th century, this divisive topic has been debated in school boards and state capitols across America. In many instances religious fundamentalists won the day by banning the instruction, or even the mention, of "ungodly" evolutionary thinking in schools. With today's social and political climate, this question is back with greater force than ever, which is why this subject is more important now than ever. In Stephen Jay Gould's book The Panda's Thumb, an overview of and an argument for Charles Darwin's evolutionary thinking is conducted with flowing thoughts and ideas. The essay titled "Natural Selection and the Human Brain: Darwin vs. Wallace" looks directly at two hard-fought battles between evolutionists and creationists. Using sexual selection and the origins of human intellect as his subjects, Gould argues his opinion in favor of evolutionary thought.
In the essay "Natural Selection and the Human Brain: Darwin vs. Wallace," Gould tells about the contest between Darwin and another prominent scientist, Alfred Wallace, over two important subjects. These topics, one being sexual selection and the other the origins of the human brain and intellect, were debated by men who generally held the same views on evolution. On these two subjects, however, Wallace chose to differ, describing his position as his "special heresy" (53). The first of these two areas of debate between the two men was the question of "sexual selection." Darwin theorized that there were two types of sexual selection: first, a competition between males for access to females, and second, the choice "exercised by females themselves" (51). In this, Darwin attributed racial differences among modern human beings to sexual selection "based upon different criteria of beauty that arose among various peoples" (51). Wallace, however, disputed the suggestion of female choice. He believed that animals were highly evolved and beautiful works of art, not allowing the suggestion of male competition to enter his mind.
The debate over sexual selection was but a mere precursor to a much more famous and important question: the question of the origins of the human mind. Gould's discussion of the origins of the human mind is one in which he vocalizes his own opinions and feelings in a much more critical manner. Gould begins the topic of human origins by briefly criticizing Wallace for his different views on this subject. Wallace believed that human intellect and morality were unique and could not be the product of natural selection. Wallace suggested that "some higher power" (53) must have "intervened to construct this latest and greatest of organic innovations." Gould sharply chastises Wallace for "simple cowardice, for inability to transcend the constraints of culture and traditional views of human uniqueness, and for inconsistency in advocating natural selection so strongly" (53). The argument that human intelligence was divine, along with the belief that all people of all races have the same capacity of intellect but are limited only by their culture, was at the heart of Wallace's opinions. Gould rebuts Wallace by going into Darwin's "subtler view." Gould writes that our brains may have "originated 'for' some set of necessary skills . . . but these skills do not exhaust the limits of what such a complex machine can do" (57). Gould ends by describing Wallace's thinking as having direct ties with creationist thought, a school of thought that Gould obviously portrays as wrong throughout his essay.
Throughout The Panda's Thumb, Gould tells us about the debate between Darwin and Wallace over sexual selection and the origins of human intellect. Throughout his essay Gould gives vivid accounts of the different views expressed by the two men as he analyzes the validity of each. He states a clear opinion and backs up his claim. In this, Gould sufficiently argues the points that he makes. As a writer, Gould states his opinion through clear and precise words in a style that anyone could grasp immediately. To make his point unmistakable, Gould gives direct and continuous analysis, commentary, and criticism as he digs deeper into his subjects. Gould's style of writing is not only appropriate but favorable for this type of discussion, and it can be applauded. Rather than submitting to a scientist's ever-present tendency to over-explain and over-analyze while using incomprehensible vocabulary, Gould gets the job done with brief yet fulfilling summaries and statements.
In the end, however, Gould must be judged by his judgement. His argument is the ultimate standard-bearer, and in this there are few weaknesses. His excellent use of clear language and style as he analyzes a particular subject is commendable. Never does Gould stray into incomprehensible scientific hogwash. Never does Gould begin to attack mercilessly without a shred of evidence.
But even with Gould's excellent storytelling in his essay, there remain subtler, yet still present, weaknesses in his argument. While Gould appropriately attacks Wallace for his creationist stance on human intellect, he in turn falls short through his lack of creationist-related discussion. While he does argue, and does it well, he leaves something to be desired in his attack on creationist thought. In addition, Gould doesn't seem to write enough about Darwin's own feelings about the human intellect; though he states Darwin's underlying opinion, it would have been beneficial for Gould to have done more in this area.
Gould's essay "Natural Selection and the Human Brain" is one that strikes the reader's mind with interest and curiosity. Written in a style and format that is "reader friendly" while sufficiently and consistently arguing a clear and precise point, it has the attributes that make Gould's essay such a delight to read. More important, however, are the social implications of this essay. While school boards across the nation debate whether evolution should be taught in the schools, Gould's work stands out with its overriding validity and straightforwardness. It is an example of reasonable argument, while evolution's opponents use nothing but rhetoric and fear to displace scientific analysis. Through Gould's work, a greater sense of understanding about how creatures evolved can be gained through these two excellent examples.
f:\12000 essays\sciences (985)\Biology\Current status of malaria vaccinology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CURRENT STATUS OF MALARIA VACCINOLOGY
In order to assess the current status of malaria vaccinology, one must first take an overview of the whole disease. One must understand the disease and its enormity on a global basis.
Malaria is a protozoan disease of which over 150 million cases are reported per annum. In tropical Africa alone, more than 1 million children under the age of fourteen die each year from malaria. From these figures it is easy to see that eradication of this disease is of the utmost importance.
The disease is caused by one of four species of Plasmodium: P. falciparum, P. malariae, P. vivax, and P. ovale. Malaria does not only affect humans, but can also infect a variety of hosts ranging from reptiles to monkeys. It is therefore necessary to look at all the aspects in order to assess the possibility of a vaccine.
The parasite has a long and complex life cycle, which creates problems for immunologists. The vector for malaria is the Anopheles mosquito, in which the life cycle of the parasite both begins and ends. The parasitic protozoan enters the bloodstream via the bite of an infected female mosquito. During her feeding she transmits a small amount of anticoagulant and haploid sporozoites along with saliva. The sporozoites head directly for the hepatic cells of the liver, where they multiply by asexual fission to produce merozoites. These merozoites can now travel one of two paths: they can go on to infect more hepatic liver cells, or they can attach to and penetrate erythrocytes. Once inside the erythrocytes, the plasmodium enlarges into uninucleated cells called trophozoites. The nucleus of this newly formed cell then divides asexually to produce a schizont, which has 6-24 nuclei.
The multinucleated schizont then divides to produce mononucleated merozoites. Eventually the erythrocyte undergoes lysis, and as a result the merozoites enter the bloodstream and infect more erythrocytes. This cycle repeats itself every 48-72 hours (depending on the species of plasmodium involved in the original infection). The sudden release of merozoites, toxins, and erythrocyte debris is what causes the fever and chills associated with malaria.
Of course, the parasite must be able to transmit itself for survival. This is done at the erythrocytic stage of the life cycle. Occasionally merozoites differentiate into macrogametocytes and microgametocytes. This process does not cause lysis, and therefore the erythrocyte remains stable; when the infected host is bitten by a mosquito, the gametocytes can enter its digestive system, where they mature into sporozoites. Thus the life cycle of the plasmodium begins again, waiting to infect its next host.
At present, people infected with malaria are treated with drugs such as chloroquine, amodiaquine, or mefloquine. These drugs are effective at eradicating the exoerythrocytic stages, but resistance to them is becoming increasingly common. Therefore a vaccine looks like the only viable option.
Wiping out the vector, i.e. the Anopheles mosquito, would also be an effective way of stopping disease transmission, but the mosquitoes are becoming resistant to insecticides, and so again we must look to a vaccine as a solution.
Having read about several attempts at creating a malaria vaccine, a number of points become clear. The first question is whether the theory of malaria vaccinology is a viable concept at all. I found the answer to this in an article published in Nature in July 1994 by Christopher Dye and Geoffrey Targett. They used the MMR (measles, mumps and rubella) vaccine as an example against which a possible malaria vaccine could be compared. Their article said that "simple epidemiological theory states that the critical fraction (p) of all people to be immunised with a combined vaccine (MMR) to ensure eradication of all three pathogens is determined by the infection that spreads most quickly through the population; that is, by the agent with the largest basic case reproduction number, R0. In the case of MMR this is measles, with an R0 of around 15, which implies that p > 1 - 1/R0 ≈ 0.93." Gupta et al. point out that if a population of malaria parasites consists of a collection of pathogens or strains that have the same properties as common childhood viruses, the vaccine coverage required would be determined by the strain with the largest R0 rather than by the R0 of the whole parasite population; while estimates of the latter have been as high as 100, the former could be much lower.
The above shows us that if a vaccine can be made against the strain with the highest R0, it could provide immunity against all malaria plasmodia.
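To make the arithmetic in the quoted threshold concrete, here is a minimal sketch (my own illustration; the function name and the example R0 values other than measles' 15 are assumptions, not figures from Dye and Targett or Gupta et al.) of the critical vaccination fraction p > 1 - 1/R0:

def critical_fraction(r0):
    # Minimum fraction of the population that must be immunised for eradication.
    if r0 <= 1:
        return 0.0  # an infection with R0 <= 1 dies out without vaccination
    return 1.0 - 1.0 / r0

# Measles, the fastest-spreading MMR pathogen, with R0 around 15:
print(round(critical_fraction(15), 2))   # 0.93, i.e. about 93% coverage

# Hypothetical malaria figures: the whole-population estimate may be as high
# as 100, but the single strain with the largest R0 could be much lower.
for r0 in (100, 10, 3):
    print(r0, round(critical_fraction(r0), 2))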
Another problem faced by immunologists is the difficulty in identifying the exact antigens which are targeted by a protective immune response. Isolating the specific antigen is impeded by the fact that several cellular and humoral mechanisms probably play a role in natural immunity to malaria - but as is shown later there may be an answer to the dilemma.
While researching current candidate vaccines I came across some which seemed more viable than others and I will briefly look at a few of these in this essay.
The first is a study carried out in the Gambia from 1992 to 1995 (taken from the Lancet of April 1995). The subjects were 63 healthy adults and 56 children identified as having malaria at an outpatient clinic.
Their test was based on the fact that experimental models of malaria have shown that cytotoxic T lymphocytes (CTL), which kill parasite-infected hepatocytes, can provide complete protective immunity against certain species of plasmodium in mice. From the tests they carried out in the Gambia they have provided what they see as indirect evidence that cytotoxic T lymphocytes play a role against P. falciparum in humans.
Using a human leucocyte antigen (HLA) based approach termed reverse immunogenetics, they had previously identified peptide epitopes for CTL in liver stage antigen-1 and the circumsporozoite protein of P. falciparum, the most lethal of the plasmodia that infect humans. Having identified these, they then went on to identify CTL epitopes for HLA class 1 antigens that are found in most individuals from Caucasian and African populations. Most of these epitopes are in conserved regions of P. falciparum.
They also found CTL peptide epitopes in a further two antigens, thrombospondin-related anonymous protein and sporozoite threonine- and asparagine-rich protein. This indicated that a subunit vaccine designed to induce a protective CTL response may need to include parts of several parasite antigens.
In the tests they carried out, they found that CTL levels in both children with malaria and in semi-immune adults from an endemic area were low, suggesting that boosting these low levels by immunisation may provide substantial or even complete protection against infection and disease.
Although these tests were not a huge success, they do show that a CTL-inducing vaccine may be the road to take in looking for an effective malaria vaccine. There is now accumulating evidence that CTL may be protective against malaria and that levels of these cells are low in naturally infected people. This evidence suggests that malaria may be an attractive target for a new generation of CTL-inducing vaccines.
The next candidate vaccine that caught my attention was one which I read about in Vaccine vol. 12, 1994. This was a study of the safety, immunogenicity and limited efficacy of a recombinant Plasmodium falciparum circumsporozoite vaccine. The study was carried out in the early nineties using healthy male Thai rangers between the ages of 18 and 45. The vaccine, named R32 Tox-A, was produced by the Walter Reed Army Institute of Research, SmithKline Pharmaceuticals and the Swiss Serum and Vaccine Institute working together. R32 Tox-A consisted of the recombinantly produced protein R32LR, amino acid sequence [(NANP)15 (NVDP)]2 LR, chemically conjugated to the detoxified Toxin A of Pseudomonas aeruginosa. Each 0.4 ml dose of R32 Tox-A contained 320 mg of the R32LR-Toxin A conjugate (molar ratio 6.6:1), adsorbed to aluminium hydroxide (0.4% w/v), with merthiolate (0.01%) as a preservative.
The Thai test was based on the fact that specific humoral immune responses to sporozoites are stimulated by natural infection and are directed predominantly against the central repeat region of the major surface molecule, the circumsporozoite (CS) protein. Monoclonal CS antibodies given prior to sporozoite challenge have achieved passive protection in animals. Immunisation with irradiated sporozoites has produced protection associated with the development of high levels of polyclonal CS antibodies, which have been shown to inhibit sporozoite invasion of human hepatoma cells. Despite such encouraging animal and in vitro data, evidence linking protective immunity in humans to levels of CS antibody elicited by natural infection has been inconclusive, possibly because of the short serum half-life of the antibodies.
This study involved 199 Thai soldier volunteers. X percent of these were vaccinated using R32 Tox-A, prepared in the way previously mentioned, to evaluate its safety, immunogenicity and efficacy. The trial was conducted in a double-blind manner: all of the 199 volunteers received either R32 Tox-A or a control vaccine (tetanus/diphtheria toxoids, 10 and 1 Lf units respectively) at 0, 8 and 16 weeks. Immunisation was performed in a malaria non-transmission area, after completion of which volunteers were deployed to an endemic border area and monitored closely to allow early detection and treatment of infection. The vaccine was found to be safe and to elicit an antibody response in all vaccinees. Peak CS antibody (IgG) concentrations in malaria-experienced vaccinees exceeded those in malaria-naïve vaccinees (mean 40.6 versus 16.1 mg ml-1; p = 0.005), as well as those induced by previous CS protein derived vaccines and those observed in association with natural infections. A log rank comparison of time to falciparum malaria revealed no differences between vaccinated and non-vaccinated subjects. Secondary analyses revealed that CS antibody levels were lower in vaccinee malaria cases than in non-cases, 3 and 5 months after the third dose of vaccine. Because antibody levels had fallen substantially before peak malaria transmission occurred, the question of whether or not high levels of CS antibody are protective remains open. So at the end we are once again left without conclusive evidence, but we are now even closer to creating the sought-after malaria vaccine.
Finally we reach the last and by far the most promising, prevalent and controversial candidate vaccine. I found it mentioned continually throughout several scientific magazines; "Science" (Jan 95) and "Vaccine" (95) were two which carried unbiased reviews, and so the following information is taken from these. The vaccine to which I am referring is the SPf66 vaccine. This vaccine has caused much controversy and raised certain dilemmas. It was invented by a Colombian physician and chemist called Manuel Elkin Patarroyo and it is the first of its kind. His vaccine could prove to be one of the few effective weapons against malaria, but it has run into a lot of criticism and has split the malaria research community. Some see it as an effective vaccine that has proven itself in various tests, whereas others view it as of marginal significance and say more study needs to be done before a decision can be reached on its widespread use.
Recent trials have shown some promise. One trial carried out by Patarroyo and his group in Colombia during 1990 and 1991 showed that the vaccine cut malaria episodes by over 39% and first episodes by 34%. Another trial, completed in 1994 on Tanzanian children, showed that it cut the incidence of first episodes by 31%. It is these results that have caused the rift within research areas.
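For context on what a figure like "cut first episodes by 31%" means, trial efficacy of this kind is normally reported as the relative reduction in incidence between vaccinated and control groups. The short sketch below only illustrates that calculation; the attack rates in it are invented for the example and are not data from the Colombian or Tanzanian SPf66 trials.

def protective_efficacy(vaccinated_rate, control_rate):
    # efficacy = 1 - (incidence among the vaccinated / incidence among controls)
    return 1.0 - vaccinated_rate / control_rate

# e.g. 69 first episodes per 1000 vaccinees versus 100 per 1000 controls:
print(round(protective_efficacy(0.069, 0.100), 2))   # 0.31, i.e. a 31% reduction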
Over the past 20 years, vaccine researchers have concentrated mainly on the early stages of the parasite after it enters the body, in an attempt to block infection at the outset (as mentioned earlier). Patarroyo, however, took a more complex approach. He spent his time designing a vaccine against the more complex blood stage of the parasite - stopping the disease, not the infection. His decision to try to create synthetic peptides raised much interest. At the time, peptides were thought capable of stimulating only one part of the immune system, the antibody-producing B cells, whereas the prevailing wisdom held that T cells were also required in order to achieve protective immunity.
Sceptics also pounced on the elaborate and painstaking process of elimination Patarroyo used to find the right peptides. He took 22 "immunologically interesting" proteins from the malaria parasite, which he identified using antibodies from people immune to malaria, injected these antigens into monkeys, and eventually found four that provided some immunity to malaria. He then sequenced these four antigens and reconstructed dozens of short fragments of them. Again using monkeys (more than a thousand) he tested these peptides individually and in combination until he hit on what he considered to be the jackpot vaccine. But the WHO considers a 31% protection rate to be in the grey area, and so there is still no decision on the vaccine's use.
In conclusion, it is obvious that malaria is proving a difficult disease for which to establish an effective and cheap vaccine, in that some tests are inconclusive and others, while they seem to work, do not reach a high enough standard. Having said that, I hope that a viable vaccine will present itself in the near future (with a little help from the scientific world, of course).
f:\12000 essays\sciences (985)\Biology\Cystic Fibrosis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CYSTIC FIBROSIS
One out of every 2,500 births in the United States will be diagnosed with cystic fibrosis. This makes cystic fibrosis one of the most common genetic diseases in the nation. About 30,000 Americans have the disease, but even though cystic fibrosis is the nation's most common genetic disease, the majority of Americans know little about it. Cystic fibrosis is relatively common in Caucasian people but rare in African-Americans, and the disease is very uncommon in people of Asian descent. Five percent of the population in the United States are carriers of the defective gene.
Cystic fibrosis, sometimes classified as mucoviscidosis, is a disorder in which the exocrine glands secrete abnormally thick mucus. This leads to obstruction of the pancreas and chronic infections of the lungs, which generally cause death in childhood or early adulthood. Some mildly affected patients may survive longer. Patients with pancreatic insufficiency take pancreatic enzymes with meals. Those with respiratory infections are treated with antibiotics, along with aerosols that relieve constriction of the airways. Physical therapy is used to help patients cough up the obstructing mucus. Intestinal obstruction, which occurs mostly in infancy, may require surgery.
In 1989, researchers found the abnormal gene that causes cystic fibrosis. This gene is located on chromosome 7. A person who has two cystic fibrosis genes has the disease. A person who carries only one of the genes does not have the disease, but is a carrier.
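Since the paragraph above describes classic autosomal recessive inheritance, a small worked calculation may help. The sketch below is only an illustration of the standard Mendelian and Hardy-Weinberg arithmetic; the rounding and the comparison with the essay's own figures are mine, not something taken from its sources.

# Two carrier parents: each child gets one gene from each parent, so the
# chance of inheriting two cystic fibrosis genes (and the disease) is 1/4.
p_affected_child = 0.5 * 0.5
print(p_affected_child)              # 0.25

# Hardy-Weinberg check of the population figures quoted in the essay:
q = (1 / 2500) ** 0.5                # gene frequency if 1 in 2,500 newborns is affected
carrier_fraction = 2 * q * (1 - q)   # expected frequency of carriers
print(round(carrier_fraction, 3))    # about 0.039, i.e. roughly 4 percent,
                                     # in the neighbourhood of the quoted 5 percent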
The symptoms of cystic fibrosis sometimes occur immediately after birth. Mucus secretions may appear in the baby's intestines, which can cause intestinal obstruction. In all cases, the child will gain little weight from birth, because the pancreas is not producing enzymes and few nutrients are absorbed. A child with cystic fibrosis may have recurring respiratory infections, along with cough and fever, which may be more severe and persistent than normal; this is a result of the thick, sticky mucus that holds and traps germs in the bronchial tubes. It should be taken into consideration that children with cystic fibrosis have large appetites and eat a great deal. In spite of their malnutrition, they are not in pain and do not generally feel ill.
Extracts of animal pancreas, in powder or granule form, are prescribed to replace the missing pancreatic enzymes, and the amount of fat in the child's diet is decreased. With this treatment the child begins to gain weight. To keep the lungs as free of mucus as possible, patients may need daily respiratory physical therapy. Any respiratory infections that arise are treated with large amounts of antibiotics.
Cystic fibrosis cannot yet be cured, although the identification of the gene on chromosome 7 has paved the way for gene therapy. Antibiotics and enzymes are not the only treatments for cystic fibrosis. One relatively new treatment is a biotech drug that thins the mucus, which helps the lungs function better and reduces the risk of infections. Gene therapy is still in experimental stages.
f:\12000 essays\sciences (985)\Biology\Design of a Psychological Experiment.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Psyc 100 (0110-0129)
Fall, 1996
Dr. Sternheim
Report #1 (10 points)
Design of a Psychological Experiment
Problem: Suppose you are a psychologist who is interested in the effects of caffeine on the eye-hand coordination of students enrolled at UMCP. Design an experiment to test the hypothesis that caffeine enhances a student's ability to hit a baseball. Describe your experiment by answering the following questions:
1) What are the independent and dependent variables?
The independent variable would be the caffeine. The results of the students' hitting of the baseball would be the dependent variable.
2) What are the experimental conditions and what are the tasks for the experimenter, the participants in
your experiment, and any other people you might ask to help?
The experimental conditions would be the same for all participants, probably in an indoor stadium so the weather won't affect the students. The task for the experimenter would be to make sure to have a control group, to have a wide variety and different types of participants, to make sure all participants use the same equipment, and have controlled amounts of caffeine. The tasks for the participants would be to carefully follow the instructions of the experimenter, that is to hit the baseball.
3) Will you treat all the participants in the same way?
No, I would not treat all the participants in the same way. The control group would not be given caffeine. However, I would treat all experimental groups the same because that will give more accurate results. If the participants were not treated the same I would not be able to accurately measure how much or how little the caffeine affected the students.
4) How will you select the participants of your study so that they are representative of the students enrolled at UMCP?
I would randomly choose participants of different ethnic groups, ages, weights, and sexes.
5) What factors must be controlled when using the experimental method in this manner?
The factors that must be controlled in this experiment would be the amount of caffeine consumed, the equipment used (must have same bat and baseball), and the environment in which they will perform their assigned task. The environment should be indoors so that weather will not affect the results.
6) Suppose your experiment provided evidence that caffeine enhances eye-hand coordination. Would it be reasonable to expect, based on your results, that a pilot would be better able to land an airplane if given caffeine?
No, since landing a plane and hitting a baseball are two very different skills. Landing a plane requires more skill, and the side effects of caffeine, which are not evident in the above experiment, might show up in a pilot. Caffeine may cause some people to become nervous and shake, and that would not help a pilot land a plane. The only way to find out would be to set up an experiment on the effects of caffeine on pilots landing planes.
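As a sketch of how the data from such a design might be summarised, the fragment below compares hitting performance in the caffeine and control groups. It is purely illustrative: the scores, group sizes and the simple mean-and-spread comparison are my own assumptions and are not part of the assignment.

import statistics

# Hypothetical hits out of 20 pitches for each participant.
caffeine_group = [12, 14, 11, 15, 13, 12, 14, 10]
control_group = [10, 11, 9, 12, 10, 11, 9, 10]

# Independent variable: caffeine given or not; dependent variable: hits out of 20.
for name, scores in (("caffeine", caffeine_group), ("control", control_group)):
    print(name, "mean =", round(statistics.mean(scores), 2),
          "sd =", round(statistics.stdev(scores), 2))

# A higher mean in the caffeine group would be consistent with the hypothesis,
# though a real analysis would also need a significance test.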
f:\12000 essays\sciences (985)\Biology\Detection of biological molecules.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DETECTION OF BIOLOGICAL MOLECULES
Introduction: Without carbon, nitrogen, hydrogen, sulfur, oxygen and phosphorus, life wouldn't exist. These are the most abundant elements in living organisms. These elements are held together by covalent bonds, ionic bonds, hydrogen bonds, and disulfide bonds. Covalent bonds are especially strong and are therefore the bonds present in monomers, the building blocks of life. These monomers combine to make polymers, which are long chains of monomers strung together. Biological molecules can be distinguished by their functional groups. For example, an amino group is present in amino acids, and a carboxyl group can always be found in fatty acids. The groups can be separated into two further categories: the polar, hydrophilic groups and the nonpolar, hydrophobic groups. A fatty acid is nonpolar, hence it doesn't mix with water. Molecules of a certain class have similar chemical properties because they have the same functional groups. A chemical test that is sensitive to these groups can be used to identify molecules that are in that class. This lab is broken down into four different sections: the Benedict's test for reducing sugars, the iodine test for the presence of starch, the Sudan III test for fatty acids, and the Biuret test for amino groups present in proteins. The last part of this lab takes an unknown substance and, by means of the four tests, determines what the substance is.
BENEDICT'S TEST
Introduction: Monosaccharides and some disaccharides can be detected because of their free aldehyde groups, and thus test positive in the Benedict's test. Such sugars act as reducing agents and are called reducing sugars. When the sugar solution is mixed with the Benedict's solution and heated, an oxidation-reduction reaction occurs: the sugar is oxidized, its aldehyde group gaining an oxygen, while the copper in the Benedict's reagent is reduced. If the resulting solution is red-orange, the test is positive; a change to green indicates a smaller amount of reducing sugar; and if it remains blue, the test is negative.
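For reference, the overall change can be summarised by the standard textbook equation below (a simplification added for clarity, not part of the original lab handout): the aldehyde group of the reducing sugar is oxidised to a carboxylate while the blue copper(II) ions are reduced to red-orange copper(I) oxide.

    RCHO + 2 Cu2+ + 5 OH-  ->  RCOO- + Cu2O (red-orange precipitate) + 3 H2O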
Materials:
onion juice 5 test tubes 1 beaker
potato juice ruler hot plate
deionized water permanent marker 5 tongs
glucose solution labels
starch solution 6 barrel pipettes
Benedict's reagent 5 toothpicks
Procedure:
1. Marked 5 test tubes at 1 cm and 3 cm from the bottom. Labeled the test tubes #1-#5.
2. Used 5 different barrel pipettes, added onion juice up to the 1 cm mark of the first
test tube, potato juice to the 1 cm mark of the second, deionized water up to the 1
cm mark of the third, glucose solution to the 1 cm mark of the fourth, and the
starch solution to the 1 cm mark of the fifth test tube.
3. Used the last barrel pipette, added Benedict's Reagent to the 3 cm mark of all 5
test tubes and mixed with a toothpick.
4. Heated all 5 tubes for 3 minutes in a boiling water bath, using a beaker, water, and
a hot plate.
5. Removed the tubes using tongs. Recorded colors on the following table.
6. Cleaned out the 5 test tubes with deionized water.
Data:
Benedict's Test Results
Discussion: From the results, the Benedict's test was successful. Onion juice contains
glucose, and of course, glucose would test positive. Starch doesn't have a free aldehyde
group, and neither does potato juice, which contains starch. Water doesn't have glucose
monomers in it, and was tested to make sure the end result would be negative, a blue
color.
IODINE TEST
Introduction: The iodine test is used to distinguish starch from monosaccharides,
disaccharides, and other polysaccharides. Because of its unique coiled geometric
configuration, starch reacts with iodine to produce a blue-black color and tests positive. A
yellowish-brown color indicates that the test is negative.
Materials:
6 barrel pipettes potato juice starch solution
5 test tubes water iodine solution
onion juice glucose solution 5 toothpicks
Procedure:
1. Used 5 barrel pipettes, filled test tube #1 with onion juice, second with potato
juice, third with water, fourth with glucose solution, and fifth with starch solution.
2. Added 3 drops of iodine solution with a barrel pipette, to each test tube. Mixed
with 5 different toothpicks.
3. Observed reactions and recorded in the table below. Cleaned out the 5 test tubes.
Data:
Iodine Test Results
Discussion: The iodine test was successful. Potato juice and the starch solution were the only two substances containing starch. Again, the glucose solution and onion juice contain glucose, while water doesn't contain starch or glucose and was just tested to make sure the test was done properly.
SUDAN III TEST
Introduction: The Sudan III test detects the hydrocarbon groups remaining in the molecule. Because hydrocarbon groups are nonpolar, they cluster tightly together away from their polar surroundings; this hydrophobic interaction is the basis for the Sudan III test. If the end result is a visible orange stain, the test is positive.
Material:
scissors deionized water margarine Sudan III solution
petri dish starch ethyl alcohol forceps
lead pencil cream 5 barrel pipettes
filter paper cooking oil blow dryer
Procedure:
1. Cut a piece of filter paper so it would fit into a petri dish.
2. Used a lead pencil and marked W for water, S for starch, K for cream, C for
cooking oil and M for margarine. Drew a small circle next to each letter where the
solution would be placed.
3. Dissolved the starch, cream, cooking oil and margarine in ethyl alcohol.
4. Used a barrel pipette for each solution, added a small drop from each solution to
the appropriate circled spot on the filter paper.
5. Allowed the filter paper to dry completely using a blow dryer.
6. Soaked the paper in the Sudan III solution for 3 minutes.
7. Used forceps to remove the paper from the stain.
8. Rinsed the paper in a water bath in the petri dish, changing the water frequently.
9. Examined the intensity of the orange stains at the 5 spots. Recorded in the table below.
10. Completely dried the filter paper, and washed the petri dish.
Data:
Sudan III Test Results
Filter paper:
Discussion: The results indicate that the Sudan III test was successful. Water and starch definitely do not contain any fatty substances, while cream and cooking oil undoubtedly do contain lipids. It was surprising that the margarine spot did not stain, since margarine would be expected to contain fat.
BIURET TEST
Introduction: In a peptide bond of a protein, the bonded amino group is sufficiently
reactive to change the Biuret reagent from blue to purple. This test is based on the
interaction between the copper ions in the Biuret reagent and the amino groups in the
peptide bonds.
Materials:
6 test tubes egg white solution starch solution 6 toothpicks
ruler chicken soup solution gelatin 6 parafilm sheets
permanent marker deionized water sodium hydroxide
labels glucose solution copper sulfate
Procedures:
1. Used 6 test tubes, and labeled them at 3cm and 5cm from the bottom. Labeled
each #1 to #6.
2. Added egg white solution to the 3cm mark of the first tube, chicken soup solution
to the 3-cm mark of the second tube, water to the 3 cm mark of the third test tube,
glucose solution to the fourth, starch to the fifth, and gelatin to the sixth, all at the
3 cm mark.
3. Added sodium hydroxide to the 5 cm mark of each tube and mixed with 6 different
toothpicks.
4. Added 5 drops of Biuret test reagent, 1% copper sulfate, to each tube and mixed
by placing a parafilm sheet over the test tube opening and shaking vigorously.
5. Held the test tubes against a white piece of paper, and recorded the colors and
results. Discarded the chemicals, and washed the test tubes.
Data:
Biuret Test Results
Discussion: The Biuret test seemed to have been successful. Glucose and starch are both carbohydrates and contain no protein, and water has no proteins either. Egg white definitely has proteins, and so does gelatin. Chicken soup had a hint of protein content.
Unknown Chemical # 143
Introduction: By performing the Benedict's Test, the Iodine Test, the Sudan III Test,
and the Biuret Test, chemical #143 should be identified.
Materials:
materials from the Benedict's Test materials from the Sudan III Test
Materials from the Iodine Test materials from the Biuret Test
Procedures:
1. Performed the Benedict's Test, and recorded results.
2. Performed the Iodine Test, and recorded results.
3. Performed the Sudan III Test, and recorded results.
4. Performed the Biuret Test, and recorded results.
Data:
Properties of Chemical #143
Chemical #143 was a white, powdery substance.
Conclusion: After ruling out the obviously wrong substances from the list, like ground coffee, egg white and yolk, table sugar and salt, and syrup and honey, the small amount of protein was taken into account. That also eliminated powdered skim milk and soy flour. The low or absent fat content ruled out more choices, such as enriched flour. The only choices left were corn starch, glucose, and potato starch. Because of the low reducing sugar content, glucose can be ruled out as well.
The starch content of substance #143 was very high. The protein content was around the 10% range, so potato starch would be a better guess than corn starch. Corn starch contains only a trace of fat whereas potato starch contains 0.8%, but 0.8% is very insignificant. The most educated guess is that chemical #143 is potato starch.
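The process of elimination described above can be summarised as a simple decision rule over the four test results. The sketch below is only an illustration of that logic; the function, the True/False encoding of the results and the candidate names are my own framing rather than part of the lab handout.

def identify(reducing_sugar, starch, lipid, protein):
    # Each argument is True for a positive test and False for a negative one.
    if starch and not reducing_sugar and not lipid:
        # High starch, little reducing sugar, little fat: a purified starch.
        return "corn starch or potato starch"
    if reducing_sugar and not starch:
        return "glucose (or another simple sugar)"
    if protein and lipid:
        return "a protein- and fat-rich food such as powdered milk"
    return "unidentified"

# Chemical #143: iodine test positive, the other three essentially negative.
print(identify(reducing_sugar=False, starch=True, lipid=False, protein=False))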
f:\12000 essays\sciences (985)\Biology\Development of the Human Zygote.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
November 16, 1995
Hundreds of thousands of times a year a single-celled zygote, smaller than a grain of sand, transforms into an amazingly complex network of cells, a newborn infant. Through cellular differentiation and growth, this process is completed with precision time and time again, but very rarely a mistake in the "blueprint" of growth and development does occur. Following is a description of how the pathways of this intricate web are followed and the mistakes which happen when they are not.
The impressive process of differentiation changes a single cell into a complicated system of cells as distinct as blood and bone. Although embryonic development takes approximately nine months, the greatest amount of cellular differentiation takes place during the first eight weeks of pregnancy. This period is called embryogenesis.
During the first week after fertilization, which takes place in the Fallopian tube, the embryo starts to cleave once every twenty-four hours (Fig. 1). Until the eight- or sixteen-cell stage, the individual cells, or blastomeres, are thought to have the potential to form any part of the fetus (Leese, Conaghan, Martin, and Hardy, April 1993). As the blastomeres continue to divide, a solid ball of cells develops to form the morula (Fig. 1). The accumulation of fluid inside the morula transforms it into a hollow sphere called a blastula, which implants itself into the inner lining of the uterus, the endometrium (Fig. 1). The inner mass of the blastula will produce the embryo, while the outer layer of cells will form the trophoblast, which eventually will provide nourishment to the ovum (Pritchard, MacDonald, and Gant, 1985).
Figure 1:Implantation process and development during
embryogenesis (Pritchard, MacDonald and
Gant, 1985)
During the second week of development, gastrulation, the process by which the germ layers are formed, begins to occur. The inner cell mass, now called the embryonic disc, differentiates into a thick plate of ectoderm and an underlying layer of endoderm. This cellular multiplication in the embryonic disc marks the beginning of a thickening in the midline that is called the primitive streak. Cells spread out laterally from the primitive streak between the ectoderm and the endoderm to form the mesoderm. These three germ layers, which are the origins of many structures as shown in Table 1, begin to develop.
Table 1: Normal Germ Layer Origin of Structures in Some or all Vertebrates (Harrison, 1969)
Ectoderm: skin epidermis; hair; feathers; scales; beaks; nails; claws; sebaceous, sweat, and mammary glands; oral and anal lining; tooth enamel; nasal epithelium; lens of the eye; inner ear; brain; spinal cord; retina and other eye parts; nerve cells and ganglia; pigment cells; canal of the external ear; medulla of the adrenal gland; pituitary gland.
Mesoderm: dermis of the skin; connective tissue; muscles; skeletal components; outer coverings of the eye; cardiovascular system (heart, blood cells, blood vessels); kidneys and excretory ducts; gonads and reproductive ducts; cortex of the adrenal gland; spleen; lining of coelomic cavities; mesenteries.
Endoderm: liver; gall bladder; pancreas; thyroid gland; thymus gland; parathyroid glands; palatine tonsils; middle ear; Eustachian tube; urinary bladder; primordial germ cells; lining of all organs of the digestive tract and respiratory tract.
During the third week of development, the cephalic (head) and caudal (tail) ends of the embryo become distinguishable. Most of the substance of the early embryo will enter into the formation of the head. Blood vessels begin to develop in the mesoderm and a primitive heart may also be observed (Harrison, 1969). Cells rapidly spread away from the primitive streak to eventually form the neural groove, which will later close to form the neural tube. When the neural folds develop on either side of the groove, the underlying mesoderm forms segmentally arranged blocks called somites. These give rise to the dermis of the skin, most skeletal muscles, and precursors of vertebral bodies. The otocyst, which later becomes the inner ear, and the lens placodes, which later form the lenses of the adult eyes, are derived from the ectoderm.
The start of cardiovascular functioning is apparent during the fourth week. The heart shows early signs of different chambers and begins to pump blood through the embryo, which by this time has well-developed kidneys, thyroid gland, stomach, pancreas, lungs, esophagus, gall bladder, larynx, and trachea (Carlson, 1981).
Several new structures are observed, organs continue developing, and some previously formed structures reorganize during the fifth week of embryogenesis. The cranial and spinal nerves begin to form and the cerebral hemispheres and the cerebellum are visible. The spleen, parathyroid glands, thymus gland, retina, and gonads, all new structures, also begin to form. The gastrointestinal tract undergoes considerable development as the middle part of the primitive intestine becomes a loop larger than the abdominal cavity; it must then project into the umbilical cord until there is room for the entire bowel. Finally, the heart develops its walls, the atrial and ventricular septa, and the atrioventricular cushions. These cushions thicken the junction of the atrium and ventricle, while the atrial and ventricular septa divide their respective chambers into right and left halves (Harrison, 1969).
The sixth week is characterized by the completion of most organ formation. The embryo has a more identifiable human face with basic structure of the eyes and ears now developed. Hard and soft palates appear, the salivary glands begin to form, and there is an early differentiation of the cells that later develop into the teeth. Division of the heart is essentially completed and the valves begin to form. The primitive intestinal tract is divided into the anterior and posterior chambers that will later develop into the urinary bladder and the rectum, respectively. At the end of the week, the gonads are histologically recognizable as either testes or ovaries (Pritchard, MacDonald, and Gant, 1985).
The embryo looks similar to a miniature human when it enters the seventh week of embryogenesis. During this last week, the pituitary gland takes a definitive structure, the eyelids become visible, the last group of muscles begins to form, and bone marrow appears for the first time. The main concerns of this period are the different developments taking place in the male and female. This is first shown as the Müllerian ducts degenerate in males but continue to develop in females, where they will later differentiate to become the Fallopian tubes, the uterus and the inner part of the vagina. The Wolffian ducts degenerate in female embryos, but continue to develop into the ductus deferens in the male. Although the external genitalia continue to grow and develop, they still cannot be visibly identified as male or female. By the end of this week the placenta begins to take on definite characteristics, and for the first time blood from the maternal circulation enters the placental circulation (Carlson, 1981).
After this period of embryogenesis the embryo is given the name fetus. The remainder of pregnancy is primarily concerned with growth and cellular differentiation, but during this period of growth, mistakes which can cause birth defects can still occur, just as they could in the first seven weeks of development. What are some of these defects which begin during the first trimester of pregnancy, and how are they caused?
Obviously the process of a developing embryo and fetus is very complicated, and although most of the babies born each year are free from any abnormalities, up to five percent of all newborn infants have congenital anomalies, or birth defects (Cunningham, MacDonald, and Gant, July/August 1989). Seventy percent of birth defects are unknown, spontaneous errors of development. Of the thirty percent whose causes are known, twenty-five percent are associated with genetic factors that include major chromosomal defects and point mutations, three percent with infections such as syphilis and rubella, and two percent with teratogens, medications and drugs (Cunningham, MacDonald, and Gant, Feb./March 1991).
Spontaneous errors in development, whose causes are unknown, can happen in the central nervous system, face, gut, genitourinary system, and heart, as shown in Table 2. The time during pregnancy at which these may occur is also shown in Table 2 and ranges from twenty-three days to twelve weeks, all of which fall into the first trimester. How these anomalies are triggered is unknown. Neural tube defects, whose causes are also unknown, are some of the most common defects and result in infant mortality or serious disability. These abnormalities include anencephaly, a malformation characterized by cerebral hemispheres that are absent, and spina bifida, an exposed, ruptured spine (Medicine, March 1993).
TABLE 2. Relative timing and development of pathology of certain birth defects (Adapted from Cunningham, MacDonald and Gant, February/ March 1991).
Area of defect: developmental event, time limit
Central nervous system: closure of anterior neural tube, 26 days; closure of a portion of the posterior neural tube, 28 days.
Face: closure of lip, 36 days; fusion of maxillary palatal shelves, 10 weeks; resolution of branchial cleft, 8 weeks.
Gut: lateral septation of foregut into trachea, 30 days; lateral septation of cloaca into rectum and urogenital sinus, 6 weeks; recanalization of duodenum, 7 to 8 weeks; rotation of intestinal loop, 10 weeks; return of midgut from yolk sac to abdomen, 10 weeks; obliteration of vitelline duct, 10 weeks; closure of pleuroperitoneal canal, 6 weeks.
Genitourinary system: migration of infraumbilical mesenchyme, 30 days; fusion of lower portion of Müllerian ducts, 10 weeks; fusion of urethral folds (labia minora), 12 weeks.
Heart: directional development of bulbus cordis septum, 34 days; ventricular septum closure, 6 weeks.
Limb: genesis of radial bone, 38 days; separation of digital rays, 6 weeks.
Complex: prechordal mesoderm development, 23 days; development of posterior axis, 23 days.
On the other hand, the effects and consequences of teratogens are known. "A teratogen is any agent, such as a medication or other systemically absorbed chemical, or factor like hyperthermia, that produces permanent abnormal embryonic physical development or physiology" (Cunningham, MacDonald, and Gant, Feb./March 1991). The embryonic period is most critical with respect to malformations because it encompasses organogenesis. Drugs and chemicals such as alcohol and organic mercury can cause mental retardation, while infections such as varicella, the chicken pox, can cause limb defects, neurologic anomalies, and skin scars (Baker, April 1990). A more complete list of drugs, chemicals and infections, and their effects, is given in Table 3. These types of birth defects are unique because abnormalities due to drug and chemical exposure are potentially preventable (Cunningham, MacDonald, and Gant, Feb./March 1991).
TABLE 3. Effects and comments of documented teratogens (ACOG Technical Bulletin, Feb. 1985)
Drugs and Chemicals
Alcohol. Effects: growth retardation, mental retardation, various major and minor malformations. Comments: risk due to ingestion of one or two drinks per day (1-2 oz) may cause a small reduction in average birth weight.
Androgens. Effects: hermaphroditism in female offspring, advanced genital development in males. Comments: effects are dose dependent and related to stage of embryonic development; depending on time of exposure, clitoral enlargement or labioscrotal fusion can be produced.
Anticoagulants. Effects: hypoplastic nose, bony abnormalities, broad short hands with shortened phalanges, intrauterine growth retardation, deformations of neck, central nervous system defects. Comments: risk for a seriously affected child is considered to be 25% when anticoagulants that inhibit vitamin K are used in the first trimester.
Antithyroid drugs. Effects: fetal goiter. Comments: goiter in the fetus may lead to malpresentation with hyperextended head.
Diethylstilbestrol (DES). Effects: vaginal adenosis, abnormalities of cervix and uterus in females, possible infertility in males and females. Comments: vaginal adenosis is detected in over 50% of women whose mothers took these drugs before the ninth week of pregnancy.
Lead. Effects: increased abortion rate and stillbirths. Comments: central nervous system development of the fetus may be adversely affected.
Lithium. Effects: congenital heart disease. Comments: heart malformations due to first trimester exposure occur in approximately 2%.
Organic mercury. Effects: mental retardation, spasticity, seizures, blindness. Comments: exposed individuals include consumers of contaminated grain and fish; contamination is usually with methyl mercury.
Isotretinoin (Accutane). Effects: increased abortion rate, nervous system defects, cardiovascular effects, craniofacial dysmorphism, cleft palate. Comments: first trimester exposure may result in approximately a 25% anomaly rate.
Thalidomide. Effects: bilateral limb deficiencies (days 27-40), anotia and microtia (days 21-27), other abnormalities. Comments: of children whose mothers used thalidomide, 20% show the effect.
Trimethadione. Effects: cleft lip or cleft palate, cardiac defects, growth retardation, mental retardation. Comments: risk for defects or spontaneous abortion is 60-80% with first trimester exposure.
Valproic acid. Effects: neural tube defects. Comments: exposure must be prior to normal closure of the neural tube during the first trimester to produce an open defect.
Infections
Rubella. Effects: cataracts, deafness, heart lesions, plus expanded syndrome including effects on all organs. Comments: malformation rate is 50% if the mother is infected during the first trimester.
Varicella. Effects: possible effects on all organs, including skin scarring and muscle atrophy. Comments: zoster immune globulin is available for newborns exposed during the last few days of gestation.
Chromosomal abnormalities, the leading cause of birth defects, develop during meiotic division in the gonad, the organ which produces sex cells. A chromosome may drop out of the dividing cell and thus be lost. Fertilization of this type of gamete results in a zygote with a missing chromosome. If the gamete fails to split equally at meiotic division and the cell with the extra chromosome is fertilized, the zygote becomes trisomic (Pritchard, MacDonald, and Gant, 1985). Down Syndrome, the most common chromosomal defect, results from an extra chromosome (trisomy 21). Less common is chromosomal translocation defect. Translocation is the transfer of a segment of one chromosome to a different site on the same chromosome or to a different chromosome (Pritchard, MacDonald, and Gant, 1985). Many other syndromes, their chromosomal complement, and signs of these syndromes which are recognizable at birth are shown in Table 4.
TABLE 4. Findings in established chromosomal abnormalities in man (Pritchard, MacDonald, and Gant, 1985)
Syndrome Chromosomal Complement Signs Recognizable at Birth
Turners 45 / X Lymphangiectatic edema of hands and feet
Klinefelters 47 / XXY None
Triple X 47 / XXX None
XYY 47 / XYY None
Downs trisomy 21 47 Mongoloid facies, Simian line
Translocation 46 Same
Trisomy 13 - 15 47 Cleft palate, Harelip, Eye defects, Polydactyly
Trisomy 16 - 18 47 Finger flexion, Low-set ears, Digital arches
Cat cry 46 (Deletion B 5) Cat cry, Moon face
During the first trimester of pregnancy, an embryo must correctly make its way through a complex matrix of differentiation and development to become a normal infant. When something does go wrong, the embryo or fetus will unfortunately have some type of defect. The amazing accuracy with which a single cell can become something as complex as a newborn infant is a truly incredible feat!
Works Cited
Baker, David A. "Danger of Varicella-Zoster Virus Infection." Contemporary OB/GYN April 1990: 52.
Carlson, Bruce M. Patten's Foundations of Embryology. McGraw-Hill Inc. 1981.
Cunningham, MacDonald, and Gant. Williams Obstetrics, Supplement no. 10. 18th ed, Prentice-Hall, Inc. February/March 1991: 2,3.
"Folic Acid for the Prevetion of Recurrent Neural Tube Defect." Medicine March 1993.
Harrison, Ross G. Organization and Development of the Embryo. Yale University Press. 1969.
Leese, Conaghan, Martin, and Hardy. "Early Human Embryo Metabolism." Bio Essays vol. 15, No. 4 April 1993: 259.
Pritchard, MacDonald, and Gant. Williams Obstetrics. 17th ed, Prentice-Hall, Inc. 1985: 139-142, 800.
Pritchard, MacDonald, and Gant. Williams Obstetrics, Supplement no. 13. 17th ed, Prentice-Hall, Inc. July/August 1987: 2.
"Teratology." ACOG Technical Bulletin February 1985.
f:\12000 essays\sciences (985)\Biology\Diabetes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Diabetes
Diabetes is a disease in which your body is unable to properly use and store glucose. Glucose backs up in the bloodstream causing your blood glucose or "sugar" to rise too high.
There are two major types of diabetes, Type I and Type II. In Type I diabetes, your body completely stops producing any insulin, a hormone that lets your body use glucose found in foods for energy. People with Type I diabetes must take daily insulin injections to survive. This form of diabetes usually develops in children or young adults, but can happen at any age. In Type II diabetes, the body produces insulin, but not enough to properly convert food into energy. This form of diabetes usually occurs in people who are over 40, overweight, and have a family history of diabetes.
People with diabetes often experience symptoms. Some of the symptoms are:
1)being very thirsty
2)having to go to the bathroom very frequently
3)weight loss
4)increased hunger
5)blurry vision
6)skin infections
7)wounds that don't heal
8)and/or extreme unexplained fatigue
In some cases there are no symptoms; this happens at times with Type II diabetes. In this case, people can live for months, even years, without knowing they have the disease. This form of diabetes comes on so gradually that symptoms might not even be recognized.
Diabetes can occur in anyone. However, people who have close relatives with the disease are somewhat more likely to develop it. The risk of getting diabetes also increases as people grow older. People who are over 40 and overweight are more likely to get diabetes. So are people of African-American, Hispanic or Asian heritage. Also, people who develop diabetes while pregnant are more likely to develop full-blown diabetes later in life.
There are certain things that everyone who has diabetes, whether Type I or Type II, needs to do to be healthy. You need to have an eating plan. You need to pay attention to how much you exercise, because exercise can help your body use insulin better to convert glucose into energy for cells. Everyone with Type I diabetes, and some people with Type II diabetes, also need to take insulin injections. Some people with Type II diabetes take pills called "oral agents" which help their bodies produce more insulin and/or use the insulin it is producing better. Some people with Type II diabetes can control their disease with weight loss, diet and exercise alone and don't need any medication.
Everyone who has diabetes should be seen at least once every six months by a diabetes specialist. You should also be seen periodically by other members of a diabetes treatment team, including a diabetes nurse educator, and a diabetes dietitian educator who helps you develop a meal plan that works best for you. Ideally , you should also see an exercise physiologist for help in developing an exercise plan, and if you think you need it, a social worker, psychologist or other mental health professional for help with the stresses and challenges of living with a chronic disease. Everyone who has diabetes should have regular eye exams at least once a year by an ophthalmologist to make sure that any eye problems associated with diabetes are caught early, and treated before they become serious.
Also, people with diabetes need to learn how to monitor their blood sugars day-to-day at home using home blood sugar monitoring. This daily testing, which your diabetes educator can explain to you, will help you see how well your meal plan, exercise, and medication are working to keep your blood sugars in a normal range.
Your health care team will encourage you to follow your meal plan and exercise program, use your medications and monitor your blood sugars regularly to keep your blood sugars in as normal a range as possible as much of the time as possible. Why is this so important? Because poorly managed diabetes can lead to a host of long-term complications; among them are heart attacks, strokes, blindness, kidney failure, blood vessel disease that requires amputation, nerve damage, and impotence in men.
But happily, a recent nationwide study completed over a 10-year period showed that if people keep their blood sugars as close to normal as possible, they can reduce their risk of developing some of these complications by 50 percent or more.
A study being conducted at Joslin Diabetes Center and several other sites nationwide is screening the immediate relatives of someone with Type I diabetes because we can now identify those who will develop this form of the disease as much as five or more years in advance.
Type II diabetes is the most common type of diabetes, yet we still do not understand it very well. But recent research does suggest that there are some things you can do to prevent this form of diabetes, particularly if it runs in your family, or if you have had gestational diabetes, or if you are a member of an ethnic group that is more prone to this disease.
In simplest terms, to prevent or slow the development of Type II diabetes you should try to maintain your weight in as normal a range as possible. If you are overweight, lose weight. And, try to develop a regular exercise program, as the exercise will help your body use insulin more effectively.
f:\12000 essays\sciences (985)\Biology\Differential Diagnosis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Differential Diagnosis
Today the full-blown case of TS is unlikely to be confused with any other disorder.
However, only a decade ago TS was frequently misdiagnosed as schizophrenia,
obsessive-compulsive disorder, Sydenham's chorea, epilepsy, or nervous habits. The
differentiation of TS from other tic syndromes may be no more than semantic, especially
since recent genetic evidence links TS with multiple tics. Transient tics of childhood are
best defined in retrospect. At times it may be difficult to distinguish children with
extreme attention deficit hyperactivity disorder (ADHD) from TS. Many ADHD children, on
close examination, have a few phonic or motor tics, grimace, or produce noises similar to
those of TS. Since at least half of the TS patients also have attention deficits and
hyperactivity as children, a physician may well be confused. However, the treating doctor
should be aware of the potential dangers of treating a possible case of TS with stimulant
medication. On rare occasions the differentiation between TS and a seizure disorder may be
problematic. The symptoms of TS sometimes occur in a rather sharply separated paroxysmal
manner and may resemble automatisms. TS patients, however, retain a clear consciousness
during such paroxysms. If the diagnosis is in doubt, an EEG may be useful. We have seen TS
in association with a number of developmental and other neurological disorders. It is
possible that central nervous system injury from trauma or disease may cause a child to be
vulnerable to the expression of the disorder, particularly if there is a genetic
predisposition. Autistic and retarded children may display the entire gamut of TS symptoms,
but whether an autistic or retarded individual requires the additional diagnosis of TS may
remain an open question until there is a biological or other diagnostic test specifically
for TS. In older patients, conditions such as Wilson's disease, tardive dyskinesia, Meige's
syndrome, chronic amphetamine abuse, and the stereotypic movements of schizophrenia must be
considered in the differential diagnosis. The distinction can usually be made by taking a
good history or by blood tests. Since more physicians are now aware of TS, there is a
growing danger of overdiagnosis or over-treatment. Prevailing diagnostic criteria would
require that all children with suppressible multiple motor and phonic tics, however
minimal, of at least one year, should be diagnosed as having TS. It is up to the clinician
to consider the effect that the symptoms have on the patient's ability to function as well
as the severity of associated symptoms before deciding to treat with medication.
TABLE 1. RANGE OF SYMPTOMS OF TS
Motor
Simple motor tics: fast, darting, and meaningless.
Complex motor tics: slower, may appear purposeful
Vocal
Simple vocal tics: meaningless sounds and noises.
Complex vocal tics: linguistically meaningful utterances such as words and
phrases (including coprolalia, echolalia, and palilalia).
Behavioral and Developmental
Attention deficit hyperactivity disorder, obsessions and compulsions,
emotional problems, irritability, impulsivity, aggressivity, and self-injurious
behaviors; various learning disabilities
Symptomatology
The varied symptoms of TS can be divided into motor, vocal, and behavioral manifestations
(Table 2). Complex motor tics can be virtually any type of movement that the body can
produce including gyrating, hopping, clapping, tensing arm or neck muscles, touching people
or things, and obscene gesturing. At some point in the continuum of complex motor tics, the
term "compulsion" seems appropriate for capturing the organized, ritualistic character of
the actions. The need to do and then redo or undo the same action a certain number of times
(e.g., to stretch out an arm ten times before writing, to even up, or to stand up and push a chair into "just the right position") is compulsive in quality and accompanied by considerable internal discomfort.
Tourette's Disorder
Table of Contents
Tourette Syndrome And Other Tic Disorders
Definitions of Tic Disorders
Differential Diagnosis
Symptomatology
Associated Behaviors and Cognitive Difficulties
Etiology
Stimulant Medications
Epidemiology and Genetics
Non-Genetic Contributions
Clinical Assessment Of Tourette Syndrome
Treatment Of Tourette Syndrome
Monitoring
Reassurance
Pharmacological Treatment of Tourette Syndrome
Psychodynamic Psychotherapy
Family Treatment
Genetic Counseling
Academic and Occupational Interventions
Bibliography
Definitions of Tic Disorders
Tics are involuntary, rapid, repetitive, and stereotyped movements of individual muscle
groups. They are more easily recognized than precisely defined. Disorders involving tics
generally are divided into categories according to age of onset, duration of symptoms, and
the presence of vocal or phonic tics in addition to motor tics. Transient tic disorders
often begin during the early school years and can occur in up to 15% of all children.
Common tics include eye blinking, nose puckering, grimacing, and squinting. Transient
vocalizations are less common and include various throat sounds, humming, or other noises.
Childhood tics may be bizarre, such as licking the palm or poking and pinching the
genitals. Transient tics last only weeks or a few months and usually are not associated
with specific behavioral or school problems. They are especially noticeable with heightened
excitement or fatigue. As with all tic syndromes, boys are three to four times more often
afflicted than girls. While transient tics by definition do not persist for more than a
year, it is not uncommon for a child to have series of transient tics over the course of
several years. Chronic tic disorders are differentiated from those that are transient not
only by their duration over many years, but by their relatively unchanging character. While
transient tics come and go - with sniffing replaced by forehead furrowing or finger
snapping, chronic tics - such as contorting one side of the face or blinking - may persist
unchanged for years. Chronic multiple tics suggest that an individual has several chronic
motor tics. It is often not an easy task to draw the lines between transient tics, chronic
tics, and chronic multiple tics. Tourette Syndrome (TS), first described by Gilles de la
Tourette, can be the most debilitating tic disorder, and is characterized by multiform,
frequently changing motor and phonic tics. The prevailing diagnostic criteria include onset
before the age of 21; recurrent, involuntary, rapid, purposeless motor movements affecting
multiple muscle groups; one or more vocal tics; variations in the intensity of the symptoms
over weeks to months (waxing and waning); and a duration of more than one year. While the
criteria appear basically valid, they are not absolute. First, there have been rare cases
of TS which have emerged later than age 21. Second, the concept of "involuntary" may be
hard to define operationally, since some patients experience their tics as having a
volitional component - a capitulation to an internal urge for motor discharge accompanied
by psychological tension. Complex motor tics may greatly impair school work, e.g., when a child
must stab at a workbook with a pencil or must go over the same letter so many times that
the paper is worn thin. Self-destructive behaviors, such as head banging, eye poking, and
lip biting, also may occur. Vocal tics extend over a similar spectrum of complexity and
disruption as motor tics. The most socially distressing complex vocal symptom is
coprolalia, the explosive utterance of foul or "dirty" words or more elaborate sexual and
aggressive statements. While coprolalia occurs in only a minority of TS patients (from
5-40%, depending on the clinical series), it remains the most well known symptom of TS. It
should be emphasized that a diagnosis of TS does not require that coprolalia is present.
Some TS patients may have a tendency to imitate what they have just seen (echopraxia),
heard (echolalia), or said (palilalia). For example, the patient may feel an impulse to
imitate another's body movements, to speak with an odd inflection, or to accent a syllable
just the way it has been pronounced by another person. Such modeling or repetition may lead
to the onset of new specific symptoms that will wax and wane in the same way as other TS
symptoms.
TABLE 2. EXAMPLES OF MOTOR SYMPTOMS
Simple motor tics
Eye blinking, grimacing, nose twitching, lip pouting, shoulder shrugging, arm jerking,
abdominal tensing, kicking, finger movements, jaw snapping, tooth clicking, frowning,
tensing parts of the body, and rapid jerking of any part of the body.
Complex motor tics
Hopping, clapping, touching objects (or others or self), throwing, arranging, gyrating,
bending, "dystonic" postures, biting the mouth, the lip, or the arm, headbanging, arm
thrusting, striking out, picking scabs, writhing movements, rolling eyes upwards or
side-to-side, making funny expressions, sticking out the tongue, kissing, pinching,
writing over-and-over the same letter or word, pulling back on a pencil while writing,
and tearing paper or books.
Copropraxia
"Giving the finger" and other obscene gestures.
Echopraxia
Imitating gestures or movements of other people.
TABLE 3. EXAMPLES OF VOCAL SYMPTOMS
Simple vocal tics
Coughing, spitting, screeching, barking, grunting, gurgling, clacking, whistling, hissing,
sucking sounds, and syllable sounds such as "uh, uh," "eee," and "bu."
Complex vocal tics
"Oh boy," "you know," "shut up," "you're fat," "all right," and "what's that."
or any other understandable word or phrase
Rituals
Repeating a phrase until it sounds "just right" and saying something over 3 times.
Speech atypicalities
Unusual rhythms, tone, accents, loudness, and very rapid speech.
Coprolalia
Obscene, aggressive, or otherwise socially unacceptable words or phrases.
Palilalia
Repeating one's own words or parts of words.
Echolalia
Repeating sounds, words, or parts of words of others.
The symptoms of TS can be characterized as mild, moderate, or severe by their frequency,
their complexity, and the degree to which they cause impairment or disruption of the
patient's ongoing activities and daily life. For example, extremely frequent tics that occur
20-30 times a minute, such as blinking, nodding, or arm flexion, may be less disruptive
than an infrequent tic that occurs several times an hour, such as loud barking, coprolalic
utterances, or touching tics. There may be tremendous variability over short and long
periods of time in symptomatology, frequency, and severity. Patients may be able to inhibit
or not feel a great need to emit their symptoms while at school or work. When they arrive
home, however, the tics may erupt with violence and remain at a distressing level
throughout the remainder of the day. It is not unusual for patients to "lose" their tics as
they enter the doctor's office. Parents may plead with a child to "show the doctor what you
do at home," only to be told that the youngster "just doesn't feel like doing them" or
"can't do them" on command. Adults will say "I only wish you could see me outside of your
office," and family members will heartily agree. A patient with minimal symptoms may
display his or her more usual, severe tics when the examination is over. Thus, for example, the doctor
often sees a nearly symptom-free patient leave the office who begins to hop, flail, or bark
as soon as the street or even the bathroom is reached. In addition to the moment-to-moment
or short-term changes in symptom intensity, many patients have oscillations in severity
over the course of weeks and months. The waxing and waning of severity may be triggered by
changes in the patient's life; for example, around the time of holidays, children may
develop exacerbations that take weeks to subside. Other patients report that their symptoms
show seasonal fluctuation. However, there are no rigorous data on whether life events,
stresses, or seasons, in fact, do influence the onset or offset of a period of
exacerbation. Once a patient enters a phase of waxing symptomatology, a process seems to be
triggered that will run its course - usually within 1-3 months. In its most severe forms,
patients may have uncountable motor and vocal tics during all their waking hours with
paroxysms of full-body movements, shouting, or self-mutilation. Despite that, many patients
with severe tics achieve adequate social adjustment in adult life, although usually with
considerable emotional pain. The factors that appear to be of importance with regard to
social adaptation include the seriousness of attentional problems, intelligence, the degree
of family acceptance and support, and ego strength more than the severity of motor and
vocal tics. In adolescence and early adulthood, TS patients frequently come to feel that
their social isolation, vocational and academic failure, and painful and disfiguring
symptoms are more than they can bear. At times, a small number may consider and attempt
suicide. Conversely, some patients with the most bizarre and disruptive symptomatology may
achieve excellent social, academic, and vocational adjustments.
Associated Behaviors and Cognitive Difficulties
As well as tics, there are a variety of behavioral and psychological difficulties that are
experienced by many, though not all, patients with TS. Those behavioral features have
placed TS on the border between neurology and psychiatry, and require an understanding of
both disciplines to comprehend the complex problems faced by many TS patients. The most
frequently reported behavioral problems are attentional deficits, obsessions, compulsions,
impulsivity, irritability, aggressivity, immaturity, self-injurious behaviors, and
depression. Some of the behaviors (e.g., obsessive compulsive behavior) may be an integral
part of TS, while others may be more common in TS patients because of certain biological
vulnerabilities (e.g., ADHD). Still others may represent responses to the social stresses
associated with a multiple tic disorder or a combination of biological and psychological
reactions.
Obsessions and Compulsions
Although TS may present itself purely as a disorder of multiple motor and vocal tics, many
TS patients also have obsessive-compulsive (OC) symptoms that may be as disruptive to their
lives as the tics - sometimes even more so. There is recent evidence that
obsessive-compulsive symptomatology may actually be another expression of the TS gene and,
therefore, an integral part of the disorder. Whether this is true or not, it has been well
documented that a high percentage of TS patients have OC symptoms, that those symptoms tend
to appear somewhat later than the tics, and that they may be seriously impairing. The
nature of OC symptoms in TS patients is quite variable. Conventionally, obsessions are
defined as thoughts, images, or impulses that intrude on consciousness, are involuntary and
distressful, and while perceived as silly or excessive, cannot be abolished. Compulsions
consist of the actual behaviors carried out in response to the obsessions or in an effort
to ward them off. Typical OC behaviors include rituals of counting, checking things over
and over, and washing or cleaning excessively. While many TS patients do have such
behaviors, there are other symptoms typical of TS patients that seem to straddle the border
between tics and OC symptoms. Examples are the need to "even things up," to touch things a
certain number of times, to perform tasks over and over until they "feel right," as well as
self-injurious behaviors.
Attention Deficit Hyperactivity Disorder (ADHD)
Up to 50% of all children with TS who come to the attention of a physician also have
attention deficit hyperactivity disorder (ADHD), which is manifested by problems with
attention span, concentration, distractibility, impulsivity, and motoric hyperactivity.
Attentional problems often precede the onset of TS symptoms and may worsen as the tics
develop. The increasing difficulty with attention may reflect an underlying biological
dysfunction involving inhibition and may be exacerbated by the strain of attending to the
outer world while working hard to remain quiet and still. Attentional problems and
hyperactivity can profoundly affect school achievement. At least 30-40% of TS children have
serious school performance handicaps that require special intervention, and children with
both TS and ADHD are especially vulnerable to serious, long term educational impairment.
Attention deficits may persist into adulthood and together with compulsions and obsessions
can seriously impair job performance.
Emotional Lability, Impulsivity, and Aggressivity
Some TS patients (percentages vary greatly in different studies) have significant problems
with labile emotions, impulsivity, and aggression directed to others. Temper fits that
include screaming, punching holes in walls, threatening others, hitting, biting, and
kicking are common in such patients. Often they will be the patients who also have ADHD,
which makes impulse control a considerable problem. At times the temper outbursts can be
seen as reactions to the internal and external pressures of TS. A specific etiology for
such behavioral problems is, however, not well understood. Nevertheless, they create much
consternation in teachers and great anguish both to TS patients themselves and to their
families. The treating physician or counselor is often asked whether those behaviors are
involuntary, as tics are, or whether they can be controlled. Rather than trying to make
such a distinction, it is perhaps more helpful to think of such patients as having a "thin
barrier" between aggressive thoughts and the expression of those thoughts through actions.
Those patients may experience themselves as being out of control, a concept that is as
frightening to themselves as it is to others. Management of those behaviors is often
difficult and may involve adjustment of medications, individual therapy, family therapy, or
behavioral retraining. The intensity of those behaviors often increases as the tics wax and
decreases as the tics wane.
Etiology
The most intensive research in relation to etiology has focused on neurochemical alterations
in the brain.
Multiple neurochemical systems have been implicated by pharmacologic and metabolic
evidence. The most convincing evidence for dopaminergic involvement has come from the
dramatic response to haloperidol and other neuroleptics such as pimozide, fluphenazine, and
penfluridol, as well as exacerbations produced by stimulant medications. Findings of
reduced levels of dopamine metabolites in cerebrospinal fluid (CSF) have led investigators
to believe that TS results from a hypersensitivity of postsynaptic dopamine receptors.
Serotonergic mechanisms have been suggested on the basis of reduced CSF serotonin
metabolites. Since serotonergic systems send projections to the substantia
nigra and the striatum, they could play an important role in the pathophysiology of TS.
Medications affecting that system seem somewhat effective for obsessions but have
inconsistent effects on tics. The role of the cholinergic system is clouded by
contradictory reports. Enhancing cholinergic function by use of physostigmine has been
associated both with the improvement and the worsening of TS. Elevated levels of red blood
cell choline have been found in TS patients and their relatives, but the significance is
unclear. Investigation of the GABAergic system suggests that it may be implicated. The
proximity and connections between the GABA and dopamine systems support the possibility of
an interrelationship. Response to clonazepam (a GABAergic agent) has been positive in some
cases. Yet other GABAergic drugs such as diazepam do not have such positive effects.
Noradrenergic mechanisms have been most persuasively implicated by observations that
clonidine, a drug that inhibits noradrenergic functioning by the stimulation of an
autoreceptor, may improve motor and phonic symptoms. Noradrenergic involvement has also
been suggested by the exacerbation of the syndrome by stress and anxiety. The use of
functional neuroimaging techniques such as positron emission tomography may help clarify
many physiologic relationships and identify important anatomical areas in the near future.
Stimulant Medications
A particularly important risk factor in tics and TS is the use of stimulant medication.
Over 25% of all TS patients in some cohorts have had a course of stimulant medication
early in the emergence of their behavioral or tic symptoms because they have been diagnosed
as having ADHD. Over the last several years, series of cases have been reported in which
the use of stimulants (methylphenidate, dextroamphetamine, and pemoline) has been
correlated with the onset of motor and phonic tics. There is also clinical evidence to
support the observation that stimulants will increase the severity of tics in 25-50% of TS
patients. In many cases, the tics associated with stimulant medication will disappear with
the reduction or termination of the medication. It is more controversial whether stimulants
can actually trigger or produce prolonged chronic multiple tics or TS that will persist
following their termination. However, cases have been reported in which that seems to have
occurred. Available information thus indicates that stimulants should be used cautiously
with ADHD children who have a close relative with tics, should generally be avoided with
ADHD children with a first-degree relative with TS, and should be terminated with the onset
of tics in children who previously were tic-free. Children and parents should be educated
concerning the risks versus benefits in each case prior to being treated with stimulants.
Alternatives such as behavioral management, environmental manipulation, and/or other types
of medication should be considered carefully.
Epidemiology and Genetics
While once thought to be rare, TS is now seen as a relatively common disorder affecting up
to one person in every 2,500 in its complete form and three times that number in its
partial expressions that include chronic motor tics and some forms of obsessive-compulsive
disorder. The question of the familial transmission of TS was first raised in the original
19th century descriptions of the disorder, but a genetic basis for TS was not considered
seriously until recently. Several genetic studies have now been reported and other rigorous
studies are now well enough along to draw several important conclusions. Those studies have
investigated many families in which TS and other tic disorders have been transmitted over
several generations. Based on available information, it is now clear that TS is a genetic
disorder. The vulnerability to TS is transmitted from one generation to another. When we
speak of "vulnerability," we imply that the child receives the genetic or constitutional
basis for developing a tic disorder; the precise type of disorder or severity may be
different from one generation to another. That vulnerability is transmitted by either
mothers or fathers and can be passed on to either sons or daughters. When one parent is a
carrier or has TS, it appears that there is about a 50-50 chance that a child will receive
the genetic vulnerability from that parent. That pattern of inheritance is described as
autosomal dominant. However, not everyone who inherits the genetic vulnerability will
express any of the symptoms of TS. There is a 70% chance that a female gene carrier will
express at least some of the symptoms of TS. For a male gene carrier, there is a 99% chance of
showing some clinical expression of the gene. That likelihood of expression is described as
penetrance. In males, the penetrance is higher than in females; thus, males are more likely
to have some form of expression of the genetic vulnerability. There is a full 30% chance of
female gene carriers showing no symptoms at all. For males, the figure is 1%. There is a
range of forms in which the vulnerability may be expressed that includes full-blown TS,
chronic multiple tics, and, as most recently recognized, obsessive-compulsive disorder.
Some individuals have TS (or chronic tics) and obsessive-compulsive disorder together;
others may have the conditions singly. There are also differences between the sexes in the
form of expression of the TS gene. Males are more likely to have TS or tics; females are
more likely to have obsessive-compulsive disorder; however, both males and females may have
any combination or severity. The severity of the disorder is also highly variable. Most
individuals who inherit the TS genetic vulnerability have very mild conditions for which
they do not seek medical attention. Researchers are actively engaged in searching for the
chromosomal location of the TS gene of affected individuals. At present, there is no
genetic or biochemical test to determine if a person with TS or an unaffected individual
carries the gene. There is no prenatal test for the vulnerability to TS. When scientists
succeed in locating the gene, such tests may become available.
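The transmission and penetrance figures quoted above combine by simple multiplication. The short Python sketch below is purely illustrative (it uses only the approximate percentages cited in this section, which vary between studies, and the names in it are invented, not drawn from the source): a child of one carrier parent has roughly a 50% chance of inheriting the vulnerability, and an inheriting child shows some expression with the sex-specific penetrance.

# Illustrative arithmetic only, using the approximate figures quoted above.
TRANSMISSION = 0.50                              # chance a child inherits the vulnerability
PENETRANCE = {"male": 0.99, "female": 0.70}      # chance an inheriting child shows any expression

def chance_of_expression(sex):
    """Chance that a child of one carrier parent shows some clinical expression."""
    return TRANSMISSION * PENETRANCE[sex]

for sex in ("male", "female"):
    print(f"{sex}: about {chance_of_expression(sex):.1%}")
# male: about 49.5%, female: about 35.0%

Under these assumed figures, roughly half of the sons and about a third of the daughters of a carrier parent would show some form of the vulnerability; the remainder either carry the gene silently or do not inherit it at all.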
Non-Genetic Contributions
The individual variations in character, course, and degree of severity by which TS is
manifested cannot be explained by genetic hypotheses alone. Furthermore, it appears that
about 10-15% of TS patients do not acquire the disorder genetically. Thus, non-genetic
factors are also responsible, both as causes and as modifiers of TS. Non-genetic factors
that have been implicated include such stressful processes or events during the prenatal,
perinatal, or early life periods as fetal compromise and exposure to drugs or other toxins.
Findings from one study in which decreased birth weights were observed in the affected
co-twins of discordant monozygotic pairs lend further support to the influence of
environmental factors.
Clinical Assessment Of Tourette Syndrome
Assessment of a case of TS involves far more than simple diagnosis. Since symptoms may
fluctuate in severity and character from hour to hour, a thorough understanding of the
patient may take a considerable amount of time. As the patient becomes more comfortable
with the doctor, there will be less likelihood of symptom suppression or inhibition. Only
when there is confidence in the physician is the patient likely to acknowledge the most
frightening or bizarre symptoms. The nature, severity, frequency, and degree of disruption
produced by the motor and vocal tics need to be carefully assessed from the time of their
emergence until the present. Inquiries should be made about factors that may have worsened
or ameliorated their severity. A critical question concerns the degree to which the tics
have interfered with the patient's social, familial, and school or work experiences. In
those respects interviews with families may be revealing and informative. During the
evaluation of a patient with TS, the clinician must assess all areas of functioning to
fully understand both difficulties and strengths. It is important to explore the presence
of attentional and learning disabilities, a history of school and/or work performance, and
relationships with family and peers. Before receiving the diagnosis, the patient and/or
family may have thought he or she "was going crazy." The patient may have become extremely
distressed by his or her own experiences and by the often negative responses evoked.
Parents may have scolded, cajoled, ridiculed, threatened, and perhaps beaten the child to
stop the "weird" and embarrassing behavior, and the emotional sequelae may affect the
patient far beyond the period of childhood. During the evaluation of a child, therefore,
family issues including parental guilt need to be addressed. Relevant factors elicited
through careful diagnostic evaluation can be approached through clarification, education,
and therapeutic discussion with the youngster and the family. Careful assessment of
cognitive functioning and school achievement is indicated for children who have school
problems. TS children with school performance difficulties often do not have clearly
delineated learning disorders, and the average IQ of TS patients is normal. Rather, their
problems tend to lie in the areas of attentional deployment, perseverance, and the ability
to keep themselves and their work organized. Many have difficulties with penmanship
(graphomotor skills) and compulsions that interfere with writing. Determining specific
problem areas will help in the recommendation of alternatives (e.g., extended periods of
time for tests, the use of a typewriter, or the emphasis on oral rather than written
reports). The neurological examination should include documentation of neuromaturational
difficulties and other neurological findings. About half of TS patients have
non-localizing, so called "soft," neurological findings suggesting disturbances in the body
scheme and integration of motor control. While such findings have no specific therapeutic
implications, they are worth noting as "baseline" data since the use of medications such as
haloperidol may cloud the neurological picture. The EEG is often abnormal in TS, but the
EEG findings are nonspecific. Computed tomography of the brain produces normal results in
people with TS. Thus, unless there is some doubt about the diagnosis or some complicating
neurological factors, an EEG and a computed tomography are not necessary parts of the
clinical evaluation. Additional studies that may be considered in the biological work-up
include serum electrolytes, calcium, phosphorus, copper, ceruloplasmin, and liver function
tests - all related to movement disorders of various types. In practice, however, they are
rarely needed for the diagnosis. A behavioral pedigree of the extended family, including
tics, compulsions, attentional problems and the like is useful. Previous medications must
be reviewed in detail during assessment. If a child has received stimulant medications, it
is important to determine what the indications for the medications were, whether there were
any pre-existing tics or compulsions, and the temporal relation between the stimulants and
the new symptoms. Catecholaminergic agonists are contained in other drugs, such as in
decongestant combinations used in treating allergies and in medications used for asthma. If
a patient with TS is on a stimulant or a drug containing an ephedrine-like agent,
discontinuation should be strongly considered. If the physician examines a previously
f:\12000 essays\sciences (985)\Biology\Diffrences and Effects of Natural and Synthetic Fertilizers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At the core of the growth and germination of plants lie the nutrients they receive from the soil. The nutrients required for growth are classified into two groupings, macronutrients and micronutrients. Macronutrients are those that are needed in very large amounts, and whose absence can do great harm to the development of plant life. These nutrients include calcium, nitrogen, phosphorus, and potassium, and are very sparse in most soils, making them the primary ingredients in most fertilizers. The other, more common macronutrients are called secondary nutrients, as they are not of as much importance. Micronutrients, the other classification, consist of all the other elements and compounds required for sufficient growth, such as iron, boron, manganese, copper, zinc, molybdenum, and chlorine. In some cases, these nutrients are found to be missing from soils, but that is extremely uncommon.
As plants need to retrieve all of their nutrients from the soil, many methods have been developed to improve or change the soil to suit a plant's needs. Soil, in science as well as in common gardening, must undergo detailed inspection to determine such things as its pH. A soil with a pH above 7.0 is called an alkaline soil, and strongly alkaline soil will commonly harm or kill plants. Mineral content, as mentioned above, is also a concern, and must be closely monitored. After inspection, it is common to apply organic amendments other than fertilizers, such as peat moss, ground bark, or leaf mold. It is after these steps that fertilization must occur, leading to a debate which has plagued gardeners and scientists alike: organic or chemical?
Fertilizers, in both natural and synthetic forms, are carriers of the primary and secondary nutrients that are found less often in even the most fertile soils. Fertilizers are mixtures that are worked into or applied to soil, thus greatly increasing its potency and maximizing plant growth. As mentioned before, however, there are both natural and synthetic fertilizers, each with varying effects. The compositional differences of these types are great, indeed. Natural fertilizers, as one would expect, are totally organic, and usually come from the manure of animals. These are the fertilizers that produced the forests of the world, along with much of the other plant life in ecosystems, and have been used since ancient times. Chemical fertilizers are a more recent invention, consisting of carefully concentrated mixtures of nutrients, formulated for quick growth. These can take many forms, from powder, to "dirt", to even tablets!
Natural fertilizers, as mentioned above, include the various types of manure and other animal waste products, as well as compost, which is a mixture of various decaying plant and animal products mixed together to form a veritable "feast" of nutrients and minerals. Uncountable types of these nutrient boosters have been developed by agriculturists, involving such oddities as kelp parts, fish meal, blood meal, and even ground gypsum! These different fertilizers tend to work well with plants, and many scientific agricultural experiments have shown them to be very effective over the long term, with the only drawback being a slower growth process. Crops grown this way have been shown to become, in the long run, larger, healthier, and above all, 100% non-toxic.
The other method of fertilization, that of chemicals, offers major advantages over organic fertilization, but also brings undesirable complications. Chemical fertilizers have the major advantage of containing a near perfect mixture of all the nutrients necessary for growth, as well as being time-released to give the plant a steady supply of minerals. This causes extremely fast growth, with an initially healthy stock. Unfortunately, these chemicals, for all their benefit, can easily cause what is called "fertilizer burn". This occurs when the fertilizer supplies too many nutrients, overloading the plant's biological systems and effectively killing the plant. Also, the chemicals often harm plants over time, causing ill health and quicker death than natural fertilizers, as soil organisms die out from overexposure, reducing the soil quality. Plants grown with chemical fertilizers have a greater chance of disease and toxicity, but the initial growth usually offsets these complications.
f:\12000 essays\sciences (985)\Biology\digestive track essay.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The BIG MAC is placed in the mouth. The bread is mainly starch, the special sauce is mainly fat, and the lettuce, pickles, and onions provide niacin.
The beef patties are protein, and cheese is a form of calcium, fat, and protein.
The piece of the BIG MAC is placed in the mouth and chewed, and the starch is digested by saliva. The starch becomes a kind of sugar which
is used as nourishment for the cells. Saliva contains enzymes, which change food to a form that can be used by the body. The burger is swallowed and
passes into the esophagus. This is the muscular tube that contracts along its length to push the food down into the stomach. In the alimentary
canal, the meat and special sauce are digested and absorbed. The fat in the sauce is used for energy and the protein in the meat is used to build muscle;
the lettuce, pickles, and onions are also absorbed.
In the stomach, which is a muscle, the food is churned about while digestive juices pour in from glands in the stomach wall. Eventually,
the churning action moves food out of the stomach and into the small intestine. The liver contributes to this digestive process by secreting
into the small intestine a liquid called bile. The pancreas secretes pancreatic juice, which further aids in dissolving food.
The small intestine undergoes continual muscular contractions called peristalsis. This action pushes food along and eventually into the large intestine. The surface
of the small intestine has a large number of threadlike projections called villi. The digested, liquefied food is absorbed through the villi and
passes into capillaries that are inside the villi. Now the food is in the bloodstream. Not all parts of the BIG MAC can be digested. Those parts
which are indigestible pass through the large intestine to its lower part, called the rectum. Eventually, the indigestible food is eliminated from
the rectum through the anus. This is the complete track that the BIG MAC follows when it is eaten.
f:\12000 essays\sciences (985)\Biology\Discovering Sickle Cell Anemia.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The topic that I am learning about is Sickle Cell Anemia, a hereditary disease which affects red blood cells. Throughout this research paper, I will discuss what exactly it is, how it is caused, any known treatments or cures, and many other facts that are important in this disease.
Sickle Cell Anemia is a health problem throughout the world. More than 250,000 babies are born worldwide with this inherited blood cell disorder (http://www.medaccess.com/h_child/sickle/sca_01.htm). The disorder causes red blood cells to distort into a sickle shape, which clogs the arteries. Persistent pain and life-threatening infections result from the illness. About one in 400 black newborns in the U.S. has sickle cell anemia, and one in 12 black Americans carries the sickle cell trait (http://www.medaccess.com/h_child/sickle/sca_01.htm). This leaves a good chance that the parent with the trait can pass the defect on to offspring although their own health is not harmed.
The cause of sickle cell anemia is rather simple, but it has a life-threatening effect. Anyone who carries the inherited trait for sickle cell anemia, but doesn't have the disorder, is actually protected from a severe form of malaria. This helped children in countries where malaria was a problem to survive that disease. What happened to those children? They grew up, had their own children, and ended up passing the gene for sickle cell anemia on to their offspring.
This disease is a hereditary blood disorder that affects the red blood cell. Red blood cells contain a protein called hemoglobin which transports oxygen from your lungs to every part of your body. Hemoglobin's oxygen carrying ability is essential for living but if there is a structural defect on the pigmented molecule, it can be fatal. When a normal red blood cell distributes its
oxygen, it has a disc shape. But when an affected red blood cell containing sickle cell hemoglobin releases its oxygen, the shape of the cell changes from a disc shape to a sickled shape. In hemoglobin, there are four chains of amino acids. Two are known as alpha chains, and two are called beta chains. In normal hemoglobin, the amino acid in the sixth position on the beta chain is known as glutamic acid (refer to diagram 1.1 on page 6). In sickle cell anemia, the glutamic acid is pushed out of its place and replaced with another amino acid called valine (refer to diagram 1.2 on page 6). This simple substitution has devastating consequences.
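The substitution described above can be pictured with the first eight amino acids of the beta chain. The short Python sketch below is only an illustration added for clarity (the one-letter codes and the glutamic acid-to-valine change at position 6 are standard biochemistry, not text from this paper, and the variable names are invented):

# First eight residues of the beta-globin chain, in one-letter amino acid code.
NORMAL_BETA = "VHLTPEEK"   # ...-Pro-Glu-Glu-Lys: glutamic acid (E) at position 6
SICKLE_BETA = "VHLTPVEK"   # valine (V) replaces the glutamic acid at position 6

for position, (normal, sickle) in enumerate(zip(NORMAL_BETA, SICKLE_BETA), start=1):
    if normal != sickle:
        print(f"position {position}: {normal} (glutamic acid) -> {sickle} (valine)")
# prints: position 6: E (glutamic acid) -> V (valine)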
Hemoglobin molecules that contain the beta chain defect stick to one another instead of staying separate after releasing oxygen. This forms long, rigid rods inside the red blood cells. The rods cause the normally smooth and disc shaped blood cells to take on a sickle shape. When this happens, the blood cells lose the essential ability to deform and squeeze through small blood vessels and arteries. The sickle cells become stiff and sticky, clogging vessels and depriving tissue of a sufficient blood supply. This change makes the hemoglobin less soluble in water. When a person is deprived of oxygen, the hemoglobin molecules join together and form fibers. The fibers cause the blood cells to change shape.
Sickle hemoglobin and normal hemoglobin carry the same amount of oxygen, but there are two major differences between the two kinds of cells. Normal hemoglobin is found only in disc-shaped red blood cells that are soft, which permits them to flow easily through small blood vessels. Diseased red blood cells are sickle shaped and very hard, and they tend to get stuck in small blood vessels and stop the flow of blood.
The other difference between the two cells is their longevity. Sickle cells do not live as long as
normal cells. Normal healthy cells can survive for about 120 days, while the more fragile sickle cells can survive for about 60 days or even less. The body cannot make new red blood cells as fast as it loses sickled blood cells. A sickle cell patient has fewer red blood cells and less hemoglobin than a person with normal red blood cells. This results in less oxygen being available for use by the cells of the body.
Anyone whose parent has the gene for sickle cell anemia has the chance of at least having the sickle cell trait. In order for a child to have the disease, both parents must have the sickle cell gene (refer to diagrams 2.1 and 2.2 on page 6). The disease mostly affects people of African descent, as well as people in South America, Latin America, the West Indies, Greece, Spain, Italy, and Turkey.
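Because both parents must contribute the sickle cell gene, the chance of two carrier (trait) parents having an affected child follows the standard arithmetic for a recessive trait. The short Python sketch below is an added illustration of that cross, not material from the essay; "A" marks the hypothetical normal allele and "S" the sickle allele:

from itertools import product
from collections import Counter

# Each carrier (trait) parent passes on either the normal allele "A" or the sickle allele "S".
parent1 = ["A", "S"]
parent2 = ["A", "S"]

outcomes = Counter()
for allele1, allele2 in product(parent1, parent2):
    genotype = "".join(sorted(allele1 + allele2))   # "AA", "AS", or "SS"
    outcomes[genotype] += 1

total = sum(outcomes.values())
for genotype, count in sorted(outcomes.items()):
    print(f"{genotype}: {count}/{total}")
# AA: 1/4 (unaffected), AS: 2/4 (carriers with the trait), SS: 1/4 (sickle cell anemia)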
When a blockage of sickled red blood cells occurs, it can take place in any organ or joint of the body wherever a blood clot develops. The frequency and amount of pain vary widely depending on the person. In some people, painful episodes occur once a year, but other patients can have as many as 15 to 20 episodes annually. These excruciating, disruptive events can be so brutal that the patient must go into the hospital for five to seven days to obtain intravenous fluids and narcotic pain killers. The pain can only be controlled; it cannot be stopped, and there is no way to identify when it is likely to happen again.
Sickle cell clots are life threatening, depending on where they occur. One of the most severe places for a clot to occur is the brain. A clot there could lead to a stroke, which could result in paralysis or, even worse, death. Sometimes a blood transfusion is required every three to four weeks to avoid recurrence of clots in the brain.
When blood capillaries are clogged, it can lead to many types of problems, depending upon
where the blockage occurs. The outcome of the blockages may lead to problems such as kidney infections, death and decay of tissues, intense pain in chest, arms and legs, disease of the retina of the eye, slow healing sores or ulcers, and even gallstones. When the hemoglobin is low, it is manifested by fatigue and weakness.
Currently, there is no cure for Sickle Cell Anemia. But doctors do offer treatments that help control the disease. Pain medication, antibiotics, rest, and high fluid intake are all treatments for aspects of sickle cell anemia. There are also experimental therapies that are available to some patients. The drug hydroxyurea is a treatment that has been shown to reduce the frequency of painful episodes and hospital visits by about 50%. Preventive administration of penicillin to affected children by the age of four months greatly decreases mortality from infections.
While researching this topic and studying the disease, I have learned many new details about it. I realized that even the slightest change in the sequence of amino acids can lead to very harmful effects. In this disease, only one amino acid was substituted and still the illness is very harsh. I also learned how exactly the cells deform and why they go into a sickle shape. It was very interesting to learn that the disease mostly affects people of African descent. I also learned that when the sickle cells get clogged in an artery, it results in a very painful attack on the person and may cause them to have an episode. When episodes occur, the patient may have to go into a hospital for pain killers. The disease also can lead to ulcers, strokes, paralysis, decay of tissues, and many other problems throughout the person's entire life. Sickle Cell Anemia is a very serious disease that affects a person and their way of life. It doesn't have a known cure yet, but many treatments and therapies are available. If a person has this disease, it is life-threatening and painful attacks can
occur at any time, anywhere. It is important to know the causes and reasons for the disease so that you can relate to what a person with Sickle Cell Anemia is going through.
f:\12000 essays\sciences (985)\Biology\Diseases Sex Linked and Sex Influenced.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
There are thousands of cases of sex linked and sex influenced diseases worldwide. These diseases can range from a social inconvenience to a fatal ailment. In sex linked diseases, like Muscular Dystrophy, hemophilia and color blindness, only males are affected. When a man affected by a sex linked disease has children, all his sons are normal, but all of his daughters are carriers. When a carrier woman and an unaffected man have children, half of the sons are normal, and half of the sons are affected; half of the daughters are carriers and half of the daughters are normal. Only males are affected because the sex linked diseases affect the X chromosome. Males have one X chromosome and one Y chromosome, so they need to use that X, whether it is flawed or not. Females, on the other hand, have two X chromosomes, so if one is defective, they can use their second X chromosome. Duchenne's Muscular Dystrophy (DMD) is defined as "a genetic disease characterized by defective muscle cells that can not produce a protein called dystrophin" (Science News 380). In patients with hemophilia, there is a deficiency of a protein needed for blood clotting, causing this hereditary bleeding disorder. In red/green color blindness, the most common form of color blindness, which affects six percent of the population, the cones in the retina that receive green light do not function properly. Unlike sex linked diseases, sex influenced diseases are not reserved solely for the male. However, the diseases occur in males much more frequently than in females. This is because sex influenced diseases result from imbalances in testosterone, which is much more highly concentrated in males. Baldness and gout are two diseases that are a result of these hormonal imbalances. Baldness is defined as the lack or loss of hair. Permanent baldness strikes on a hereditary basis because the hormonal imbalances tend to be passed from generation to generation. Gout is a hereditary metabolic disorder that involves recurrent acute attacks of severe inflammation of joints.
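The inheritance pattern described in the paragraph above can be checked by enumerating the cross of a carrier mother with an unaffected father. The short Python sketch below is purely illustrative and not part of the original essay; the lowercase "x" is a made-up label for the X chromosome carrying the disease allele:

from itertools import product

mother = ["X", "x"]   # carrier mother: one normal X, one X carrying the disease allele
father = ["X", "Y"]   # unaffected father

for from_mother, from_father in product(mother, father):
    child = from_mother + from_father
    if "Y" in child:
        status = "affected son" if "x" in child else "unaffected son"
    else:
        status = "carrier daughter" if "x" in child else "non-carrier daughter"
    print(child, "->", status)
# Half of the sons are affected and half of the daughters are carriers,
# matching the ratios described above.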
Sex linked diseases are born when the sex chromosomes, which compose two of the 46 chromosomes, are mutated by an error in copying genes during reproduction. One of these sex linked diseases is Duchenne's Muscular Dystrophy. DMD is a disease that has rightfully been gaining some headlines recently, as the disease is taking the lives of young children. Several potential cures have been proposed recently in the medical community, but none have paid any dividends. According to the Muscular Dystrophy Association, one in every 2500 boys is born with muscular dystrophy. The defective gene is found at the top of the X chromosome. This gene is the largest known to exist. In patients with DMD, this gene is either missing or severely mutated. The symptoms of DMD are fatal. By age eleven, the victims weaken fast. Normally, muscle deterioration begins in the lower legs and then moves up the body of the patient. Generally, victims are in their early twenties when they die from either heart failure or diaphragm failure. (The diaphragm is the muscle that makes breathing possible.) One mother of a Duchenne's Muscular Dystrophy patient says succinctly, "Eventually these kids get bedridden and then they die." (Grady 87) It is imperative to find a cure for Duchenne's Muscular Dystrophy so we can save the lives of thousands of innocent children.
One of the major researchers working on a cure for DMD is Dr. Peter K. Law of the Cell Therapy Research Foundation. Law has been in the field for over twenty years and has made many discoveries. In 1972, Law's doctoral thesis proved that dystrophic muscle cells have abnormal cell membranes. This showed that the disease was caused by a muscle defect, not a nerve defect as was previously thought. Since it was clear that it was a muscle defect, Law tried to transplant both whole and minced muscle into mice. The minced muscle proved to be too damaged to function, and the whole muscle was so large that it died before an adequate blood and nerve supply was developed. At this point, since the whole muscle was too large but was the only feasible solution, he decided to transplant whole muscles of a baby mouse into an adult mouse. This muscle was not damaged, because it was not minced, and it was not too large, because the baby muscle is considerably smaller than an adult muscle. Not only did the mouse survive, but normal function was restored to diseased adult muscle. Since the transplantation of muscle in mice was so successful, Dr. Law tried to find something along those lines that would work in a human. He found a solution: myoblasts. A myoblast is a mature muscle cell. It is a long thin fiber that can be more than an inch long. Unlike cells of other types, myoblasts have over 200 nuclei. When they are damaged, the myoblasts call upon a reservoir of satellite cells: small, immature cells that nestle inside the muscle fiber's outer sheath. Satellite cells are the key to muscle repair and regeneration. The satellites leave the fiber, divide, and then flatten into spindle-shaped forms - the myoblasts. Myoblasts repair muscle cells by fusing with the injured cell, and they share their nuclei with the injured cell's nuclei. When these myoblasts fuse completely, new cells are formed.
In 1970 Law thought of a procedure that would fuse healthy myoblasts with dystrophic ones, hoping that the resulting hybrid would have some function. However, Law had to perfect this procedure. One of the main problems was that when the healthy myoblast cells were fused, the immune system would treat them as alien and attack them. According to Law, another thing they had to do was "... to design and perfect a culture medium to mass-produce myoblasts and weed out other cells." (Grady 90) Law explains yet another problem encountered, "If you cram too many cells in the same spot, they might not survive." (Grady 90)
While Law was working on his myoblast experiments, another door was opened by the discovery of the exact gene that caused the dystrophy. Many scientists thought that this gene therapy, rather than Law's cell therapy, was the future. But Law dismissed gene therapy, saying, "To me, in reality, that science will not work in our lifetime. First you must make a normal copy of the defective gene, which is enormous, and somehow insert it into a small virus to carry it into the host. Then you must hope that the virus will attack the right cell in the body, get through the cell membrane, break into the nucleus, and splice itself into place inside the cell's DNA. And then you expect that cell to function as normal? Are you kidding me?" (Grady 91-92) Law also made it clear that in gene therapy you have to replace the exact right nucleus in the exact right gene. In cell therapy, it doesn't matter which is the exactly right one that needs replacement because all of the cells are being replaced.
Just two years after he wrote off gene therapy, in 1988, when the problems were weeded out, Law injected healthy myoblasts into 19 dystrophic mice. The results of these tests were encouraging: 11 mice fared extremely well, 3 showed moderate improvement, and 5 rejected the myoblasts. Another encouraging fact was that the life span was increased from nine months to nineteen months in the mice that fared extremely well. With the success in the mice, Law decided to launch phase I of his human experiments. Each of three boys received four injections of myoblasts from either their brother's body or their father's body. In two of the boys, these injections, which were given in the foot, were matched in the other foot by placebo saline solutions so nobody except Law's assistant would know in which foot the real injections were placed. At the end of the experiment, all three boys said that they felt that one foot was stronger than the other. The foot that felt stronger was the same foot that was injected with
the myoblasts in all three cases, and all three feelings of greater strength were backed up by muscle strength tests administered by Law.
Although the results of Phase I seemed ideal, Law received some criticism from his peers. They said that he rushed too quickly into the human experiments without gaining complete assurance that it would work to perfection. Some scientists were concerned that the myoblast injection would have side effects. The criticism was not publicized to a wide extent, and it went virtually unnoticed after Law made a statement in which he said, "We have to move the research forward as quickly as possible. These are dying children. We have no time to lose." (Grady 88)
In May 1991, after Phase I was considered to be a success, Law launched Phase II. As of July 24, 1992, Law had treated the major leg muscles of 32 boys, ages 6 to 14. For this process, Law removes an eraser-sized piece of muscle from either the patient's father or brother. Then, he grows the muscle in the lab until he has 5 million myoblasts. At the time of treatment, the patients go under general anesthesia for 10 minutes and receive 48 injections of myoblasts in 22 muscle groups. All patients take cyclosporin, an immune system suppressant, for six months to prevent the boys from rejecting the myoblasts. The muscle strength of each patient is recorded 3 months before treatment, at the time of treatment, and three months after treatment. This test was also successful. Muscle strength was reported to improve in 43% of the muscles by an average of 41% when compared to muscle strength before treatment. 38% of the muscles stopped deteriorating after treatment and 19% completely failed to respond.
However, as in Phase I, Law's success was accompanied by criticism. The major problem his peers had was that there were no controls. Says Robert H. Brown Jr. of Massachusetts General Hospital in Boston during one meeting session, "I am astonished that you haven't controlled for cyclosporin." (Thompson 473) Law counters, "We have a perfect control, strength before and after transfer on the same muscle." (Thompson 473) Law also says that the upper body of the patient acts as a control. Law says that another reason he does not use controls is that the saline solution is shown to speed up deterioration, and that would not be ethically correct. His opposition, however, says that since he had only two patients with the placebo solution, those results could not be verified. Another thing that was criticized was the use of muscle strength to measure the effectiveness. The three major components of the criticism are that the children may not be using full exertion, that strength increases as you get older, and that there is no way to know whether it was the dystrophin rather than the cyclosporin that produced this strength.
The work done by Peter Law has been exemplary. He has found a method for prolonging the life of young DMD patients. Although the way Law went about his trials was controversial, moving as fast as possible is imperative because thousands of children are having their ability to walk, and eventually their lives, taken away by this disease. If Law had waited, it might have been too late. Although there is a large controversy concerning Peter Law, the Muscular Dystrophy Association should support him and encourage him to perfect a cure for this disease.
Another sex linked disease that is similar to DMD in makeup, though not in symptoms, is hemophilia. In hemophiliacs, a protein that clots blood is missing or abnormal due to a gene mutation that was formed in the duplication of sex genes. The protein missing in hemophilia victims is antihemophilic globulin (AHG). As in all sex linked diseases, only males can show symptoms, and females are the only carriers. The father of a hemophiliac may or may not be affected, but the mother must be a carrier. A hemophiliac has received his mother's bad X chromosome and his father's Y. The same couple can also have a normal son who received his mother's good X and his father's Y. If the couple has a daughter, she can receive her father's X and either her mother's bad X or her mother's good X. So, the chance of a hemophiliac boy being born when the mother is a carrier is one in four. Therefore the incidence of hemophilia is familial, as in the Russian royal family. In hemophiliacs, the tendency to bleed becomes noticeable at a young age and leads to severe anemia or even death. Hemophiliacs often have large bruises in the soft tissue of the skin from incidents as small as lightly bumping into something. This bruising is much like the bruising of the elderly. Not only will bruises form, but bleeding will often occur for no reason in the mouth, nose, and gastrointestinal tract. Once the victim grows out of childhood, hemorrhages in knees, ankles, elbows, and other joints occur frequently. These hemorrhages result in swelling which impairs the victim's function. Hemophilia patients are generally advised to refrain from physical activity. When hemorrhages occur, local applications such as thrombin are applied that serve as a blood clotting mechanism, or blood is transfused.
A third type of sex linked disease caused by a defective chromosome is color blindness. Red/green color blindness, the most common type, which affects six percent of the population, is caused by defective green cones in the retina. People with red-green color deficiency see blue and orange very clearly and brightly. Other colors, although different from the colors that normal people see, are always the same to them and suit most victims fine because they have nothing to compare the colors they see to (USA Today 16). As with hemophilia, Duchenne's Muscular Dystrophy, and all sex linked diseases, only males suffer the symptoms, and the females are the carriers. Although color blindness is a disease that affects thousands of people, it is not a life-threatening disease. Most color blind people do not suffer, because they do not know that the color should be different. Few problems, like traffic lights, hinder color blind people, and as Cynthia Bradford, an ophthalmologist at the University of Oklahoma Health Sciences Center, says, "With many people, you might not even know they're color blind unless they tell you" (USA Today 16).
Unlike sex linked diseases, sex influenced diseases do not affect one sex solely. Baldness, the lack or loss of hair, is caused by an imbalance of testosterone. Since it is caused by testosterone, which is much more concentrated in males, sex influenced diseases are much more common in males. This imbalance causes the destruction of hair follicles, which makes the baldness permanent. The most common type of baldness is male-pattern baldness, which affects forty percent of some male populations (Norton 2:826). Male-pattern baldness is hereditary, and varies in degree from generation to generation. Ironically, people with male pattern baldness have a higher percentage of body hair than most, and those Aborigines with male pattern baldness generally have bald calves as well. Although this disease is not life-threatening, baldness is a social problem. Almost every other man is a victim, and those who do suffer from the condition often face prejudice. Solutions, though not cures, for baldness do exist. The first obvious option is the wig. Secondly, hair transplants are becoming more and more frequent, and topical solutions such as minoxidil have helped to prevent further balding in many cases, and to reinitiate hair growth in a much smaller percentage of users. The important thing to remember about sex influenced diseases is that they are hereditary, but only to the extent of the amount of testosterone produced. The genes tell the offspring the amount and concentration of testosterone, not whether or not to lose hair. If the amounts of testosterone relayed are not normal, baldness may occur.
A second sex influenced disease is gout. Gout is the "hereditary metabolic disorder that is characterized by recurrent acute attacks of severe inflammation in one or more of the extremities" (Norton 5:392). This inflammation is caused by an excess deposition of uric acid in and about the joints. Like baldness, this condition strikes men predominantly, but can also be found in women. The exact cause of gout is not yet known; however, it is logical to believe that it is caused by the same hormonal imbalances as baldness, and that is why it is classified as a sex influenced disease. Gout is inborn; however, the symptoms do not occur until middle age. Before the attacks, small amounts of uric acid build up in the joints. All joints, especially the big toe, are susceptible. Symptoms such as heat, redness of the skin, and extreme tenderness and pain accompany the affected joints. Numerous gout attacks can cause knobby bumps on the affected joints. Acute cases of gout may come and go in a matter of a week for no apparent reason. Some circumstances, however, can precipitate the symptoms of gout. These circumstances include emotional upset, diuresis, surgery, trauma, and the administration of certain drugs. Colchicine is the classic treatment for gout, but new medicines have surfaced recently.
Sex linked and sex influenced diseases are a problem that hurts our society. Although many of the diseases are just an inconvenience, others are fatal. There is no fathomable way of preventing any of these diseases, unless genes can be altered. The only medicine to treat these diseases acts as a suppressant, not as an end to the disease itself. Hopefully, cures can be found to save the lives of young, innocent people who are affected by hemophilia, Duchenne's Muscular Dystrophy, and other fatal diseases.
Works Cited
"Color Blindness Misconceptions." USA Today 120 (1992): 16.
"Foot Feat: transplant treats dystrophy." Science News 16 June 1990: 380.
Grady, Denise. "One foot forward." Discover September 1990: 86-93.
Massie, Robert, and Suzanne Massie. Journey. New York: Alfred A. Knopf, 1961.
Norton, Peter B. "baldness." The New Encyclopedia Britannica. 1994 ed.
Norton, Peter B. "gout." The New Encyclopedia Britannica. 1994 ed.
Norton, Peter B. "hemophilia." The New Encyclopedia Britannica. 1994 ed.
Thompson, Larry. "Cell transplant results under fire." Science 257 (24 July 1992): 472-474.
Diseases: Sex Linked and Sex Influenced
by
Richard Nixon
Honors Biology
Mrs. Linda
December 19, 1994
f:\12000 essays\sciences (985)\Biology\Diversity of Plants.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DIVERSITY OF PLANTS
Plants evolved more than 430 million years ago from multicellular green algae. By 300 million years ago, trees had evolved and formed forests, within which the diversification of vertebrates, insects, and fungi occurred. Roughly 266,000 species of plants are now living.
The two major groups of plants are the bryophytes and the vascular plants; the latter group consists of nine divisions that have living members. Bryophytes and ferns require free water so that sperm can swim between the male and female sex organs; most other plants do not. Vascular plants have elaborate water- and food-conducting strands of cells, cuticles, and stomata; many of these plants are much larger than any bryophyte.
Seeds evolved among the vascular plants and provided a means to protect young individuals. Flowers, which are the most obvious characteristic of angiosperms, guide the activities of insects and other pollinators so that pollen is dispersed rapidly and precisely from one flower to another of the same species, thus promoting outcrossing. Many angiosperms display other modes of pollination, including self-pollination.
Evolutionary Origins
Plants are derived from an aquatic ancestor, but the evolution of their conducting tissues, cuticle, stomata, and seeds has made them progressively less dependent on water. The oldest plant fossils date from the Silurian Period, some 430 million years ago.
The common ancestor of plants was a green alga. The similarity of the members of these two groups can be demonstrated by their photosynthetic pigments (chlorophyll a and b, and carotenoids); chief storage product (starch); cellulose-rich cell walls (in some green algae only); and cell division by means of a cell plate (in certain green algae only).
Major Groups
As mentioned earlier, the two major groups of plants are the bryophytes--mosses, liverworts, and hornworts--and the vascular plants, which make up nine other divisions. Vascular plants have two kinds of well-defined conducting strands: xylem, which is specialized to conduct water and dissolved minerals, and phloem, which is specialized to conduct the food molecules the plants manufacture.
Gametophytes and Sporophytes
All plants have an alternation of generations, in which haploid gametophytes alternate with diploid sporophytes. The spores that sporophytes form as a result of meiosis grow into gametophytes, which produce gametes--sperm and eggs--as a result of mitosis.
The gametophytes of bryophytes are nutritionally independent and remain green. The sporophytes of bryophytes are usually nutritionally dependent on the gametophytes and mostly are brown or straw-colored at maturity. In ferns, sporophytes and gametophytes usually are nutritionally independent; both are green. Among the gymnosperms and angiosperms, the gametophytes are nutritionally dependent on the sporophytes.
In all seed plants--gymnosperms and angiosperms--and in certain lycopods and a few ferns, the gametophytes are either female (megagametophytes) or male (microgametophytes). Megagametophytes produce only eggs; microgametophytes produce only sperm. These are produced, respectively, from megaspores, which are formed as a result of meiosis within megasporangia, and microspores, which are formed in a similar fashion within microsporangia.
In gymnosperms, the ovules are exposed directly to pollen at the time of pollination; in angiosperms, the ovules are enclosed within a carpel, and a pollen tube grows through the carpel to the ovule.
The nutritive tissue in gymnosperm seeds is derived from the expanded, food-rich gametophyte. In angiosperm seeds, the nutritive tissue, endosperm, is unique and is formed from a cell that results from the fusion of the polar nuclei of the embryo sac with a sperm cell.
The pollen of gymnosperms is usually blown about by the wind; although some angiosperms are also wind-pollinated, in many the pollen is carried from flower to flower by various insects and other animals. The ripened carpels of angiosperms grow into fruits, structures that are as characteristic of members of the division as flowers are.
GYMNOSPERMS AND ANGIOSPERMS
Gymnosperms
Gymnosperms are non-flowering plants. They also make up four of the five divisions of the living seed plants, with angiosperms being the fifth.
In gymnosperms, the ovules are not completely enclosed by the tissues of the sporophytic individual on which they are borne at the time of pollination. Common examples are conifers, cycads, ginkgo, and gnetophytes. Fertilization of gymnosperms is unique.
The cycad sperm, for example, swim by means of their numerous, spirally arranged flagella. Among the seed plants, only the cycads and Ginkgo have motile sperm. The sperm are transported to the vicinity of the egg within a pollen tube, which bursts, releasing them; they then swim to the egg, and fertilize it.
Angiosperms
The flowering plants dominate every spot on land except for the polar regions, the high mountains, and the driest deserts. Despite their overwhelming success, they are a group of relatively recent origin. Although they may be about 150 million years old as a group, the oldest definite angiosperm fossils are from about 123 million years ago.
Among the features that have contributed to the success of angiosperms are their unique reproductive features, which include the flower and the fruit.
Angiosperms are characterized primarily by features of their reproductive system. The unique structure known as the carpel encloses the ovules and matures into the fruit. Since the ovules are enclosed, pollination is indirect.
History
The ancestor of angiosperms was a seed-bearing plant that was probably already pollinated by insects to some degree. No living group of plants has the correct combination of characteristics to be this ancestor, but seeds have originated a number of times during the history of the vascular plants.
Although angiosperms are probably at least 150 million years old as a group, the oldest definite fossil evidence of this division is pollen from the early Cretaceous Period. By 80 or 90 million years ago, angiosperms were more common worldwide than other plant groups. They became abundant and diverse as drier habitats became widespread during the last 30 million years or so.
Flowers and Fruits
Flowers make possible the precise transfer of pollen, and therefore, outcrossing, even when the stationary individual plants are widely separated. Fruits, with their complex adaptations, facilitate the wide dispersal of angiosperms.
The flowers of primitive angiosperms had numerous, separate, spirally arranged flower parts, as we know from the correlation of flowers of this kind with primitive pollen, wood, and other features. Sepals are homologous with leaves; the petals of most angiosperms appear to be homologous with stamens, although some appear to have originated from sepals; and stamens and carpels probably are modified branch systems whose spore-producing organs were incorporated into the flower during the course of evolution.
Bees are the most frequent and constant visitors of flowers. They often have morphological and physiological adaptations related to their specialization in visiting the flowers of particular plants.
Flowers visited regularly by birds must produce abundant nectar to provide the birds with enough energy so that they will continue to be attracted to them. The nectar of such flowers tends to be well protected by the structure of the flowers.
Fruits, which are characteristic of angiosperms, are extremely diverse. The evolution of structures in particular fruits that have improved their possibilities for dispersal in some special way has produced many examples of parallel evolution.
Fruits and seeds are highly diverse in terms of their dispersal, often displaying wings, barbs, or other structures that aid their dispersal. Means of fruit dispersal are especially important in the colonization of islands or other distant patches of suitable habitat.
VASCULAR PLANT STRUCTURE
Vegetative Organs
A vascular plant is basically an axis consisting of root and shoot. The root penetrates the soil and absorbs water and various ions, which are crucial for plant nutrition, and it also anchors the plant. The shoot consists of stem and leaves. The stem serves as a framework for the positioning of the leaves, the principal places where photosynthesis takes place.
Plant Tissue
The stems and roots of vascular plants differ in structure, but both grow at their apices and consist of the same three kinds of tissues:
1. Vascular tissue--conducts materials within the structure; it consists of two types:
(1) xylem--conducts water and dissolved minerals
(2) phloem--conducts carbohydrates, mainly sucrose, which the plant uses for food, as well as hormones, amino acids, and other substances necessary for plant growth
2. Ground tissue--performs photosynthesis and stores nutrients; the vascular tissue is embedded within it
3. Dermal tissue--the outer protective covering of the plant
Growth
Plants grow by means of their apical meristems, zones of active cell division at the ends of the roots and the shoots. The apical meristem gives rise to three types of primary meristems, partly differentiated tissues in which active cell division continues to take place. These are the protoderm, which gives rise to the epidermis; the procambium, which gives rise to the vascular tissues; and the ground meristem, which becomes the ground tissue.
The growth of leaves is determinate, like that of flowers; the growth of stems and roots is indeterminate.
Water reaches the leaves of a plant after entering it through the roots and passing upward via the xylem. Water vapor passes out of the leaves by entering intercellular spaces, evaporating, and moving out through stomata.
Stems branch by means of buds that form externally at the point where the leaves join the stem; roots branch by forming centers where pericycle cells begin dividing. Young roots grow out through the cortex, eventually breaking through the surface of the root.
Propagation
An angiosperm embryo consists of an axis with one or two cotyledons, or seedling leaves. In the embryo, the epicotyl will become the shoot, and the radicle, a portion of the hypocotyl, will become the root. Food for the developing seedling may be stored in the endosperm at maturity or in the embryo itself.
NUTRITION AND TRANSPORT IN PLANTS
The body of a plant is basically a tube embedded in the ground and extending up into the light, where expanded surfaces--the leaves--capture the sun's energy and participate in gas exchange. The warming of the leaves by sunlight increases evaporation from them, creating a suction that draws water into the plant through the roots and up the plant through the xylem to the leaves. Transport from the leaves and other photosynthetically active structures to the rest of the plant occurs through the phloem. This transport is driven by osmotic pressure; the phloem actively picks up sugars near the places where they are produced, expending ATP in the process, and unloads them where they are used. Most of the minerals critical to plant metabolism are accumulated by the roots, which expend ATP in the process. The minerals are subsequently transported in the water stream through the plant and distributed to the areas where they are used--another energy-requiring process.
Soil
Soils are produced by the weathering of rocks in the earth's crust; they vary according to the composition of those rocks. The crust includes about 92 naturally occurring elements. Most elements are combined into inorganic compounds called minerals; most rocks consist of several different minerals.
They weather to give rise to soils, which differ according to the composition of their parent rocks. The amount of organic materials in soils affects their fertility and other properties.
About half of the total soil volume is occupied by empty space, which may be filled with air or water depending on moisture conditions. Not all of the water in soil, however, is available to plants, because of the nature of water itself.
Water Movement
Water flows through plants in a continuous column, driven mainly by transpiration through the stomata. The plant can control water loss primarily by closing its stomata. The cohesion of water molecules and their adhesion to the walls of the very narrow cell columns through which they pass are additional important factors in maintaining the flow of water to the tops of plants.
The movement of water, with its dissolved sucrose and other substances, in the phloem does not require energy. Sucrose is loaded into the phloem near sites of synthesis, or sources, using energy supplied by the companion cells or other nearby parenchyma cells. The sucrose is unloaded in sinks, at the places where it is required. The water potential is lowered where the sucrose is loaded into the sieve tube and raised where it is unloaded.
Nutrient Movement
Apparently most of the movement of ions into a plant takes place through the protoplast of the cells rather than between their walls. Ion passage through cell membranes seems to be active and carrier mediated, although the details are not well understood.
The initial movement of nutrients into the roots is an active process that requires energy; as a result, specific ions can be maintained within the plant at concentrations very different from those in the soil. When roots are deprived of oxygen, they lose their ability to absorb ions, a definite indication that they require energy for this process to occur successfully. A starving plant--one from which light has been excluded--will eventually exhaust its nutrient supply and be unable to replace it.
Once the ions reach the xylem, they are distributed rapidly throughout the plant, eventually reaching all metabolically active parts. Ultimately the ions are removed from the roots and relocated to other parts of the plant, their passage taking place in the xylem, where phosphorus, potassium, nitrogen, and sometimes iron may be abundant in certain seasons. The accumulation of ions by plants is an active process that usually takes place against a concentration gradient and requires the expenditure of energy.
Carbohydrates Movement
In carbohydrate movement, water moves through the phloem as a result of decreased water potential in areas of active photosynthesis, where sucrose is actively being loaded into the sieve tubes, and increased water potential in those areas where sucrose is being unloaded. Energy for the loading and unloading of the sucrose and other molecules is supplied by companion cells or other parenchyma cells. However, the movement of water and dissolved nutrients within the sieve tubes is a passive process that does not require the expenditure of energy.
Plant Nutrients
Plants require a number of inorganic nutrients. Some of these are macronutrients, which the plants need in relatively large amounts, and others are micronutrients, those required in trace amounts. There are nine macronutrients:
1. Carbon
2. Hydrogen
3. Oxygen
4. Nitrogen
5. Potassium
6. Calcium
7. Phosphorus
8. Magnesium
9. Sulfur
that approach or exceed 1% of a plant's dry weight, whereas there are seven micronutrients:
1. Iron
2. Chlorine
3. Copper
4. Manganese
5. Zinc
6. Molybdenum
7. Boron
that are present only in trace amounts.
PLANT DEVELOPMENT
Differentiation in Plants
Plants, unlike animals, are always undergoing development. Their cells do not move in relation to one another during the course of development, which is a continuous process.
Animals undergo development according to a fixed blueprint that is followed rigidly until they are mature. Plants, in contrast, develop constantly. The course of their development is mediated by hormones, which are produced as a result of interactions with the external environment.
Embryonic Development
Embryo development in animals involves extensive movements of cells in relation to one another, but the same process in plants consists of an orderly production of cells, rigidly bound by their cellulose-rich cell wall. The cells do not move in relation to one another in plant development, as they do in animal development. By the time about 40 cells have been produced in an angiosperm embryo, differentiation begins; the meristematic shoot and root apices are evident.
Germination in Plants
In the germination of seeds, the mobilization of the food reserves stored in the cotyledons and in the endosperm is critical. In the cereal grains, this process is mediated by hormones of the kind known as gibberellins, which appear to activate transcription of the loci involved in the production of amylase and other hydrolase enzymes.
REGULATION OF PLANT GROWTH
Plant Hormones
Hormones are chemical substances produced in small quantities in one part of an organism and transported to another part of the organism, where they bring about physiological responses. The tissues in which plant hormones are produced are not specialized particularly for that purpose, nor are there usually clearly defined receptor tissues or organs.
The major classes of plant hormones--auxins, cytokinins, gibberellins, ethylene, and abscisic acid--interact in complex ways to produce a mature, growing plant. Unlike the highly specific hormones of animals, plant hormones are not produced in definite organs, nor do they have definite target areas. They stimulate or inhibit growth in response to environmental cues such as light, day length, temperature, touch, and gravity and thus allow plants to respond efficiently to environmental demands by growing in specific directions, producing flowers, or displaying other responses appropriate to their survival in a particular habitat.
Tropisms
Tropisms in plants are growth responses to external stimuli. Phototropism is a response to light, gravitropism is a response to gravity, and thigmotropism is a response to touch.
Turgor Movement
Turgor movements are reversible but important elements in adaptation of plants to their environments. By means of turgor movements, leaves, flowers, and other structures of plants track light and take full advantage of it.
Dormancy
Dormancy is a necessary part of plant adaptation that allows a plant to bypass unfavorable seasons, such as winter, when the water may be frozen, or periods of drought. Dormancy also allows plants to survive in many areas where they would be unable to grow otherwise.
f:\12000 essays\sciences (985)\Biology\DNA 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
After staying on the planet Earth researching human genetic technology, I have come up with this report. The four things I am going to talk about in this report are:
1) What is the chemical basis of life on the planet Earth
2) What do humans mean by "genetic technology" and how is it possible
3) How have humans used this technology
4) Are humans concerned about this technology
1) The chemical basis of life on the planet Earth is deoxyribonucleic acid (generally shortened to DNA). It has the shape of a long twisted ladder; each rung of this ladder is made up of a pair of chemical bases, and the information that the human body needs to make proteins is coded in the order of these bases along the length of the DNA ladder. All DNA molecules consist of a linked series of units called nucleotides. Each DNA nucleotide is composed of three subunits: a special sugar called deoxyribose, a phosphate group that is joined to one end of the sugar molecule, and one of several different nitrogenous bases linked to the opposite end of the deoxyribose. DNA has two specific functions: to provide for protein synthesis, and hence the growth and development of the organism, and to furnish all descendants with the same protein-synthesizing instructions. So all living things on the planet Earth contain the genetic material DNA, and the structure of a DNA molecule or combination of DNA molecules determines the shape, form, and function of the offspring.
2) The term 'genetic technology' (or genetic engineering) means the modification of the genetic properties of an organism by the use of recombinant DNA technology. By using this technology it is possible to alter characteristics of living organisms in specific ways. The chemical language of DNA in all living things is the same, so it is possible to take a gene from one living thing and transfer it into another living thing. To give an animal a permanent genetic change, the new gene must be inserted into the single-cell embryo from which all the cells of the adult animal's body will develop. It is much more difficult to introduce DNA into plant cells, so humans put the new DNA into a microbe that normally infects the plant, such as a virus or bacterium, and let it carry the DNA into the plant cell.
3) Humans have already used 'genetic technology', and here are three examples of the living things humans have used 'genetic technology' on. The first example is environmentally friendly cotton (cotton is any of various shrubby plants grown for the soft, white, downy fibers surrounding oil-rich seeds; humans use the fibers to make cloth). On the cotton there is a kind of pest called the cotton bollworm, which eats the fibers to live. Each year humans have to spend a lot of money on pesticides to kill those worms. Now, with 'genetic technology', humans are trying to make bollworm-resistant cotton. Humans have found a bacterium that kills the bollworm; they are going to put a gene from this bacterium into the cotton so it will produce a protein that kills the bollworm, but the protein is harmless to all other living things. The second example I have got is disease-free potatoes. Potatoes often get infected by the leaf roll virus; with the new 'genetic technology' humans are able to put in a gene that normally produces the outer protein coat of the virus. It helps the potato resist the leaf roll virus, and the potato tastes the same. The last example I have got is the use of 'genetic technology' for animal health. Ticks and lice are insects that feed on animals; the animals that have ticks or lice will grow skinny and die. Now, with the help of 'genetic technology', humans are able to transfer a plant gene into the animal's body so the animal will produce a natural insecticide in its sweat glands; then whenever ticks or lice suck its blood they will die.
Although 'genetic technology' brings a lot of convenience to human society, it brings up a lot of concerns too. Questions like 'are we playing god' and 'are we interfering with nature' are being raised. Many humans worry that genetic technology will change species on the planet Earth to make them dangerous or a threat to their lives, and it does have the potential to produce dangerous organisms.
That is all the research I have done on the planet Earth. From all the information I have just presented you can see that human genetics is a very complicated thing and it is completely different from ours, but humans are a very intelligent species; they have already learned and discovered some of the basics of their genes and can perform some simple genetic changes to animals and plants. I think within the next 5 centuries they will discover all about their genes and improve their species to a new generation.
Bibliography
Burnet, L. Applied Genetics. Cambridge University Press, New York, 1988.
Ford, E. B. Understanding Genetics. Faber & Faber, London, 1979.
Hubbard, R. & Wald, E. Exploding the Gene Myth. Beacon Press, Boston.
863 WORDS
f:\12000 essays\sciences (985)\Biology\DNA The Making.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DNA: The Making
Lyle Sykes
For more than 50 years after the science of genetics was established and the patterns of inheritance through genes were clarified, the largest questions remained unanswered: How are the chromosomes and their genes copied so exactly from cell to cell, and how do they direct the structure and behavior of living things? This paper will discuss those questions and the people that answered them.
Two American geneticists, George Wells Beadle and Edward Lawrie Tatum, provided one of the first important clues in the early 1940s. Working with the fungi Neurospora and Penicillium, they found that "genes direct the formation of enzymes through the units of which they are composed." (Annas 1996) Each unit (a polypeptide) is produced by a specific gene. This work launched studies into the chemical nature of the gene and helped to establish the field of molecular genetics.
"The fact that chromosomes were almost entirely composed of two kinds of chemical substances, protein and nucleic acids, had long been known. Partly because of the close relationship established between genes and enzymes, which are proteins, protein at first seemed the fundamental substance that determined heredity." (Goetinck 1995) "In 1944, however, the Canadian bacteriologist Oswald Theodore Avery proved that deoxyribonucleic acid (DNA) performed this role. He extracted DNA from one strain of bacteria and introduced it into another strain. The second strain not only acquired characteristics of the first but passed them on to subsequent generations. By this time DNA was known to be made up of substances called nucleotides. Each nucleotide consists of a phosphate, a sugar known as deoxyribose, and any one of four nitrogen-containing bases. The four nitrogen bases are adenine (A), thymine (T), guanine (G), and cytosine (C)."(Caldwell 1996)
"In 1953, putting together the accumulated chemical knowledge, geneticists James Dewey Watson of the U.S. and Francis Harry Compton Crick of Great Britain worked out the structure of DNA. This knowledge immediately provided the means of understanding how hereditary information is copied. Watson and Crick found that the DNA molecule is composed of two long strands in the form of a double helix, somewhat resembling a long, spiral ladder. The strands, or sides of the ladder, are made up of alternating phosphate and sugar molecules. The nitrogen bases, joining in pairs, act as the rungs. Each base is attached to a sugar molecule and is linked by a hydrogen bond to a complementary base on the opposite strand." (Caldwell 1996) "Adenine always binds to thymine, and guanine always binds to cytosine." (Annas 1996) "To make a new, identical copy of the DNA molecule, the two strands need only unwind and separate at the bases (which are weakly bound); with more nucleotides available in the cell, new complementary bases ca
n link with each separated strand, and two double helixes result. Since the "backbone" of every chromosome is a single long, double-stranded molecule of DNA, the production of two identical double helixes will result in the production of two identical chromosomes." (Caldwell 1996)
"The DNA backbone is actually a great deal longer than the chromosome but is tightly coiled up within it. This packing is now known to be based on minute particles of protein known as nucleosomes, just visible under the most powerful electron microscope. The DNA is wound around each nucleosome in succession to form a beaded structure. The structure is then further folded so that the beads associate in regular coils. Thus, the DNA has a "coiled-coil" configuration, like the filament of an electric light bulb." (Popper 1996)
"After the discoveries of Watson and Crick, the question that remained was how the DNA directs the formation of proteins, compounds central to all the processes of life. Proteins are not only the major components of most cell structures, they also control virtually all the chemical reactions that occur in living matter. The ability of a protein to act as part of a structure, or as an enzyme affecting the rate of a particular chemical reaction, depends on its molecular shape. This shape, in turn, depends on its composition. Every protein is made up of one or more components called polypeptides, and each polypeptide is a chain of subunits called amino acids. Twenty different amino acids are commonly found in polypeptides." (Caldwell 1996) "The number, type, and order of amino acids in a chain ultimately determine the structure and function of the protein of which the chain is a part." (Marx 1996)
"Since proteins were shown to be products of genes, and each gene was shown to be composed of sections of DNA strands, scientists reasoned that a genetic code must exist by which the order of the four nucleotide bases in the DNA could direct the sequence of amino acids in the formation of polypeptides." (Barinaga 1995) "In other words, a process must exist by which the nucleotide bases transmit information that dictates protein synthesis. This process would explain how the genes control the forms and functions of cells, tissues, and organisms. Because only four different kinds of nucleotides occur in DNA, but 20 different kinds of amino acids occur in proteins, the genetic code could not be based on one nucleotide specifying one amino acid. Combinations of two nucleotides could only specify 16 amino acids (4² = 16), so the code must be made up of combinations of three or more successive nucleotides. The order of the triplets-or, as they came to be called, codons-could define the order of the amino acids in the
polypeptide." (Snaz 1996)
"Ten years after Watson and Crick reported the DNA structure, the genetic code was worked out and proved biologically. Its solution depended on a great deal of research involving another group of nucleic acids, the ribonucleic acids (RNA). The specification of a polypeptide by the DNA was found to take place indirectly, through an intermediate molecule known as messenger RNA (mRNA). Part of the DNA somehow uncoils from its chromosome packing, and the two strands become separated for a portion of their length. One of them serves as a template upon which the mRNA is formed (with the aid of an enzyme called RNA polymerase). The process is very similar to the formation of a complementary strand of DNA during the division of the double helix, except that RNA contains uracil (U) instead of thymine as one of its four nucleotide bases, and the uracil (which is similar to thymine) joins with the adenine in the formation of complementary pairs. Thus, a sequence adenine-guanine-adenine-thymine-cytosine (AGATC) in the cod
ing strand of the DNA produces a sequence uracil-cytosine-uracil-adenine-guanine (UCUAG) in the mRNA." (Witten 1996)
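The pairing rule in that AGATC example (A pairs with U, T with A, G with C, and C with G) can be written out as a tiny lookup. The sketch below is my own illustration, not part of the quoted source, and the function name is invented.

    # Illustrative transcription sketch: build the complementary mRNA
    # sequence from a DNA strand using the pairing A->U, T->A, G->C, C->G.
    PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

    def transcribe(dna: str) -> str:
        """Return the complementary mRNA sequence for a DNA sequence."""
        return "".join(PAIRING[base] for base in dna)

    print(transcribe("AGATC"))  # prints UCUAG, matching the example above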
"The production of a strand of messenger RNA by a particular sequence of DNA is called transcription. While the transcription is still taking place, the mRNA begins to detach from the DNA. Eventually one end of the new mRNA molecule, which is now a long, thin strand, becomes inserted into a small structure called a ribosome, in a manner much like the insertion of a thread into a bead. As the ribosome bead moves along the mRNA thread, the end of the thread may be inserted into a second ribosome, and so on." (Lemonick 1996) Using a very high-powered microscope and special staining techniques, scientists can photograph mRNA molecules with their associated ribosome beads.
"Ribosomes are made up of protein and RNA. A group of ribosomes linked by mRNA is called a polyribosome or polysome. As each ribosome passes along the mRNA molecule, it "reads" the code, that is, the sequence of nucleotide bases on the mRNA. The reading, called translation, takes place by means of a third type of RNA molecule called transfer RNA (tRNA), which is produced on another segment of the DNA. On one side of the tRNA molecule is a triplet of nucleotides. On the other side is a region to which one specific amino acid can become attached (with the aid of a specific enzyme). The triplet on each tRNA is complementary to one particular sequence of three nucleotides-the codon-on the mRNA strand. Because of this complementary, the triplet is able to "recognize" and adhere to the codon. For example, the sequence uracil-cytosine-uracil (UCU) on the strand of mRNA attracts the triplet adenine-guanine-adenine (AGA) of the tRNA. The tRNA triplet is known as the anticodon." (Witten 1995)
"As tRNA molecules move up to the strand of mRNA in the ribosome beads, each bears an amino acid. The sequence of codons on the mRNA therefore determines the order in which the amino acids are brought by the tRNA to the ribosome. In association with the ribosome, the amino acids are then chemically bonded together into a chain, forming a polypeptide. The new chain of polypeptide is released from the ribosome and folds up into a characteristic shape that is determined by the sequence of amino acids. The shape of a polypeptide and its electrical properties, which are also determined by the amino acid sequence, dictate whether it remains single or becomes joined to other polypeptides, as well as what chemical function it subsequently fulfills within the organism." (Witten 1996)
"In bacteria, viruses, and blue-green algae, the chromosome lies free in the cytoplasm, and the process of translation may start even before the process of transcription (mRNA formation) is completed. In higher organisms, however, the chromosomes are isolated in the nucleus and the ribosomes are contained only in the cytoplasm. Thus, translation of mRNA into protein can occur only after the mRNA has become detached from the DNA and has moved out of the nucleus." (O'Brien 1996)
As funding for research becomes available, scientists continue to study the DNA molecule in hopes of finding the secrets that are hidden within our own bodies. Their findings continue to aid us in the cure and prevention of many illnesses that years ago we couldn't treat. Hopefully the research will soon pay off, with the cure for cancer or Alzheimer's Disease, for instance. Only time will tell what discoveries will be made to help those who are ill. The sad thing is, most of those who are ill have very little time to spare. That is why DNA research is important now, to save the ones who are not yet in need of it.
Bibliography
Annas, George J. 1996, "Genetic Prophecy and Genetic Privacy"; SIRS 1996 Electronic Only, Article 103, January 1996, pg. 18+.
Barinaga, Marcia 1995, "Missing Alzheimer's Gene Found"; SIRS 1996 Medical Science, Electronic Only, Article 201, August 18, 1995, pg. 917-918.
Caldwell, Mark 1996, "Beyond the Lab Rat"; SIRS 1996 Medical Science, Article 69, May 1996, pg. 70-75.
Goetinck, Sue 1995, "Genetics: Gene Whiz!"; SIRS 1996 Medical Science, Article 28, October 16, 1995, pg. 6D+.
Lemonick, Michael D. 1996, "Hair Apparent"; Time, v.147, June 10, 1996, pg. 69.
Marx, Jean 1996, "A Second Breast Cancer Susceptibility Gene Is Found"; SIRS 1996 Medical Science, Electronic Only, Article 197, January 5, 1996, pg. 30-31.
O'Brien, Claire 1996, "New Tumor Suppresser Found in Pancreatic Cancer"; SIRS 1996 Medical Science, Electronic Only, Article 195, January 19, 1996, pg. 294.
Popper, Andrew 1996, "Digging for Victims of Bosnia's War"; U.S. News and World Report, v. 121, August 12, 1996, pg. 40-41.
Sanz, Cynthia 1996, "A Son's Crusade"; People Weekly, v.45, April 8, 1996, pg. 126-8+.
Witten, Mark 1995, "Solving Alzheimer's"; SIRS 1996 Medical Science, Article 30, November 1995, pg. 35+.
Witten, Mark 1996, "Cancer, Fate & Family"; SIRS 1996 Medical Science, Article 47, Jan./Feb. 1996, pg. 60-73.
f:\12000 essays\sciences (985)\Biology\DNA.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Deoxyribonucleic acid - the fingerprint of life, also known as DNA - was first mapped out in the early 1950's by the British biophysicist Francis Harry Compton Crick and the American biochemist James Dewey Watson. They determined the three-dimensional structure of DNA, the substance that passes on the genetic characteristics from one generation to the next. DNA is found in the chromosomes in the nucleus of a cell.
"Every family line has it's own unique pattern of restriction-enzyme DNA fragments. This variation in patterns of DNA fragments found in human genetic lineages is called 'restriction-fragment length polymorphism'(RFLP). (Louis Levine, ?) Because each person, except for identical twins(which have the exact same DNA), is formed from two family lines the pattern of sizes of the fragments from an individual is unique and can serve as a DNA fingerprint of that person. These 'fingerprints' have became very important in identifying criminals in a number of violent crimes where the victims aren't able to. Blood or semen stains on clothing, sperm cells found in a vaginal swab taken after a rape, or root hairs are all available for analysis. Although other body tissues such as skin cells and saliva can provide genetic information about a person for Forensic Science purposes, blood is the most useful source of inherited traits. If the DNA fingerprints produced from two different samples match, the two samples pr
obably came from the same person.
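One way to picture an RFLP comparison is as matching two lists of fragment lengths. The sketch below is a loose, invented illustration of that idea; the fragment sizes and the matching tolerance are made up for the example and are not forensic standards.

    # Loose illustration of comparing two RFLP fragment-length patterns.
    # All numbers here (sizes in base pairs, tolerance) are invented.
    def patterns_match(sample_a, sample_b, tolerance=50):
        """Return True if every fragment in one pattern has a close match."""
        if len(sample_a) != len(sample_b):
            return False
        return all(abs(a - b) <= tolerance
                   for a, b in zip(sorted(sample_a), sorted(sample_b)))

    crime_scene = [5200, 3100, 1450, 980]
    suspect = [5230, 3080, 1460, 975]
    print(patterns_match(crime_scene, suspect))  # True for this invented data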
Here are some examples of court cases where DNA plays an important role in the outcome of the trial.
----Hauppauge, N.Y.---After 11 years in prison for rape, Kerry Kotler cried tears of joy, becoming one of the first convicts in the United States to be freed by DNA technology.
At a banquet held for Kotler he received a standing ovation from the guests of his lawyers, Barry Scheck and Peter Neufeld, who would later use their DNA expertise to help free O.J. Simpson.
Now the very weapon used to free Kotler will be used against him, and instead of his lawyers praising DNA testing they will be trying to tear it down. Four years after being released from prison, Kotler was charged with another rape, and the DNA test matched him to the semen found on the victim's clothing. Posing as a police officer, he forced a 20-year-old college student off the highway and raped her. A partial license plate number and a description of the car led them to Kotler. The semen matched Kotler's blood, and the chance of the semen being somebody else's is one in 7.5 million. Also, dog hairs on the victim's clothing matched hairs from Kotler's German shepherd. Kotler, 37, is free on $25,000 bail and could get up to 50 years in jail if convicted of rape and kidnapping.
----Anamosa, Iowa---22-year-old Cathy Jo Bohlken was sexually assaulted and murdered. Genetic evidence from fluid taken from her body points to an 18-year-old named Travis Jamieson. Bohlken's body was found Dec. 26, 1993 on the floor of her duplex with a bag over her head and her hands wrapped with duct tape. An autopsy showed she died of multiple stab wounds. The search of a pick-up truck registered to Jamieson's parents revealed a utility knife and a "red-brown stain" on the steering wheel. (http://www.wcinet.com/th/news/th0208/stories/1355.htm)
----Norman, Okla.---Thomas Webb III was released after more than 13 years in prison for a 1982 rape. DNA testing was not available at the time, so Gale Webb, Thomas' wife, pushed authorities to use DNA genetic profiling on the 14-year-old evidence. These DNA tests ruled him out as a suspect. (http://www.wcinet.com/th/news/th0525/stories/12284.htm)
----Santa Ana, Calif---Kevin Lee Green cried as the judge apologized for the mistake and freed him from prison after nearly 17 years. He was convicted of killing his unborn baby and nearly beating his wife to death.
He was released as authorities prepared to charge a convicted rapist with the murder of Green's unborn child. The reversal of Green's conviction came after another man confessed and his statement was backed up by DNA technology not available in 1980. He was sent to prison on the testimony of his wife, who at first didn't remember the attack but said her memory suddenly returned as she read a baby magazine. She testified that her husband severely beat her because she refused sex. Green insisted that he left to get a cheeseburger and came home to see a man leaving in a van. Authorities now believe the attacker was Gerald Parker, 41, who owned a van at the time. He is a suspect in the "bedroom basher" serial killings in Orange County in 1978 and 1979. (http://www.wcinet.com/th/news/th0622/stories/15897.htm)
----San Francisco---Theodore Kaczynski, 54, has been jailed in Helena, Mont., since his arrest April 3rd at the mountain cabin where he spent most of his time after quitting his job at the University of California in 1969. The former math professor has been charged with possession of bomb-making material. Kaczynski, the suspected Unabomber, is blamed for 3 deaths and 23 injuries in an 18-year bombing spree that began in 1978. DNA tests of saliva found on two letters--one sent by the Unabomber and one by Kaczynski to his family--showed a genetic link. An FBI investigation found common phrases and misspellings between his writings and documents authorities say were written by the Unabomber. A search of his cabin revealed the original copy of the Unabomber's 35,000-word anti-technology manifesto, a typewriter used on the manifesto, bombs, bomb parts and detonators. (http://www.wcinet.com/th/news/th0615/stories/14962.htm)
The accuracy of DNA fingerprinting has been challenged for many reasons. One reason is that, because DNA segments rather than complete DNA strands are fingerprinted, a DNA fingerprint may not be unique; there has been no large-scale research to confirm the uniqueness of DNA fingerprints. Also, DNA fingerprinting is often done in private laboratories that may not follow uniform testing standards and quality controls, and since humans must interpret the test, human error could lead to false results. DNA fingerprinting is expensive, so suspects who are unable to provide their own DNA experts may not be able to adequately defend themselves against charges based on DNA evidence.
There are two methods which can be used to test DNA. "The older one, called 'restriction-fragment length polymorphism' (RFLP), takes up to two weeks to complete and requires a larger supply of high-quality, uncontaminated DNA. The good thing about this test is that it finds the 'random repeats'. These extra chemical units give everyone's DNA a unique pattern. The newer method is called polymerase chain reaction (PCR). This system uses an enzyme that can be directed towards regions of DNA known to contain variations. The results can be printed out in a series of blue dots. The good thing about this method is that it can be completed in a few days and it only requires a small amount of DNA, even if it has begun to degrade and deteriorate. Although PCR is faster and easier, it does have its drawbacks. The old method finds rarely repeated characteristics while PCR finds genetic features shared by many people. That means that the older method might show one person in a billion is likely to have the same DNA as a suspect, while PCR shows that the same characteristics may be shared by as many as one in a thousand." (Nichols, P58)
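A rough back-of-the-envelope calculation makes the practical difference between those two figures clearer. The population size below is an arbitrary example of mine, not a number from the essay's sources.

    # Rough arithmetic: expected coincidental matches in a population of
    # one million, using the match rates quoted above (population invented).
    population = 1_000_000
    rflp_rate = 1 / 1_000_000_000   # "one person in a billion"
    pcr_rate = 1 / 1_000            # "one in a thousand"

    print(population * rflp_rate)   # 0.001 expected coincidental matches
    print(population * pcr_rate)    # 1000.0 expected coincidental matches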
The discovery of DNA has led to tremendous advances in solving crimes, but there is still a lot to learn. The technology of DNA testing is still in its infancy, and as it develops and as lab procedures become standardized, DNA will be an even more powerful force in the courtroom.
f:\12000 essays\sciences (985)\Biology\Dolphins.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Dolphins
Dolphins are mammals closely related to whales
and porpoises. Dolphins have a powerful and streamlined
body. They are found in all seas and oceans. Dolphins can be
told apart from porpoises by their nose, which is beaklike, and
also their conical teeth. Porpoises have a flatter nose, sharper
teeth, and a more solid body.
There are 32 known species of dolphins. The bottle-
nosed dolphin is often the species used in aquatic shows. The
common dolphin inspired much Mediterranean folklore. Both
of the dolphins above appear in open waters, usually around
cruise ships. They like to show off around the boat.
There are also freshwater dolphins that live in rivers
of Asia and South America. The Buffeo dolphin has been
spotted up to 1250 miles up the Amazon River. The buffeo is the
smallest of all dolphins averaging about 4 feet. The bottlenose
is closer to 10 feet. The killer whale, which is also considered a
dolphin, can grow to be 30 feet long. The pilot whale is also
considered a dolphin.
Dolphins were once hunted by commercial boats
for the small amount of oil that can be extracted from their
body. This oil is used to lubricate small parts in watches.
Cheaper oils have been found, so dolphins are not hunted for
this reason anymore. Dolphins can be caught in tuna nets by
accident. Since dolphins have to breathe at the surface they
drown in tuna nets. It is estimated that 4.8 million dolphins were
killed in tuna nets from 1959 to 1972. Under pressure from
animal rights activists tuna consumers will not accept tuna from
canners that do not protect dolphins. Animal rights activists
also believe that dolphins shouldn't be in captivity for use in
aquatic shows.
Dolphins eat a lot of food in a day, usually about
one third of their body weight. A dolphin's diet consists of
mostly fish and squid. Dolphins can swim very fast, so they are
able to easily catch their food. The dolphin has 200 to 250
sharp teeth. Dolphins follow schools of fish in groups. The
Pacific white-sided species is estimated to travel in groups with
tens of thousands of members, while on the other hand
bottlenose dolphins travel in groups that contain only a few
members.
Dolphins, like whales, breathe through a blowhole in
the top of their head. While traveling dolphins break the surface
once every two minutes. When dolphins exhale water is
sometimes thrown from the blowhole. After exhaling the
dolphins inhale and disappear into the ocean. A dolphin's lungs
are adapted to resist the physical problems that are caused by
quick changes in pressure. With this adaptation dolphins can
dive up to 1000 feet with no problem.
A dolphin's tail, like that of all other aquatic mammals,
moves in an up and down motion. Dolphins beat their two-lobed
tail flukes to move forward. Their flippers are used to stabilize the
dolphin as they swim.
A dolphin is very streamlined and can average a
speed of up to 19 miles per hour with bursts of over 25 miles
per hour. At these speeds, dolphins can cover great distances
in a day.
The best studied species of dolphins are the bottle-
nosed. Bottle-nosed dolphins reach sexual maturity at the ages
of 5 to 12 years in females and 9 to 13 years in males. Dolphins
mate in the spring. The dolphins carry the baby, which is called
a calf, for 11 to 12 months. At this time a single calf is born,
coming out tail first. Calves can swim and breathe minutes after
birth. A calf will nurse for up to 18 months. Calves can keep up
with their mother by remaining close and taking advantage of
its mother's streamlined swimming.
Dolphins almost always emit either clicking sounds
or whistles. The clicks are short pulses of about 300 sounds
per second, which come from a mechanism located just below
a dolphin's blowhole. These clicks are used to locate objects
around a dolphin. When the sound of a click bounces off of an
object and back to the dolphin, the dolphin uses that
information to move without hitting anything. This clicking
system is similar to a bat's echolocation system. The whistles are
single-toned squeals that come from deep in the larynx. These
whistles are used to communicate alarm, sexual excitement,
and perhaps other emotions.
Because of dolphins' ability to learn and perform
complex tricks in captivity, their continuous communication
with one another, and their ability, with training, to understand a
few human words, some scientists think that dolphins could
learn a language to communicate with humans.
Most experts agree that even though a dolphin's
problem-solving ability is close to that of a primate, no evidence
has been shown that dolphins' communication skills even come
close to the complexity of a true language.
All dolphins belong to the order Cetacea. The
bottle-nosed dolphin is scientifically classified as Tursiops
truncatus. The common dolphin is classified as Delphinus
delphis, and the buffeo dolphin is classified as Sotalia fluviatilis.
The killer whale is classified as Orcinus orca and the white-
sided dolphin is classified as Lagenorhynchus obliquidens.
The Dolphins are also a football team located in
Miami. They usually play well in the regular season, but when
playoff time rolls around the team falls apart. I do not like the
Dolphins and I wish that they would withdraw from the National
Football League.
The End!!!
f:\12000 essays\sciences (985)\Biology\Down Syndrome An informative essay.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[Error] - File could not be written...
f:\12000 essays\sciences (985)\Biology\Dr develops No calorie fat substitute.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No-Cal Powder May Sub for Food's Fat
George E. Inglett of the U.S. Department of Agriculture's Biopolymer Research Unit in Peoria, Ill., invented a no-calorie fat substitute called Z-Trim. It is a mix of crushed fibers made from the hulls of grains. It can replace the fat and some of the carbohydrates in foods such as chocolates, brownies, cheese, and ground beef. He spent three years trying to perfect Z-Trim to be smooth because he made it out of the tough hulls of corn, oats, and rice. He first crushed the hulls with a solution of hydrogen peroxide. He washed the peroxide off in a centrifuge. After this step the pieces were still too large, so he put them back through the first step of the hydrogen peroxide and the centrifuge. That made it smooth. Now, it is a fine, white cellulose powder that can be made into a gel by adding water.
Inglett also developed Oatrim. This is made up of a digestible fiber from oat flour that provides four calories per gram.
Z-Trim is different from another fat substitute, olestra. Olestra can cause gastrointestinal distress and take vitamins and carotenoids out of the body. The new substitute does not have those effects. Inglett says that you should eat more of the kind of fibers that make up Z-Trim to reduce the chances of getting intestinal disorders.
But there are some people who argue with Inglett's theory on his new substitute. "I wouldn't expect Z-Trim to have the same kinds of problems as olestra," says Margo Wootan, a senior scientist at the Center for Science in the Public Interest in Washington, D.C. "Fiber is already found in our diet, while olestra is a synthetic chemical." There is also concern for the "microbial stability" of foods containing Z-Trim. "Whenever you remove the lipid material and replace it with water," says Thomas H. Parliment, a flavor chemist for Kraft Foods in White Plains, New York, "microbes are going to grow, and you can get mold." That would have to be worked out before Z-Trim could go on the market, Parliment says.
If you want to replace fat in food, Inglett says, only three safe no-calorie possibilities exist: water, air, and fiber. "You don't sell anybody air, you don't sell anybody water, but you can sell people Z-Trim."
f:\12000 essays\sciences (985)\Biology\Ebola.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
March 1, 1996
EBOLA
Imagine going on vacation to a foreign country, and when you come home you are horribly sick. Your head hurts, you have a high fever, and you start vomiting. Chances are that you may have contracted the Ebola virus. Ebola was first discovered in the village of Yambuku(1) near the Ebola River in Zaire. Since its discovery, there have been four outbreaks of this disease. There are three known strains, or variations, of Ebola. There is no known cure for this disease(2). Ebola has become one of the most mysterious and feared viruses on the face of this earth.
Ebola's first documented appearance was in Zaire in 1976. No one knows where Ebola comes from or what the original host is. However, scientists know that man is not Ebola's natural host(3). The virus was first suspected of being carried by monkeys in the African rain forests(4), but in one case the monkeys at a holding facility broke out and had to be killed.
In the pursuit of a cure and an origin, there have been several teams of scientists whose top priority is to find the virus's origin(5). The teams have trekked through the rain forests of Africa to collect different species of animals, bugs, and plant life. Bugs are also collected from the hospitals and from the surrounding huts of the villages. So far 36,000 specimens have been collected. Once they have been gathered, the specimens are put into liquid nitrogen and flown back to the United States, where they are studied at the Centers for Disease Control in Atlanta, Georgia, and the Army Medical Research Institute of Infectious Diseases at Fort Detrick, Md.(6). Researchers have discovered the source of human infection for all level four organisms except Ebola(7). This means that all the other organisms that cause deadly diseases have been contained and studied, and have had antibodies created to ward off the illnesses that they cause.
Although Ebola is a mystery to humans, the virus is relatively hard to catch and it kills quickly, lessening the chance victims will infect others. It is transmitted by contact with bodily fluids like blood, vomit and semen or contaminated syringes, and is not known to be passed along through casual contact(8).
The first outbreak of Ebola occurred in 1976, in Zaire and in Sudan at the same time. There were 318 cases reported in Zaire, and 240 of those cases proved to be fatal. In Sudan, there were 284 cases, and 134 of those cases proved to be fatal. In 1979, there was another small epidemic in the same region of Sudan. In 1989 there was a breakout in Reston, Virginia, at a monkey holding facility, that killed over 400 monkeys that had been shipped from the Philippines. This strain, however, is only lethal to monkeys and is not a threat to humans(9). In 1995, there was an outbreak in Kikwit, Zaire, that claimed 233 lives. At least 7 people survived that outbreak because of a new breakthrough that is a possible solution to the loss of lives suffered in an outbreak: blood from one surviving patient can be transfused into a person of the same blood type to possibly save that person's life. Such was the case in 1995(10). Scientists were able to find who the first person to contract the virus was in 1995. The man's name was Gaspard Menga. Menga infected his family, and his family infected others(11). Menga is known as the index patient. The reason it is so important to have the index patient is that this way they can trace the patient's movements and try to find the origin of the virus. Scientists are now arguing that if there wasn't so much interference with the rain forests, there wouldn't be new diseases emerging all the time(12).
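A quick, purely illustrative calculation shows what those 1976 figures imply about how deadly the two original outbreaks were (the case and death counts are the ones quoted above; the code itself is mine, not from the essay's sources).

    # Illustrative arithmetic: case-fatality rates implied by the 1976
    # outbreak figures quoted above (cases, deaths).
    outbreaks = {"Zaire 1976": (318, 240), "Sudan 1976": (284, 134)}
    for place, (cases, deaths) in outbreaks.items():
        print(f"{place}: {deaths / cases:.0%} of reported cases were fatal")
    # Roughly 75% in Zaire and 47% in Sudan.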
asmall village in inland Zaire. Two children were playing when thecame upon a dead chimpanzee
and they took it back to the villagewhere the villagers celebrated for the finding of such a
wonderfulthing. The reason this was so celebrated was because meat is rarein that village.
Anyone who helped clean or cook the animalbecame ill with the deadly ebola virus. The final
death count was16 people. Villagers have been warned not to eat any animals thatthey find
already dead and to be careful not to eat any sickanimals that they may encounter. Scientists
now believe that monkeys are not the original hostbecause they seem to just as susceptible to the
disease as humans.Scientists are hoping that they will make some substantialdiscoveries with this
outbreak.(13) Scientists do know that ebola is a strand of sevenproteins(14) that belongs to a
family of viruses calledfilovirusus. The virus consists of a shell of proteinssurrounding genetic
material. The virus attaches itself to a hostcell, and changes the chemicals makeup to fit its own
so that itcan reproduce(15). Ebola is a hemorrhagic virus that has a short incubationperiod of
about two days to two weeks(16). It causes high fever,chills, internal and external bleeding,
vomiting, the eyes turnred and the skin becomes blotchy and bruises appear. The surfaceveins
and arteries erode. Organs liquify and blood flows fromevery opening in the body including the
eyes and ears(17). Thisis followed by a painful death that usually occurs within threeweeks(18).
There are three known strains of The virus. Ebola Zaire,ebola Sudan, and ebola Reston. Ebola
Zaire is the most lethal ofthe three followed by ebola Sudan and then ebola Reston. EbolaReston
is the least worried about because it has not proved to behostile to humans. The question of
whether or not this virus could becomeairborne has struck fear in many. Scientists say that it
isunlikely that it will become airborne, because it is killed byultraviolet rays within seconds. The
only way that it couldsurvive is if it mutated to become resistant to ultraviolet rays. At this point,
a person is more likely to contract HIV thanit is to contract the ebola virus, although it takes ten
years toaffect a person the way ebola does in ten days. Even though ebola is a very
mysterious and feared disease, itis in the process of becoming more understood. It can destroy
anentire city in a matter of weeks, and could wipe out an entirenation if it ever became airborne,
but it is a very difficultdisease to contract so the united states is probably safe from anynear
future epidemics. On the other hand many third worldcountries could have serious problems if
there is an outbreak dueto unsanitaryliving and medical conditions. The hospitals and
medicalpersonnel reuse needles that have been infected and they don't uselatex or any other kind
of gloves which can be a cause of widespread sickness. Everyone hopes that diseases like ebola
will notget out of control before a cure can be found. Such hopes seemunreasonable due to the
facilities available in some areas of theworld.
f:\12000 essays\sciences (985)\Biology\Eczema.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ECZEMA
WHAT IS ECZEMA?
Eczema is a category of skin disease that is characterized by
inflammation, itching, dry scaly skin, and in severe cases, small fluid filled
blisters and insomnia. It is the most common skin disease in children today.
Mild cases of Eczema are a little worse than a tendency toward dry, itching skin.
Severe cases can affect the whole body, can be intensely itchy and uncomfortable,
and can even affect the person psychologically due to
self-consciousness. Eczema sufferers have acute flare-ups or relapses of their
chronic disease that can be annoying, itchy, and very uncomfortable.
HOW DO YOU GET ECZEMA?
Eczema is not a contagious skin disease, but it does affect around 1 in 10
people. Its causes aren't fully understood yet, but eczema seems to occur in
people with family or personal history of allergic asthma, rhinitis, conjunctivitis,
food allergies, ichthyosis vulgaris, and keratosis pilaris. Eczema has always
seemed to be a genetic skin disease, but until recently the researchers have
been unable to identify a specific gene involved in the passing on of eczema.
Now, doctors believe they have found a gene that causes eczema, but since it is
not present in all cases of eczema, they believe that there is more than one gene
that can cause eczema. Also, a maternal pattern of inheritance has been
discovered. Doctors and researchers believe that this maternal inheritance
pattern is due to modification in the immune responses in utero, or via breast
milk.
CAN ECZEMA BE CURED?
There is no way to absolutely "cure" eczema, although many treatments
have been found to be effective. Now, with the wonderful discovery of one of the
genes that may cause eczema, who knows what will happen. They are working
on ways to permanently rid people of eczema all the time, but knowing exactly
where the "instructions" on how to create the disease are located make finding a
cure more likely.
WHAT CAUSES ECZEMA TO FLARE UP?
There are many things that can trigger or worsen cases of eczema. The
number one cause of eczema flare-ups is emotional stress. Anger, frustration,
anxiety, family hostility, rejection, and guilt can complicate the problem of
eczema. Irritants such as soaps, solvents, and laundry detergent can provoke it
also. The only way to keep irritants from triggering eczema is to avoid them by
using substitutes like a non-soap cleaning agent. Allergens in food and the air
can also cause eczema to flare-up. Dietary management and air purifiers can
help keep allergens under control. Infections of both a viral and bacterial nature
can cause eczema to relapse. When the immune system is weak from illness,
eczema sufferers are more prone to breakouts of greater seriousness and
discomfort.
WHAT ARE SOME COMMON TYPES OF ECZEMA?
There are many forms of eczema. Some forms are not specific and
cannot be clearly categorized, although most cases fit into one of the following
categories:
1. Atopic Eczema is the most persistent kind of eczema, and is often the
hardest kind to treat. It is also the most common form of eczema, and it usually
develops in the first year of a baby's life. The most frequent sites for atopic
eczema, or atopic dermatitis as it is sometimes referred to, are the elbows, the
knees, the neck, and around the eyes. Atopic Eczema usually disappears as a
child grows older, but sometimes it does persist through adulthood.
2. Nummular Eczema is characterized by its round patches of discolored dry
skin. These patches can occur anywhere and are occasionally hard to treat,
requiring intralesional corticoids.
3. Asteatotic Eczema is the result of extremely dry skin. This form of
eczema occurs most frequently on the arms and legs.
4. Eyelid eczema is a form of eczema that occurs on the eyelids often
causing the eyelids to swell, and in extreme cases become swollen shut. Eyelid
eczema can be made worse by use of eye cosmetics, facial cosmetics, and
harsh soaps.
5. Contact Eczema is divided into two separate types, allergic and irritant.
Allergic contact eczema is caused by direct contact with a substance that causes
an allergic reaction. The trigger substances can be everyday items such as
rubber, glue, and nickel found in some costume jewelry. Irritant contact eczema
is caused by things that irritate the skin such as detergents and disinfectants.
Contact eczema is also called contact dermatitis.
6. Seborrhoeic Eczema affects the scalp and eyebrows and may spread to
other parts of the body where there is hair. Seborrhoeic eczema often affects
babies and is nicknamed cradle cap.
WHAT ARE POSSIBLE TREATMENTS FOR RELIEF OF ECZEMA'S
SYMPTOMS?
The main treatments offered to an eczema patient are emollients and
topical steroids. Other possible treatments are antihistamines and antibiotics
taken orally, but they are a rare last resort.
Emollients are mixtures of oils, fats, and water that help to restore both
the moisture and oil content to the skin. They are available in the forms of
cream, ointment, lotions, and medicinal bath oils, and are sold by prescription.
Emollients need to be used several times a day, even when the skin is
apparently free of eczema.
Topical steroids are carefully designed anti-inflammatory medicines that
are used to bring eczema under control quickly. They soothe inflammation and
itching, reduce the risk of infection, and help the skin heal. Topical steroids are
available in ointments, creams, and lotions, and are only available through
prescription.
f:\12000 essays\sciences (985)\Biology\Effect of road salt on the environment.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Salt Pollution
As awareness of pollution increases, other forms of pollution are defined. Almost everyone knows about toxic waste and carbon dioxide pollution, but not many people have heard of salt pollution. Salt pollution has been on the increase since the evolution of the automobile. With more pressure on government agencies to keep the highways clear and safe, an increase in the use of salt has developed. It is important to understand why salt is used and how it works, as well as its environmental effects, in order to understand the salt pollution problem.
Salt is a necessary and accepted part of the winter environment. It provides safety and mobility for motorists, commercial vehicles and emergency vehicles.
Salt is used as the principal deicer because it is the most available and cost-effective deicer. Rock salt is preferred because it is cheap and effective. It costs 20 dollars a ton, whereas an alternative like calcium magnesium acetate costs around 700 dollars a ton. Some 10 million tons of deicing salt are used each year in the U.S. and about 3 million tons in Canada.
Salt is used to keep snow and ice from bonding to the pavement and to allow snowplows to remove them. When salt is applied to ice and snow it creates a brine that has a lower freezing temperature than the surrounding ice or snow (a rough numerical sketch of this effect follows the list below). Salt is the ideal deicing material because it is:
•the least expensive deicer
•easy to spread
•easy to store and handle
•readily available
•non-toxic
•harmless to skin and clothing
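To put a rough number on the freezing-point effect mentioned above, the short Python sketch below uses the idealized freezing-point-depression relation (delta-Tf = i x Kf x m). It is an added illustration, not part of the original discussion: the 100 g per kg example is made up, and real road brines deviate from ideal-solution behaviour at high concentrations.

    # Idealized freezing-point-depression estimate for a road-salt brine.
    # Assumes an ideal dilute solution; real brines deviate at high concentration.
    MOLAR_MASS_NACL = 58.44   # g/mol
    KF_WATER = 1.86           # degrees C * kg / mol (cryoscopic constant of water)
    VANT_HOFF_NACL = 2        # NaCl dissociates into Na+ and Cl- ions

    def brine_freezing_point(grams_salt_per_kg_water):
        molality = grams_salt_per_kg_water / MOLAR_MASS_NACL   # mol of salt per kg of water
        depression = VANT_HOFF_NACL * KF_WATER * molality      # delta-Tf = i * Kf * m
        return 0.0 - depression                                # freezing point in degrees C

    # Example: 100 g of rock salt dissolved in 1 kg of melt water
    print(round(brine_freezing_point(100), 1))                 # roughly -6.4 degrees C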
Salt pollution is broken into two main groups: water, which includes the effects on ground water, surface water and aquatic life; and land.
Most of the salt applied to the roadways eventually ends up in the ground water. It is estimated that 30% to 50% of the salt used travels into the ground water. Salt affects two aspects of ground water: chloride concentration and sodium concentration. Chlorides may be present in the form of sodium chloride crystals or as an ion in solution. Normal concentrations in water average around 10 mg/litre. Concentrations found in ground water near major highways have been recorded as high as 250 mg/litre, which is around the threshold of taste.
The main factor with ground water pollution is the risk to human health. The raised level of sodium in water can cause high blood pressure and hypertension. For people who already suffer from these problems it is necessary to keep their salt intake relatively low; they should not drink water above 20 mg/litre. Although this is recommended, a study of private well water in Toronto showed that half the wells exceeded this limit, twenty percent exceeded 100 mg/litre and six percent exceeded 250 mg/litre. This increase in sodium and chloride can also cause problems with water balance in the human body.
As well as ground water, surface water is also affected by road salting. Although the effects are not as great as on ground water, they still pose problems to the environment. The problems are based on the salt ions. The salt ions interact with heavy metals that fall to the bottom of the body of water. An example of this is when sodium and chloride ions compete for mercury to bond with. This causes the release of mercury into the water system. The risk of mercury poisoning is far greater than that of sodium or chloride. This increase of sodium and chloride, as well as mercury and other heavy metals, also causes changes in the pH of the water.
The increase of salt around bodies of water also affects aquatic life in the area. Two main areas that are affected are osmotic regulation in fish and the death of micro-biotic life in ponds and lakes. Most fish can only tolerate a narrow range of salt content in the water. The increase of salt in the water produced by road de-icing causes freshwater fish to swell up with water. The increased salt causes a lower concentration of water in the fish's cells. To compensate, the fish's body takes in water to restore equilibrium. This can kill fish if the salt concentration becomes too high.
Just as important as fish, microorganisms are also affected in a detrimental way. Microorganisms are tiny organisms that sustain aquatic life in all bodies of water. They are more susceptible to the effects of salt pollution than fish. These microorganisms are at the bottom of the food chain; when they die, it doesn't take long for the rest of the food chain to follow. A large increase in salt concentration can cause 75% - 100% death for these microorganisms, and the effect of salt is almost immediate. Most of the organisms are only one cell big and burst on contact with increased amounts of salt.
Water insects are also affected by the increase in salt in the environment. The number of insects drops because of the inability of water insects to reproduce in the presence of high salt concentrations. With the decreasing numbers of microorganisms, insects and fish, it is easy to see the effect this would have on the rest of the food chain, even though other animals may be more salt tolerant.
Salt pollution is also a major factor on land. It can be broken up into the effects on soil, vegetation and animals.
The effect of salt on soil may seem relatively less important than the other topics mentioned so far, but it leads to more important things. The effect salt has on soil is that it alters the soil structure. Sodium chloride actually deteriorates the structure of the soil. This causes a decrease in soil fertility. In most cases calcium in the soil is replaced by sodium in a cation exchange. This makes the soil less usable by vegetation. This also occurs with magnesium. This depletion of calcium and magnesium also causes the soil to increase in alkalinity, with a pH of nearly 10. Normal pH for the soils tested was between 5.4 and 6.6.
High concentrations of sodium in the soil also make the soil less permeable. In some cases soil may be encrusted in a layer of salt. As a result, moisture content in the soil may be drastically decreased. High concentrations of salt may also cause clay to have a decreased concentration of water. This makes the clay harder, and vegetation is less likely to grow.
Although salt already affects the soil vegetation grows in, it can also directly affect vegetation itself. Vegetation can be dehydrated to the point of death when in contact with high levels of salt. This occurs because the osmotic stress put on the plant makes it react as if it were in a drought. A decrease in root production and burns to leaf tips cause the plant to go into shock.
Salt injury will also occur when plants come into contact with increased levels of salt. Salt injury is when foliage damage such as leaf burn, die-back, defoliation and brooming is present. It can also cause fruit trees to have reduced quantity and quality of fruit. This occurs when only a small amount of salt comes in contact with the plant. It only takes 0.5% of the plant's tissue dry weight to become salt before the plant reaches toxic levels. Increased chloride levels can also cause salt injury to a plant in the same way. Salt injury affects trees as well as small plant life. Growth of plants is also affected by the presence of sodium and chloride.
Animals are also greatly affected by roadway de-icing. Although animals' tolerance to salt intake is quite high, using salt for de-icing roads presents unusual dangers. Moose and deer become susceptible to salt pollution because of their attraction to salt. Deer and moose are known to drink the salty water around roads. It becomes an addiction to them and reduces their level of fear when in contact with cars and people. They have also been found licking the gravel and the side of the road, and even the road itself, in search of salt.
Small animals are affected more by the toxicity of high levels of salt. Increased levels of salt in small wildlife have caused kidney hemorrhaging, depression, excitement, tremors, incoordination, coma and death. Rabbits seem to be the most susceptible because of their inability to stop consuming salt. Household pets are also affected. Once outside, salt collects on their feet. Pets consume a lot of salt when cleaning their feet. This causes cats and dogs to get inflamed stomachs.
As one can see, the effects of roadway de-icing on the environment are tremendous. The use of salt places a great burden on both land and water. One must weigh the pros and cons of de-icing when learning about the effects of salt on the environment.
f:\12000 essays\sciences (985)\Biology\Effects of Aspergillosis nosocomial infections.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aspergillus spp. is a type of fungus that forms spores. It is normally found in soil,
water, and decaying vegetation. In the hospital environment, the spores settle in some
part of the ventilation system. Spores are also stirred up from construction and
renovation. Additional sources of the fungal spores could be contaminated or wet wood,
bird droppings in air ducts, or decaying fireproofing materials. The fungus causes
pneumonia in a host with a weak or otherwise compromised immune system. Patients at
risk are those undergoing organ transplant or bone marrow transplants, and depending on
the type of transplant, mortality rates are as high as 95%. Bone marrow transplant
patients, the highest risk group, should be treated like they are immunosuppressed for up
to four weeks after the procedure. The portal of entry is through the upper respiratory
tract. The infection then becomes systemic and is spread into multiple deep organs.
Because few reliable tests are available for diagnosing pneumonia due to Aspergillus,
clinicians often use a lung biopsy. Blood culture techniques fail because antibody
responses in immunocompromised patients give false indications of infection.
Identification of a source of the fungi is difficult, but can be determined through careful
evaluation of each additional case. A better solution is to develop protected areas for high
risk patients. These protected areas would have special air filtration systems that direct air
flow only in certain directions. The doors to these rooms would have vacuum seals and
would have a higher pressure inside than outside. The end goal of the protected areas
would be to increase air flow in the room to the point that the air becomes essentially
sterile and to maintain a clean environment. The costs of implementing a "protected area"
system may be prohibitively high.
f:\12000 essays\sciences (985)\Biology\Effects of foreign species introduction on an ecosystem.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The effects of foreign species introduction on an ecosystem
The effects of foreign species introduction into an ecosystem are very profound. From small microorganisms to species of large mammals, many foreign species introductions occur every day. New implications of their introduction are found just as often.
When a foreign species is introduced into an ecosystem, often the ecosystem contains no natural predators for the new species. This lack of predators, in conjunction with a supply of food suitable for the new species, sometimes leads to a period of exponential growth of the species. This growth and severe increase in the size of the population can cause a shortage of food for native species. When this occurs, the native species disappear and the biodiversity in the ecosystem is reduced. The carrying capacity is also reduced because the ecosystem will not be capable of supporting the same amount of life. If one species monopolizes the food and does not contribute itself to the food chain, the balance is disrupted and there will be less available for the native species. Once the new species has found its ecological niche, however, balance begins to restore itself.
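The burst of near-exponential growth and the eventual limit set by the carrying capacity can be sketched with a simple logistic-growth model. The short Python fragment below is an added illustration with made-up numbers (10 founders, 50% growth per generation, room for 10,000 individuals), not data from any real introduction.

    # A simple discrete logistic-growth model of an introduced population.
    def logistic_growth(n0, rate, carrying_capacity, generations):
        # Near-exponential growth while the population is far below the
        # carrying capacity, then leveling off as food and space run short.
        population = [n0]
        for _ in range(generations):
            n = population[-1]
            population.append(n + rate * n * (1 - n / carrying_capacity))
        return population

    # Hypothetical invader: 10 founders, 50% growth per generation, capacity 10,000
    print([round(n) for n in logistic_growth(10, 0.5, 10000, 25)])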
When the biodiversity in the ecosystem is reduced, the ability of the ecosystem to grow, or the biotic potential, is reduced as well. More species residing in an ecosystem and depending on each other allows for a greater chance of survival and perpetuation. This may occur for several reasons; consider, for example, a bee and a flower. The bee requires the pollen of the flower to make its honey. However, while gathering the pollen from the flowers, it transfers some of the pollen to female flowers, allowing them to make seeds and spawn further generations. A foreign species may, for example, eat the bees, thereby decreasing the fecundity of the flowers.
Another implication of the introduction of foreign species into an ecosystem is that the potential for toxins to be spread up the food chain is increased. For example, in ports all over the world, ships empty their ballast tanks containing large amounts of sea water, often laced with organisms not naturally found in their new region. The zebra mussel provides food for a certain type of fish, and also contains several toxins because it is a filter feeder. The level of toxins in the fish due to this biological amplification is high. But if and when a new type of fish is introduced, which eats zebra mussels and provides a more preferred food for the fish which formerly ate the mussels, a new level of biological amplification is inserted. This results in the higher levels containing more toxins than they previously did, which can lead to higher death rates and lower birth rates, which is an example of a lower biotic potential.
Finally, abiotic factors may not be prepared for the new species introduction. If, for example, a forest has a certain amount of rocks suitable for the construction of shelter by certain animals, and a new species moves in which also utilizes the same material for its shelter, the rocks will be in short supply. They are an abiotic factor without which the animals have no shelter. The animal which takes up the building supplies but does not provide back to the ecosystem will thrive; however, the rest of the ecosystem, which depends on the native animal which is harmed, will not thrive and will have a decreased biotic potential.
In conclusion, if a foreign species is introduced, the ecosystem is often not prepared to deal with the new competition. Because the predator-prey relationship is important in controlling the population, and the new species may not have any predators, the population may explode. The materials available may be compromised for the more beneficial organisms, and overall the biotic potential and carrying capacity will decrease. At least, until the new organism finds its niche and can contribute to the ecological community.
f:\12000 essays\sciences (985)\Biology\Evolution from a molecular Perspective.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction: Why globular evolution?
Evolution has been a heavily debated issue since Charles Darwin first documented the theory in 1859. However, until just recently, adaptation at a molecular level has been overlooked except by the scientific world. Now, with the help of modern technology, the protein sequences of a great many organisms have either been established or are in the process of being established, and are widely accessible via the internet. With the knowledge of these sequences, one can actually look at several organisms' genetic codes and point out the similarities. Entire genomes of creatures have been sequenced, and the human genome project is well underway and ahead of schedule. With this new knowledge, however, come worries for humans. What if the information stored in our genes were available to the public? Would insurance companies and employers base their selections on these traits? Also, with the total knowledge of every sequence of every amino acid chain in a person's genome, couldn't a laboratory conceivably reconstruct an exact copy of, or clone, that person? These are all issues that will have to be dealt with in the near future, but for now we need only concern ourselves with the objective observation of these proteins in our attempt to explain our ever mysterious origin. As humans, we are the first creatures to question exactly where we came from and how we got here. Some cling to religious creationism as an answer, while others embrace the evolutionary theory. As of now, and possibly forever, neither can be proven to be absolute truth with hard facts, and both have their opposing arguments. The point of composing this paper is not to attempt to abolish the creationist view, a feat that at this point seems impossible, but merely to educate those seeking to unravel the mystery of our origins by pointing out facts that exist in the modern world and that can be quite easily and independently researched. It is conceivable that the two ideas, creationism and evolutionism, can exist symbiotically due to the fact that both views have very good points.
Hemoglobin: Comparisons between species
Of all the proteins in living things, hemoglobin is "the second most interesting substance in the world," as American biochemist L. J. Henderson once stated (Hemoglobin, 4). However bold this statement seems, it must be realized that hemoglobin is, at least in the scientific world, by far the most studied and most discussed substance in the human body, as well as in other living organisms. Hemoglobin is the carrier in blood that transports oxygen to our tissues and carbon dioxide out of our body, changing colors as it does so. Hence, hemoglobin has long been termed the pigment of our blood. Hemoglobin was one of the first proteins to be purified to the point where its molecular weight and amino acid composition could be accurately measured. This finding was very important in that it eventually led to the understanding that a protein is a definite compound and not a colloidal mixture of polymers. Each molecule was built from exactly the same amino acid subunits connected in the same order along a chain,
and had exactly the same weight. Most organisms have their own unique, individual chain of proteins to make up their hemoglobin, but all organisms share certain similarities, so striking that they cannot be ignored. Let's take, for example, the first twenty-five amino acids in the alpha hemoglobin chains of 7 different animals: a human man, rhesus monkey, cow, platypus, chicken, carp (bony fish), and shark (cartilaginous fish) (see Table 1.1). As is shown, the variations increase the further apart the organisms are on the proposed evolutionary scale. A human man differs from a rhesus monkey only twice in the first twenty-five amino acids of their alpha hemoglobin chains, whereas a man and a cow differ in three areas. This is the product of many thousands of years of natural fine tuning, if you will, through the slow but precise processes of natural selection and adaptation. The fact of natural selection shows us that while most genetic mutations usually prove fatal, a slim few are actually beneficial, and assist the mutant in living and procreating offspring. This assistance helps the mutant gene's frequency grow in the gene pool and remain there, since all progeny possessing this certain trait are going to have an advantage over the other organisms lacking this quality. This is the basis for evolution. The higher a certain species is on the evolutionary scale, the more advanced that organism is due to a slight change in the amino acid sequences of certain genes. An example would be that of the human man, the rhesus monkey, and the cow. There is a smaller difference in the amino acid sequences between a man and a monkey than between a man and a cow, and, respectively, a monkey is more advanced than a cow, genetically (monkeys and humans have far advanced opposable thumbs). Also, the positions where the amino acids have been conserved between all the studied organisms, such as columns 27, 31, and 39, indicate that in order for the species to survive, that certain amino acid must be there. If it is changed
in any way, the organism can not survive. There are thirty-four conserved positions in the first 141 amino acids in the seven studied organisms. After just these few demonstrations, how could anyone doubt the theory of evolution? This question leads me into a short interlude where I will discuss the arguments on both sides, and show just how endless this debate could be.
Evolution -vs- Creation: Which Is Truth?
When evolution is mentioned to many people, the first thing that enters their mind is the completely incorrect thought that man evolved from monkeys. Man did not, in fact, evolve from monkeys, this is a known and agreed upon fact. The only connection between modern day men and modern day chimpanzees, for example, is the fact that they must have shared a common ancestor. The "common ancestor" theory, as I have chosen to name it, states that all life living, or ever to have lived on this planet can be traced back to a single, common ancestor. At some point in time, between 3.5 and 4.1 billion years ago, a certain grouping of chemicals came together at just the right time and life began. From this single life-form, the slow process of natural selection began. First came the proteinoid microsphere, the first organisms on the planet to carry on all life functions. Eventually, then, came viruses, parasites, saprophytes, holotrophs, chemosynthesizers, and photosynthesizers, all mutants of the very first cell.
Some have tried to use thermodynamics to disprove evolution, especially the second law. The second law of thermodynamics states that "all energy transfers or transformations make the universe more disordered." These critics claim that since man is more advanced than any other creature, we are more ordered. This is wrong. Man is more advanced due to the mutations in his genes. Compared to the very first life-form's genes, a human man's amino acid sequences are very dissimilar, or more disordered. Also, the first law of thermodynamics can be used for either argument. The first law of thermodynamics states that energy cannot be created or destroyed--in other words it has always been here. Using this law, the matter in the universe can either be thought of as always having been here, or one can think that the creator, with his infinite power, simply transformed the energy that he possessed into the matter of the universe. Both sides have an arguable point that agrees with the laws of thermodynamics. Another arguable
point worthy of mention concerns the discrepancies in the fossil record. The Earth's crust, and all the fossils contained therein, can also be utilized as arguments for both sides. The "Pre-Cambrian Void" (Creation-Evolution: The Controversy, 362) shows very little sign of fossilization. Then, suddenly, massive amounts of fossilization can be found during the Cambrian times, pointing to some sort of catastrophe, like a flood. The Bible mentions a flood sent by God to destroy every living thing on Earth. The fact that a flood could have happened, in that sense, strengthens the creationists' views. The evolutionist theory can use these facts in two ways. One, when the selection pressure on a species is constant for a long time, a species could become so specialized that any slight change in its environment could lead to extinction; this is called a climax group. Around the time that the large amounts of fossilization were occurring, the Earth had cooled down enough to allow the immensely dense atmosphere to condense, thus causing many years of rain. Would not this rain cause almost any climax group's entire population to become extinct? Also, before the rains came, the great majority of the organisms inhabiting the Earth were land creatures. Once the rains came, the Earth was covered in water, killing thousands of populations, and effectively burying them in the water. The water preserved their parts for fossilization. These have been just a few double-sided arguments demonstrating that either side can turn any facts around to fit their own hypothesis.
Leghemoglobin, Protein Relations Between Species, and the Evolution of the Globin Family
Like animals, plants also carry a sort of hemoglobin, leghemoglobin. Leghemoglobin is a globin which is less evolved than hemoglobin or myoglobin. The whole globin family, itself, has undergone much evolution and mutation. At one time, animals had no globin at all. As life evolved, a single-chain oxygen-binding substance formed--we will call this the basic globin. Then life branched into two parts: animals carrying the basic globin, such as annelid worms, insects, and mollusks, and creatures (mainly plants) carrying leghemoglobin, a mutation of the basic globin. The animal kingdom's globin eventually split into myoglobin (Mb) and hemoglobin (Hb). Since then, myoglobin has basically stayed the same in many organisms (see Table 1.3). Hemoglobin, on the other hand, has undergone some major mutations. After the basic globin bifurcated into Mb and Hb, Hb split into alpha (a) and beta (b) chains. The a-chain eventually split into two parts, and has remained this way up to present times. The b-chain split into many more parts. Everything that has been said up until now about the evolution of the globins from a common single-chained oxygen-binding ancestor is summarized in Table 2. If one compares sequences of globin between species, one notices that the fewer amino acids that are different, the more closely related the two species are. If we used this theory on the vertebrates that were studied, it would give us a "schematic family tree of globin containing vertebrates" (Hemoglobin, 78) (see Table 3). This same tree is obtained by comparing sequences of myoglobin, or the a or b chains of hemoglobin. This tree tells us that all organisms alive today are just as evolved as any other living organism. Different species evolve in different ways; that is the basis of evolution. Man is just as evolved as a chimpanzee, or a carp, or a rose bush. Different organisms simply evolved differently. Another excellent way of showing the relationships between organisms is the mean amino acid difference. The more amino acids that are different between two species, the further apart they are genetically. For instance, over the entire b-chain of human and rhesus monkey hemoglobin, there are, on average, eight places where the amino acids are different. However, when comparing b-chains of man and platypus, there are thirty-four average differences. A chart and a graph can help us better understand these points (see Table 4). The amino acids that have changed are a result of mutated DNA that has proven beneficial to the carrier mutant. This process, as stated before, is the basis of evolution.
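To make this kind of comparison concrete, the short Python sketch below (an illustration added here, not part of the cited sources) counts the positions at which two aligned globin fragments differ, using the first twenty-five alpha-chain residues from Table 1.1; gap characters ('-') would simply be skipped.

    # Count the aligned positions at which two globin fragments differ.
    # Sequences are the first 25 alpha-chain residues from Table 1.1; '-' marks a gap.
    def count_differences(seq_a, seq_b):
        return sum(1 for a, b in zip(seq_a, seq_b)
                   if a != b and a != '-' and b != '-')

    human  = "VLSPADKTNVKAAWGKVGAHAGEYG"
    monkey = "VLSPADKSNVKAAWGKVGGHAGEYG"
    cow    = "VLSAADKGNVKAAWGKVGGHAAEYG"

    print(count_differences(human, monkey))  # 2 differences, as noted in the text
    print(count_differences(human, cow))     # count for the human/cow pair over the same stretch

Applied to whole chains, the same count gives the mean amino acid differences discussed above.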
Speaking solely of hemoglobin, the variances between species can be shown through greater or lesser affinity for oxygen. "H. F. Bunn has shown that mammalian hemoglobin can be divided broadly into two groups: the great majority have intrinsically high oxygen affinity, which is lowered in the red cell by DPG," (D-2,3-biphosphoglycerate) "while those of ruminants and cats (Cervidae, Bovidae, Felidae) and of one primate, the lemur, have an intrinsically low oxygen affinity that is little, or not at all, lowered by DPG" ("Species Adaptation in a Protein Molecule", 16). DPG is one of the ligands that "reduces the oxygen affinity of hemoglobin in a physiologically advantageous manner by combining preferentially with the T structure" ("Species Adaptation in a Protein Molecule", 3). For instance, the mole (Talpa europaea) lives in its burrows under conditions lacking a rich oxygen supply. This creature's hemoglobin has adapted to having a high oxygen affinity, a high concentration per unit volume of blood, and a low body temperature. This high affinity is due to the mole's hemoglobin's low affinity for DPG. So as you can see, DPG acts as a type of buffer. The more DPG the creature's hemoglobin can hold, the less space it has for oxygen. Since the environment has low amounts of oxygen, the blood needs to hold as much oxygen as possible, so the mole has adapted.
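As a rough way of seeing what a higher or lower oxygen affinity means in practice, the sketch below evaluates the Hill equation for oxygen saturation. It is an added illustration: the P50 values are round numbers chosen for the example, not figures taken from Bunn or Perutz.

    # Fractional oxygen saturation from the Hill equation.
    def hill_saturation(po2, p50, n=2.8):
        return po2**n / (p50**n + po2**n)

    # Illustrative P50 values in torr; a lower P50 means a higher oxygen affinity,
    # and bound DPG raises P50 (lowers affinity).
    for label, p50 in [("high-affinity hemoglobin (little DPG bound)", 19.0),
                       ("typical hemoglobin with DPG", 26.0)]:
        print(label, round(hill_saturation(40.0, p50), 2))   # saturation at a pO2 of 40 torr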
Which Came First?
One final point that should be mentioned is the question of which change came first. Did a mutation that adapted a species to a new environment take place before the species occupied that environment, or did the genetic change occur after the environment changed, in order to assist the creature with living in its new surroundings? "W. Bodmer suggested that once a large change in chemical affinities produced by one mutation had enabled a species to occupy a new environment, its effect might have been refined by later adaptive mutations, each contributing minor shifts, over a long period of time." ("Species Adaptation in a Protein Molecule", 22.) For example, did a llama's hemoglobin adapt to a higher grazing altitude by increasing its oxygen affinity, or did the oxygen affinity increase and the llama then realize that it could graze higher than some other animals? This could show the "punctuated equilibria" (Biology, 296) in the evolution of a species.
What Does It All Mean?
After seeing all of these demonstrations of adaptation at a molecular level, you may ask what it adds to the betterment of the world. The truth is, merely knowledge. It is doubtful that the evolution-creation controversy will ever be settled, but without interest, research, and work by people in all corners of the debate, be it theological, or scientific, the answer will never be discovered. It is quite possible that neither hypothesis is correct--perhaps the truth lies in a combination of the two, or something completely different. I believe that the truth, at least a partial truth, can be found somewhere at the molecular level. If the genes, and amino acid sequences are examined, I believe that the actual evolutionary time table can be reconstructed. The human species, however, must be the last "stem" on this branch of the evolutionary tree, due to our personal views of mutations. We all see mutations as negative, when some may actually be positive. If a child is born with twelve fingers instead of
ten, two are surgically removed, and the child becomes less attractive to the opposite sex, and may not get his mutated genes back into the gene pool. This process has almost always worked in the opposite way in every species up until now--the mutant with the beneficial, but different, genotype and (perhaps) phenotype has had an advantage that makes him more attractive to the opposite sex, and his genes are passed on to his offspring. One of the only mutations that could, and has, gone unnoticed is the expansion of the control of the mind. Over the hundreds of years of human existence, especially in the past few decades, the knowledge of the modern man has expanded dramatically, and now, with the ease of the internet, anyone can learn about anything imaginable. People are tired of mind-numbing thoughtless hours spent in front of the television, and are now expanding their minds in their free time. I can only hope that this paper has inspired some thought about the subject, and has brought us a small step
closer to the conclusion of the debate.
Works Utilized
Dickerson, Richard E. and Geis, Irving. Hemoglobin: Structure, Function, and Pathology.
California: The Benjamin/Cummings Publishing Company Inc., 1983.
Perutz, Max. "Species Adaptation in a Protein Molecule", Molecular Biology and Evolution Chicago: University of Chicago, 1983
Mammrack, Mark. Biology 112 Lecture Notes. Ohio: Wright State, 1996
Wysong, Randy. The Creation-Evolution Controversy. Michigan: Inquiry Press, 1978
Lasker, Gabriel. Human Evolution. New York: Holt, Rinehart, and Winston, Inc., 1963
Campbell, Neil. Biology Second Edition. California: The Benjamin/Cummings Publishing Company Inc., 1990
Solomon, Eldra; Berg, Linda; Martin, Diana; Villee, Claude. Biology Fourth Edition. New York: Saunders College Publishing, 1996
Genbank. National Center for Biotechnology Information, 1996
Available www: http://www2.ncbi.nlm.nih.gov/cgi-bin/genbank
--Tables-- (Tables 2-4 were scanned from the Hemoglobin book listed in the works utilized and are not reproduced here.)
Table 1.1 Sequence comparisons of globin (information gathered from Hemoglobin and from "Genbank")
1 25 50 67
ALPHA HEMOGLOBIN CHAIN
1. VLSPADKTNVKAAWGKVGAHAGEYG--AEALERMFLSFPTTKTYFPHF-DLSH--GSAQVKGHGKKVA-DALT
2. VLSPADKSNVKAAWGKVGGHAGEYG--AEALERMFLSFPTTKTYFPHF-DLSH--GSAQVKGHGKKVA-DALT
3. VLSAADKGNVKAAWGKVGGHAAEYG--AEALERMFLSFPTTKTYFPHF-DLSH--GSAQVKGHGAKVA-AALT
4. MLTDAEKKEVTALWGKAAGHGEEYG--AEALERLFQAFPTTKTYFSHF-DLSH--GSAQIKAHGKKVA-DALS
5. VLSNADKNNVKGIFTKIAGHAEEYG--AETLERMFIGFPTTKTYFPHF-DLSH--GSAQIKGHGKKVA-LAIT
6. SLSDKDKAAVKIAWAKISPKADDIG--AEALGRMLTVYPQTKTYFAHWADLSP--GSGPVK-HGKKVIMGAVG
7. DYSAADRAELAALSKVLAQNAEAFG--AEALARMFTVYAATKSYFKDYKDFTA--AAPSIKAHGAKVV-TALA
1. Human Man 2. Rhesus Monkey 3. Cow 4. Platypus 5. Chicken 6. Carp 7. Shark
Table 1.2 Sequence comparisons of globin (information gathered from Hemoglobin and from "Genbank")
68 75 100 125 141
ALPHA HEMOGLOBIN CHAIN (Part two)
1. NAVAHVDD--MPNALSALSNLHAHKLRVDPVNFKL--LSHCLLVTLAAHLPAEFTPAVHASL--DKFLASVSTVLTSKYR
2. LAVGHVDD--MPNALSALSDLHAHKLRVDPVNFKL--LSHCLLVTLAAHLPAEFTPAVHASL--DKFLASVSTVLTSKYR
3. KAVEHLDD--LPGALSELSDLHAHKLRVDPVNFKL--LSHSLLVTLASHLPSDFTPAVHASL--DKFLANVSTVLTSKYR
4. TAAGHFDD--MDSALSALSDLHAHKLRVDPVNFKL--LAHCILVVLARHCPGEFTPSAHAAM--DKFLSKVATVLTSKYR
5. NAIEHADD--ISGALSKLSDLHAHKLRVDPVNFKL--LGQCFLVVLVAHLPAELAPKVHASL--DKFLCAVGTVLTAKYR
6. DAVSKIDD--LVGGLASLSELHASKLRVDPANFKI--LANHIVVGIMFYLPGDFPPEVHMSV--DKFFQNLALALSEKYR
7. KACDHLDD--LKTHLHKLATFHGSELKVDPANFQY--LSYCLEVALAVHL-TEFSPETHCAL--DKFLTNVCHELSSRYR
1. Human Man 2. Rhesus Monkey 3. Cow 4. Platypus 5. Chicken 6. Carp 7. Shark
Table 1.3 Sequence comparisons of globin (information gathered from Hemoglobin and from "Genbank")
1 25 50 75 80
MYOGLOBIN
1. GLSDGEWQLVLNVWGKVEADIPGHG--QEVLIPLFKGHPETLEKFDKFKHLK--SEDEMKASEDLKKHGATVLTALGGI--LKKKG
2. GLSDGEWQAVLNAWGKVEADVAGHG--QEVLIRLFTGHPETLEKFDKFKHLK--TEAEMKASEDLKKHGNTVLTALGGI--LKKKG
3. VLSEGEWQLVLHVWAKVEADVAGHG--QDILIRLFKSHPETLEKFDRFKHLK--TEAEMKASEDLKKHGVTVLTALGAI--LKKKG
4. GLSDGEWQLVLKVWGKVEGDLPGHG--QEVLIRLFKTHPETLEKFDKFKGLK--TEDEMKASADLKKHGGTVLTALGNI--LKKKG
5. GLSDQEWQQVLTIWGKVEADIAGHG--HEVLMRLFHDHPETLDRFDKFKGLK--TEPDMKGSEDLKKHGQTVLTALGAQ--LKKKG
6. ----TEWEHVNKVWAVVEPDIPAVG--LAILLRLFKEHKETKDLFPKFKEI---PVQQLGNNEDLRKHGVTVLRALGNI--LKQKG
1. Human Man 2. Cow 3. Sperm Whale 4. Platypus 5. Chicken 6. Shark
Table 1.3 Sequence comparisons of globin (information gathered from Hemoglobin and from "Genbank")
1 25 50 75 80
MYOGLOBIN (part two)
1. HHEAEIKPLAQSHATKHKIP--VKYLEFISECIIQVLQSKHPGDFGA--DAQGAMNKALELFRKDMASNYKELG--FQG
2. HHEAEVKHLAESHANKHKVP--IKYLEFISDAIIHVLHAKHPSNFAA--DAQGAMNKALELFRKDMASNYKELG--FQG
3. HHEAELKPLAQSHATKHKIP--IKYLEFISEAIIKVLHSRHPGDFGA--DAQGAMNKALELFRKDIAAKYKELG--YQG
4. QHEAELKPLAQSHATKHKIS--IKFLEYISEAIIHVLQSKHSADFGA--DAQAAMGKALELFRNDMAAKYKEFG--FQG
5. HHEADLKPLAQTHATKHKIP--VKYLEFISEVIIKVIAEKHAADFGA--DSQAAMKKALELFRDDMASKYKEFG--FQG
6. KHSTNVKELADTHINKHKIP--PKNFVLITNIAVKVLTEMYPSDMIG--PMQESFSKVFTVICSDLETLYKEAD--FQG
1. Human Man 2. Cow 3. Sperm Whale 4. Platypus 5. Chicken 6. Shark
f:\12000 essays\sciences (985)\Biology\Evolution of Immunity in Invertebrates.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Article Summery"
Name: "Immunity and the Invertebrates"
Periodical: Scientific American Nov, 1996
Author: Gregory Beck and Gail S. Habicht
Pages: 60 - 71
Total Pages Read: 9
The complex immune systems of humans and other mammals evolved over quite a long time - in some rather surprising ways. In 1882 a Russian zoologist named Elie Metchnikoff noticed a unique property of starfish larvae. When he inserted a foreign object through a larva's membrane, tiny cells would try to ingest the invader through the process of phagocytosis. It was already known that phagocytosis occurred in specialized mammal cells but never in something less complex like a starfish. This discovery led him to understand that phagocytosis played a much broader role: it was a fundamental mechanism of protection in the animal kingdom. Metchnikoff's further studies showed that the host defense systems of all animals today were present millions of years ago, when they were just beginning to evolve. His studies opened up the new field of comparative immunology. Comparative immunologists studied the immune defenses of past and current creatures. They gained further insight into how immunity works.
The most basic requirement of an immune system is to distinguish between one's own cells and "non-self" cells. The second job is to eliminate the non-self cells. When a foreign object enters the body, several things happen: blood stops flowing, and the immune system begins to eliminate unwanted microbes with phagocytic white blood cells. This defensive mechanism is possessed by all animals with an innate system of immunity. Innate cellular immunity is believed to be the earliest form of immunity. Another form of innate immunity is complement, composed of 30 different proteins of the blood.
If these mechanisms do not work to defeat an invader, vertebrates rely on another response: acquired immunity. Acquired immunity is mainly carried out by specialized white blood cells called lymphocytes. Lymphocytes travel throughout the blood and lymph glands waiting to attack molecules called antigens. Lymphocytes come in two classes: B and T. B lymphocytes release antibodies, while T lymphocytes help produce antibodies and serve to recognize antigens. Acquired immunity is highly effective but takes days to activate and succeed because of its complex nature. Despite this, acquired immunity offers one great feature: immunological memory. Immunological memory allows the lymphocytes to recognize previously encountered antigens, making reaction time faster. For this reason, we give immunizations or booster shots to children.
So it has been established that current vertebrates have two defense mechanisms, innate and acquired, but what of older organisms? Both mechanisms, surprisingly enough, can be found in almost all organisms (specifically phagocytosis). The relative similarities in invertebrate and vertebrate immune systems seem to suggest they had common precursors. The oldest forms of life, protozoans, carry out these two immune functions in just one cell. Protozoan phagocytosis is not unlike that of human phagocytic cells. Another basic function of immunity, distinguishing self from non-self, is found in protozoans that live in large colonies and must be able to recognize each other. In the case of metazoans, sponges, the oldest and simplest, are able to do this as well, refusing grafts from other sponges. This process of refusing is not the same in vertebrates and invertebrates, though. Because vertebrates have acquired immunologic memory, they are able to reject things faster than invertebrates, which must constantly "re-learn" what is and is not self. Complement and lymphocytes are also missing from invertebrates, which instead offer an alternative yet similar response. In certain invertebrate phyla a response called the prophenoloxidase (proPO) system occurs. Like the complement system it is activated by enzymes. The proPO system has also been linked to blood coagulation and the killing of invading microbes.
Invertebrates also have no lymphocytes, but have a system which suggests itself to be a precursor of the lymph system. For instance, invertebrates have molecules which behave similarly to antibodies found in vertebrates. These lectin molecules bind to sugar molecules, causing them to clump onto invading objects. Lectins have been found in plants, bacteria, and vertebrates as well as invertebrates, which seems to suggest they entered the evolutionary process early on. This same process occurs in human innate immune systems with collections of proteins called collectins, which cover microbes in a thin membrane to make them easier to distinguish by phagocytes. And although antibodies are not found in invertebrates, a similar and related molecule is. Antibodies are members of a superfamily called the immunoglobulins, which is characterized by a structure called the Ig fold. It is believed that the Ig fold developed during the evolution of metazoan animals when it became important to distinguish different types of cells within one animal. Immunoglobulins such as hemolin have been found in moths, grasshoppers, and flies, as well as lower vertebrates. This suggests that antibody-based defense systems, although only active in vertebrates, found their roots in the invertebrate immune system.
Evolution seems to have also conserved many of the control signals for these defense mechanisms. Work is currently being done to isolate invertebrate molecules similar to the cytokines of vertebrates. Cytokines are proteins that either stimulate or block out other cells of the immune system as well as affecting other organs. These proteins are critical for the regulation of vertebrate immunity. It is suspected that invertebrates will share common cytokines with vertebrates or at least a close replication. Proteins removed from starfish have been found to have the same physical, chemical, and biological properties of interleukins (IL-1, IL-6), a common cytokine of vertebrates. This research has gone far enough to conclude that invertebrates possess similar molecules to the three major vertebrate cytokines. In the starfish, cells called coelomocytes were found to produce IL-1. The IL-1 stimulated these cells to engulf and destroy invaders. It is thus believed that invertebrate cytokines regulate much of their
host's defense response, much like the cytokines of vertebrate animals in innate immunity.
Comparative immunology has also found defense mechanisms first in invertebrates that were only later discovered in vertebrates. Invertebrates use key defensive molecules such as antibacterial peptides and proteins, namely lysozyme, to attack bacterial cell walls, thus targeting the invader. This offers great potential for medicinal purposes, because lysozyme is also found in the innate immunity of humans in its defense of the oral cavity against bacteria. Peptides of the silk moth are currently being developed as antibacterial molecules for use in humans. Two peptides found in the skin of the African clawed frog actively fight bacteria, fungi, and protozoa. Antibodies which bind to these two peptides also bind to the skin and intestinal lining of humans.
The potential of these peptide antibiotics only now being discovered is a rather considerable thing to ponder. For that reason it is surprising that such little attention has been paid to invertebrate immune responses. In the end, the complexity of vertebrate immune systems can only be understood by studying the less complex systems of invertebrates. Further studies look to explain immunity evolution as well as aid in the solving of problems of human health.
f:\12000 essays\sciences (985)\Biology\Evolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Essay on Evolution
There are many mechanisms that lead to evolutionary change. One of the
most important mechanisms in evolution is natural selection, which is the
differential success in the reproduction of different phenotypes resulting from the
interaction of organisms with their environment. Natural selection occurs when an
environment makes an individual adapt to that certain environment through variations
that arise by mutation and genetic recombination. It also favors certain traits in an
individual over other traits, so that these favored traits will be represented in the
next generation. Another mechanism of evolution is genetic drift. Genetic drift is
a random change in a small gene pool due to sampling errors in propagation of
alleles or chance. Genetic drift depends greatly on the size of the gene pool. The
larger the gene pool, the better it will represent the gene pool of the previous
generation. If it is small, its gene pool may not be accurately represented in the
next generation due to sampling error. Genetic drift usually occurs in small
populations that contain less than 100 individuals, but in large populations drift
may have no significant effect on the population. Another mechanism is gene
flow, which occurs when a population gains or loses alleles through the migration of
fertile individuals between populations. This may cause the allele frequencies in
a gene pool to change and allow the organism to evolve. The most obvious
mechanism would have to be mutation that arises in the gene pool of a
population or individual. It is also the original source of the genetic variation that
serves as raw material for natural selection.
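As a small illustration of the genetic drift described above (an added sketch, with arbitrary population sizes), the Python fragment below resamples an allele's frequency each generation. In a small gene pool the frequency wanders widely and the allele is often fixed or lost by chance, while in a large gene pool it stays close to its starting value.

    import random

    def drift(pop_size, start_freq=0.5, generations=100, seed=1):
        # Each generation is a random sample of 2*pop_size gene copies drawn
        # from the previous generation, so the frequency changes by chance alone.
        random.seed(seed)
        freq = start_freq
        for _ in range(generations):
            copies = 2 * pop_size
            freq = sum(random.random() < freq for _ in range(copies)) / copies
            if freq in (0.0, 1.0):   # the allele has been lost or fixed by chance
                break
        return freq

    print(drift(pop_size=50))     # small gene pool: the frequency wanders far from 0.5
    print(drift(pop_size=5000))   # large gene pool: the frequency stays close to 0.5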
Not only are there mechanisms of evolution, but there is also evidence to
prove that these mechanisms are valid and have helped create the genetic
variety of species that exists today. Antibiotic resistance in bacteria is one
example of evolutionary evidence. In the 1950's, Japanese physicians realized
that an antibiotic given to patients who had an infection that caused severe
diarrhea was no longer working. Many years later, scientists found out that a
certain strain of bacteria called Shigella contained the specific gene that
conferred antibiotic resistance. Some bacteria had genes that coded for
enzymes that specifically destroyed certain antibiotics such as ampicillin. From
this incident, scientists were able to deduce that natural selection helped the
bacteria to inherit the genes for antibiotic resistance.
Scientists have also been able to use biochemistry as a source of
evidence. The comparison of genes of two species is the most direct measure of
common inheritance from shared ancestors. Using DNA-DNA hybridization,
whole genomes can be compared by measuring the extent of hydrogen bonding
between single-stranded DNA obtained from two sources. The similarity of the
two genes can be seen by how tightly the DNA of one species bonds to the DNA
of the other species. Many taxonomic debates have been answered using this
method such as whether flamingos are more closely related to storks or geese.
This method showed the DNA of the flamingo to be more closely related to the
DNA of the stork than to that of the goose. The only disadvantage of this method is that it
does not give precise information about the matchup in specific nucleotide
sequences of the DNA which restriction mapping does. This technique uses
restriction enzymes that recognize a specific sequence of a few nucleotides and
cleaves DNA wherever such sequences are found in the genome. Then the DNA
fragments are separated by electrophoresis and compared to the other DNA
fragments of the other species. This technique has been used to compare
mtDNA from people of several different ethnicities to find out that the human
species originated from Africa. The most precise and powerful method for
comparing DNA from two species is DNA sequencing which determines the
nucleotide sequences of entire DNA segments that have been cloned by
recombinant DNA techniques. This type of comparison tells us exactly how much
divergence there has been in the evolution of two genes derived from the same
ancestral gene. In 1990, a team of researchers used PCR (polymerase chain
reaction), a new technique, to compare a short piece of ancient DNA to
homologous DNA from a certain plant. Scientists have also compared the
proteins between different species such as in bats and dolphins.
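The idea behind the restriction-enzyme comparison mentioned above can be sketched in a few lines of Python. This is an added illustration: GAATTC is the recognition site of the real enzyme EcoRI, but the example DNA string is made up, and for simplicity the cut is placed at the start of each site rather than at the enzyme's true cut position.

    # Cut a DNA string wherever a recognition site occurs and report fragment lengths.
    def restriction_fragments(dna, site="GAATTC"):
        cut_positions, start = [], 0
        while True:
            hit = dna.find(site, start)
            if hit == -1:
                break
            cut_positions.append(hit)
            start = hit + 1
        fragments, previous = [], 0
        for pos in cut_positions:
            fragments.append(pos - previous)
            previous = pos
        fragments.append(len(dna) - previous)
        return fragments

    example = "ATCGGAATTCGGCTTAAGGAATTCCGTA"   # made-up sequence with two EcoRI sites
    print(restriction_fragments(example))      # fragment lengths produced by the cuts

Comparing the fragment-length patterns produced from the DNA of two species is the essence of the restriction-mapping comparison described above.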
The oldest type of evidence has been the fossil record which are the
historical documents of biology. They are preserved remnants found in
sedimentary rocks and are preserved by a process called petrification. To
compare fossils the ages must be determined first by relative dating. Fossils are
preserved in strata, rock formed in layers during different periods of
sedimentation, which occur in intervals when the sea level changes. Since each
fossil lies in a different period of sedimentation, it is possible to estimate the age of the
fossil. Geologists have also established a time scale with a consistent sequence
of geological periods. These periods are: the Precambrian, Paleozoic, Mesozoic
and the Cenozoic eras. With this time scale, geologists have been able to
deduce which fossils belong in which time period and determine if a certain species
evolved from another species. Radioactive dating is the best method for
determining the age of rocks and fossils on a scale of absolute time. All fossils
contain isotopes of elements that accumulated in the organisms when they were
alive. By determining an isotope's half-life, which is the number of years it takes
for 50% of the original sample to decay, it is possible to determine the fossil's
age.
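The half-life idea can be turned into a direct age estimate. The sketch below assumes a simple case in which the remaining fraction of the parent isotope is known; the isotope, its half-life value, and the measured fraction are placeholders for illustration, not figures from the essay.

import math

# Sketch of radiometric dating: if a fraction `remaining` of the original
# parent isotope is left, the elapsed time is
#     t = half_life * log2(1 / remaining)
# Half-life and fraction below are illustrative placeholders
# (potassium-40 has a half-life of roughly 1.25 billion years).

def age_from_fraction(remaining, half_life_years):
    return half_life_years * math.log2(1.0 / remaining)

print(f"{age_from_fraction(0.25, 1.25e9):.2e} years")  # 25% left -> about two half-lives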
f:\12000 essays\sciences (985)\Biology\Excretion and Elimination of .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Excretion and Elimination of
Toxicants and their Metabolites
The first topic that was covered by this chapter was the excretion of wastes by the renal system. The first step that occurs in the kidney deals with the nephron, which is the functional unit of the kidney. In the glomerulus the formation of urine begins with the passive filtration of plasma through the pores that are found in the glomerulus. The plasma is forced through these pores by hydrostatic pressure. The main thing that determines whether a molecule will pass through the pores of the glomerulus is its molecular weight. The lower the molecular weight, the easier it will pass through the pores. Another determining factor is whether a molecule is bound to a larger molecule; if it is, passage through the pores will be hindered by the size of the larger molecule.
The many ions, minerals and other nutrients that escaped in the glomerular filtrate will need to be recovered. Reabsorption begins in the tubules of the nephron. Anywhere from 65% to 90% of reabsorption occurs in these structures. Active reabsorption is used to recapture glucose, proteins, amino acids and other nutrients. Water and chloride ions are passively reabsorbed by the establishment of osmotic and electrochemical gradients. Both the loop of Henle and the collecting duct are used to establish these osmolar gradients. The tubule has a brush border that will absorb proteins and polypeptides through pinocytosis. These molecules are sometimes catabolised, converted into amino acids, and returned to the blood. Sometimes the accumulation of these proteins can lead to renal toxicity.
A second process that occurs in the tubules is tubular secretion. This is another mechanism used to excrete solutes. Secretion may be either passive or active. Secretions include organic bases, which occur in the pars recta of the proximal tubule. Secretions of weak bases and two weak acids occur passively. Another mechanism is called ion trapping. At a certain pH the compounds are more ionized. Outside of the tubule these compounds are non-ionized and are lipophilic. Thus they are able to diffuse across the membranes of the tubule. Once inside, the pH of the tubule will ionize them and render them unable to pass back across the cell membranes.
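A hedged numerical sketch of the ion-trapping idea follows. Using the Henderson-Hasselbalch relation, the ionized fraction of a weak acid rises as the surrounding pH rises above its pKa, so a compound that crosses a membrane in its nonionized form can become trapped once the local pH ionizes it. The pKa and pH values below are arbitrary examples, not values from the chapter.

# Illustrative only: fraction of a weak acid in the ionized (membrane-
# impermeant) form at a given pH, from the Henderson-Hasselbalch relation
#     ionized / nonionized = 10 ** (pH - pKa)
# The pKa and the two pH values are made-up example numbers.

def ionized_fraction_weak_acid(pH, pKa):
    ratio = 10 ** (pH - pKa)      # ionized : nonionized
    return ratio / (1.0 + ratio)

pKa = 4.5                          # hypothetical weak acid
for label, pH in [("pH 7.4 (plasma-like)", 7.4), ("pH 5.0 (acidic urine)", 5.0)]:
    print(f"{label}: {100 * ionized_fraction_weak_acid(pH, pKa):.1f}% ionized")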
The removal of xenobiotics is dependent on many factors. First is the polarity of the xenobiotic. Polar compounds are soluble in the plasma water and are more easily removed by the kidneys through glomerular filtration. The faster the rate of glomerular filtration, the faster the polar xenobiotics are eliminated from the body. Other factors that affect the rate of elimination include the dose of the xenobiotic, the rate of absorption, and the ability to bind to proteins, as well as the polarity of the compound.
In comparison, lipophilic compounds will cross the cell membrane with more ease. Due to their lipophilic properties they will follow their concentration gradient across the membrane of the tubules and are, therefore, easily retained by the body. If a lipophilic compound is metabolized to a more polar state, it is more easily excreted. Another important factor that will determine excretion by the kidneys is the pH of the environment. Those compounds that are affected by pH will have both an ionized and a nonionized form. In their nonionized form they will be reabsorbed by the tubules, and they are kept there once a change in pH converts them to the ionized form.
The liver is the second most important organ involved in the removal of wastes from the body. The primary method of excretion involves the hepatic cells of the liver. Both passive and active modes of transport are used.
Bile is excreted by the hepatic cells. It is a concentration of amphipathic compounds that aid in the transport of lipids from the small intestine. Before reaching the small intestine via the common bile duct, it will be stored and concentrated in the gall bladder. The bile will then be reabsorbed by a process known as enterohepatic circulation.
The more lipophilic or nonionized a compound is, the more readily it will be absorbed. Solubility is another factor that will determine absorption. The rapid absorption of these compounds does not mean that they will not be readily excreted; some compounds are readily excreted after absorption.
Most toxic xenobiotics are very lipophilic. This means that they will be easily absorbed and dispersed among the tissues. Their lipophilic character also means that their excretion in either the urine or bile will be in very small amounts, unless they are metabolized into more polar compounds.
One of the methods used to dispose of toxic lipophilics is degradation of the large compounds into small polar fragments that can be eliminated through the urine or bile. Oxidative metabolism of toxic cyclic and polycyclic hydrocarbons is done with the introduction of a hydroxyl group into the ring structure. The excretion of halogenated hydrocarbons is extremely difficult. Their accumulation in the body occurs in both adipose tissue and the lipid layers of the skin. They will stay there for the duration of the animal's lifetime.
The molecular weight of a compound will determine whether the compound will be excreted in the urine or the feces. Any elimination of a xenobiotic will be done in association with the excretion of another compound that is normally eliminated by the body. Most gaseous and volatile xenobiotics are eliminated through the lungs. The rate of excretion is based on how soluble the compound is in the blood, the rate and volume of respiration, and the rate of blood flow to the lungs. A second method used is the alveolobronchiolar transport mechanism, which involves the mucociliary bronchotracheal escalator and ends with the material being swallowed and passed out of the body.
Sex-linked elimination is restricted to the female. The milk excreted by the mother will contain the largest number of possible xenobiotics. The elimination of the xenobiotic is dependent on the half-life of the compound. Most of the compounds that are excreted are low in dosage and therefore are not lethal, but chronic exposure can be toxic to the nursing young. The materials that are excreted are lipophilic because they are not excreted by the other major pathways. In eggs the compounds eliminated are also lipophilic in nature. Fetuses are mostly affected by lipophilic compounds that are able to pass the placental barrier. There are cases of fatal exposure of xenobiotics to the fetus through the mother.
f:\12000 essays\sciences (985)\Biology\Extinction.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
EXTINCTION
Two-hundred and thirty million years ago the first dinosaur-like creature roamed the earth. Within five million years it could be considered a dinosaur. They were soon at the top of the food chain. They populated every continent. Then 65 million years ago they vanished. The most powerful creatures ever to live on earth had become extinct.
Dinosaurs were not the only victims of this "mass extinction." There were many other species that were killed off. During what is known as the K-T extinction (K stands for Cretaceous, T stands for Tertiary), many species and families became extinct. These include all marine reptiles such as plesiosaurs, mosasaurs, ichthyosaurs, and ammonites, swimming and flying reptiles, sea crocodiles, and foraminifera. In addition, many bony fish, sponges, snails, clams, and sea urchins became extinct.
Paleontologists have proposed scenarios that could have caused these extinctions. One such scenario involves the growing number of small mammals which ate dinosaur eggs, and therefore caused the dinosaurs' birth rate to drop. The birth rate became smaller than the death rate and the dinosaurs died out. This, however, is not a plausible scenario. This would only account for the dinosaurs, but not all the other creatures of that time. Paleontologists needed to come up with a more plausible and devastating theory that would include the other creatures that died out 65 million years ago.
There have been several major theories that have come about that can all be substantiated. Any one of these events, theorized by paleontologists, could have brought an end to the dinosaurs and all the other species that died with them. Since there are many theories about this I will not write about them all. I have chosen the theory about death by Cosmic Collision (an asteroid).
In 1980 Luis Alvarez and John Sepkowski Jr., famous geologists, blamed the extinction of the dinosaurs on a large celestial body which hit the Earth 65 million years ago. According to them, an asteroid five miles across blasted through the Earth's crust, and threw up molten rock, ash, and dust. Adding support to this theory is the discovery of a layer of iridium in New Zealand, Denmark, and Italy. This layer became known as the iridium anomaly. Remarkably, this layer of iridium was found in sedimentary rock; iridium is rare on Earth but abundant in other celestial bodies. This was too remarkable to be a coincidence, so the research team decided that the layer of iridium must have been caused by space debris which resulted from the collision of Earth and another celestial body. At first, Alvarez's theory was not widely accepted, but as more iridium was found from that period of time around the world, it became the leading theory. Other evidence includes the discovery of stishovite, a mineral created by high heat and pressure. Such minerals were most likely created by a cosmic collision. Even more stunning, geologists have found a huge crater on the Yucatan Peninsula which is dated to be exactly 65 million years old.
Whatever hit the Earth would have caused extreme and immediate havoc, including a blast wave which would have incinerated everything in its path. Tsunamis would occur in oceans all over the world and would affect vast areas. Water and rock would evaporate. The impact of the asteroid would cause massive earthquakes all over the Earth, and these earthquakes would increase volcanic activity. The power of the collision would have also caused global wildfires. Scientists have discovered large quantities of sulfur around the crater at the Yucatan Peninsula. When the asteroid or comet hit, this sulfur would have mixed with water to form sulfuric acid, which would have created a barrier to block out light and heat from the sun. This could have lasted for decades. This cloud of darkness would have killed off vegetation, and then herbivores, having nothing to eat, would have died. With no herbivores to eat, carnivores would have killed each other, and the reign of the dinosaurs would have ended.
Now Alvarez's theory is regarded as one of the leading theories explaining the K-T extinction.
Biology 6th Period
Dinosaur Report
Ryan Humphries
f:\12000 essays\sciences (985)\Biology\False Words and False Hopes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Steven Hajducko
Prof. Sims
MWF 10:00-11:00
29 November 1995
False Words and False Hope
Autism is a childhood disease in which the child is in a private world of his or her own. A description of an autistic child by her mother follows:
We start with an image---a tiny, golden child on hands and knees, circling round and round a spot on the floor in mysterious self-absorbed delight. She does not look up, though she is smiling and laughing; she does not call our attention to the mysterious object of her pleasure. She does not see us at all. She and the spot are all there is, and though she is eighteen months old, an age for touching, tasting, pointing, pushing, exploring, she is doing none of these. (Groden 2)
This is the most important trait in an autistic child: they don't interact or socialize with other people. Other characteristics of autistic children are language retardation and ritualistic or compulsive behaviors. It used to be thought that children became autistic because of "poor parenting" and that the only solution was to remove the parents from the child (Baron-Cohen 26). Now it is known that autism is caused by biological factors, including neurological symptoms, mental handicap, genetic causes, infections, and even difficulties in pregnancy.
Even though autism is thought of as a disease or disorder, autistic
children can demonstrate special skills. These skills are referred to as "isolated
islets of intelligence" (Baron-Cohen 53). Some examples of these are found in an autistic child's ability to draw, play music, or recall a certain date. Nadia, an autistic child, has the ability to draw in an "almost photographic way" (Baron-Cohen 54). Autistic children can also play instruments, accurately sing songs, recognize structures of music, etc. A problem that arises when autistic children are going through therapy is that they start to lose their remarkable skills.
For parents to find out that their child is autistic can be very shocking. They go from having a bouncy, lively baby to having a total stranger as their child. Many therapies have been devised to help autistic children. Some of these therapies are behavior therapy, speech and language therapy, holding therapy, music therapy, and the newest one, facilitation therapy.
Since most autistic children are different and their behaviors are different, one therapy may be more effective than another. Facilitation therapy is catching on, but it is already becoming a controversy. Although facilitation therapy is one of the most popular methods used in communicating with autistic children, it is being criticized because of controversies over whether the children are being manipulated by the facilitators.
A child with autism can be detected by the age of three. "If treatment is started right away, the child may gain their normal functioning. This is a critical factor in reversing the disorder" (McEachin 105). Other elements in autistic therapy that are important factors in helping the child are "observations, establishing relationships, and changing behaviors" (Simons 27). Once autistic children have made a relationship, they are brought closer to the outside world. That is why facilitation therapy is so popular. This kind of therapy helps the
outside world to communicate with the lost child. The autistic child is supported by a facilitator who holds the arm, the wrist, or the hand. This support helps the
child to control his/her movements in order for the child to point to words, pictures, etc. In this way autistic children can express feelings or thoughts that no one thought they had.
So why is there controversy over facilitation therapy? The autistic child is being observed, a relationship is formed between the child and the facilitator, and the gap is being closed. The problem with facilitation therapy is expressed by Dr. Green from the New York Times, "Facilitated communication seems tantamount to a miracle, but it's more like a self-fulfilling prophecy - you see what you want to see" (C11). There is always the chance that the child is not the one expressing the thoughts. Scientists in the New York Times "are likening it to a Ouija board" (C1), because as people subconsciously move the message indicator to get an answer to their question, facilitators can move the autistic child's hand to what they want. Another argument against facilitation therapy was in an article, the "Harvard Educational Review," where three concerns were mentioned: 1) facilitated communication manipulated the handicapped, 2) facilitation has never been proven valid, and 3) facilitation contradicts "50 years of research in autism and developmental disabilities" (Biklen 110). It seems impossible that an autistic child who cannot speak can suddenly communicate with words. The autistic child can answer questions when asked by a facilitator, but normally would just ignore a person that asked a question. Even though facilitation therapy is a gateway into the autistic child's mind, it causes much skepticism.
"One of the greatest barriers to success with facilitation is the tendency to underestimate people's abilities based on prevailing paradigms or definitions of disability" (Biklen 193). When assumptions are made about people with a
handicap, others don't put too much faith in their ability to spell, write, or communicate. People who are retarded are assumed to have no
intelligence, so others do everything for them. Another example is that people talk loudly around the elderly because they are assumed to have lost their hearing. Many assumptions related to autism are: "receptive problems, processing problems, global cognitive failure, specific cognitive failure, levels of deficit, and the inability to use pronouns, verb tenses, and other forms of language" (Biklen 193). These assumptions would lead a facilitator to think that an autistic child, who has always had to depend on others, would have no skills of their own. Biklen suggests that instead of making wrong assumptions about the child's ability, facilitators should encourage the child in a "natural manner," and "treat the person being facilitated as competent" (193). This would be hard to do knowing the limitations of the person. It is also hard to think of someone as being competent when that person starts to scream or starts hitting themselves.
Many parents doubt the effectiveness of facilitation therapy with their child. How can their brain-damaged child know anything? Dr. Schneiderman, a pediatrician at the State University of New York Health Sciences Center in Syracuse, uses facilitation therapy with his autistic son, David. In a New York Times article he expresses his concern over whether or not he is the one cuing the responses, "I worry a lot about whether what I'm doing is real when I facilitate. If I'm doing this unconsciously, I'm unconsciously producing an autistic
personality" (C11). Another father expressed his doubts about facilitation therapy over his daughter:
My child is severely handicapped. This breaks my heart; but I have learned to live with that and make it part of my joy. I cannot in good conscience allow that to be erased by the denial of other; that [she] . . . is reading and comprehending . . . is incredibly ludicrous, not to mention serious fabrication . . . . The onus of
responsibility to prove whether or not this so-called method is effective should rest on the practitioner. (Biklen 119)
The father had also done facilitated communication with his daughter and nothing happened. If encouragement, love, and support are given by the facilitator to the autistic child, and these elements are supposed to help the child communicate, then a parent should be able to get a response from their child.
Facilitation therapy is controversial in that manipulation is thought to be involved. Biklen uses an argument by Cummins and Priors:
The success of assisted communications has very little to do with emotional support, . . . and very much to do with physical control by the assistant; either in the form of overt control of the client's movements or by supplying covert cues which are used by the client to control his or her movements. (112)
Biklen noticed in his first studies of facilitation that an autistic child would only communicate with one facilitator and could not communicate independently, even though he wrote, "Let me show them what I can really do" (112). Physical manipulation is also evident if the child being facilitated is not old enough to spell but is communicating on the keyboard. Other signs of physical manipulation are if the child types without any problems of pronoun reversals,
incorrect verb tenses, not normal "autistic" language, and if the child says things that others would not want to know or that aren't true about family and friends (Biklen 128).
The most recent controversial subject with facilitation therapy is the reports of sexual abuse to the autistic child. Dr. Bernard Rimland, director of the
Autism Research Institute in San Diego, states, "I know of about 25 cases through facilitated communication of sexually abusing their kids" (Goleman C11).
The result of the cases is that the facilitator was sexually abused and expresses the event through the autistic child. When these cases go through the court it is up to the judge to determine the reliability of the facilitator (Lambert B10). It's sad to think that facilitators would use the autistic child in revealing their sexual abuse.
Facilitation is not the only answer in helping with autism. Behavior therapy is making progress in treating autism. The New York Times explains how a team of psychologists has reported the progress of "19 children with autism who at age 2 or 3 had received 40 hours a week of behavioral treatment . . . By age 11 . . . nine of those autistic children were going to regular schools" (C10). This kind of therapy is used to reward good behavior and discourage bad behavior. It is less controversial and seems to be working better than facilitated communication. Also, behavioral therapy not only communicates with the child, but obviously can bring some children back into the real world. Facilitation therapy only helps the child to "talk," if it is even the child speaking.
Another treatment for autism is an effective medication called clomipramine. It was reported in the Archives of General Psychiatry that it
"reduced a range of symptoms in three-quarters of autistic children tested" (Goleman C11). The improvements in the children were that they were able to make eye contact and begin interactions. Also compulsive behaviors were reduced. In facilitation therapy many of the compulsive behaviors are still observed, plus when the child is given medication there is no doubt that it is the autistic child doing the communicating.
For some autistic children facilitation therapy may be the key to reaching out. For the majority of autistic people, closing the gap between the real world and the world they live in takes intensive therapy. It takes more than a hand supporting a wrist or an arm to communicate. Facilitation therapy is proving to be too controversial to really know if the thoughts are the autistic person's own. Yes, there is a hidden person inside that mute creature. Hopefully, with love and support from family and other outside contacts, that unique individual will emerge.
f:\12000 essays\sciences (985)\Biology\Fern Life Cycle.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction:
This essay will discuss the fern life cycle as taught in biology lab. The essay will
cover the basic process which we used to grow a fern. I will discuss the methods and the
results of the lab exercise. Finally, I will discuss the evidence of the methods and results
that were obtained.
Methods and Results:
To begin our experiment we obtained a petri dish from our lab instructor which
contained fern spores and the food they needed to survive. We then looked at the spores
through the microscope. It was too soon to see anything but little green dots. We then put
our petri dishes under a light until the next week.
When we came in next week we observed our fern spores through the dissecting
microscope. We looked to see if we could find anything germinating. We quickly noticed
something that appeared like an air bubble squirting out something green. This was our
fern spore which was germinating. Next, we removed a few of the germinating spores
from the petri dish and put them under a compound microscope. We found the
spore wall and observed how the developing gametophyte had broken through the wall, as
instructed by our lab manuals. One could also identify the chloroplasts within the cell.
We then put up our dishes for another week.
The third week of our fern lab we identified the difference between male and
female gametophytes. We did this by taking a culture from our petri dish and placing it
under a dissecting microscope. Due to the male and female being both located on the
same prothallus, it was necessary to obtain the exact location of the antheridium and the
archegonium from the lab book or the instructor. Once this was done it was fairly easy to
tell the difference between the male gametophyte structure (antheridium) and the female
structure (archegonium) on the prothallus. The antheridium was located around the
perimeter of the prothallus, near the rhizoids. The archegonium was located near the
growing notch on the underside of the prothallus. To me the growing notch seemed to look like
red dots set up like bowling pins. We also observed sperm swimming around the
archegonium. We then put our fern lab petri dishes back under the light until next week.
By the fourth week of our fern lab experiment our gametophytes had grown quite a
bit. We briefly looked at them under a compound microscope, but there was no valuable
information learned from this. The gametophytes would be large enough in the next
couple of weeks to transplant them into three-liter soda bottles to grow into full-size fern
plants. This would complete our fern life cycle experiment.
Discussion:
In this section I will talk a little about what I learned from the fern life cycle from
first germination to final result, a full-grown fern plant. I will begin by saying that I had to
learn a lot of specific terms to be able to follow the experiment. It is imperative to
understand the basics to get a handle on the whole. Anyway, I will start from the beginning. I
learned that there were several different stages that a fern had to go through in order
to grow into an adult plant. I will describe the fern life cycle as learned in biology lab and
the lab manual. First the fern was given to us as a gametophyte. The gametophyte contains
an antheridium, which is the male sex organ that produces the sperm, and the archegonium,
the female sex organ where fertilization takes place. This allows the fern gametophyte to
fertilize itself. Once this happens the gametophyte will give rise to a sporophyte. Then the
sporophyte will produce more spores and the spores will produce more gametophytes,
thus completing the cycle of life once again.
I learned a lot by watching the experiments through a microscope. The hands-on
experience really helped me to understand what was going on in the gametophyte. When one
could actually see the archegonium and the antheridium on the prothallus, it seemed to help
make sense of the lab experiment. One could even see the sperm going to the
archegonium, which led to fertilization. I can remember looking into the microscope and
seeing the green ooze squeezing out of the cell wall.
In conclusion, all of this combined led me to believe the fern life cycle did indeed
happen as the lab book and instructor had taught. The experience of studying the fern life
cycle did spark my curiosity about the development of life from cells. It really amazed me to see
an adult fern grow from something I had to look at through a microscope.
f:\12000 essays\sciences (985)\Biology\Fire Ants.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
FIRE ANTS
Fire ants have been in the United States for over sixty years, and almost every
American that lives in or frequently visits the quarantined states which they inhabit has had an unpleasant run in with these troublesome critters. Inhabitants of the Southeast who have ever stood unwittingly atop a fire ant mound know that the insects are aptly named. When the ants sting it creates a sensation similar to scorching caused by a hot needle touching the skin momentarily (1. Tschinkel 474).
Fire ants are native to South America and were introduced to the United States in 1928 through a port in Mobile, Alabama. The ants were stowaways hidden in soil used for ballast and in dunnage dropped off the ships once they had sailed from South America to the ports of Alabama (2. Lockley 31). The two basic species of fire ants in the United States are the black and the red; they vary in length from one eighth to one quarter inch. Black fire ants arrived first, followed shortly by the infamous imported red fire ants. Black ants (Solenopsis Richteri Forel) were the first to arrive and spread slowly but steadily despite government intervention to stop them from spreading (3. Lockley 33). These black ants would spread much further than the second wave of imported ants recognized as Solenopsis Invicta Buren or red fire ants (4. Lockley 33). This second wave of ants arrived in about 1945 and spread much more rapidly and dominated the previous more passive black ant (5. Lockley 34). Homer Collins, a fire ant expert, stated that "The new invader, known as the red imported fire ant, proved more adaptive and rapidly displaced the existing imported black ant. By 1949, Solenopsis Invicta Buren were the dominant species of imported fire ant. Ants could be found in commercial ornamental-plant nurseries in the heart of the Southeast." Red ants are a particularly aggressive ant species that, like the killer bees, are rapidly spreading northward from the Southeastern United States, and have traveled as far west as Texas and as far north as North Carolina. "Experts predict that the ants may eventually reach as far west as California and as far north as Chesapeake Bay." (7. Tschinkel 474).
The spread of fire ants into new areas depends on many factors: the existing level of fire ant population, climate, competition, and natural predators . In areas where other ant populations are well established and an abundance of natural enemies exist, colony establishment is hindered because of the threat to the queen and the competition for resources. Man and his need for cleared land has created open sunny areas free of natural enemies and fewer competitors and inadvertently aided the spread of the fire ants(8. Lockley 35).
Fire ant infestation is a very serious problem in the Southern United States ranging from Florida, West along the Gulf Coast region, to West Texas. Over 200,000,000 acres of land in the United States and Puerto Rico are infested with fire ants. They pose a major economic threat to the agricultural and ranching industries, lawns, gardens and recreational areas, as well as a threat to animal life and even human life. The total cost of controlling the ants, preventing the damage, and treating the medical problems in urban and rural areas is estimated to be $2.7 billion per year (9. Lockley 36).
When native species are defeated by aggressive invaders, the cost is measured in lost species and disrupted communities. The result, predicted ecologist Gordon Orians at the
1994 Ecological Society of America Conference, will be the "Homogocene," an era in which the world's biota is homogenized through biological invasions(10. Lockley 37).
Fire ants use their stingers to immobilize or kill prey and to defend ant mounds from disturbance by larger animals such as humans. Any disturbance sends hundreds of workers out to attack the potential nourishment or predator. The ant grabs its victim with its mandibles (mouth parts) and then inserts its stinger. The process of stinging releases a chemical which alerts other ants, inducing them to sting simultaneously. In addition, one ant can sting several times, even after its' venom sack has been emptied, without letting go with its mandibles(11. Lockley 37). Once stung, human beings experience a sharp pain which lasts a couple of minutes. These ants are notorious for their painful, burning sting that results in a pustule and intense itching, which may persist for ten days. Initially the sting results in a localized intense burning sensation (hence the name "fire" ant). This is followed within 14-18 hours by the formation of a white pustule at the sting site. These pustules can become sites of seconda
ry infection if the pustules are broken or are not kept clean, in some cases they can leave permanent scarring(12 Lockley 38). Some people have allergic reactions to fire ant stings that range from rashes and swelling to paralysis, or anaphylactic shock. In rare instances, severe allergic reactions can cause death(14. Lockley 35).
Then the sting starts itching and a welt appears. Fire ant venom contains alkaloids and a relatively small amount of protein compared to other stinging insects. The alkaloids in the venom kill skin cells; this attracts white blood cells, which form a pustule within a few hours of being stung. The fluid in the pustule is sterile, but if the pustule is broken the wound may become infected. The protein in the venom can cause allergic reactions including nausea, vomiting, dizziness, perspiration, cyanosis, and asthma which may require medical attention. Death has been known to result when toddlers fall on fire ant mounds and when adults have extreme allergic reactions. Although fire ant stings are not as painful as those of harvester ants or as dangerous as those of bees and wasps, their greater numbers raise them to the status of pest. Although less than one percent of the population requires medical attention after a sting, so many people live in areas infested with fire ants and fire ants are so dense i
n these areas that this translates to tens of thousands of people requiring medical attention for fire ant bites each year.
Fire Ants construct nests which are often in the form of dome-shaped mounds of soil, sometimes as large as three feet across and one and one half feet in height. In sandy soils, mounds are flatter and less visible(15. Lockley 38).. Fire ants usually build mounds in sunny, open areas such as lawns, pastures, cultivated fields, and meadows, but they are not restricted to these areas. Mounds or nests may be located in rotting logs, dried cow manure, around trees and stumps, under pavement and buildings, and occasionally indoor. When the nest is disturbed, numerous fire ants will quickly disperse out of the mound and attack any intruder. The mound serves three primary purposes: it is a platform for nuptial flights, a place to raise the colony above the water table in soaked soil, and it collects the suns warmth during the cold months of winter(16. Melnick 14).
Fire Ant colonies consist of eggs, brood, minim and major workers, and one or more reproductive queens. A colony is usually started by a single queen, however some beginning colonies can have up to five queens(17. Lockley 39). Mature colonies often posses more than one queen. During the spring and summer, winged males and females leave the mound and mate in the air. After mating females become queens and may fly up to ten miles from the parent colony. However, most queens descend to the ground within much shorter distances. Only a small percentage of queens survive after landing. Most queens are killed by foraging ants, especially other fire ants. If a queen survives she sheds her wings, burrows into the ground, and lays eggs to begin a new colony. A queen ant lays her eggs in a brood chamber twenty five to fifty millimeters deep in the mound. After twenty to thirty days the first workers appear(18. Melnick 14). These workers, called minims, are very small due to the limited amount of energy and
resources the queen can devote to them. The minims explore the outside world and forage for food to feed the queen and the developing colony. Within thirty days the next wave of workers emerge and are up to ten times larger than the minims(19. Lockley 39). These workers are called majors and perform the tasks of expanding the mound and foraging for nourishment. The labor is divided by age, and to a lesser degree by size. Youngest and smallest workers are given the job of caring for the developing eggs and brood. Middle-aged workers are tasked with colony maintenance while the eldest and largest workers forage for food and defend the mound. A colony may contain as many as 240,000 workers after three years(20. Lockley 39).
Fire ants are omnivorous, feeding on almost any plant or animal material; although insects seem to be their preferred food. The arrival of imported fire ants into an ecosystem can wreak havoc on the local ecological community. Studies have shown that a minimum two-fold reduction occurs among populations of field mice, snakes, turtles and other vertebrates when fire ants are allowed to establish colonies within a given area. In some instances, the depredation by fire ants has completely eliminated spiders, scorpions, mites, centipedes, ground nesting mammals and birds from an ecosystem(21. Lockley 41).
Fire ants are not only a threat to other insects and small mammals; they also cause billions of dollars worth of damage per year. In urban areas ants are attracted to electrical currents and cause considerable damage to heat pumps, air conditioners, telephone junction boxes, transformers, traffic lights, gasoline pumps, et cetera (22. Lockley 41). In agriculture, fire ants have been identified as damaging corn, soybeans, citrus trees, okra, and up to fifty-four other species of cultivated plants. Ants are also known for feeding upon arthropod predators and other beneficial insects, preying upon ground-nesting vertebrates and other wildlife, damaging asphalt roads, and damaging farm equipment and machinery (23. Lockley 41).
It is very difficult to find an effective method to exterminate fire ant colonies. Four basic methods used to aid in the extermination of fire ants are individual mound treatment, broadcast treatments, biological control, and the effects of natural enemies. Individual mound treatments simply involve applying an insecticide directly on the mound and allowing the worker ants to carry the poison into the colony and feed it to the brood and queen. Broadcast treatments do not require locating each mound but still rely on the worker ants to bring the insecticide back to the mound to kill the queen and young. Biological and natural enemies feed upon the ants themselves, like parasites, to terminate the colony.
Prevention is the key to reducing the threat of fire ant infestations indoors, which means removing exposed food sources that may attract these insects. If fire ants enter a building, the treatment objective must be to reduce the potential for accidental stings as quickly as possible. Insecticides labeled for indoor use particularly pyrethroids, can be used in homes and public buildings to drive foraging ants outside or away from critical areas, such as kitchens, recreation rooms, patient rooms, operating rooms, or intensive care units. Baits work well for many ants that invade buildings. However the baits should be used in moderation to control fire ants indoors because they are likely to attract additional foraging ants, increasing the chance that an occupant will be stung. Ultimately, long-term control of fire ants indoors can be achieved only by locating and treating their mound or mounds, probably with an insecticide drench(24. Lockley 42)
Fire ants, both black and red, have caused billions of dollars in damage since their introduction to the United States over sixty years ago. Even in 1997 society has not found an effective way to exterminate or control the spread of these troublesome insects. As mankind chooses to genetically experiment with species and continues to connect the remote areas of the world with faster and more efficient means of moving food and goods, occurrences of accidental transportation of troublesome pests, bacteria, and viruses will also increase. The fire ant, while costly and annoying, won't cause the absolute destruction of life as we know it. Fire ants are, however, a reminder that ecosystems are delicately balanced environments with forces that keep the food chain functioning. The fire ant and the African killer bee do not have, in the Southern United States, the natural enemies that they face in their native South America and Africa. As mankind destroys the rain forests of South America for cattle grazing, he has released things like the hantavirus, and the Ebola virus in Africa. Both of these viruses could rapidly destroy populations. Mankind has made tremendous leaps in knowledge and technology during this century. If the use of that technology is not metered and controlled intelligently, it may be the downfall of mankind.
f:\12000 essays\sciences (985)\Biology\Flourescence InSitu Hybridisation and its advantages.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fluorescence in-situ hybridisation is a great advancement in technology
because there are fewer chances of a miscarriage, the parents receive faster
results, and the tests are easier to do. In the future, FISH will be able to
decrease the chances of a miscarriage by using samples of maternal blood
instead of amniotic fluid. The problem with amniocentesis is it uses a hollow
needle to take fluid from the mother's uterus. The needle could damage the
developing fetus if not inserted properly. Another advantage of FISH is that
the parents get a much quicker test result than with amniocentesis. After the
sample is taken, amniocentesis can take up to three weeks before a result is
available. With FISH, a same-day result is given, which is much more
convenient to the parents. Additionally, FISH is a simpler process. It uses
specially prepared molecules which bind to specific regions of DNA. To find a
particular gene, the examiner just looks for the fluorescent coloured molecule
that bonds to that specific gene. In amniocentesis, test cells are cultured
for three weeks and then tested. The chromosomes are then counted manually,
and compared to a chart of normal chromosomes. This can be difficult, as the
tester is looking for an extra chromosome 21. So because the test is much
simpler, quicker, and reduces the risks of miscarriages, fluorescence in-situ
hybridisation is an incredible discovery in the field of genetics.
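As a loose illustration of the counting step described above, the sketch below tallies fluorescent probe signals per cell nucleus and flags nuclei showing three chromosome 21 signals instead of two. The signal counts are invented example data, not output from any real FISH assay.

# Toy sketch of interpreting FISH signal counts for a chromosome 21 probe.
# Two signals per nucleus is the normal expectation; three suggests
# trisomy 21.  The counts below are invented example data.

signals_per_nucleus = [2, 2, 3, 2, 3, 3, 2, 3]   # hypothetical counts

trisomic = sum(1 for n in signals_per_nucleus if n == 3)
fraction = trisomic / len(signals_per_nucleus)
print(f"{trisomic}/{len(signals_per_nucleus)} nuclei show 3 signals ({fraction:.0%}); "
      "a consistently high fraction would prompt further review")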
f:\12000 essays\sciences (985)\Biology\Forever young.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Reversing The Aging Process, Should We?
In the length of time measured as human lifetime one can expect to see a full range of differing events. It is assumed that during a lifetime a person will experience every possible different emotion. If one is particularly lucky, he will bear witness to, or affect some momentous change in humanity. However is it reasonable to ask what would be experienced by someone who lived two lifetimes? Up until recently the previous question would and could only be rhetorical. There is no answer, because no one has ever lived that long. Of course that was up until now.
At McGill University, nematodes (tiny organisms) have experienced five lifetimes (Kluger). Through complex scientific experiments nematodes and fruit flies have had their lifespans increased not by fractions of lifetimes, but by multiples of lifetimes (Kluger). Mankind is using the discovery of DNA as an opportunity to play G-d by changing the aging process. Man has a natural tendency to play the role of G-d. Man has an inherent need to affect others, be it through the vices of war, power, manipulation or politics. However, man's natural tendency to play G-d has reached its final manifestation. By attempting to slow down the aging process man is using himself as the ultimate canvas, to play the role of the omnipotent.
Research into the process of aging began in 1961 (Rose, Technology Review: 64). Since then a great deal of time, money and effort have been appropriated into discovering the causes of aging; it can therefore be inferred that humanity has an almost "personal" interest in aging. Of course the culmination of discovering how we age is discovering how to stop it. An intrinsic characteristic of Man is His obsession with superficiality. Superficiality is equated with appearance. The appearance of beauty can be equated with youth. Therein lies man's obsession with age: ceasing to age means being eternally beautiful. As usual man's actions are dominated by ego and self-preservation. Within the confines of youth there lies a certain fountain of power, power which cannot be accessed once one ages: things like physical and sexual prowess. The time of youth is often referred to as the "prime of your life". It is therefore not difficult to understand and conceive of man's motivation to stay young and to wish that the immediate people surrounding him stay young.
If a mathematician wished to create a formula to describe the life of one man, he would say that life is equal to a series of interchangeably quantized experiences and emotions. With the advent of a retarded aging process, that which we know as life changes. While life is composed of those quantized properties, there are a finite amount of them; therefore decelerating the aging process has major implications. First and foremost among them is what to do with all that extra time? In 1900 the average life expectancy of a baby born in the United States was 47 years. Conservative estimates place life expectancy of children born today in the United States at 76, while less conservative estimates place the life expectancy at 100 years. Presently man is unable to cope with this extra time. Many septuagenarians spend days sitting around doing next to nothing. The term "waiting to die" has been applied in reference to such activities, or rather lack thereof. Even while the average life-span has increased, who's to say tha
t the time added is quality time? Another general comment overheard in the population at large was "what's the point of growing old and having to suffer through ulcers, cataracts, hemorrhoids, and cancer. Isn't it better to die young and healthy then to die old, infirm and brittle?" The essential question being proposed is one of quality versus quantity. Is it better to live for a long time with much of that time spent in dialysis, or is it preferable to enjoy a short but "fun" life. Even if the scientists can cure humanity of the ailments of the elders, there still remains the question of how to manage one's time. "We're bored" has often been used as the battle cry of youth, people who haven't even lived two decades. What are people who have lived twelve decades supposed to do? These questions are stuck in the realm of rhetoric. There are no answers to these questions. It is altogether possible that there never will be.
Scientists involved in the dissection of the aging process have made what they believe to be an important discovery (Gebhart,174). Scientists discovered a small area at the tip of the chromosomes that served no apparent purpose (Kluger). Dubbed a telomere, this area of the chromosome wasn't responsible for any physiological traits. What was discerned however was that whenever a cell divides to create two new cells each of the daughter cells has less telomere than the mother cell (Kluger). Once the cell has undergone a maximum number of divisions the telomere was reduced to a stub, exposing genes which initiated proteins that caused the deterioration of the cell (Kluger). The most applicable analogy would be that of a bomb. The telomere acts as the fuse to the bomb. The fuse is lit from the time of birth, and when the telomere\fuse runs out the bomb goes off. Only in this case instead of instantaneous death, the victim succumbs to the equivalent of radiation poisoning. The victims condition is terminal
from the start and slowly degrades to the point of death. The conclusion is that life is just a case of terminal death. Or is it? Scientists also discovered that an enzyme known as telomerase prevents the loss of telomere, essentially stomping the fire out (Rose, Technology Review: 64). There are many substantial and immediate implications raised by this. What are the ethics of immortality? Was humanity meant to be immortal? Are there benefits to being immortal? Are there consequences?
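The fuse analogy can be sketched numerically. The starting telomere length, the loss per division, and the critical stub below are invented numbers used only to illustrate the idea that division stops once the telomere is used up unless telomerase keeps restoring it.

# Illustrative model of the "fuse": telomere length shrinks with every
# division until it reaches a critical stub, unless telomerase offsets
# the loss.  All numbers are arbitrary, chosen only to show the idea.

def divisions_until_senescence(telomere=10000, loss=100, stub=500,
                               telomerase_active=False):
    if telomerase_active:
        return float("inf")        # telomerase replaces the lost length: no limit
    divisions = 0
    while telomere > stub:
        telomere -= loss
        divisions += 1
    return divisions

print("without telomerase:", divisions_until_senescence())
print("with telomerase:   ", divisions_until_senescence(telomerase_active=True))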
While it seems like quite a neat thing to do, immortality would place an incredible strain on our resources: not only on social institutions and mental coping, but also on the resources of this planet. There is a limited quantity of resources available for consumption on this planet. The first consequence of human immortality would be overcrowding. If no one ever dies, there is no room to go "out with the old and in with the new." The next major problem would be a food shortage. With an ever-increasing population and a constant food supply, there would not be enough food to feed everybody. Either the vast majority of the planet would be starving while a few noble-class people feasted, or in general people would have to reduce the amount they eat. That in turn introduces the problem of waste disposal: not only human and animal defecation but garbage as well, where would it go?
A common complaint from a number of people, and most teenagers, is that their parents place too much pressure on them and are always trying to find out things that are none of their business. Now imagine the pressure placed on someone who has not only his parents, not only his grandparents, but also his great-grandparents, his great-great-grandparents, their parents, and their parents. A person would have an endless supply of ancestors and would be constantly overseen. These are huge ramifications that would change not only the way humanity acts but also the way humanity perceives itself.
Lastly there is the ethical aspect of increasing humanity's lifespan. Regardless of whether there is or is not some omnipotent watchperson whom we, in our rather limited capacity, perceive as G-d, there are ethical issues which must be dealt with. Humanity has always perceived itself as more than just the sum of its parts, but that is not to say that if you change one of the parts humanity will stay the same. There is nothing more intimate to a human than DNA. What right does humanity have to go stumbling around down there? A baby doesn't change its own diapers, does it? If humans were meant to live for a certain amount of time, who are we to say we should live longer? On the other hand, who's to say we shouldn't? Yes, the human lifespan has been adjusted in the past, but those were all external stimuli: war, famine, disease and the CIA were all responsible for changing the definition of a lifetime. Adjusting DNA, however, is an internal change. Changing our society and hygiene is light-years away from controlling microscopic chemical reactions. Man is referred to as G-d's ultimate creation, the universe His canvas. But what happens when humans steal the canvas and decide to redecorate; would you want to recolor your Picasso? Is there any justification for living that long, and does there need to be? These are not easy questions, and they are not intended to be, but should scientists prove successful in their endeavors, all of these questions will have to be resolved. How can certain establishments which frown on cosmetic plastic surgery frown on the reorganization of protein strands? There is no doubt that the people in charge of those organizations would take advantage of these technologies (Rose, Melatonin: 6). How are the two things different? There are no possible answers to these questions; for now they must remain rhetorical.
It is increasingly obvious that the repercussions of these technologies stretch across the board. As always the horizon of the future stretches before us, revealing only a glimpse of that which is to come. The resounding questions that will soon confront us can only be answered with the passage of time, something humanity will apparently have a lot of.
f:\12000 essays\sciences (985)\Biology\Frogs.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Frogs
Frog is the common name for a species of
amphibian that also includes toads. A very common
question is "whats the difference between frogs and
toads?", the answer: none, except for the fact that toads
lack the powerful legs that frogs have. "Where can frogs
and toads be found?", one might ask. They live in all
parts of the world, except for Antarctica, but are mostly
found in tropical areas. Frogs are small animals with
smooth moist skin, and big eyes that can see in almost
any direction. Most species have webbed feet and
powerful legs making them good jumpers, and excellent
swimmers. A frog's tongue is attached to the front of its
mouth instead of the rear, and most frogs are very vocal,
especially the male frogs.
As a frog grows, it goes through many changes,
starting out as a tadpole and morphing into a frog.
Most frogs lay their eggs in water. Others will lay their
eggs somewhere safe, then carry them to water where
they hatch into tadpoles. At this stage they have gills,
no legs, and a tail. As they mature, their gills and tail
disappear, and they develop lungs and legs. This period
of tadpole life can be divided into three stages. The first
stage, called "premetamorphosis," lasts about 50 days
(Patent 54). The second stage, in which the hind legs
grow, is called "prometamorphosis," and lasts about 21
days. When the legs are about as long as the body, the
third stage, which is called "metamorphic climax," and
takes place very rapidly, begins. During this last stage,
which lasts about a week, many great changes occur.
The lungs complete their development, and the gills
disappear. The skin gets thicker, nostrils form, and the
tail is completely resorbed.
Most frogs prefer moist regions, and many kinds
live in the water. Because frogs absorb oxygen in water
through their skin, they can stay underwater for long
periods of time. A frog's body temperature depends on
its surroundings, and during cold weather, frogs dig
burrows in mud and hibernate. During hibernation, the
frog needs little oxygen and no more food than is already
in its tissues. During intense heat, a frog might
estivate, or in other words, lie in a state of torpor during
the heat, after burying itself in sand and clay.
Frogs are carnivores. They eat just about anything
smaller than themselves that moves. A frog thinks like this: If
it's smaller than itself and moves, eat it. If it's the same
size, mate, or attempt to mate (this gets some frogs in
lots of trouble). If it's bigger than itself, run. Their diet
may include insects, worms, spiders, or even centipedes.
Aquatic frogs sometimes eat other frogs, tadpoles, and
small fish. Large frogs can eat prey as big as
mice and snakes. Sometimes a frog eats something too
big to swallow all at once, and will leave it sticking out
of its mouth ingesting it gradually or even choking and
regurgitating it. So virtually, the size of a frog's dinner
is determined by the size of its mouth. If a frog eats
something poisonous or bad for it, it can throw up
its entire stomach and wipe it with its right front leg.
Frogs help out humans in many ways. Toads are
used world wide as pest control in gardens and on farms.
One toad alone can consume thousands of insects.
Frogs have been used as food for centuries. Efforts have
been made to harvest frogs, but most frogs eaten today
are taken from their natural habitat. People in South
America, the South Pacific, Philippine Islands, and parts
of Africa savor frogs, and consider them a delicacy. The
Chinese and French are lovers of frog legs. One of the
reasons frog legs are so expensive is the great demand
for frogs in scientific and medical laboratories. Because
their skeletal, muscular, digestive, nervous, and other
systems are similar to those of higher animals, frogs are
very important in these fields of research.
One large and nearly worldwide family of frogs is
the true frogs, which includes many well-known
species (Encarta True Frogs). The Bullfrog is one of the
largest true frogs in North America (Barker 150). It
weighs up to 1.2 pounds and has a total length of 15
inches. One of the most common North American
species is the leopard frog (Barker 154), which is easily
recognized by the numerous black, often light-edged
spots on the back and legs. Most true frogs stay close to
ponds and streams, but the North American wood frog
(Stebbins 135), a small reddish-brown species with
mask-like black bands on the head, wanders far away
from the water. The green frog is another common
species in North America and despite their name, some
green frogs are brown. Two well-known true frogs of
Europe are the common European frog, which resembles
the wood frog, and the edible frog, a popular food in
Europe. The African Giant Frog, the largest of all frogs,
which grows as long as 26 inches and weighs as much as
10 pounds, is also a true frog. The smallest frog is
probably Psyllophryne didactyla from Brazil, which is
about 9.8 mm long as an adult.
The frogs and other amphibians of North America,
and those of other continents too, are important in the
way all wild things are important. They are also a living
resource that needs protection and greater understanding
to appreciate its true worth.
Today there is a strong effort by all forms of
government to set aside areas that furnish the sort of
environments required by many forms of wildlife,
including frogs. Private organizations and individuals
too have established many special areas mostly free from
conditions that disturb natural habitats. People are
finally realizing that, hey, frogs aren't such bad guys and
maybe we should keep them around.
f:\12000 essays\sciences (985)\Biology\fungi.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fungi: The Great Decomposers
Although fungi are overlooked in the commercial aspect of the world, they play a great role in the web of life. In the fungi kingdom there are over 175,000 different species. The main role of fungi is to decompose plant and animal matter and recycle its nutrients.
History
The history of fungi is not very clear because scientists have never gone into it in great depth, since fungi are not needed commercially. The ancestors of fungi lived in shallow bodies of water about 600-800 million years ago. Among the things fungi had to contend with in living out of the water were increased sunlight, which had previously been blocked by the water, and the rapid shifts in temperature and the seasons.
Fungi are different from plants in many ways. The general characteristics of fungi are extracellular digestion, peculiar structures, growth patterns, their use of spores for reproduction, and their life cycles.
Characteristics of Fungi
Fungi have many features that set them apart from the plant and animal kingdoms. Fungi are heterotrophs that obtain nutrients by secreting digestive enzymes onto their food and absorbing the broken-down material.
f:\12000 essays\sciences (985)\Biology\Future of Human Evolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Alexander R.
Prof. Kohn
Darwinism and Evolution
12-6-96
The Future of Evolution
Evolution, the science of how populations of living organisms change over time in response to their environment, is the central unifying theme in biology today. Evolution was first explored in its semi-modern form in Charles Darwin's 1859 book, On the Origin of Species by Means of Natural Selection. In this book, Darwin laid out a strong argument for evolution. He postulated that all species have a common ancestor from which they are descended. As populations of species moved into new habitats and new parts of the world, they faced different environmental conditions. Over time, these populations accumulated modifications, or adaptations, that allowed them and their offspring to survive better in their new environments. These modifications were the key to the evolution of new species, and Darwin proposed natural selection or "survival of the fittest" as the vehicle by which that change occurs. Under natural selection, some individuals in a population have adaptations that allow them to survive and reproduce more than other individuals. These adaptations become more common in the population because of this higher reproductive success. Over time, the characteristics of the population as a whole can change, sometimes even resulting in the formation of a new species.
Humans have survived for thousands of years and will most likely survive thousands more. Throughout the history of the hominid line, man has evolved from Homo erectus to what we today call Homo sapiens, or what we know as modern man. The topic of this paper is what the future has in store for the evolution of Homo sapiens. Of course, human beings will continue to change culturally; therefore cultural evolution will always continue; but what of physiological evolution? The cultural evolution of man will continue as long as man can think; after all, it is the ideas we think up that make up our cultures. In a thousand years man might complete a 180-degree turn culturally (not to mention physiologically), and as seen by our fellow inhabitants of earth we would in essence be different beings. One can say that this new culture has chosen its ideas based on natural selection. One can see this in the spread of ideas in the past history of Homo sapiens: the ideas which cause man to succeed, such as science and democracy, are chosen (the present growth of Islam is also worthy of mention, but would be a paper in itself). Lamarck's fourth law, that is, that ideas acquired by one generation are passed on to the next, describes this transfer of ideas from one generation to another.
The question is whether humans can evolve physically, that is, through changes of some sort to the general human gene pool, enough to be considered a different species sometime in the future. The answer to this is tricky. The answer is "yes" if there is no human intervention and "not likely" (or at least controlled) if there is human intervention. The more interesting answer is the latter.
The first answer deserves some mention. Speciation through the subtraction or addition (that is, through chance changes of some sort) of alleles (different forms of a characteristic gene) from the overall gene pool, until Homo sapiens is no longer Homo sapiens, is feasible. One might ask how and where this is occurring. The answer is that human genes are changing all the time through radiation and spontaneous mutations (the latter more rapidly now than ever, since the human population is now larger than ever), and one can see these changes to the overall gene pool in the disappearance of certain human tribes within parts of Africa and South America. These tribes unfortunately take exclusive alleles with them. What about natural selection in present human culture? Some peoples are growing faster than others, for example the Chinese faster than any other in the present world, thus the large Chinese population. Therefore some group traits are more common than others. Yet the loss of these alleles and the gain of these mutations offer marginal contributions to our species and thus have little or no effect.
The first step in understanding evolution in present terms is to mention genetic engineering (including genetic drift). The first step to understanding genetic engineering, and embracing its possibilities for society, is to obtain a rough knowledge base of its history and method. The basis for altering the evolutionary process is dependent on the understanding of how individuals pass on characteristics to their offspring. Genetics achieved its first foothold on the secrets of nature's evolutionary process when an Austrian monk named Gregor Mendel developed the first "laws of heredity." Using these laws, scientists studied the characteristics of organisms for most of the next one hundred years following Mendel's discovery. These early studies concluded that each organism has two sets of character determinants, or genes (Stableford 16). For instance, in regards to eye color, a child could receive one set of genes from his father that were encoded one blue, and the other brown. The same child could also receive two brown genes from his mother. The conclusion for this inheritance would be that the child has a three in four chance of having brown eyes, and a one in four chance of having blue eyes (Stableford 16).
Genes are transmitted through chromosomes, which reside in the nucleus of every living organism's cells. Each chromosome is made up of fine strands of deoxyribonucleic acid, or DNA. The information carried on the DNA determines the cell's function within the organism. Sex cells are the only cells that contain a complete DNA map of the organism; therefore, "the structure of a DNA molecule or combination of DNA molecules determines the shape, form, and function of the [organism's] offspring" (Lewin 1). DNA discovery is attributed to the research of three scientists, Francis Crick, Maurice Wilkins, and James Dewey Watson, in 1951. They were all later awarded the Nobel Prize in physiology and medicine in 1962 (Lewin 1).
"The new science of genetic engineering aims to take a dramatic short cut in the slow process of evolution" (Stableford 25). In essence, scientists aim to remove one gene from an organism's DNA, and place it into the DNA of another organism. This would create a new DNA strand, full of new encoded instructions; a strand that would have taken Mother Nature millions of years of natural selection to develop. Isolating and removing a desired gene from a DNA strand involves many different tools. DNA can be broken up by exposing it to ultra-high-frequency sound waves, but this is an extremely inaccurate way of isolating a desirable DNA section (Stableford 26). A more accurate way of DNA splicing is the use of "restriction enzymes, which are produced by various species of bacteria" (Clarke 1). The restriction
enzymes cut the DNA strand at a particular location called a nucleotide base, which makes up a DNA molecule. Now that the desired portion of the DNA is cut out, it can be joined to another strand of DNA by using enzymes called ligases. The final important step in the creation of a new DNA strand is giving it the ability to self-replicate. This can be accomplished by using special pieces of DNA, called vectors, that permit the generation of multiple copies of a total DNA strand and fusing it to the newly created DNA structure. Another newly developed method, called polymerase chain reaction, allows for faster replication of DNA strands and does not require the use of vectors (Clarke 1).
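The cut-and-paste scheme described above can be pictured by treating DNA as a plain string: the restriction enzyme becomes a search for its recognition sequence, and ligation becomes rejoining the pieces. In the Python sketch below, EcoRI and its GAATTC recognition site are real, but the donor and vector sequences are invented and sticky-end chemistry is ignored; this is an illustration of the idea, not a model of the laboratory procedure.

    # Minimal sketch of the cut-and-paste idea described above, treating DNA as a
    # plain string. EcoRI and its GAATTC site are real, but the sequences below
    # are made up for illustration and sticky-end chemistry is ignored.
    RECOGNITION_SITE = "GAATTC"   # site cut by the restriction enzyme EcoRI

    def cut(dna, site=RECOGNITION_SITE):
        """Split a DNA string at every occurrence of the recognition site."""
        return dna.split(site)

    def ligate(fragments, site=RECOGNITION_SITE):
        """Rejoin fragments end to end, restoring the site between them (the ligase step)."""
        return site.join(fragments)

    donor  = "ATTGCC" + RECOGNITION_SITE + "TTAGGCATT" + RECOGNITION_SITE + "CCG"
    vector = "GGGCAT" + RECOGNITION_SITE + "ACGTAC"

    gene_of_interest = cut(donor)[1]          # middle fragment between the two cut sites
    recombinant = ligate([cut(vector)[0], gene_of_interest, cut(vector)[1]])
    print(recombinant)  # vector with the donor fragment spliced in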
Genetic drift, another important factor when discussing evolution, is part of the study of statistical population genetics. One aspect of genetic drift is the random nature of transmitting alleles from one generation to the next, given that only a fraction of all possible zygotes become mature adults. The easiest case to visualize is the one which involves binomial sampling error. If a pair of diploid, sexually reproducing parents (such as humans) have only a small number of offspring, then not all of the parents' alleles will be passed on to their progeny due to chance assortment of chromosomes at meiosis. In a large population this will not have much effect in each generation, because the random nature of the process will tend to average out. But in a small population the effect could be rapid and significant. Suzuki et al. explain it as well as anyone I've seen: "If a population is finite in size (as all populations are) and if a given pair of parents have only a small number of offspring, then even in the absence of all selective forces, the frequency of a gene will not be exactly reproduced in the next generation because of sampling error. If in a population of 1000 individuals the frequency of "a" is 0.5 in one generation, then it may by chance be 0.493 or 0.505 in the next generation because of the chance production of a few more or less progeny of each genotype. In the second generation, there is another sampling error based on the new gene frequency, so the frequency of "a" may go from 0.505 to 0.501 or back to 0.498. This process of random fluctuation continues generation after generation, with no force pushing the frequency back to its initial state because the population has no "genetic memory" of its state many generations ago. Each generation is an independent event. The final result of this random change in allele frequency is that the population eventually drifts to p=1 or p=0. After this point, no further change is possible; the population has become homozygous. A different population, isolated from the first, also undergoes this random genetic drift, but it may become homozygous for allele "A", whereas the first population has become homozygous for allele "a". As time goes on, isolated populations diverge from each other, each losing heterozygosity. The variation originally present within populations now appears as variation between populations" (Suzuki 704).
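The binomial sampling error Suzuki et al. describe is easy to reproduce in a few lines of simulation: draw each generation's gene copies at random from the previous generation's allele frequency and watch a small population wander to 0 or 1 while a large one barely moves. The population sizes, generation limit, and random seed below are arbitrary illustrative choices.

    import random

    # Simulate random genetic drift: each generation the 2N gene copies are drawn
    # at random from the previous generation's allele frequency (binomial sampling).
    # Population sizes and generation count are arbitrary illustrative choices.
    def drift(pop_size, p=0.5, generations=200):
        copies = 2 * pop_size              # diploid: two gene copies per individual
        for _ in range(generations):
            a_count = sum(1 for _ in range(copies) if random.random() < p)
            p = a_count / copies           # new frequency of allele "a"
            if p in (0.0, 1.0):            # allele lost or fixed; no further change
                break
        return p

    random.seed(1)
    print("small population (N=20):  ", [round(drift(20), 3) for _ in range(5)])
    print("large population (N=1000):", [round(drift(1000), 3) for _ in range(5)])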
The evolution of man can be broken up into three basic stages. The first, lasting millions of years, slowly shaped human nature from Homo erectus to Homo sapiens. Natural selection provided the means for countless random mutations, resulting in the appearance of such human characteristics as hands and feet. The second stage, after the full development of the human body and mind, saw humans moving from wild foragers to an agriculture-based society. Natural selection received a helping hand as man took advantage of random mutations in nature and bred more productive species of plants and animals. The most bountiful wheats were collected and re-planted, and the fastest horses were bred with equally fast horses. Even in our recent history the strongest black male slaves were mated with the hardest working female slaves.
f:\12000 essays\sciences (985)\Biology\genetic diversity in agriculture.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
GENETIC DIVERSITY IN AGRICULTURE
Genetic variation is the raw material for the plant breeder, who must often select from primitive and wild plants, including wild species, in search of new genes. The appearance of new diseases, new pests, or new virulent forms of disease-causing organisms makes it imperative that such plants be preserved, because they offer a potential source of disease-resistant genes not present in cultivated varieties. Also, there are demands for new characters; for example, high protein, improved nutritional factors, and fertility restoration. As a result, plant breeders require a large and diverse gene pool to meet ever-changing needs.
A gene bank is a popular term that is used to describe repositories for genes of living organisms. It is commonly used in the context of plant breeding as I described above, but it also applies to the freezing and the storage of animal sperm and embryos for use in animal husbandry or artificial insemination.
An understanding of crop origins and variations is necessary in assembling genetic diversity in plant crops. In certain geographical areas there has existed a rich source of variability in crop plants, but the encroachment of civilization has reduced the natural variability inherent in primitive plant forms and related species of crop plants. Agricultural progress, as a result of new breeding programs, has reduced rather than increased crop variability, as improved cultivars, or varieties, are planted in wider and wider areas and old cultivars, which may contain valuable genes, are lost. Crop failures, which result from a smaller gene pool, have led to an increased awareness of the need to preserve genetic diversity in plants. Efforts are under way to increase collections of plant materials in various forms. Usually these are preserved as seeds, but living plants, pollen, and cell cultures are also used. In most gene banks, seeds are usually preserved under conditions of low temperature and humidity. These
collections must be periodically renewed by growing the plants and producing new seeds. Increasing emphasis is also being placed on preserving living collections of asexually propagated crops such as species of fruits and nuts.
In the United States, germ plasm banks are handled in a state-federal cooperative program. Internationally, a consortium of international, government, and private organizations called the Consultative Group on International Agricultural Research established, in 1974, the International Board for Plant Genetic Resources (IBPGR) to promote the activities of international plant research centers that collect and preserve plant germ plasm.
Crop improvement is continuous. Professional plant breeders are constantly working, through genetics, on the improvement of plants to meet changing needs and standards. For example, with the introduction of mechanical pickers for tomatoes, a tomato resistant to bruising by the machine was needed. Such a variety was created by plant breeders.
Better, higher-yielding crop varieties have played an important part in the increase in crop production per acre in the United States and some other nations. Varieties of rice, cotton, vegetable-oil crops and sugar crops have changed almost completely since the early nineteen fifties. By the late nineteen sixties, most crop acreage in the United States was producing varieties unknown to earlier decades. Best known of the improved crops are the many varieties of hybrid corn that are planted on more than ninety-seven percent of the total corn acreage in the United States. Government experimental laboratories and commercial seed companies shared in the research and development of the high-yield plant varieties that provide such superior characteristics as resistance to cold, drought, diseases, and pests.
Improvements in livestock, such as more efficient use of feed, have added greatly to the annual farm output. Such improvements are the result of breeding and improved husbandry and veterinary techniques. Special-purpose stock has been developed through selective breeding. It includes cattle that are able to thrive in subtropical regions, hogs that yield lean bacon instead of lard, and small, broad-breasted turkeys.
Artificial insemination has become a major factor in cattle improvement. In this technique the sperm of genetically superior bulls is used to inseminate thousands of cows. In this way a herd can be upgraded significantly in a single generation. However, some people feel that producing plants and animals that conform to the needs of mechanization and increased production has resulted in less desirable farm products.
f:\12000 essays\sciences (985)\Biology\Genetic Engineering 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Genetic Engineering
The engineering of deoxyribonucleic acid (DNA) is entirely new, yet genetics, as a field of science, has fascinated mankind for over 2,000 years. Man has always tried to bend nature around his will through selective breeding and other forms of practical genetics. Today, scientists have a greater understanding of genetics and its role in living organisms. Unfortunately, some people are trying to stop further studies in genetics, but the research being conducted today will serve to better mankind tomorrow. Among many benefits of genetic engineering are the several cures being developed for presently incurable diseases. Genetics has also opened the doorway to biological solutions for world problems, as well as aid for body malfunctions. Genetic engineering is a fundamental tool for leading the world of medicine into the future; therefore, it is crucial to continue research in this field.
Today's research in genetic engineering is bringing about new methods for curing and treating major medical illnesses. The Human Genome Project has allowed geneticists to map the genes of human beings. This project is far from complete, as the DNA sequence of humans is extremely long, yet it will eventually show geneticists which genes are responsible for certain inherited diseases. Identified genes could be repaired, resulting in the eradication of inherited diseases, such as cancer. Just last year, the locations of genes for several diseases were confirmed and may soon be correctable. Secondly, research in genetics has brought about a new medical field, genetic counseling. Couples planning to have children can visit a genetic counselor and identify what medical difficulties their child may have. With continued research in genetics, couples will have the opportunity to become aware of a greater number of medical conditions that may affect their child and can make the proper adjustments needed in advance. Lastly, and perhaps the most important advancement in the curing and treating of illnesses, geneticists are developing a new method for removing viruses from human bodies: DNA scissors. This new method works in a way similar to antibodies. When antibodies enter our internal system they attack a specific type of enemy cell or virus and destroy it. Likewise, DNA scissors enter the body and attack a specific type of enemy virus or cell. DNA scissors are much more effective than conventional antibiotics because they enter the enemy cell and unravel its DNA. With dysfunctional DNA, a cell is just a pile of lipids and proteins; cancerous tumors will turn to harmless clumps of organic material that can be filtered out by the body. DNA scissors will affect things that antibiotics cannot, like AIDS. (Not even AIDS can function without DNA.) One day the only thing that will stand between medical diseases and their cure will be the analysis of their DNA.
Genetics now offers a new way to solve the general problems of the world. First, genetic research makes it possible for food to be grown safer, better, and faster, without doing any damage to the environment. With today's knowledge of genetic engineering, several food companies are investigating possibilities of making more food in less time. Through a process known as gene therapy, geneticists have the ability to modify parts of the genetic material in organisms. Geneticists can add attributes to crops, like tomatoes, that would make them resistant to insects. With such features, dangerous chemicals like DDT that harm the environment, plants, animals, and humans would not be needed. Other enhancements would include prolonged life spans for food products after harvesting. For example, tomatoes have been engineered to last longer so they do not have to be harvested early. Thus, it is unnecessary to spray chemicals on them to prematurely change their color. While the US has not yet approved the new crops, several countries have and are making great profits off them. Finally, through a process known as gene splicing, geneticists are able to cross different organisms and therefore breed beneficial life forms. The Supreme Court ruled that scientists can patent newly created life forms, so several companies have invested in genetic research. General Electric provided the funding for a team of geneticists to create a new life form; the result was an oil-eating bacterium. The bacteria consume oil and, so far, are of no threat to the environment. A major use for the bacteria is to clean shores after an oil spill. It is impossible to clean every drop of oil on the shoreline, so the bacteria are released to remove any traces of oil tediously and perfectly. General Electric is in the process of obtaining, or has already obtained, a patent for the bacteria. It is quite clear that genetics will have an active role in our quest for solving world problems.
Genetic engineering makes it possible to treat and correct bodily malfunctions. First, the use of genetics allows us to produce supplements for those who have chemical deficiencies. The most well-known example of such a supplement is insulin. Before genetic engineering, diabetics received insulin extracted from animals, yet as can be imagined, it took a great many animals to sustain one person. After the discovery of DNA, geneticists used gene splicing to develop a bacterium to produce insulin. By cloning the human gene for insulin and inserting it into bacteria known as E. coli, the scientists created bacteria that produced insulin, and when the bacteria reproduced, they reproduced the human gene as well. Next, genetic engineering will make it possible to create vital organs for transplants. A major medical difficulty today is the lack of organ donors. Waiting lists are always getting longer, and people are losing their lives as a result. In the future, geneticists would be able to clone pieces of organs and then make organs for surgeries involving transplants. Geneticists may even be able to clone cells from damaged organs and then engineer exact duplicates. Genetics will definitely have a large impact on the correcting of malfunctions in the human body.
Without doubt, genetic engineering has already helped make human life easier and will continue to do so in the future, provided that research on genetic engineering continues. All advancements in science have led to positive and negative results, yet the rewards of genetics greatly outweigh the disadvantages. Mankind is entering a new era in medicine, genetic engineering, one that has received criticism. As the field of genetics inevitably becomes integrated with medical practice, people may continue to protest against what they believe genetic engineering will unleash on our society. Rather than allowing fear and ignorance to derail one of the most humane efforts underway, scientists and society must find bridges of communication and understanding, through education, to promote the benefits of genetic engineering.
f:\12000 essays\sciences (985)\Biology\Genetic Engineering history and future.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Genetic Engineering, history and future
Altering the Face of Science
Science is a creature that continues to evolve at a much higher rate than the beings
that
gave it birth. The transformation time from tree-shrew, to ape, to human far exceeds the
time
from analytical engine, to calculator, to computer. But science, in the past, has always
remained
distant. It has allowed for advances in production, transportation, and even entertainment,
but
never in history will science be able to so deeply affect our lives as genetic engineering will
undoubtedly do. With the birth of this new technology, scientific extremists and anti-
technologists
have risen in arms to block its budding future. Spreading fear by misinterpretation
of facts, they promote their hidden agendas in the halls of the United States congress.
Genetic
engineering is a safe and powerful tool that will yield unprecedented results, specifically in
the
field of medicine. It will usher in a world where gene defects, bacterial disease, and even
aging
are a thing of the past. By understanding genetic engineering and its history, discovering its
possibilities, and answering the moral and safety questions it brings forth, the blanket of
fear
covering this remarkable technical miracle can be lifted.
The first step to understanding genetic engineering, and embracing its possibilities
for
society, is to obtain a rough knowledge base of its history and method. The basis for
altering the
evolutionary process is dependent on the understanding of how individuals pass on
characteristics to their offspring. Genetics achieved its first foothold on the secrets of
nature's
evolutionary process when an Austrian monk named Gregor Mendel developed the first
"laws of
heredity." Using these laws, scientists studied the characteristics of organisms for most of
the
next one hundred years following Mendel's discovery. These early studies concluded that
each
organism has two sets of character determinants, or genes (Stableford 16). For instance, in
regards to eye color, a child could receive one set of genes from his father that were
encoded one
blue, and the other brown. The same child could also receive two brown genes from his
mother.
The conclusion for this inheritance would be the child has a three in four chance of having
brown eyes, and a one in four chance of having blue eyes (Stableford 16).
Genes are transmitted through chromosomes which reside in the nucleus of every
living
organism's cells. Each chromosome is made up of fine strands of deoxyribonucleic acids,
or
DNA. The information carried on the DNA determines the cell's function within the
organism.
Sex cells are the only cells that contain a complete DNA map of the organism, therefore,
"the
structure of a DNA molecule or combination of DNA molecules determines the shape,
form, and
function of the [organism's] offspring " (Lewin 1). DNA discovery is attributed to the
research
of three scientists, Francis Crick, Maurice Wilkins, and James Dewey Watson in 1951.
They
were all later awarded the Nobel Prize in physiology and medicine in 1962 (Lewin
1).
"The new science of genetic engineering aims to take a dramatic short cut in the
slow
process of evolution" (Stableford 25). In essence, scientists aim to remove one gene from
an
organism's DNA, and place it into the DNA of another organism. This would create a new
DNA
strand, full of new encoded instructions; a strand that would have taken Mother Nature
millions
of years of natural selection to develop. Isolating and removing a desired gene from a
DNA
strand involves many different tools. DNA can be broken up by exposing it to ultra-high-
frequency
sound waves, but this is an extremely inaccurate way of isolating a desirable DNA section
(Stableford 26). A more accurate way of DNA splicing is the use of "restriction
enzymes, which are produced by various species of bacteria" (Clarke 1). The restriction
enzymes cut the DNA strand at a particular sequence of the nucleotide bases that make
up a
DNA molecule. Now that the desired portion of the DNA is cut out, it can be joined to
another
strand of DNA by using enzymes called ligases. The final important step in the creation of
a
new DNA strand is giving it the ability to self-replicate. This can be accomplished by using
special pieces of DNA, called vectors, that permit the generation of multiple copies of a
total
DNA strand and fusing it to the newly created DNA structure. Another newly developed
method, called polymerase chain reaction, allows for faster replication of DNA strands and
does
not require the use of vectors (Clarke 1).
The possibilities of genetic engineering are endless. Once the power to control the
instructions given to a single cell is mastered, anything can be accomplished. For
example,
insulin can be created and grown in large quantities by using an inexpensive gene
manipulation
method of growing a certain bacterium. This supply of insulin is also not dependent on the
supply
of pancreatic tissue from animals. Recombinant factor VIII, the blood clotting agent
missing in
people suffering from hemophilia, can also be created by genetic engineering. Virtually all
people who were treated with factor VIII before 1985 acquired HIV, and later AIDS.
Being
completely pure, the bioengineered version of factor VIII eliminates any possibility of viral
infection. Other uses of genetic engineering include creating disease resistant crops,
formulating
milk from cows already containing pharmaceutical compounds, generating vaccines, and
altering livestock traits (Clarke 1). In the not so distant future, genetic engineering will
become
a principal player in fighting genetic, bacterial, and viral disease, along with controlling
aging,
and providing replaceable parts for humans.
Medicine has seen many new innovations in its history. The discovery of
anesthetics
permitted the birth of modern surgery, while the production of antibiotics in the 1920s
minimized the threat from diseases such as pneumonia, tuberculosis and cholera. The
creation
of serums which build up the body's immune system to specific infections, before being
laid low
with them, has also enhanced modern medicine greatly (Stableford 59). All of these
discoveries,
however, will fall under the broad shadow of genetic engineering when it reaches its apex
in the
medical community.
Many people suffer from genetic diseases ranging from thousands of types of
cancers, to
blood, liver, and lung disorders. Amazingly, all of these will be able to be treated by
genetic
engineering, specifically, gene therapy. The basis of gene therapy is to supply a functional
gene
to cells lacking that particular function, thus correcting the genetic disorder or disease.
There
are two main categories of gene therapy: germ line therapy, or altering of sperm and egg
cells,
and somatic cell therapy, which is much like an organ transplant. Germ line therapy results
in a
permanent change for the entire organism, and its future offspring. Unfortunately, germ
line
therapy, is not readily in use on humans for ethical reasons. However, this genetic method
could, in the future, solve many genetic birth defects such as Down syndrome. Somatic
cell
therapy deals with the direct treatment of living tissues. Scientists, in a lab, inject the
tissues
with the correct, functioning gene and then re-administer them to the patient, correcting the
problem (Clarke 1).
Along with altering the cells of living tissues, genetic engineering has also proven
extremely helpful in the alteration of bacterial genes. "Transforming bacterial cells is easier
than transforming the cells of complex organisms" (Stableford 34). Two reasons are
evident for
this ease of manipulation: DNA enters, and functions easily in bacteria, and the
transformed
bacteria cells can be easily selected out from the untransformed ones. Bacterial
bioengineering
has many uses in our society; it can produce synthetic insulins, a growth hormone for the
treatment of dwarfism and interferons for treatment of cancers and viral diseases
(Stableford
34).
Throughout the centuries disease has plagued the world, forcing everyone to take
part in a
virtual "lottery with the agents of death" (Stableford 59). Whether viral or bacterial in
nature,
such diseases are currently combated with the application of vaccines and antibiotics. These
treatments, however, contain many unsolved problems. The difficulty with applying
antibiotics
to destroy bacteria is that natural selection allows for the mutation of bacteria cells,
sometimes
resulting in mutant bacteria which are resistant to a particular antibiotic. This now
indestructible bacterial pestilence wreaks havoc on the human body. Genetic engineering is
conquering this medical dilemma by utilizing diseases that target bacterial organisms. These
diseases are viruses, named bacteriophages, "which can be produced to attack specific
disease-causing
bacteria" (Stableford 61). Much success has already been obtained by treating animals
with a "phage" designed to attack the E. coli bacteria (Stableford 60).
Diseases caused by viruses are much more difficult to control than those caused by
bacteria. Viruses are not whole organisms, as bacteria are, and reproduce by hijacking the
mechanisms of other cells. Therefore, any treatment designed to stop the virus itself, will
also
stop the functioning of its host cell. A virus invades a host cell by piercing it at a site called
a
"receptor". Upon attachment, the virus injects its DNA into the cell, coding it to reproduce
more
of the virus. After the virus is replicated millions of times over, the cell bursts and the new
viruses are released to continue the cycle. The body's natural defense against such cell
invasion
is to release certain proteins, called antibodies, which "plug up" the receptor sites on healthy
cells.
This causes the foreign virus to not have a docking point on the cell. This process,
however, is
slow and not effective against a new viral attack. Genetic engineering is improving the
body's
defenses by creating pure antibodies in the lab for injection upon infection
with a
viral disease. This pure, concentrated antibody halts the symptoms of such a disease until
the
body's natural defenses catch up. Future procedures may alter the very DNA of human
cells,
causing them to produce interferons. These interferons would allow the cell to determine
whether a foreign body bonding with it is healthy or a virus. In effect, every cell would
be
able to recognize every type of virus and be immune to them all (Stableford 61).
Current medical capabilities allow for the transplant of human organs, and even
mechanical portions of some, such as the battery powered pacemaker. Current science can
even
re-apply fingers after they have been cut off in accidents, or attach synthetic arms and legs
to
allow patients to function normally in society. But would it not be incredibly convenient if
the
human body could simply regrow what it needed, such as a new kidney or arm? Genetic
engineering can make this a reality. Currently in the world, a single plant cell can
differentiate
into all the components of an original, complex organism. Certain types of salamanders
can re-grow
lost limbs, and some lizards can shed their tails when attacked and later grow them again.
Evidence of regeneration is all around and the science of genetic engineering is slowly
mastering
its techniques. Regeneration in mammals is essentially a kind of "controlled cancer", called
a
blastema. The cancer is deliberately formed at the regeneration site and then converted
into a
structure of functional tissues. But before controlling the blastema is possible, "a detailed
knowledge of the switching process by means of which the genes in the cell nucleus are
selectively activated and deactivated" is needed (Stableford 90). To obtain proof that such
a
procedure is possible one only needs to examine an early embryo and realize that it knows
whether to turn itself into an ostrich or a human. After learning the procedure to control
and
activate such regeneration, genetic engineering will be able to conquer such ailments as
Parkinson's, Alzheimer's, and other crippling diseases without grafting in new tissues. The
broader scope of this technique would allow the re-growth of lost limbs, repairing any
damaged
organs internally, and the production of spare organs by growing them externally
(Stableford
90).
Ever since biblical times the lifespan of a human being has been pegged at roughly
70
years. But is this number truly finite? In order to uncover the answer, knowledge of the
process
of aging is needed. A common conception is that the human body contains an internal
biological
clock which continues to tick for about 70 years, then stops. An alternate "watch" analogy
could
be that the human body contains a certain type of alarm clock, and after so many years, the
alarm sounds and deterioration begins. With that frame of thinking, the human body does
not
begin to age until a particular switch is tripped. In essence, stopping this process would
simply
involve a means of never allowing the switch to be tripped. W. Donner Denckla, of the
Roche
Institute of Molecular Biology, proposes the alarm clock theory is true. He provides
evidence
for this statement by examining the similarities between normal aging and the symptoms of
a
hormonal deficiency disease associated with the thyroid gland. Denckla proposes that as
we get
older the pituitary gland begins to produce a hormone which blocks the actions of the
thyroid
hormone, thus causing the body to age and eventually die. If Denckla's theory is correct,
conquering aging would simply be a process of altering the pituitary's DNA so it would
never be
allowed to release the aging hormone. In the years to come, genetic engineering may
finally
defeat the most unbeatable enemy in the world, time (Stableford 94).
The moral and safety questions surrounding genetic engineering currently cause
this new
science to be cast in a false light. Anti-technologists and political extremists spread false
interpretation of facts coupled with statements that genetic engineering is not natural and
defies
the natural order of things. The moral question of biotechnology can be answered by
studying
where the evolution of man is, and where it is leading our society. The safety question can
be
answered by examining current safety precautions in industry, and past safety records of
many
bioengineering projects already in place.
The evolution of man can be broken up into three basic stages. The first, lasting
millions
of years, slowly shaped human nature from Homo erectus to Homo sapiens. Natural
selection
provided the means for countless random mutations resulting in the appearance of such
human
characteristics as hands and feet. The second stage, after the full development of the
human
body and mind, saw humans moving from wild foragers to an agriculture based society.
Natural
selection received a helping hand as man took advantage of random mutations in nature
and bred
more productive species of plants and animals. The most bountiful wheats were collected
and
re-planted, and the fastest horses were bred with equally fast horses. Even in our recent
history the strongest black male slaves were mated with the hardest working female slaves.
The
third stage, still developing today, will not require the chance acquisition of super-mutations
in
nature. Man will be able to create such super-species without the strict limitations imposed
by
natural selection. By examining the natural slope of this evolution, the third stage is a
natural
and inevitable plateau that man will achieve (Stableford 8). This omniscient control of our
world may seem completely foreign, but the thought of the Egyptians erecting vast
pyramids
would have seemed strange to Homo erectus as well.
Many claim genetic engineering will cause unseen disasters spiraling our world into
chaotic darkness. However, few realize that many safety nets regarding bioengineering are
already in effect. The Recombinant DNA Advisory Committee (RAC) was formed under
the
National Institute of Health to provide guidelines for research on engineered bacteria for
industrial use. The RAC has also set very restrictive guidelines requiring Federal approval
if
research involves pathogenicity (the rare ability of a microbe to cause disease) (Davis,
Roche
69).
"It is well established that most natural bacteria do not cause disease. After many
years of
experimentation, microbiologists have demonstrated that they can engineer bacteria that are
just
as safe as their natural counterparts" (Davis, Roche 70). In fact the RAC reports that
"there has
not been a single case of illness or harm caused by recombinant [engineered] bacteria, and
they
now are used safely in high school experiments" (Davis, Roche 69). Scientists have also
devised other methods of preventing bacteria from escaping their labs, such as modifying
the
bacteria so that it will die if it is removed from the laboratory environment. This creates a
shield
of complete safety for the outside world. It is also thought that if such bacteria were to
escape it
would act like smallpox or anthrax and ravage the land. However, laboratory-created
organisms
are not as competitive as pathogens. Davis and Roche sum it up in plain layman's
terms,
"no matter how much Frostban you dump on a field, it's not going to spread" (70). In fact
Frostban, developed by Steven Lindow at the University of California, Berkeley, was
sprayed on
a test field in 1987 and was proven by a RAC committee to be completely harmless
(Thompson
104).
Fear of the unknown has slowed the progress of many scientific discoveries in the
past.
The thought of man flying or stepping on the moon did not come easy to the average
citizens of
the world. But the fact remains, they were accepted and are now an everyday occurrence
in our
lives. Genetic engineering too is in its period of fear and misunderstanding, but like every
great
discovery in history, it will enjoy its time of realization and come into full use in society.
The
world is on the brink of the most exciting step into human evolution ever, and through
knowledge and exploration, should welcome it and its possibilities with open arms.
Works Cited
Clarke, Bryan C. Genetic Engineering. Microsoft (R) Encarta.
Microsoft Corporation, Funk & Wagnalls Corporation, 1994.
Davis, Bernard, and Lissa Roche. "Sorcerer's Apprentice or Handmaiden
to Humanity." USA TODAY: The Magazine of the American Scene [GUSA] 118
Nov 1989: 68-70.
Lewin, Seymour Z. Nucleic Acids. Microsoft (R) Encarta. Microsoft
Corporation, Funk & Wagnalls Corporation, 1994.
Stableford, Brian. Future Man. New York: Crown Publishers, Inc., 1984.
Thompson, Dick. "The Most Hated Man in Science." Time 4 Dec. 1989:
102-104.
f:\12000 essays\sciences (985)\Biology\Genetic Engineering.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Bioengineering, or genetic engineering, is the altering of
genes in a particular species for a particular outcome.
It involves taking genes from their normal location in one
organism and either transferring them elsewhere or putting
them back into the original organism in different combinations.
Most biomolecules exist in low concentrations and as complex, mixed
populations with which it is not possible to work efficiently. This problem
was solved in 1970 using a bug, Escherichia coli, a normally innocuous
commensal occupant of the human gut. A piece of DNA of interest is inserted
into a vector molecule, a molecule with a bacterial origin of replication.
When the whole recombinant construction is introduced into bacteria, the
colonies all derived from a single original cell bearing the recombinant
vector produce a large amount of the DNA of interest in a short time. This
can be purified easily from contaminating bacterial DNA, and the resulting
product is said to have been "cloned".
So far, scientists have used genetic engineering to produce, for example:
· improved vaccines against animal diseases such as footrot and pig scours;
· pure human products such as insulin, and human growth hormone in commercial quantities;
· existing antibiotics by more economical methods;
· new kinds of antibiotics not otherwise available;
· plants with resistance to some pesticides, insects and diseases;
· plants with improved nutritional qualities to enhance livestock productivity.
Methods:
· Manipulation of the gene pool, which is related to hybridization: the breeding of species that are not the same but are related.
· The polymerase chain reaction (PCR), which is the production of many identical copies of a particular DNA fragment.
· Cloning, whose utility is important: it provides the ability to determine the genetic organization of particular regions or of the whole genome, and it also facilitates the production of naturally occurring and artificially modified biological products by the expression of cloned genes.
· Insertion of selectable marker genes to pick out recombinant molecules containing foreign inserts
· Removal or creation of useful sites for cloning
· Insertion of sequences which not only allow but greatly increase the expression of cloned genes in bacterial, animal and plant cells.
· The ability to take a gene from one organism (e.g. man or a tree), clone it in E. coli and express it in another organism (e.g. a yeast) depends on the universality of the genetic code, i.e. the triplets of bases which encode the amino acids in proteins (illustrated in the sketch below).
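Since the codon table is (nearly) the same in bacteria, yeast, plants and humans, a coding sequence reads out to the same protein wherever it is expressed, which is what makes the transfer described above possible. The Python sketch below uses a toy table covering only a handful of codons, and the input sequence is invented.

    # Toy illustration of the universality of the genetic code: the same codon
    # table applies whether the DNA is read in E. coli, yeast, or a human cell.
    # Only a few codons are included here; the input sequence is invented.
    CODON_TABLE = {
        "ATG": "Met", "TGG": "Trp", "AAA": "Lys", "GAA": "Glu",
        "TTC": "Phe", "GGC": "Gly", "TAA": "STOP",
    }

    def translate(coding_dna):
        protein = []
        for i in range(0, len(coding_dna) - 2, 3):     # read triplets (codons)
            amino_acid = CODON_TABLE[coding_dna[i:i + 3]]
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return "-".join(protein)

    print(translate("ATGGAATTCGGCAAATGGTAA"))   # Met-Glu-Phe-Gly-Lys-Trp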
f:\12000 essays\sciences (985)\Biology\Genetic Observations Through the Studies of Hybrid Corn Sing.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Date : 11/15/96
Site found at : infoseek search
Genetic Observations Through
The Studies of Hybrid Corn,
Single Gene Human Traits,
and Fruit Flies
The basic foundation of modern genetics was laid by Gregor Mendel (Corcos, 1993). Mendel was not the first to experiment with heredity, and our Lyman Briggs biology class will not be the last to deal with genetics. Genetics is the science of heredity. In our lab, we had three main objectives. First, we evaluated our data on monohybrid and dihybrid corn cross seed counts against Mendel's theoretical expectations of independent assortment and the segregation of alleles. Next, we used the Hardy-Weinberg Theorem to provide theoretically expected values for allele frequencies of single human gene traits. Lastly, we dealt with Drosophila melanogaster, examining red and white eye alleles to determine whether this gene is sex-linked or autosomal.
During the mid-1800's Mendel bred garden peas to study inheritance. He chose these plants because of their well-defined characteristics and the ease with which they can be grown and crossed (Campbell, 1996). Mendel wanted to know the genetic basis for variation among individuals and what accounted for the transmission of traits from generation to generation. Mendel followed traits for the P generation, F1 generation, and F2 generation. The P generation is the original true-breeding parents. Their hybrid offspring are the F1 generation, the first filial. The F2 generation is the second filial and results from the self-pollination of the F1 hybrids. It was predominantly his research on the F2 generation that led to Mendel's Law of Segregation and Law of Independent Assortment (Campbell, 1996).
Mendel's Law of Segregation states that alleles sort into separate gametes. He formed this law by performing monohybrid crosses, in which the F2 generation shows a 3:1 phenotypic ratio. By considering more than one trait Mendel formed his Law of Independent Assortment. He questioned whether traits were inherited independently or dependently. By performing dihybrid crosses he found that genes are independent and will form all possible combinations. Crossing two different traits resulted in a 9:3:3:1 phenotypic ratio (Campbell, 1996).
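The 3:1 and 9:3:3:1 ratios described above follow directly from enumerating the possible gamete combinations, as in a Punnett square. The Python sketch below is only an illustration and is not taken from the lab manual; the single-letter allele symbols are simplified stand-ins.

    # Enumerate a Punnett square and tally offspring phenotypes.
    from collections import Counter
    from itertools import product

    def cross(parent1_gametes, parent2_gametes, phenotype):
        counts = Counter()
        for g1, g2 in product(parent1_gametes, parent2_gametes):
            counts[phenotype(g1, g2)] += 1
        return counts

    # Monohybrid F1 x F1 cross: S (smooth) dominant over s (wrinkled).
    mono = cross(["S", "s"], ["S", "s"],
                 lambda a, b: "smooth" if "S" in (a, b) else "wrinkled")
    print(mono)   # Counter({'smooth': 3, 'wrinkled': 1})  -> the 3:1 ratio

    # Dihybrid F1 x F1 cross: shape (S/s) and color (P = purple, p = yellow),
    # with the two genes assorting independently into gametes.
    gametes = ["SP", "Sp", "sP", "sp"]
    def dihybrid_phenotype(a, b):
        shape = "smooth" if "S" in a + b else "wrinkled"
        color = "purple" if "P" in a + b else "yellow"
        return shape + ", " + color
    print(cross(gametes, gametes, dihybrid_phenotype))   # 9:3:3:1 across the four classes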
Thomas Hunt Morgan also made a major contribution to the study of inheritance. He was the first to associate a specific gene with a specific chromosome. Morgan used Drosophila melanogaster, commonly known as fruit flies. These were a good choice because they are prolific breeders and have only four pairs of chromosomes (Davis, 1996). Morgan linked a fly's eye color to its sex. He found that females carry two copies of this gene, while the male carries only one. Morgan's work also led to a new, more widely used way of symbolizing alleles (Campbell, 1996).
Materials and Methods
Materials and methods were as per Davis (1996). For the corn cross lab, corn was counted off the ears of corn, rather than through jars. For the human characteristics, between 143 and 149 students were observed. Seven different single human gene traits were considered for this lab. The fruit fly cross was set up on September 24, 1996. The parental (P) generation began with ten red-eyed males and six white-eyed females. The parent flies were removed on October 3, 1996. Data collection was stopped on October 10, 1996.
Results
A Punnett square was used for the monohybrid corn cross to find the genotypes of the potential offspring. The gamete combinations were Su=smooth seeds, with an observed value of 497 and an expected value of 451.5; and su=wrinkled seeds, with an observed value of 105 and an expected value of 150.5. The chi-squared value was 18.35, which did not correspond with any of the given probability values. The Null hypothesis, that the observed number of smooth seeds versus wrinkled seeds is not different from the expected 3:1 ratio for a monohybrid cross, was therefore rejected (see table 1 and figure 1).
A dihybrid cross was used to find the genotypes of potential offspring regarding two traits. The offspring possibilities were SuP=smooth, purple seeds with an observed value of 577 and an expected value of 617.6; Sup=smooth, yellow seeds with an observed value of 229 and an expected value of 205.9; suP=wrinkled, purple seeds with an observed value of 210 and an expected value of 205.9; and sup=wrinkled, yellow seeds with an observed value of 82 and an expected value of 68.6. The chi-squared value was 7.96, and the Null hypothesis was again rejected. The Null hypothesis stated that the observed number of smooth, purple seeds versus smooth, yellow seeds versus wrinkled, purple seeds versus wrinkled, yellow seeds is not different from the expected ratio of 9:3:3:1 for a dihybrid cross (see table 2 and figure 2).
For the single gene human traits, between 143 and 149 students were observed. The amount of data for each of the seven different traits varied. One trend was that six out of the seven traits had a higher frequency for the recessive allele (see table 3).
In the fruit fly cross, a total of 99 fruit flies were collected: forty-four white-eyed males with the genotype X^wY, and forty-five red-eyed females with the genotype X^+X^w. These offspring resulted from a cross between red-eyed males, X^+Y, and white-eyed females, X^wX^w (see table 4).
Discussion
Data for the monohybrid cross did not correspond with the expected values. The monohybrid phenotypic ratio of 3 smooth seeds versus 1 wrinkled seed is derived from the Punnett square (see table 1). My observed values were 497 smooth seeds and 105 wrinkled seeds. Their expected values were 451.5 smooth seeds and 150.5 wrinkled seeds (see figure 1). The chi-squared value was used to interpret the data, and because the value was too high, the Null hypothesis was rejected (see figure 1). This test can be used to see how well data fit a theoretical expectation. The expected frequencies can be found by multiplying the Punnett square phenotypic ratios by the total amount of corn counted. The chi-squared value was found to be 18.35 and was then compared to the probability chart (Davis, 1996). The probability value must be greater than 0.05 to accept the Null hypothesis. The Null hypothesis was rejected since there was not a 3:1 ratio.
The dihybrid cross also rejected the Null hypothesis. The observed and expected values differed for smooth & purple seeds, smooth & yellow seeds, wrinkled & purple seeds, and wrinkled & yellow seeds (see figure 2). The chi-squared value was calculated the same way as for the monohybrid cross, but with four phenotype classes rather than two. The Null hypothesis expected a 9:3:3:1 ratio, which the data did not show.
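The chi-squared arithmetic described above can be checked directly. The short Python sketch below is not part of the original lab; it simply recomputes both statistics from the reported counts. Note that the wrinkled, yellow observed count of 82 is inferred from the reported expected values and chi-squared total rather than stated explicitly in the results.

    # Goodness-of-fit check: observed counts against an expected phenotypic ratio.
    def chi_squared(observed, ratio):
        total = sum(observed)
        expected = [total * r / sum(ratio) for r in ratio]
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected)), expected

    # Monohybrid cross: 497 smooth, 105 wrinkled against 3:1.
    chi2, expected = chi_squared([497, 105], [3, 1])
    print(round(chi2, 2), expected)   # ~18.34 with expected [451.5, 150.5]; 0.05 cutoff is 3.84 (1 df)

    # Dihybrid cross against 9:3:3:1 (wrinkled, yellow observed count assumed to be 82).
    chi2, expected = chi_squared([577, 229, 210, 82], [9, 3, 3, 1])
    print(round(chi2, 2))             # ~7.96; 0.05 cutoff is 7.81 (3 df), so again rejected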
Both the monohybrid and dihybrid crosses had chi-squared values that were too high. Therefore, both Null hypotheses were rejected. This may have been due to an observational error; the kernels may have been miscounted or interpreted incorrectly.
The monohybrid corn cross illustrated Mendel's Law of Segregation, and Mendel's Law of Independent Assortment was demonstrated by the dihybrid corn cross. The understanding of meiosis is central to an understanding of genetics. Meiosis is a process consisting of two consecutive cell divisions, called meiosis I and meiosis II, resulting in four daughter cells, each with only half as many chromosomes as the parent (Campbell, 1996). Genetics is the science of heredity. Therefore, in understanding the process of meiosis, we learn how it is that we acquire our own unique set of genes.
By conducting the single gene human trait experiment, phenotypes of a given list of traits were determined by individual students. A set of possible genotypes for these given traits were also found (see table 3).
The allele frequency is a numerical value describing how often the dominant or recessive allele appears in a population; these values are found by using the Hardy-Weinberg equation (see figure 3). One significant item was the frequency of alleles among the students in our class. The recessive allele had a higher frequency for six out of the seven traits considered. The traits that followed this trend were: the bent pinky, eye color, widow's peak, thumb crossing, ear lobe, and the hitchhiker's thumb. The tendency for the recessive allele to have a higher frequency may have to do with natural selection. On the other hand, the phenotype frequency differs from the allele frequency because a phenotype is produced by two alleles, and heterozygous genotypes contain both alleles.
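As a rough illustration of how the Hardy-Weinberg equation turns a phenotype count into allele frequencies, the sketch below uses invented numbers (the real class data are in table 3): the fraction of students showing the recessive phenotype is taken as q^2, and p follows from p + q = 1.

    import math

    def hardy_weinberg(recessive_count, total):
        """Estimate allele and genotype frequencies from the recessive phenotype fraction."""
        q2 = recessive_count / total      # frequency of the homozygous recessive genotype
        q = math.sqrt(q2)                 # recessive allele frequency
        p = 1 - q                         # dominant allele frequency
        return p, q, {"p^2": p * p, "2pq": 2 * p * q, "q^2": q * q}

    # Hypothetical example: 90 of 145 students lacking a widow's peak.
    p, q, genotypes = hardy_weinberg(90, 145)
    print(round(p, 2), round(q, 2))       # here the recessive allele frequency exceeds the dominant
    print({k: round(v, 2) for k, v in genotypes.items()})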
I could only definitely determine two of my genotypes for this experiment. They are not having a bent pinky, bb, and not having a widow's peak, ww. This can be determined because both of these traits are homozygous recessive (see table 3). I don't know whether or not the rest of
my single human gene traits are homozygous dominant or heterozygous.
To decipher the genotypes of the fruit flies a Punnett square was used (see table 4). The fruit fly cross started with ten red-eyed males and six white-eyed females. Their offspring were red-eyed females and white-eyed males. The experiment turned out the way we expected. Red eyes are the dominant trait, with forty-five red-eyed female offspring counted, while white eyes, the recessive trait, appeared in the forty-four males counted. These traits appeared in the opposite sex than in the P generation. Therefore, the red/white eye allele is sex-linked among fruit flies. Hence the female offspring receive the dominant red-eye allele from the male parent, while the male offspring receive the white-eye allele from the female parent (see table 4).
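The sex-linkage reasoning above can be made concrete by enumerating the cross itself. The sketch below is only an illustration of the logic, not part of the lab procedure; it uses the X^+ / X^w notation from the results.

    # P generation cross: red-eyed male (X+ Y) x white-eyed female (Xw Xw).
    from itertools import product

    father_gametes = ["X+", "Y"]     # red-eyed male
    mother_gametes = ["Xw", "Xw"]    # white-eyed female

    for egg, sperm in product(mother_gametes, father_gametes):
        offspring = sorted([egg, sperm])
        sex = "male" if "Y" in offspring else "female"
        eyes = "red" if "X+" in offspring else "white"
        print(offspring, sex, eyes)
    # Every daughter is X+ Xw (red-eyed carrier) and every son is Xw Y (white-eyed),
    # the reciprocal of the parental pattern -- exactly what the fly counts showed.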
Through the genetics lab we observed many different concepts. Through our evaluation of the monohybrid and dihybrid corn cross seed counts we connected them to Mendel's Laws of Independent Assortment and the Segregation of Alleles. Using the monohybrid cross, Mendel's Law of Segregation was illustrated by the 3:1 phenotypic ratio in the second filial, F2, generation. Using the dihybrid corn cross, Mendel's Law of Independent Assortment illustrated that the four possible phenotypes form a 9:3:3:1 phenotypic ratio. For the single gene human traits experiment, we used the Hardy-Weinberg Theorem and equation to find the allele frequencies. For the experiment with Drosophila melanogaster we examined a fruit fly cross between red-eyed males and white-eyed females. We determined that this trait is sex-linked because the offspring were red-eyed females and white-eyed males. Throughout the genetics lab each objective was addressed and explained. A lot was learned about Mendel, genetics, and the hereditary
process that makes us who we are today.
Cited Literature
Campbell, N.A. 1996. Biology. The Benjamin Cummings Publishing Co., New York, pp. 238-279.
Corcos, Alain F. and Floyd V. Monaghan. 1993. Gregor Mendel's Experiments on Plant Hybrids. Rutgers University Press, New Jersey, pp. 45-46, 76, 105-112, 133.
Davis, M. 1996. Genetics. LBS 144 Laboratory Manual. The Lyman Briggs School, Michigan State University, East Lansing, pp. 25-36.
f:\12000 essays\sciences (985)\Biology\Geology Transitions of Reptiles to Mammals.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Transitions of Reptiles to Mammals
A long long time ago, in a galaxy not too far away, was a little blue planet called
Earth, and on this world not a single mammal lived. However, a lot of time has passed since
then and we now have lots of furry creatures that are collectively called mammals. How
did they get there? Where did they come from? These are the kinds of questions that led
me to my subject of choice. I will endeavor to provide examples, using specific
transitional fossils, to show that mammals have evolved from a group of reptiles and were
simply not placed here by unknown forces.
Before I begin, I would like to define some terms so that nobody gets left in the
dust. The term transitional fossil can be used in conjunction with the term general
lineage; together they help explain how one species became another.
"General lineage":
This is a sequence of similar genera or families, linking an older to a very different younger
group. Each step in the sequence consists of some fossils that represent a certain genus or
family, and the whole sequence often covers a span of tens of millions of years. A lineage
like this shows obvious intermediates for every major structural change, and the fossils
occur roughly (but often not exactly) in the expected order. However, usually there are
still gaps between each of the groups. Sometimes the individual specimens are not thought
to be directly ancestral to the next-youngest fossils (e.g. they may be "cousins" or
"uncles" rather than "parents"). However they are assumed to be closely related to the
actual ancestor, since they have similar intermediate characteristics.
Where Does It All Begin?
Mammals were derived during the Triassic Period (from 245 to 208 million years
ago), which began with relatively warm and wet conditions but became increasingly
hot and dry, from members of the reptilian order Therapsida. The
therapsids, members of the subclass Synapsida (sometimes called the mammal-like
reptiles), generally were unimpressive in relation to other reptiles of their time. Synapsids
were present in the Carboniferous Period (about 280 to 345 million years ago) and are one
of the earliest known reptilian groups. Although therapsids were primarily predators by
nature, generally small active carnivores, some adapted to a herbivorous way of life as
well. Primitive therapsids are present as fossils in certain Middle Permian
deposits; later forms are known from every continent except Australia but are most
common in the Late Permian and Early Triassic of South Africa.
The several features that separate modern reptiles from modern mammals
doubtlessly evolved at different rates. Many attributes of mammals are correlated with
their highly active lifestyle; for example, efficient double circulation of blood with a
completely four-chambered heart, anucleate and biconcave erythrocytes (blood cells), the
diaphragm, and the secondary palate (which separates passages of food and air and allows
breathing during mastication (chewing) or suckling). Hair for insulation correlates with
endothermy (being warm-blooded), the physiological maintenance of individual
temperature independent of the environmental temperature, and endothermy allows high
levels of sustained activity. The unique characteristics of mammals thus would seem to
have evolved as a complex interrelated system.
Transitions to New Higher Taxa
Transitions often result in a new "higher taxon" (a new genus, family, order, etc.)
from a species belonging to a different, older taxon. There is nothing magical about this.
The first members of the new group are not bizarre; they are simply a new, slightly
different species, barely different from the parent species. Eventually they give rise to a
more different species, which in turn gives rise to a still more different species, and so on,
until the descendents are radically different from the original parent. For example, the
Order Perissodactyla (horses) and the Order Cetacea (whales) can both be traced back to
early Eocene animals that looked only marginally different from each other, and didn't
look at all like horses or whales. (They looked more like small, dumb foxes with raccoon-
like feet and simple teeth.) But over the following tens of millions of years, the
descendents of those animals became more and more different, and now we call them two
different orders.
Major Skeletal Differences (derived from the fossil record)
The mammalian skeletal system shows a number of advances over that of reptiles.
The mode of ossification (process of bone formation) of the long bones is one
characteristic. In reptiles each long bone has a single centre of ossification, and
replacement of cartilage by bone proceeds from the centre toward the ends. In mammals
secondary centres of ossification develop at the ends of the bones. Mammalian skeletal
growth is termed determinate, for once the actively growing zone of cartilage is used up,
growth in length ceases. As in all bony vertebrates, of course, there is continual renewal of
bone throughout life. The advantage of secondary centres of ossification at the ends of
bones lies in the fact that the bones have strong articular surfaces before the skeleton is
mature. In general, the skeleton of the adult mammal has less structural cartilage than
does that of a reptile.
The skeletal system of mammals and other vertebrates is broadly divisible into axial
and appendicular portions. The axial skeleton consists of the skull, the backbone and ribs,
and serves primarily to protect the central nervous system. The limbs and their girdles make
up the appendicular skeleton. In addition, there are skeletal elements derived from gill
arches of primitive vertebrates, collectively called the visceral skeleton. Visceral elements
in the mammalian skeleton include jaws, the hyoid apparatus supporting the tongue, and
the auditory ossicles of the middle ear. The postcranial axial skeleton in mammals
generally has remained rather conservative during the course of evolution. The vast
majority of mammals have seven cervical (neck) vertebrae and do not have lumbar ribs;
both characteristics are unlike reptiles.
The skull of mammals differs markedly from that of reptiles because of the great
expansion of the brain. The sphenoid bones that form the reptilian braincase form only the
floor of the braincase in mammals. In mammals a secondary palate, that is not present in
reptiles, is formed by processes of the maxillary bones and the palatines. The secondary
palate separates the nasal passages from the oral cavity and allows continuous breathing
while chewing or suckling.
The bones of the mammalian middle ear are diagnostic of the class. The three
auditory ossicles form a series of levers that serve mechanically to amplify the
vibrations of the tympanic membrane, or eardrum, produced by sound waves as
disturbances of the air. The innermost bone is the stapes, or "stirrup bone." It rests against the oval
window of the inner ear. The stapes is homologous with the entire stapedial structure of
reptiles, which in turn was derived from the hyomandibular arch of primitive vertebrates.
The incus,
or "anvil", articulates with the stapes. The incus was derived from the quadrate bone,
which is involved in the jaw articulation in reptiles. The malleus, or "hammer", rests
against the tympanic membrane and articulates with the incus. The malleus is the
homologue of the reptilian articular bone. The mechanical efficiency of the middle ear has
thus been increased by the incorporation of two bones of the reptilian jaw assemblage. In
mammals the lower jaw is a single bone, the dentary.
The mammalian limbs and girdles have been greatly modified with locomotor
adaptations. The primitive mammal had well developed limbs and was five-toed. In each
limb there are two distal bones (radius and ulna in the forelimb; tibia and fibula in the
hindlimb) and a single proximal bone (humerus; femur). The number of phalangeal bones
in each digit, numbered from inside outward, is 2-3-3-3-3 in primitive mammals and
2-3-4-5-4 in primitive reptiles. Modifications in mammalian limbs have involved
reduction, loss, or fusion of bones, such as loss of the clavicle from the shoulder girdle
and reduction in the number of toes.
The Transition
This is a documented transition between vertebrate classes. Each group is clearly
related to both the group that came before, and the group that came after, and yet the
sequence is so long that the fossils at the end are astoundingly different from those at the
beginning. As Gingerich has stated (1977), "While living mammals are well separated from
other groups of animals today, the fossil record clearly shows their origin from reptilian
stock and permits one to trace the origin and radiation of mammals in considerable detail."
This list starts with pelycosaurs (early synapsid reptiles) and continues with therapsids and
cynodonts up to the first unarguable "mammal". Most of the changes in this transition
involved elaborate repackaging of an expanding brain and special sense organs,
remodeling of the jaws and teeth for more efficient eating, and changes in the limbs and
vertebrae related to active, legs-under-the-body locomotion.
Here are some differences to keep an eye on:
Early Reptiles -> Mammals
1. No fenestrae in skull -> Massive fenestra exposes all of braincase
2. Braincase attached loosely -> Braincase attached firmly to skull
3. No secondary palate -> Complete bony secondary palate
4. Undifferentiated dentition -> Incisors, canines, premolars, molars
5. Cheek teeth uncrowned points -> Cheek teeth (premolars and molars) crowned and cusped
6. Teeth replaced continuously -> Teeth replaced once at most
7. Teeth with single root -> Molars double-rooted
8. Jaw joint quadrate-articular -> Jaw joint dentary-squamosal
9. Lower jaw of several bones -> Lower jaw of dentary bone only
10. Single ear bone (stapes) -> Three ear bones (stapes, incus, malleus)
11. Jointed external nares -> Separate external nares
12. Single occipital condyle -> Double occipital condyle
13. Long cervical ribs -> Cervical ribs tiny, fused to vertebrae
14. Lumbar ribs -> Lumbars are rib-free
15. No diaphragm -> Diaphragm present
16. Limbs sprawled out from body -> Limbs under body
17. Scapula simple -> Scapula with big spine for muscles
18. Pelvic bones unfused -> Pelvis fused
19. Two sacral (hip) vertebrae -> Three or more sacral vertebrae
20. Toe bone #'s 2-3-4-5-4 -> Toe bones 2-3-3-3-3
21. Body temperature variable -> Body temperature constant
- Paleothyris (early Pennsylvanian) - An early captorhinomorph reptile, with no temporal
fenestrae at all.
- Protoclepsydrops haplous (early Pennsylvanian) - The earliest known synapsid reptile.
Little temporal fenestra, with all surrounding bone intact. Fragmentary. Had amphibian-
type vertebrae with tiny neural processes. (reptiles had only just separated from
amphibians)
- Clepsydrops (early Pennsylvanian) - The second earliest known synapsid. These early,
very primitive synapsids are a group of pelycosaurs collectively called
"ophiacodonts".
- Archaeothyris (early-mid Pennsylvanian) - A slightly later ophiacodont. Small temporal
fenestra, now with some reduced bones (supratemporal). Braincase still just loosely
attached to skull. Slight hint of different tooth types. Still has some extremely primitive
amphibian features.
- Varanops (early Permian) - Temporal fenestra further enlarged. Braincase floor shows
first mammalian tendencies and first signs of stronger attachment to the rest of the skull.
Lower jaw shows first changes in jaw structure. Body narrower, deeper, vertebral column
more strongly constructed. Ilium further enlarged, lower-limb musculature starts to
change. This animal was more mobile and active. Too late to be a true ancestor, must be a
"cousin".
- Haptodus (late Pennsylvanian) - One of the first known sphenacodonts, showing the
initiation of sphenacodont features while retaining many primitive features of the
ophiacodonts. Skull more strongly attached to the braincase. Teeth become size
differentiated, with the largest teeth in the canine region and fewer teeth overall. Stronger jaw muscles.
Vertebrae parts and joints more mammalian. Neural spines on vertebrae longer.
Hip strengthened by fusing to three sacral vertebrae instead of just two. Limbs very well
developed.
- Dimetrodon, Sphenacodon (early Permian) - More advanced pelycosaurs, clearly closely
related to the first therapsids. Dimetrodon is almost definitely a "cousin" and not a direct
ancestor, but as it is known from very complete fossils, it's a good model for
sphenacodont anatomy. Medium sized fenestra. Teeth further differentiated, with small
incisors, two huge deep-rooted upper canines on each side, followed by smaller cheek
teeth, all replaced continuously. Fully reptilian jaw hinge. Lower jaw made of multiple
bones and first signs of a bony prong later involved in the eardrum, but there was no eardrum
yet, so these reptiles could only hear ground-borne vibrations (they did have a reptilian
middle ear). Vertebrae had still longer neural spines (especially so in Dimetrodon, which
had a sail), and longer transverse spines for stronger locomotion muscles.
- Procynosuchus (late Permian) - The first known cynodont. The cynodonts are a famous group of very
mammal-like therapsid reptiles, sometimes considered to be the first mammals. Probably
arose from the therocephalians, judging from the distinctive secondary palate and
numerous other skull characters. Enormous temporal fossae for very strong jaw muscles,
formed by just one of the reptilian jaw muscles, which has now become the mammalian
masseter (muscle). Secondary palate now composed mainly of palatine bones, rather than
vomers and maxilla as in older forms. Lower incisor teeth were reduced to four per side,
instead of the previous six. Dentary now is 3/4 of lower jaw; the other bones are now a
small complex near the jaw hinge. Vertebral column starts to look mammalian: first two
vertebrae modified for head movements, and lumbar vertebrae start to lose ribs. A
diaphragm may have been present.
-Thrinaxodon (early Triassic) - A more advanced cynodont. Further development of
several of the cynodont features seen already. Temporal fenestra still larger, larger jaw
muscle attachments. Bony secondary palate almost complete. Functional division of teeth:
incisors (four uppers and three lowers), canines, and then 7-9 cheek teeth with cusps for
chewing. The cheek teeth were all alike (no premolars and molars). The whole locomotion
was more agile. Number of toe bones is 2-3-4-4-3, intermediate between the reptile
number (2-3-4-5-4) and the mammalian (2-3-3-3-3), and the "extra" toe bones were tiny.
- Exaeretodon (late Triassic) - True bony secondary palate formed exactly as in mammals.
Mammalian toe bones (2-3-3-3-3). Lumbar ribs totally lost.
- Sinoconodon (early Jurassic) - Proto-mammal. Eyesocket fully mammalian now (closed
medial wall). Hindbrain expanded. Permanent cheek teeth, like mammals, but the other
teeth were still replaced several times. Mammalian jaw joint stronger, with large dentary
condyle fitting into a distinct fossa on the squamosal. This final refinement of the joint
automatically makes this animal a true "mammal".
- Peramus (late Jurassic) - An advanced placental-type mammal. The closest known
relative of the placentals and marsupials. Has attained a fully mammalian three-boned
middle ear with excellent high-frequency hearing.
- Steropodon galmani (early Cretaceous) - The first known monotreme (egg laying
mammals).
- Pariadens kirklandi (late Cretaceous) - The first definite marsupial.
- Kennalestes and Asioryctes (late Cretaceous) - Small, slender animals; eyesockets open
behind; simple ring to support eardrum; primitive placental-type brain with large olfactory
bulbs; basic primitive mammalian tooth pattern. Canine now double rooted. Still just a
trace of a non-dentary bone (the coronoid process), on the otherwise all-dentary jaw.
"Could have given rise to nearly all subsequent placentals." says Carroll (1988)
So, by the late Cretaceous the three groups of modern mammals were in place:
monotremes, marsupials, and placentals. Placentals appear to have arisen in East Asia and
spread to the Americas by the end of the Cretaceous. In the late Cretaceous, placentals
and marsupials had started to diversify a bit, and after the dinosaurs died out, in the
Paleocene, this diversification accelerated. For instance, in the mid-Paleocene the placental
fossils include a very primitive primate-like animal (Purgatorius - known only from a
tooth, though, and may actually be an early ungulate), a herbivore-like jaw with molars
that have flatter tops for better grinding, and also an insectivore (Paranygenulus).
Because the characteristics that separate reptiles and mammals evolved at different
rates and in response to a variety of interrelated conditions, at any point in the period of
transition from reptiles to mammals there were forms that combined various
characteristics of both groups. Such a pattern of evolution is termed "mosaic" and is a
common phenomenon in those transitions marking the origin of major new adaptive types.
To simplify definitions and to allow the strict delimitation of the Mammalia, some authors
have suggested basing the boundary on a single character, the articulation of the jaw
between the dentary and squamosal bones and the attendant movement of accessory jaw
bones to the middle ear as auditory ossicles. The use of a single character allows the
placement in a logical classification of numerous fossil species, other mammalian
characters of which, such as the degree of endothermy and nursing of young and the
condition of the internal organs, probably never will be evaluated. It must be recognized,
however, that if the advanced therapsids were alive today, taxonomists would be hard-put
to decide which to place in the Reptilia and which in the Mammalia.
References
Carroll, R. 1988. Vertebrate Paleontology and Evolution. W.H.
Freeman and Co., New York
Gingerich, P.D. 1977. Patterns of Evolution in the Mammalian Fossil Record.
Elsevier Scientific Pub. Co.
Gingerich, P.D. 1985. Species in the Fossil Record: Concepts, Trends, and
Transitions. Paleobiology.
Rowe, T. 1988. Definition, Diagnosis, and Origin of Mammalia.
J. Vert. Paleontology.
f:\12000 essays\sciences (985)\Biology\Glossay and Defenitions.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Glossary and Definitions
Distribution:
Drug distribution is the process by which a drug reversibly leaves the blood stream and enters the interstitium (extracellular fluid) and/or the cells of the tissues. The delivery of a drug from the plasma to the interstitium primarily depends on blood flow, capillary permeability, the degree of binding of the drug to plasma and tissue proteins, and the relative hydrophobicity of the drug.
Excipient:
Vehicle. A more or less inert substance added in a prescription as a diluent or vehicle or to give form or consistency when the remedy is given in pill form; simple syrup, aromatic powder, honey, and various elixirs are examples.
Gel:
A colloidal state in which the molecules of the dispersed phase form a three-dimensional structure in the continuous phase to produce a semisolid material such as a jelly. For example, a warm, dilute (2 percent) solution of gelatin (a protein mixture) forms, on cooling, a stiff gel in which the molecules of the continuous phase are trapped in the holes of a "brush-heap"-like structure of the gelatin. Administered orally.
Microemulsion:
Microemulsions are thermodynamically stable, optically transparent, isotropic mixtures of a biphasic oil-water system stabilized with surfactants. The diameter of droplets in a microemulsion may be in the range of 100 A to 1000 A. Microemulsion may be formed spontaneously by agitating the oil and water phases with carefully selected surfactants. The type of emulsion produced depends upon the properties of the oil and surfactants utilized.
Ointment:
Semisolid preparations intended for topical application. Most ointments are applied to the skin, although they may also be administered ophthalmically, nasally, aurally, rectally, or vaginally. With a few exceptions, ointments are applied for their local effect on the tissue membrane rather than for systemic effects.
Professional skills:
Body of systematic scientific knowledge, manual dexterity and deftness, and proficiency, resulting from training, practice and experience, particular to an individual who has completed the formal education and examination required for membership in a profession.
Water:
A clear, colorless, odorless and tasteless liquid, H2O, that is essential for most plant and animal life and is the most widely used of all solvents. Any of the various forms of water, as rain. A bodily fluid, as urine, perspiration or tears. Any of various liquids that contain and somewhat resemble water. Naturally occurring water exerts its solvent effect on most substances it contacts and thus is impure and contains varying amounts of dissolved inorganic salts, usually sodium, potassium, calcium, magnesium, iron, chlorides, sulfates and bicarbonates, as well as dissolved and undissolved organic matter and microorganisms.
f:\12000 essays\sciences (985)\Biology\Greenhouse Effect.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The greenhouse effect, in environmental science, is a popular term for the
effect that certain variable constituents of the Earth's lower atmosphere have on
surface temperatures. These gases--water vapor (H2O), carbon dioxide (CO2), and
methane (CH4)--keep ground temperatures at a global average of about 15 degrees
C (60 degrees F). Without them the average would be below the freezing point of
H2O. The gases have this effect because, as incoming solar radiation strikes the
surface, the surface gives off infrared radiation, or heat, that the gases trap and
keep near ground level. The effect is comparable to the way in which a greenhouse
traps heat, hence the term.
Environmental scientists are concerned that changes in the variable contents
of the atmosphere (particularly changes caused by human activities) could cause the
Earth's surface to warm up to a dangerous degree. Even a limited rise in average
surface temperature might lead to at least partial melting of the polar ice caps and
hence a major rise in sea level, along with other severe environmental agitation. An
example of a runaway greenhouse effect is Earth's near-twin planetary neighbor
Venus. Because of Venus's thick CO2 atmosphere, the planet's cloud-covered
surface is hot enough to melt lead.
Water vapor is an important "greenhouse" gas. It is a major reason why
humid regions experience less cooling at night than do dry regions. However,
variations in the atmosphere's CO2 content are what have played a major role
in past climatic changes. In recent decades there has been a global increase in
atmospheric CO2, largely as a result of the burning of fossil fuels. If the many
other determinants of the Earth's present global climate remain more or less
constant, the CO2 increase should raise the average temperature at the Earth's
surface. As the atmosphere warmed, the amount of H2O would probably also
increase, because warm air can contain more H2O than can cooler air. This process
might go on indefinitely. On the other hand, reverse processes could develop such
as increased cloud cover and increased absorption of CO2 by phytoplankton in the
ocean. These would act as natural feedbacks, lowering temperatures.
In fact, a great deal remains unknown about the cycling of carbon through
the environment, and in particular about the role of oceans in this atmospheric
carbon cycle. Many further uncertainties exist in greenhouse-effect studies because
the temperature records being used tend to represent the warmer urban areas rather
than the global environment. Beyond that, the effects of CH4, natural trace gases,
and industrial pollutants--indeed, the complex interactions of all of these climate
controls working together--are only beginning to be understood by workers in the
environmental sciences.
Despite such uncertainties, numerous scientists have maintained that the rise
in global temperatures in the 1980s and early 1990s is a result of the greenhouse
effect. A report issued in 1990 by the Intergovernmental Panel on Climate Change
(IPCC), prepared by 170 scientists worldwide, further warned that the effect could
continue to increase markedly. Most major Western industrial nations have pledged
to stabilize or reduce their CO2 emissions during the 1990s. The U.S. pledge thus
far concerns only chlorofluorocarbons (CFCs). CFCs attack the ozone layer and
contribute thereby to the greenhouse effect, because the ozone layer protects the
growth of ocean phytoplankton.
f:\12000 essays\sciences (985)\Biology\Gregor Johann Mendel.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Gregor Johann Mendel
Gregor Mendel was one of the first people in the history of science to
discover the principles of genetics. He carried out his work independently while living in Brunn,
Czechoslovakia. In Brunn he was a monk and later the Abbot of the monastery in
Brunn. While he was in Brunn he performed many experiments with garden
peas. With the information he observed he wrote a paper where he described
the patterns of inheritance in terms of seven pairs of contrasting traits that
appeared in different pea-plant varieties. All of the experiments he performed
utilized the pea-plant, which in this case is the basis of the experiment.
Mendel's work was reported at a meeting of the Brunn Society for the Study of
Natural Science in 1865, and was published the following year. Mendel's paper
presented a completely new and carefully documented theory of inheritance,
but it did not lead immediately to a surge of genetic research. The scientists
who read his paper, accustomed to more complex theories, dismissed it because it
explained inheritance with such a simple model. His work was rediscovered by Hugo de Vries in The
Netherlands, Carl Correns in Germany, and Erich Tschermak in Austria, all at about the
same time in 1900. They named the units Mendel described "genes." When
the gene has a slightly different base sequence it is called an "allele."
Mendel also developed 3 laws or principles. The first principle is called
the, "Principle of Segregation." This principle states that the traits of an
organism are determined by individual units of heredity called genes. Both adult
organisms have one allele from each parent, which gives both organisms 2
alleles. The alleles are separated or "segregated" from each other with the
reproductive cell formation. Mendel's second principle is the, "Principle of
independent assortment." This principle states that the expression of a gene
for any single trait is usually not influenced by the expression of another trait.
Mendel's third and last principle is called the "Principle of dominance." This
principle states that an organism with contrasting alleles for the same gene has
one allele that may be dominant over the other (as round is dominant over
wrinkled for seed shapes in pea-plants). All the principles just stated are
Mendel's Laws of genetics.
Gregor Johann Mendel
Misja Prins
Biology II I.B.
Period 3
12/14/96
f:\12000 essays\sciences (985)\Biology\Grey Wolves.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ANIMALIA VERTEBRATA MAMMALIA CARNIVORA CANIDAE CANIS LUPUS
AND
ANIMALIA VERTEBRATA MAMMALIA CARNIVORA CANIDAE CANIS NIGER
Introduction:
Any person who has been able to catch a glimpse of any
type of wolf is indeed a lucky man. The wolf is one of the
earth's most cowardly and fearful animals, and it is so sly and,
pardon the expression, foxy, that it is almost a waste of time
to try and catch him in any kind of trap.
Although he can be cowardly and fearful, he can also be
one of the most vicious and blood-thirsty of all animals. Often,
they simply kill as much prey as is possible, regardless of
hunger and appetite. This is done by "hamstringing" their prey.
This leaves them helpless and unable to move. Then the wolf
pack can eat and tear him apart at their own will. Although
savage and bloodthirsty, wolves are among some of the world's
smartest and most perceptive mammals.
Where found:
Wolves are found all over the world, and on almost every
major continent of the earth. The following wolves are types of
Gray Wolves (Canis lupus).
In eastern Europe the European Wolf (Canis lupus lupus) can
be found even though it used to roam most of western Europe as
well. In Spain, two wolves have also been identified-Canis lupus
deitanus and Canis lupus signatus. While the first is similar to
many of the other European wolves, the latter may be more
closely related to the jackal (Canis aureus), than to a wolf.
The Caucasion Wolf (Canis lupus cubanensis) is found in many
parts of eastern Europe and western Asia. The large tundra
wolf of eastern Asia, the Tundra or Turukhan Wolf (Canis lupus
albus), is very close in relations to the wolves of northern
Alaska.
In the Arctic Islands and Greenland the Melville Island Wolf
(Canis lupus arctos), the Banks Island Wolf (Canis lupus bernardi),
the Baffin Island Wolf (Canis lupus manningi), and the Greenland
wolf (Canis lupus orion), are all found.
Wolves of the Continental Tundra and Newfoundland include
the Alaska Tundra Wolf (Canis lupus tundrarum), the Interior
Alaska Wolf (Canis lupus pambasileur), the Kenai Peninsula Wolf
(Canis lupus alces), the Mackenzie Tundra Wolf (Canis lupus
mackenzii), the Mackenzie Valley Wolf (Canis lupus occidentalis),
the Hudson Bay Wolf (Canis lupus hudsonicus), the Labrador Wolf
(Canis lupus labradorius), and the Newfoundland Wolf (Canis lupus
beothicus). However, the Newfoundland Wolf appears to have become
extinct in the early 1900s. This is strange because there is no
evidence of the wolves being intensely hunted by man, of extreme
habitat changes, or of a lack of food.
The wolves of the Western Mountains and Coast of North
America include the British Columbia Wolf (Canis lupus
colombianus), the Alexander Archipelago Wolf (Canis lupus ligoni),
the Vancouver Island Wolf (Canis lupus crassodon), the Cascade
Mountain Wolf (Canis lupus fuscus), the Northern Rocky Mountain
Wolf (Canis lupus irremotus), the Southern Rocky Mountain Wolf
(Canis lupus youngi), and the Mogollon Mountain Wolf (Canis lupus
mogollonensis). Of these wolves, the British Columbia Wolf is the
largest. The last two of these wolves have now been
exterminated due to the killings by man.
The Mexican Wolf (Canis lupus baileyi) is the smallest of
the subspecies of the wolves found in the Americas. They could
be found in the area of Northern Chihuahua and other parts of
Mexico and the southern United States, especially Texas. The
Texas Gray Wolf (Canis lupus monstrabilis) is obviously larger
than the Mexican Wolf and used to be commonly found in Texas.
Now, both of these subspecies have been exterminated in the
United States but still can be found in the Sierra Madre
Occidental and the mountains of western Coahuila and eastern
Chihuahua, in Mexico.
The Eastern or Timber Wolf (Canis lupus lycaon) and the
Great Plains or Buffalo Wolf (Canis lupus nubilus) could
originally be found on almost 25% of North America. Today,
however, due to competition with settlers, the Buffalo Wolves
were exterminated by the early 1900s. The Timber Wolf, for the
same reason, can no longer be found in the United States, but
still is common in Ontario and Quebec.
There are three main subspecies of Red Wolves (Canis
niger). They include the Florida Red Wolf (Canis niger niger), the
Mississippi Valley Red Wolf (Canis niger gregoryi), and the Texas
Red Wolf (Canis niger rufus). Gray wolves and red wolves can
usually be distinguished by size. In most cases the gray wolves
are larger than red wolves with the exception that some of the
larger red wolves may be bigger than the smaller of the gray
wolves. They can also be distinguished by identifying a knob,
"called cingulum, on the upper carnassials, or shearing teeth of
the red wolf." However, this method, also, is not altogether
foolproof. In some cases a timber wolf will have a cingulum and
an occasional red wolf will not have one at all. This method of
using the cingulum to distinguish the wolves can also be
deceiving in that almost all coyotes have a cingulum just like
the red wolves.
Characteristics:
The Red Wolf and the Gray Wolf are both from the family
Canidae. This family includes the coyote, jackal, dingo, domestic
dog, fox, bush dog, hunting dog, dhole, and the wolf. The wolf
has long and powerful legs, as well as a mighty stamina, that
allow it to spend eight to ten hours a day on the move and in
search of food. The wolves usually travel during the night
or in the cool temperatures around dawn and dusk. They
usually travel at an average speed close to five miles per hour,
but they can run up to 25 miles per hour. Wolves, like most
canids, are digitigrade with five toes on front feet and four on
hind feet. They are equipped with short, thick claws that give
them good traction for running.
Wolves are very well-equipped for the hunt. They have 42
teeth that are backed up by incredibly strong jaw muscles. They
usually can track their prey with their keen sense of smell
that, if downwind, can detect prey at up to around 300 yards.
An interesting aspect of the manner in which wolves hunt is what is
known as the "conversation of death." Wolves often test large
prey, and in approaching whatever this might be, a moose,
caribou, elk, or bison, they engage their prey's gaze with a
sober stare. Man has not been able to translate this
"phenomenon" any more than he has been able to translate the
meaning and significance of howling. However, it has been
suggested that with this momentary, silent communication it is
decided whether the hunt will be stopped or if a chase will
follow.
Gray wolves (Canis lupus) generally have fairly heavy
coats that provide good insulation in cold weather. The first
layer is a fine underfur and the second layer is made up of long
guard hairs that shed moisture and keep underfur dry. Wolves
can live in temperatures as cold as -40 degrees Fahrenheit. The
coats of gray wolves vary in color from gray to black, and
sometimes from brownish gray to brownish white. Many of their
hairs can be black-tipped which results in irregular, wavy black
markings that are concentrated in the middle of the back. The
young of wolves are, throughout, grayer than the adults.
Red wolves (Canis niger) tend to be beautifully colored,
with some black and dark gray, brown, cinnamon, and buff. Their
tails are generally the same color as the rest of the fur, but are
usually dark-tipped. The color of these wolves also tends to
vary with the season and with the geographic location. Wolves
from Chihuahua tend to be "grizzled on the back and flanks,"
whereas "these parts are more tawny or brindled on wolves from
southern Durango." The fur of red wolves also tends to be more
thin than that of the grey wolves. This is because they tend
to live in areas with much warmer climates than the areas of
the grey wolves.
The size of wolves can vary somewhat, but most wolves are
relatively close in size. Wolves are sexually dimorphic: the
male wolves are measurably larger than the females. The
average length of male wolves is about 4.5-5.5 feet and the
average height, at the shoulders is approximately 27-33 inches.
Their tails are between 14 and 17 inches long, and they range in
weight from 70 to 100 pounds. The females are usually 4-5 feet
long, 25-30 inches in height at the shoulders, and have tails 12-
15 inches long. They usually weigh between 50 and 80 pounds.
How it interacts with the environment:
For wolves, pack is the basic unit, which can vary from 2
to 15 or more wolves. They travel, hunt, feed, and rest
together.
In each pack there is a specific order of rank and a well
developed social system. The highest ranking male, the alpha
male, is dominant to all others and directs the pack's activities.
The alpha female is dominant over all other females. Each pack
may also include pups, juveniles, and older, more mature wolves.
The pack is very family oriented and there are strong bonds
attachment within each pack. The socialization of pups begins
when they start to appear outside of the den. Here they
establish dominance relations among littermates through "play
fighting." Younger males, not pups, but not adults yet, prepare
for adulthood in many ways. One of these ways is by chasing
deer. However, they do not chase to kill, but rather to
practice, sharpen their hunting skills, and to train themselves.
In a pack it is important that the dominant wolves are
easily distinguished from the submissive ones. To avoid fighting
within the pack wolves have ritualized behaviours, postures, and
gestures that are used to show dominance. A dominant wolf will
assert himself by standing erect, ears and tail up, eyes open,
teeth bared, and body hairs erect. The subordinate wolf will
show his submission by slumping down, laying back his ears,
putting his tail between his legs, closing his mouth, and slightly
closing his eyes. The submissive wolf may, in some cases, lie
belly up to show his submission.
Wolves are very territorial and they mark out their
territories by chemical signals. Among these are urine, feces,
and secretions. Wolves usually have a regular pattern of
visiting and marking their territories every few weeks. Through
observation, it has been shown that, while on the move, an alpha
male will stop and mark every two minutes. Wolves also use
these scents to recognize which individuals have been to the
given area.
Wolves are well-known for hunting large animals. They will
hunt and kill large animals such as moose, deer, caribou,
mountain sheep, elk, bison, and musk-ox, but, seasonally, they
will also eat rodents, hares, birds, fish, insects, and even
carrion. While wolves must kill many animals for the pack to
survive, most chases do not end with a kill. Wolves react to
how the prey reacts first, and it has been shown that they seem
to need the stimulus of prey running away to start chasing
after him. In the case of a kill, the wolf can consume up to 20
pounds and can then go for several days without eating.
Miscellaneous:
Wolves and humans have had a very intricate and close
relationship for thousands of years. They are also very
similar in many ways. The wolf and man are known as "apex
hunters," that is, they hunt at the top of their food chain and,
except for each other, neither has to compete with any other
animal or enemy for their biological niche. Another similarity
between man and wolf is the pack. Hominid hunter-gatherer clans
stayed in groups of about 15 (even though they ranged from 5 to
50, or greater, this was the average size) and would travel over
a territory of 500 to 1000 square miles in search of food.
Wolves, in packs of about ten, cover approximately the same size
area. Few other animals would ever travel in such wide-ranging
parties made up of so few members.
Within the past two decades, Americans have developed a
longing and a desire for wolves and dogs that have high
proportions of wolf blood. People fall in love with the wild
elegance of the animals and the affectionate interaction that
can take place between them and the animal, but wolves being
kept in the house almost inevitably become a fatal attraction-
usually for the wolf. While the wolves are still pups they
behave with a playful and rough affection that is cute while the
animal is young. But as the animal matures, its predisposed
nature of being a highly territorial predator emerges. This
leads to conflict between the wolf and the owner who has bought
it for its wild appeal, but then expects it to act as a
domesticated pet. Whether by biting people, urinating all over
the house, or by tearing up cabinets and furniture, the wolf will
exhaust its owner and the wolf will no longer be as cute as it
seemed it would be. Wolves are not monstrous killing machines,
but they are wild animals and the best advice that can be given
to anyone considering the purchase of a wolf or wolf hybrid is
not to.
Wolves have been in constant conflict with man. Sadly,
extermination of wolves has taken place in more than 95% of the
48 contiguous United States, much of Mexico, parts of Canada,
most of Europe, and most of the Soviet Union. The status of
the wolf in the lower 48 states is endangered. Areas in
northern Minnesota and Isle Royale have been declared "Critical
Habitat" and have been protected from destruction or adverse
modification. In Alaska, hunting of the animal is permitted since
the wolf is not included on the Endangered Species List there.
In Norway, Sweden, Italy, Israel, India, and Mexico the wolf is
totally protected by law although protection is often minimally
enforced.
Man holds the future of the wolf in his hands; the wolf is
fully capable of surviving if man learns to understand it and
therefore learns to appreciate and value it.
f:\12000 essays\sciences (985)\Biology\Growth Dynnamics of E coli in Varying Media.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Growth Dynamics of E. coli in Varying Concentrations of Nutrient
Broths, pH, and in the Presence of an Antibiotic
Dvora Szego, Elysia Preston
Darcy Kmiotek, Brian Libby
Department of Biology
Rensselaer Polytechnic Institute
Troy, NY 12180
Abstract
The purpose of this experiment on the growth dynamics of E. coli in varying media was to determine which medium produces the maximum number of cells per unit time. First a control was established for E. coli in a 1.0x nutrient broth. This was used to compare the growth in the experimental media: 0.5x and 2.0x nutrient broths; nutrient broths with an additional 5.0 mM of glucose and another with 5.0 mM lactose; nutrient broths of varying pH levels: 6.0, 7.0, and 8.0; and finally a nutrient broth in the presence of the drug/antibiotic chloramphenicol. A variety of OD readings were taken and calculations made to determine the number of cells present after a given time. Then two graphs were plotted: the number of cells per unit volume versus time in minutes, and the log of the number of cells per unit volume versus time (the growth curve). The final cell concentration for the control was 619,500 cells/mL. Four media, after calculations, produced fewer cells than the control: chloramphenicol producing 89,301 cells/mL; glucose producing 411,951 cells/mL; lactose producing 477,441 cells/mL; and finally pH 6.0 producing 579,557 cells/mL. The remaining four media, after calculations, produced cell counts greater than the control: 2.0x with 1,087,009 cells/mL; 0.5x with 2,205,026 cells/mL; pH 8.0 with 3,583,750 cells/mL; and finally pH 7.0 with 8,090,325 cells/mL. From these results the conclusion can be made that the environment is a controlling factor in the growth dynamics of E. coli. This was found through the regulation of pH and nutrient concentrations. In the presence of the drug/antibiotic chloramphenicol, cell growth was minimal.
Introduction
E. coli grows and divides through asexual reproduction. Growth will continue until all nutrients are depleted and the wastes rise to a toxic level. This is demonstrated by the log of the number of cells per unit volume versus time growth curve. This growth curve consists of four phases: Lag, Exponential, Stationary, and finally Death. During the Lag phase there is little increase in the number of cells. Rather, during this phase cells increase in size by transporting nutrients inside the cell from the medium, preparing for reproduction and synthesizing DNA and the various enzymes needed for cell division. In the Exponential phase, also called the log growth phase, bacterial cell division begins. The number of cells increases as an exponential function of time. In the third phase, Stationary, the culture has reached a point at which there is no net increase in the number of cells. During the stationary phase the growth rate is exactly equal to the death rate. A bacterial population may reach stationary growth when required nutrients are exhausted, when toxic end products accumulate, or when environmental conditions change. Eventually the number of cells begins to decrease, signaling the onset of the Death phase; this is due to the bacteria's inability to reproduce (Atlas 331-332).
The equation used for predicting a growth curve is N = N0e^(kt). N equals the number of cells in the culture at some future point, N0 equals the initial number of cells in the culture, k is a growth rate constant defined as the number of population doublings per unit time, t is time, and e is the exponential number. The k value can easily be derived by knowing the number of cells in an exponentially growing population at two different times. K is determined using the equation k = (ln N - ln N0)/t, where ln N is the natural log of the number of cells at some time t, ln N0 is the natural log of the initial number of cells, and t is time. This equation allows one to calculate the number of cells in a culture at any given time. The reciprocal of k is the mean doubling time, in other words, the time required for the population (usually expressed as cells per unit volume) to double. (Edick 61-62)
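As an illustration of how these two equations work together, the sketch below computes k from two cell counts and then projects the culture forward; the numbers are invented for the example and are not readings from the experiment.

    import math

    def growth_constant(n0, n, t):
        """k = (ln N - ln N0) / t"""
        return (math.log(n) - math.log(n0)) / t

    def predict(n0, k, t):
        """N = N0 * e^(k*t)"""
        return n0 * math.exp(k * t)

    # Hypothetical culture growing from 70,500 to 141,000 cells/mL in 30 minutes:
    k = growth_constant(70_500, 141_000, 30)
    print(round(k, 4))                      # ~0.0231 per minute
    print(round(predict(70_500, k, 150)))   # projected count after 150 minutes (~2.26 million)
    print(round(1 / k, 1))                  # 1/k, tabulated in this report as the mean doubling time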
Temperature is the most influential factor of growth in bacteria. The optimal temperature of E. coli is 37C, which was maintained throughout the experiment. Aside from temperature, the pH of the organism's environment exerts the greatest influence on its growth. The pH limits the activity of the enzymes with which an organism is able to synthesize new protoplasm. The optimum pH of E. coli growing in a culture at 37C is 6.0-7.0. It has a minimum pH level of 4.4 and a maximum level of 9.0 required for growth. Bacteria obtain their nutrients for growth and division from their environment, thus any change in the concentration of these nutrients would cause a change in the growth rate (Atlas 330). Drugs/antibiotics are another very common tool in molecular biology used to inhibit a specific process. Chloramphenicol, used in this experiment, inhibits the assembly of new proteins, yet it has no effect on those proteins which already exist ( ).
The growth dynamics of E. coli were evaluated in individual media trials. By using only one variable the results can be directly correlated to that particular variable. For example, in this experiment the temperature was held at a constant 37C, and the variables were the broths in which the E. coli were growing. The k values needed to be determined in order to provide an accurate projection of cell growth from a constant initial cell count.
The purpose of this experiment was to determine the effects of varying media, and compare which media produces the maximum number of cells per unit time.
Methods and Materials
The initial step of this experiment was to establish a control of E. coli in a nutrient broth with a concentration designated as 1.0x. A variety of media were then prepared: nutrient broths with concentrations of 0.5x and 2.0x, a nutrient broth with an additional 5.0 mM of glucose and another with 5.0 mM of lactose, and nutrient broths at pH levels of 6.0, 7.0, and 8.0. The last medium contained a drug/antibiotic, a very common tool in molecular biology used to inhibit a specific process: chloramphenicol at 200 mg/ml. Each solution had a corresponding blank used to zero the spectrophotometer; these blanks consisted of the medium before inoculation with E. coli. Beginning with approximately 50 ml of each inoculated solution, 3.0 ml was pipetted out and placed into a cuvet (if care is used, the sample may be poured into the cuvet to speed up this process). After the aliquots of each sample had been transferred to a cuvet, the OD was measured at 600 nm. The solutions were then placed in an incubator or water bath with forks to maintain a constant temperature of 37 degrees Celsius. Every 15 minutes thereafter, for a 150 minute period, 3.0 ml of each solution was removed and the OD600 was measured and recorded. The samples were not to remain out of the water bath for an extended period of time; if a spectrophotometer was not available, the sample was placed in an ice bath, which chilled the cells within 2-3 minutes so that no growth could occur. In either case, all moisture was wiped off the outside of the cuvet with a Kimwipe before placing it in the spectrophotometer, as water will cause serious damage to the instrument. To prevent cells from settling at the bottom of the cuvet, each sample was gently swirled to distribute the cells evenly, and the reading was then taken as quickly as possible.
The k values were determined for each time interval of all experimental media by taking the natural log of the number of cells at time t minus the natural log of the number of cells at t - 15 minutes and dividing by 15 minutes. Beginning with an initial cell count equal to that of the control, these k values were used with the growth equation to calculate the number of cells in each medium at each time interval. These calculations were necessary in order to compare the growth in each medium accurately; without them the results would likely be misinterpreted, because the initial cell counts for each sample were different, and graphing the numbers obtained directly from the experiment gave misleading final cell counts. A table was also made of 1/k, the mean doubling time; the k used in this calculation was derived from the initial and final cell counts over the entire time period. Finally, graphs were made of the number of cells per mL versus time in minutes and of the log number of cells per mL versus time in minutes, which produces the traditional growth curve.
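A minimal sketch of this recalculation, assuming hypothetical OD600 readings and borrowing the OD-to-cell-count conversion factor of 1.5 x 10^6 quoted in the Results section, might look like this:

import math

OD_TO_CELLS = 1.5e6      # conversion from OD600 to cells/mL (factor quoted in the Results section)
INTERVAL = 15            # minutes between readings
N0 = 70500               # common initial count so every medium starts from the same point

# Hypothetical OD600 readings for one medium, taken every 15 minutes
od_readings = [0.047, 0.055, 0.066, 0.080, 0.098, 0.120]
counts = [od * OD_TO_CELLS for od in od_readings]

# Interval growth constants: k_i = (ln N(t) - ln N(t - 15)) / 15
ks = [(math.log(b) - math.log(a)) / INTERVAL for a, b in zip(counts, counts[1:])]

# Rebuild the curve from the shared initial count so the media can be compared directly
rebuilt = [N0]
for k in ks:
    rebuilt.append(rebuilt[-1] * math.exp(k * INTERVAL))

print(rebuilt)           # cells/mL at 0, 15, ..., 75 minutes, all starting from N0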
Results
The first part of the experiment was to determine the cell numbers by taking the optical densities and multiplying them by 1.5 x 10^6. These numbers were then used as raw data to calculate the k values of each time interval for each medium. Using these k values, cell counts were calculated for all media beginning with an initial count of 70,500 cells. These results were graphed, plotting the number of cells per mL versus the time in minutes (Graphs 1, 3, 5). These graphs show the growth dynamics of E. coli in the varying media. The control at 150 minutes produced a final cell count of 796,500 cells/mL. After doing the calculations needed to determine the k values and thus make all of the graphs begin at one standard point, the graphs were plotted. From these graphs (1, 3, 5) it was visible that four media produced fewer cells than the control: chloramphenicol producing 89,301 cells/mL; glucose producing 411,951 cells/mL; lactose producing 477,441 cells/mL; and finally pH 6.0 producing 579,557 cells/mL. The remaining four media produced cell counts greater than the control: 2.0x with 1,087,009 cells/mL; 0.5x with 2,205,026 cells/mL; pH 8.0 with 3,583,750 cells/mL; and finally pH 7.0 with 8,090,325 cells/mL. The log number of cells was also plotted versus time (Graphs 2, 4, 6); this is the form of the traditional growth curve. These graphs show that the media which produced greater final cell counts also produced greater final values on the growth curve. The average k values for each medium were found and the values of 1/k, the mean doubling time, were computed (Chart 5). These results clearly exhibit the effect the medium has on the growth dynamics of E. coli. The control sample had an average doubling time of 69 minutes, while pH 7.0 doubled in only 30.4 minutes and chloramphenicol had a calculated doubling time of 381 minutes.
Discussion
This study confirmed our hypothesis that varying the medium produces different effects on growth rate. By graphing the number of cells per mL versus time in minutes, it can be seen which of the media provided the best environment for cell growth. The graphs of the log number of cells versus time produce the traditional growth curves.
The results supported the hypothesis that E. coli grows best at a pH of 7.0, which gave a final cell count of 8,090,325 cells/mL; a pH of 8.0 (3,583,750 cells/mL) was also found to produce a greater number of cells than a pH of 6.0 (579,557 cells/mL). The change in nutrients also had a great effect on cell production (the control produced a final cell number of 619,500 cells/mL). The 0.5x nutrient broth produced 2,205,026 cells/mL while the 2.0x nutrient broth only produced 1,087,009 cells/mL. Although both are higher than the control sample, it is interesting to note that the 0.5x broth actually produced more cells than the 2.0x broth, which shows that more isn't necessarily better. There were fewer cells in both the lactose-enhanced and the glucose-enhanced media than in the control; this may be because E. coli is able to ferment both glucose and lactose, producing complex end products (Benson 153). In the presence of chloramphenicol, the drug/antibiotic, the growth rate reached the stationary phase at 120 minutes, when there were 57,000 cells/mL (experimental). This is because chloramphenicol inhibits the assembly of new proteins, yet has no effect on those proteins which already exist; therefore, in the presence of chloramphenicol, translation was inhibited, preventing the cells from growing and dividing (Atlas 371).
The growth curves produced by graphing the log number of cells/mL versus time in minutes were found to be incomplete; the most likely explanation is that the experiment was not run for a sufficient amount of time. These graphs show that the media which produced greater final cell counts also produced greater final values on the growth curve. This is because all that was done to convert these numbers was to take the log of the cell number, so any error would appear in all three sets of graphs. The final step of computing the 1/k values showed which of the media most closely resembled an optimal environment; the data obtained from this experiment showed that the medium at a pH of 7.0 came closest to the ideal environment of 37C, a pH between 6.0 and 7.0, a rich nutrient concentration, and no antibiotics present. In this environment the growth rate exceeds the death rate and the cells are able to continue for a longer period of time in the log stage of the cell growth cycle.
Any error in the findings is more than likely due to human error. It could be due to a sample remaining out of the water bath for an extended period while a spectrophotometer was not available, allowing additional cell growth to occur. Or, if a sample had been placed in the ice bath, the water on the outside of the cuvet may not have been thoroughly wiped off, causing error in the OD600. A final possibility is that the cells in a sample may have settled to the bottom of the cuvet because the reading was not taken quickly enough.
In conclusion, the k values should be constant throughout each individual medium but should differ between the various media; the results from this experiment showed the k values fluctuating slightly. The results also showed, after the calculations, that the 0.5x nutrient broth produced more cells than either the 1.0x or the 2.0x broth. This could be because the cells were growing at a slower rate but were not dying as fast or producing as many toxins as in the 1.0x or 2.0x broths. This is only a hypothesis, but it is supported by the lab manual, which says, "This suggests and has been substantiated experimentally, that the waste products produced by the bacteria are significant factor in the limitation of population size" (Edick 64). The pH was also a contributor to the number of cells produced in a given medium. The pH 7.0 medium produced a significantly larger number of cells than either pH 6.0 or pH 8.0, most likely because 7.0 is the established optimal pH for the growth of E. coli. As stated above, the broth containing chloramphenicol produced significantly fewer cells than any other medium, because the antibiotic/drug inhibits translation and so prevented the cells from growing and dividing. This experiment could be extended through the use of different nutrient-enhanced media, media containing induced lac operons, temperature changes, and different drugs/antibiotics at different concentrations.
References
Atlas, Ronald M. 1995. Principles of Microbiology. St. Louis, MO: Mosby-Year Book, Inc.
Benson, Harold J. 1994. Microbiological Applications, 6th edition. Dubuque, IA: Wm. C. Brown Publishers.
Edick, G.F. 1992. Escherichia coli: Laboratory Investigations of Protein Biochemistry, Growth and Gene Expression Regulation, 3rd edition. RPI.
f:\12000 essays\sciences (985)\Biology\Hammerhead Sharks.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Marine Science/ Per. 1
Hammerhead Sharks
Sharks are among the most feared sea animals. They live in oceans across the world but are most common in tropical waters. There are over three hundred fifty species of sharks, which can be broadly categorized into the following four groups: Squalomorphii, Squatinomorphii, Batoidea, and Galeomorphii. The shark family Sphyrnidae, which includes the hammerheads, is part of the Galeomorphii. Hammerheads are probably the most easily recognizable of all the sharks, and among the strangest looking. As the name indicates, they have a flattened head which resembles the head of a hammer, with their eyes and nostrils at the ends of the hammer. There are eight living species of hammerheads; the following four are the main ones:
1. Scalloped hammerhead (Sphyrna lewini)-The pectoral fins of this grey shark are tipped with black. The maximum length is about 12 feet.
2. Bonnethead (Sphyrna tiburo)-With a head shaped like a shovel, the bonnethead rarely grows more than four feet long. This shark is commonly seen inshore.
3. Smooth hammerhead (Sphyrna zygaena)-Bronze with dusky fin tips, it can grow to thirteen feet.
4. Great hammerhead (Sphyrna mokarran)-Attaining a length of a possible 18 feet, this is the largest and most dangerous of all the hammerheads.
One of the most interesting things about the hammerheads is the unique shape of their heads. Ever since scientists started to study the hammerhead they have speculated about the use of the hammer. The hammer is a complex structure and probably serves more than one function. According to scientists, the most important function of the hammer is increased electroreceptive area and sensory perception. This means that the hammerhead has a remarkable sensory ability to detect the small electrical auras surrounding all living creatures. Under certain conditions, such as when searching for wounded animals, the electrical activity increases, helping the hammerhead to feed. It is also believed that the hammerhead may be able to use the Earth's magnetic field as an aid to navigation. Some hammerheads migrate long distances and may rely on this built-in compass sense to guide them in the open ocean. Another use for the hammer is to enhance maneuverability: the hammer's similarity to a hydrofoil seems to explain its usefulness for maneuverability and improved lift. However, this theory has not been tested.
Sharks generally have a small brain in comparison to their body weight. Among sharks, hammerheads have a relatively large brain-to-body-weight ratio. Sharks differ from most other fish in several ways. Sharks have a boneless skeleton made of cartilage, a tough elastic substance. Most sharks have a rounded body shaped like a torpedo; this shape helps them swim efficiently. Hammerheads are especially good swimmers because of the hydrodynamic function of their head.
All sharks are carnivorous. Most eat live fish, including other sharks. Most sharks eat their prey whole, or tear off large chunks of flesh at a time. They also eat dying animals. Hammerheads have definite food preferences. Their elongated head may help them locate the prey they prefer. The Great Hammerhead likes to eat stingrays. This was observed when the stomach contents of a hammerhead were examined and stingray spines were found. Stingrays are usually difficult to detect because they are partially buried in the sediment. Yet, the hammerhead is capable of finding them because they can swim close to the bottom swinging their heads in a wide arc like a metal detector.
Sharks reproduce internally. Unlike the eggs of most fish, shark eggs are fertilized internally. The male shark has two organs called claspers which release sperm into the female, where it fertilizes the egg. In many sharks the eggs hatch inside the female, and the pups are born alive; other species of sharks lay their eggs outside. The hammerhead female has an internal pregnancy in which a placenta is formed around the embryo. The gestation period for most placental sharks is between nine and twelve months. The placenta appears about two to three months after ovulation, when the embryos have consumed their yolk. Eggs are ovulated at intervals of a day or so, which explains why there may be considerable variation in the developmental ages of pups in a litter. It is not unusual to find embryos that have died during development.
Hammerhead sharks tend to form schools of fifty to two hundred. They tend to congregate and swim at sea mounts, which are underwater mountains. At these sea mounts there are many other fish attracted by rich algae and invertebrate larvae, but the hammerheads have no interest in these fish. So why do they gather at these underwater mountains? Recent research seems to indicate that hammerheads go there for mating purposes. Observations at these sea mounts show that the majority of hammerheads there are female, which indicates that it is easy for the males to find mates. However, researchers were surprised to find many immature female hammerheads at the sea mounts, which led them to believe that in addition to reproduction there must be other reasons for coming to the sea mounts. It is believed that the sea mounts serve as navigational centers. Each evening the hammerheads begin a ten to fifteen mile swim away from the mount, always returning by dawn or the following day. It seems that they spend the night at distant deep-water feeding grounds. The young females participate in these long-distance swims; the sea mount serves as a navigational center helping them find their way back, and the nightly swims help the young find nutritious food which aids their growth.
f:\12000 essays\sciences (985)\Biology\Hernia.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Femoral Hernia
A hernia is any type of abnormal protrusion of part of an organ or tissue through the structures that normally contain it. A weak spot or opening in a body wall allows part of the organ or tissue to bulge through. Hernias may develop in almost any area of the body, but they most frequently occur in the abdomen or groin. Hernias are commonly called "ruptures," but this is a misnomer, as nothing is torn or ruptured. Hernias can be present from birth (congenital) or can be caused by stress and/or strain.
A femoral hernia is just one of many different types of hernias. It occurs when a part of the intestine protrudes into the femoral canal, the tubular passageway that carries blood vessels and nerves from the abdomen into the thigh. Femoral hernias occur most commonly in women. This condition can be brought about by an inherent weakness in the abdominal muscles in the groin. A sudden or prolonged increase in pressure to this region, as occurs in heavy lifting or coughing, can cause the intestine to be forced through the weakened opening.
Diagnosis of a hernia is usually made by a visual examination and by studying the patient's medical history. Sometimes the hernia will be pinched, or strangulated, resulting in pain and nausea; other times it may hardly be noticeable. Treatment usually involves manually manipulating the protruding portion of the intestine back to its proper place, or the surgical repair of the muscle wall through which the hernia protrudes. Surgical repair of a hernia is referred to as a herniorrhaphy. The only effective way of preventing a hernia is to refrain from putting strain or pressure on the abdomen or groin.
f:\12000 essays\sciences (985)\Biology\Homo Aquaticus .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Homo Aquaticus?
I. Introduction
When the human brain is compared with the brains of apes there are several obvious
differences; the centers for the sense of smell and foot control are larger in apes than in humans,
but the centers for hand control, airway control, vocalization, language and thought are larger in
humans. In my paper, I will describe the most pronounced differences in brain size and brain centers
between humans and their closest relatives, chimpanzees, compare them with other mammals,
and draw conclusions about the evolutionary history of humans.
II. Brain Evolution
Humans and chimpanzees are biochemically (in their DNA), and therefore probably
phylogenetically (in their evolutionary relationships), more alike than chimps and gorillas. But the brains of
chimps and humans differ in size and anatomy more than those of gorillas and chimps. The brains of
chimps and gorillas probably did not go through many evolutionary innovations, because they
generally resemble other ape and monkey brains. This implies that the human brain changed a great deal
after the human/chimp split. With the exception of the olfactory bulb (scent), all brain
structures are larger in humans than in apes. The neocortex (part of the cerebral cortex), for
instance, is over three times larger than in chimps, even though chimps and humans are fairly
close in body weight.
Each hemisphere of the brain is divided by the central sulcus into two halves. Just behind
the central sulcus lies the post-central cortex, where sensation from the opposite half of the body
is registered (right side for the left brain, left side for the right brain). Just in front of the central sulcus
lies the pre-central cortex, where the commands for voluntary movements leave the brain.
The pre-central area is called the primary motor cortex, and also "Area 4" in primates.
III. Human and Chimp Cortex Differences
In humans Area 4 is almost twice as large as it is in chimpanzees. The part of Area 4 that
commands the movement of the leg, foot and toes is smaller in humans than in apes. This leaves
more room for the part that controls the hand, fingers and thumb. Even larger is the lower part
of human Area 4, related to the mouth, breathing and vocal cords. The post-central cortex is
enlarged in the same way as Area 4.
In front of the primate Area 4 lie the cortex areas (pre-motor) that tell Area 4 what to do.
In front of the enlarged part of human Area 4 is the Area of Broca, the motor-speech center
which controls the breathing muscles. Behind Broca's Area is Wernicke's Area, the speech center,
a uniquely human brain center along with Broca's Area. Wernicke's Area has direct connections
to Broca's Area through the arcuate fasciculus, a neural pathway that apes do not have anywhere in
their brain.
The major difference between the human and ape cortices is the enlargement of the hand
and mouth integration areas. These areas occupy a large part of the human brain. In the motor
half of the cerebral cortex, the enlarged areas are the pre-motor area and Broca's Area. In the
sensory half, the enlarged areas are Wernicke's Area and the visual area, as well as the auditory
cortex.
IV. Explanations
Many anthropologists believe that the differences between human and ape brains are
shown through man's ability to use tools and language. This traditional view cannot explain why
only human ancestors developed these motor skills and language abilities, that is, why nonhuman
primates and other savannah mammals didn't develop these abilities.
The solution may lie in the aquatic theory of human evolution, the theory that explains
why humans don't have fur, and why we have excess fat, and many other human features.(4)
There are indications that the early hominoids (ancestors to man and ape) lived in mangrove or
gallery forests(5), where they adapted to a behavior like proboscis monkeys, climbing and hanging
in mangrove trees, wading into water and swimming on the surface. In my opinion, human
ancestors split from chimpanzees and other apes and, instead of staying in forests like the chimps,
progressed with their water skills, such as diving and collecting seaweed, then adapted to wading in
shallow water and finally to bipedal walking on land.
The fact that human olfactory bulbs are only 44% the size of the chimpanzee bulb is not
compatible with African savannah life: all savannah animals have a good sense of smell. But an aquatic
evolutionary phase would explain why humans have a poor sense of smell, since water animals
typically have a reduced or even non-existent sense of smell.(4)
The human Area 4 for the legs, feet and toes is reduced, because humans left the trees and
lost the grasping hind limbs of apes. Area 4 for the hands and fingers is larger than in apes. The
human hand is much more mobile than an ape's, the thumb and index finger in particular; human
fingertips are more sensitive, and we have faster-growing fingernails. All of these
enhancements resemble the enhanced hand mobility and sensitivity of raccoons and sea
otters, which suggests that human ancestors groped for crayfish and shellfish underwater; the extra
mobility was also needed to remove the shells from the food. Raccoons are good climbers but seek most
of their prey in shallow water. They have human-like forelimbs and fingers, and their brain
cortex shows the same types of enlargements as humans. Sea otters, humans, and mangrove
monkeys all use tools, unlike savannah mammals.(5)
Like humans, all diving mammals have excellent airway control, to keep the water out of
their lungs. This voluntary control of breathing is necessary because they have to inhale strongly
before they dive, and under water they have to hold their breath until they surface. In land
mammals, however, the rhythm of inhaling and exhaling changes involuntarily with lower
oxygen and higher carbon dioxide. An aquatic mammal with that mechanism would inhale
strongly when its need for oxygen was highest, in other words, it would inhale involuntarily
while underwater. That is why the human ancestor tripled the part of Area 4 for the mouth and
airways, and why he evolved the Broca area, which coordinates the muscles of the mouth and airways.
This refined airway control was a preadaptation for human speech.(4)
V. Speech and Association Areas
The arcuate fasciculus in humans directly connects the coordination center of the muscles
needed for breathing (Broca) with the cortex behind the sensory areas for the mouth and throat
(where we feel the movements our breathing, singing, talking etc. make) and the audio areas
(where we hear and register the sounds we hear)(5). This connection of airway sensation with
hearing was the beginning of learning to make voluntary sounds. In the 'primitive' part of
Wernicke's Area, the first interpretations of sound are made possible through connections to the visual
and parietal areas, so that the sounds were associated with what we were seeing and feeling when
we heard the sound.
Once the connection of Wernicke and Broca was made, we got a device that could both
make sound and interpret it. Using that apparatus we learned to communicate with the others
living in our group, and we improved our communication abilities by evolving larger areas for the use
of our new mechanism.
Apes lack the association areas. Any ape could have evolved a greater amount of brain
tissue and developed the larger association areas, if the ape had found a need for the extra brain.
But the larger association areas were useless without the improved sound-making/interpreting
areas found in humans.
Voluntary and variable sound production is seen in aquatic animals such as otters, seals, sea
lions, and toothed whales. Large brains are also a feature of many aquatic animals, such as seals and toothed
whales. The relation between aquatic life, brain size and vocal control is not clear, but even the
small-brained sea mammals have fairly well-developed vocalization skills.
VI. Brain Lateralization
An important difference between a human brain and an ape's brain is the greater amount of
asymmetry in human brains. Human handedness, for example, is more pronounced than the handedness
of monkeys or apes.(4) Most mammals and birds show small signs of asymmetry in certain brain
functions. The left half of the brain, in most people, is larger than the right half. (Remember that
the left half of the brain controls the right half of the body and vice versa.) In 65% of people, the
left planum temporale, where the hearing centers are, is much larger than the right one. Musical
training that occurs before the age of seven seems to induce strong enlargement of the left planum
temporale. In more than 80% of people the same hemisphere controls the dominant (right) hand.
Why is that?
The right hand is usually the hand that does things with an object while the other hand
holds the object steady. The left hand holds the shield, holds the billiard cue, and holds the paper
when writing. This fits with the spatial and geometric specialization of the right hemisphere. The right
hand is not completely dominant; there is a small division of tasks between the left and right brain
centers for the hands, especially in tasks that require both hands.
Some people say that our dexterity came from an ancestor that picked fruit with one hand
while stabilizing himself by holding onto a branch with the other hand. Although apes are
sometimes right- or left-handed for certain tasks, systematic handedness in the human sense has
hardly been demonstrated so far in nonhuman primates. One explanation for humans having more
refined dexterity than apes is that diving hominids used the right hand to pull shellfish from the
bottom while the left hand grasped something to hold them down, or used a
rock in the right hand to open the shell while the left held the mussel or oyster. That could be the beginning of
human tool use.
Hands are paired organs. Each hand needs its own control center in the brain. The two
centers can be symmetrical -- as in apes -- or, as in humans where each hand has a different function,
more or less asymmetrical. However, an unpaired organ works better if it has one brain center
dominating over the other, so that fine movements are not disrupted by commands from
the other brain center. Good coordination of the breathing muscles would be essential, and that need
for a dominant brain center is what produced dominant brain centers.
Song production in birds is strongly asymmetrical. In adult finches, sectioning the left
nerve to the syrinx leads to the loss of most of the song, but sectioning the right nerve has only minor
effects on the song. If the left nerve is cut before the song develops, the right takes over completely.
Human speech centers, too, show a great deal of plasticity. The localization of Broca's
and Wernicke's Areas in the left hemisphere is more constant than human dexterity: not only right-
handers, but also most left-handers, have their speech centers in the left hemisphere of their brain.
Is there any relation between right/left-handedness and the location of the sound-
interpreting/making device? The fact that the control of our dominant hand is usually situated in
the hemisphere of the speech centers could mean that the earliest language use in human ancestors
was the naming of objects that were manipulated or pointed at with the right hand. Or is it
simply coincidence?
VII. Conclusions
The changes in human brain anatomy, compared with the brain anatomy of apes and
monkeys, fit with the aquatic theory of human evolution and parallel features of aquatic and
semiaquatic mammals.
Reduced olfaction is typically seen in aquatic mammals.
Diminished foot control is a feature of nonarboreal (not living in trees) mammals.
Very refined finger control is a feature of shallow-water feeders.
Perfect control of the airway entrances is essential in diving mammals.
Elaborated vocal ability is seen in aquatic mammals.
A large brain is seen in aquatic mammals such as seals and toothed whales.
Brain asymmetry is consistent with an aquatic ancestor in human evolutionary history.
Result: Homo Aquaticus? I think so. And I think I have proved it to myself well enough to believe
in the theory.
f:\12000 essays\sciences (985)\Biology\How changes in the atmosphere eukarotes and multicellularit.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
About 2.5 billion years ago, oxygen began slowly to accumulate in the atmosphere as a result of the photosynthetic activity of the cyanobacteria. Those prokaryotes that were able to use oxygen in ATP production gained a strong advantage, and so they began to prosper and increase. Some of these cells may have evolved into modern forms of aerobic bacteria. Other cells may have become symbionts with larger cells and evolved into mitochondria. As the amount of oxygen and other atmospheric gases increased, they started blocking out the deadly ultraviolet (UV) rays from the sun, which had made life outside of water nearly impossible. These changes made life on land possible, and evolution occurred as prokaryotes gave rise to land-living eukaryotes.
The microfossil record indicates that the first eukaryotes evolved at least 1.5 billion years ago. Eukaryotes are distinguished from prokaryotes by their larger size, the separation of nucleus from cytoplasm by a nuclear envelope, the association of DNA with histone proteins and its organization into a number of distinct chromosomes, and complex organelles, among which are chloroplasts and mitochondria. Scientists believe that eukaryotic organisms such as the protists evolved from the prokaryotes. There are two main theories which describe how this transition may have occurred. The first is the endosymbiotic theory, or endosymbiosis, and the other is the autogenous theory, or autogenesis. These two theories are not mutually exclusive, meaning one or the other could account for different parts of eukaryotic cells. The endosymbiotic theory states that eukaryotic cells formed from symbiotic associations of prokaryotic cells living inside larger prokaryotes. The endosymbiotic hypothesis accounts for the presence in eukaryotic cells of complex organelles not found in the far simpler prokaryotes. Many modern organisms contain intracellular symbiotic bacteria, cyanobacteria, or photosynthetic protists, indicating that such associations are not difficult to establish and maintain. Endosymbiosis is said to be responsible for the presence of chloroplasts and mitochondria in eukaryotes. Autogenesis, the alternative to the endosymbiotic theory, is the specialization of internal membranes derived originally from the plasma membrane of a prokaryote. Autogenesis could be responsible for structures like the nuclear membrane and endoplasmic reticulum in eukaryotes.
There are two scenarios by which multicellularity may have arisen. In the first, unicellular organisms came together to form a colonial organism; then some cells developed specialized functions and became differentiated, forming a multicellular organism. In the other scenario, a coenocytic organism underwent cellularization, with individual cells developing membranes, and the tissues then became more specialized, forming a multicellular organism. Multicellularity has advantages, such as the specialization of cells, which creates a division of labor and leads to greater efficiency. Another advantage is that multicellular organisms have a larger size, which provides protection from predators. Fungi are large yet have a large surface area to volume ratio, allowing them to absorb nutrients more efficiently.
f:\12000 essays\sciences (985)\Biology\How nutrients get in and wastes get out .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Science 10 Assignment -- Part B
How Nutrients get in, and wastes out.
In a human being, nutrients are necessary for survival. But how are these nutrients obtained? This report will go into depth on how the food we eat gets into our cells, and how the waste products that we produce get out of the body. Also, the unicellular organism Paramecium will be compared with a human being, in terms of all of the above factors.
Dietary Nutrients
The chief nutrients in a diet are classified chemically into four groups: carbohydrates, proteins, vitamins (which do not require digestion) and fats.
Carbohydrates in the diet occur mainly in the form of starches. These are converted by the digestive process to glucose, one of the main nutrients needed for cellular respiration to occur. Starch is a large molecule, a polymer of glucose; dextrin and maltose are intermediate products in the digestion of starch. Some foods contain carbohydrates in the form of sugars. These include disaccharides such as sucrose (cane sugar) and lactose (milk sugar), which must be broken down into smaller units. Occasionally, the simplest form of sugar, a monosaccharide such as glucose, is present in food. These monosaccharides do not require digestion.
Proteins are polymers composed of chains of amino acids. When they are digested, they yield free amino acids and ammonia.
Vitamins are a vital part of our food and are absorbed through the small intestine. There are two different types of vitamins: water-soluble (all the B vitamins and vitamin C) and fat-soluble (vitamins A, D, E and K).
Neutral fats, or triglycerides, are the principal form of dietary fat. They are simple compounds, and during digestion they are broken down into their component parts, glycerol and fatty acids.
Ingestion
Intake of food in the Paramecium is controlled by the needs of the cell. When food is sensed, the organism guides itself towards the food and sweeps it into the oral groove, then encloses it in a vacuole. Enzymes are then secreted to digest the food, which is absorbed into the cytoplasm and made available to the various organelles. A Paramecium, however, has to be able to move to its food source, while a human cell has its food brought to it through the circulatory system. In man a much more complicated system exists than in a unicellular organism, because of the size of the animal and the fact that every cell within the animal must be able to absorb food and get rid of wastes, just as the Paramecium does.
Digestion in the Mouth
Upon entering the mouth, the food is mixed by mastication with saliva, which starts the digestive process by bringing the food particles into contact with the salivary enzyme ptyalin and by dissolving some of the more soluble matter within the food. Saliva also coats the food mass with mucin to aid in swallowing. The chemical phase of digestion in the mouth begins when the salivary amylase, ptyalin, attacks the cooked starch or dextrin, converting some of this starch into dextrin and some of the dextrin into maltose. The salivary glands can be activated by the mere thought of food, while the actual presence of food produces a continuous flow. Since food remains in the mouth for only a very short period, very little of the digestive process actually occurs in the mouth.
Following digestion in the mouth, the semisolid food mass is passed by peristaltic movements of the esophagus, a long muscular tube that connects the mouth to the stomach. The food then reaches the esophageal sphincter, a ring of muscle at the upper end of the stomach. This sphincter then opens to let the food into the stomach.
Digestion in the Stomach
Here, salivary digestion continues until the acid of the gastric juice penetrates the food mass, and destroys the salivary amylase. The food mass is then saturated with gastric juice, and the gastric phase of digestion is initiated.
The gastric phase of digestion is chiefly proteolytic, or protein-splitting. Within this process, the gastric glands secrete the enzymes pepsin and rennin. These enzymes, aided by gastric acid, convert a fairly large amount of the proteins to smaller forms, such as metaproteins, proteoses and peptones. There may also be a small amount of fat digestion in the stomach, since a small amount of lipase is present in gastric juice. This enzyme causes hydrolysis of the triglycerides into glycerol and fatty acids.
The digestive action of these enzymes, combined with the action of the gastric juice results in the solution of most of the food material. In the final stages of gastric digestion, the fluid mass, propelled by peristaltic movements, passes into the small intestine through the pyloric sphincter. Here, the chemical phase of digestion is initiated.
Digestion in the Small Intestine
The fluid product of gastric digestion mixes with the intestinal secretion and two other fluids, namely the pancreatic juice (produced by the pancreas) and the bile (produced by the liver). Both of these fluids are secreted near the pyloric valve, which separates the stomach from the intestine. These secretions neutralize the acidic gastric juice, bringing the gastric phase of digestion to an end. The enzymes within the pancreatic juice, along with those of the intestinal juice, start the final digestion phase. The pancreatic juice contains powerful amylase, protease, and lipase, which attack the starch, protein and fat that escaped the actions of the salivary and gastric phases of digestion. The intestinal secretion contains enzymes that attack the intermediate products of proteolytic and amylolytic digestion, as well as some smaller food molecules.
The pancreatic amylase converts both the raw starch and the cooked starch that was not digested by the two previous phases. Cooked starch is converted to dextrin, and the dextrin to maltose. The pancreatic lipase hydrolyzes the neutral fat to glycerol and fatty acids. The bile has an important role here, as it, along with the alkali content in the secretions, emulsifies the fat, producing many fat surfaces on which the lipase can act. The pancreatic proteases convert any remaining protein to proteoses and peptones. These intermediate products are then attacked by enzymes known as erepsins, and converted slowly into their individual amino acids. The intestinal enzymes, maltase, sucrase, and lactase hydrolyze their respective disaccharides (maltose, sucrose and lactose) into their component monosaccharide units, and finally into glucose.
After all of these processes, carbohydrates have been broken down into glucose, proteins into amino acids, and fats hydrolyzed into fatty acids and glycerol. These nutrients are absorbed by the villi, finger-like microscopic projections that line the inside of the small intestine. The sugars and amino acids take a direct route, passing into the capillaries of the villi and from there directly into the bloodstream. Glycerol and fatty acids, however, are first resynthesized into triglycerides; they then enter the lymphatic system and finally the bloodstream.
Digestion in the Large Intestine
The large intestine, the last part of the digestive system, is where all of the wastes end up. It holds the wastes and reabsorbs some of the remaining undigested material. The first part of the large intestine is mainly responsible for reabsorption; the materials reabsorbed are water, bacterial vitamins, and sodium and chloride ions. Within the last half of the intestine, wastes are stored. These wastes are made up of undigested food and dead bacteria. The wastes then become feces and are released through the anus. The food is moved down the small and large intestine by peristalsis, much as food moves down the esophagus.
Circulatory System
After the food molecules within the villi diffuse into the bloodstream, the blood carries the nutrients to the liver. There, sugar is removed from the blood and stored for later use as glycogen. From the liver, the blood travels to the main provider of motive force, the heart, and is then pumped out through the arteries into the body. The blood vessels become smaller and narrower until they reach their 'target' tissue; the blood is then within the smallest vessels, called capillaries. The capillary walls are only one cell thick, enabling diffusion of the nutrients carried in the bloodstream into the individual cells, and diffusion of waste products back into the bloodstream from the cells. At this level, the systems regulating and governing the maintenance of homeostasis are similar in both man and the Paramecium. Absorption and excretion are basically governed by the concentration of fluids inside the cell, as compared with the fluid concentration outside the cell.
Excretion
When the blood takes the nutrients to the cells, it receives cellular waste products as well, such as carbon dioxide, urea, and surpluses of other chemicals, such as glucose. From the cells, the blood (with the waste products) goes to the kidneys. It enters each kidney through the renal artery and branches out into many capillaries. Here there is a slowdown of circulation; as a result, pressure increases and much of the plasma is forced out of the blood. Renal tubules (nephrons) number about one million in each kidney. These tubules are responsible for the production of the fluid that is eventually eliminated as urine. They filter out many of the chemicals, particularly urea (which is poisonous) and nitrates, which are the by-products of protein digestion. This process is called pressure filtration. As the fluid moves down the tubule, many of the nutrients that escaped the cell, such as sodium ions and glucose, are reabsorbed so that the body can use them and will not become short of these substances. The fluid is now urine, and collects in a hollow region of the kidney. From the kidney, the urine enters the urinary bladder, a storage container for urine. When this bag fills, a sphincter opens to the urethra and the urine is let out of the body through an external opening. Excretion also takes place in the lungs when a person breathes out, and through sweat. But these parts of the excretory system are not controlled as well as the kidney, and can lead to loss of salt.
Nervous System
The nervous system controls a large part of the activity of the digestive and excretory systems. The control is exercised through the autonomic nervous system, of which there are two parts. The first part, controlling increases in activity, is called the sympathetic system; the second, controlling decreases in the level of activity, is the parasympathetic system. Both systems are unconscious; only chewing, swallowing and the anal and urethral sphincters are under conscious control.
Endocrine System
The endocrine system deals with hormones, which regulate the metabolic rates of cells and organs. They act much like nerves, but target only certain parts of the body, and they are essential to maintaining homeostasis. The hormone gastrin, for example, is produced in the stomach and controls the amount of gastric juice secreted; hormones also regulate secretions such as saliva. The hormones controlling the digestive and excretory systems are produced some distance away from the cells that they have to control, therefore some method of transport must be utilized. This method is the bloodstream. The hormonal glands secrete their hormones into the bloodstream; these hormones then travel to the target organ or cell and regulate its activity. This system is slower to respond than the nervous system.
By Faisal Premji
f:\12000 essays\sciences (985)\Biology\Human Evolution and the Fossil Record.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Human Evolution and the Fossil Record."
Scientists continue to debate the history of man. It is generally agreed upon by the scientific community, however, that humans evolved from lesser beings, and this essay will function to provide evidence to support this claim. Several points will be outlined, including the general physical changes that occurred between several key species on the phylogeny of man, and a discussion of dating methods used to pinpoint the age of the fossils.
This essay will begin with a brief discussion of dating techniques. In the study of hominid evolution, two main methods of dating are used: carbon-14 and potassium-argon dating. Carbon-14 dating relies on the decay of radioactive C-14, which has a half-life of 5770 years. This makes the method useful for dating recent fossils, with good accuracy, up to about 50,000 years back. After 5770 years, half of the carbon-14 in a fossil has decayed to nitrogen-14. The ratio of carbon-14 to carbon-12 in a living organism remains the same as in the environment around it, because the organism constantly eats and replenishes its carbon; once it dies, the ratio changes steadily as the carbon-14 decays. It is the difference between the ratio measured now and the ratio at the time the organism died that allows a date to be established. Potassium-argon dating, the other method, is possible because volcanic ash and rocks are found near many fossil sites. Rocks and ash created in this manner contain potassium-40 but no argon. As time passes, the potassium-40 decays into argon-40. In the laboratory, the sample is reheated, and since argon-40 is a gas, it is released. The ratio of argon-40 released to potassium-40 still present allows a date to be assigned to objects found near the sample. However, because of potassium-40's long half-life (1.3 billion years), the method is only useful for finds older than about 500,000 years, and only where volcanic activity existed. Both methods have error margins, ranging from a few thousand years for carbon-14 dating to tens of thousands of years, or more, for potassium-argon dating. However, thanks to scientific breakthroughs, these two processes can be used with reasonable security in establishing a time for fossils.
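As an illustration (not part of the original essay), the decay arithmetic behind both methods can be sketched in a few lines of Python. The isotope fractions below are hypothetical, and the potassium-argon example ignores the branching of potassium-40 decay into calcium-40 that a real calculation must account for.

import math

C14_HALF_LIFE = 5770          # years, as given above
K40_HALF_LIFE = 1.3e9         # years

def age_from_fraction(remaining_fraction, half_life):
    # Age from the fraction of the parent isotope still present:
    # t = half_life * log2(1 / fraction)
    return half_life * math.log2(1.0 / remaining_fraction)

# Carbon-14: a hypothetical sample retaining 25% of its original C-14
print(age_from_fraction(0.25, C14_HALF_LIFE))       # about 11,540 years (two half-lives)

# Potassium-argon: a hypothetical sample in which 0.05% of the K-40 has decayed
print(age_from_fraction(0.9995, K40_HALF_LIFE))     # roughly 940,000 years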
Our most distant ancestor is believed to be Australopithecus afarensis. This species, which lived between three and four million years ago, is considered the first real hominid because it is the oldest, and "most primitive of any definite hominid form thus far found" (Turnbaugh, 281). Evidence from fossilized footprints, as well as pelvic and leg bones similar to those of modern hominids, led scientists to believe that they could walk upright. Its teeth, because of their large size, resembled more those of other primates. Its skull capacity ranged from 350 to 500 cm3. This species, though it had some hominid characteristics, was still more like an ape. Its face protruded outwards near the mouth region, and it did not have a definable chin. Finally, their craniums had large, protruding ridges over each eye.
Another important being in the human timeline is Australopithecus africanus. Many scientists believe that it is the next in the sequence leading to man; however, a few believe that it belongs to a lineage of its own. A. africanus fossils have been dated to the period between two and three million years ago. It had a greater body size than A. afarensis, and a skull volume ranging between 420 and 500 cm3. It averaged a little taller than the 3 1/2 to 5 feet believed for A. afarensis. Its jaws also protruded. The "keel" effect, a slight peak on the top of the cranium, is very distinguishable on this species, as it is on many of the older hominid species. Ridges over the eyes were also prominent on this hominid.
The next species believed to be in our line of descent is Homo habilis. This is the first being with the distinction of having Homo as its genus. This species, which is dated to between 1.5 and 2.4 million years ago, had a face which protruded less than those of A. africanus and A. afarensis. Its teeth, though still larger than those of modern humans, were smaller than those of its ancestors. Its fossil fragments displayed "an average increase in cranial size of 21 percent and 43 percent, respectively, over [A. africanus and A. afarensis]" (Turnbaugh, 288), with an average cranial capacity of 650 cm3. Skulls found of this hominid also feature a bulge at "Broca's area," an area essential for human speech. It was also taller than the previous hominids, averaging around 5 feet high.
At about the same time as Homo habilis and some of the other Homo species, other hominid species belonging to the Australopithecus genus are believed to have co-existed. These include A. robustus, A. boisei, and A. aethiopicus. Though similar to the Homo line in structure, their bones were thicker and more robust. These other hominids are believed to have developed on a different lineage than the Homo line, and all of these branches died out at around the time of Homo erectus, the next key hominid on the human lineage. Because they are believed to have evolved apart from Homo hominids, it is not important to cover these species in detail.
Homo erectus lived between 300,000 and 1,800,000 years ago, and still had protruding jaws and a "keel" effect on the top of the cranium. It, like its predecessors, had no definable chin, and thick brow ridges. However, skull capacity in these hominids jumped from an average of 650 cm3 in H. habilis to an average of 900 cm3 in early specimens and 1100 cm3 in later specimens. The skeleton "is more robust than those of modern humans, implying greater strength" (Foley, n.pag.). Because of their larger brain sizes, they are believed to have possessed greater intelligence, and evidence of this has been found in their probable use of fire, as shown by traces of burnt bones in cave floors, and in the finding of more sophisticated tools than those of H. habilis. They were shorter, on average, than Homo sapiens, and their craniums showed a nuchal torus, or ridge, across the back of the head. This species also had keeled craniums.
Archaic Homo sapiens, which first appeared 500,000 years ago, are believed to be our most recent relatives. By this time, the "keel" that existed on earlier skulls is gone, and the supraorbital torus (the brow ridge) has begun to recede. Cranial volume has been measured at an average of 1200 cm3, and their brain shape was probably most similar to our own. Fossil evidence shows a trend for their posterior teeth to have decreased in size, and the anterior teeth to have increased in size, from previous Homo species, while late archaic Homo sapiens finds show a general reduction in the size of both areas. The face and jaw areas also showed a reduction in size from previous species.
It is at this point that Homo sapiens neanderthalensis enters the picture. Commonly known as Neanderthal Man, this species is believed by most scientists to have existed at the same time as late archaic Homo sapiens and early Homo sapiens sapiens, our own subspecies. Many scientists theorize that we either killed them off or interbred with them to produce modern humans. Their cranial volume is in fact higher than that of modern humans, at an average of 1450 cm3. Their bones were also thicker, which implies greater body bulk. They also had larger nose cavities, a weak chin, and a protruding jaw area. "Neandertals would have been extraordinarily strong by modern standards, and their skeletons show that they endured brutally hard lives" (Foley, n.pag.). Neandertal skeletons have been dated to between 30,000 and 230,000 years ago.
Finally, our own species is encountered. Scientists have dated the earliest Homo sapiens sapiens fossils back 120,000 years. Our species showed an increase in skull capacity up to an average of 1350 cm3. The supraorbital ridge is all but gone in modern humans, and other features seen in earlier Homo species, such as the "keel" and the cranial ridges on the back of the skull, are also gone. The cranium is more rounded, as opposed to the general "pentagon" shape seen in earlier hominids. Tooth size for modern humans shows a decrease from archaic Homo sapiens. Also, bone size shows a trend towards reduced robustness, with thinner bones and smaller jaws.
From all the fossil evidence, a rough line can be drawn for human evolution, starting from A. afarensis and ending in H. sapiens sapiens. A clear progression of features, especially in the cranial region, can be seen. Features such as brain size are seen to have developed and increased from our earliest ancestors up until now, while other "non-essential" features, like a furry skin, a supraorbital ridge, and large teeth, have diminished. This shows evolution of our species, from a more primitive creature, to our modern shape, which is highly adaptive, intelligent, and suited to any environment. God has created the perfect creature - a creature that evolves to suit its needs.
Glossary
archaic - obsolete, antique. In hominid terms, it is used to describe Homo sapiens, because they existed before our own species, Homo sapiens sapiens, with a similar name.
cm3 - cubic centimeter, a measurement of volume. Also known as a "cc."
cranial volume - also, "cranial capacity," or "skull capacity." Refers to the volume of the area within the cranium, which is a rough (but not exact) indicator of an organism's brain size.
cranium - the skull, referring especially to its upper section, which held the brain.
hominid - a human-like species. Hominids are identified by the ability to walk upright, and a general internal structure similar to our own. The first believed hominid is Australopithecus boisei, which existed between 1.1 and 2.1 million years ago.
phylogeny - a depiction of the evolutionary lines of descent for a particular organism - the "family tree" of a species.
supraorbital - above the eye socket
Works Cited
Eldredge, Niles. Life Pulse: Episodes From the Story of the Fossil Record. pp. 233-240. New York: Facts On File Publications, 1987.
Foley, Jim. "Hominid Species." The Fossil Hominids FAQ. 1996. On-line. Internet. 1 Jan. 1997. Available: http://earth.ics.uci.edu:8080/faqs/homs/species.html.
Leaky, L.S.B. The Progress and Evolution of Man in Africa. Toronto: Oxford University Press, 1961.
Johanson, Donald and Edey, Maitland. Lucy: The Beginnings of Humankind. New York: Simon & Schuster, 1981.
Rak, Yoel. The Australopithecine Face. New York: Academic Press, 1983.
Stoner, Don. A New Look At An Old Earth. June, 1992. On-line. Internet. 11 Jan. 1997. Available: http://www.power.net/users/aia/newlook/NLCHPTR5.HTM#Top.
Turnbaugh, William A., et al. Understanding Physical Anthropology and Archaeology, 5th Edition. Minnesota: West Publishing Company, 1993.
f:\12000 essays\sciences (985)\Biology\Human Genome Project.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Human Genome Project is a worldwide research effort with the goal of analyzing the structure of human DNA and determining the location of the estimated 100,000 human genes. The DNA of a set of model organisms will be studied to provide the information necessary for understanding the functioning of the human genome. The information gathered by the human genome project is expected to be the source book for biomedical science in the twenty-first century and will be of great value to the field of medicine. The project will help us to understand and eventually treat more than 4,000 genetic diseases that affect mankind. The scientific products of the human genome project will include a resource of genomic maps and DNA sequence information that will provide detailed information about the structure, organization, and characteristics of human DNA, information that constitutes the basic set of inherited "instructions" for the development and functioning of a human being.
The Human Genome Project began in the mid 1980's and was widely examined within the scientific community and public press through the last half of that decade. In the United States, the Department of Energy (DOE) initially, and the National Institutes of Health (NIH) soon after, were the main research agencies within the US government responsible for developing and planning the project. By 1988, the two agencies were working together, an association that was formalized by the signing of a Memorandum of Understanding to "coordinate research and technical activities related to the human genome". The National Center for Human Genome Research (NCHGR) was established in 1989 to head the human genome project for the NIH. NCHGR is one of twenty-four institutes, centers, or divisions that make up the NIH, the federal government's main agency for the support of biomedical research. At least sixteen countries have established Human Genome Projects.
The Office of Technology Assessment (OTA) and the National Research Council (NRC) prepared a report describing the plans for the US human genome project, and the report is updated as further advances in the underlying technology occur.
To achieve the scientific goals that together make up the human genome project, a number of administrative measures have been put in place. In addition, a newsletter, an electronic bulletin board, a comprehensive administrative database, and other communications tools are being set up to facilitate communication and tracking of progress. The overall budget needs for the effort are expected to be about $200 million per year for approximately 15 years.
Lasers are used in the detection of DNA in many aspects of the project; a very important use is in sorting chromosomes by flow cytometry. Lasers are also used in confocal fluorescence laser microscopy to excite fluorescently tagged molecules in genome mapping, in addition to other mapping uses. In diagnostic applications, lasers are used with fluorescent probes attached to DNA to light up chromosomes and to create patterns on DNA chips.
From the beginning of the human genome project it was clearly recognized that acquisition and use of such genetic knowledge would have momentous implications for both individuals and society and would pose a number of consequential choices for public and professional deliberation.
As Thomas Lee writes, "the effort underway is unlike anything ever before attempted, if successful, it could lead to our ultimate control of human disease, aging, and death".
Whatever its justification, the human genome project has already inspired society with the hope of "better" babies, and one way to deploy pragmatism in the analysis of genetic engineering is to look at this promise of "better" babies in its social context: parenthood. Parents hope for healthy children and, if they can afford it, make choices (such as choosing prenatal care) to help "engineer" healthier babies. Genetic engineering seems in this respect to offer the brightest hope for parents. Through germ-line therapy, disastrous but genetically discrete diseases, such as Huntington's and cystic fibrosis, could be removed from the DNA of the egg or zygote. Clearly parents would follow this model in choosing to avoid a short, painful life for their children.
Another, more reasonable fear is that we have not the slightest idea what we are doing and ought to avoid making hasty choices. Hybrid varieties are often impossible to protect from the complexities and dangers of nature. In the human context, this is the possibility of making an error and creating a genetically advanced baby who cannot cope with an imperfect world. While much of society reports a willingness to modify DNA for the purpose of heightening intelligence, education about genetics and medicine is still in its infancy.
Jonathan Glover argues for a "pragmatism of risks and benefits", writing that, "The debate on human genetic engineering should become like the one on nuclear power: one in which large possible benefits have to be weighed against big problems and great disasters".
One significant element is the assertion that genetic engineering is radically different from any other kind of human medicine, and constitutes interference in a restricted area, trying to "play God".
As Robert Wright notes, "Biologists and ethicists have by now expended thousands of words warning about slippery slopes, reflecting on Nazi Germany, and warning that a government quest for a super race could begin anew" if genetic engineering ventures "too far".
I believe that scientists and doctors should tap into the DNA of a zygote or egg only if a deadly disease is detected, and only when the steps of the procedure are known with certainty. I do not believe that there should be a genetically advanced child in the world; everyone is created equal, and nobody should have their destiny changed for any reason.
f:\12000 essays\sciences (985)\Biology\Huntingtons Disease.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HUNTINGTON'S DISEASE
Huntington's disease, also known as Huntington's chorea, is a genetic
disorder that usually shows up when someone is in their thirties or forties,
destroys the mind and body, and leads to insanity and death within ten to
twenty years. The disease works by degenerating the basal ganglia (a pair of
nerve clusters deep in the brain that control movement, thought, perception,
and memory) and the cortex through faulty use of energy. The brain starves
the neurons (brain cells), and sometimes makes them work harder than usual,
causing extreme mental stress. The result is jerky, random, uncontrollable,
rapid movement such as grimacing of the face, flailing of the arms and legs,
and other such movement. This is known as chorea.
Huntington's chorea is hereditary and is caused by a recently discovered
abnormal gene, IT15. IT stands for "interesting transcript," because
researchers have no idea what the gene does in the body. Huntington's
disease is an inherited mutation that produces extra copies of a repeated
sequence within a gene (IT15) on the short arm of chromosome 4. A repeated
genetic base triplet, CAG for short, is affected by Huntington's disease. In
normal people the gene has eleven to thirty-four of these repeats, but in a
victim of Huntington's disease the gene has anywhere from thirty-five to
one hundred or more. The gene for the disease is dominant, giving children
of victims of Huntington's disease a 50% chance of inheriting the disease.
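The repeat-count figures just quoted lend themselves to a simple illustration. The Python sketch below is only an illustrative restatement of those numbers (11-34 repeats as the normal range, 35 or more as the disease range); the function name and label wording are mine, not from any medical source.

# Illustrative only: restates the CAG repeat ranges quoted above.
def classify_cag_repeats(repeat_count):
    """Label an IT15 CAG repeat count using the 11-34 / 35+ ranges from the essay."""
    if repeat_count <= 34:
        return "normal range (11-34 repeats)"
    return "Huntington's disease range (35 to 100 or more repeats)"

print(classify_cag_repeats(27))   # normal range
print(classify_cag_repeats(42))   # disease range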
Several other symptoms of the disease exist besides chorea. High levels
of lactic acid have been detected in patients with Huntington's disease as a
by-product of the brain cells working too hard. Levels up to six times the
normal amount of an important brain protein, bFGF (basic fibroblast growth
factor), have also been found in areas of the brain affected by the chorea.
This occurs because of the problems on chromosome 4, where the gene that
controls bFGF is also located.
As of yet, there is no treatment for Huntington's disease. But with the
discovery of the mutated gene that causes it, there is now a way of
diagnosing whether you will get it. This technique was discovered only
recently and reported in the Journal of the American Medical Association in
April 1993. It is something that many people do not want to know, because
it can go two ways. Either you are extremely relieved because the test comes
back negative, and a great burden is lifted from your mind, or you test
positive, and know how and roughly when you will die, which increases the
burden greatly and leaves you to live the rest of your life in depression.
Some 30,000 Americans are currently suffering from this genetic disorder.
Named in 1872 for George Huntington, the New York doctor who first wrote
down its devastating symptoms, Huntington's disease has until recently been
a silent time bomb.
Some 13,000 people, the largest known concentration of sufferers from
Huntington's disease, live in the Lake Maracaibo region of Venezuela. The
origins of this gene pool have been traced back to the 1800's, to a woman
named Maria Concepcion. It was from blood samples of these people that
scientists became extraordinarily lucky and isolated the genetic marker that
shows the presence of this disorder. Today, it is believed that Maria
inherited the disease from a European sailor.
Since it was first recorded by George Huntington, a Long Island doctor,
Huntington's disease had remained fairly low key. No one heard about it
until it struck Woody Guthrie, a famous folk singer, who showed symptoms of
the disease. In 1967, he died. This put Huntington's disease on the map,
but it still was not well known. Before Woody Guthrie died, he had a son,
Arlo Guthrie. He, too, became a famous folk singer, this time in the
Seventies. He became extremely famous, but had to live with the fact that he
had a 50% chance of having the disorder. That aroused huge public interest
and made the disease well known.
Now that you know about Huntington's disease, you can imagine how it works
and the probability of getting it. But can you imagine how it feels to have
the disorder? What would it be like to know that you have a 50% chance of
not reaching your sixtieth birthday? Now enter the life of Nancy Wexler, a
woman who knows both of these feelings. She watched as her mother died from
the disease, and has to live with the fact that she may be next.
When Wexler was young, three of her uncles died of the killer disease. "Only
men get Huntington's disease," went the myth. Then it happened; her mother
was told by her doctor that her unusual walk was an early symptom. She, too,
had the disease. Since then, Nancy and her sister Alice have sworn never to
have children. Years later, Wexler joined with her father, Milton Wexler,
and Marjorie Guthrie, wife of Woody Guthrie, to form the Los Angeles chapter
of the Committee to Combat Huntington's Disease. Guthrie wanted to focus the
organization on patient care, but Wexler was intent on finding a cure. So
she began to invite biologists to help study the disease while she worked to
get her Ph.D. In 1976 she moved to Washington to become executive director
of the Congressional Commission for the Control of Huntington's Disease and
Its Consequences.
Once there, they discovered that Huntington's disease works by destroying
the ganglia. Then they decided that the best way to research Huntington's
disease was at the level of the gene. They decided to look for a "marker"
(a small, identifiable piece of DNA) showing where the faulty gene is
located. This normally would have taken 50 to 75 years to find. But, by a
freak chance, they found it; it was the 12th marker that they tested. The
discovery of the marker led to the discovery of the gene, which won Wexler
the Albert Lasker Public Service Award, the highest honor in American
medicine. She also developed a test to accurately determine whether or not
someone will get Huntington's disease.
Wexler will not reveal whether she herself has taken the test, because she
does a great deal of genetic counseling and does not want to sway her
patients' decisions on whether or not to take the test. But whether she
tests positive or negative, Huntington's disease will live on, unless
scientists like Wexler can find a cure.
f:\12000 essays\sciences (985)\Biology\Ibuprofen.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Abstract
The project that I chose to research was the effect of Ibuprofen on the heart rate of the Daphnia. The reason that I chose to do this was that many people in society use over-the-counter painkillers without really understanding any of the long-term effects of this medicine other than clearing their aches, pains, and swelling. One of the leading drugs on the market today is Ibuprofen, which you may know as Motrin or Advil. Both brands are identical except for quantity and price, and even those may be the same. The organism I chose to work with is a crustacean called the Daphnia. Later in this report I will attempt to explain the significance of that organism and why I chose to conduct tests on it.
With the information at my disposal, I then formed a hypothesis to test. Using the materials at hand, I tested my hypothesis to the best of my ability. In conducting the tests I created graphs and tables of my work.
At the conclusion of my experiment I arrived at a result that was almost impossible to predict from the information I was using. Although this was a crude experiment, I believe that I gained a lot from it.
Introduction
The drug that I chose, Ibuprofen, is an anti-inflammatory analgesic. It is a propionic acid derivative that is white and powdery, and soluble in water and in organic solvents such as ethanol and acetone. (1) Its structural formula is:
(CH3)2CHCH2C6H4CH(CH3)COOH
Its mode of action as a drug is not completely clear to science, but that will change with time. One thing we do know is that people who have allergies to Aspirin should not take this medicine. (2)
As of now we know that it may play a role in prostaglandin synthesis inhibition. (1) Prostaglandins are hormone-like substances that form in animal tissue from polyunsaturated fatty acids. (3) They affect several body systems, including the central nervous, gastrointestinal, urinary, and endocrine systems. They have been shown to have very minor effects on smooth muscle contraction and on the clotting ability of blood, which is what we are concerned with. (4) Excesses of these substances may cause pain, inflammation, and fever.
Ibuprofen is also an analgesic, which is used to reduce or eliminate pain without causing a loss of consciousness. (4) Another name for these substances, which we all know, is painkiller.
A water flea, or Daphnia, is a member of the subclass Branchiopoda, in the order Cladocera. They are found in the plankton of open water. (5) What concerned me about the Daphnia was the fact that they are transparent organisms. Observing their inner organs is done with ease, and what I mainly focused on was the heart.
The almost colorless blood of the Daphnia flows anteriorly from the heart to the hemocoel and the inner wall of the carapace, where gaseous exchanges occur. In a poorly oxygenated environment the organisms appear orange due to the changing color of their hemoglobin. (6)
Hypothesis
Although Ibuprofen has been shown to have a very slight effect on the heart rate of human beings, this occurrence is very rare and unlikely according to the research done. (2) I hypothesized that Ibuprofen would have absolutely no effect on the heart rate of the Daphnia.
Procedure
To go about testing the hypothesis I first obtained a control using ten Daphnia. I counted their heart rates and averaged them to obtain a mean of 246 beats per minute. Here is a chart of my control:
Heart-rate of Daphnia (control), in beats per minute

  #      Trial I   Trial II   Trial III   Trial IV   Trial V
  1        240       240         248         242        248
  2        259       244         243         251        242
  3        243       237         241         235        251
  4        253       246         241         248        243
  5        242       247         247         252        249
  6        260       241         246         241        241
  7        251       248         243         240        239
  8        239       240         249         248        246
  9        259       243         245         249        252
 10        246       249         249         245        237
Avg.       249       243         245         245        247

Overall average: 246
After obtaining a control on which to base my experiment, I created Ibuprofen solutions of varying concentrations, using water from the Daphnia's environment so as to avoid creating another variable that might affect the heart rate. I created the different concentrations by means of serial dilution; a sketch of the calculation follows.
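As a rough illustration of the serial dilution just described, the following Python sketch halves a starting concentration repeatedly; the function and variable names are mine, and the concentrations actually used in the experiment are the ones listed in the table below (90%, 45%, 22%, 11% and 5%).

# Illustrative sketch of a two-fold serial dilution; names are mine.
# The report rounds the resulting concentrations to 90%, 45%, 22%, 11%, 5%.
def serial_dilution(start_percent, stop_percent, factor=2.0):
    """Repeatedly divide the concentration by `factor` until it drops below `stop_percent`."""
    concentrations = []
    current = start_percent
    while current >= stop_percent:
        concentrations.append(round(current, 1))
        current /= factor
    return concentrations

print(serial_dilution(90.0, 5.0))  # [90.0, 45.0, 22.5, 11.2, 5.6]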
I diluted progressively until I reached a 5% concentration. After having created the dilutions, I subjected 5 Daphnia to each solution and observed their heartbeats as shown below:
Heart rate with Ibuprofen solutions, in beats per minute

  #       90%     45%     22%     11%      5%
  1       240     240     248     242     248
  2       259     244     243     251     242
  3       243     237     241     235     251
  4       253     246     241     248     243
  5       242     247     247     252     249
  6       260     241     246     241     241
  7       251     248     243     240     239
  8       239     240     249     248     246
  9       259     243     245     249     252
 10       246     249     249     245     237

Time (min.) to spontaneous heart-rate
increase and heart failure:
            7       9      13      18      23
Results
At first my results showed no change in the heart rate of the Daphnia. Within a certain time, depending on the solution, there was a spontaneous increase in heart rate and then a spontaneous heart failure. This occurred in all solutions progressively, from 90% in 7 min. to 5% in 23 min.
Discussion
I believe that this lab helped me learn how to research and experiment with scientific data better than I could before. The results negate my hypothesis, yet I still believe that they do not negate it completely.
Conclusion
In conclusion, the heart rate was affected by the Ibuprofen; however, I do not believe this to be a direct effect on the heart. Instead, I believe that the Ibuprofen affected it indirectly by causing something else to go wrong in the organism.
f:\12000 essays\sciences (985)\Biology\Impotency The New Therepy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IMPOTENCY: NEW THERAPY
By Tony Gramazio
As many as two-thirds of the men who are impotent because of physical conditions such as vascular disease, stress, trauma, surgery, and diabetes can probably have an erection again.
A new, less invasive treatment to help these men appears to have been found. According to studies at almost 60 medical centers across the United States, transurethral alprostadil has enabled 64.9% of men with erectile dysfunction to have an erection during sexual intercourse, compared to 18.6% on placebo.
Other therapies include needle injection, vacuum devices, and implants. The new treatment is used by inserting an applicator containing a microsuppository of alprostadil into the urethra after urinating. When a button on the applicator is pressed, the suppository is deposited into the urethral lining. Then, after ten minutes, an erection is formed.
f:\12000 essays\sciences (985)\Biology\Insects.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Insects are numerous invertebrate animals that belong in the Phylum
Arthropoda and Class Insecta. The class Insecta is divided into 2 subclasses:
Apterygota, or wingless insects, and Pterygota, or winged insects. Subclass
Pterygota is further divided on the basis of metamorphosis. Insects that
undergo incomplete metamorphosis are the Exopterygota. Insects that undergo
complete metamorphosis are the Endopterygota.
Insects have an outer bilateral exoskeleton to which the muscles are
attached and which provides protection for internal organs. The body is
divided into 3 main parts: the head, which includes the mouthparts, eyes,
and antennae; the thorax, which operates the jointed legs and/or wings; and
the abdomen, which has organs for digesting food, reproducing, and getting
rid of waste products.
The major systems in insects are the circulatory, respiratory, nervous,
muscular, digestive, and reproductive systems. In the circulatory system,
blood is pumped by the heart through a tube to the aorta, the head, and
other organs, then re-enters the tube through the ostia, openings along its
sides, and returns to the heart. The respiratory system carries O2 to the
cells and takes away CO2 from the cells through tubes branching out to all
the cells of the body. The nervous system consists of a brain, which
receives information from the eyes and antennae and controls the whole body,
and 2 nerve cords containing fused ganglia that control the activities of
each segment without the help of the brain. The insect muscular system is
made up of a few thousand small but strong muscles, allowing the insect to
carry objects heavier than itself. The digestive system is basically a long
tube: food enters the mouth and passes to the crop, where it is stored; the
gizzard, where it is ground; and the stomach, where it is digested; then the
undigested parts and wastes are moved to the intestine and colon and
released at the anus. And in the reproductive system, a new individual is
produced sexually when the female eggs produced in the ovaries unite with
male sperm produced in the testes.
Both man and insects live almost everywhere, eat all kinds of food, and use
all kinds of materials to build homes, so they constantly live in conflict.
Some insects seriously affect man's health and are parasitic on man and
other animals. Insects that feed on human or animal blood can carry disease
in their salivary juices and spread the disease to other animals. Many
insects irritate us without disturbing our health. Some bite and sting, and
some people are allergic to them; some insects are injurious to our
agricultural crops, food products, clothing, and wooden buildings. So far
man has had only partial success in defending against insects. But some
insect species are beneficial to man. The honey bee, for example, supplies
us with honey and the silkworm supplies us with silk. So bugs really aren't
that bad.
f:\12000 essays\sciences (985)\Biology\Introns and Exons.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
March 31 1997
AP. Biology essay
Introns and Exons.
The finding of introns and exons was one of the most significant discoveries in genetics in the past fifteen years. Split genes were discovered when a lack of correspondence between DNA and mRNA sequences was seen during DNA-mRNA hybridization. All new mRNAs must be transcribed by RNA polymerase enzymes. Transcription begins at the promoter sequence on the DNA and works down, thus the nucleotide sequence of the mRNA is complementary to that of the DNA. In eukaryotes the mRNA is processed in the nucleus before transport to the cytoplasm for translation. In order for the mRNA to become true functioning RNA it must undergo several stages of modification.
At first, when the mRNA is produced, a cap is added enzymatically to the 5' end of the RNA by linking a 7-methylguanosine residue through a triphosphate bond; this is called the G-cap. The G-cap is necessary for translation. The small subunit of the ribosome recognizes the G-cap and then finds the initiation codon to start translation. As the mRNA finishes transcription, the poly-A tail is added to the 3' end. With the two ends in place, the mRNA becomes pre-mRNA.
The pre-mRNA consists of coding and non-coding regions. Pre-mRNA molecules are much longer than the mRNA molecule needed to code for their protein. The regions that do not code for amino acids (aa) are scattered all along the coding region. The genes are split into coding regions, called exons (short for expressed regions); in between the exons, non-coding regions called introns exist. Before the translation of mRNA the introns must be spliced out. Splicing is a complicated process for the cell: it must locate every intron in the primary transcript. An average mRNA contains eight to ten introns; some even contain sixteen. Exons, like introns, are also spread apart. Some of their codons may be split by introns, so the information for a single amino acid could be some distance apart. Splicing takes place in the nucleus but can also take place in the cytoplasm and the mitochondria. After the splicing of the introns, the G-cap and the poly-A tail remain on the mRNA.
A single gene can code for multiple proteins by alternative splicing. A single strand was found to code for twenty different proteins, depending on how the exons are assembled. Different splicing combinations are regulated in a tissue-specific manner.
Most of the transcribed sequence consists of introns. Ninety-nine percent of the information contained in the gene transcript is destroyed when the introns are eliminated, since only the exons are translated. Most genes have introns; only a handful of organisms are found without them. Larger eukaryotes tend to have larger and more numerous introns than smaller eukaryotes.
There are sequences of nucleic acids at the exon-intron junctions of the mRNA that allow intron splicing. From what is known, there is a GU at the 5' splice site and an AG at the 3' splice site for most genes. This is called the GU-AG rule. Splicing enzymes recognize these sites with the help of ribonucleoproteins called snRNPs, or snurps. Snurps are formed from small nuclear RNA fragments of fewer than three hundred nucleotides called snRNAs. As an RNA molecule is being transcribed, four snurps attach to it, combining into a large spliceosome. The abundant snRNAs catalyze the cutting and the splicing of the gene.
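To make the GU-AG rule concrete, here is a toy Python scan over an invented pre-mRNA string. It only reports spans that start with GU and end with AG; real splice-site recognition also depends on snRNPs, branch points and surrounding sequence context, none of which this sketch models.

# Toy illustration of the GU-AG rule: candidate introns start with GU (5'
# splice site) and end with AG (3' splice site). The sequence is invented.
def candidate_introns(pre_mrna):
    """Return (start, end) index pairs of spans beginning with GU and ending with AG."""
    pre_mrna = pre_mrna.upper()
    spans = []
    for start in range(len(pre_mrna) - 3):
        if pre_mrna[start:start + 2] != "GU":
            continue
        for end in range(start + 4, len(pre_mrna) + 1):
            if pre_mrna[end - 2:end] == "AG":
                spans.append((start, end))
    return spans

print(candidate_introns("AUGGCCGUAAACUUAGGGCUAA"))  # [(6, 16)] for this made-up fragment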
In a self-splicing intron, the hairpin structure brings the ends of the intron near to the branch point. Then the intron itself catalyzes the making of the loop, joining the two exons. The difference between a self-splicing intron and one that requires the spliceosome is that the spliceosome can splice almost any intron, of almost any size. This helps the organism to survive mutations: when a mutation forms, sometimes a self-splicing intron loses its hairpin structure, not allowing itself to be spliced out.
f:\12000 essays\sciences (985)\Biology\Investigation of Reproduction and Development in Animals.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Investigation of Reproduction and Development in Animals
Due Date: 12/9/96
Cycles, Conception and Contraception
Fertility is the condition of being fertile. The aim of contraception is to inhibit fertility in individuals and hence slow population growth. All female contraception must operate on one system: the menstrual cycle.
What is the menstrual cycle, and what hormones are involved in controlling it?
After puberty, the female produces an egg each month. Other changes, including changes in the uterus lining and in hormone levels, take place on a cyclic basis. These changes are called the menstrual cycle. Follicle stimulating hormone (FSH) from the pituitary gland stimulates the growth of follicles in the ovary. Follicles produce oestrogen, which stimulates the pituitary to produce luteinising hormone (LH). As LH increases, the size of the follicle increases until an egg is released. The corpus luteum formed in the follicle secretes progesterone, which prepares the lining of the uterus for pregnancy. If fertilisation does not occur, the lining of the uterus is discharged from the body in the process called menstruation.
What is the birth control pill?
The combined oral contraceptive pill (the pill) is a reversible, hormonal method of birth control. The pill consists of a mixture of two synthetic hormones similar to oestrogen and progesterone, the woman's natural hormones which regulate the menstrual cycle. The pill is one of the most popular methods of birth control.
How does it work?
The pill basically prevents ovulation; therefore, the ovaries can't release a mature egg. Without an egg for the sperm to fertilize, a woman cannot get pregnant. The hormones also thicken the cervical mucus, making it difficult for sperm to pass into the uterus.
How effective is the pill?
If used correctly, the pill is highly effective. It has a less than one percent failure rate. However, because many people misuse it, the actual failure rate is more like three percent.
The pill does not provide protection against sexually transmitted diseases.
How are pills used?
One pill must be swallowed at the same time every day. It is not any single pill, but the day-to-day process of taking the pill which provides protection against pregnancy.
What is infertility?
Infertility in humans and other animal species is the inability to conceive or carry a pregnancy to a live birth. The causes of infertility can be identified in some cases. The majority of cases relate to female factors (50%), 40% relate to male factors and 10% are unknown. Infertility sometimes results from a combination of both male and female factors.
If the cause of infertility is known, treatment of some kind may be available. In other cases a problem may disappear on its own and fertility is restored. For many couples, about 40% of those affected, there is no solution to their infertility.
Nowadays, there is a range of technologies and options available to couples wishing to have children of their own. These methods include donor insemination, IVF (in vitro fertilisation), ZIFT, GIFT and so on.
What is IVF?
IVF involves fertilization outside the body in an artificial environment. This procedure was first used for infertility in humans in 1977, in England. To date, thousands of babies have been delivered as a result of IVF treatment. The IVF procedure has become simpler, safer and more successful over the years.
What types of infertility can be helped by IVF?
IVF is a good option for a couple in several instances. The most common reason for this procedure is blocked or damaged fallopian tubes. Through IVF, the damaged fallopian tubes are bypassed, and the fertilization which usually takes place within the fallopian tubes is instead performed in the human embryo culture laboratory. Other factors that might lead to the need for IVF include low sperm count, endometriosis and unexplained infertility which has not responded to other courses of treatment.
How is the procedure carried out?
To accomplish pregnancy as a result of IVF several steps are involved:
· Stimulation of the ovary to produce several fertilizable eggs.
· Retrieval of the eggs from the ovary.
· Fertilization of the eggs and culture of the embryos in the IVF laboratory.
· Placement of the embryos into the uterus for implantation (embryo transfer or ET).
Bibliography:
Encarta Encyclopedia, Microsoft, 1996.
Kinnear, Judith, Book One: Nature of Biology, The Jacaranda Press, Sydney, 1992.
Netscape
Winston, Robert, Infertility, A Sympathetic Approach, Optima Book, Great Britain, 1994.
World Book Encyclopedia, World Book Inc, Chicago, 1991.
The Human Body, World Book Inc, Chicago, 1990.
f:\12000 essays\sciences (985)\Biology\Iron Absorption.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Todd Bowen
Human Biology Today
December 5, 1996
Iron Absorption from the Whole Diet:
Comparison of the Effect of Two Different Distributions of Daily Calcium Intake
Hypothesis - If a woman distributes her daily intake of calcium by having less of it in her lunch and dinner meals and more in her breakfast and evening meals, then this would reduce the inhibitory effects calcium has on heme iron and nonheme iron absorption.
Background Information - This experiment is one of many that address calcium's inhibitory effects on iron absorption. In 1994, the Consensus Development Panel on Optimal Calcium Intake suggested an increase in the current Recommended Dietary Allowances of calcium (Whiting, p.77). The goal of this increase was to aid in the prevention of osteoporosis and other bone diseases. Unfortunately, this attempt at prevention could have an adverse effect on the human body's ability to absorb iron.
Recent studies have shown that eating a normal daily allowance of calcium cuts iron absorption by as much as 50-60% (Hallberg et al. p.118). Other studies examine iron bioavailability in menstruating, pre-menopausal, and post-menopausal women (Rossander-Hulten et al. and Gleerup et al.). One of the fears of an increased amount of calcium intake is the increased possibility of anemia in women who are already susceptible to this condition. The iron inhibition by calcium is a classic example of how the correction of one nutritional problem can be the cause of another.
The physiological mechanism of this calcium-iron relationship remains a mystery; however, there are two feasible theories. One states that calcium competes for an iron binding site on intestinal epithelial cells. It is believed calcium binds to the protein mobilferrin on the epithelial cells, which is the iron transport protein (Whiting, p.78). Another group of scientists theorizes that iron is able to be transported into the epithelial cells without problem; however, the iron then has trouble getting into the blood stream. The presence of calcium inhibits iron's ability to leave the epithelial layer.
Another very interesting theory is not on the microscopic level but on the evolutionary plane. Eaton et al. state that one possibility for this phenomenon could lie in Homo sapiens' genetic ancestry. As little as 200 years ago humans had almost double the calcium intake that they have at present, because humans evolved in a high-calcium nutritional environment. With the decrease in calcium, there has also been a large decrease in physical activity (Eaton et al.). The inhibitory effect of calcium on iron absorption could be related to the low intakes of iron and calcium in conjunction with the present low-energy lifestyle (Gleerup et al. p. 103).
Terms -
Extrinsic radioisotopic iron tracer - Radioisotopes of iron (59Fe and 55Fe) which can be traced from outside the body.
Heme - The heme molecule is a heterocyclic ring system of porphyrin derivation which has a molecule of iron in the center of the ring structure. Myoglobin and each of the four subunits of hemoglobin noncovalently bind to a single heme group. Heme is also the site at which each globin monomer binds one molecule of O2 (Voet et al, p. 216).
Heme iron - Iron which is located in the heme molecule.
Nonheme iron - Iron found in human tissue that is not a part of the heme molecule.
Oral reference dose - An oral dose of radio-labeled iron given to the subjects in order to examine their uninhibited iron absorption. This process was used as a control rate for each subject.
Experiment - The absorption of nonheme iron was measured from all meals during four 5-d periods (A1, A2, B1, and B2). Each day four meals were served to the subjects: breakfast, lunch, dinner, and an evening meal. The menus of the two B weeks were identical to the two A weeks, except for the distribution of dairy products. All meals, except for the evening meal, were served under supervision in the lab. A precise measure of the iron content of each meal was required to enable the homogeneous labeling of the nonheme iron with radioisotopic iron. One wheat-rye roll served as the carrier of the radioisotope, and this was eaten throughout the course of the meal.
Before starting the 4-wk absorption study, a blood sample was drawn to determine hemoglobin and serum ferritin concentrations. Three weeks after the last serving, the total retention of 59Fe and 55Fe was measured by a whole-body counter to determine their ratio. An oral reference dose of 59Fe was also given to the subjects to determine the retention of absorbed iron. During the study, measurements were made of the menstrual blood losses in 19 of the subjects. One subject had had no menstruation for 4 years and the other had an irregular cycle.
The subject sample consisted of 21 healthy female volunteers in a fertile age period. Most of these subjects were senior students or staff members of the institute, and were all highly motivated to participate in the experiment.
All meals were prepared in the laboratory kitchen. All the amounts of food were weighed and prepared for the subjects. Subjects with higher energy requirements than provided by the meals served were allowed to eat more of the special unlabeled wheat roll. Each whole-grain roll was labeled from the radioisotopic iron standard solution just before serving, in amounts that gave exactly the same specific activity of nonheme iron in all meals.
Fresh samples of raw and boiled vegetables and potatoes were analyzed for ascorbic acid on the same day that they were served. Weighed amounts from the meat dishes were analyzed for total iron and analysis of nonheme iron content of the meat. The mineral analyses of calcium, phosphorous, and magnesium were made after completion of the wet-ash method in sulfuric acid and hydrogen peroxide.
The measurement of nonheme iron absorption was done by using a liquid-scintillation spectrometer. The following equation was used to determine heme iron absorption: heme iron absorption (%) = reference dose absorption (%) x 0.322 + 15.71.
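For convenience, the regression equation quoted above can be written as a one-line helper in Python; the 40% reference-dose value in the example call is hypothetical and simply shows how the formula converts one percentage into the other.

# The study's stated conversion:
# heme iron absorption (%) = reference dose absorption (%) x 0.322 + 15.71
def heme_iron_absorption(reference_dose_absorption_pct):
    return reference_dose_absorption_pct * 0.322 + 15.71

print(heme_iron_absorption(40.0))  # 28.59 (%), for a hypothetical 40% reference dose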
Results - The nonheme iron absorption with high calcium intake was 12.1 ± 2.20% (range, 1.8%-32.3%) and with low calcium intake was 15.9 ± 2.50% (range, 1.6%-40.6%). When log serum ferritin concentration was used as a measure of iron status, the r-squared values were 0.68 and 0.51, respectively.
The mean difference of the individual total iron-absorption figures from the two 10-d periods was 0.52 mg. In 16 of the 20 women, total iron absorption was higher when the calcium intake with the two main meals was low. No obvious explanation was found for the four subjects who responded differently to the higher calcium intake.
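As a quick arithmetic check on the means just reported, the relative difference between the high-calcium and low-calcium absorption figures works out to roughly a 31% increase, which sits at the low end of the 30-50% range cited in the conclusion below; the short sketch restates that calculation and nothing more.

# Relative increase in mean nonheme iron absorption, using the two means
# reported above (12.1% with high calcium, 15.9% with low calcium).
high_calcium_pct = 12.1
low_calcium_pct = 15.9
relative_increase = (low_calcium_pct - high_calcium_pct) / high_calcium_pct
print(f"{relative_increase:.0%}")  # about 31%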
Conclusion - The absorption of heme and nonheme iron are both influenced by calcium to some extent. A main result of this study was that 30-50% more iron was absorbed when the intake of calcium was low at lunch and dinner compared with when the intake of calcium was high at these meals. This difference in iron absorption corresponds to 0.44 mgFe/day. In 5 of the original 21 women in our study, iron absorption was unexpectedly lower when milk was not served with the lunch and dinner meals. In 4 of the 5 subjects the difference was small and no obvious explanation could be found. In 11 of the 42 periods studied, nonheme iron absorption exceeded 20% and in four periods, 30%.
An important condition for this kind of long-term and rather complex absorption study is access to highly motivated and knowledgeable subjects who follow experimental instructions carefully.
Misgivings - The one facet of this experiment I felt was very lacking was the diversity of the sample. The entire sample had a very similar background and all came from the same place, the university. I was also disappointed in the size of the sample. It seems this sample could have been bigger, or they could have had another trial.
Personal Response - Overall, I really enjoyed reading this article. The questions about our diet that it poses in the conclusion were extremely interesting. I also thought it was written well. The terms and ideas were easy enough for the layman to understand, while it was also able to discuss complicated ideas.
Works Cited
Eaton SB, and Nelson DA. "Calcium in evolutionary perspective." American Journal of Clinical Nutrition 1991;54:281S-287S.
Gleerup A, Rossander-Hulthen L, Gramatkovski E, and Hallberg L. "Iron absorption fom the whole diet: comparison of the effect of two different distributions of daily calcium intake." American Journal of Clinical Nutrition 1995;61:97-104.
Hallberg L, Brune M, Erlandsson M, Sandberg A, and Rossander-Hulthen. "Calcium: effect of different amounts on nonheme- and heme-iron absorption in humans." American Journal of Clinical Nutrition 1991;53:112-119.
Rossander-Hulten L, and Hallberg L. "Iron requirements in menstruating women." American Journal of Clinical Nutrition 1991;54:1047-1058.
Voet D, and Voet JG. "Hemoglobin Function." Biochemistry. New York, NY. 1995.
Whiting S. "The Inhibitory Effect of Dietary Calcium on Iron Bioavailability: A Cause for Concern?" Nutrition Reviews, Vol. 53, No. 3, pp. 77-80.
f:\12000 essays\sciences (985)\Biology\Is it worth waiting for cryogenics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Imagine being frozen in time to escape a deadly illness, then being warmed
again when a cure is found. There is a question of whether cryogenic methods should be
used. To fully understand cryogenics, a knowledge of cold, background information on
some branches of cryogenics, some problems with cryopreservation, and different
people's views towards cryogenics is needed.
"Cold is usually considered hostile to mankind. Most people hate cold and with
reasons." If not careful, cold can be deadly to animal and human life, but it can also
help cure, because cold bodies perform functions slower (Kavaler 16-17).
Measurement of temperature is extremely important in cryogenics and the temperatures
must be exact. The standard for scientific temperature measurement is the Kelvin
scale. On the Kelvin scale absolute zero has a value of zero degrees on the
thermometer. In theory no substance can be lowered to or below zero degrees Kelvin
or absolute zero. Temperatures in cryobiology range from zero degrees Celsius--where
water freezes--down to just above negative two hundred and seventy-three point one five
degrees Celsius--absolute zero. The word "cryogenics" comes from the Greek word "kryos,"
meaning cold ("Cryogenics" Raintree 127, Kavaler 16). The science of cryobiology was
first recognized in the early nineteen sixties. Cryobiology is the study of the effects of
extremely low temperatures on living animals and plants. The chief concern in
cryobiology is to preserve living matter for future use. This method can also be called
cryopreservation. Cryotherapy is the use of extreme cold in treatment. The first trials
of cryotherapy produced great results ("Cryobiology" Comptons 1, McGrady 97).
Frozen cells can be kept alive for very long periods of time in a state of
"suspended animation." Almost immediately after rapid thawing, the frozen cells regain
normal activity. Cooling of the body causes a loss of feeling, therefore it can be used
as anesthesia in surgery. Since certain drugs don't affect healthy cells at low
temperatures, the drugs can be safely used against cancerous tumors in the body.
Cryogenics also helps in the preservation and storage of human tissues. Tissues such
as eye corneas, skin, and blood that were rapidly frozen can be stored in "banks" for
later use. Then skin can be grafted to burn victims and eye corneas can replace
damaged ones. Thanks to cryobiology, blood can be frozen and stored indefinitely, as
opposed to only three weeks as it was before cryogenic technology was used. Surgeons
can use a cryoscalpel, a scalpel with a freezing tip, to deaden or destroy tissue
with great accuracy and little bloodshed ("Cryogenics" Academic 350, "Cryobiology"
World Book 929). Scientists use a liquefied gas, liquid nitrogen, to freeze and store
cells. Some problems can also occur in cryogenics. If cells are not frozen fast
enough they will explode and die. Many biological reactions may still take place at
temperatures as low as negative nine degrees Celsius. And ice crystals, which form
at temperatures as low as negative one hundred and thirty degrees Celsius, will
destroy the frozen cells (Allen 38, "Cryobiology" Gale 1029).
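Because the essay moves between Celsius figures and the Kelvin scale, a small conversion sketch may help; the sample temperatures are the ones mentioned above (0, -9, and -130 degrees Celsius, plus absolute zero at about -273.15 degrees Celsius), and the function name is my own.

# Celsius/Kelvin relationship behind the cryogenic range discussed above.
# Absolute zero is 0 K, or about -273.15 degrees Celsius.
ABSOLUTE_ZERO_C = -273.15

def celsius_to_kelvin(temp_c):
    if temp_c < ABSOLUTE_ZERO_C:
        raise ValueError("no substance can be colder than absolute zero")
    return temp_c - ABSOLUTE_ZERO_C

for temp_c in (0.0, -9.0, -130.0, ABSOLUTE_ZERO_C):
    print(temp_c, "C =", round(celsius_to_kelvin(temp_c), 2), "K")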
Following are the views of two people involved with cryogenics. Mr. Young, a
biology teacher with a working knowledge of cryogenics, thinks cryobiology should be
used to preserve endangered species. He doesn't see the technology for freezing a
whole body but maybe body organs in the near future. Mr. Young believes the money
that would be needed to improve the technology could be better spent, which is a
controversy in itself. If given a chance, Mr. Young would enter his body into
cryopreservation for the benefit of science (Young interview). By contrast, an America
On-Line--AOL--user with an interest in cryogenics feels that when cryopreservation
becomes a reality for an entire body, people who are willing and have the money
should enter their bodies into cryopreservation. He thinks the shock of waking up in a
new age could be dangerous to one's mental health (AOL interview).
Cryogenics is important because it could save and improve life in many
ways. Cryopreservation, a branch of cryobiology, has the main purpose of preserving
living matter, plant or animal, for future use. Cryogenics could save endangered
species from extinction. Scientists could save the gametes, the sperm and the eggs,
of endangered species, which could be fertilized and raised when the environment is
able to support them. Cryopreservation should be used on humans who want to use it. Many
people are willing to take the risk of being suspended or maybe even lost in time for a
chance to live life again.
Word Count: 872
Works Cited
Allen, Richard J. Cryogenics. Philadelphia: J.B. Lippincott Company, 1964.
AOL user. Internet interview. 6 January 1997.
Coxeter, Ruth. "The Deep Freeze for Irregular Heartbeats." Business Week 19
September 1994: 90.
"Cryobiology." Compton's New Media Forum. 1995 ed.
"Cryobiology." The Gale Encyclopedia of Science. 1996 ed.
"Cryobiology." The New Grolier Multimedia Encyclopedia. 1993 ed.
"Cryobiology." The World Book Encyclopedia. 1967 ed.
"Cryogenics." Academic American Encyclopedia. 1991 ed.
"Cryogenics." The Raintree Illustrated Science Encyclopedia. 1979 ed.
Kavaler, Lucy. Freezing Point. New York: The John Day Company, 1970.
McGrady, Patrick. Science Year The World Book Science Annual. Chicago: Field
Enterprises Educational Company, 1969.
"New Tools of the Trade," Current Health 2 January 1992: 9.
Young, Glen. Personal interview.
f:\12000 essays\sciences (985)\Biology\Laboratory Safety.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Recognizing Laboratory Safety
Purpose:
The purpose of this lab is to learn how to stay safe no matter what you are doing in the lab. You may be working with dangerous materials such as glass or acid.
Procedure:
I am following the procedure described in pages 21-24 in the Biology Laboratory Manual.
Observations:
Does not apply to this lab
Answers to Questions:
1. The flask symbol means glassware safety. Glassware can be broken easily.
2. The goggle symbol means that you are working with fire. It is extremely important to protect yourself from fire.
3. The hand symbol signifies that you are to wear heat-resistant gloves.
4. The bottle with the crossbones on it means chemical safety. Whenever you see this symbol, you know that you will be working with possibly dangerous chemicals.
5. The eye symbol signifies that you will be working with objects that could be hazardous to your eyes.
6. The razor blade symbol signifies that you will be working with sharp objects. You should always be careful when working with sharp objects.
7. An electrical plug symbol means that you will be using electricity in your lab. Never touch an electrical socket or appliance with wet hands.
8. The symbol that looks like a duck means that you will be working with live animals.
Analysis and Conclusion:
1. The person is not wearing safety goggles and he isn't really paying close attention. Safety goggles are vital when you are working with fire.
2. She is pointing the vial towards herself. Whenever you are heating liquids, the vial should never be pointing towards you.
3. The person is heating a liquid with a top on the beaker. Whenever you are heating a substance, there should never be a top on the container.
4. The lady is drinking out of the beaker. You should never do that. You also should never eat at a lab station.
Conclusion:
In this lab, I learned what certain signs mean. I also learned about situations that could happen in the lab, and what to do if they occur. If, in the future, I see the eye-safety sign, I will know to wear protective goggles.
f:\12000 essays\sciences (985)\Biology\Language of the cell.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE LANGUAGE OF THE CELL
MAY 3rd, 1996 SCIENCE 10 AP
BOOK INFORMATION
LIBRARY: Fish Creek Area Library
NAME OF BOOK: The Language of the Cell
AUTHOR: Claude Kordon
PUBLISHER: McGraw-Hill, Inc.
PUBLICATION DATE: 1993
CALL NUMBER: 3 9065 03969 1246
DUE DATE: May 3rd, 1996
The cell is a complex and delicate system: it can be seen that the cell is the stage where everyday functions such as molecule movement, protein synthesis and tissue repair take place. All organelles within the cell are well rehearsed in their operations, but an error on an organelle's behalf can send the cell and its organelles into panic. The efficiency rate of the cell plummets to a low level. It does take some time for the dust to settle, but once the scripts are memorized, the cell is ready to begin its tasks again.
Since the 19th century, it has been known that all living things, whether plants or animals, are made up of cells. Credit for this idea has been given to an English physicist, Robert Hooke (1635-1703), who looked at a thin slice of cork under a powerful hand lens. Hooke discovered a large number of cells. Rudolf Virchow (1821-1902) propounded the idea that the cell is the basic structural and functional unit of all living organisms.
A cell can have a wide range of shapes and sizes, although most cells are microscopic. Inside the cell membrane, a nucleus can be seen. The nucleus is the control center of the cell. Between the nucleus and the membrane there is a matrix called the cytoplasm, where the organelles can be found. The organelles are attached to a framework: the cell's cytoskeleton.
Every living cell has the ability to detect signals from its environment. The signals are usually in the form of chemical molecules that the cell has learned to recognize. The cell decodes these molecules into messages, and acts upon them. The cell has a "language". Signals and messages are carried by particles of matter that have very low energy requirements. There are many, many signals rumbling around the cell. It was thought that the cell would confuse itself in all of that background signal noise. One answer is available to this question: the cell's decoding mechanisms are located downstream from the receptors. They are based on complex chemical reactions that take place in the cell membrane and control all the responses of the cell to the messages it receives.
Neuropeptides and polypeptide hormones are made up of complex assemblies of amino acids, aligned in different sequences. In other cases, the amino acids are slightly transformed, as is the case with well known transmitter substances such as epinephrine (adrenaline), dopamine and histamine.
Products made in the organelles within the cell, are sent to various destinations, both in and out of the cell. The cell has what amounts to a parcel delivery service, that is guided by "addresses," by chemical "tags" or labels. These labels generally consist of fairly simple molecules (often sugars) attached to the product being forwarded and recognized by the structure for which it is intended.
When a cell messes up on a delivery, which doesn't happen very often, it is usually the result of a genetic defect. The "tag" on the product being forwarded is usually mutated, so the receptor cannot recognize it. Sometimes the receptor is mutated, meaning that it does not recognize the signal. The result of this is a botched cell. An example of this is the low density lipoprotein receptor. If the receptor fails to sequester and internalize its signal (cholesterol), then cholesterol can no longer be reincorporated into the cell, and it builds up in vessels, causing potentially fatal conditions.
Three recent discoveries about the cell tell us that:
A) Each cell is not simply controlled by an accelerator and an inhibitor, and the cell has the ability to recognize a great amount of signals.
B) The number of signals discovered in the body has increased tremendously.
C) Signals within the cell are not, as formerly believed, characteristic of an organ or function, but they are all found in nearly all organs and are associated with nearly all functions.
As mentioned earlier, signals are incredibly small, have low energy requirements and weigh approximately one billionth of a gram. Scientists have discovered new signals with the development of extremely effective chemical methods that make it possible to purify them and elucidate their structure. These advances and discoveries have led to a well understood field of protein chemistry. One problem is that new signals are coming out every day!
A point which should be stressed is that the universality of communication implies that no signals are attached exclusively to one organ or function. However, signals do not circulate unrestrictedly throughout the body. Most signals are very versatile in the way that they can carry out all sorts of assignments whether it be local cell to cell communications or long distance cell to cell communications. The best example of this is epinephrine, which acts both as a nervous system mediator and as a hormone.
On the contrary, some signals remain highly specialized, and therefore cannot take on many of the tasks that a versatile signal can. GnRH, a small peptide of amino acids, is mainly involved in the regulation of sexual behavior, reproductive hormones and the external genital organs. These highly specialized signals can be found in only a select number of organs.
All cells and organs depend heavily on the ability of the receptors and decoders to do their job. Recognition errors by the mechanisms in the immune system can have grave consequences: unintentional destruction of elements of the self, or failure to be alert to nonself antigens. The cells responsible for the body's defenses use different decoding combinations for protection. An example is that a foreign antigen is perceived as foreign only if it is presented to the lymphocyte (a white blood cell formed in lymphoid tissue) by another cell. The cell's crash-free system calls for one more security check: appropriate signals have to "confirm" the order to respond to the intruding antigen by the secretion of antibodies. Fail safe? The reader definitely hopes so.
Neuroendocrinology. A nice long name for an extremely important field of study. Neuroendocrinology is the study of the exchanges of signals, and how they are integrated in the general coordination plan of an organism. A cooling of the outside temperature, perceived by the nervous system's sensory organs, triggers an increase in the production of heat and, at the same time, a decrease in its dissipation. The same goes for when it is hot outside: the sensory organs order a decrease in heat production and the dissipation rate is raised. The steps the cell goes through are similar to those of a thermostat. Both responses involve concomitantly nervous mechanisms (changes in behavior, chilling, etc.) and a hormonal link: stimulation of the thyroid. This is an example of a neuroendocrine reflex. It is interesting to note how a thermostat can be related to a neuroendocrine reflex. Home device ideas "taken" from bodily functions may turn out to be a useful tool in the school curriculum. Students could relate various abstract concepts to everyday household devices.
There is an old question of the relation between simplicity and complexity in biology. During the course of evolution, the rules of cell communication have more or less remained unchanged. A primitive organism, such as a bacterium or an amoeba, when directly immersed in a liquid medium, fulfills the need for securing information regarding its environment. What are the available nutrients? The unicellular organism decodes this information and acts upon it. If there are food signals, then the amoeba will undergo phagocytosis, capture the food, and then enzymes will be released and the food will be digested. The cell and its signals form a continuing circle.
The reader thought that The Language of the Cell was an extremely hard, well written book. The book went deep into the subject most of the time, and sometimes the reader found this confusing. One negative point discovered by the reader was that the concepts explored in the book were new, confusing and frustrating. Little was understood about neurology, yet the reader made an incredible effort in trying to understand and make sense of the topic. The reader feels that there are not many books that deal with biology on a grade ten AP level. Therefore, a reading assignment of this sort can be extremely challenging, and leave the reader with too many questions and no answers. The reader feels that this kind of book report assignment should be explored in grade twelve, when DNA and other neurological concepts of some sort are known.
The reader felt that this whole book report was an experience, and it broadened the reader's knowledge base. The reader also found out that biology is a very interesting subject as a whole, but some fields of study under the whole biology picture are extremely boring and complicated. Maybe this impression will change over time.
f:\12000 essays\sciences (985)\Biology\Lassa Fever An Old World ArenaVirus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LASSA FEVER; AN OLD WORLD ARENAVIRUS
ABSTRACT
A brief summary of Lassa fever, its history, pathology and effects on the indigenous populations. Also, Lassa fever in the context of newly emerging diseases.
LASSA FEVER
On January 12, 1969, a missionary nun, working in the small town of Lassa, Nigeria, began complaining of a backache. Thinking she had merely pulled a muscle, she ignored the pain and went on about her business. After a week, however, the nurse had a throat so sore and so filled with ulcers that she couldn't swallow. Thinking she was suffering from one of the many bacterial diseases endemic to the area, her sisters administered every antibiotic they had in stock in the town's Church of the Brethren Mission Hospital. But the antibiotics did nothing. Her fever escalated, she was severely dehydrated, and blotches--hemorrhages--were appearing on her skin. She began to swell and became delirious, so they shipped her to a larger hospital, where one day later she went into convulsions and died. After a nurse who was tending to the sister came down with the same symptoms and died, the doctors in the hospital began to suspect it was a disease heretofore unseen by any of them. Autopsy on the nurse showed significant damage
to every organ in the body, the heart was stopped up, with loads of blood cells and platelets piled well into the arteries and veins. Fluids and blood filled the lungs. Dead cells and lipids clogged the liver and spleen. The kidneys were so congested with dead cells and free proteins they had ceased to function. Dissecting the lymph nodes, they discovered that they were completely empty; every white blood cell had been utilized in a futile attempt to stave off the unknown microbe. A few days later, a prominent western viral researcher contracted the unknown disease and the hunt for the microbe that caused lassa fever, began in earnest.(Garrett, 1994)
Lassa fever is caused by a virus belonging to the family Arenaviridae, genus Arenavirus. Although the genus has been known for about 60 years in the form of lymphocytic choriomeningitis, it has recently been brought to the public's attention because of the large number of species in the genus known as "emerging viruses." The genus consists mostly of New World viruses, among them the Junin, Machupo and Guanarito viruses, which cause, respectively, Argentine, Bolivian and Venezuelan hemorrhagic fevers, along with a few other non-pathogenic viruses. These viruses, long hidden in the deepest recesses of rain forests, are making their presence felt as much of the rain forest and other isolated areas become more and more accessible. Lassa fever is mostly on the rise because its main rodent host, Mastomys natalensis, is increasing in numbers due (indirectly) to an increase in poverty and scarcity of food. (Garrett, 1994) To be specific, when the endemic region has a scarcity of food, the villagers kill and eat the larger rat, Rattus rattus, which is a main competitor of Mastomys natalensis, thereby allowing the smaller Mastomys to flourish. The disease mainly affects western Africa, from Senegal to Zaire, although it has been exported to the United States (about 115 cases). (Southern, 1996)
The Lassa virus consists of two single strands of RNA enclosed within a spherical protein coat. The RNA exists as two strands designated L (for long) and S (for short). The S segment is the more abundant of the two; it codes for the major structural components such as the internal proteins and the glycoproteins, while the L segment codes for RNA polymerase and perhaps a few structural proteins. The protein coat has a number of T-shaped glycoproteins protruding from it, composed of GP2 (glycoprotein 2), which forms the base, and GP1, which forms the T-bar. (Southern, 1996) This structure is what inserts itself into the receptors on the host cell. When the virus first gains entry into the cell, it quickly takes over the machinery of the cell for its own purposes. First it creates copies of its RNA strands, then directs the cell to place the viral glycoproteins on the host cell membrane. The virions bud from the cell membrane, leaving the cell intact until, finally, the production of virions exceeds the cell's capabilities and the cell lyses.
Infection with Lassa virus leads, after a roughly 10-day incubation period, to a gradual onset of fever; then full-blown Lassa fever begins. First, the throat gets exceedingly sore, even to the point of severe ulceration and an inability to swallow or drink. In the first week anorexia, vomiting and chest pains are also common. The second week is worse. The chest pains move to the abdominal region and intractable vomiting begins, followed by severe edema of the throat and neck, tinnitus (ringing in the ears), bleeding from the gums and mouth, large rashes, coughing and dizziness. During this acute phase, extremely low blood pressure (below 90 mm Hg systolic) occurs, leading some patients to suffer additional symptoms correlated with the weak blood pressure. It is in this second week that patients who are going to survive begin to recover. Those who don't recover experience mental cloudiness, grand mal seizures, pleural effusion (fluid around the lungs) and shock. Shock or asphyxiation are the most common causes of death. The illness lasts for 7 to 31 days in non-fatal cases, and 7 to 26 days in fatal cases. Most survivors report hearing loss and tinnitus as a result of the infection. The mortality rate has been reported as high as 45%, but the average is around 20%. (Sanford, 1992)
To date there has been no intensive mapping of the extent of virulent Lassa distribution in Africa, and there is no surveillance for spread or contraction of the established highly endemic zones. (Southern, 1996) It took a number of sick westerners to grab the attention of the developed nations before they began to investigate this illness. Now that we have discovered it and are convinced it is not an immediate danger to ourselves, we have retreated to our own nations without so much as a single rodent eradication program. As a result the disease has spread to a much larger endemic area. The feeling is that it could be controlled by proper hygienic and educational measures, but the developed world chooses to leave the dying and forgotten continent of Africa to suffer yet another vicious and deadly disease.
LITERATURE CITED
Garrett, Laurie (1994). "Into the Woods." The Coming Plague: Newly Emerging Diseases in a World out of Balance, 71-99.
Southern, Peter (1996). "Arenaviridae: The Viruses and Their Replication." Fields Virology, 1505-1520.
Sanford, Jay (1992). "Lassa Fever." The Merck Manual, 218-219.
f:\12000 essays\sciences (985)\Biology\Le Morse.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS
Introduction
Description
Way of Life
Its Predators
The Walrus Loves Water
Squabbles
The Walrus's Menu
The Walrus and Me
Habitat
Reproduction
Conclusion
Bibliography
Index
INTRODUCTION
Do you have big teeth? Surely they are not as big as those of the animal I have chosen. You insist that yours are bigger? Unless you are a walrus, you are wrong. Of course, the animal I have chosen is the walrus. This animal is fascinating and very original, which is why I chose this subject. Read on, and you will learn what the walrus looks like, how it reproduces, how it lives, what it eats and where it lives. So, off we go into the world of the walrus!
DESCRIPTION
Walruses all group together like one big family. There are at least 1,000 walruses per beach.
The walrus is easy to recognize. It has two enormous teeth that stick out of its mouth; those of the males are about 1 m long, while the females' are a bit shorter but more curved. Many people think these teeth serve only for defense; in fact, they also help the walrus dig in the water to find food. As for size, the male ranges between 3 and 4 m long and weighs up to 1,500 kg. The female is smaller and less massive, measuring 2.5 to 3 m in length and weighing between 600 and 800 kg.
Another way to tell the two sexes apart is to look at the skin colour. Males have darker skin than females, and a male's skin can be more than 2.5 cm thick while a female's is only about 1.75 cm. The skin is so large that it seems too big for the body. It is rough and brown in colour.
You may have noticed the walrus's moustache. What is it for? These long whiskers probe the water for food. The moustache is made up of 400 stiff, thick, very tough bristles arranged in neat, straight rows. They work like 400 little fingers.
WAY OF LIFE
Its cousins are the seals and the sea lions. They all belong to the pinniped family, a name that means "fin-footed." These animals are in the water most of the time, so flippers are more useful to them than legs.
Walruses have a particular way of walking: they move using only their flippers, while their cousins use their whole bodies.
ITS PREDATORS
A walrus may not be able to protect itself on its own, but by staying together walruses can defend themselves against any of their predators. At least one walrus stands guard while the others sleep. As soon as an enemy is in sight, this lookout alerts the others with its call, which sounds very much like a roar, and all the males go on the attack while the females protect their young. If, by bad luck, there are not enough of them to kill the attacker, they all dive into the water and hide.
THE WALRUS LOVES WATER
Most of the time that the walrus is awake, it is in the water; in fact, 90% of its day is spent there. But a walrus breathes with lungs*, which means it must come back up to the surface to breathe.
When the walrus is in the water, its muscles relax and its heart beats more slowly, so it can go up to 15 minutes between breaths. That's a long time!
It can swim at up to 24 km/h. The human record is 4.75 km/h, so the walrus swims at more than five times the human record speed!
SQUABBLES
Like any species, the walrus does not always get along with its neighbour. Squabbles sometimes break out.
How do they start? One source of these disputes is that walruses are packed tightly together, which can lead to accidents, such as a walrus poking its neighbour with one of its enormous tusks as it turns over. Or perhaps one walrus's sleeping spot is so good that the one next to it wants it badly enough to try to steal it.
What do they do? Usually walruses do not fight for real; most of the time they only bicker. Sometimes they bellow so loudly that they can be heard up to 2 km away. Occasionally a real battle breaks out: the two rivals rear up on their hind flippers and throw their heads back to show off their tusks and frighten each other. It is very rare for a squabble to come to actual blows.
To end the fight, sometimes another walrus intervenes, or one of the rivals leaves (usually the one with the shorter tusks).
THE WALRUS'S MENU
We said that the moustache is used to probe the water for food. "But what kind of food?" you may be asking. The menu consists of fish, shrimp, whelks*, marine worms, sea cucumbers*, molluscs*, echinoderms*, bottom-dwelling animals and its favourite meal, poulards*. The walrus can eat poulards without swallowing the slightest piece of shell. No one knows how it does this. Scientists have done a great deal of research on the subject, but no one has figured it out.
THE WALRUS AND ME
Even though the walrus is an aquatic animal, it is not at all like a fish. It breathes with lungs and has no gills. In addition, it is warm-blooded*.
HABITAT
The walrus lives in the far north. It likes to swim in cold water and rests on large floating chunks of ice. Another use of its tusks is for hauling itself up onto ice that is too slippery. Walruses rest on the ooglis* to sleep.
Here is a map showing where the walrus lives in North America.
REPRODUCTION
The breeding season falls between April and June, and the breeding cycle lasts one year. Gestation and mating take place in the water. The walrus is the only pinniped to mate every two years rather than every year.
Each walrus family consists of 1 male, 1 to 3 females and several young aged 4 and under.
Nursing takes about a year and a half. The male chooses up to 30 females to mate with. He finds a territory and is ready to fight anyone who disturbs him. As soon as the mating season is over, the male no longer looks after his territory or his females, and the females must raise their young alone.
Around May or June of the following year, a single calf is born (it is very rare for the mother to have twins), about 2 m long and weighing between 45 and 70 kg. Instead of being brown like the adults, the young are more greyish.
The baby walrus spends most of its time snuggled against its mother. This is understandable, since it has only a very thin layer of blubber and a coat of short hair to protect it from the cold.
The two sexes are about the same size at first. Only later does the male grow about 50 cm longer and roughly 750 kg heavier than the female.
CONCLUSION
In closing, I would like to say how much fun it was to do this research. I learned a lot about the walrus, and I hope the same is true for you. I learned about its shape and colour, what it does, how it squabbles, how it reproduces, where it lives and what it eats. If you want to know even more about this animal, you can look for books at your nearest library. I hope you enjoyed my report and that you learned a great deal.
BIBLIOGRAPHY
Le Morse, Laima Dingwall, Le Monde Merveilleux Des Animaux, Grolier, 1987, 46.
Animaux Des Pôles, Gabrielle Pozzi, Les Animaux Du Monde Entier, Deux Coqs D'or, 1976, 46.
INDEX
Whelks: Large invertebrates with a soft body. They live in polar regions.
Echinoderms: Marine animals with radial symmetry.
Sea cucumbers: Marine animals with suckers on the ventral side and retractable papillae on the dorsal side.
Molluscs: Invertebrates with a soft body.
Ooglis: The ice-covered edges of the seas of the arctic region.
Poulards: Animals with a layered or rough shell, edible or prized for their mineral secretions (mother-of-pearl, pearls).
Lungs: Each of the two organs housed symmetrically in the chest cavity; the organs of breathing, where gas exchange takes place.
f:\12000 essays\sciences (985)\Biology\Leprosy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Mike Wallis
Leprosy
Leprosy, or Hansen's disease, is a chronic, infectious disease that mainly affects the skin, mucous membranes, and nerves. A rod-shaped bacillus named Mycobacterium leprae causes the disease. Mycobacterium leprae is very similar to the bacillus that causes tuberculosis. Leprosy is also known as Hansen's disease because it was first identified in 1874 by a Norwegian physician named Gerhard Henrik Armauer Hansen.
Leprosy appears in both the Old and New Testaments. In the Bible, however, leprosy was not the disease recognized today but a range of physical conditions that were nothing like it. These conditions were considered a punishment from God, and the victim was said to be in a state of defilement. The Hebrew term for this state was translated as lepros, from which the word leprosy comes.
The disease's probable origin was the Indus Valley, located in India. Leprosy spread from there to the Mediterranean region and North Africa, and then all of Europe was affected. The disease is much less common now, as the world case count has dropped below 1 million. During 1995 about 530,000 new cases of leprosy were discovered. Third-world countries clearly carry far more cases, as India, Indonesia, and Myanmar account for almost 70% of the cases reported in the world. About 5,500 known cases of leprosy still exist in the US, and about 200 cases are reported annually.
Attempts to produce leprosy in experimental animals have not been successful as of yet, though the organism can be grown in armadillos, and several laboratories have reported cultivating it in the test tube.
Loss of sensation in a patch of skin is often the first symptom that leprosy displays. In the lepromatous form, large areas of the skin may become infiltrated. The mucous membranes of the nose, mouth, and throat may be invaded by large numbers of the organism. Because of damage to the nerves, muscles may become paralyzed. The loss of sensation that accompanies the destruction of nerves may result in unnoticed injuries, which may in turn lead to secondary infections, the replacement of healthy tissue with scar tissue, and the destruction of bone. The classic disfigurements of leprosy, such as loss of extremities from bone damage or the so-called leonine facies, a lionlike appearance with thick, nodulous skin, are signs of advanced disease, now preventable with early treatment.
For many years chaulmoogra oil was used for the treatment of leprosy. Today drugs such as dapsone, rifampin, and clofazimine are used alongside a healthy diet. If the bacilli are killed too quickly, they may cause a systemic reaction. This reaction, called erythema nodosum leprosum (ENL), may cause progressive impairment of the nerves. Corticosteroids are used to control such reactions.
Of all contagious diseases, leprosy is perhaps the least infectious. New patients are rarely quarantined, and most are treated on an outpatient basis. A leprosy vaccine is currently under development.
f:\12000 essays\sciences (985)\Biology\Leukemia 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Leukemia
Leukemia strikes all ages and both sexes. In 1995 approximately 20,400 people died from leukemia. The all-time five-year survival rate is 38%; this rate rose to 52% in the mid-1980s. Approximately 25,700 new cases were reported in 1995 alone (American Cancer Society-leukemia, 1995).
Leukemia is a form of cancer of the blood cells. Most forms of leukemia occur in the white blood cells. These abnormal cells reproduce in large quantities and look and perform differently from normal cells (MedicineNet-leukemia, 1997).
The causes of leukemia are currently unknown. Some studies have shown that exposure to high-energy radiation increases the chances of contracting leukemia. Such radiation was produced in the atomic bombing of Japan during World War II. Nuclear power plants also produce enough such radiation that strict safety precautions are taken. Some research suggests that exposure to electromagnetic fields, such as those from power lines and electric appliances, is a possible risk factor, though more studies are needed to prove this link. Some genetic conditions, such as Down's syndrome, are also believed to increase the risk. Exposure to certain chemicals is also suspected to be a risk factor. Learning the causes of leukemia will open up new treatment options (MedicineNet-leukemia, 1997).
There are many symptoms of leukemia, and they are the same for all the different types. In the acute types of leukemia, ALL and AML, symptoms are seen more quickly than in the chronic types, CLL and CML, where symptoms do not necessarily appear right away. The symptoms include flu-like complaints, weakness, fatigue, constant infections, easy bleeding and bruising, loss of weight and appetite, swollen lymph nodes, liver or spleen, paleness, bone or joint pain, excess sweating, swollen or bleeding gums, nosebleeds and other hemorrhages, and red spots called petechiae located underneath the skin. In acute leukemia the cancerous cells may collect around the central nervous system. The results can include headaches, vomiting, confusion, loss of muscle control, or seizures. These clumps of cancer cells can also collect in various other parts of the body (MedicineNet-leukemia, 1997 and American Cancer Society-leukemia, 1995).
Leukemia can be diagnosed in a number of ways. Blood work is commonly done in the laboratory; different forms of blood work include checking the hemoglobin count, platelet count, or white blood cell count. X-rays are routinely done for treatment follow-up, and ultrasound is also used as a treatment follow-up. A CT scan is a special type of x-ray used to make a detailed cross section of a specific area of the body. Bone marrow is routinely tested to examine the progress of the disease. Spinal taps are also used in certain types of cancers; the spinal fluid is checked to see if cancer cells are present (Parent and Patient handbook-hematology/oncology clinic, Children's Hospital of Michigan, 19??).
Treatment of leukemia is very complex. Treatments are tailored to fit each patient's needs; the treatment depends on the type of cancer and the features of the cells, as well as the patient's age, symptoms, and general health. Acute leukemia must be treated immediately. The goal of treatment is to get the cancer into remission. Many people with leukemia may be cured; to be considered cured, a patient must be cancer free for at least five years, and this time varies depending on the type of cancer. The most common treatment of leukemia is chemotherapy. Bone marrow transplants, radiation, and biological therapy are also available options, and surgery is occasionally used. Chemotherapy is a treatment method in which drugs are given to kill off the cancerous cells. One or more drugs may be used depending on the type of leukemia. Anticancer drugs are usually given by IV injection, though occasionally they are given orally. Chemotherapy is given in cycles: a treatment period followed by a recovery period, followed by another treatment period, and this process continues for a certain amount of time. Radiation therapy is used along with chemotherapy on some occasions. Radiation uses high-energy beams to kill the cancerous cells and can be applied either to one area or to the whole body; it is applied to the whole body before bone marrow transplants. Bone marrow transplants are used in certain patients. The patient's bone marrow is killed by high doses of drugs and radiation and is then replaced by a donor's marrow or by the patient's own marrow that was removed before the high doses of drugs and radiation. Biological therapy involves substances that affect the immune system's response to the cancer (MedicineNet-leukemia, 1997).
In conclusion, leukemia can be fatal, but with early diagnosis, proper treatment, and a lot of luck, it can be put into remission. With treatment options improving constantly, there may one day be a sure cure. Leukemia is a formidable disease and very hard to treat; the key may lie in discovering its causes.
f:\12000 essays\sciences (985)\Biology\Leukemia.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Leukemia
Leukemia is a disease characterized by the formation of abnormal numbers of white blood cells, for which no certain cure has been found. Leukemia also refers to conditions characterized by the transformation of normal blood-forming cells into abnormal white blood cells whose unrestrained growth overwhelms and replaces normal bone marrow and blood cells. Leukemias are named according to the normal cell from which they originate, such as lymphocytic leukemia, in which a lymphocyte is transformed into a leukemia cell. Another example is myelocytic (or granulocytic) leukemia, which forms when a myelocytic cell is transformed into a leukemia cell. Different leukemias are identified under the microscope and by how much protein the cells contain. These leukemias are usually very severe and need treatment right away. The present incidence of new cases per year in the United States is about 25 for every 100,000 persons.
The danger to the patient lies in the growth of these abnormal white cells, which interfere with the growth of the red blood cells, normal white blood cells, and the blood platelets. The uncontrolled growth of the abnormal white cells produces a tendency toward uncontrolled bleeding, the risk of serious infection in wounds, and a small possibility of obstruction of the blood vessels.
Treatment of these leukemias includes chemotherapy with alkylating agents or antimetabolites that suppress the growth of abnormal white cells. Another treatment is x-ray therapy or the administration of radioactive substances such as radiophosphorus. After treatment these diseases may persist for many years. The age of the person diagnosed with leukemia plays an important part in how that individual responds to treatment: the older the person, the less response he may have to treatment. Leukemia of the white blood cells is much less common in animals than in humans.
Today's treatment mostly includes chemotherapy and/or bone marrow transplantation, along with supportive care, in which transfusions of blood components and prompt treatment of complicating infections are very important. Ninety percent of children with acute lymphocytic leukemia have received chemotherapy, and fifty percent of these children have been fully cured of leukemia. Treatment of AML, or acute myelocytic leukemia, is not as successful but has been improving more and more throughout the 1990's.
Scientists who study the cause of leukemia have not had very much success lately. Very large doses of x-rays can increase the incidence of leukemia, and chemicals such as benzene may also increase the risk of getting it. Scientists have tried experiments on leukemia in animals by transmitting RNA into the body of the animal; interpretation of these results in relation to human leukemia is very cautious at this time. Studies have also suggested that family history, race, genetic factors, and geography may all play some part in determining the rates of these leukemias.
Stewart Alsop is an example of a patient with acute myeloblastic leukemia, or AML. On July 21, 1971, Stewart was made aware of some of the doctors' suspicions arising from his bone marrow test. He was told by his doctor in Georgetown that his marrow slides looked so unusual that he had brought in other doctors to view the test; they could not come to an agreement, so they all suggested that he take another bone marrow exam. The second test was described as "hypocellular," meaning that it had very few cells of any sort, normal or abnormal. By the Georgetown doctors' count, about forty-four percent of his cells were abnormal, and he added, with a candor that he later discovered was characteristic, "They were ugly-looking cells." Most of them looked like acute myeloblastic leukemia cells, but not all: some of them looked like the cells of another kind of leukemia, acute lymphoblastic leukemia, and some of them looked like the cells of still another kind of bone marrow cancer, not a leukemia, called dysproteinemia. And even the myeloblastic cells didn't look exactly like myeloblastic cells should look. Stewart has been treated with chemotherapy and is still living today, but he doesn't have very much longer to live.
Sadako Sasaki was born in Japan in 1943 and died twelve years later, in 1955, of leukemia. She was in Hiroshima when the United States Air Force dropped an atomic bomb on that city in an attempt to end World War II; Sadako was only two years old when this happened. Ten years later, Sadako was diagnosed with leukemia as a result of the radiation from the bomb. She was only a twelve-year-old girl, and she died of the disease. Every day Sadako grew weaker and weaker, thinking about her death, and the day finally came: Sadako died on October 25, 1955. She was very much loved by all of her classmates. At the time of her death, her classmates folded 356 paper cranes to be buried with her. This is a symbol in Japan of thoughtfulness.
In summary, what I have learned about leukemia is that it is a very painful disease. People with leukemia suffer very much throughout the disease and its treatment, even if they are eventually cured; the treatment it took to get there was very painful. The study of leukemia has helped a lot of people to be cured, but there are still a lot of people suffering because no cure has been found to help them. I'm sure that, as with all other needed cures, the money is in short supply for research that costs so very much. Maybe someday soon, we hope, they will find a cure for all kinds of cancer.
f:\12000 essays\sciences (985)\Biology\life of octopus dofleini.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
This is a research report on octopuses in general; however, it will focus on a particular species, the North Pacific Giant octopus, Octopus dofleini, a bottom-dwelling octopus that lives on the coasts of the Pacific Ocean from California to northern Japan. This report will cover the habitat and lifestyle of this amazing mollusk, which is so often misunderstood. The octopus is a very intelligent and resourceful invertebrate whose natural abilities should make this fairly interesting reading.
REPRODUCTION OF O. DOFLEINI
The spawning of the giant Pacific octopus may occur at any time of the year; however, mating peaks in the winter months, with the peak of egg laying in April and May. Octopuses reproduce sexually, and there are both male and female octopuses. Reproduction takes place as follows: the male octopus uses a specialized arm to take a mass of spermatophore from within his mantle cavity, then inserts it into the oviduct in the mantle cavity of the female. This process occurs at depths of 20-100 m and lasts hours, with female octopuses receiving spermatophores up to 1 m long.
Female octopuses seem to prefer larger males as mates, and a male octopus may mate with more than one female in his life span; however, the male lives only a few months after breeding, and the female will die shortly after the eggs hatch.
Incubation can take from 150 days to seven or more months. The female may produce anywhere from 20,000 to 100,000 eggs over a period of several days. During incubation the female octopus takes to cleaning and aerating the eggs. This takes place at a depth of less than 50 meters.
LIFE SPAN OF O.DOFLEINI
After hatching, the baby octopuses (or larvae) take on the role of plankton, drifting around the ocean and feeding on the neuston (small organisms at the water's surface) as opposed to hunting live prey. This stage on average lasts for 30-90 days.
Without mating, an octopus may survive up to five years. Giant Pacific octopuses have been reported to reach a weight of 600 pounds and an estimated width of over 31 feet, but the average size is only about 100 pounds and 3 m, still making this the largest species of octopus.
During their life span, many octopuses fall victim to fatal and non-fatal predation. As a result, a high percentage of octopuses are mutilated or missing arms; this percentage increases in octopuses that live in deep water, perhaps because older octopuses tend to occupy deeper waters and would naturally have more battle scars. Larger octopuses, however, are less prone to these injuries. Among the predators of the octopus are other octopuses, sea otters, seals, sea lions, and fish.
THE DEN OF THE OCTOPUS
In finding a den, the octopus is a very resourceful animal. Most octopuses prefer to make their dens in natural rock crevices and underground caves; smaller octopuses tend to excavate areas of the sea floor to build their own dens, and still others prefer to occupy man-made dens, such as shipwrecks. Although the octopus is not territorial and may occupy a particular den for only a few weeks at a time, the den seems to be an important aspect of the octopus's life. The octopus uses its den for hatching its eggs and feeding, and even retreats to its den to hide from predators such as other octopuses and seals. A common sight marking the entrance to an octopus's den is a pile of shells and other refuse discarded after feeding. Although dens are an important place to the octopus, octopuses are very mobile animals.
FEEDING HABITS OF THE OCTOPUS
Octopuses feed on everything from smaller octopuses to crustaceans, but their favorite foods appear to be crab and shrimp. As a general rule, octopuses hunt prey during the hours of darkness and retreat to their dens to feed. Many octopuses overcome prey with venom of varying strength, while others simply capture prey and consume it with their bird-like beak.
LIFESTYLE AND PHYSICAL ATTRIBUTES OF OCTOPUS
As in other aspects of its life, the octopus is very resourceful and interesting in its defenses and hunting techniques. Some species of octopus, such as the blue-ringed variety (Hapalochlaena lunulata), are deadly poisonous to man. This octopus can administer its poison in two ways: it can either bite with its bird-like beak or release its poison into the water surrounding its prey. The poison attacks the nervous and respiratory systems of man, causing death in roughly one hour. There is no known anti-venom, so the only way to survive an attack is through the administration of CPR until the poison wears off after several hours. It should be noted that the primary use of this poison is in hunting prey, not defense.
Octopuses have the ability to change their skin coloration (like a chameleon) in order to camouflage themselves. This is accomplished through the action of the chromatophore cells in the skin. Chromatophore cells are made up of three sacs containing different colors, and these colors are adjusted until the background color is matched. The normal color of the North Pacific Giant is brown, but the octopus can also change color according to mood: red representing anger, white representing fear, and surely there are more moods with colors to match which are more subtle. This ability to change color according to mood was doubted by the scientific community for several years, but today it is a common belief.
The skin of the octopus is of varying softness, but all octopuses have very soft bodies. In fact, the only hard part of the octopus's body is the beak, which allows octopuses to fit through holes no larger than the beak itself.
All octopuses have the ability to shoot out a jet of purple-to-black inky fluid from under their eyes in order to perform a disappearing act when they feel threatened. The octopus can shoot out several blotches of this fluid before the ink sac is emptied. This trick is not always an option: the ink is actually toxic to the octopus, and if it is shot in a confined area, the octopus will become sick or even die.
Octopuses have fairly good eyes; in fact, they are comparable to ours in clarity. The eyes of the octopus differ from ours in that they focus by moving in and out, whereas the human eye focuses by changing the shape of the lens itself.
The octopus possesses the most advanced brain of all invertebrates, with both short- and long-term memory. This allows the octopus to learn in much the same way as humans, through trial and error; when an octopus learns a lesson, it remembers it and puts its knowledge to use in the future. The octopus has eight arms, with 250 suckers on each arm for a total of 2,000 suckers on its body. These suckers are very sensitive to touch; in fact, the octopus can differentiate between objects just as well with its suckers as it can with its eyes. Some species have particular suckers that are larger than the rest, which aids in reproduction. Although octopuses often lose arms to predators, it is of little consequence, as the arm will grow back in a short time.
The Pacific Giant octopus is of the phylum Mollusca, class Cephalopoda, order Octopoda, family Octopodidae, and its closest relatives are the chambered nautilus, the squid, and the cuttlefish. The squid is in many ways similar to the octopus: the squid (like the octopus) changes skin color according to mood and background, and the feeding activities of the planktonic O. dofleini are described as squid-like darting.
THE MIGRATORY HABITS OF O. DOFLEINI
The medium to large Pacific giant is believed to go through a migratory stage in which it migrates from shallow to deep water and back again. The migratory cycle runs as follows: shallow water October-November, deep water February-March, shallow water April-May, deep water August-September.
O. DOFLEINI AND MAN
The Pacific giant is the most common commercial species of octopus and is caught by fisheries from northern Japan to Washington state. The octopuses are caught in large pots, sometimes made of clay, and raised to the surface. They are used for bait and for consumption by humans. Although these octopuses are caught in nearly all of their habitats, they are not endangered.
The ocean is where life began, and it is a far more competitive and harsher world than the world we know. So it comes as no surprise that some of the most advanced and well-adapted life forms are found in the ocean. Although octopuses do not build large, structured civilizations, they are obviously another form of intelligent and highly adapted life.
f:\12000 essays\sciences (985)\Biology\Lipids.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LIPIDS
The subject I will cover is lipids. I will describe this class of organic compounds, tell you where lipids are found, explain their uses in plants and animals, describe their chemical structure, and give examples of types of these compounds, such as cholesterol.
The organic compounds called lipids have many similarities. They are almost always greasy, fatty, oily, or waxy. They do not dissolve in water, but they do dissolve in other organic solvents. This is like getting grease on your hands: it is hard to wash off because it seems to repel the water.
You can find lipids in many places. They are usually in fatty foods like butter, salad dressing, and cooking oils. They can also be found inside animals in the form of fat; a build-up of lipids forms fat.
Lipids have many uses among plants and animals. Their main use is for energy and for storing energy; when they store energy they form triglycerides, also known as fat. There are also many other uses, such as insulation and protection. Lipids are also used in making cell membranes, allowing the cell to maintain its shape by keeping water and water-soluble compounds from passing through freely. The waxy lipids are usually used to make protective coatings on the surfaces of plants and animals.
Since a lipid is an organic compound, it contains carbon. Lipids also contain hydrogen and oxygen, and some very complex ones contain phosphorus and/or nitrogen as well. Lipids are made by the dehydration synthesis of glycerol and fatty acids: three molecules of fatty acids combine with one molecule of glycerol, with water removed in the process. Lipids are large molecules, which means they store a lot of energy, about twice as much as sugar. This is because more energy goes into making them, so you get more out when they are broken down. The following is what a lipid would look like.
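As a minimal sketch of that dehydration synthesis (a generic reaction rather than any particular fat, with R standing for any fatty acid carbon chain), the formation of a triglyceride can be written as:

$$\mathrm{C_3H_5(OH)_3} \;+\; 3\,\mathrm{R{-}COOH} \;\longrightarrow\; \mathrm{C_3H_5(OOC{-}R)_3} \;+\; 3\,\mathrm{H_2O}$$

Here glycerol plus three fatty acids yields one triglyceride plus three molecules of water; each water molecule comes from an -OH of the glycerol combining with the -OH of a fatty acid's -COOH group, which is exactly the "taking water out" step described above.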
When lipids are made they can produce many different compounds. One of these is the phospholipids, which help make cell membranes and keep water out of them. Another very common lipid is cholesterol. Cholesterol is an extremely complex lipid that can build up on the inner walls of the arteries. If it builds up in large amounts it can cause high blood pressure and increase the risk of heart attacks. If this occurs, then you must change your diet and exercise to decrease the amount of cholesterol you have. Sometimes high cholesterol levels are passed on genetically, and you can't do much about it.
In this report I've covered the following information on lipids:
1. What the organic compound is like.
2. Where they are found.
3. What the animals and plants use them for.
4. What the chemical makeup is.
5. I've also given examples of types of compounds.
6. Explained what cholesterol is.
f:\12000 essays\sciences (985)\Biology\Lucid Dreaming.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Lucid Dreaming
Dreams are the playground of the mind. Anything can happen when one is dreaming. The only limitation is that we only rarely realize the freedoms granted to us in our dreams while we have them. Lucid dreaming is the ability to know when one is dreaming, and be able to influence what will be dreamt. A normal dream is much like passively watching a movie take place in your skull. In a lucid dream, the dreamer is the writer, director, and star of the movie. Lucid dreams are exceptionally interesting.
Lucid dreaming is defined as dreaming when the dreamer knows that they are dreaming. The term was coined during the 1910s by Frederik van Eeden who used the word "lucid" in the sense of mental clarity (Green, 1968). Lucidity usually begins in the midst of a dream, when the dreamer realizes that the experience is not occurring in physical reality, but is a dream. Often this realization is triggered by the dreamer noticing some impossible or unlikely occurrence in the dream, such as meeting a person who is dead, or flying with or without wings. Sometimes people become lucid without noticing any particular clue in the dream; they just suddenly realize that they are in a dream. A minority of lucid dreams (about 10 percent) are the result of returning to REM sleep directly from an awakening with unbroken reflective consciousness (LaBerge, 1985). These types of lucid dreams occur most often during daytime napping. If the napper has been REM deprived from a previous night of little sleep their chances of having a REM period at sleep onset are increased. If the napper is able to continue his or her train of thought up to the point of sleep, a lucid dream may develop due to an immediate REM period.
The basic definition of lucid dreaming requires nothing more than the dreamer becoming aware that they are dreaming. However, the quality of lucidity varies greatly. When lucidity is at a high level, the dreamer is aware that everything experienced in the dream is occurring in their mind, that there is no real danger, and that they are asleep in bed and will awaken eventually. With low-level lucidity they may be aware to a certain extent that they are dreaming, perhaps enough to fly, or alter what they are doing, but not enough to realize that the people in the dream are just figments of their imagination. They are also unaware that they can suffer no physical damage while in the dream or that they are actually in bed. Lucidity and control in dreams are not the same thing. It is possible to be lucid and have little control over dream content, and conversely, to have a great deal of control without being explicitly aware that one is dreaming.
Lucid dreams usually happen during REM sleep. Working at Stanford University, Dr. Stephen LaBerge proved this by eliciting deliberate eye movement signals given by lucid dreamers during their REM sleep. LaBerge's subjects slept in the laboratory, while the standard measures of sleep physiology (brain waves, muscle tone and eye movements) were recorded. As soon as they became lucid in a dream, they moved their eyes in large sweeping motions left-right-left-right, as far as possible. This left an unmistakable marker on the physiological record of the eye movements. Analysis of the records showed that in every case, the eye movements marking the times when the subjects realized they were dreaming occurred in the middle of unambiguous REM sleep. LaBerge has done several experiments on lucid dreaming using the eye-movement signaling method, demonstrating interesting connections between dreamed actions and physiological responses.
It has been debated if lucid dreaming interferes with the function of "normal" dreaming. According to one way of thinking, lucid dreaming is normal dreaming. The brain and body are in the same physiological state of REM sleep during lucid dreaming as they are during most ordinary non-lucid dreaming. In dreams the mind creates experiences out of currently active thoughts, concerns, memories and fantasies. Knowledge that a person is dreaming simply allows them to direct their dream along constructive or positive lines, much like they direct their thoughts when awake. Furthermore, lucid dreams can be even more informative about the self than non-lucid dreams, because one can observe the development of the dream out of one's feelings and tendencies, while being aware that one is dreaming and that the dream is coming from the self. The notion that dreams are unconscious processes that should remain so is false. Waking consciousness is always present in dreams. If it were not, we would not be able to remember our dreams, because one can only remember an event that has been consciously experienced. The added "consciousness" of lucid dreaming is nothing more than the awareness of being in the dream state.
The first thing that attracts people to lucid dreaming is often the potential for adventure and fantasy fulfillment. Flying is a favorite lucid dream delight, as is sex. Many people have said that their first lucid dream was the most wonderful experience of their lives. A large part of the extraordinary pleasure of lucid dreaming comes from the exhilarating feeling of utter freedom that accompanies the realization that one is in a dream, where there will be no social or physical consequences of one's actions.
Unfortunately for many people, instead of providing an outlet for unlimited fantasy and delight, dreams can be dreaded episodes of limitless terror. Lucid dreaming may well be the basis of the most effective therapy for nightmares. If one becomes aware that they are dreaming they can realize that in a dream nothing can harm them. There is no need to run from or fight with dream monsters. In fact, it is often pointless to try because the horror is in their own mind, which can pursue them wherever they dream themselves to be. The fear is real, but the danger is not. The only way to escape is to end the fear, for as long as they fear their dream, it is likely to return. Unreasonable fear can be defused by facing up to the source, or going through with the frightening activity, so that one can observe that no harm comes to them. In a nightmare, this act of courage can take any form that involves facing the "threat" rather than avoiding it. Monsters often transform into benign creatures, friends, or empty shells (Saint-Denys, 1867/1985) when courageously confronted in lucid dreams. This is an extremely empowering experience. It teaches in a very visceral manner that fear can be conquered.
Lucid dreaming can also help people achieve goals in their waking lives. There are many ways that individuals can use lucid dreams to prepare for some aspect of their waking activities. Some of these applications include: rehearsal (trying out new behaviors, or practicing them, and honing athletic skills), creative problem solving, artistic inspiration, overcoming sexual and social problems, coming to terms with the loss of loved ones, and physical healing. If the possibility of accelerated physical healing, suggested by anecdotes from lucid dreamers, is born out by research, it would become a tremendously important reason for developing lucid dreaming abilities.
The following is an excerpt from Dr. LaBerge's book entitled Lucid Dreaming. In it he gives advice on how to dream with lucidity.
There are several methods of inducing lucid dreams. The first step, regardless of method, is to develop your dream recall until you can remember at least one dream per night. Then, if you have a lucid dream you will remember it. You will also become very familiar with your dreams, making it easier to recognize them while they are happening. If you recall your dreams you can begin immediately with two simple techniques for stimulating lucid dreams. Lucid dreamers make a habit of "reality testing." This means investigating the environment to decide whether you are dreaming or awake. Ask yourself many times a day, "Could I be dreaming?" Then, test the stability of your current reality by reading some words, looking away and looking back while trying to will them to change. The instability of dreams is the easiest clue to use for distinguishing waking from dreaming. If the words change, you are dreaming. Taking naps is a way to greatly increase your chances of having lucid dreams. You have to sleep long enough in the nap to enter REM sleep. If you take the nap in the morning (after getting up earlier than usual), you are likely to enter REM sleep within a half-hour to an hour after you fall asleep. If you nap for 90 minutes to 2 hours you will have plenty of dreams and a higher probability of becoming lucid than in dreams you have during a normal night's sleep. Focus on your intention to recognize that you are dreaming as you fall asleep within the nap. (LaBerge, 1985)
External cues to help people attain lucidity in dreams have been the focus of Dr. Stephen LaBerge's research at the Lucidity Institute for several years. Using the results of laboratory studies, he has designed a portable device, called the DreamLight ($950), for this purpose. It monitors sleep and when it detects REM sleep it gives a cue (a flashing light) that enters the dream to remind the dreamer to become lucid. The light comes from a soft mask worn during sleep that also contains the sensing apparatus for determining when the sleeper is in REM sleep. A small custom computer connected to the mask by a cord decides when the wearer is in REM and when to flash the lights.
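To make the loop described above concrete, here is a minimal sketch in Python of a detect-REM-then-flash cycle of the kind the paragraph describes. This is not the DreamLight's actual software; the threshold values and the sensor and light functions are illustrative placeholders standing in for whatever hardware a real device would use.

import random
import time
from collections import deque

REM_ACTIVITY_THRESHOLD = 0.6   # assumed: average eye-movement activity treated as REM
WINDOW_SECONDS = 30            # assumed: how much recent activity to average
SAMPLE_INTERVAL = 1.0          # seconds between sensor readings
CUE_COOLDOWN = 300             # assumed: wait at least 5 minutes between light cues

def read_eye_movement_activity():
    """Placeholder for the mask's eye-movement sensor (0.0 = still, 1.0 = very active).
    A real device would read sensor hardware; here we simulate a value."""
    return random.random()

def flash_mask_leds():
    """Placeholder for flashing the lights built into the sleep mask."""
    print("flash")  # the cue that enters the dream as a reminder to become lucid

def run_cue_loop(max_samples=600):
    """Sample eye-movement activity, treat sustained high activity as a rough proxy
    for REM sleep, and deliver a light cue when REM is detected."""
    window = deque(maxlen=int(WINDOW_SECONDS / SAMPLE_INTERVAL))
    last_cue = float("-inf")
    for _ in range(max_samples):
        window.append(read_eye_movement_activity())
        in_rem = (len(window) == window.maxlen and
                  sum(window) / len(window) >= REM_ACTIVITY_THRESHOLD)
        if in_rem and time.time() - last_cue >= CUE_COOLDOWN:
            flash_mask_leds()
            last_cue = time.time()
        time.sleep(SAMPLE_INTERVAL)

if __name__ == "__main__":
    run_cue_loop()

The design point is simply that the cue must be gated on detected REM and rate-limited, so the flashing light reaches the dreamer during dreams without waking them repeatedly.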
The phenomenon of lucid dreaming has been looked into as a possible explanation of the out-of-body experience. In an out-of-body experience (or OBE) a person feels that they are separated from their body and are free to float or fly about. They feel as if they are perceiving the physical world from a location outside of their physical body. The OBE has also been linked with the near-death experience (or NDE), wherein a person who is at the brink of death has an OBE. The NDE, the OBE, and lucid dreaming all have the common element of being separated from the physical body. All include the sensation of flying combined with a feeling of freedom. In an attempt to explain the OBE and the NDE, lucid dreaming has come up as a reasonable theory. It is thought that in a half-awake and half-dreaming state a person dreams of leaving their body. OBEs are often elicited during deep meditation and relaxation, where it is reasonable to assume that during the trance a person could fall asleep and have a lucid dream which felt like an OBE but was just a dream. Further research into this area is certain to be done in the future.
Since we spend about 9% of our lives in the dream world, it would make sense to make the most of that time. By exploring that world with conscious awareness, one can see the inside of one's own head and actually watch one's thoughts being formed. A lucid dream has infinite possibilities; it can happen every night of one's life, and best of all it is totally free of charge.
References
Green, Celia (1968). Lucid Dreams. London: Hamish Hamilton.
LaBerge, Stephen (1985). Lucid Dreaming. New York: Ballantine Books.
LaBerge, Stephen, & Rheingold, Howard (1990). Exploring the World of Lucid Dreaming. New York: Ballantine Books.
f:\12000 essays\sciences (985)\Biology\Mad Cow Disease.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Bovine spongiform encephalopathy (BSE), better known as mad cow disease, is a relatively new disease. Most sources state that BSE first showed up in Great Britain in 1986 [Dealler p.5], but some say it popped up in 1985 [Greger p.1]; however, the official notification did not come until 21 June 1988 [Dealler stats. p.1]. Spongiform encephalopathies are invariably fatal neurodegenerative diseases, and there is no treatment and no cure for this disease [Greger p.1]. The recent scare over BSE has arisen because of the contraction of Creutzfeldt-Jakob disease (CJD: see Appendix B) in humans from eating beef products. Although there are many forms of spongiform encephalopathy affecting a wide range of animals, BSE has received the most attention because many people in the world consume beef and are afraid they might contract the disease from eating a burger at their favourite fast-food restaurant. In this essay I will discuss BSE and other forms of spongiform encephalopathy, how the disease affects the animal, what causes the animals to contract the disease, and the recent issues surrounding BSE in the world. I hope to set out the true facts about BSE and to show that it affects only a small percentage of the world's population. Because BSE is a new disease, much of my information might be proven wrong in the future; there is a great deal of testing going on in the scientific community, which is very concerned about this new disease and the effects it could have on humans if it is not stopped.
Bovine spongiform encephalopathy is not a bacterium and it is not a virus; it is in fact an infectious protein, or prion [Greger p.2]. Before I go into more detail, I would like to discuss what a prion is. A prion is composed solely of protein and lacks genetic material in the form of nucleic acids. Prions are the tiniest infectious agents known; they can only be viewed under the strongest of electron microscopes [see appendix A]. Most scientists are puzzled because nucleic acid is the basic reproductive material needed in all other life forms [Britannica vol.9 p. 978]. Because of their unique makeup, prions are practically invulnerable. They can survive for years in the soil. Chemical disinfectants, weak acids, DNAase, RNAase, proteinases [Dealler p.8], ultraviolet light, ionising radiation, heat, formaldehyde sterilization, and chemicals that react with DNA [Greger p.2] all have little effect on the infectivity of the prion. Only marinating your hamburger in Drain-O would make your burger safe to eat [Greger p.2].
BSE is a slowly progressing degenerative disease affecting the central nervous system of cattle. Like most of the other spongiform encephalopathies, it evokes no immune response and consequently accumulates slowly over an incubation period of up to 30 years. You cannot detect the agents, purify them, nor isolate them [Greger p.2]. One of the main issues facing most farmers is how to know whether a cow has BSE. Cattle affected by BSE develop a progressive degeneration of the nervous system. Affected animals may display changes in temperament, such as nervousness or aggression; abnormal posture; incoordination and difficulty in rising; decreased milk production; or loss of body condition despite continued appetite [Kent p.10]. However, it has been noted that the signs in American cows are much different: they instead stagger to their death like downer cows do. "A downer cow" is the industry term describing a cow that falls down and is too sick to get up [Greger p.4]. There is no treatment, so all affected cattle die. The incubation period ranges from two to eight years [Hodgson p.2]. Following the onset of clinical signs, the animal's condition deteriorates until it dies or is destroyed; this usually takes from two weeks to six months. Most cases in Great Britain have occurred in dairy cows between three and five years of age [Dealler Bio p.7]. The brain, spinal cord, and retina from naturally infected animals have been found to be infective, and the lower ileum (intestine) from experimentally inoculated cattle has also been found to be infective [Varner p.3].
Great Britain is the site where the major problem of BSE started. The increase of BSE in the UK was mostly due to the fact that farmers were feeding their cattle a bovine food which included parts of dead sheep that had scrapie [see Appendix B.]and also the offal [see Appendix B] of dead cows that carry the BSE disease. This method of preparing the bovine food started in 1980, in order to be protein concentrated which in return made the cows increase their milk yield. Most people did not know BSE could be transmitted through the food derived from dead sheep and cattle. Because the normal incubation period for a cow is 2-8 years, most of the BSE infected cattle did not start to show signs until sixth and seventh year. Due to the fact that a very small amount of the cows that were infected with BSE showed the symptoms early in the 1980's , they were not detected as having BSE. Most of these cows were then recycled into bovine food, which was then feed to more cattle and more cattle became infected. It was
not until July of 1988 that the feed manufacturers were issued a warning to stop the production of bovine feed containing offal from BSE-infected cattle [Dealler pg.2]. And it was not until 25 September 1990 that specified bovine offal was banned from the feed of all species [Dealler stats. p.1].
In 1987 the British Government stated that BSE could not be transmitted to any other species because it was the same as scrapie [Greger p.1]. They were proven wrong: within a few weeks a cat died of a hitherto unknown feline spongiform encephalopathy contracted from infected cat food. This caused widespread worry in Britain and fear that BSE would spread into the human population. Nursing homes, hospitals, and more than 2000 schools stopped serving beef or restricted its consumption to a minimum [Greger p.1]. With this, the price of poultry shot up 12 percent and beef prices dropped 10-25 percent [Greger pg.1], devastating the cattle industry. The number of cases started to rise: in 1990 it was 300 cases per week, and by 1993 it was up to 800 cases per week [Dealler bio pp. 2-3]. It was in 1993 that the greatest number of BSE cases was reported in Great Britain, with 36,533 (see Appendix A).
It has been estimated that 1.8 million infected cattle will have been eaten before the year 2001, and that is if there are no cases after 1991 [Patterson p.265]. Since there is not enough information on how BSE can transfer from one cow to another, the number of infectious cattle eaten by the year 2001 might be as high as eight million [Kent p.10]. If the ban on affected cattle feed had been applied one year earlier, the number of affected cattle would have been less than half the number of cows that are now affected [Dealler issues p.4].
One of the biggest scares came in 1994, when a 16-year-old girl from North Wales was reported to be dying from CJD contracted from eating a BSE-infected beef product. In 1995 a farmer died and another farmer was dying from CJD; both came from farms that had BSE-affected herds [Dealler bio p.4]. What puzzles scientists is that the people affected are all under the age of 40, whereas the average age at which people generally contract CJD is 57 years [Dealler p.3]. To date there has been no scientific proof that BSE can cause CJD in humans, but there is a great deal of circumstantial evidence pointing to BSE as the cause of the new form of CJD that is infecting people in Britain [Varner p.4].
The latest outbreak has made the British Government issue an order that all cows born before 1994 (about six million cows) will have to be killed and burned [Cox]. With Great Britain having more than 98% of the cases worldwide, you can see that they have the biggest worry. Canada has experienced only one case of BSE. The cow was imported from Great Britain in 1987 and was not diagnosed with BSE until six years later. The Canadian government took extraordinary measures to deal with the risk of BSE. These included the destruction of the entire herd containing the BSE-infected cow and the trace-back and elimination of all other cattle imported to Canada from Great Britain since 1982. They also incinerated all of the carcasses of the dead cattle.
Although I learnt a great deal about BSE while completing this essay, the facts today may not be true tomorrow. BSE is such a new disease that no one knows for sure what the future will bring. Some scientists even feel this may be a catastrophic disease. Dr. Richard Lacey, a microbiologist from England, was quoted as saying "BSE is much more serious than AIDS" [Greger p.1], meaning there could be millions of people infected before the transmission of the disease is stopped. We can only hope the research and drastic measures being taken will stop this disease before many more humans and cattle die.
Appendix B
Scrapie - A naturally occurring disease of sheep found in many parts of the world, but not everywhere. It has been known for more than 200 years and is thought to have started in Spain. Sheep inoculated with scrapie-infected tissue have a short incubation period, possibly as low as two months. Scrapie cannot be transmitted to humans [Dealler pg.1-2].
Creutzfeldt-Jakob Disease (CJD) - First described in 1920, when it was known as 'spastic pseudosclerosis' or 'subacute spongiform encephalopathy'. The illness exists throughout the world and has an annual incidence of approximately one case per million of the population [Dealler pg.2]. The average age in a typical CJD case is 56 years, and only seven cases between 18 and 29 years old have been reported. The symptoms start with changes in sleeping and eating patterns and progress over a few weeks to a clearly neurological syndrome. The disease progresses with deterioration in cerebral and cerebellar function to a condition in which most neurological activity is decreased, sensory and visual function decays, and the patient dies, possibly after a decrease in lower motor neurological function and seizures [Dealler pg.3].
Offal - Any of various nonmuscular parts of the carcasses of beef or veal, mutton and lamb, and pork, which are either consumed directly as food or used in the production of other foods. Beef offal includes the stomachs, tripe (or large stomach), brain, heart, liver, tongue, and kidneys [Britannica vol.8 p.881].
Works Cited
Greger, Michael. "Mad Cow Disease" (March 1996): 9 pp. Internet. 5 April 1996.
Available: http://envirolink.org/arrs/AnimaLife/spring94/madcow.html
Dealler, Steve. "Biology of BSE"(April 1996): 10 pp. Internet. 5 April 1996.
Available: http://www.airtime.co.uk/bse/tse.htm
Dealler, Steve. "BSE statistics" (April 1996): 12 pp. Internet. 10 May 1996.
Available: http://www.airtime.co.uk/bse/statb.htm
Dealler, Steve. "History of BSE" (April 1996):4 pp. Internet. 5 April 1996.
Available: http://www.airtime.co.uk/bse/hist.htm
Dealler, Steve. "Publications and abstracts recently in print" (April 1996): 12 pp. Internet. 5 April 1996. Available: http://www.airtime.co.uk/bse/news2.htm
"National Institute of Animal Health" (15 May 1996): 4 pp. Internet. 20 April 1996.
Available: http://ss.niah.affrc.go.jp/bse/bse.html
Prion. (1991). Encyclopedia Britannica: Micropaedia.
Offal. (1991). Encyclopedia Britannica: Micropaedia.
Agriculture and Agri-Food Canada. (1996) Bovine Spongiform Encephalopathy: Factsheet
Government of Canada.
Kent, John. (1995). British Food Journal (vol. 97) (pp 3-18)
Hodgson, Barry. (1990, July). What is BSE. Scientific American, p.34.
Patterson, W. J. (1995). Public Health Medicine (vol. 17 num. 3) (pp.261-268)
Cox, Wendy. (1996, April 29). The fear of Mad Cow Disease. London Free Press, p. A6.
Varner, Tim. The Economist Newspaper Limited (March 3, 1996): 4 pp. Internet. 4 May 1996.
Available: http://www.economist.com/issue/30-03-96/sf1.html
f:\12000 essays\sciences (985)\Biology\Malaria.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Malaria
Malaria parasites have been with us since the beginning of time, and fossils of mosquitoes up to thirty million years old show that malaria's vector has existed for just as long. The parasites causing malaria are highly specific, with man as the only host and mosquitoes as the only vector. Every year, 300,000,000 people are affected by malaria, and while less than one percent of these people die, there are still an estimated 1,500,000 deaths per year. While malaria was one of the first infectious diseases to be treated successfully with a drug, scientists are still looking for a cure, or at least a vaccination, today (Cann, 1996). Though many people are aware that malaria is a disease, they are unaware that it is life threatening, kills over a million people each year, and is a very elusive target for antimalarial drugs (Treatment of Malaria, 1996).
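As a quick check on these figures, a minimal sketch in Python (using only the case and death counts quoted above) confirms that 1,500,000 deaths out of 300,000,000 cases works out to a fatality rate of 0.5 percent, which is indeed less than one percent:

    cases_per_year = 300_000_000   # people affected by malaria each year (figure quoted above)
    deaths_per_year = 1_500_000    # estimated deaths per year (figure quoted above)

    fatality_rate = deaths_per_year / cases_per_year
    print(f"Case fatality rate: {fatality_rate:.1%}")   # prints 0.5%, i.e. less than one percent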
Being a very specific disease, malaria is caused by only four protozoal parasites: Plasmodium falciparum, Plasmodium vivax, Plasmodium ovale, and Plasmodium malariae. Not only is the disease specific, but the parasites are too, with only 60 of 380 species of female Anopheles mosquitoes acting as vectors. With the exception of Plasmodium malariae, which may also affect other primates, all parasites of malaria have only one host, Homo sapiens. Because some mosquitoes contain substances toxic to Plasmodium in their cells, not all species of mosquitoes are vectors of Plasmodium. Although very specific, malaria still disrupts the lives of over three hundred million people worldwide each year (Cann, 1996).
The life cycle of the parasite causing malaria exists between two organisms, humans and the Anopheles mosquito. When a female mosquito bites a human, she injects an anticoagulant saliva which keeps the human bleeding and ensures an even-flowing meal for her. When the vector injects her saliva into the human, she also injects ten percent of her sporozoite load. Once in the bloodstream, the Plasmodium travel to the liver and reproduce by asexual reproduction. These liver cells then burst, releasing the parasites back into the bloodstream, where they enter red blood cells. Here, the Plasmodium feed on hemoglobin and reproduce again by asexual reproduction. Afterwards, the red blood cells burst and release the parasites. Some of the parasites released from red blood cells may be able to replicate by sexual reproduction. When the host is bitten by a mosquito again, infected blood enters the mosquito. Here, sexual forms of the parasite develop in the stomach of the Anopheles mosquito, completing the parasite's life cycle (Herman, 1996).
People infected with malaria have several symptoms, including fever, chills, headaches, weakness, and an enlarged spleen (Herman, 1996). The amount of time for symptoms to appear differs depending on the form of the parasite. Those infected with Plasmodium falciparum experience symptoms after about twenty-four hours, those infected with Plasmodium vivax and Plasmodium ovale produce symptoms after a forty-eight hour interval, and after seventy-two hours Plasmodium malariae begins causing fever and chills (Cann, 1996).
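To keep the four species and their symptom intervals straight, the figures quoted above (Cann, 1996) can be arranged as a simple lookup table; the following minimal Python sketch is illustrative only:

    # Approximate hours before fever and chills appear, per species,
    # using the intervals quoted in the paragraph above (Cann, 1996).
    symptom_interval_hours = {
        "Plasmodium falciparum": 24,
        "Plasmodium vivax": 48,
        "Plasmodium ovale": 48,
        "Plasmodium malariae": 72,
    }

    for species, hours in symptom_interval_hours.items():
        print(f"{species}: symptoms after about {hours} hours")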
Most malaria cases seem to cluster in the tropical climate areas extending into the subtropics, and malaria is especially endemic in Africa. In 1990, eighty percent of all reported cases were in Africa, while the remainder of most cases came from nine countries: India, Brazil, Afghanistan, Sri Lanka, Thailand, Indonesia, Vietnam, Cambodia, and China. Globally, the disease circulates in almost one hundred countries, causing up to 1,500,000 deaths annually (Cann, 1996).
Because there is no definite cure for malaria, scientists are trying their hardest to contain the parasite to where it now exists. The range of a vector from a suitable habitat is fortunately limited to a maximum of two miles (Cann, 1996). If this were the only factor, scientists would have no problem containing the disease. Humans migrate, however, and over time the disease has slowly spread throughout the tropics. Major problems also arise when unwitting tourists to Africa transfer the parasite to non-malarious areas (Graham, 1996). Biologists are also using control measures such as spraying DDT to kill mosquitoes, draining stagnant water, and promoting the widespread use of nets to contain the mosquito itself (Herman, 1996). Because of the worsening situation, the World Health Organization (WHO) declared malaria control to be a global priority (Limited Imagination, 1996).
While limiting the spread of malaria is not easy, finding a cure has presented several problems of its own in recent years. One main reason finding a cure for malaria is so hard is that different strains in different parts of the world require different drugs, all of which soon lose their effectiveness as the parasite evolves resistance to them (Limited Imagination, 1996). Secondly, once the parasite enters the human bloodstream, it changes form several times inside the body, making it an elusive target for the immune system (Cann, 1996). Last, while research and development is very expensive, Africa's third-world countries do not have the money to support such research (Graham, 1996).
Research in the field of malaria's microbiology enables a search for better vaccines and a possible cure for malaria (Atovaquone, 1996). In the past several decades, scientists have developed many drugs that have all fallen victim to the resistance of the Plasmodium parasites. Such drugs include chloroquine, pyrimethamine, chloroguanide, desipramine, halofantrine, mefloquine, and arteether (Herman, 1996). Scientists too often find their drugs' effectiveness wearing off as malarial parasites build tolerance to them (Graham, 1996).
Several drugs used to treat the disease have been around for centuries. One such drug is quinine, a compound extracted from the bark of the cinchona tree. This drug was a secret of the locals of the Amazon jungle for centuries until European missionaries learned of its use. The trouble remains that quinine is expensive to harvest, is extremely hard to synthesize, and fails to prevent relapses (Limited Imagination, 1996). Another unique treatment of malaria is the use of the herb Artemisia annua. This herb has been used for centuries in traditional Chinese medicine to treat malaria and fever. Neither of these drugs is one hundred percent effective (Herman, 1996).
While the need for malaria vaccines grows urgent, so does the number of people affected each year. Although it is caused by a highly specific parasite, malaria still kills between one and two million people annually. As the Plasmodium parasites mutate more and more to resist the effect of antimalarials, it becomes harder for scientists to find a cure (Treatment of Malaria, 1996). Over forty percent of the world's population is still at risk from this deadly disease and is yearning for a cheap, effective vaccine (Cann, 1996).
Bibliography
Cann, Alan J., PhD. "The Walter and Eliza Hall Institute Malaria Database", 1996, http://www.wehi.edu.au/biology/malaria/who.html.
Graham, David, "Malaria-Proof Mosquitoes," Technology Review, October 1996, Vol. 99, Issue 7, p20-22, MAS FullTEXT ELITE, Nancy Guinn Library.
Herman, Robert, "Malaria," New Groliers Multimedia Encyclopedia, Copyright 1996.
"Atovaquone and Proguanil for Plasmodium Falciparum Malaria," Lancet, June 1, 1996, Vol. 347, Issue 9014, p1511-1515, MAS FullTEXT ELITE, Nancy Guinn Library.
"Limited Imagination," Economist, September 28, 1996, Vol. 340, Issue 7985, p80-82, MAS FullTEXT ELITE, Nancy Guinn Library.
"Treatment of Malaria," New England Journal of Medicine, September 12, 1996, Vol. 335, Issue 11, p800-807, MAS FullTEXT ELITE, Nancy Guinn Library.
f:\12000 essays\sciences (985)\Biology\Male Circumcision A social and Medical Misconception.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Male Circumcision: A Social and Medical Misconception
Johns Hopkins University
Running head: MALE CIRCUMCISION: A SOCIAL
Introduction
Male circumcision is defined as a surgical procedure in which the prepuce of the penis is separated from the glans and excised. (Mosby, 1986) Dating as far back as 2800 BC, circumcision has been performed as a part of religious ceremony, as a puberty or premarital rite, as a disciplinary measure, as a reprieve against the toxic effects of vaginal blood, and as a mark of slavery. (Milos & Macris, 1992) In the United States, advocacy of circumcision was perpetuated amid the Victorian belief that circumcision served as a remedy against the ills of masturbation and systemic disease. (Lund, 1990) The scientific community further reinforced these beliefs by reporting the incidence of hygiene-related urogenital disorders to be higher in uncircumcised men.
Circumcision is now a societal norm in the United States. Routine circumcision is the most widely practiced pediatric surgery and an estimated one to one-and-a-half million newborns, or 80 to 90 percent of the population, are circumcised. (Lund, 1990) Despite these statistics, circumcision still remains a topic of great debate. The medical community is examining the need for a surgical procedure that is historically based on religious and cultural doctrine and not of medical necessity. Possible complications of circumcision include hemorrhage, infection, surgical trauma, and pain. (Gelbaum, 1992) Unless absolute medical indications exist, why should male infants be exposed to these risks? In essence, our society has perpetuated an unnecessary surgical procedure that permanently alters a normal, healthy body part.
This paper examines the literature surrounding the debate over circumcision, delineates the flaws that exist in the research, and discusses the nurse's role in the circumcision debate.
Review of Literature
Many studies performed worldwide suggest a relationship between lack of circumcision and urinary tract infection (UTI). In 1982, Ginsberg and McCracken described a case series of infants five days to eight months of age hospitalized with UTI. (Thompson, 1990) Of the total infant population hospitalized with UTI, sixty-two were males and only three were circumcised. (Thompson, 1990) Based on this information, the researchers speculated that "the uncircumcised male has an increased susceptibility to UTI." Subsequently, Wiswell and associates from Brooke Army Hospital released a series of papers based upon a retrospective cohort study design of children hospitalized with UTI in the first year of life. The authors' conclusions suggest a 10- to 20-fold increase in risk for UTI in the uncircumcised male in the first year of life. (Thompson, 1990) However, Thompson (1990) reports that in these studies the analysis of the data was very crude and there were no controls for the variables of age, race, education level, or income. The statistical findings from further studies are equally misleading. In 1986, Wiswell and Roscelli reported an increase in the number of UTIs as the circumcision rate declined. By leaving out "aberrant data", the results of the study are again very misleading. In 1989, Herzog from Boston Children's Hospital reported on a retrospective case-control study on the relationship between the incidence of UTI and circumcision in the male infant under one year of age. Here too, the results were not adjusted to account for the variables of age, ethnicity, and drop-out rate of the participants. It is obvious that this research is statistically weak and should not be the criteria on which to decide for or against neonatal circumcision.
Lund (1990) reports that a study conducted by Parker and associates estimates the relative risk of uncircumcised males to be double that of circumcised males for acquiring herpes genitalis, candidiasis, gonorrhea, and syphilis. Simonsen and coworkers performed a case-control study on 340 men in Kenya, Africa in an attempt to explain the different pattern for acquired immune deficiency syndrome (AIDS) virus in Africa as compared to the United States. (Thompson, 1990) The authors conclude that the relative risk for AIDS was higher for uncircumcised men. Results from similar studies in the United States remain conflicting. Although most of the existing studies do associate a relationship between the incidence of venereal disease and circumcision, the American Academy of Pediatrics found existing reports inconclusive and conflicting in results. (Lund, 1990) There is an overwhelming incidence of STD and AIDS in the United States, where a majority of the men are circumcised.
It is imperative that we look at ways of altering our risk of exposure to these agents rather than at altering the sexual anatomy of the healthy male. These disease states are caused by specific pathogens and high-risk behavior, not by the uncircumcised penis.
Clinical research clearly supports the idea that circumcision performed in the neonate has many characteristics associated with pain. There is an increase in heart rate, crying, blood pressure, and in serum cortisol levels. (Myron & Maguire, 1991) Researchers are also in agreement that the neural pathways for pain perception are present in the newborn and that the intraneuronal distances in infants compensate for the incomplete myelinization of the nerve. (Myron & Maguire, 1991) Although the use of a local anesthetic may reduce the neonatal physiologic response to pain, this has not become a routine procedure for most physicians. Beliefs that the risks outweigh the benefits, that anesthesia produces additional pain, and that the immature neuroanatomy of the neonate renders a minimal pain response help to explain why physicians do not administer anesthesia during circumcision. (Myron & Maguire, 1991)
Thompson (1990) reports that the exact incidence of post-operative complication remains unknown. Errors such as the removal of too much or too little skin, formation of skin bridges or chordee, urethrocutaneous fistula, and necrosis of the glans or the entire penis can occur following circumcision. The reported incidence of excessive bleeding ranges from 0.1% to as high as 35%. (Snyder, 1991) Infection can also occur, resulting in staphylococcal scalded skin syndrome, gangrene, generalized sepsis, or meningitis. (Snyder, 1991) Almost all of these complications can be avoided in practice. However, many problems are due to the fact that circumcision is viewed as a minor surgery and is often delegated to the new physician with little direct supervision or prior instruction. Snyder (1991) refers to the Wiswell study on the risks of circumcision: the total complication rate after circumcision was 0.19%, while the risk of severe complications following noncircumcision remained extremely low, at 0.019%. (Snyder, 1991)
Assuming that circumcision is not performed in such a meticulous manner worldwide, it is possible that the risks of circumcision are far greater than the current research in this country suggests.
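As a rough comparison of the two Wiswell figures quoted above, a minimal Python sketch (using only the 0.19% and 0.019% rates reported by Snyder, 1991) shows that the quoted complication rate after circumcision is about ten times the quoted rate of severe complications following noncircumcision:

    # Rates quoted from the Wiswell study as cited by Snyder (1991).
    complication_rate_circumcised = 0.19 / 100    # total complications after circumcision
    severe_rate_uncircumcised = 0.019 / 100       # severe complications following noncircumcision

    ratio = complication_rate_circumcised / severe_rate_uncircumcised
    print(f"Complications after circumcision: about {ratio:.0f}x the quoted noncircumcision rate")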
Discussion
Clinical evidence cited from the literature confirms that circumcision in the neonate can result in unnecessary trauma and pain. There is no unequivocal proof that lack of circumcision is directly related to the incidence of UTI and STDs. Despite these facts, circumcision is still performed as a routine procedure.
As stated in the American Nurses' Association (ANA) Code of Ethics (1985), nurses are required to have knowledge relevant to the current scope of nursing practice, changing issues and concerns, and ethical concepts and principles. It is the responsibility of the nurse to educate and provide the patient with choices. As health care professionals, we are responsible for providing unbiased counseling. Nurses must disregard their own personal biases when discussing circumcision with the patient. According to the doctrine of informed consent, we must present all of the known facts to the patient. The patient needs to be informed that circumcision is an elective surgery, and to the best of their ability the nurse must present what constitutes the benefits, risks, and alternatives available. (Gelbaum, 1992)
According to the ANA Standards of Clinical Nursing Practice, (1991) the nurse shares knowledge with colleagues and acts as a client advocate. Therefore, it is imperative in light of the current research that the nurse disclose these findings to associates in the health care profession and continue to lobby against the use of unnecessary surgical interventions in the neonate.
Summary
In summary, there is no statistical evidence in the
literature that circumcision is directly related to a decrease in
urinary tract infection, sexually transmitted disease, or AIDS
in this country. There is evidence that circumcision evokes a pain
response and carries the post-operative risks of infection,
trauma, and deformity. Although circumcision is widely performed
within our medical community, it still cannot be recommended
without undeniable proof of benefit to the patient. According to
the ANA, it is the nurse's responsibility to read the literature,
obtain the facts, and share their knowledge with patients and
colleagues.
Conclusion
Circumcision evolved out of a cultural and religious ritual and has been maintained over the decades despite the risks associated with this nonessential, surgical procedure. The current literature does not reveal a need for circumcision in the neonate. However, circumcision in the male neonate will continue to be a topic of wide debate until the risks can be shown, without a doubt, to outweigh the benefits. Circumcision has truly become a social norm in our country that the medical community attempts to justify with weak and inaccurate research.
According to the ANA, it is not the role of the nurse to decide for the parent on the need for circumcision in the infant. Rather, it is the nurse's role to present all of the information in an unbiased manner and remain an advocate of the rights of the patient. Nurses need to realistically analyze the data available and decide whether they truly are advocates, or are merely following in the steps of their colleagues.
References
American Nurses Association (1991). Standards of clinical nursing
practice. Washington, D.C.: American Nurses Association.
Gelbaum, I. (1992). Circumcision to educate not indoctrinate-a
mandate for certified nurse-midwives. Journal of Nurse-
f:\12000 essays\sciences (985)\Biology\MANATEES.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MANATEE
The manatee, popularly called the sea cow, is any of
the species of large water animals in the genus
Trichechus. There are three species of manatee: T.
inunguis, found in the Amazon and Orinoco river systems;
T. manatus, found in central Florida and along the
Gulf of Mexico and Caribbean coasts; and T. senegalensis,
found in the rivers of tropical West Africa. A manatee
is a slow-moving, seal-shaped mammal that lives in
shallow coastal waters where plant life grows richly. It
is at home in salt or fresh water but rarely strays
far from home.
A manatee is a grayish-black, stout, thick-skinned
animal and is almost hairless. Its corpulent body tapers
to a horizontally flattened, round tail. The fore
limbs are set close to its head and are used to push
algae, such as seaweed and other water plants, toward
its mouth. Manatees have a small head, with a straight
snout and a cleft upper lip with bristly hairs. Adults
can grow up to 15 ft (4.6 meters), but they usually only
grow to about 10 feet. They weigh an average of 1300
pounds.
Manatees live in small family groups, sometimes in
herds of up to 15-20. After a gestation of up to 6
months, usually a single pinkish calf is born.
Manatees frequently communicate by muzzle-to-muzzle
contact, and when alarmed they emit chirpy squeaks.
The number of manatees has been reduced over the
past several years due to heavy hunting for their
hides, meat, and blubber oil. Some governments,
including the United States, have placed the manatee
on the endangered species list. One practical
reason for this is that they have proved useful in
clearing irrigation and transport channels clogged with
aquatic plant life. There has also been an increase in
manatee deaths due to passing boats that speed through
channels.
If we do not all help protect these sea cows today,
then they will not be around for future generations to
enjoy. Everyone must do their part in protecting these
mammals of the ocean. If we do not help save this
dying species, who will?
f:\12000 essays\sciences (985)\Biology\Melatonin and the Pineal Gland.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Set deep in our brains is a tiny gland called the pineal gland. This tiny gland is in charge of the endocrine system, the glandular system that controls most of our bodily functions. The pineal runs our 'body clocks', and it produces melatonin, the hormone that may prove to be the biggest medical discovery since penicillin, and the key to controlling the aging process. The pineal gland controls such functions as our sleeping cycle and the change of body temperature that we undergo with the changing seasons. It tells animals when to migrate north and south, and when to grow or shed heavy coats. By slowing down and speeding up their metabolisms, it tells them when to fatten up for hibernation, and when to wake up from hibernation in the spring.
Melatonin is the hormone that controls not only when we feel sleepy, but the rate at which we age, when we go through puberty, and how well our immune systems fend off diseases. Being set in the middle of our brains, the pineal gland has no direct access to sunlight. Our eyes send it a message of how much sunlight they see, and when it's dark. The sunlight prohibits the gland from producing melatonin, so at night, when there's no sun, the sleep-inducing hormone is released into our bodies. Because of the pineal gland and melatonin, humans have known to sleep at night and wake during the day since long before the age of alarm clocks.
Humans don't produce melatonin right from birth; it is transferred in utero to babies through the placenta. For their first few days of life, babies still have to receive it from breast milk. Our levels of melatonin peak during childhood, then decrease at the beginning of puberty, so that other hormones can take control of our bodies. As we get older, the amount of melatonin we produce continues to decrease until, at age 60, we produce about half as much as we did at age 20. With the rapid decrease from about age 50 on, the effects of old age quickly become more visible and physically evident. With what scientists have recently discovered, we may very soon be able to harness melatonin to slow down aging, fend off disease, and keep us feeling generally healthy and energetic; not to mention the things melatonin can do for us right now, like curing insomnia and regulating sleeping patterns, eliminating the effects of jet-lag, and relieving everyday stress.
Melatonin is known as the "regulator of regulators", because it sends out the messages that control the amounts of all the different hormones in our bodies. It is a balance among our different hormones that keeps us healthy, and as we age, our different hormone levels can become unbalanced, which results in aging.
Everything our bodies do requires energy, from running a mile to sitting still and just breathing. Every cell in our bodies requires at least some energy to function. Within all of our cells are microscopic structures called mitochondria. Mitochondria are considered the powerhouses of the cells, because they convert energy into ATP, the substance which fuels almost every cell in our body. In order to create ATP, we need to take in and 'burn' oxygen. As we age, our mitochondria age, and as our mitochondria age, their production of ATP slows, which results in the buildup of excess oxygen. This buildup results in the oxidization (or rusting) of the cells and their different components. This is why, when we're older, we don't have as much energy as when we're young. Here's where melatonin steps in. Melatonin metabolizes the thyroid hormone (which supplies energy to the mitochondria, among other cell organelles) so that it carries more energy. When the mitochondria receive more power from the thyroid hormone, they can produce more ATP, giving more energy to every cell in our bodies, and they use up all of the oxygen that we take in, so that our cells don't begin to oxidize.
There are mitochondria in the cells of the pineal gland, which give it the power to produce and secrete melatonin. Pineal function declines as its cells' mitochondria provide it with less ATP, and instead start to produce calcium salt, which calcifies the gland. Calcification is the hardening of the gland (with calcium deposits) which hinders its performance. Once the pineal gland begins to function less perfectly, the production of energy for the entire body is thrown off. Therefore, with age comes less energy, which leads to less melatonin, which leads to less energy and more leftover oxygen, which causes aging. To stop this vicious cycle from beginning, one must only take enough of a dose of melatonin to keep the levels of all the involved hormones where they are when we are young.
That only touches on the surface of what regulated melatonin levels can achieve. The calcification that adversely affects the pineal gland happens elsewhere in the body as the mitochondria in the various types of cells slow down. For example, calcium deposits in the blood vessels lead to hardening of the arteries, which can eventually lead to a stroke or heart attack. These same kinds of calcium deposits are also found in such organs as the heart and brain, and can lead to other complications. The reason that children aren't afflicted with these conditions is that levels of melatonin in the human body are at their peak during our childhood.
To sum up, when the pineal can no longer do its job, it results in the breakdown of mitochondria throughout the body, the powerhouses of the cells that regulate energy. When the mitochondria break down, this causes a chain reaction throughout the body that leads to the eventual collapse of all other organ systems. This collapse is what defines aging to us, and melatonin is the tool we can use to prevent it, or at least put it off a while longer.
It is also being said that melatonin is an effective weapon against disease, and can strengthen our immune systems. Part of this is simply logical reasoning when the effects of melatonin on aging are taken into consideration. It is a decline in the functions of our vital organs that leads to many of the diseases known to man. Therefore, when the aging of our individual organs is hindered, as described in the first part of this paper, the diseases that often accompany that aging will no longer be able to do so. Melatonin will also affect various afflictions in the same way as it would affect atherosclerosis (hardening of the arteries); with melatonin levels increased, the excess calcium salt that can cause so many problems is no longer present to cause them.
The way in which melatonin affects our actual immune systems is slightly more complex. One main cell of the immune system is the white blood cell. One type of white blood cell is a lymphocyte, and one type of lymphocyte is known as a T cell. T cells are responsible not only for protecting cells against viruses and bacteria, but also for ferreting out possible trouble-making agents within our bloodstream. These cells have to be very finely tuned so that they don't attack any of the helpful cells or materials in our bodies. It would be disastrous if our immune system started to kill the cells that make up the tissue of our various organs, or if it attacked the nutrients we derive from the food we eat. This sometimes happens; disorders like this are known as autoimmune diseases.
The reason for autoimmune diseases, and for the greater frequency and severity of illnesses in older people, is the aging of the immune system. Certain T cells have memories, which is why many times after a person has had a particular infection, they are often immune when later exposed to the bacteria that caused the original infection. The main effect of aging on the immune system is that our T cells can no longer remember what cells are harmful to us, and can no longer distinguish our body's cells from harmful invading ones. We have suppressor cells, to stop attacks on our own bodies that our immune systems may mistakenly launch. However, when we age, our suppressor cells can fail to work well or at all.
As it has been demonstrated, it is age itself that leads to most of the afflictions about which I've written, and it has been described how melatonin can slow the aging process and its effects. In this same way, it can keep our T cells and the other various parts of our immune system working at a peak physical (youthful) level. With our immune systems working as efficiently at the age of 50 as they did when we were 10, the illnesses associated with old age will seldom be of concern to us, thanks to melatonin.
Besides helping us to live longer and to fend off diseases better, melatonin supplements can help with more commonplace things like stress, jet-lag, and everyday fatigue. Stress isn't just an abstract idea caused by bad feelings; it's indirectly created by chemical reactions within our bodies, as a result of perfectly normal situations. Humans have basic survival instincts that we've had since the beginning of mankind. When faced with a threatening situation, we have a 'fight or flight' urge: the urge to react either offensively or defensively to the threat. What happens is that our nervous system stimulates our adrenal glands, which produce adrenaline, which causes our metabolism to speed up, our muscles to tense, and our heart to beat faster, which often causes us to become hot or start to sweat, and often to produce excess stomach acid. In this day and age, however, we can't always release this tension the way that our body may intend us to. For example, in a threatening confrontation with a teacher, I could neither punch my teacher nor run away from him. Therefore, the hormones floating around in my body making me all excited and wanting to react don't achieve their objective and remain in my body. This is how stress occurs, on both a chemical and emotional level; our impulses are not able to be acted on, and unresolved they do us harm. Melatonin neutralizes adrenaline and other 'excitement' hormones, thereby calming us down. That means no tense knots of muscle from an unrelieved situation, and no excess stomach acid creating an ulcer in my stomach.
With regard to jet-lag and energy loss, the answer is closely related to getting more and better sleep. Jet-lag occurs when our body clocks are slow to readjust in a new time zone; the clock on the wall tells us that it's a different time than our bodies think it is. I've already explained how melatonin causes us to sleep, so when you're on the plane to wherever you're going, or maybe the day before you leave, you take enough melatonin so that your body thinks it's time to go to sleep according to what time it is wherever you're traveling to. That way, when you get there, you're already on the same schedule as the people there.
Once again, not having enough energy to make it through a normal day is often the result of not having had a good enough sleep the night before. Melatonin helps us to sleep more soundly, therefore eliminating this problem, so long as we leave time so that we can get as much sleep as we need.
Melatonin is a hormone secreted by a tiny gland deep in the middle of our heads, but having supplemental doses can accomplish great things for us. We can look forward to such great things as extending the length of our lifetimes. We can live those extra years feeling healthy and young, and with much less threat of illness. We can accomplish such useful and even-more-immediate goals as curing jet-lag, improving the quality of the sleep we get, and cutting down on the stress in our lives by both chemical and emotional means. While the study of melatonin and its many miraculous uses has gone on for many years, it must still go on for many more, to determine with more exactness the effects of the hormone on a long-term basis. However, if it only provides a healthy good night's sleep, it's a great discovery; but if it will really do all that we think it can, it will be one of the greatest medical discoveries of our time.
Bibliography
1. Your Body's Natural Wonder Drug: Melatonin, by Russel J. Reiter, Ph.D. and Jo Robinson
Copyright 1995, Bantam Books, New York, NY
2. The Melatonin Miracle,
3. Melatonin, by Geoffrey Cowley; Newsweek, Aug. 7, 1995
4. The World Book Encyclopedia, World Book, Inc., 1981, Chicago, IL
f:\12000 essays\sciences (985)\Biology\membrane function.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What are the major components of biological membranes and how do they
contribute to membrane function?
___________________________________________________________________
Summary.
The role of the biological membrane has proved to be vital in countless
mechanisms necessary to a cell's survival. The phospholipid bilayer performs the
simpler functions such as compartmentation, protection and osmoregulation. The
proteins perform a wider range of functions such as extracellular interactions and
metabolic processes. The carbohydrates are found in conjunction with both the lipids
and proteins, and therefore enhance the properties of both. This may vary from
recognition to protection.
Overall the biological membrane is an extensive, self-sealing, fluid,
asymmetric, selectively permeable, compartmental barrier essential for a cell or
organelle's correct functioning, and thus its survival.
_____________________________________________________________________
Introduction.
Biological membranes surround all living cells, and may also be found
surrounding many of a eukaryote's organelles. The membrane is essential to the
survival of a cell due to its diverse range of functions. There are general functions
common to all membranes such as control of permeability, and then there are
specialised functions that depend upon the cell type, such as conveyance of an action
potential in neurones. However, despite the diversity of function, the structure of
membranes is remarkably similar.
All membranes are composed of lipid, protein and carbohydrate, but it is the
ratio of these components that varies. For example the protein component may be as
high as 80% in Erythrocytes, and as low as 18% in myelinated neurones. Alternately,
the lipid component may be as high as 80% in myelinated neurones, and as low as
15% in skeletal muscle fibres.
The initial model for membrane structure was proposed by Danielli and
Davson in the late 1930s. They suggested that the plasma membrane consisted of a
lipid bilayer coated on both sides by protein. In 1960, J. David Robertson
proposed the Unit Membrane Hypothesis, which suggests that all biological
membranes - regardless of location - have a similar basic structure. This has been
confirmed by research techniques. In the 1970s, Singer and Nicolson announced a
modified version of Danielli and Davson's membrane model, which they called the
Fluid Mosaic Model. This suggested that the lipid bilayer supplies the backbone of
the membrane, and proteins associated with the membrane are not fixed in regular
positions. This model has yet to be disproved and will therefore be the basis
of this essay.
The lipid component.
Lipid and protein are the two predominant components of the biological
membrane. There are a variety of lipids found in membranes, the majority of which
are phospholipids. The phosphate head of a lipid molecule is hydrophilic, while the
long fatty acid tails are hydrophobic. This gives the overall molecule an amphipathic
nature. The fatty acid tails of lipid molecules are attracted together by hydrophobic
forces and this causes the formation of a bilayer that is exclusive of water. This bilayer
is the basis of all membrane structure. The significance of the hydrophobic forces between
fatty acids is that the membrane is capable of spontaneous reforming should it become
damaged.
The major lipid of animal cells is phosphatidylcholine. It is a typical
phospholipid with two fatty acid chains. One of these chains is saturated, the other
unsaturated. The unsaturated chain is especially important because the kink due to the
double bond increases the distance between neighbouring molecules, and this in turn
increases the fluidity of the membrane. Other important phospholipids include
phosphatidylserine and phosphatidylethanolamine, the latter of which is found in
bacteria.
The phosphate group of phospholipids acts as a polar head, but it is not always the
only polar group that can be present. Some plants contain sulphonolipids in their membranes,
and more commonly a carbohydrate may be present to give a glycolipid. The main carbohydrate
found in glycolipids is galactose. Glycolipids tend to only be found on the outer face of
the plasma membrane where in animals they constitute about 5% of all lipid present. The
precise functions of glycolipids is still unclear, but suggestions include protecting the
membrane in harsh conditions, electrical insulation in neurones, and maintenance of ionic
concentration gradients through the charges on the sugar units. However the most important
role seems to be the behaviour of glycolipids in cellular recognition, where the charged
sugar units interact with extracellular molecules. An example of this is the interaction
between a ganglioside called GM1 and the Cholera toxin. The ganglioside triggers a chain of
events that leads to the characteristic diarrhoea of Cholera sufferers. Cells lacking GM1
are not affected by the Cholera toxin.
Eukaryotes also contain sterols in their membranes, associated with lipids. In
plants the main sterol present is ergosterol, and in animals the main sterol is
cholesterol. There may be as many cholesterol molecules in a membrane as there are
phospholipid molecules. Cholesterol orientates in such a way that it significantly
affects the fluidity of the membrane. In regions of high cholesterol content,
permeability is greatly restricted so that even the smallest molecules can no longer
cross the membrane. This is advantageous in localised regions of membrane.
Cholesterol also acts as a very efficient cryoprotectant, preventing the lipid bilayer
from crystallising in cold conditions.
The biological membrane is responsible for defining cell and organelle
boundaries. This is important in separating matrices that may have very different
compositions. Since there are no covalent forces between lipids in a bilayer, the
individual molecules are able to diffuse laterally, and occasionally across the
membrane. This freedom of movement aids the process of simple diffusion, which is
the only way that small molecules can cross the membrane without the aid of proteins.
The limit of permeability of the membrane to the diffusion of small solutes is
selectively controlled by the distribution of cholesterol.
Another role of lipids is their ability to dissolve proteins and enzymes that would
otherwise be insoluble. When an enzyme becomes partially embedded in the lipid
bilayer it can more readily undergo conformational changes that increase its activity or
specificity to its substrate. For example, mitochondrial ATPase is a membranous enzyme that
has a greatly decreased Km and Vmax following delipidation. The same applies to
glucose-6-phosphatase, and many other enzymes.
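Km and Vmax here are the standard Michaelis-Menten parameters. A minimal Python sketch of
the rate law v = Vmax[S]/(Km + [S]), using purely illustrative numbers that are not taken from
any of the sources cited below, shows how a lower Vmax after delipidation reduces the computed
reaction rate at a given substrate concentration:

    def michaelis_menten_rate(substrate_conc, vmax, km):
        # Classic Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])
        return vmax * substrate_conc / (km + substrate_conc)

    # Illustrative values only (not measured data): before and after delipidation.
    s = 2.0                                                     # substrate concentration, arbitrary units
    v_lipidated = michaelis_menten_rate(s, vmax=100.0, km=1.0)
    v_delipidated = michaelis_menten_rate(s, vmax=40.0, km=0.5)
    print(v_lipidated, v_delipidated)                           # the delipidated enzyme turns over more slowly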
The ability of the lipid bilayer to act as an organic solvent is very important in
the reception of the Intracellular Receptor Superfamily. These are hormones such as the
steroids, thyroids and retinoids which are all small enough to pass directly through the
membrane.
Ionophores are another family of compounds often found embedded in the
plasma membrane. Although some are proteinous, the majority are polyaromatic
hydrocarbons, or hydrocarbons with a net ring structure. Their presence in the
membrane produces channels that increases permeability to specific inorganic ions.
Ionophores may be either mobile ion-carriers or channel formers. (see fig.4)
The two layers of lipid tend to have different functions or at least uneven
distribution of the work involved in a function, and to this end the distribution of
types of lipid molecules is asymmetrical, usually in favour of the outer face. In general
internal membranes are also a lot simpler in composition than the plasma membrane.
Mitochondria, the endoplasmic reticulum, and the nucleus do not contain any glycolipids. The
nuclear membrane is distinct in the fact that over 60% of its lipid is phospatidylcholine,
whereas in the plasma membrane the figure is nearer 35%.
The protein component.
All biological membranes contain a certain amount of protein. The mass ratio
of protein to lipid may vary from 0.25:1 to 3.6:1, although the average is usually 1:1. The
proteins of a biological membrane can be classified into five groups depending upon their
location, as follows;
Class 1. Peripheral.------------These proteins lack anchor chains. They are
usually found on the external face of membranes
associated by polar interactions.
Class 2. Partially Anchored-----These proteins have a short hydrophobic anchor
chain that cannot completely span the membrane.
Class 3. Integral (1)-----------These proteins have one anchor chain that spans
the membrane.
Class 4. Integral (5)-----------These proteins have five anchor chains that span
the membrane.
Class 5. Lipid Anchored---------These proteins undergo substitution with the
carbohydrate groups of glycolipids, therefore
binding covalently with the lipid.
This classification is not definitive in including all proteins, since there may
well be other examples that span the membrane with different numbers of anchor chains.
The structure of proteins varies greatly. The first factor affecting structure is
the protein's function, but equally important is the protein's location, as shown above. Those
proteins that span the membrane have regions of hydrophobic amino acids arranged in
alpha-helices that act as anchors. The alpha-helix allows maximum Hydrogen bonding, and
therefore water exclusion.
Proteins that pass completely through the membrane are never symmetrical in
their structure. The outer face of the plasma membrane at least always has the bulk of
the protein's structure. It is usually rich in disulphide bonds, oligosaccharides, and
when relevant, prosthetic groups.
The proteins found in biological membranes all have distinctive functions,
such that the overall function of a cell or organelle may depend on the proteins
present. Also, different membranes within a cell, (i.e. those membranes surrounding
organelles) can be recognised solely on the presence of membranous marker proteins.
In the majority of cases membranous proteins perform regulatory functions.
The first group of such proteins are the ionophores, as mentioned before. The
proteinous ionophores are found in the greatest concentration in neurones. Here, the
diffusion of inorganic ions is essential to maintaining the required membrane
potential. The main ions responsible for this are Sodium, Potassium and Chloride -
each of which has its own channel forming ionophore.
The observed rate of diffusion of many other solutes is much greater than can
be explained by physical processes. It is widely accepted that membranous proteins
carry certain solutes across the membrane by the process of facilitated diffusion. This is
done by the forming of pores of a complementary size and charge, to accept specific ions or
organic molecules. The pores are opened and closed by conformational changes in the protein's
structure. There are three main types of facilitated diffusion. None of these processes
require an energy input.
Active transport is the movement of solutes across a membrane, against the
concentration gradient, and it therefore utilises energy from ATP. An example of this
is the Sodium-Potassium-ATPase pump, which is an active antiport carrier protein
common to nearly all living cells. It maintains a high [Potassium ion] within the cell
while simultaneously maintaining a high [Sodium ion] outside the cell. The reason for
this is that by pumping Sodium out of the cell, it can diffuse in again at a different site
where it couples to a nutrient.
As well as transporting solutes across a membrane, there are many proteins
that transport solutes along the membrane. An example of this are the respiratory
enzyme complexes of the inner mitochondrial membrane. These complexes are
located in a close proximity to each other, and pass electrons through what is known
as the respiratory chain. The orientation of the complexes is vital for their correct
functioning.
Another key role of membranous proteins is to oversee interactions with the
extracellular matrix. Many hormones interact with cells through the membranous
enzyme adenylate cyclase. The binding of specific hormones activates adenylate cyclase to
produce cyclic adenosine monophosphate (c.AMP) from adenosine triphosphate
(ATP). c.AMP acts as a secondary messenger within the cell. A wide variety of
extracellular signalling molecules work by controlling intracellular c.AMP levels.
Insulin is an exception to this generalisation, because its receptor is enzyme linked
rather than ligand linked. This means that the cytosolic face of the receptor has
enzymatic activity rather than ligand forming activity. The enzymatic activity of the
Insulin receptor is in the reversible phosphorylation of phosphoinositide.
Vision and smell rely on a family of receptors called the G-protein receptors.
The cytosolic faces of these receptors bind with guanosine triphosphate (GTP). This
action is coupled to ion channels, so that the activation of a receptor changes the
intracellular levels of c.GMP, which in turn activates the ion channels, and thus
allows a membrane potential to be developed.
The composition of proteins in the biological membrane is far from static.
Receptors are constantly being regenerated and replaced, and this is important in the
ever changing environment of the cell. For example, the transferrin receptor is
responsible for the uptake of Iron. In the cytosol, an enzyme called aconitase is present
which inhibits the synthesis of transferrin by binding to transferrin's mRNA. In a low Iron
concentration, aconitase releases the mRNA allowing transferrin to be synthesised.
A similar process occurs with the Low Density Lipoprotein (LDL) receptor.
This receptor traps LDL particles which are rich in cholesterol. The LDL receptor is
only produced by the cell, when the cell requires cholesterol for membrane synthesis.
The number of receptors in a biological membrane varies greatly between
different types of receptor.
The immune responses of cells are controlled by a superfamily of membranous
proteins called the Ig superfamily. This superfamily contains all the molecules
involved in intercellular and antigenic recognition. This includes major
histocompatability complexes, Thymus T-cells, Bursa B-cells, antibodies and so on.
Although this family is vast, the important point is that all antigenic responses are
mediated by membranous proteins.
As there are glycolipids in the biological membrane, there are also
glycoproteins. One of the key roles of glycoproteins is in intercellular adhesion. The
Cadherins are a family of calcium-dependent adhesives. They are firmly anchored
through the membrane, and have glycosylated heads that covalently bind to
neighbouring molecules. They seem to be important in embryonic morphogenesis
during the differentiation of tissue types. The Lectins and Selectins are similar
families of molecules responsible for adhesion in the bloodstream. However the most
abundant adhesives are the Integrins, which are responsible for binding the cellular
cytoskeleton to the extracellular matrix.
The range of membranous proteins has proved to be vast, due to the wide
variety of functions that must be performed. It would be possible to continue
describing proteins for many more pages, but one final example will be used in
conclusion, and that is the photochemical reaction centre of photosynthesis. This
very large protein complex is found in the Thylakoid membrane of chloroplasts. Each
reaction centre has an antenna complex comprising hundreds of chlorophyll molecules
that trap light and funnel the energy through to a trap where an excited electron is
passed down a chain of several membranous electron acceptors.
Conclusion.
The role of the biological membrane has proved to be vital in countless
mechanisms necessary to a cell's survival. The phospholipid bilayer performs the
simpler functions such as compartmentation, protection and osmoregulation. The
proteins perform a wider range of functions such as extracellular interactions and
metabolic processes. The carbohydrates are found in conjunction with both the lipids
and proteins, and therefore enhance the properties of both. This may vary from
recognition to protection.
Overall the biological membrane is an extensive, self-sealing, fluid,
asymmetric, selectively permeable, compartmental barrier essential for a cell or
organelle's correct functioning, and thus its survival.
Bibliography.
1) Alberts, B.; Bray, D.; Lewis, J.; Raff, M.; Roberts, K.; Watson, J.D. Molecular
Biology of the Cell, Third Edition. pp. 195-212, 478-504. Garland Publishing,
1994.
2) Cereijido, M.; Rotunno, C.A. Introduction to the Study of Biological
Membranes. p. 12. Gordon and Breach, 1970.
3) Fleischer; Hatefi; MacLennan; Tzagoloff. The Molecular Biology of
Membranes. pp. 138-182. Plenum Press, 1978.
4) Perkins, H.R.; Rogers, H.J. Cell Walls and Membranes. pp. 334-338. E. & F.N.
Spon Ltd, 1968.
5) Quinn, P. The Molecular Biology of Cell Membranes. pp. 30-34, 173-207.
Macmillan Press, 1982.
6) Stryer, L. Biochemistry, Third Edition. pp. 283-309. W.H. Freeman & Co, 1994.
7) Yeagle, P. The Membranes of Cells. pp. 4-16, 23-39. Academic Press Inc,
1987.
f:\12000 essays\sciences (985)\Biology\MEMORY.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MEMORY
Memory is defined as the faculty by which sense impressions and information are retained in the mind and subsequently recalled. A person's capacity to remember and the total store of mentally retained impressions and knowledge also constitute memory. (Webster, 1992)
"We all possess inside our heads a system for declassifying, storing and retrieving information that exceeds the best computer capacity, flexibility, and speed. Yet the same system is so limited and unreliable that it cannot consistently remember a nine-digit phone number long enough to dial it" (Baddeley, 1993). The examination of human behavior reveals that current activities are inescapably linked by memories. General "competent" (1993) behavior requires that certain past events have effect on the influences in the present. For example, touching a hot stove would cause a burn and therefore memory would convey a message to not repeat again. All of this is effected by the development of short-term memory (STM) and long-term memory (LTM).
Memories can be positive, like memories of girlfriends and special events, or they can be negative, such as suppressed memories. Sexual abuse of children and
adolescents is known to cause severe psychological and emotional damage. Adults who were sexually abused in childhood are at a higher risk for developing a variety of psychiatric disorders, anxiety disorders, personality disorders, and mood disorders. To understand the essential issues about traumatic memory, the human mind's response to a traumatic event must first be understood. Memory is made up of many different sections, each affecting the others.
Can people remember what they were wearing three days ago? Most likely no, because the memory only holds on to what is actively remembered. What a person was wearing is not important, so it is thrown out and forgotten. This type of unimportant information passes through the short-term memory. "Short-term memory is a system for storing information over brief intervals of time." (Squire, 1987) Its main characteristic is the holding and understanding of limited amounts of information. The system can grasp brief ideas which would otherwise slip into oblivion, hold them, relate them and understand them for its own purpose. (1987) Another aspect of STM was introduced by William James in 1890, under the name "primary memory" (Baddeley, 1993). Primary memory refers to the information that forms the focus of current attention and that occupies the stream of thought. "This information does not need to be brought back to mind in order to be used" (1993). Compared to short-term memory, primary memory
places less emphasis on time and more emphasis on attention, processing, and holding. No matter what it is called, this system is used when someone hears a telephone number and remembers it long enough to write it down. (Squire, 1987)
Luckily, a telephone number only consists of seven digits, or else no one would be able to remember them. Most people can remember six or seven digits, while others can manage only four or five and some up to nine or ten. This is measured by a technique called the digit span, developed by a London school teacher, J. Jacobs, in 1887. Jacobs took subjects (people), presented them with a sequence of digits and required them to repeat the numbers back in the same order. The length of the sequence is steadily increased until a point is reached at which the subject always fails. The point at which a person is right half the time is defined as their digit span. A way to improve a digit span is through rhythm, which helps to reduce the tendency to recall the numbers in the wrong order. Also, to make sure a telephone number is copied correctly, numbers can be grouped in twos and threes instead of given all at once. (Baddeley, 1993)
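As a rough, purely illustrative sketch (none of this code comes from the sources cited here; the function names and the simulated subject's fixed capacity of seven digits are assumptions made only for the example), Jacobs' digit-span procedure can be imitated in a few lines of Python:

import random

def present_sequence(length):
    # Generate a random digit sequence of the given length.
    return [random.randint(0, 9) for _ in range(length)]

def subject_recalls(sequence, capacity=7):
    # Simulated subject: recall succeeds only while the sequence fits within capacity.
    return len(sequence) <= capacity

def measure_digit_span(max_length=12):
    # Steadily increase the sequence length until the simulated subject fails,
    # as in Jacobs' procedure (a real test would average over many trials).
    for length in range(1, max_length + 1):
        if not subject_recalls(present_sequence(length)):
            return length - 1  # longest sequence repeated back correctly
    return max_length

print("Estimated digit span:", measure_digit_span())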
Another part of short-term memory is called chunking, used for the immediate recall of letters rather than numbers. When told to remember and repeat the letters q s v l e r c i i u k, only a person with an excellent immediate memory would be able to do so. But, if the same letters were given this way, q u i c k s i l v e r, the results would be
different. What is the difference between the two sequences? The first was 11 unrelated letters, while the second was chunked into a familiar word, which makes the task far easier. (1993)
"Short-term memory recall is slightly better for random numbers than for random letters, which sometimes have similar sounds. It is better for information heard rather than seen. Still, the basic principals hold true: At any given moment, we can process only a very limited amount of information." (Myers, 1995)
The next part in the memory process involves the encoding and merging of information from short-term into long-term memory. Long-term memory is understood as having three separate stages: transfer, storage, and retrieval. Once information has entered LTM, with a size that appears to be essentially unlimited, it is maintained by repetition or organization.
A major part of the transfer process concerns how learned information is coded into memory. Long-term and short-term memory are thought to have different organizations. Whereas STM is seen as being organized by time, LTM is organized by meaning and association and then put into categories. For example, our memory takes in Coke and Pepsi as drinks, then organizes and puts them in categories such as soda. Rehearsal plays an important role in the transfer of information into long-term memory.
The critical aspect is the type of rehearsal or processing that takes place during the input time. "Simple repetition, which serves only to maintain the immediate availability of an item, does little if anything to enhance subsequent recall. Active processes such as elaboration, transformation, and recoding are activities that have been found to enhance recall." (Asken, 1987)
Information that is stored in LTM is stored in the same form as it was originally encoded. Major forms of storage are episodic memory and semantic memory. Episodic memory involves remembering particular incidents, such as visiting the doctor a week ago. Semantic memory concerns knowledge about the world. It holds meanings of words or any general information learned. Knowledge of the capitals of all the states would be stored in semantic memory. A Canadian psychologist, Endel Tulving discovered that there was more activity in the front of the brain when episodic memories were being retrieved, compared to more activity towards the back of the brain with semantic memory.
Retrieval, the third process related to LTM, is the finding and retrieving of information from long-term storage. The cues necessary to retrieve information from memory are the same cues that were used to encode the material.
For some, positive memories are recalled through music. Certain songs remind people of special times spent with friends. Couples sometimes have songs that remind them of their time spent together. Everyone has some way of remembering good times from the past.
Along with positive memories come the negative ones, which are suppressed deep in our minds. Another word for negative is traumatic: an experience beyond "the range of usual human experience" (Sidran Foundation, 1994), accompanied by intense fear, terror and helplessness. Examples include a serious threat to one's life (or that of one's children, spouse, etc.), rape, military combat, natural or accidental disasters, and torture.
So how does trauma affect memory? People use their natural ability to push a traumatic experience out of awareness while the trauma is happening, which can cause the memories of the traumatic events to emerge only later. People with Post Traumatic Stress Disorder (PTSD) who have survived horrific events experience extreme recall of the event. Some people say they are haunted by memories of traumatic experiences that disrupt their daily lives; they cannot get the pictures of the trauma out of their heads. This brings recurring nightmares, flashbacks, or even reliving the trauma as if it were happening now. Vietnam veterans experience this symptom because of what
they saw and lived through. Some researchers have shown in the laboratory that ordinary or slightly stressful memories are easily distorted. However, this laboratory research on ordinary memory may be irrelevant in regard to memories of traumatic experiences. Other scientists argue that traumatic memories are different from ordinary memories in the way they are encoded in the brain. Evidence shows trauma is stored in the part of the brain called the limbic system, which processes feelings and sensory input, but not language or speech. (1994) People who have been traumatized may live with memories of terror, though with little or no conscious memory to explain the feelings. Sometimes a current event may trigger long forgotten memories of earlier trauma. The trigger may be any sound or smell, such as a particular cologne worn by an attacker.
Whether remembered or not, the memories are stored in the brain, and today, with hypnosis, recall can bring forth what has been deeply suppressed. The question is, does one really want to know what is not remembered? Along with memories that are recovered come the effects that follow.
Short-term memory briefly holds every experience encountered, while long-term memory retains only what is important. Memory is stored through episodic and semantic memory. The retrieval of encoded information occurs the same way it was
encoded. Memory is affected by positive and negative emotions, some remembered, others suppressed. Not only is memory used to dwell in the past, it also helps formulate the present and the future.
f:\12000 essays\sciences (985)\Biology\Mendels Theories.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Gregor Mendel played a huge role in establishing the underlying principles of genetic
inheritance. He grew up in an Augustinian brotherhood where he received
agricultural training along with a basic education. He then went on to the Olmutz
Philosophical Institute and entered the Augustinian monastery in 1843.
After 3 years of theological studies, Mendel went to the University of Vienna
where he was influenced by two professors, the physicist Doppler and a botanist
named Unger. Here he learned to study science through experimentation, which
aroused his interest in the causes of variation in plants. Then in 1857, Mendel
began breeding garden peas in the abbey garden to study inheritance, work which
led to his laws of segregation and independent assortment.
Mendel's Law of Segregation states that the members of a pair of
homologous chromosomes segregate during meiosis and are distributed to
different gametes. This hypothesis can be divided into four main ideas. The
first idea is that alternative versions of genes account for variations in
inherited characters; different alleles create different variations in
inherited characters. The second idea is that for each character, an organism
inherits two genes, one from each parent. This means that homologous
loci may carry matching alleles, as in the true-breeding plants of Mendel's P
(parental) generation. If the alleles at a locus differ, the plant is a hybrid,
as in Mendel's F1 generation. The
third idea states that if the two alleles differ, the recessive allele will have no
effect on the organism's appearance. So in an F1 hybrid plant that has purple
flowers, the dominant allele is the purple-color allele and the recessive
allele is the white-color allele. The fourth idea is that the two genes for each
character segregate during gamete production.
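As an illustrative sketch only (the cross below, with P for the purple allele and p for the white allele, is set up to mirror the example above and is not taken from Mendel's own data), segregation can be checked with a small Punnett-square calculation in Python:

from itertools import product

f1_parent = ("P", "p")  # an F1 hybrid carries one dominant and one recessive allele

# Each gamete receives one allele of the pair; combining gametes at random
# reproduces the familiar 3:1 ratio of purple to white phenotypes.
offspring = ["".join(sorted(pair)) for pair in product(f1_parent, f1_parent)]
purple = sum(1 for genotype in offspring if "P" in genotype)
white = len(offspring) - purple

print(offspring)            # ['PP', 'Pp', 'Pp', 'pp']
print(purple, ":", white)   # 3 : 1 purple to white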
Independent assortment states that each member of a pair of
homologous chromosomes segregates during meiosis independently of the
members of other pairs, so that alleles carried on different chromosomes are
distributed randomly to the gametes.
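A similar sketch (again only an illustration; the round/wrinkled and yellow/green pea traits are the classic textbook example, not something stated in this essay) shows how independent assortment of two gene pairs yields the 9:3:3:1 phenotype ratio:

from collections import Counter
from itertools import product

# Gametes from a dihybrid RrYy parent carry every combination of the two pairs.
gametes = ["".join(g) for g in product("Rr", "Yy")]  # RY, Ry, rY, ry

def phenotype(g1, g2):
    # A dominant allele anywhere in the combined genotype sets the phenotype.
    shape = "round" if "R" in g1 + g2 else "wrinkled"
    colour = "yellow" if "Y" in g1 + g2 else "green"
    return shape + "/" + colour

counts = Counter(phenotype(g1, g2) for g1, g2 in product(gametes, gametes))
print(counts)  # 9 round/yellow : 3 round/green : 3 wrinkled/yellow : 1 wrinkled/green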
In conclusion, Mendel's work was very important to the scientific
community, and his findings are still being studied to this day.
f:\12000 essays\sciences (985)\Biology\minerals.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Minerals are natural compounds or elements of inorganic nature. There are 92 naturally occurring elements, and the minerals built from them have specific physical properties, definite chemical compositions, and characteristic atomic structures. Between 2,000 and 2,500 minerals can be found in the earth's crust. Minerals are formed in response to their environment, most of them too deep below the surface for an observer to see. Environments in which minerals are formed far beneath the earth's surface are plutonic igneous, pegmatitic, high-temperature vein, moderate-temperature vein, low-temperature vein, and metamorphic environments. Environments in which minerals form near the earth's surface are groundwater, weathering, and sedimentary. Minerals are divided into groups on the basis of their composition. About one third of all minerals belong to the silicate group. Other groups are the carbonates, which include calcite; the oxides, which include magnetite; the sulfides, which include pyrite; the halides, which include halite; the sulfates, which include gypsum; and the phosphates, the group to which the mineral apatite belongs. The last group consists of minerals that are chemical elements found in their uncombined state; these elements include copper, silver, gold, and so on.
By: Nick Hirschmann
October 25, 1996
f:\12000 essays\sciences (985)\Biology\Mononucleosis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MONONUCLEOSIS
Mononucleosis is an infectious disease of humans in which the blood and tissues contain an increased number of mononuclear leukocytes (white blood cells with only one nucleus), either monocytes or lymphocytes. An infectious disease is a disease that can give you an infection, can be transmitted without actual contact, or can be caused by a microorganism. All species of animals are afflicted with infections caused by a wide variety of organisms, from submicroscopic viruses to wormlike parasites. When a person has an infectious disease like mono, the organism gains access to the patient's body, survives, and then multiplies. Next, the patient gets the symptoms. Then the patient may die or recover spontaneously, or the infection may respond to specific therapy. Often immunity follows. Infectious diseases have strongly influenced the course of history on Earth. Among the organisms responsible for human infections are viruses. Viruses are simple life forms consisting of nucleic acid, encoding genetic information, and surface components of protein that enable them to enter cells. Viruses are unable to multiply outside of cells. The virus that causes mono carries its genetic information as DNA. Another name for mononucleosis is glandular fever because of the fever and swelling of the lymph nodes throughout the body. Mononucleosis is caused by the Epstein-Barr virus (EBV), a member of the herpes family; related herpes viruses also cause some mono-like illnesses and other diseases. Mono usually occurs in people 15 to 30 years old, but is known to appear at any age.
Mono symptoms include fever, chills, fatigue, malaise, sore throat, headaches, swelling of the lymph nodes (noticeable in the neck), and skin rashes. Liver inflammation may occur. Also, swelling of the upper eyelids is a common symptom. In some cases blood may be found in the urine. The throat is often red; a membrane, white to dark gray in color and resembling that of diphtheria, may be present. In many cases there is a petechial rash on the soft palate. Mono is mostly transmitted by oral contact with exchange of saliva, which is why it is sometimes known as the "kissing disease." Sharing a cup is another way to get mono. It is not highly contagious. The incubation period is thought to be about 30 to 40 days. In about two-thirds of the patients the spleen is enlarged. The illness is mild to moderate and death is rare, but in some cases a patient may die of a ruptured spleen. A rash consisting of small hemorrhages or resembling measles or scarlet fever sometimes appears. Also, pneumonia occurs in about 2 percent of the infected patients. Although involvement of the liver occurs in almost all patients, severe disease of the liver is rare. Encephalitis, meningitis, or peripheral neuritis occurs uncommonly. Death has followed encephalitis. In mono, the heart is rarely affected.
During the illness antibodies develop. One way to detect this is by the Paul-Bunnell test. The diagnosis is made by studying the blood. A sample of the serum (clear liquid) of the patient's blood is mixed with sheep's blood; if the patient has the disease, the sheep's blood cells will stick together. There is no treatment, but bed rest is suggested depending on the seriousness. Medical care is for relief of symptoms and prevention of secondary infections. Mono usually lasts for about a week or two, though sometimes it may persist for several weeks, especially when the liver or nervous system is affected. A relapse occurs uncommonly, and second attacks are probably very rare. Recovery may take several months. The disease known as Chronic Fatigue Syndrome, or "yuppie disease," resembles mono. For a while Chronic Fatigue Syndrome was suspected of also being caused by EBV, but this theory has been discounted. Still, no cure or therapy has been found for the infectious disease called mononucleosis.
f:\12000 essays\sciences (985)\Biology\Mood Affects caused by the Sun.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Jared Sousa 1/20/96
Descriptive Research
Thesis: The amount of sun people receive affects their mood.
A young woman lies asleep on a cold, overcast winter morning. At 4 A.M., a faint
incandescence radiates from a light bulb placed near her bed. The light gradually gains
intensity until 6 A.M., when the woman awakes. She has just experienced a
simulated dawn of a new day. After being treated with this for several days, the woman's
annual winter depression slowly goes away. Does this mean that the less sun you get the
worse you feel, or perhaps the more you get the better your mood? It is very possible that
you may feel this way, as millions of people worldwide have experienced it first-hand. This
phenomenon is still something of a mystery, as many researchers don't completely understand why
this happens. "It may be that certain individuals have inherited vulnerability that causes
them to develop depression in the absence of exposure to sufficient environmental light"1.
Frederick A. Cook, the arctic explorer, provided a vivid description of the effects of
prolonged darkness on the human psyche: "The curtain of blackness which has overfallen the
outer world has also descended upon the inner world of our souls," Cook wrote in his
journal on May 16, 1898, "Around our tables . . . . men are sitting about sad and dejected
lost in dreams of melancholy. For brief moments some try to break the spell by jokes, told
perhaps for the 50th time. Others grind out a cheerful philosophy; but all efforts to
infuse bright hopes fail."2 Some believe that light affects the body's ability to make
serotonin, a neurotransmitter that helps induce feelings of calm and well being. The eye's
sensitivity may also play a part in sun/mood relations. A study was done on a group of
people in the winter and summer. In the winter, many individuals experienced much more
difficulty seeing dim light after sitting in the dark for a while.3 Another study done in
Vancouver shows that electrical activity in the retina when a bright light is shone is
significantly lower in winter.4
As much as 5% of Americans suffer from Seasonal Affective
Disorder, also known as SAD.5 SAD is an illness in which the sufferers feel depressed and
lethargic, and overeat. There is no known cause for this widespread illness. Many
researchers of SAD are speculating on the idea that SAD patients might have seasonal
variations in their melatonin secretions. A study of melatonin patterns in SAD sufferers
was done to determine if melatonin was a factor in the disorder. Since mostly women are
affected by SAD, researchers used healthy women as the control. The researchers found
that the significant difference between winter and summer pacemaking that occurred in SAD
patients also appeared in the healthy women. Other studies show that a SAD
sufferer's eye usually does not take in as much sunlight in the winter as a normal person's,
which may exaggerate the depression and other symptoms.6 Most SAD patients treated with
light therapy for a few weeks usually lose the depression. SAD patients that tended to eat
more than one portion of sweet things (such as chocolate, cake, or ice cream) per day
usually found temporary relief from their illness.7 Swiss scientists believe that the sweet
foods seem to "trigger" the release of the same mood-altering substances that light
triggers.
Nevertheless, light -- or lack thereof -- can really get under our skin. For instance,
"Rapid changes in the day length greatly modify the daily cycle of sleep and melatonin
secretion," report researchers led by psychiatrist Thomas A. Wehr of the National Institute
of Mental Health, ". . . brain mechanisms that detect and respond to seasonal changes in
day length may have been conserved in the course of human evolution."8 The findings on
the sun's effect on humans matched those already observed in rats. Many of us have not yet
realized what an important factor light is in our daily life. "Light is a complex stimulus
that has been inadequately specified, given the intense clinical experimentation of the
last five years."9 Research with these results easily prove that the sun and light really
do alter our mood, and have a great influence on our lives.
f:\12000 essays\sciences (985)\Biology\Morality and the Human Genome Project.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Morality and the Human Genome Project
Bibliography
Congress of the United States, Office of Technology Assessment, Mapping Our
Genes: Genome Projects: How Big, How Fast?, Johns Hopkins University
Press: Baltimore,1988.
Gert, Bernard, Morality and the New Genetics: A Guide for Students and Health
Care Providers, Jones and Bartlett: Sudbury, Massachusetts,1996.
Lee, Thomas F., The Human Genome Project: Cracking the Genetic Code of Life,
Plenum Press: New York, 1991.
Murphy, Timothy F., and Lappe, Marc, ed., Justice and the Human Genome
Project, University of California Press: Berkeley, 1994.
Does the Human Genome Project affect the moral standards of society?
Can the information produced by it become a beneficial asset or a moral evil?
For example, in a genetic race or class distinction, X chromosome
markers can be used for the identification of a person's ethnicity or class
(Murphy,34). This is a seemingly harmless collection of information from the
advancement of the Human Genome Project. But let's assume this information is
used to explore ways to deny entry into countries, determine social class, or even
decide who gets preferential treatment. Can the outcome of this information affect the
moral standards of a society?
The answers to the above and many other questions are relative to the
issues facing the Human Genome Project. To better understand these topics a
careful dissection of the terminology must be made. Webster's Dictionary defines
morality as ethics, upright conduct, conduct or attitude judged from the moral
standpoint. It also defines a moral as concerned with right and wrong and the
distinctions between them. A genome is "the total of an individuals genetic
material," including, "that part of the cell that controls heredity" (Lee,4).
Subsequently, "reasearch and technology efforts aimed at mapping and
sequencing large portions or entire genomes are called genome projects"
(Congress,4). Genome projects are not a single organizations efforts, but instead
a group of organizations working in government and private industry throughout
the world. Furthermore, the controversies surrounding the Human Genome
Project can be better explained by the past events leading to the project, the
structure of the project, and the moral discussion of the project.
The major events of genetic history are important to the Human Genome
Project because the structure and most of the project deals with genetics.
Genetics is the study of the patterns of inheritance of specific traits
(Congress,202). The basic beginnings of genetic history lay in the ancient
techniques of selective breeding to yield special characteristics in later
generations. This was and still is a form of genetic manipulation by "employing
appropriate selection for physical and behavioral traits" (Gert,2). Later,
the work of Gregor Mendel, an Austrian monk, on garden peas established the
quantitative discipline of genetics. Mendel's work explained that the inheritance of
traits can be attributed to factors passed from one generation to the next: genes.
The complete set of genes for an organism is called its genome (Congress,3).
These traits can be explained due to the inheritance of single or multiple genes
affected by factors in the environment (3). Mendel also correctly stated that two
copies of every factor exist and that one factor of inheritance could be dominant
over another (Gert,3). The next major events of genetic history involved DNA
(deoxyribonucleic acid). DNA, as a part of genes, was discovered to be a double
helix that encodes the blueprints for all living things (Congress,3). DNA was
found to be packed into chromosomes, of which 23 pairs existed in each cell of
the human body. Furthermore, one chromosome of each pair is donated from
each parent. DNA was also found to be made of nucleotide chains made of four
bases, commonly represented by A, C, T, and G. Any ordered series of base pairs
makes a sequence. These sequences are the instructions that produce molecules,
proteins, for cellular structure and biochemical functions. In relation, a marker
is any location on a chromosome where inheritance can be identified and tracked
(202). Markers can be expressed areas of genes (DNA) or some segment of DNA
with no known coding function but an inheritance could be traced (3). It is these
markers that are used to do genetic mapping. By the use of genetic mapping
isolated areas of DNA are used to find whether a person has a specific trait, inherited
factor, or numerous other pieces of genetic information. In conclusion, the genetic
history running from ancient selective breeding to Mendel's garden peas to the current
isolation of genes has been built only through the collaborative data of many
organizations and scientists.
The Human Genome Project has several objectives. To better understand
the moral issues that exist the project itself must be examined. Among the many
objectives, DNA databases that include sequences, location markers, genes, and
the function of similar genes (Congress,7). The creation of human chromosome
maps for DNA markers that would allow the location of genes to be found. A
repository of research materials including ordered sets of DNA fragments
representing the complete DNA in chromosomes. New instruments for analysis
of DNA. New methods of analysis of DNA through chemical, physical, and
computational methods. Develop similar research technologies for other
organisms. Finally, to determine the DNA sequence of a large fraction of the
human genome and other organisms. The objectives of the Human Genome
Project are carried out by organizations such as the Department of Energy,
National Institutes of Health, Howard Hughes Medical Institute, and various
private organizations. These organizations all share two features: placing "new
methods and instruments into the toolkit of molecular biology" and building "research
infrastructure for genetics." Making the directives of the Human
Genome Project apparent is important in making a moral judgment on this
genetic technology.
Any attempt to resolve moral issues involving new information from the
Human Genome Project requires direct, clear, and total understanding of
common morality. Subsequently, a moral theory is the attempt to explain,
justify, and make visible "the moral system that people use in making their
moral judgments and how to act when confronting a moral problem" (Gert,31).
This theory is based on rational decisions. With this in mind, the moral system
must be known by everyone who is judged by it. This leads to the rational
statement that "morality must be a public system" (33). The individuals of the
public system must know what morality requires of them, and the judgments
and guidelines made must be rational to them. Just like any game, the players
play by a set of rules and these rules dictate how play is done. The game is
played only when everyone knows how to play. When rules are broken, penalties
are enforced by the other players' judgment according to the rules.
However, if everyone agrees to change the rules, then the game
continues without any penalties. Therefore, "the goal of common morality is to
lessen the amount of harm suffered by those protected by it" and it is
constrained by the knowledge and need to be understood by all it applies to (47).
Justified violations also exist in common morality. Just as a change in the rules of
a game is accepted when everyone agrees to it, a violation can be viewed by the
public not as an evil but as a decision backed by common morals.
Based on the pattern of common morality the issues of genetic race or
class distinction or any other controversies involving the Human Genome Project
can be put to a set of common moral standards. Just like the moral standard that
says killing is wrong but killing is justifiable in self-defense, the Human Genome
Project can be argued along the same pattern of moral discussion.
Whether a use of genetic information is a justifiable violation depends on the
common morality, which is based on the public system, which in turn is based on
decisions of right and wrong. In conclusion, the moral dilemma of genetics is
whether the information will be an asset or an evil in the individual's and the
public's perception of common morality, judged by the right and wrong of how it
is used. This answer depends on the society's structure: in one time period a use
may be accepted, in another it may not.
f:\12000 essays\sciences (985)\Biology\Multiple Births.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Multiple births are rare in humans, with twins being the most common form of this
event. Multiple births can arise in many different combinations of ways, but the
probability of giving birth to more than one child remains fairly constant across
the human race as a whole. The chances of multiple births can also vary with race and
genetic background. Scientists and researchers do not know what the exact cause of these
variations is, but many of them feel that it is caused by hormone differences between
different racial groups and/or differences in social class.
Prenatal and infant mortality is much higher in multiple pregnancies than in
pregnancies that involve only one child. The danger of premature birth also increases
with the number of offspring involved. In many multiple births, not all of the
children survive infancy, and some are stillborn. Through the advances in
technology, the survival rate of infants born in a multiple pregnancy has increased. The
first quintuplets, five babies born in a single pregnancy, to survive in medical history
were the Dionne quintuplets.
The use of drugs that treat female sterility, or fertility drugs, may increase the
chances of giving birth to multiple children. These drugs cause the ovaries to release an
egg once a month but in some cases they release more than one egg, sometimes releasing
several at a time, increasing the chances of a multiple birth. The drug clomiphene citrate
is one of the most widely taken fertility drugs and has resulted in the birth of twins about
once in every twelve births, a much greater rate than that of natural twins.
Twins are the most common form of a multiple pregnancy. About one in
eighty-seven births results in the birth of twins. Twins can be fraternal, also known as
dizygotic twins, or identical, also called monozygotic twins, with the birth of identical
twins being the rarer, occurring about four times in every thousand births, about
one-fourth as often as the birth of fraternal twins. This ratio of identical twin births to the
total number of births remains fairly constant, but the rate of fraternal twin births can vary
greatly. Fraternal twins are most common among black Africans, followed by people of
European origin. Asian populations are the least likely to give birth
to fraternal twins.
The birth of twins can occur in two different ways, the fertilization of a single egg
or the fertilization of two eggs. In the case of dizygotic twins, the woman's ovaries
release two eggs at about the same time instead of one; when each is fertilized by the
male's sperm, fraternal twins begin to develop. The two zygotes develop differently, each
having a different genetic coding. They can be both boys, both girls, or a boy and
a girl. Because each embryo develops on its own from different genetic
material, dizygotic twins resemble each other only as ordinary brothers and sisters do.
The birth of monozygotic twins takes place much differently than do the births of
fraternal twins. Identical twins originate from a single egg, fertilized similarly to that of
a single pregnancy. A change transpires early in the pregnancy that causes the
development of identical twins. The change from a single birth to the birth of
monozygotic twins occurs when the zygote splits into two separate structures. These
two parts begin to develop into individual fetuses, sharing a similar genetic code and
developing in a similar manner. Identical twins are of the same sex, resemble each other
very closely, and have similar fingerprints and blood types.
Scientists also believe that Siamese twins, also known as conjoined twins,
develop in a similar fashion as do monozygotic twins. Siamese twins are identical twins
with the difference that the zygote did not divide completely during their development.
Such twins are usually joined at the hip, chest, abdomen, buttocks, or head. With current
monitoring equipment, conjoined twins can be detected maturing in the mother and
during birth a Cesarean section is sometimes needed to deliver the children safely.
Separation of the twins sometimes leads to the death of one or both of them. Such
births are a rare event, occurring only about once in every fifty-thousand births.
Other forms of multiple births such as triplets, quadruplets and quintuplets, occur
very similarly to the birth of twins. Births of triplets are rarer than the birth of twins,
occurring approximately once in every seven-thousand nine-hundred births. The birth of
triplets can occur in a combination of ways. If the zygote divides into three separate
structures in the early stages of development, identical triplets will be born. If two eggs
are fertilized and one divides into two structures, as in monozygotic twins, a
pair of identical twins will be born along with a single infant. If the ovaries release three
eggs, which are then fertilized, fraternal triplets will be born.
The same level of combinations can be observed in the development of
quadruplets, quintuplets, and the very rare occasion of a pregnancy of six children. The
birth of quadruplets only takes place once in every seven-hundred and five thousand
births and quintuplets are born once in every eight million births. The birth of six infants is so
rare that accurate statistics cannot be made while compared to the total amount of births
around the world. In all of these forms of multiple pregnancies, the process is very
similar. If the woman's ovaries release four or five eggs, then the birth of quadruplets or
quintuplets will result. If the fertilized cell divides into four or five structures, the result is a
pregnancy of identical quadruplets or quintuplets. These and numerous other
combinations can occur as the number of infants involved increases.
Multiple pregnancies are rare, which is one of the reasons statistics on
six-child pregnancies are incomplete. The number of infants born in multiple
births, as well as in larger births consisting of six or more children, is also increasing due
to the use of fertility drugs. This widespread use is forcing researchers not to count
multiple births that were induced by drugs as part of the general statistics. Some aspects
of multiple births are still not clear to scientists. The difference between the probabilities
of multiple pregnancies among women of different races is just one of these problems. More
about multiple births has been learned over the ages, but more is still to be learned and tested by
scientists.
f:\12000 essays\sciences (985)\Biology\MURDER RAPE AND DNA.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Murder, Rape, and DNA
DNA is the information needed by a cell in order to reproduce an
identical offspring. In some crimes detectives have no evidence or
fingerprints to tell who committed the crime. Now there is a way of finding
who has committed the crime by a method called DNA Typing. DNA Typing
involves finding cells or blood on clothing or skin and amplifying the DNA. This
process was pioneered in the 1980's by a scientist named Alec Jeffreys.
If blood, sperm, or any other human cells are left at the scene of a
crime, the DNA in the cells can be analyzed and compared with some DNA
taken from the suspect's blood. If they match, this information and testimony
of a scientist can be used to convict a rapist or murderer.
In the O.J. Simpson case DNA Typing was used. There was blood
found on various areas at the crime scene. The investigators gathered the
evidence and took it to the laboratory, where it was analyzed. The defense argued
that the samples had been contaminated because of the way they were
handled.
DNA Typing is not perfect. There are many loopholes in it. An
example is the O.J. Simpson trial. During the process the DNA may be
tampered with or damaged.
In most rape cases DNA Typing is used by taking semen off the body
or clothes and then amplifying the DNA. The reaction that copies the DNA is
called PCR. The DNA is cut and placed in wells in trays, and PCR
copies the small pieces of DNA. The fragments are then analyzed by a blotting process. This
process has 21 different categories.
Other tests include those for tracing genetic diseases within families and
finding the genes that cause genetic diseases. DNA Typing can also establish
family relationships.
DNA Typing is becoming more common than ever. This process is
helping to free convicted people of crimes they may not have committed by
taking samples of the prisoner's DNA and comparing it with the evidence left
at the crime scene.
Jonathan Dewees
February 16, 1997
f:\12000 essays\sciences (985)\Biology\Muscle Growth.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Muscle Growth
Introduction
With the introduction of modern conveniences such as the automobile, the remote control, and
even the electric toothbrush, people are relying on technology to do everything for them. For a
generation growing up in today's society, physical tasks have almost become obsolete. Tasks such
as going shopping or visiting a friend can be done from the comfort of your own
computer. With this sedentary lifestyle, muscular size will almost be unnecessary, except for the
athlete who wants to succeed in sports. The non-athlete will have no reason to leave the
house because everything needed will be at their fingertips; they will not have to get up and
do anything. Any type of exercise is good for the body and muscles. Muscle growth is essential if
you want to look better, feel better, and perform everyday tasks such as walking to the car and
getting out of bed more easily. A person who is in shape will also sleep better than an out-of-shape
person, and feel more revitalized in the morning.
Muscles account for approximately 35% of the body weight in women, and about 45% of the
body weight in men. With over 600 muscles covering the human skeleton, muscles give the body
bulk and form. The human body contains millions of muscle fibres whose coordinated
contraction causes whole muscles to contract.
Muscles are the foundation on which our bodies are built. Without muscles our bodies could not
perform the simplest tasks such as opening our eyes, talking, breathing and even the pumping of
our heart, or the most difficult tasks, such as running the hurdles in a track and field event.
Muscles are also important to maintain balance and posture.
Description of Muscles
In the body there are several types of muscle that control different functions, one of
these types being skeletal muscle. Skeletal muscle is the most evident in the human body because it
has more mass than the other types of muscle and lies directly under the skin, attached to
the skeleton by tendons and ligaments.
Skeletal muscles are divided into three structural units, the entire muscle, the muscle bundle, and
the muscle fiber (cell). Muscle fibers are arranged in two types of structure, fusiform and
pennate, with the pennate being broken up into three basic forms: the
unipennate, bipennate, and multipennate.
Figure: the longitudinal grain of fusiform muscles compared with that of pennate muscles.
Striated muscle tissue is associated with the muscles related to the skeleton and
movement. It is the muscle tissue located directly under the skin and comprises the
muscles that are the most visible.
There are two types of fiber in skeletal muscle: fast twitch and slow twitch.
Fast twitch fibers have a fast form of myosin ATPase and are very good at delivering calcium to
the muscle cell. Slow twitch fibers have a slow form of myosin ATPase and are not very good at
delivering calcium to the muscle cell. Fast twitch muscle fibers reach peak tension more than
twice as fast as slow twitch fibers, making them more explosive, which would be more
desirable for athletes such as sprinters.
Figure: magnified dark slow twitch fibers and light fast twitch fibers; the fast twitch fibers tire more easily than the slow twitch fibers.
Causes of Muscle Growth
Muscle growth (hypertrophy) takes place in the muscle fibers themselves. When a muscle grows
there is not an increase in the number of muscle fibers, since this is set at birth, but rather an
increase in the size of those muscle fibers, and an increase in the amount of connective tissue in
the muscle. Muscle fibers are enlarged through resistance training or regular activity, which
increases the amount of the contractile proteins, actin and myosin. This makes more
cross bridges available to do more work.
Figure: muscle fibers with no apparent resistance training compared with fibers after considerably more resistance training.
The stimulus that tells the muscle to grow is a result of two things: the shortening of the muscle
against a resistance, and the intensity of the contraction. For the growth process to start, a point must
be reached in your workout where the exercised muscle is working near maximal capacity against a
resistance and the relative intensity of the exercise is very high. When performed just right, a
highly intense resistance exercise disrupts the cell membrane and cellular microfilaments, which begins the
growth process.
A: General Adaptation Syndrome (G.A.S)
The way our muscles respond to training is the same way that any other stimulus response
mechanism in our body responds to a stressor. This mechanism is called the General Adaptation
Syndrome.
If a muscle is given a stimulus (stressor) that it is not accustomed to, it will respond by going
through three stages. The first of these stages is called the alarm stage. The alarm stage occurs
immediately after a very intense stimulation: muscle cells are disrupted, and later on muscle
soreness is felt. This happens because the muscle that is being trained is not accustomed to this
stimulus and is slightly injured trying to respond to it.
The second stage is the resistance stage; this is where muscle hypertrophy takes place. The muscle
responds to the damage by rebuilding and restoring the muscle tissue.
In the third stage the muscle has not only repaired the damaged tissue, but will now add new
muscle proteins to avoid having the muscle slightly injured like it was in the alarm stage. This
addition of muscle proteins will cause the muscle to become bigger and stronger in order to be
able to properly handle that type of stress again.
Another factor in muscle growth is genetics. Genetically, men have the potential for more
muscle growth than women because of their higher testosterone levels. Another
genetic factor in muscle growth is what body type you have. An ectomorph will have a very
high metabolic rate and may have a very hard time gaining weight, whether it be muscle or fat. An
endomorph will not have as hard a time gaining muscle mass, but, because
of a low metabolic rate, will also gain fat very easily and must be careful. The "ideal" body type for muscle growth is
the mesomorph. Mesomorphs are generally athletically skilled, and their body type has
a predominance of muscle, bone and connective tissue.
"Artificial" Muscle Growth
One way in which muscle growth is achieved more rapidly than just by resistance training is with the
aid of artificial growth hormones, most notably anabolic steroids.
Steroids are compounds that resemble the natural male hormone testosterone. Testosterone is
produced naturally in the body and has an anabolic effect, but only a small portion of
the testosterone produced naturally is used for the anabolic (muscle growth) effect; most serves
the androgenic (male sexual characteristics) effect. Steroids are constructed synthetically to
maximize the anabolic effect, and to minimize the androgenic effect.
Steroids are taken mainly by people who want the quick results of muscle growth and
increased athletic performance. Steroids cause many negative side effects in the body, which
include increased aggressiveness, increased acne, development of facial hair in women, and development
of gynecomastia (breast-like tissue) in males, just to name a few. Steroid use is most prolific in
sports such as bodybuilding and professional football. Even though these sports have their own
testing programs, very few athletes are ever caught using steroids or other growth hormones. It
is estimated that approximately 90% of players in the NFL, and approximately 99%-100% of
professional bodybuilders, take anabolic steroids or some other type of anabolic hormone. These
growth hormones are taken to gain extra muscle size and strength in order to get the
competitive edge over their opponents.
Figure: an example of steroid use in a female professional bodybuilder; the bulking muscles cannot be achieved naturally by females because of the low levels of naturally occurring testosterone in their bodies.
Effects of Aging on the Muscles
Though muscle atrophy (reduction) does take place due to a lack of exercise, another factor that
contributes to atrophy is age. Between the ages of 24 and 80, males tend to lose 40%
of their skeletal muscle mass. Although some older males may continue to be physically active,
the skeletal muscle atrophy will not be as significant, but it will still take place.
Muscle strength can be relatively maintained up through 50 years of age. After that time a 15%
loss in strength per decade occurs between the ages of 50 and 70. From the ages of 70 to 80
years of age, a 30% loss in strength occurs per decade. Due to this muscle and strength loss there
is a decreased ability to partake in physical activity, and as a result, more atrophy takes place.
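As a rough, purely illustrative calculation (it assumes, which the text does not state, that each decade's loss applies to the strength remaining at the start of that decade), the quoted figures imply that only about half of age-50 strength remains by age 80:

strength = 1.0               # strength at age 50, taken as 100%
for decade in (50, 60):      # ages 50-70: about a 15% loss per decade
    strength *= 1 - 0.15
strength *= 1 - 0.30         # ages 70-80: about a 30% loss per decade

print(f"Strength remaining at 80: about {strength:.0%} of the age-50 level")  # roughly 51%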
This loss of muscle and strength can have negative effects on an individual, because a decline in
strength in the lower part of the body may result in falls and hip fractures. A decrease of muscular
strength in the upper body increases the risk of accidents in activities such as cooking and
cleaning, which require pushing and pulling motions.
Resistance training can also help a great deal in the fight to stay young. Older men show a greater
strength and muscle increase after a resistance training program than do younger males.
Resistance training can also help counteract the effects of osteoporosis in older women by building more
muscle mass around the bones, making the bones stronger and less susceptible to breakage.
Muscles and Self Esteem
Weight training and the desire for muscle growth are no longer reserved just for elite athletes.
Weight training is being done by many people, from older people trying to stay young to
younger people training for the sake of looking good. Muscle growth is desirable because it gives people
who have a pudgy look the look of a more lean and athletic person, rather than the look of a
couch potato. Weight training will not only help someone look better; it will also make them feel
better, give them more energy, and allow them to eat more due to the increased speed of their
metabolism. With an increased metabolism they will burn more fat while sedentary than will a
person with a slow metabolism. They will also be more likely to be attractive to the opposite sex,
and will be more confident with themselves.
Starting a Muscle Growth Program
After a muscle growth program your body will feel better, regardless of whether it is your first time in a
muscle growth program or you have been training for a while. Even though you may feel sore
your first time training, or after an extended break from training, you will probably find that this is
a good soreness, a sign that you are doing something good for your body.
Keeping Track of Progress
Keeping track of progress can be done in many different ways during weight training, and is a
good idea in order to see what muscle and strength gains are being made.
Progress can be plotted by actually taking the measurements of your muscles and recording the
differences every week or every couple of weeks. Another way that progress can be measured is
by keeping some sort of journal which records your workout, along with how much weight
you are lifting for each exercise; this way you can monitor the strength gains that you are making.
Avoiding the "Plateau" Effect
For the first few months of weight training you may notice a drastic increase in your lifts, but after
that you may start to "plateau". What happens when you plateau is that the gains in strength that
you have been achieving seem to level off, and strength gains seem to stop. This can be
discouraging for many young weight trainers and may lead them to give it up because they
think that it isn't doing any good. If this happens, don't be alarmed; it is very common among
weight trainers. What is happening is that your muscles are not getting the "alarming" effect that they
received when you first started working out; your muscles have adjusted to that routine and now
have built up almost a resistance against it. Many people feel that when this happens more weight
must be added in order to achieve more strength gain, but this is not true. By altering your
workout your muscles will get "confused" and the alarming effect will occur in your muscles,
causing muscle and strength gains to occur once again.
Equipment
When weight training, it is recommended that proper equipment be used before starting a training
program. Proper equipment would include proper shoes to cushion the feet and prevent any foot
soreness and to dampen the pressure put on the arches of your feet by the increased weight placed
on your body while lifting the weights. Another piece of equipment recommended, but not
essential would be a pair of workout gloves. Workout gloves protect the hands when gripping
the weight and may prevent a callus buildup on the hands. One piece of equipment that is
strongly recommended would be the weight belt. The weight belt is a wide leather belt which is
worn around the waist in order to protect the lower back. Due to the strain that
some weight training exercises can put on the lower back, the belt is highly recommended.
Cool Down
It is recommended to adequately cool down after training in order to prevent or reduce
muscle soreness. During a weight training program you should properly stretch the muscles being
worked in order to keep them loose. Stretching should also be done directly after a workout
because the muscles are still warm and can be stretched more easily. This will also increase
flexibility which can be very advantageous in preventing injuries such as muscle sprains and
strains.
A Conclusion to Muscle Growth
A muscle growth program can be beneficial to everybody, from the young athlete wanting to
succeed in sports to the older man trying to stay and feel young. The benefits of muscle
growth are too great to be passed up by anybody who has any ambition of feeling better
about themselves, looking better, and having more energy for everyday tasks. You will find that
once you start weight training and notice muscle and strength gains, it will almost
become addictive, and the desire for bigger and better results will become greater and greater. I
would recommend muscle growth to anybody, and anyone who disagrees should give it a try, just
for a little while; after the results of improved strength and muscle size are noticed, weight
training will become a part of their life.
f:\12000 essays\sciences (985)\Biology\New Developments or Research in Genetic Cloning.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
New Developments or Research in Genetic Cloning: Summary
Since genetic cloning is a very wide topic, the focus of my paper lies mainly on the new discoveries which might be beneficial to human beings. The first section of the paper focuses on the various cloning techniques geneticists use nowadays. The techniques included range from the simplest, suitable for all situations, to the complicated, suitable only for certain areas.
The second section of the paper, the longest section, discusses five of the many research projects performed over the last five years. The studies are arranged in descending chronological order, dating from February 1997 to April 1992. These studies are discussed because they all have one thing in common: they may be beneficial to human beings later on. For example, the newest entry in my paper, and perhaps the one that shocked the whole world, was the report of the first mammal successfully cloned from non-embryonic cells. This will be helpful in the future for patients waiting for organ transplants: scientists may be able to clone a fully functional organ and replace the damaged one with it. The report on the cloning of the human morphine receptor is advantageous to us because it helps scientists to develop new analgesics.
The third section of the paper contains a brief discussion of the advantages and the disadvantages of genetic cloning. It speculates on how our future may improve due to the technologies we are developing, and also on the biggest drawbacks which might come from them.
The last part of the paper is the explanation of complicated terms used in this paper. The terms which will be explained are printed in bold throughout the paper. This section, the glossary, is like the ones which appear in textbooks.
New Developments or Research in Genetic Cloning
Genetic cloning is one of the many technologies which have recently been introduced to improve our quality of life. Researchers are trying to improve our lives every day by applying genetic engineering to everyday problems. Cows can be genetically altered to produce more milk, and receptors in our body could be cloned to improve our health. The techniques and new research reported in this paper are just one tree out of the whole forest of genetic engineering.
Part I: Techniques of Genetic Cloning
Geneticists use different cloning methods for different purposes. The method used to identify human disease genes is different from the method used to clone a sheep. The following are commonly used techniques in genetic cloning.
Recombinant DNA
In recombinant DNA work, the desired segment is clipped from the surrounding DNA and copied millions of times. Each restriction enzyme recognizes a unique nucleotide sequence wherever it occurs along the DNA spiral. This nucleotide sequence, known as the recognition site, is a short, symmetric sequence of bases repeated on both strands of the double helix. After the segment is removed, the ragged, or "sticky," ends left by the restriction enzyme allow a DNA restriction fragment from one organism to join to the complementary ends of a fragment cut by the same enzyme. This method allows foreign DNA to be cloned in a bacterium. The result is hundreds of identical copies of the original recombinant molecule per cell.
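To make the idea of a recognition site concrete, here is a minimal illustrative sketch in Python (not part of any laboratory protocol; it uses an invented example sequence together with the real EcoRI site GAATTC). It simply scans a DNA string for the recognition sequence and shows where the staggered cut would fall, leaving the complementary "sticky" ends described above.

# Minimal sketch: locate EcoRI recognition sites (GAATTC) in a DNA string and
# show the staggered cut that leaves complementary "sticky" ends.
# The sample sequence below is invented purely for illustration.
SITE = "GAATTC"   # EcoRI recognition sequence
CUT_OFFSET = 1    # EcoRI cuts between the G and the AATTC

def find_sites(dna):
    """Return the start position of every recognition site in the sequence."""
    return [i for i in range(len(dna) - len(SITE) + 1) if dna[i:i + len(SITE)] == SITE]

def cut(dna):
    """Cut the sequence at each site, leaving the AATT overhang on the next fragment."""
    fragments, start = [], 0
    for pos in find_sites(dna):
        fragments.append(dna[start:pos + CUT_OFFSET])  # fragment ending in ...G
        start = pos + CUT_OFFSET                        # next fragment begins AATTC...
    fragments.append(dna[start:])
    return fragments

if __name__ == "__main__":
    sample = "ATCGGAATTCGGTACCGAATTCTTA"   # invented sequence with two EcoRI sites
    print(find_sites(sample))              # [4, 16]
    print(cut(sample))                     # ['ATCGG', 'AATTCGGTACCG', 'AATTCTTA']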
Polymerase Chain Reaction (PCR)
The PCR is a method of gene amplification. It is a better method than bacterial cloning because of its greater sensitivity, selectivity, and speed. Moreover, it does not require bacterial vectors and rapidly amplifies the chosen segment of DNA in the test tube without the need for living cells.
In this process, the DNA sequence to be amplified is selected by primers, which are short pieces of nucleic acid that correspond to sequences flanking the DNA to be amplified. An excess of primers is added to the DNA together with a heat-stable DNA polymerase; the strands of the genomic DNA are separated by heating, and the primers are allowed to anneal to them as the mixture cools. The heat-stable polymerase then lengthens the primers on either strand, generating two new, identical double-stranded DNA molecules and doubling the number of DNA fragments with each cycle.
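To show why this repeated doubling makes PCR so powerful, the following rough sketch in Python (with purely illustrative numbers, assuming a single starting molecule and an idealised, perfectly efficient reaction) computes how many copies of the target fragment would exist after a given number of cycles.

# Idealised PCR arithmetic: one starting molecule, perfect doubling per cycle.
# Real reactions are less efficient, so these figures are upper bounds only.
def copies_after(cycles, starting_copies=1, efficiency=1.0):
    """Copies of the target fragment after the given number of cycles.

    efficiency is the assumed fraction of molecules duplicated each cycle
    (1.0 means perfect doubling); it is an illustrative parameter only.
    """
    return starting_copies * (1 + efficiency) ** cycles

if __name__ == "__main__":
    for n in (1, 10, 20, 30):
        print(f"after {n:2d} cycles: {copies_after(n):,.0f} copies")
    # After 30 cycles a single molecule becomes roughly a billion copies,
    # which is why PCR can amplify DNA without any living cells.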
Positional Cloning
This method is used when scientists need to identify human disease genes. The overall strategy is to map the location of a human disease gene by linkage analysis and then to use the mapped location on the chromosome to isolate the gene. There are two essential requirements for mapping disease genes: first, there must be a sufficient number of families to establish linkage, and second, adequately informative DNA markers. Once suitable families are identified, the investigators determine whether affected people in the family have particular DNA sequences at specific locations that healthy family members do not. A particular DNA marker is said to be "linked" to the disease if, in general, family members with certain nucleotides at the marker always have the disease and family members with other nucleotides at the marker do not. The marker and the disease gene are then so close to each other on the chromosome that the likelihood of crossing-over between them is very small.
Once a suspected linkage result is confirmed, researchers can then test other markers known to map close to the one found, in an attempt to move closer and closer to the disease gene of interest. The gene can then be cloned if the DNA sequence has the characteristics of a gene and it can be shown that particular mutations in the gene confer disease.
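The underlying logic of linkage is essentially a counting argument: within a family, how often does a particular marker allele travel together with the disease? The toy sketch below, written in Python around an entirely invented mini-pedigree, tallies that co-segregation. Real linkage analysis uses likelihood-based LOD scores computed over many families, so this is only a conceptual illustration of the idea described above.

# Toy co-segregation check for one invented family: does carrying marker
# allele "A" track with disease status? Real studies compute LOD scores over
# many families; this only illustrates the underlying counting idea.
family = [
    # (marker allele at the candidate locus, affected by the disease?)
    ("A", True), ("A", True), ("a", False),
    ("A", True), ("a", False), ("a", False),
]

def cosegregation_fraction(members, risk_allele="A"):
    """Fraction of members whose disease status matches carrying the allele."""
    matches = sum((allele == risk_allele) == affected for allele, affected in members)
    return matches / len(members)

if __name__ == "__main__":
    print(f"marker/status agreement: {cosegregation_fraction(family):.0%}")
    # 100% agreement here suggests the marker is linked to the disease locus.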
Cloning by Nuclear Transfer
This method has been used in mammals to provide a valuable tool for embryonic study and as a way to multiply "elite" embryos. In this method, two different cells are involved: an unfertilised egg and a donor cell. The donor cells are obtained by culturing cells from a mammalian embryo over a period of several months, so that the culture comes to consist of many genetically identical cells. To illustrate this method, sheep can be used as an example. A ewe of an all-white breed provided the donor embryo, while Scottish Blackface ewes provided the recipient eggs. By micromanipulation, the chromosomes were removed from the eggs before the nucleus of a donor cell was introduced by cell fusion, and an electric current was used to trigger the egg to begin development. These new embryos were then transferred to recipient ewes to discover whether they could develop into lambs. When the lambs were born, they were genetically identical white female lambs.
Complementation Cloning by Retroviral Technique
An efficient mammalian cDNA (complementary DNA) cloning process has been developed that uses retroviral cDNA expression libraries. Complementation cloning in bacterial and yeast genetic systems has produced a great deal of information for researchers. This system, in addition to cloning genes, is also helpful for analysing the structure-function relationship of known proteins. One advantage this system has over others is that the retrovirus expression system's wide range of target cells allows genes for cell-surface molecules to be cloned.
Part II: New Research In Genetic Cloning
Late February, 1997: First Cloned Mammal
In late February, Dr. Ian Wilmut and his research team from the Roslin Institute in Edinburgh made a major scientific breakthrough: they cloned a sheep from non-embryonic cells. To create this cloned sheep, the research team focused on stopping the cell cycle. They took cells from the udder of a Finn Dorset ewe and, to stop the cells from dividing, placed them in a culture with a very low nutrient concentration.
While this was happening, Dr. Wilmut and his team used the nuclear transfer technique (mentioned in Part I) to continue. An unfertilized egg cell was taken from a Scottish Blackface ewe. The first step was to remove the egg's nucleus while leaving the cytoplasm intact. They then placed the enucleated egg alongside the cell from the Finn Dorset ewe. One electric pulse was used to fuse them together, and a second one to imitate the burst of energy at fertilization, triggering cell division. About five to seven days later, the embryo was implanted into the uterus of another Blackface ewe.
September 1996: Purification and Molecular Cloning of Plx1
Cdc2, a protein which controls mitosis in a cell, is negatively regulated1 by phosphorylation on its threonine-14 and tyrosine-15 residues. Cdc25, a protein which dephosphorylates these residues, undergoes activation and phosphorylation by multiple kinases at mitosis. Plx1, a kinase that associates with and phosphorylates the amino (NH2) terminus of Cdc25, was extensively purified from Xenopus egg extracts. Dr. Kumagai and colleagues at the California Institute of Technology found, on cloning its cDNA, that Plx1 is related to the Polo family2 of protein kinases. Cdc25 phosphorylated by Plx1 reacted strongly with MPM-2, a monoclonal antibody against mitotic phosphoproteins. The team concluded that Plx1 may be part of a mechanism for coordinating the regulation of Cdc2 with the progression of mitotic processes such as chromosome separation.
November, 1995: Positional Cloning of Clock Gene, timeless
In November 1995, Michael W. Young and his colleagues from the Laboratory of Genetics at Rockefeller University used positional cloning to clone timeless (tim) in the fly Drosophila. Drosophila's genes timeless (tim) and period (per) interact, and both are required for the production of circadian rhythms. Tim is a clock gene which controls circadian behavioural rhythms3, such as the sleep-wake cycle in humans and locomotor activity cycles in insects. The molecular cloning of tim has allowed the detection of circadian cycles in tim RNA expression. The research revealed a strong relationship between per and tim and suggests a rudimentary intracellular biochemical mechanism4 regulating circadian rhythms in the fly Drosophila.
January, 1995: Genetically Altered Bacteria Which Makes Ethanol From Xylose
Researchers have achieved a key step in efforts to develop genetically engineered bacteria that can produce ethanol efficiently from plant biomass for use in alternative transportation fuels. Scientists at the Department of Energy's National Renewable Energy Laboratory in the USA have genetically modified the bacterium Zymomonas mobilis so that it also makes ethanol from the five-carbon sugar xylose. In its natural form, this bacterium produces ethanol from the six-carbon sugars glucose and fructose and from sucrose.
Right now, ethanol is produced by yeast fermentation of glucose. The modified Z. mobilis bacterium makes ethyl alcohol in a yield about five to ten percent higher than yeast does. The team that made this discovery, molecular biologist Stephen Picataggio and his colleagues, spliced two operons from E. coli into the genome of the Z. mobilis bacterium. One of these operons encodes xylose-assimilation enzymes, while the other encodes pentose-metabolism enzymes. The modified bacterium grows on xylose and ferments it efficiently to ethyl alcohol. The team's work may also improve the ethanol-producing abilities of E. coli, which is able to produce ethanol from both pentose and hexose sugars.
July 1993: Morphine Receptor Cloned
In July, 1993, a team led by Dr. Lei Yu, associate professor of medical and molecular genetics at Indiana University School of Medicine, decoded the amino acid sequence for the morphine receptor that is located on the surface of nerve cells.
The group isolated the sequence from a rat brain cDNA library, and since homology between the rat and human sequences was expected to be high, Dr. Yu used a straightforward biological technique5 to isolate the human sequence from the genome.
The most promising application of this work will be the ability to design new analgesics that are more potent than morphine but lack its side effects. Another possibility arising from this research is a powerful analgesic to which the body does not develop tolerance as quickly as it does to morphine; this could bring relief to people who suffer from chronic pain. However, the most immediate advantage of this discovery is the ability to screen new pharmacological compounds for their affinity for the µ receptor far more quickly and accurately than conventional methods allow. The research could also have serious significance for the understanding of narcotic addiction and how to treat people efficiently who have become addicted to these drugs.
August 1992: The Cloning of a Family of Genes that Encode Melanocortin Receptors
Proopiomelanocortin (POMC) is expressed primarily in the pituitary and in limited regions of the brain and periphery. It is processed into a large and complex family of peptides with different biological activities. The three major activities are regulation of the production of the adrenal hormones glucocorticoid and aldosterone, control of melanocyte growth and pigment production, and analgesia.
Roger Cone of Oregon Health Sciences University cloned the murine and human melanocyte-stimulating hormone receptors (MSH-Rs) and a human ACTH receptor (ACTH-R). The cloning of these receptors allowed the researchers to define the melanocortin receptors as a subfamily of receptors coupled to guanine nucleotide-binding proteins that may also include the cannabinoid receptor. They also found that the melanocortin receptor was the smallest guanine nucleotide-binding protein-coupled receptor identified to date (as of 1992).
April 1992: Cloning of the Interleukin-1 Converting Enzyme
Interleukin-1 mediates a wide range of immune and inflammatory responses. The active cytokine is generated by proteolytic cleavage of an inactive precursor. A complementary DNA encoding a protease that carries out this cleavage has been cloned. Recombinant expression in cells enabled the cells to process precursor IL-1 to the mature form. Sequence analysis indicated that the enzyme itself may undergo proteolytic processing. The gene encoding the protease was mapped to a site frequently involved in rearrangements in human cancers. This discovery, by Douglas Cerretti and his team, provides new insight into this field of biology and offers a new target for the development of therapeutic agents.
Part III: Brief Discussion about the Advantages and Disadvantages of Gene Cloning
Advantages
The most appealing aspect of genetic cloning is that it could improve our lifestyle. Fatal diseases such as AIDS might one day be treated through genetics. Through X-ray crystallography, new drugs could be designed to counteract mutated proteins. The quality of our food could increase, because farmers would be able to sell only produce of the highest quality.
Disadvantages
There are a few disadvantages that could result from genetic cloning. The problems will mainly arise in agriculture. Since future livestock might be cloned, the animals would have identical immune systems. If an epidemic spread through the animals, most of them, if not all, could be killed by a disease or virus to which they are not immune.
Huge improvements in our lifestyles have been made possible by technological advancements. Biotechnology, such as genetic cloning, could have a dramatic impact on human beings in the near future. Millions of people will benefit if this knowledge is put to good use.
Part IV: Glossary of Terms
primers: An already existing DNA chain bound to the template DNA to which nucleotides must be added during DNA synthesis.
restriction enzyme: A degradative enzyme that recognizes and cuts up DNA that is foreign to a cell.
cDNA (complementary DNA): DNA that is identical to a native DNA containing a gene of interest except that the cDNA lacks noncoding regions because it is synthesized in the laboratory using mRNA templates.
operon: A unit of genetic function common in bacteria and phages and consisting of regulated clusters of genes with related functions.
genome: The complete complement of an organism's genes; an organism's genetic material.
homology: Similarity in characteristics resulting from a shared ancestry.
analgesia: The insensibility to pain without loss of consciousness.
proteolytic processing: The hydrolysis of proteins or peptides with the formation of simpler and soluble products (as in digestion).
Notes
1 Akiko Kumagai. "Purification and Molecular Cloning of Plx1, a Cdc25-Regulatory Kinase from Xenopus Egg Extracts." Science 273 (1996): 1377.
2 Akiko Kumagai. "Purification and Molecular Cloning of Plx1, a Cdc25-Regulatory Kinase from Xenopus Egg Extracts." Science 273 (1996): 1377.
3 Michael Young. "Positional Cloning and Sequence Analysis of the Drosophila Clock Gene, timeless." Science 270 (1995): 805.
4 Michael Young. "Positional Cloning and Sequence Analysis of the Drosophila Clock Gene, timeless." Science 270 (1995): 805.
5 Lei Yu. "µ Receptor Cloned From Rat Brain cDNA Library." Molecular Pharmacology 44 (1993): 8-12.
f:\12000 essays\sciences (985)\Biology\NURSING.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Lab Report
NURSING
Lifting, Transferring and Positioning
Student No.
{xxxxxxxxx}
Group No.
{x}
Marker's Name: {xxxxx xxxxxxxx}
{PAGE BREAK}
ABSTRACT
Lifting, transferring and positioning of patients is frequently undertaken
by nurses on each working day. This is necessary for patient comfort,
medical reasons and completion of self care needs. Lifting can be done in
numerous ways. As well as the nurse physically lifting or moving patients,
a number of devices are also available to assist in the transfer of
patients. These range from straps that are attached to or placed under the
patients, to mechanical hoists and lifters. Any assistance the nurse has is
beneficial for both the patient and the health care worker, as patients'
weights generally exceed the nurse's physical capabilities. This, combined
with incorrect lifting techniques, can result in muscle strain or, more
seriously, spinal injury for the nurse, and discomfort, muscle strain or
further injury for the patient.
{PAGE BREAK}
INTRODUCTION
When lifting, transferring or positioning patients, the most important
consideration is safety. Any of these procedures needs to be undertaken with
safety in mind. This safety is inclusive of both the patient and the health
care worker. Communication is an important part of the lifting process as
the nurse should elicit information from the client to find out how and when
they prefer to be moved. This allows the patient to be involved in the
decision making process and be fully aware of what is occurring. By
communicating with the client, the nurse is also aware of whether or not the
patient is experiencing any discomfort during or after the lift.
The actions of lifting, transferring or positioning need to be completed for
numerous reasons, including relief of pressure points. Due to the patient
being in one position continuously, they are prone to the development of
pressure areas. In terms of patient needs, being in the same position
constantly is physically uncomfortable. However, mentally, a change in the
immediate surroundings is also beneficial for the patient. It is also
necessary for the patient to be moved for completion of their self care
needs. This includes their hygiene needs, such as bathing or
showering, elimination, and hair, oral and nail care.
{PAGE BREAK}
METHOD
When lifting, transferring or positioning patients manually, safety is the
most important factor. This safety is for the nurse themselves as well as
for the patient. One aspect of safety is for the nurse to utilise "good
body mechanics" (Kozier et al 1995, p.879). This refers to the nurse having
balance, which can be achieved with the feet being spread approximately
shoulder width apart, which gives stability and a "wide base of support"
(Kozier et al 1995, p.888). According to Kozier et al, (1995 p.879) balance
is also achieved by correct body alignment and good posture. The use of
correct body alignment reduces the strain on muscles and joints, and makes
lifting the clients much easier.
When lifting clients, the first thing the nurse should do is explain to the
patient what they are doing and ask the patient if there is any particular
way they would prefer to be moved. This allows the patient to have some
opinion about what is being done to them.
The next thing that should be done when moving a patient is a routine
assessment. The nurse may assess the situation by first observing the
patient and reading the nursing care plan. The nurse needs to be aware of
the patient's capabilities to see how much they can do or whether they can
assist in any way. Another important part of assessment is observing the
surrounding environment, to be sure there are no obstructions or other
hazards which may be injurious to the nurse or patient before, during or
after the move.
The next phase is that of planning the move. The nurse decides how the
patient will be moved from their current position to where they are going.
This may involve the nurse getting assistance for the lift, either from
other health care workers or by mechanical devices, such as a lifter or
hoist. When moving or lifting the client, wherever possible the nurse
should have assistance. This assistance is necessary for both nurse and
client safety. This is supported by Kozier (1995 p.910), who says, wherever
possible,
"the preferred method is to have two or more nurses move or turn the client".
When moving clients physically, there are different types of moves that can
be used. When moving a client up in bed, the client should be encouraged to
help if possible. The nurse can ask the patient to bend their knees, so
that when the nurse is ready, the patient can assist by pushing backwards
when the nurse says. Two nurses stand on opposite sides of the bed facing
each other. With knees bent and legs shoulder width apart, the nurses lock
forearms underneath the patient's thighs and shoulders. The nurses, on the
count of three, at the same time as the patient pushes backwards,
transfer their weight onto the leg facing the direction in which the
patient is being moved.
When moving a client from a lateral lying position to sitting at the side of
the bed, the first thing that the nurse should do after assessment, is to
get the patient in a side lying position. This is done by the nurse placing
one hand on the client's hips and one hand on the client's shoulder. The
nurse then transfers their weight onto the back foot while at the same time
rolling the client towards them. The next step is for the nurse to place one
arm underneath the patient's shoulders and one arm underneath the knees. The
nurse then turns on the balls of the feet while at the same time lowering the
client's legs towards the floor.
The next move is transferring a client from the bed to a chair. Once the
client is sitting on the edge of the bed, the nurse can easily move the
patient to a chair. This procedure therefore follows on from the procedure
of sitting a client up in bed. This can be done by the use of a "transfer
belt" (Kozier 1995 p.924). Before commencing the lift, the nurse must have
the wheelchair ready and parallel to the bed. The nurse must make sure the
client's feet are placed flat on the floor with one foot slightly in front
of the other. The nurse then places the belt around the client's waist.
The nurse stands facing the client with their arms around the client's
waist, holding onto the belt. The nurse asks the patient to assist by
transferring the weight onto the front foot on the count of three, while at
the same time, the nurse transfers their weight onto the back foot, lifting
the client up to a standing position. The nurse supports the client until
they are balanced when standing. The nurse and client, when ready, pivot in
the direction of the chair. The client then holds the arms of the chair as
a means of support and to assist when lowering into the chair. The nurse
then lowers the client into the chair, bending at the knees. The transfer
belt is then removed when the nurse has assessed that the client is
comfortable and secure in the chair. The nurse should also ensure the
client has suffered no ill-effects as a result of the move.
When the transfer belt is not available, Kozier (1995 p.925) recommends
that the nurse put both hands at the sides of the patient's chest and
continue the procedure in the same way.
When transferring the patient from the chair to the bed, the same procedure
is implemented but in reverse. However, before the transfer is started, the
nurse should ensure that the bed is clean and dry. The client is then moved from
the chair to the bed and then assisted to a lying down position.
Manually lifting patients is effective; however, when able, the nurse should
lift or transfer with a mechanical lifter. These are especially effective
in reducing the risk of injury. This is supported by Seymour (1995 p.48)
who says that,
"more nurses are beginning to realise the equipment's potential for
protecting both client and carer from injury."
When using these devices, the nurse should tell the patient what is being
done and how it is being done. Some mechanical lifters have two slings,
one for underneath the shoulders and one for underneath the thighs or
buttocks. Other lifters have an all-in-one sling which extends from the
client's upper back to the lower thighs. The lifters substantially reduce the
strain on the nurse and the patient and are able to be used for all
transfers. The nurse places the sling underneath the patient and attaches
the slings to the lifter with hooks, and the nurse then controls the lifter
for the desired action.
When using a mechanical lifter, some problems which may arise include the
lifter being broken or unavailable. The nurse should therefore be aware of
how to correctly manually lift the client in the event of this occurring.
Another problem with mechanical lifters, according to Scott (1995 p.106),
was that mechanical devices were
"often left because staff did not feel confident enough to use them."
This highlights the fact that all staff need to be taught the correct way
that the lifters are used.
The problem with lifting patients physically, is that nurses are often
required to lift loads greater than they are physically able. This is due
to,
"the likely mismatch between the size of a patient to be lifted and the
physical capabilities of the nurses on duty." (Love 1995, p.38).
This can lead to potential injury for nurse and client.
Another problem with lifting patients manually, is that the correct lifting
procedure may not be carried out. This can lead to patient discomfort, as
well as long term back problems for the carer involved. One problem which
may also arise from incorrect lifting techniques is the development of
pressure areas, due to the patient being dragged and not lifted across the
sheets. This friction can lead to the patient developing reddened skin
which may lead to skin breakdown.
{PAGE BREAK}
DISCUSSION
When the health care worker implements the correct lifting techniques, neither
the nurse's nor the patient's safety is compromised in any way. Nurses should
be constantly aware of any new methods of lifting or transferring which
arise, so they are able to maximise the level of safety for themselves as
well as for the patients. By the nurse using the correct lifting
techniques, and not dragging the patient, the risk of the patient sustaining
further injury, such as pressure areas, is reduced. By communicating with
the client, the nurse is also made aware of any problems the client has with
any aspect of the lift.
Regular maintenance of equipment is essential so that the equipment does not
break down frequently. Hooks, straps and slings need to be constantly
checked to ensure optimum working order, as well as ensuring client safety.
Staff need to be educated on the use of the lifters and regular testing
would ensure that the staff are confident and competent in their use. This
may lead to fewer mismatches between client weight and nurse capability,
since if staff are more confident in using the lifters, less manual lifting
will be necessary.
Education about manual handling is also vital to ensure correct lifting
techniques are used. Constant re-evaluation of the staff's abilities and
methods would ensure safety for both parties involved. This would make
staff aware that placing as little strain as possible on the muscles and
joints is beneficial to them.
The re-evaluation is also important in that it allows the health
care worker to be constantly up to date on any new procedures which may be
developed.
{PAGE BREAK}
REFERENCES
Kozier, B., Erb, G., Blais, K., Wilkinson, J.M. 1995, {italics on}
Fundamentals of Nursing {italics off}, 5th Edition, Addison Wesley
Publishing Company Inc., United States of America.
Love, C. 1995, 'Managing manual handling in clinical situations', {italics
on} Nursing Times {italics off}, vol. 91, no. 26, pp. 38-39.
Scott, A. 1995, 'Improving patient moving and handling skills', {italics on}
Professional Nurse {italics off}, vol. 11, no. 2, pp. 105-110.
Seymour, J. 1995, 'Handling Aids - Lifting and moving patients', {italics
on} Nursing Times {italics off}, vol. 91, no. 27, pp. 48-50.
f:\12000 essays\sciences (985)\Biology\On the Theory of Evolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
On the Theory of Natural Selection
Whether it is Lamarck's theory that evolution is driven by an innate tendency towards greater complexity, Darwin's theory of natural selection, or the belief that the evolution of plant and animal life is controlled by a higher being, the process of evolution cannot be denied. Palaeontological investigations have shown that species change over time, but the unanswered questions are "How?" and "Why?" The answer lies in Charles Darwin's theory of evolution.
Charles Darwin was born in Shrewsbury, England on February 12, 1809. Darwin was easily bored with his studies as a child; he turned away from following in his father's footsteps and becoming a physician after seeing several operations performed without anesthesia. He became interested in geology and natural history and was not intrigued by his studies for holy orders at Cambridge University. He was sent on a voyage to explore the world, and while he was on this journey he became enthralled with biology and geology. He wrote observations about coral reefs (1842) and volcanic islands (1844), but his greatest biological observations were those pertaining to his theory of evolution.
Darwin's findings begin in the Galapagos Islands, where he noticed a wide array of finches whose beaks were different sizes. He believed that it was not the physical conditions on the islands that shaped the birds' beaks, but the birds' feeding habits. For instance, the birds with large, powerful beaks ate large seeds, while the birds with small or fine beaks ate small seeds or insects. He theorized that each bird was suited to its surroundings and adapted to its environment; thus the birds best suited to the environment survived and reproduced, while those that did not adapt died out.
In his book, On the Origin of Species, Darwin presented the idea that species evolve from more primitive species through the process of natural selection, which works spontaneously in nature. Darwinism states that not all individuals of a species are exactly the same; individuals have variations, and some of these variations make their bearers better adapted to their particular ecological conditions. Not only does this theory make sense, it is also very simple and difficult to dispute. Darwinism can be compared to today's world by using an analogy: two people apply for a job; one has the educational background and experience required for the job, while the other does not. The person with the experience and education is better adapted to his surroundings, and therefore the person who has adapted to his environment is given the job while the other remains unemployed.
Darwinism is often opposed by orthodox churchgoers who believe that their God is directly responsible for every happening in nature. This is a respectable opinion, but Darwinism and religious beliefs do not have to be in direct conflict. For example, in the Christian Bible, in the book of Genesis (1:11-12, 1:24), it is stated that God said, "Let the earth produce all kinds of plants... So the earth produced all kinds of plants, and God was pleased with what he saw." Most Christians have difficulty believing that evolution is justifiable, yet all God commanded was that the earth was to produce the plants and animals; the Bible does not say that the earth looked as it does today, and we know, in fact, that it did not. The Bible does not explain how evolution occurred or what processes evolution entails, and therefore does not contradict it. The Bible describes how things began, and is not even very descriptive at that.
Charles Darwin's theory of evolution is an easily justifiable way of explaining the process of evolution. His ideas have made an enormous impact on the world and have revolutionized biology. Though some disagree with Darwin's ideas, they are still worthy of acceptance and should be revered as among the most intelligent and important biological findings in history.
On the Theory of Natural Selection
Dale Anderson
AP Biology
February 3, 1997
f:\12000 essays\sciences (985)\Biology\Opponents Factual Brief.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Opponent's Factual Brief
OPPONENT'S BRIEF
Factual Proposition: Consuming marijuana is detrimental to one's health.
Definition of key terms:
1. Consumption= Smoking or eating marijuana.
2. Marijuana= Psychoactive mind altering substance, also known as cannabis.
3. Detrimental= Serious harm.
Primary Inference:
Smoking or eating marijuana is likely to create serious health problems for most individual users or society.
Overview:
Since the 1920s, supporters of marijuana prohibition have exaggerated the drug's dangers. Many of the "reefer madness" tales that were used to build support for early anti-marijuana laws continue to appear in reports today. The most important studies of recent times took place in the 1970s in Greece, Costa Rica and Jamaica. These studies reported on the effects of marijuana on its users in their natural environment, covering marijuana's effect on the brain, immune and reproductive systems. (1) These studies did not answer all the questions about the effects of marijuana on the user, but they supported the idea that, for the majority of its users, marijuana is not detrimental to the health of the brain, immune or reproductive system. Among all the published reports there are, perhaps, isolated studies which may indicate greater toxicity of the drug, but in all of these cases the research was flawed or inaccurate, since the findings cannot be duplicated by other scientists.
Contention I: Marijuana does not damage brain cells.
A. Claim: Use of marijuana does not cause memory loss.
1. Grounds: In a recent study, rhesus monkeys were exposed to the equivalent of 4-5 joints per day for an entire year without any alteration of hippocampal architecture.(2) Slikker, W. et al, "Behavioral, Neurochemical, and Neurohistological Effects of Chronic Marijuana Smoke Exposure in the Nonhuman Primate," pp. 219-74 in L. Murphy and A. Bartke (eds), Marijuana Neurobiology and Neurophysiology, Boca Raton: CRC Press (1992)
2. Warrant: Alteration in hippocampal structure results in memory loss.
3. Backing: A study reports: "Any alteration of the hippocampus, a cortical brain region, results in negative consequences for learning and memory in humans."(3) Heath, B.C. et al, "Cannabis Sativa: Effects on Brain Function and Ultra structure in Rhesus Monkeys," Biological Psychiatry 15:657 (1980).
B. Claim: Use of marijuana does not cause cognitive impairment.
1. Grounds: A study reports that "marijuana intoxication does not impair brain-related cognitive functions."(4)
Weckowicz, T.E. et al, "Effect of Marijuana on Divergent and convergent Production Cognitive Tests," Journal of Abnormal Psychology 84: 386-98(1975)
2. Warrant: Studies have shown that marijuana does not affect brain-related cognitive functions.
3. Backing: Researchers who have shown scientifically that marijuana does not impair cognitive brain functioning include(5) Hooker, W.D. and Jones, R.T., "Increased Susceptibility to Memory Intrusions and the Stroop Interference Effect During Acute Marijuana Intoxication," Psychopharmacology 91: 20-24 (1987)
Claim C: Use of marijuana does not cause difficulties in learning.
1. Grounds: No evidence has been found that marijuana users suffer from brain impairment.
2. Warrant: Since there is no evidence correlating marijuana use to brain impairment there can be no learning difficulties associated specifically with the use of marijuana.
3. Backing: A study in 1988 shows " In comparing chronic marijuana users with non-users, there are no significant differences in learning, memory recall, and other attention functions."(6) Page, J.B., "Psychosociocultural Perspectives on Chronic Cannabis Use: The Costa Rican Follow Up,"Journal of Psychoactive Drugs 20: pp 57 (1988)
Contention II: Marijuana does not impair immune system functioning.
Claim A: Using marijuana stimulates the immune system.
1. Grounds: In the last two years a "peripheral cannabinoid receptor associated with lymphatic tissue" has been discovered, indicating that THC (the active drug in marijuana) can act as an effective immune system stimulant.(7) Lynn, A.B. and Herkenham, M., "Localization of Cannabinoid Receptors and Nonsaturable High-Density Cannabinoid Binding Sites in Peripheral Tissues of the Rat: Implications for Receptor-Mediated Immune Modulation by Cannabinoids," Journal of Pharmacology and Experimental Therapeutics 268: 1612-23 (1994)
2. Warrant: The active drug in marijuana is THC; thus marijuana is an immune stimulant.
3. Backing: In 1988, a study showed "an increase in responsiveness when white blood cells from marijuana smokers were exposed to immunological activators."(8) Wallace, J.M. et al, "Peripheral Blood Lymphocyte Subpopulations and Mitogen Responsiveness in Tobacco and Marijuana Smokers," Journal of Psychoactive Drugs 20: 9-14 (1988)
Claim B: Use of marijuana does not increase bacterial, viral or parasitic infection.
1. Grounds: There are no scientific data proving that marijuana increases bacterial, parasitic or viral infections among humans.
2. Warrant: Since there is no evidence of an increase in viral, parasitic or bacterial infection when marijuana is used, it cannot be associated with an increase in these infections.
3. Backing: A study performed in the 1970s declares "there is no difference in disease susceptibility between marijuana users and matched controls."(9) Carter, W.E. (ed), Cannabis in Costa Rica: A Study of Chronic Marijuana Use, Philadelphia: Institute for Study of Human Issues (1980)
Claim C: The use of marijuana does not increase the risk of HIV infection.
1. Grounds: There are only myths, and no scientific evidence, showing that use of marijuana increases the rate of infection with HIV.
2. Warrant: Since there is no evidence, marijuana is not responsible for any increase in the risk of infection with the HIV virus.
3. Backing: A study from 1990 clearly states "Marijuana use does not increase the risk of HIV infection."(10) Coates, R.A. et al, "Cofactors of Progression to Acquired Immunodeficiency Syndrome in a Cohort of Male Sexual Contacts of Men with Human Immunodeficiency Virus Disease," American Journal of Epidemiology 132: 717 (1990)
Contention III: Marijuana does not harm one's sexual maturation and reproduction.
Claim A: Marijuana does not in any way impair male reproductive functioning.
1. Grounds: The Jamaican field studies proved "There are no differences in hormone levels or reproductive functioning between marijuana users and non-users"(11) Knights, R., "Reproductive Test Results," p111 in V. Rubin and L. Comitas (eds), Ganja in Jamaica, The Hague:Mouton (1975)
2. Warrant: Since science has proven there is no difference in male functioning, marijuana does not affect the male reproductive system in any way.
3. Backing: In surveys of marijuana users it has been reported that "no problems with fertility have emerged as important as a result of marijuana use."(12) Hembree, W.C. et al, "Changes in Human Spermatozoa," p. 429 in G.G. Nahas and W.D.M. Paton (eds), Oxford: Pergamon Press (1979)
Claim B: Marijuana does not impair female reproduction in humans.
1. Grounds: There is no support in the current scientific literature for the claim that marijuana impairs female reproductive functioning.
2. Warrant: Without scientific fact, the claim that marijuana affects female reproduction is nothing but a myth.
3. Backing: There have been no epidemiological studies showing that female users of marijuana are affected reproductively.
Claim C: Use of marijuana does not retard adolescents' sexual development.
1. Grounds: Apart from one individual case in which an adolescent did not attain puberty,(13) Copeland, K.C. et al, "Marijuana Smoking and Pubertal Arrest," Journal of Pediatrics 96: 1079-80 (1980), there has been no proof that marijuana affects the sexual development of adolescents who smoke it.
2. Warrant: Without scientific data, the claim that marijuana retards an adolescent's sexual development is nothing but a myth.
3. Backing: Scientific research shows " There have been no epidemiological studies indicating sexual retardation has occurred in adolescents" (14) Block, R.I. et al , "Effects of Marijuana use on Testosterone, Luteinizing Hormone, and Follicle Stimulating Hormone in Humans" Drug and Alcohol Dependence 28:121 (1991)
Conclusion:
Supporters of marijuana prohibition make claims about marijuana without scientifically proving them. At present, marijuana has been scientifically shown not to be detrimental to the body's brain, immune and reproductive systems. If we as a society can analyze the scientific evidence instead of being persuaded by unwarranted claims, perhaps we can convert our ignorance into awareness.
Bibliography
(1) Carter, W.E. (ed), Cannabis in Costa Rica: A Study of Chronic Marijuana Use, Philadelphia: Institute for Study of Human Issues (1980); Rubin, V. and Comitas, L., Ganja in Jamaica, The Hague: Mouton (1975); Stefanis, C. et al, Hashish: Studies in Long Term Use, New York: Raven Press (1977).
(2) Slikker, W. et al," Behavioral, Neurochemical, and Neurohistological Effects of chronic Marijuana Smoke Exposure in the Nonhuman Primate,"pp219-74 in l. Murphy and A. Bartke (eds), Marijuana Neurobiology and Neurophysiology, Boca Raton: CRC press(1992)
(3) Heath, B.C. et al, "Cannabis Sativa: Effects on Brain Function and Ultra structure in Rhesus Monkeys, "Biological Psychiatry 15:657 (1980).
(4)Weckowicz, T.E. et al, "Effect of Marijuana on Divergent and convergent Production Cognitive Tests," Journal of Abnormal Psychology 84: 386-98(1975)
(5)Hooker, W.D. and Jones, R.T., "Increased Susceptibility to Memory Intrusions and the Stroop Interference Effect During Acute Marijuana Intoxication," Psychopharmacology 91: 20-24 (1987)
(6) Page, J.B., "Psychosociocultural Perspectives on Chronic Cannabis Use: The Costa Rican Follow Up,"Journal of Psychoactive Drugs 20: pp 57 (1988)
(7) Lynn, A.B. and Herkenham, M., "Localization of Cannabinoid Receptors and Nonsaturable High-Density Cannabinoid Binding Sites in Peripheral Tissues of the Rat: Implications for Receptor-Mediated Immune Modulation by Cannabinoids," Journal of Pharmacology and Experimental Therapeutics 268: 1612-23 (1994)
(8)Wallace, J.M. et al, "Peripheral Blood Lymphocyte Sub populations and Mitogen Responsiveness in Tobacco and Marijuana Smokers," Journal of Psychoactive Drugs 20: 9-14 (1988)
(9) Carter, W.E. (ed), Cannabis in Costa Rica: A study of Chronic Marijuana Use, Philadelphia: Institute for Study of Human Issues (1980)
(10) Coates, R.A. et al, "Cofactors of Progression to Acquired Immunodeficiency Syndrome in a Cohort of Male Sexual Contacts of Men with Human Immunodeficiency Virus Disease," American Journal of Epidemiology 132: 717 (1990)
(11) Knights, R., "Reproductive Test Results," p111 in V. Rubin and L. Comitas (eds), Ganja in Jamaica, The Hague:Mouton (1975)
(12) Hembree, W.C. et al, "Changes in Human Spermatozoa," p. 429 in G.G. Nahas and W.D.M. Paton (eds), Oxford: Pergamon Press (1979)
(13) Copeland, K.C. et al ," Marijuana Smoking and Pubertal Arrest," Journal of Pediatrics 96:1079-80 (1980).
(14) Block, R.I. et al , "Effects of Marijuana use on Testosterone, Luteinizing Hormone, and Follicle Stimulating Hormone in Humans" Drug and Alcohol Dependence 28:121 (1991)
f:\12000 essays\sciences (985)\Biology\Orangutans.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tim Sanderson Orangutans (Pongo pygmaeus)
Anth 111
In Malay, orang means "person" and utan means "forest"; thus orangutan literally means "person of the forest". Orangutans are found in the tropical forests of Sumatra and Borneo. They are the most arboreal of the great apes and move amongst the safety of the trees from one feeding site to the next. They are so well adapted to arboreal life that they cannot place their feet flat on the ground; instead, they walk on the outsides of their curved feet.
There is a scattered population of orangutans in Indonesian Borneo, Malaysian Borneo and northern Sumatra. The different habitats have isolated the orangutans reproductively and geographically from one another, creating a "degree of difference", or two subspecies. Several characteristics differ between the two subspecies of orangutan, and it has recently been suggested that they may in fact be separate species. The Bornean male has relatively large cheek pads, a tremendous laryngeal sac and a square-shaped face. The Sumatran male has small pads and a small laryngeal sac, a ginger-coloured moustache, a pronounced beard, and a diamond-shaped face. Individuals can also be distinguished chromosomally, biochemically, and by their cranial characteristics.
There is a great deal of individual variety in the orangutan. "Each orang-utan had a distinct personality and in dealing with such highly intelligent animals in captivity, the keeper's knowledge of the individual was probably more important than the knowledge of the overall behaviour patterns" (Markham, 1980). Orangutan males, however, appear to be totally intolerant of one another, especially the Bornean males, who are even aggressive towards females and infants. Male orangutans' participation in social groups is limited to sexual "consortship" with females. However, the Sumatran males tend to stay with females for a longer period of time, usually until the birth of the infant. They may stay longer with their partner because of the presence of large predators that are absent in the Bornean habitat. The orangutan has a menstrual cycle of 29-30 days, with menstruation lasting 3-4 days. The gestation period lasts slightly less than nine months. Offspring pass through three stages: infancy (0-4), juvenile (4-7), and adolescence (7-10). The mother-young relationship lasts a long time; the young usually stay with their mother until they are mature. Female orangutans are not sexually mature or fully grown until the age of twelve and will not have their first offspring until they are at least fourteen. Males become sexually mature and fully grown at the age of fifteen. The cheek flanges of the male make it easy to recognize the difference between adults and semi-adults. The flanges of the Bornean male curve outward from the face, develop around the age of eight, and are not completely grown until the age of fifteen. Sumatran flange development begins at the age of ten and is not complete until the early twenties. The flanges of the Sumatran orangutan lie flat against the face and give a wide facial appearance, especially in the mid-facial region. The life expectancy of orangutans in the wild is not known, but captive orangutans have been known to live up to fifty years.
Orangutans are sexually dimorphic. Males are approximately twice the size of females, weigh about 220 lbs., and reach a height of five feet. It is believed that the males' larger size may be an adaptation for mating, because there is strong competition among males for females. The pendulous laryngeal sac, when inflated, increases the tone of the animal's voice, producing "long calls". In both subspecies (Bornean and Sumatran) calling acts as "a spacing mechanism between the males and also advertise the location of the highest ranking male to the mature females" (Rijksen, 1978). The long call of the Bornean male is long and drawn out, whereas the Sumatran's is much shorter and has a faster tempo. The difference may be attributed to the larger throat pouch of the Bornean male. The reason for the different calls is unclear; they may be related to the terrain each subspecies inhabits. The faster call of the Sumatran may be more effective in the rugged, mountainous terrain, while the longer call of the Bornean male may be due to the wide distribution of this subspecies.
A large portion of an orangutan's day is spent looking for and consuming food. Their diet consists primarily of fruit, but they also eat leaves, bark, flowers, insects, and birds' eggs. One of their preferred foods is the fruit of the durian tree, which is said to taste like sweet garlic. After they have finished eating and bedtime comes around, orangutans build themselves a new nest of boughs forty to fifty feet up in a tree.
Like the other great apes (chimpanzees and gorillas), orangutans are highly intelligent. Tests have indicated that their intelligence is relatively similar. Wild orangutans use their intelligence to solve problems usually related to arboreal living and food gathering. In captivity, however, they have been trained to perform tricks and to use sign language. They have also made tools to throw at humans, get food, and gain leverage.
Today, the total number of orangutans is estimated at between 20,000 and 27,000. They are now endangered, primarily because their habitat continues to be destroyed and because mothers are killed so that their infants can be captured for the animal trade. Even though orangutans are protected by international laws, those laws are difficult to enforce.
Orangutans inhabit the forests of the islands of Sumatra and Borneo. Through evolution and reproductive and geographical isolation, two subspecies have emerged (Bornean and Sumatran). They generally live alone, with the exception of the long-term relationship between a female and her young. When orangutans do meet one another they are very tolerant, and aggression is rare unless two mature males meet. Males maintain their distance from one another with "long calls", which also advertise their location to adult females. Orangutans are generally fruit eaters, because fruit is abundant in the forests they inhabit. They lead a very solitary life. The population continues to decline because of habitat loss, and fewer than 30,000 orangutans are thought to remain in the wild.
f:\12000 essays\sciences (985)\Biology\Order by Instinct.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Order is sought instinctively. In literature, as well as in biology, order is sought instinctively by authors and scientists. Authors use order to convey real-life incidents and make their stories seem more realistic. Scientists use classification to bring order to biology. The life cycle, like the cycle of a virus, shows order.
The young boy in the short story "Sunrise on the Veldt" found order in the life cycle. He sought this order to help him explain the death of a buck. The death of the buck made the young boy think about the life cycle: he shot the buck, the buck was injured, and then the buck died. An organism is born, it grows, it lives for a period of time, then it dies. The human life cycle is similar. A baby is born, its parents take care of it, and the baby grows into an adult. The adult lives for a period of time, then the adult dies. Humans seek order in the life cycle to help explain death. The order in the life cycle was sought instinctively, because people wanted an explanation of death. The order in "Sunrise on the Veldt" was shown in the life cycle.
In the novel The Wave, a teacher sought order to help keep his classroom under control. The order helped keep the classroom under control, but the students began to notice they were not thinking, and the order began to tear the school apart. The teacher sought order because he wanted his students to behave better. Scientists use order to control viruses. Viruses are classified by several attributes: their shape, the vectors that transmit them, and their RNA or DNA content. Once a virus is classified, it can be examined and controlled. Biologists use order to classify other organisms as well. Charles Darwin sought order instinctively by becoming a naturalist. He studied animals and plants and devised a theory of evolution. He decided that variations exist within populations. Some variations are more advantageous for survival and reproduction than others. Organisms produce more offspring than can survive. Over time, offspring of survivors will make up a larger proportion of the population. Darwin believed that organisms produce more offspring so that the stronger offspring can live while the weak offspring die. Darwin, the virus cycle, and The Wave all portray order being sought instinctively.
A virus seeks order instinctively. A virus attaches itself to a host. Then it enters the host by injecting its DNA or RNA. The virus then replicates itself inside the host. Lastly, the replicated viruses release themselves from the host and begin to attack the organism. The virus does this automatically every time it infects a host, not necessarily in the same order, but by the same method. This order is sought by the virus to help it infect a host. If the virus began replicating itself before it entered the host, it would not be able to infect the organism as well. The order helps the virus survive by allowing it to replicate itself. "The Fatalist" also seeks order instinctively. He believes in fate. A woman wanted to test the Fatalist. She dared him to lie on train tracks and wait for a train. If a train killed the man, it would have been his fate to die. If the train did not kill the man, it would have been his fate to live. The Fatalist sought his order by accepting the woman's bet. By accepting the bet, he put his life at risk, but more importantly he put his belief in fate at risk. The train stopped just in time and did not kill the man. It was his fate to live. Somewhere there is an order to fate; fate is a series of events. Order can also be a series of events. The Fatalist's fate was found in the order of the train: the engineer noticed a man lying on the tracks and slowed the train until it stopped.
In conclusion, humans as well as other organisms seek order instinctively. Order helps authors relate ideas to real-life experiences. The order of events allows their books to become more realistic. A young boy found order by watching the life cycle of a buck. Biologists use order to help them classify organisms. Students sought order by behaving better. Viruses use order to help themselves replicate. The Fatalist found order by lying on train tracks.
f:\12000 essays\sciences (985)\Biology\Orthoptera.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Here are the facts about Orthoptera.
Where they live.
Field crickets, the familiar black or brownish crickets, are often
abundant in meadows and fields, as well as in dwellings or in small
clusters in the ground. Tree crickets are more often heard than seen.
Usually colored green, these slender crickets live in shrubs and trees.
Mole crickets can burrow rapidly through moist soil. They can also live
in caves, hollow logs, beneath stones, and in other dark, moist places.
Grasshoppers are also part of this group. They often become very
abundant and migrate in tremendous swarms, destroying nearly all plants
in their path. They like to live in wet, grassy areas. Locusts also
belong to Orthoptera. Locust plagues have been recorded since the
beginning of history and are still one of the world's major insect
problems. Cockroaches are in this group too. There are an estimated
3,000 cockroach species in the world. About 55 live in the U.S., and
only 4 species are common household pests. German cockroaches, or
Croton bugs, are common in the U.S., especially in the northern states.
They commonly enter the house in bags or boxes from grocery stores.
They tend to cluster in warm, moist places around hot water pipes.
They stay hidden when they are not eating.
What they eat.
Crickets will eat holes in paper or in garments, especially those
soiled with perspiration. They also eat young roots and seedlings,
peanuts, garden crops, grain, clothing, and sometimes other insects and
even each other. Grasshoppers are a different story. They eat crops and
destroy millions of dollars' worth of them a year. Cockroaches are
simply a pest, and they eat almost anything. Cockroaches feed on a
great variety of foods: meats, cheeses, sweets, and starches (like the
starch in clothing or the glue in book bindings and stamps). When
abundant they may also eat human hair, skin, and nails. They secrete a
sticky, odorous fluid that may be left on foods or materials.
Movement
Cockroaches move very swiftly. They have six legs, each with three
joints; as muscles contract at the base of the body, the legs move.
This motion causes a roach to lurch forward in rapid motion. Crickets
have wings, so they may fly. The movement of crickets isn't the same as
the grasshopper's. The grasshopper is an insect that can leap about 20
times the length of its body. If a human being had the same leaping
ability as the grasshopper, they could jump well over 100 feet.
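To put that scaling in perspective, here is a quick back-of-the-envelope check (a hypothetical Python sketch; the 6-foot body length is an assumed figure, while the 20-body-length ratio is the one given above):

# Scale the grasshopper's jump (about 20 body lengths) up to human size.
body_length_ft = 6                  # assumed height of a person, in feet
jump_ratio = 20                     # a grasshopper leaps about 20 times its body length
print(body_length_ft * jump_ratio)  # -> 120 feet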
Helpful things they do.
In Russia roaches have been regarded as an antidote for dropsy. In
Southeast Asia and China, the bits of meat plucked from around the legs
of boiled roaches are considered a delicacy. In 1968, 71% of more than
700 U.S. allergy patients injected with an experimental roach extract
reported an easing of their symptoms. Roaches are ideal lab animals as
well. They are easy to care for and don't bite or sting. Roaches have
been implicated as disease vectors, but this has never been proven.
They can eat almost anything because of the wide variety of bacteria
and protozoans in their gut. They help in the rapid decomposition of
forest litter and animal fecal matter. We cope with them using poison
baits, insecticides, dusts, and sprays. Other ways to cope use common
household things such as orange and lemon peels, which can instantly
kill imported fire ants, house flies, stable flies, and other insects.
Harmful
Members of Orthoptera cause lots of crop damage. Plagues of locusts
occur in countless millions. When they are finished eating in one place
they move on, not leaving a green stem in the field. The term locust
designates grasshoppers that migrate. Grasshoppers have caused more
direct crop loss than any other insect. From 1925 to 1949 they caused
more than half a billion dollars' worth of damage to crops. In 23
states in the western U.S., grasshoppers are considered to be among our
most serious insect pests. Millions of dollars are spent in attempts to
control them. Millions of people around the world have died of
starvation. In the U.S. the problem is serious, but it is small
compared with other areas. The Middle East and the areas adjacent to it
are usually the hardest hit. Cockroaches, also a common pest, don't
bite but do contaminate food. The roaches carry diseases and damage
book bindings. They will eat almost anything.
Sound/Communication.
Grasshoppers are well known for their sounds. These are produced by
rubbing their hind legs against the fore wings. The inner side of each
hind leg has a ridge with a row of small pegs. When this ridge is
rubbed against the hardened vein of the fore wing, an audible vibration
is produced. The pitch and rhythm of stridulation vary according to
species. In almost all species, sound production is limited to males.
This serves to attract females and possibly to help identify members of
the same species. Hearing organs are located on either side of the
first abdominal segment. Male crickets produce sound by rubbing a
grooved ridge on the underside of one front wing against the sharp edge
of the other front wing during the breeding season. Males attract
females with this call.
Reproduction/Lifecycle
The male cockroach is a very active breeder during his life, while the
female only breeds once. He first starts by secreting a substance
underneath his wings. When he calls out to a female, she mounts him and
starts to consume the substance. This is when the male and female join.
They will stay together for a couple of days before disengaging. The
female will sometimes keep the sperm in her body for months on end.
When she fertilizes her eggs, an egg sac begins to come out of her back
end. After the sac has fallen off she just leaves it. After a couple of
days the small larvae will start to suck up air, expanding themselves,
and the egg case will start to tear. Once out, the little cockroaches
look like small transparent roaches. Some will be eaten by predators,
while some will be eaten even by their own kind. But since roaches are
almost always mating, this really doesn't hinder their young. Soon the
roach will reach maturity and the process will start all over again.
Article (National Geographic).
The praying mantis eats nothing but live food, mostly insects. Prey is
taken only from flowers, leafage, twigs, bark, or the ground, never
while the potential victim is in flight. Many species have wings but
seldom use them. The prey a mantis catches is at times larger than the
mantis itself. It is severed by surprisingly small mouth parts, similar
to those of its cockroach ancestors. Over millions of years of
evolutionary time, mantises have occupied all accessible regions with a
suitable climate. They abound especially in tropical and subtropical
areas and have adapted by protective color and form to a variety of
habitats. If danger is imminent, a mantis may explode into action,
scurrying with crablike speed upward and around to the opposite side of
the tree. All in all, mantises are very extraordinary creatures, just
like the rest of the group Orthoptera.
f:\12000 essays\sciences (985)\Biology\PCR And Its Use.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PCR And Its Use
Often, scientists only have a small amount of DNA to deal with when doing genetic research or studies. In these situations, scientists can do one of several things. One is to just try to work with it anyway, but this is nearly impossible (depending on how much there is). There are a couple of other processes they can use, or they can use PCR. PCR is one of the more complicated, but reliable, ways to run tests on DNA when they only have a small amount to begin with. PCR, or Polymerase Chain Reaction, is the scientific process used by genetic scientists to make many copies of (amplify) a DNA sequence.
"A 'rapid diagnostic' technique used in the clinical microbiology lab to detect pathogens. It relies upon amplification technology utilizingthe heat stable DNA polymerase from a thermophilic organism." (from http://www.genes.com/pcr/pcrinfo.html) Dr. K.Mullis recently received the Nobel prize for inventing the technique.
This is how they go about it: They first get their small DNA sample. Then they mix all the chemicals (this includes the primer, etc.). Then they have to run it through the PCR machine. Here is a (rather detailed) description of the process: "The cycling protocol consisted of 25-30 cycles of three temperatures: strand denaturation at 95 deg C, primer annealing at 55 deg C, and primer extension at 72 deg C, typically 30 seconds, 30 seconds, and 60 seconds for the DNA Thermal Cycler and 4 seconds, 10 seconds, and 60 seconds for the Thermal Cycler 9600, respectively."
Basically, that means that they set it to certain temperatures, then put it through different cycles for different amounts of time. PCR machines can be compared with washing machines: there are different temperature settings (here, for example, 72 deg C, where on a washing machine you would set it to cold/cold).
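To make the "washing machine" comparison concrete, the quoted protocol can be sketched as a short Python loop (an illustration only; the temperatures, hold times, and the 30-cycle count come from the quote above, and the script just adds up the time rather than controlling a real thermal cycler):

# Rough sketch of the three-temperature PCR cycling protocol quoted above.
steps = [
    ("denaturation", 95, 30),   # strand denaturation at 95 deg C for 30 seconds
    ("annealing",    55, 30),   # primer annealing at 55 deg C for 30 seconds
    ("extension",    72, 60),   # primer extension at 72 deg C for 60 seconds
]
cycles = 30                     # the protocol calls for 25-30 cycles

total_seconds = 0
for cycle in range(cycles):
    for name, temp_c, seconds in steps:
        total_seconds += seconds

print("total cycling time:", total_seconds / 60, "minutes")   # -> 60.0 minutes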
For the DNA to replicate properly, each base in the following strand must be matched with its complement:
A T G A T A T G G C A G C A A C G A C C A T A
the match would be
T A C T A T A C C G T C G T T G C T G G T A T
The whole process is pretty much summed up like this: They heat up the DNA so that its two strands separate (the bonds "unzip"). Then they add a specific amount of the primer (relative to the amount of DNA you have). Then you add the enzyme and the four kinds of nucleotides; the enzyme works along the genetic sequence and hooks up the matching nucleotide at each position (A goes with T and G with C, etc.). You keep supplying nucleotides until the enzyme finishes with the strand it is copying.
When all this is done, there will no longer be a shortage of DNA but an abundant amount, so the tests can be properly run on it. PCR isn't as difficult to understand as it may seem at first, and it can be explained in a very simple way:
C = Cytosine
G = Guanine
A = Adenine
T = Thymine
You will now assume the role of a genetic scientist. Here is the little bit of DNA that you have managed to obtain:
C G A T T A T G A G C C G A G
The PCR process will perform an artificial "protein synthesis" in a way. It (through heat) will break down the bonds that currently keep your specimen intact. It will, basically, just line up the nucleotides with their matches, and the two strands of the double helix will become two full strands of DNA. So, the above code is the coding for one strand of your DNA sample. The PCR machine will, in effect, match them up:
G C T A A T A C T C G G C T C
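The matching step itself is easy to express in code. Here is a minimal Python sketch (an illustration added for clarity, not part of the original description) that builds the complementary strand for the sample above:

# Build the complementary DNA strand by pairing A with T and G with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the complementary strand, base by base."""
    return "".join(PAIRS[base] for base in strand)

sample = "CGATTATGAGCCGAG"        # the small DNA sample obtained above
print(complement(sample))         # -> GCTAATACTCGGCTC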
PCR has many uses. It can be used in criminal cases, when investigators have only a fragment of a speck of blood to deal with. It can also be used to piece back together the DNA from an ancient dinosaur fossil. The possibilities just never seem to end with DNA. Until recently, there was no such thing as PCR or a PCR machine. You had to do things by hand, and that really added to the cost of research. In effect, not as many people heard about what was going on in the world of DNA. People should be educated about DNA because, if you know about DNA, it can be useful if you are ever called to jury duty and that kind of evidence is being used. You will be able to make a wise decision.
f:\12000 essays\sciences (985)\Biology\Photosyntesis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Matt Lazar
5/13/96
8th Hr.
Photosynthesis
When you and I eat, we find our food. When plants eat, they make their own food and energy. They make their food and energy through a process called photosynthesis. Through photosynthesis, oxygen is also produced. Photosynthesis is "a process in which green plants synthesize carbohydrates from carbon dioxide and water....The reverse of this reaction provides energy for plants, for animals that eat plants, for animals that eat animals that eat plants," for animals that eat animals that eat animals that eat plants, etc. (Levine 726) All humans rely on these plants to produce oxygen through photosynthesis. The presence of light and of the green pigment chlorophyll makes the change of carbon dioxide and water to glucose and oxygen possible. It was Jan Ingenhousz, along with Jean Senebier in Geneva, who worked out the basic gist of the theory of photosynthesis in plants.
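For reference, the overall reaction described above is commonly summarized by the balanced equation below (a standard textbook formula, added here for clarity rather than taken from the sources cited in this paper):
6 CO2 + 6 H2O + light energy -> C6H12O6 (glucose) + 6 O2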
My experiment involves testing different brands of plant food on plants to see what brand of plant food makes the plant grow the tallest. I will test three types of plant food on three sets of plants and one set with no food. I will water the plants all the same amount and let all the plants get the same amount of sun, then measure the height of all the plants at the end of six weeks and record the data.
The first step of photosynthesis is the absorption of light by a chlorophyll molecule. "The energy of the absorbed photon is transferred from one chlorophyll molecule to another until it reaches a site called a reaction center....One oxygen molecule is produced per eight photons absorbed." (Alberty 708) James Huheey, professor at the University of Maryland, talks about chlorophyll in his book Inorganic Chemistry:
"Chlorophyll absorbs low-energy light in the far red region....Such absorption serves a twofold function: (1)The energy may be passed along to the chlorophyll system and used in photosynthesis; (2)It protects the biological system from photochemical damage." (629)
"Chlorophyll a and chlorophyll b contain networks of alternating single and double bands, and have strong absorption bands in the visible part of the spectrum....," says Robert Alberty, who is a professor of chemistry at the Massachusetts Institute of Technology. (708)
Photosynthesis, "...which occurs entirely in the chloroplasts of green cells, involves a number of steps catalyzed by enzymes. The chloroplasts contain chlorophyll a, chlorophyll b, carotenes, electron carriers, and enzymes, and have internal membranes that keep reactants separated," says Alberty. (708)
James Huheey talks about some "features of the chlorophyll system which enhance[s] its usefulness as a pigment in photosynthesis:
"First, there is an extensive conjugation of the porphyrin ring. This lowers the energy of the electronic transitions and shifts the absorption maximum into the region of visible light. Conjugation also helps make the ring rigid and thus less energy is wasted in internal thermal degration." (631)
"In order for phosphorescence to occur there must be an excited state with a finite lifetime. If such an excited state is available then there is time for a chemical reaction to take place to take advantage of the energy prior to phosphorescence....The presence of a metal atom is necessary in order that phosphorescence takes place." (Huheey 631)
"Chlorophyll contains a conjugated ring system that allows it to absorb visible radiation...." (Levine 726) In Inorganic Chemistry, James Huheey reports about ring system:
"The chlorophyll ring system is a porphyrin in which a double bond in one of the pyrrole rings has been reduced. A fused cyclopentanone is also present....In addition, other pigments such as carotenoids are present which absorb higher energy light." (629)
The exact frequency of the light that is absorbed by the chlorophyll depends upon the nature of the substituents on the chlorophyll.
Photosynthesis is a crucial process that is necessary for the human race and for the other species that breathe the oxygen the plants produce. Photosynthesis is a complex process of which we know only the broad details. Chlorophyll and light are what a plant needs in order to photosynthesize. I think that my experiment will show the best brand of plant food of the three; the best brand would be the one that makes the plant grow tallest. So consider just how important green plants really are to the human race and its survival on the earth.
Works Cited
Alberty, Robert A. Physical Chemistry. New York: Wiley, 1983.
Asimov, Isaac. Photosynthesis. New York: Basic, 1968.
Brock, William H. The Norton History of Chemistry. New York: Norton, 1993.
Huheey, James E. Inorganic Chemistry. New York: Harper, 1972.
Levine, Ira N. Physical Chemistry. New York: McGraw, 1978.
f:\12000 essays\sciences (985)\Biology\Pituitary Dwarfism.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Pituitary Gland is situated at the base of the brain and it produces hormones which control growth. Too large an amount of these hormones causes giantism, a condition where facial features, hands, etc. become abnormally large. Too little causes dwarfism, where the overall stature of a person is very small.
Dwarfism is the condition of being undersized, or less than 127 cm (50 in) in height. Some dwarfs have been less than 64 cm (24 in) in height when fully grown. The word midget is usually applied to dwarfs. Another growth disorder is cretinism, which results from a disease of the thyroid gland; it is the cause of much dwarfism in Europe, Canada, and the United States. Other causes of dwarfism are Down's syndrome, a congenital condition with symptoms similar to those of cretinism; achondroplasia, a disease characterized by short extremities resulting from absorption of cartilaginous tissue during the fetal stage; spinal tuberculosis; and deficiency of the secretions of the pituitary gland or of the ovary.
Causes of pituitary dwarfism may vary. Abnormally short height in childhood may be due to the pituitary gland not functioning correctly, resulting in underproduction of growth hormone. This may result from a tumor in the pituitary gland, absence of the pituitary gland, or trauma.
Growth retardation may become evident in infancy and persists throughout childhood. Normal puberty may or may not occur depending on the degree of pituitary insufficiency that is present, which is the inability of the pituitary to produce adequate hormone levels other than growth hormone.
Physical defects of the face and skull may also be associated with abnormalities of the pituitary gland. A small percentage of infants with cleft lip and cleft palate may have decreased growth hormone levels.
No ideal treatment has been developed yet for pituitary dwarfism. Replacement therapy with growth hormone is indicated for children who have documented growth hormone deficiency. If the deficiency is an isolated growth hormone deficiency, only growth hormone is given. If the deficiency is not isolated, other hormone replacement preparations will be required.
There are a few complications of pituitary dwarfism. Some are short stature and delayed pubertal development. Creutzfeldt-Jakob disease has been acquired from cadaver-derived growth hormone, which is no longer available. Synthetic growth hormone is now available, which is free of all infectious disease risk.
Pituitary dwarfism is a sad disease to see a person have. Most cases are not preventable. The future may look good for the disease though. Medical breakthroughs are always happening. It may not be easy but doctors are constantly in a lab somewhere working on these terrible disorders and diseases such as pituitary dwarfism.
f:\12000 essays\sciences (985)\Biology\Plagues.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Why Me?
Humans are remarkably good at finding a religious scapegoat for their problems. There has always been someone to blame for the difficulties we face in life, such as war, famine, and, more relevant here, disease. Hitler blamed the Jews for the economic woes of a corrupt Germany, long after the Romans held the Christians responsible for everything wrong in a crumbling, has-been empire. In the fourteenth century, when Plague struck Europe, it was blamed on "...unfavorable astrological combinations or malignant atmospheres..." (handout p2), and even "...deliberate combination by witches, Moslems (an idea proposed by Christians), Christians (proposed by Moslems) and Jews (proposed by both groups)." (H p2) The point is, someone was to blame even when the obvious culprits, flea-ridden rats, were lying dead in the streets. As time progressed to the twentieth century, there have been few if any exceptions made to this phenomenon. In the case of Oran, the people raced to find a culprit for the sudden invasion of their town, which became the unrepentant man. This is one of Camus' major themes: the way a society deals with an epidemic is to blame it on someone else. Twenty years ago, when AIDS emerged in the US, homosexual men became the target of harsh and flagrant discrimination, and even today they are still held accountable by some beliefs. While we may no longer lynch in the nineties, we do blame innocent groups, like the gay male population, for the birth and explosion of AIDS in our society. Granted, there are some differences between each respective situation, but there are striking similarities that cannot be ignored.
As the Plague invaded the town of Oran, the people quarantined within its walls began to look to their leaders for answers. Most likely these people had trouble believing that such an awful thing was happening to them, and needed someone to point the finger at. In the meantime, Father Paneloux was preparing a speech to answer the questions and fears that surrounded him, and probably vexed him as well. The truth is, his speech was as much therapeutic as it was didactic, and in winning the opinion of the public he could calm his own fears. "If today the plague is in your midst, that is because the hour has struck for taking thought. The just man need have no fear, but the evildoer has good cause to tremble." (p95) Paneloux is passing the blame, but in a very intriguing way. "You believed some brief formalities, some bendings of the knee, would recompense Him well enough for your criminal indifference. But God is not mocked." (p97) He has found someone to blame, the weak observer of Christ, but in the end, especially in a heavily religious town like Oran, who believes they are that person? Who in the city, after reflecting upon their record of attendance at church, could find it possible to blame themselves? In his sermon, Paneloux did not point out a specific group, such as the lower class, as the cause, but associated the plague with a general group that is fundamentally vague. It is an interesting way of passing the blame, in a manner that puts no certain group in danger. The fact is that, taking into consideration the townspeople's manic state of paranoia, to accuse one particular group would be murder. If Paneloux told the masses that the street cleaners brought the Plague, each and every one of them would be strung up on the closest available tree. It seems that Oran provided the blueprints for the AIDS epidemic, relative to how, even today, parts of our society still blame who we feel is a lesser group for the disease.
In the late seventies, AIDS began its invasion of the US population. For years it confined itself to the gay community, but as the new decade arrived it was spreading much more effectively, as heterosexual contact, dirty needles, and infected blood transfusions became efficient avenues for the virus to change hosts. However, at this time the public was hardly educated about AIDS. They knew little if anything about how it was spread. In fact, all they really knew was that the disease is one hundred percent fatal, contagious, and carried mostly by gay males. Interestingly enough, until the AIDS virus broke into the heterosexual community, in general no one really bothered themselves with it. This may be because so little was known about it, even in medical circles, but there is a definite connection to a "hear no evil, speak no evil" attitude. The virus was not affecting the straight community, so why bother? However, when people in the workplace, friends, and family began to get sick, panic struck swiftly. Someone was to blame, and many found specific groups, like homosexuals, junkies, and prostitutes, excellent focal points for a certain frustration that comes from a state of helplessness. These three groups, representing the gutter of society, were an easy target because they had no leverage in society. The general public needed a scapegoat, and they had found it. Gays were the foremost to be blamed, mostly because sodomy is defined in the Bible as a grievous sin, following the story of Sodom. People called the disease "God's revenge," His way to erase an abomination of his creation. Again, this case is remarkably similar to Oran, because while Father Paneloux blamed it on a much more general group, it was still a group that angered God and brought forth His wrath. Even industry supported this absurd theory, as an infamous T-shirt, using the RAID bug spray logo, read instead, "AIDS, kills fags dead," as opposed to "RAID, kills bugs dead." In short, society had found its scapegoat, and would not let it go. Even today, after all we've learned about the disease, all we've found to be true and untrue, gays are still blamed by some for bringing AIDS into society, just as the unrepentant man was blamed for bringing the Plague into Oran.
When an epidemic like AIDS or the Plague attacks a city, state, or country, society deals with it by finding someone to blame it on. People stab in the dark for the reason problems like AIDS befall them, and religion often dictates whom they will blame. It is a never-ending tennis match, where the ball is the blame being bounced back and forth, while little or no effort is made to remedy the situation. The Japanese have a saying which translates, "when an archer misses a target, he can only blame himself, and not the target." This is a great expression, and it makes a lot of sense. However, it is rarely followed in our society, especially when an epidemic strikes. While we should be finding ways to cure it, prevent it, learn about it, and come to terms with it, all we seem capable of doing is finding someone to blame for it.
f:\12000 essays\sciences (985)\Biology\PraderWilli Syndrome.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Prader-Willi syndrome is a serious genetic disorder that begins at birth and has no known cure; it causes mental retardation, short stature, low muscle tone, incomplete sexual development, and its main characteristic, the desire to eat everything and anything in sight.
Prader-Willi syndrome was first known as Prader-Labhart-Willi syndrome after the three Swiss doctors who first described the disorder in 1956. The doctors described a small group of kids with obesity, short stature, mental deficiency, neonatal hypotonia (floppiness), and a desire to constantly eat because they are always hungry. Many other features of PWS have since been described, but extreme obesity and the health problems associated with being fat are the most prominent features.
Individuals with PWS have some but not all of the same features and symptoms.
PWS is a birth defect. A defect in the hypothalamus, a region of the brain, is suspected to be the cause. The hypothalamus determines hunger and satiety. People with PWS can't feel satiety, so they always have an urge to eat. Some PWS cases are so out of control that they will eat bottle caps, glass, pencils, garbage, bugs, dog food, and anything else they can stuff in their mouths.
"The ingenuity and determination of PWS children in surreptitiously obtaining edibles
is almost legendary and belies their cognitive defects. Serial weighing may be the only way to discover whether such a child is, in fact, stealing food"(Finey,1983).
PWS occurs in about 1 in 10,000 births. It occurs in males and females equally and is found in people of all races and all nations. It is one of the ten most common conditions seen in genetics clinics.
Young people with PWS resemble each other very much. Most of the time, they look like brother and sister. Most people with PWS have almond-shaped eyes, narrow foreheads, a downturned mouth, a thin upper lip, and a small chin. Other common features are obesity; short stature; small hands and feet; a skin-picking habit; thick, sticky saliva; incomplete sexual development; a curved spine (scoliosis); and chronic sleepiness.
PWS patients also have similar personalities: they are talkative and friendly, make extreme attempts to get food, and show argumentativeness, repetitive thoughts and behavior, stubbornness, frequent temper tantrums, and sometimes sudden acts of violence.
Most people with PWS have some degree of mental deficiency. The average IQ of people with PWS is 65, and it ranges from 20 to 90; 41% of people with PWS have IQs in the normal or borderline range. Specific academic weaknesses in math and writing are common, but reading and art are considered strengths. A delay in reaching early developmental milestones is common in PWS. IQ testing shows that people with PWS are, on average, mildly retarded; the range runs from severely retarded to not retarded, with 40% having borderline retardation or low normal intelligence. Most affected children, whatever their IQ scores, will have many severe learning disabilities and will show poor academic performance no matter what their IQ suggests about their mental abilities.
There are some signs and symptoms of PWS that show up before birth: decreased fetal movement in 80-90% of cases and an abnormal delivery in 20-30%, due to the baby being very floppy. There are two distinct clinical stages of PWS.
Stage 1
Babies with PWS are often called "floppy babies." That's because they have weak muscles; officially this is known as hypotonia. This hypotonia, which almost always occurs, can be mild to severe. Neonatal hypotonia makes sucking difficult, and a special feeding method called gavage is used. Gavage is the placing of a tube into the stomach through the mouth. It is used a lot during the first days of life. Decreased caloric intake from these feeding difficulties may lead to failure to gain weight. To keep the baby's weight under control, supervision by a professional nutritionist or a specialist who understands the syndrome may be necessary. Physical therapy is strongly recommended to improve muscle tone.
When the muscle tone improves enough, an increased appetite and weight gain start; this is the beginning of the second stage. The hypotonia does not progress, and in most cases it begins to improve between 8 and 11 months of age. It improves, but it is never completely normal.
Stage 2
Stage 2 occurs between one and two years of age and is characterized by an appetite that cannot be satisfied, which causes excessive weight gain. Speech problems, sleepiness, decreased pain sensitivity, skin-picking habits, and decreased growth are also characteristics of the second stage of PWS. The personality problems also develop between ages 3 and 5 years.
Most parents who have a kid with PWS do not have another kid affected with PWS. The cases of PWS are thought by scientists to have occurred by chance, as isolated flukes of nature. There have been reports of families with more than one kid with PWS, but this is not common; fewer than a dozen families with more than one affected offspring have ever been reported.
A blood sample for high-resolution chromosome analysis is drawn from anyone who is thought to have PWS. This will check out the chromosomes. Chromosomes are packages of information found in the cells of our bodies. Each cell has a set of 46 chromosomes, which come in pairs numbered from 1 to 23. Each parent contributes one chromosome to each pair. Prader-Willi syndrome is caused by the absence of some genes, on one of the chromosomes, that affect the functioning of the hypothalamus. Many laboratories around the world are researching this. About three-fourths of people with PWS have a tiny piece missing from one member of the pair of chromosome 15s (the one inherited from the father). The other one-fourth are missing the dad's contribution to this part of the chromosome altogether, lacking the father's chromosome 15 and having two copies of the mother's chromosome 15. The genes in this region are not functional, and no one understands why.
As soon as the kid has improved muscle tone and an increased appetite, and is old enough to move around on the floor, any food that can be easily gotten must be moved to a safer, out-of-reach place. To make inappropriate "food" unavailable to the kid with PWS, parents must learn special patterns of food storage and handling.
Sleepiness during the day and napping a lot are common features of PWS. Recently, studies have shown that there is a strong link between this and sleep quality. Some of the types of sleep disorders that have been described in people affected by PWS are disturbance of the sleep/wake cycle, obstructive sleep apnea, hypoventilation syndromes, and narcolepsy. Although patients with PWS fall asleep very quickly, their sleep period is significantly disrupted, with frequent awakenings and abnormal patterns of rapid eye movement (REM) sleep. Obstructive sleep apnea occurs with increased upper airway resistance, either from enlarged tonsils, relaxation of the upper airway musculature, or structural airway anomalies. Sometimes actual pauses in breathing during sleep can occur. Narcolepsy, which involves sleep attacks and occasional loss of muscle tone, has also been reported.
Short stature is also a common feature of almost all people affected by PWS (80-100%), but birth height is usually normal. The average adult height is 59 inches in women and 61 inches in men. An abnormal growth hormone response suggests a possible dysfunction of the hypothalamus, and growth hormone deficiency as a contributing factor in short stature. Improvement in growth rate and a decreased rate of weight gain have recently been demonstrated in several growth hormone-deficient children with PWS after six months of growth hormone treatment.
Other significant actions of growth hormone that have been reported, such as improvement of muscle mass, muscle strength, energy expenditure, bone mineralization, and sexual development, and also a decrease in fat mass, have led to further investigations in people with PWS.
Children with PWS have distinct behavioral abnormalities because of all the frustrations associated with the syndrome. These behaviors may begin as early as two years of age. They will show a variety of eating behaviors such as foraging for food, secretly eating large amounts of food, and other attempts to continue eating. Other problems include verbally and physically aggressive behaviors such as lying, stealing, scratching, and skin picking. Tantrums and unprovoked outbursts are common among children and youths with PWS.
People with mild cases of PWS can do many things their typical peers can do, such as go to school, get jobs, and sometimes even move away from home. However, they need a lot of help. Kids going to school would need to be enrolled in special education programs (otherwise they'd be eating their pencils and paper). They need to be constantly supervised.
f:\12000 essays\sciences (985)\Biology\Prairie.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PRAIRIE
An ecosystem is all the biotic and abiotic components of an environment. The prairie is located in North America. The temperature varies greatly, and the environment doesn't get enough rainfall to support trees.
One organism in the ecosystem that adapts well is grass, which withstands grazing animals and occasional fires.
Producer: An autotroph organism (grass).
Primary consumers: Organisms that eat producers (caterpillars, bison).
Secondary consumers: Organisms that eat primary consumers (chicken, meadowlark).
Tertiary consumers: Organisms that eat secondary consumers (prairie falcon, eagle).
Decomposers: Organisms that use nutrients from dead plants and animals, starting the chain over (bacteria).
Say that an organism was removed from the web, such as the caterpillar. Though it's not the only grass-eating organism, removing it would still disrupt the web. Say you put the bison in its place: that part would work, except that the indigo bunting wouldn't eat the bison, which would eliminate the bunting from the food chain, and so on, as sketched below.
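The reasoning above can be pictured with a small food-web sketch in Python (the organisms are the ones named in this essay; the exact eats-relationships are an illustrative assumption rather than data from the essay):

# A toy prairie food web: each eater maps to the things it eats.
food_web = {
    "caterpillar":    ["grass"],
    "bison":          ["grass"],
    "chicken":        ["caterpillar"],
    "meadowlark":     ["caterpillar"],
    "indigo bunting": ["caterpillar"],
    "prairie falcon": ["chicken", "meadowlark"],
    "eagle":          ["chicken", "meadowlark"],
}

def affected_by_removal(web, removed):
    """List the eaters that lose a food source if 'removed' disappears."""
    return [eater for eater, prey in web.items() if removed in prey]

print(affected_by_removal(food_web, "caterpillar"))
# -> ['chicken', 'meadowlark', 'indigo bunting']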
f:\12000 essays\sciences (985)\Biology\Praying Mantis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Praying Mantis
(Mantis religiosa)
Contents
Introduction
Classes
First Things First
Key Features
Basic Features
Diet & Combat Style
Reproduction
Growth & Development
Self-Defense
Cultural Significance
Praying Mantis Kung-Fu
INTRODUCTION
"Praying Mantis" is the name commonly used in English speaking countries to refer to a large, much elongated, slow-moving insect with fore legs fitted for seizing and holding insect prey. The name, "Praying Mantis" more properly refers to the specific Mantid species Mantis Religiosa or the European Mantis, but typically is used more generally to refer to any of the mantid family. The name is derived from the prayer-like position in which the insect holds its long, jointed front legs while at rest or waiting for prey. It is also called the "preying" mantis because of its predatory nature.
CLASSES
Many questions have arisen regarding the praying mantis. Such questions include how many different species there are in the animal kingdom. Estimates range from 1500 to 2200 different mantid species worldwide. The most common figure given, though, is about 1800.
There are different views on how the mantids are classified in the animal kingdom. There is agreement that the collection of mantid species makes up the Mantidae family of insects. The Mantidae family, in turn, is part of the order/suborder Mantodea, which includes a variety of mantid-like species. But the existing literature does not reflect a clear consensus about which insect order Mantodea belongs in. Some have placed Mantodea in the order Dictyoptera, with the roaches. Others place Mantodea in the order Orthoptera, with crickets and grasshoppers. Finally, some believe that Mantodea constitute their own independent order of insects. There seems to be an emerging consensus around this last position.
FIRST THINGS FIRST
The Mantis religiosa was first named and classified by Carolus Linnaeus, the inventor of the modern system of biological taxonomy. The three common species of mantids in North America are the European mantis (Mantis religiosa), the Chinese mantis (Tenodera aridifolia sinensis), and the Carolina mantis (Stagmomantis carolina).
Distinguishing features of these three species:
Size
The Chinese mantis is the largest of the three, reaching lengths of three to five inches. The European mantis is a little smaller than the Chinese variety, reaching lengths of only two to three inches. And finally, the Carolina mantis is the smallest of the three, usually less than two inches in length.
Color
The Chinese mantis is mostly light brown with dull green trim around its wings. The European mantis is more consistently bright green in color. The Carolina mantis is a dusky brown or gray color, perhaps to blend in with the pine forests and sandhills of its native South.
Egg cases
The best way to distinguish the three species is by the shape of their egg cases, or oothecae. The egg case of the Chinese mantis is roughly ball-shaped, but has a flattened area on one side. The European mantid's egg case is rounded, without this "flat portion." The Carolina mantis has an egg case that looks like a short, elongated tube, often spread out along a portion of twig or stem.
Range
The Chinese mantis can be found throughout the United States. The European mantis is most common east of the Mississippi. And the Carolina mantis makes its home in the Southeastern part of the U.S.
Other Physical Characteristics
One of the most notable features of the Carolina mantis is that its wings extend only about 3/4 of the way down the abdomen.
Markings
The European mantis is also distinguished as the only one of the three species that bears a black-ringed spot beneath its fore coxae.
Species Origins
The Carolina mantis is one of 20 mantid species native to North America. The European and Chinese mantids were introduced to America around the turn of the century. The European mantis is said to have first been brought to Rochester New York in 1899 on a shipment of nursery plants. The Chinese mantis arrived in 1895, from China (duh), on nursery stock sent to Philadelphia, Pennsylvania.
KEY FEATURES
Key features of mantid physiology include a triangular head with large compound eyes, two long, thin antennae, and a collection of sharp mouth parts designed for devouring live prey.
Because of its compound eyes, the mantid's eyesight is very good. However, the sharpest vision is located in the center of the compound eye, so the mantis must rotate its head and look directly at an object for optimum viewing. Fortunately, the mantis can rotate its head 180 degrees to see prey or approaching threats; in all, the mantis can scan a total of 300 degrees. The mantid's eyes are very sensitive to light, changing from light green or tan in bright light to dark brown in the dark.
An elongated prothorax, or neck, helps give the mantis its distinctive appearance. The prothorax is also quite flexible, turning and bending easily, which aids in locating and seizing prey.
Two long, "raptorial" front legs that are adapted to seize and hold prey.
These legs have three parts:
1. The lower part of the legs, or tibia, has sharp spines to firmly grasp prey.
2. These spines "fold-up" into matching grooves in the upper femur, creating a "jackknife" effect that allows the insect to assume its distinctive "praying" position.
3. Finally, the upper coxa functions like a shoulder to connect the femur and tibia to the mantid's body.
4. Four other long, thin legs designed for climbing and movement. These legs regenerate if broken or lost, but only during the molting process; unfortunately, limbs that regenerate are often smaller than the others. Since a full-grown adult no longer molts, he or she cannot replace lost limbs. The front "raptorial" legs do not regenerate, and if a mantis loses one of them it will not survive.
5. Two pairs of wings that fold neatly against the abdomen when not in use: a front set of leathery tegmina wings that overlay and protect the "inner" wings, and back wings used for flight and to "startle" enemies.
6. A large, segmented abdomen which contains the mantid's digestive system and reproductive organs. The male has 8 abdominal segments. The female is born with 8 segments, but with each successive molting, the 6th segment gradually overlaps the 7th and 8th until 6 segments remain at the adult stage.
7. 60% of mantid species--especially those that have wings--also have an "ultrasonic ear" on the underside of their metathorax. The mantid is an auditory cyclops, unique in the animal kingdom; that is, it has only a single ear. The ear is made of a deep, 1 mm long slit with cuticle-like knobs at either end and two ear drums buried inside. The ear is specially tuned to very high "ultrasonic" frequencies of sound--25 to 60 kilohertz. Apparently, the ear is designed primarily to respond to the ultrasonic echo-location signal emitted by hunting bats. The mantis mainly uses its ultrasonic ear while in flight. When a relatively slow-flying mantis senses a bat's ultrasonic echo at close range, it curls its abdomen upwards and thrusts its legs outward, creating drag and resulting in a sudden aerial "stall." This in-flight maneuver creates an inherently unpredictable flight pattern--sometimes looping up and around, banking left or right, or spiraling suddenly towards the ground. The tactic is apparently very effective for avoiding a hungry bat's attack.
BASIC FEATURES
Abdominal Structure-the female mantis has 6 segments, the male 8 segments.
Size-the female mantis is usually larger than the male
Behavior-the male mantis is more prone to take flight in search of a mate while the female often remains more stationary
DIET & COMBAT STYLE
Basically, the praying mantis is extremely predacious, ESPECIALLY the female. The mantid eats only live prey, or at least prey that is moving and hence appears alive. Some might go as far as saying that the praying mantis will eat "anything," even reptiles and small birds, but others indicate it prefers "soft-bodied" insects which it can easily devour. These dietary preferences vary by species. Males are generally less aggressive predators than females. Cannibalistic behavior is present in the mantid, both as a nymph and as an adult. Baby mantids will eat other babies, adults will eat their own or others' babies, and adults will eat each other. Mantids are diurnal, that is, they mainly eat during the day. But mantids also congregate and feed around artificial light sources.
Mantids usually wait motionless for unsuspecting prey to get within striking distance--a "sit-and-wait" or ambush strategy--but they can also slowly stalk prey. The mantid often begins to undulate and sway just before striking its prey. Some have speculated this is to mimic the movement of surrounding foliage. Others suggest that this behavior aids in the visualization process. They attack by "pinching" and impaling prey between their spiked lower tibia and upper femur.
The mantid's strike takes an amazing 30 to 50 one-thousandths of a second. The strike is so fast that it cannot be processed by the human brain; the brain uses the view before and after the strike and "tricks" you into seeing what occurs in between. After securing the prey with its legs, the mantid rapidly chews at the prey's neck to immobilize it. If well fed, mantids will selectively devour "select" parts of the prey and discard the rest. If any part of the prey is dropped during feeding, the mantid will not retrieve it. After eating, the mantid will often use its mouth to clean the food particles from the spines of its tibia, and then wipe its face in a cat-like manner.
REPRODUCTION
One of the most interesting, and to humans disturbing, features of mantid life is the female's tendency to eat her mate. During late summer, a female mantis, already heavy with eggs, is believed to excrete a chemical attractant to tempt a willing male into mating. The current state of research seems to indicate that the female sometimes devours the male during the mating process (between 5% and 31% of the time).
The dead male may also serve as a source of protein for the female and her young. Recent research indicates that fertilization can take place without the male's death and that his demise is not necessary to the process. The male's sperm cells are stored in a special chamber in the female's abdomen called the spermatheca. The female can begin laying her eggs as early as a day after mating. As the eggs pass through her reproductive system, they are fertilized by the stored sperm. After finding a suitably raised location--a branch, stem, or building overhang--special appendages at the base of her abdomen "froth" the gelatinous egg material into the shape characteristic of the particular species as it exits her ovipositor.
By instinct, the female twists her abdomen in a spiral motion to create many individual "cells" or chambers within the ootheca, or egg case. The egg-laying process takes between 3 and 5 hours. The ootheca soon hardens into a papier-mache-like substance that is resistant to the birds and animals that would attempt to eat it. The carefully crafted pockets of air between the individual egg cells act as insulation against cold winter temperatures. The number and size of egg cases deposited by a female also varies by species, and she dies sometime after her final birthing.
GROWTH & DEVELOPMENT
The life cycle of North American mantid species runs from spring to fall. When springtime temperatures become sufficiently warm, the mantid nymphs emerge from the ootheca. They drop toward the earth on thin strands of stringy material produced by a special gland in their body--often descending in a writhing mass--before breaking free to live solitary lives. Mantid nymphs are hemimetabolous (did I spell that right?)--that is, they undergo only a partial metamorphosis from nymph to adult stage. Mantid nymphs look like small adults (about 3/8" long) except that their wings are not fully formed. The nymphs go through a series of 6-7 molts--the casting off of the outer layer of skin--before reaching their adult form.
When molting, the nymphs attach their "old," loose skin to a stick or rough surface with a secreted glue-like substance, chew an opening in it, create a split or tear on top of the thorax and down the back, and then wriggle free. The mantid's leg casings do not split open, and many nymphs die when unable to fully kick free of their old skin. Young mantids feed on whatever small insects they can find, including each other. The mantids continue to grow until the time for mating comes in late summer, and then the whole process begins again.
SELF-DEFENSE
The mantid's primary enemies are birds, mammals (especially bats), spiders, snakes, and, of course, man. The mantid has four primary defense mechanisms against those who would prey on it.
Camouflage-the mantid's brown and green coloring allows it to blend in with surrounding foliage.
Stealth-the mantid's ability to stay perfectly still for long periods of time causes it to be overlooked by many would-be predators.
Startle-display-when confronted by an enemy the mantid can rear up on its hind legs and spread and rattle its wings in an act of intimidation.
Ultrasonic ear-used when encountering bats in flight. Unfortunately, the mantid has no defense against pesticides which it ingests through its prey.
Incidentally, there is a form of martial art called Praying Mantis Kung-Fu.
Please refer to the section entitled Praying Mantis Kung-Fu at the end of the document for more information.
CULTURAL SIGNIFICANCE
The word "mantis" comes from ancient Greece and means "diviner" or "prophet". Many cultures have credited the mantid with a variety of magical qualities:
France-French peasants state that if a child is lost, the mantid's praying stance points the way home.
Turkey & Arabia-The mantid always prays toward Mecca.
Southern U.S.-The brown saliva of the mantid will make a
man go blind or kill a horse.
China-Roasted mantid egg cases will cure bed wetting.
Africa-If a mantid lands on a person it brings them good luck, and a mantid can bring the dead back to life.
European Middle-Ages-The mantis was a great
worshipper of God due to its time spent in prayer.
Perhaps the best measure of the hold mantids have on our cultural imagination is the fact that they are almost surely pictured prominently in any book about insects intended for a popular audience. Here are some interesting and common names that the praying mantis has commonly been given:
1. Sooth-sayers-(England)-from the Greek roots of the word "mantis"-meaning "prophet."
2. Devil's Rearhorses, Devil horses (Southern U.S.)-from the mantid's tendency to rear up on its hind legs when threatened.
3. Mulekillers (Southern U.S.)-from the (false) belief that the brown saliva emitted by a mantis will kill a mule.
4. Camel-crickets (origin unknown)
PRAYING MANTIS KUNG-FU
If one talks about Praying Mantis Boxing, then one must know that its founder and patriarch was someone named Wang Lang. It is unknown exactly when he lived and what kind of family he came from, but certainly his family was not wealthy. Wang Lang was famous for his passion for martial arts and was an outstanding person. He traveled a lot around the Empire Under Heaven (China), studying different styles of boxing, and had many friends skillful in martial arts.
Once, during the mid-autumn festival, Wang Lang went hiking in the Lao Shan mountains. He looked at the magnificent cliffs above and the boundless rivers below and felt astonished by this mighty vastness. When out of curiosity he decided to climb even higher, following the curvy and steep path up the mountains, Wang Lang suddenly heard the quiet sound of a bell ringing somewhere nearby. Walking along the path, Wang Lang soon reached an ancient temple, an abode of hermits, and decided to enter in order to get some food and water. The first thing he saw was Taoist monks practicing the art of boxing in the main plaza of the temple. Wang Lang counted about sixty positions and styles that he had never seen before. Then Wang Lang asked the Taoist monks a question but was not given an answer; he asked again, but the answer was just silence, randomly interrupted by the sounds of their movements. Finally, Wang Lang decided to attract the attention of one of the practitioners by pulling his arm. The monk became angry, seeing the great boldness and lack of etiquette of this uninvited guest, and jumped on Wang Lang with clenched fists, ready to punish him. However, the monk was immediately knocked down by Wang Lang's quick response. A dozen monks ran to help their religious brother, but all failed. The monks started yelling and called the abbot. When the abbot came, Wang Lang explained the situation: he had just wanted to ask for food and water and did not have any bad intent. The abbot replied: "All these are my disciples and monks, and I am deeply ashamed by their failure; would you please indulge me with a fight?" Wang Lang agreed but lost the fight. Then Wang Lang realized the depth of the abbot's martial skills and immediately left the temple.
Wang Lang went deep into the woods and decided to rest. He lay down and started thinking about his unsuccessful fight and the reasons why he lost it. Suddenly he saw two white praying mantises on a tree branch. One of them was holding a fly in its front legs, and the other tried to take away the prey. During the fight one mantis would attack and the other would jump from side to side, ducking and counter-attacking with lightning speed. Wang Lang concentrated all his mind on this fight and suddenly realized the hidden principles behind the outstanding flexibility and agility of the praying mantis's attacks, counter-attacks, and moves.
He then immediately returned to the Taoist temple and started a fight with the abbot. As soon as the venerable abbot saw that Wang Lang's hand techniques were noticeably different from the last time they had fought, and sensed that this fight would be won by Wang Lang, he asked about the source of such a technique, but Wang Lang continued fighting in complete silence. After a while the abbot asked again but did not get an answer. Only when Wang Lang won the fight did he tell the abbot the reason for his success. The abbot immediately sent his disciples to the woods to catch about ten pairs of praying mantises. When the insects were delivered, the abbot put them on the table and set them to fight each other. In this manner Wang Lang and the abbot spent quite a long time learning the movements and tactical positions of the praying mantises engaged in deadly fights. Then the two masters developed a new, secret technique of boxing which was significantly different from other ones. Later Wang Lang said to the abbot: "Even though you and I developed a new style of boxing, we should not forget the cause and the source of our knowledge. If the praying mantis, while striving for food and existence, had not revealed its secrets to us, we would never have developed this new style." The abbot replied: "You are right! In order to perpetuate the memory of the source, we shall call this style 'The Gates of Praying Mantis' (Tang Lang Men)." Wang Lang and the abbot developed twelve characters - guiding principles of the praying mantis fighting technique: zhan (contacting), nian (sticking), bang (linking), tie (pressing), lai (intruding), jiao (provoking), shun (moving along), song (sending), ti (lifting), na (grabbing), feng (blocking), bi (locking). They also developed formal sets of praying mantis technique, such as: Beng bu (crushing step), Lan jie (obstruction), Ba zhou (eight elbows), Mei hua lu (plum blossom technique) and Bai yuan tou tao (white ape steals the peach). However, for a long time this new style was a privilege of the Taoist monks of the Lao Shan Taoist religious community, and it was kept as a part of the secret Taoist doctrine, closed to lay people. Wang Lang, for the rest of his days, lived in the Taoist temple practicing self-cultivation, developing Praying Mantis boxing and following the way of the Tao...
f:\12000 essays\sciences (985)\Biology\PredatorPrey relationships.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The relationship between predators and their prey is an intricate and complicated one, covering a great area of scientific knowledge. This paper will examine the different relationships between predator and prey, focusing on the symbiotic relations between organisms, the wide range of defense mechanisms utilized by various kinds of prey, and the influence of predators and prey on each other's evolution and population structure.
Symbiosis is an interaction in which organisms form a long-term relationship with each other. Many organisms become dependent on others; either both need one another to survive, or one needs the other. Symbiotic interactions include forms of parasitism, mutualism, and commensalism.
The first topic of discussion in symbiosis is parasitism. Parasitism occurs when the relationship between two animal populations becomes intimate and individuals of one population use members of the other population as a source of food, living in or on the host animal (Boughey 1973). No known organism escapes being a victim of parasitism (Brum 1989).
Parasitism is similar to predation in the sense that the parasite derives nourishment from the host on which it feeds, just as the predator derives nourishment from the prey on which it feeds (Nitecki 1983). Parasitism differs from most normal predator-prey situations because many different parasites can feed off just one host, whereas very few predators can feed on the same prey (1973). In parasite-host relationships the parasite is most commonly smaller than the host, which helps explain why many parasites can feed off a single host. Another difference is that the parasite, or group of parasites, does not normally kill the host by feeding, whereas a predator will kill its prey (1983). Efficient parasites will not kill their host at least until their own life cycle has been completed (1973). The ideal situation for a parasite is one in which the host animal lives long enough for the parasite to reproduce several times (Arms 1987).
Parasites fall into two categories according to where on the host they live. Endoparasites are usually the smaller parasites and tend to live inside the host (1973). These internal parasites have certain physiological and anatomical adaptations that make their life easier (1987). An example is the roundworm, which has a protective coating around its body to ensure that it will not be digested. Many internal parasites must have more than one host in order to carry out reproduction (1989). A parasite may lay eggs inside the host it is living in, and the eggs are excreted with the host's feces. Another animal may then pick up the eggs of the parasite by eating something that has come into contact with the feces.
The larger parasites tend to live on the outside of the host and are called ectoparasites (1973). Ectoparasites usually attach to the host with special organs or appendages, clinging to areas with the least amount of contact or friction (1973). Both endoparasites and ectoparasites are capable of carrying diseases and passing them to their hosts, and then possibly to predators of the host (1973). One example is the deer tick, which can carry Lyme disease and pass it on to humans or wildlife. The worst outbreaks of disease from parasites usually occur when a particular parasite first comes into contact with a specific population of hosts (1975). An example of these ramifications is the onset of the plague.
Many parasites are unsuccessful and have a difficult time finding food because appropriate hosts may be hard to find (1987). To compensate for low survival rates due to the difficulty of finding a host, many parasites lay thousands or millions of eggs to ensure that at least some offspring find a host and keep the species alive (1987). The majority of young parasites do not find a host and tend to starve to death. Parasites are also unsuccessful if they cause too much damage to their host (1987). Parasites are host specific, meaning that their anatomy, metabolism, and life-style are adapted to those of their host (1973).
Some parasites react to the behavior of their hosts, an interaction called social parasitism (1989). More simply put, a parasite may take advantage of the tendencies of a particular species for the benefit of its own. An example of this is the European cuckoo. The adult cuckoo destroys one of the host bird's eggs and replaces it with one of its own (1991). The host bird then raises the cuckoo nestling even when the cuckoo is almost too large for the nest and much bigger than the host bird (1991). This is a case where the parasite uses the host to perform a function, making life and reproduction easier for itself.
Parasite-host relationships play an important part in the homeostasis of nature (1975). Parasitism is an intricate component in the regulation of the populations of different species.
Mutualism is another topic at hand in discussing predator-prey relationships.
Mutualism is a symbiotic relationship in which both members of the association benefit (1989). Mutualistic interaction is essential to the survival or reproduction of both participants involved (1989). The best way to describe mutualistic relationships is through examples, and examples from several different environments are given below.
Bacteria that live in the intestinal tracts of mammals receive food but also provide the mammals with vitamins that the bacteria synthesize (1975). Likewise, termites, whose primary source of food is the wood they devour, would not be able to digest that food were it not for the protozoans present in their intestinal tracts (Mader 1993). The protozoans digest the cellulose that the termites cannot handle. Mycorrhizae, which are fungal roots, have a mutualistic symbiotic relationship with the roots of plants (1989). The mycorrhizae protect the plant's roots and improve its uptake of nutrients; in exchange, the mycorrhizae receive carbohydrates from the plant.
Mutualistic partners have acquired many adaptations through coevolution. Coevolution has led to synchronized life cycles between many organisms, and through mutualism many organisms have been able to function together as a working unit rather than as individuals.
Commensalism is a relationship in which one species benefits from another species that is unaffected (1975). For instance, several small organisms may live in the burrows of other, larger organisms at no risk or harm to them. The smaller organisms receive shelter and eat from the larger organism's excess food supply.
An example of commensalism is a barnacle's relationship with a whale. The barnacles attach themselves to the whale and are provided with both a home and transportation. Another example is the remora, a fish that attaches itself to the belly of a shark with a suction-cup dorsal fin. The remora gets a free ride and can eat the remains of the shark's meals. Clownfish are protected from predators by seeking refuge in the tentacles of sea anemones; most other fish stay away because the anemones carry a poison that does not affect the clownfish, so the clownfish is safe.
Commensalism thus pairs dominant predators with opportunistic organisms that feed off the good fortune of the larger animals. Another topic concerning predator-prey relationships is the set of defense mechanisms that prey need in order to outwit their predators.
In order for an animal to sustain life, it must be able to survive among the fittest of organisms. An animal's anti-predatory behavior determines how long it can survive in an environment without becoming some other animal's prey. Some key antipredator adaptations are described and examined below.
Perhaps the most common survival strategy is hiding from one's enemies (Alcock, 1975). Predators are extremely sensitive to movement and locate their prey by visual cues. By getting rid of these key signals, prey force their enemies to invest more time and energy looking for them. This may increase the time the prey has to live and reproduce (1975).
Hiding is generally achieved through cryptic coloration and behavior (1975). How effective an organism's camouflage is depends in part on how long the organism can remain immobile. Animals can resemble a blade of grass, a piece of bark, a leaf, a clump of dirt, or sand and gravel. In less than 8 seconds, a tropical flounder can transform its markings to match unusual patterns on the bottom of a laboratory tank (Adler, 1996). When swimming over sand, the flounder looks like sand, and if the tank has polka dots, the flounder develops a coat of dots (1996). Without any serious changes, the flounder can blend surprisingly well with a wide variety of backgrounds (Ramachandran, 1996). The behavioral side of camouflage involves more than just remaining motionless. An organism will blend into its background only if it chooses the right one; when it has, the organism positions itself so that its camouflage matches or lines up with the background. Even a beautifully concealed organism, however, may still be discovered at some point by a potential consumer (Alcock, 1975).
Detecting a predator is another very useful antipredator adaptation. Some prey species have an advantage over others in being able to detect a predator before it spots them or before it gets too close. In order to detect enemies in good time to take appropriate action, prey species are usually alert and vigilant whenever they are at all vulnerable (Alcock, 1975). A test was conducted in the early 1960s at Tufts University dealing with the ultrasonic sound waves that bats give off and the way moths can detect these sound waves (May, 1991). Bats hunting at night rely heavily on their sense of hearing to maneuver and hunt while flying in the dark, and moths are among the insects flying at the same hours. In the laboratory, bats and moths were observed, and every time a moth came close to a bat giving off an ultrasonic signal, the moth would turn and go the opposite way (1991). When the moth came too close to the bat, it would perform a number of acrobatic maneuvers such as rapid turns, power dives, looping dives, and spirals (1991).
Detection by groups of animals usually benefits the whole group. By foraging together, several animals increase the chance that some individual in the herd, flock, or covey will detect a predator before it is too late (Alcock, 1975). Each individual benefits from the predator detection and alarm behavior of the others, which increases the probability that it will be able to get away.
There is always a chance that prey will be chased by a predator, so evading predators is sometimes necessary to avoid capture when pursued. Outrunning an enemy is the most obvious evasion tactic (Alcock, 1975). When a deer or antelope is being chased, it does not simply flee in one direction; it alters its flight path, demonstrating erratic and unpredictable movements (1975). The deer or antelope may zig and zag across a savanna to make it more difficult for the predator to capture it.
Repelling predators is a strategy that can be either a last-chance tactic or the primary line of defense for an organism. This attack on the predator is used to drive it away from the prey. These adaptations can be classified as (1) mechanical repellents, (2) chemical repellents, and (3) group defenses (Alcock, 1975). An example of a mechanical repellent is sharp spines or hairs that make an organism undesirable. Some chemical repellents involve substances that impair the predator's ability to move or cause it to retreat because of an undesirable odor, bad taste, or poisonous properties. Groups of organisms can also repel predators; truly social insects utilize many ingenious group defenses (1975). For example, soldier ants possess an acidic spray and a sticky glue with which to douse their enemies (1975). They can also chop and stab their enemies with their sharp jaws.
One of the last types of antipredator behaviors and adaptations is mimicry. An organism that is edible but looks like a bad-tasting organism is known as a Batesian mimic. An illustration of how this mimicry works is that birds were at first more likely to go after the more conspicuous-looking items rather than those that did not stand out (Adler, 1996). If too many mimics exist, more predators will consume them, and soon they will become a primary food source. Organisms that share the same style of coloration take part in Mullerian mimicry; an example is the yellow and black stripes on bees and wasps. The idea is that this single shared look helps predators learn which organisms to avoid. The warning coloration in turn saves the organism's life and also helps the predator avoid a distasteful, maybe even toxic, meal.
Defense mechanisms vary drastically and change according to circumstances. The ability of an organism to survive depends largely on how well it can use its defense mechanisms to prolong its life.
The next topic of discussion is the reciprocal relationship between predators and their prey. Predators and prey affect each other in everything from day-to-day interactions to each other's evolution. Predator and prey populations move in cycles: the number of predators influences the number of prey, and the number of prey available influences the population of predators (a simple numerical sketch of such a cycle is given below). Predators and their prey also influence each other's evolution. Michael Brooke (1991) points out that natural selection should favor traits that help a species survive; a general example would be an increase in the speed of potential prey. Such evolutionary traits are usually followed by an evolutionary response in the predator. Using the increase in maximum speed as an example, evolution will favor predators that are fast enough to continue to catch the prey, leading to the evolution of a faster predator. Brooke (1991) compares the evolutionary process to an arms race, for both sides have to keep advancing in order to stay alive.
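The essay does not give a mathematical model, but the cycling it describes is conventionally illustrated with the Lotka-Volterra predator-prey equations, which are not named in the essay. The minimal Python sketch below uses arbitrary, purely illustrative parameter values and starting populations; it is meant only to show how the two populations can rise and fall out of phase with each other.

def lotka_volterra(prey=40.0, predators=9.0, steps=2000, dt=0.01,
                   a=1.1, b=0.4, c=0.4, d=0.1):
    """Euler steps of dH/dt = a*H - b*H*P and dP/dt = d*H*P - c*P (illustrative only)."""
    history = []
    for _ in range(steps):
        growth = a * prey - b * prey * predators        # prey reproduce but are eaten
        decline = d * prey * predators - c * predators  # predators thrive on prey, otherwise die off
        prey += growth * dt
        predators += decline * dt
        history.append((prey, predators))
    return history

for step, (h, p) in enumerate(lotka_volterra()):
    if step % 250 == 0:
        print(f"t = {step * 0.01:5.2f}   prey = {h:6.1f}   predators = {p:5.1f}")

With these made-up numbers the prey population peaks first and the predator population peaks shortly afterward; then both fall and recover, which is the idealized cycle referred to in the next paragraph.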
While predator and prey populations fluctuate, it is important to note that they operate within a cycle; in an ideal cycle, the predators and prey establish stable populations. Predators play a crucial role in regulating the population of the prey. The importance of predators can be seen in the Kaibab Plateau in Arizona (Boughey, 1968). At the beginning of this century, 4,000 deer inhabited 727,000 acres of land. Over the next 40 years, 814 mountain lions were removed from the area, and at the same time over 7,000 coyotes were removed. When the predators were removed, the population jumped to 100,000 deer by 1924 (Boughey, 1968). This population crashed by 60% over the next two years due to overpopulation and disease. Without predators, the prey could not establish a stable population, and the land ended up supporting a much smaller number than the estimated carrying capacity of 30,000 (Boughey, 1968).
The example can work in reverse: an increased number of predators feeding on a limited number of prey can lead to the extinction of the predators. This was the case with the ancient trilobites; these marine arthropods died out at the end of the Permian period, over 200 million years ago (Carr, 1971). According to Carr (1971), over 60 families of this animal have been found in fossil records. This highly successful creature became extinct due to changes in the prey population. During the Permian period, glaciation took place that changed the availability of the trilobites' food source, algae. One may conclude that the prey population dwindled and the trilobites could no longer support themselves.
Parasite-host relations also fit under the topic of predator-prey relationships. Parasites feed off their prey just as predators do (Ricklefs, 1993), but it is in the interest of the parasite to keep its host alive. In some cases the parasite acts so efficiently that it leads to the death of its host, but most parasites achieve a balance with their hosts. Even when a parasite does not lead directly to the death of its host, it can affect the host in a variety of other ways. A host could become weaker and be unable to compete for food or reproduce, or the parasite could make its host less desirable as a mate, as is the case with Drosophila nigrospiracula, the Sonoran desert fruit fly.
Michael Polak et al. (1995) conducted a study examining the effects of Macrocheles subbadius, an ectoparasitic mite, on sexual selection in these fruit flies. The mites feed off animal dung and rotting plant tissue (Polak et al., 1995) and rely on the fruit flies for transportation between feeding sites as well as for food. Polak et al. found that male flies infested with the mites had less of a chance of mating compared to males that had never been infested. But Polak et al. (1995) also found that once the mites were removed and the male was allowed to recover from any damage done by them, the fruit fly had the same chance of mating as a male that had never been infested. This suggests that females are selective when choosing their mates.
With females choosing not to mate with males that are infested with the mites, the evolution of the species is being affected. Males that exhibit resistance to mites are favored, so these characteristics will be passed on to the offspring, leading to the development of mite-resistant Drosophila nigrospiracula. There are several theories as to why the mites put males at a disadvantage. Based on the research compiled by Polak et al. (1995), males could be overlooked because infested males might not survive to help raise the offspring, or infested males might not mate because they are weakened by the parasites and do not perform well in contests for mates. Whatever the case, parasites have an effect on their prey.
In a similar scenario, the parasitic relationship between cuckoos and other birds, the development of resistance to a parasite leads to the evolution of the parasite. This reciprocal process is known as coevolution. Nitecki uses grass as a simple example of the phenomenon (1983): grass evolves resistance to a strain of rust by making a single gene substitution, and the rust counters this step with its own single gene substitution (Nitecki, 1983). He adds that many parasites are host specific, so they are keyed into their host and can adjust to the appropriate changes when necessary. This is why parasites are a continual problem, not just an irritant that is rendered extinct by one simple change in the host's evolution.
This helps explain why the cuckoo continues to successfully lay its eggs in the nests of Meadow Pipits, Reed Warblers, Pied Wagtails, and Dunnocks (Brooke, 1991). According to Brooke (1991), the host birds usually are deceived by the cuckoo's egg and then raise the cuckoo chick instead of their own. By examining the cuckoo, it is easy to see how evolution has refined the parasitic process. According to Brooke (1991), the cuckoo will watch its host as it builds its nest, wait until both parents are away from the nest, then enter the nest to remove one of the original eggs and lay its own. Each species of cuckoo has evolved to specifically target one of the four possible host birds. According to Brooke (1991), the Great Reed Warbler cuckoo will lay an egg that is similar in size and color to the host's, and the cuckoo has perfected the intrusion to a science, spending about 10 seconds in the nest of its host.
The next step of parasitism comes once the cuckoo has hatched. The process the chick goes through is described by Brooke (1991): the chick hatches before the rest of the clutch because of its shorter incubation period and then pushes the other eggs out of the nest. The host family will not abandon the chick; while the exact reason is not known, there are several theories. According to Brooke (1991), the parents either have nothing to compare the chick with, or decide that it is too late to raise a new clutch, and so they raise their adopted chick.
Brooke describes some of the tests carried out in his research (1991) concerning the factors that influence the rejection rate of cuckoo eggs. Most birds will not reject eggs that are similar to their own, but larger eggs have a higher rate of rejection. If the host birds see the cuckoo at the nest, the rate of rejection increases greatly (Brooke, 1991), which explains why cuckoos have evolved such a rapid intrusion.
Brooke shows an example of the evolutionary process at work when he examines the Dunnock's relationship with the cuckoo (1991). The Dunnock cuckoo has not developed an egg that mimics the Dunnock egg, because Dunnocks accept eggs of any size and color. Brooke (1991) believes that the Dunnock is a species only recently brought under parasitism, for only 2% of the Dunnocks in England are preyed upon. Therefore, Dunnocks have not yet developed any defenses against the cuckoo, so the cuckoo has had no need to develop any traits to aid in parasitism. Brooke (1991) gave other examples of evolution by testing isolated species of hosts; these birds were not as discriminating, implying that they lacked the evolutionary advancement of detecting and rejecting parasitic eggs. The cuckoo and its hosts are clear examples of how both the predator and the prey affect each other's evolution.
In some cases, predator-prey relations take place between members of the same species. Many animals exhibit group behavior; worker bees serve the queen bee and wolves follow an established ranking system. But when members of the same species endanger each other for individual protection, the member of the species that faces death is being used as prey by the member that survives. Robert Heinsohn describes this relationship in lions when territorial disputes occur. The leader lion will be 50 to 200 meters ahead of the laggards when approaching an invading lion (Heinsohn, 1995); the leader faces severe injury and even death while the laggards reduce their risk by staying behind (Heinsohn, 1995). Similar behavior has been observed in many species of birds, whose hatchlings commit siblicide in order to maximize their own chances of survival, as described by Hugh Drummond et al. (1990). Drummond et al. observed cases of siblicide in black eagles; one of the chicks usually hatches 3 days before the other and is therefore significantly larger than its sibling (1990). Drummond et al. observed the older eaglet deal 1,569 pecks to its younger sibling in 3 days, eventually killing the younger chick. This phenomenon supports several key concepts in evolution. The older sibling is competing with others for resources (food and nesting space), so killing the weaker member promotes the survival of the older bird (Drummond et al., 1990). If resources are limited and both siblings cannot survive, the species will continue to survive due to the death of the younger sibling. However, Drummond et al. (1990) point out that there are several evolutionary losses when a sibling dies: reproductive potential is lost, as well as a degree of insurance (in case one of the offspring does not survive to maturity). Excuse the pun, but putting all of the eggs in one basket is a large risk.
Predators and their prey are part of a cycle; both are necessary components and they depend on each other for their existence. Any change made in one area will affect the other.
Overall, predator-prey relations are very complex. The topic can be broken into three areas, symbiotic relationships, defense mechanisms, and the mutual influence between predators and prey, and it is important to see how all three of these subjects tie together. Parasitism is an example of a symbiotic relationship; parasites are predators living off their prey, and parasites also affect the evolution of their hosts. Natural selection favors organisms that are resistant to parasites, so these organisms evolve. Organisms have a range of defense mechanisms available to protect themselves from predators, so predators now face tougher prey and must themselves evolve in order to stay successful. This completes the cycle and leads to a diverse and interesting world.
References
Adler, T. 1996. Fish Blend Quickly into the Background. Science News, 149:133.
Adler, T. 1996. How Bad-Tasting Species Got their Markings. Science News, 160:118.
Alcock, John. 1975. Animal Behavior. Sunderland, Sinauer Associates. 379-385.
Boughey, Arthur S. 1968. Ecology of Populations. New York, Macmillan Company, 89-101.
Brooke, Michael and Nicholas B. Davies. 1991. Coevolution of the Cuckoo and Its Hosts. Scientific American, 264:92.
Brum, Gil, Larry McKane, and Gerry Karp. 1993. Biology, Exploring Life. New York, John Wiley. 973-975.
Carr, Donald E. 1971. The Deadly Feast of Life. Garden City, Doubleday and Company, 179-180.
Drummond, Hugh, Douglas Mock and Christopher Stinson. 1990. Avian Siblicide. American Scientist, 78:438.
Heinsohn, Robert and Craig Packer. 1995. Complex Cooperative Strategies in Group- Territorial African Lions. Science, 269:1260.
Mader, Sylvia S. 1993. Biology. Dubuque, Wm. C. Brown Publishers, 761-762.
May, Mike. 1991. Aerial Defense Tactics of Flying Insects. American Scientist, 79:316.
Nitecki, Matthew H. 1983. Coevolution. Chicago, University of Chicago Press, 1-38.
Polak, Michael and Therese A. Markow. 1995. Effect of Ectoparasitic Mites on Sexual Selection in a Sonoran Desert Fruit Fly. Evolution, 49: 660.
Ramachandran, V.S., C.W. Tyler, R.L. Gregory, and D. Rogers-Ramachandran. 1996. Rapid Adaptive Camouflage in Tropical Flounders. Nature, 379:815.
Ricklefs, Robert E. 1993. The Economy of Nature. New York, W.H. Freeman and Company, 322.
Turk, Jonathan, Amos Turk, Janet Wittes, and Robert Wittes. 1975. Ecosystems, Energy, Population. Philadelphia, W.B. Saunders Company, 59-63.
f:\12000 essays\sciences (985)\Biology\Prions.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PRIONS
Prions have been a mystery to scientists from the day they were discovered. Prions act like viruses, but they are not viruses; their structure and chemistry are not fully known. They are believed to be proteins, but that has yet to be completely proved.
Prion stands for "proteinaceous infectious particle". Prions are known to cause several diseases of the nervous system, particularly of the brain. They are the agents that cause the well known "mad cow" disease in Britain and scrapie in sheep. In humans they are known to cause a rare disease in Papua New Guinea called Kuru (or "laughing death"), which struck only the cannibals among the Highlander tribes. Investigation led to the discovery of prions inside the brains of the victims: when people died, their brains were eaten by the tribesmen as a sign of respect, and so the chain of infection went on and on.
What makes prions so special is that they lack the basic elements for reproduction, deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). This is what has given science a great deal of doubt, as it would give the accepted dogma of how life reproduces a radical turn.
Prions have been under research for many years, with experiments like the one done by Stanley B. Prusiner and his team of scientists at the School of Medicine of the University of California at San Francisco, in which a study was carried out on mice to see whether the scrapie agent, the cause of another prion disease, could be purified. But mice, like humans, take very long to develop the disease (for example, Gerstmann-Straussler-Scheinker disease and fatal familial insomnia appear mostly in humans past the age of forty, and only in very rare cases before), so the experiment was switched to hamsters, which die sooner because they develop the disease earlier. One of the methods used for this purification process was the centrifuge, which separates the components of a mixture according to their size and density. After a decade of experiments using the centrifuge method and other chemical methods, several discoveries were made: the infectious particles were extremely heterogeneous in size and density; the scrapie agent can be found in many molecular forms; and the biological activity of the scrapie agent depends on a protein (PrP, named later when it was discovered to be a single molecular species of protein). This protein was found to be a glycoprotein: sugars are bound to the amino acids, and it is half the size of hemoglobin. PrP is the protein, but today we also speak of PrPc (for cellular) and PrPsc (for scrapie). PrP can be found in "steak" (skeletal muscle) and also on the surface of lymphocytes present in milk, but there is no evidence that ingesting these things can cause disease in humans, though there is still a risk.
Amyloid hypothesis.
Prions were found to form rods: long fibrils in brain tissues infected with scrapie and Creutzfeldt-Jakob disease. It was believed that these fibrils could be distinguished from amyloid, that they represented a filamentous animal virus causing scrapie, and that they were an elongated form of the prion rods. The most important aspect of these rods is their resemblance to amyloid. Amyloid plaques in the central nervous system are considered accumulations of waste formed by some sort of disease process, and these plaques are believed to be aggregations of prions in an almost crystalline state. The production of antibodies to PrP made it possible to demonstrate that amyloid plaques in the brains of scrapie-infected hamsters contain prion proteins. Such amyloid plaques have also been found in Alzheimer's disease patients, which raises the question of whether prions are related to that disease. Although this has not been proven, the hypothesis is considered plausible.
Can prion infectivity be reduced or eliminated?
Some experiments were done with various substances to see whether prion infectivity could be reduced or eliminated. One of the substances used was protease, which acts only on proteins. Protease did reduce prion infectivity, but it was not totally effective. That is why PrPc is described as "protease sensitive" while PrPsc is "relatively resistant to proteases" (one difference between the two).
Also, boiling a prion solution in sodium dodecyl sulfate (SDS) reduced the infectivity, as the protein was denatured. Finally, extremely high doses of radiation inactivated the scrapie agent, but this was not a practical solution.
How do Prions infect?
There is a theory proposed by scientists at the National Institute of Allergy and Infectious Diseases (NIAID) which states that prions do not need DNA, but are simply proteins that convert other proteins to their own form. The experiment consisted of adding a traceable radioactive label to a certain protein, which was then introduced into unlabeled scrapie material; after a few days of incubation an enzyme was added to the solution to destroy any protein other than prions, and the result was that a prion carrying the radioactive trace was found. Therefore we can say that the protein was transformed by the prion. The suggested picture is that prions form a sort of wall into which the harmless protein fits exactly like a brick; by an as yet unexplained change of only one amino acid, the alpha-helix protein is turned into a beta-pleated one. This helps the prions, as beta-pleated proteins tend to be stickier (because of the charges involved in the hydrogen bonds).
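The essay gives no numbers for this conversion, but the logic of the protein-only idea, one misfolded molecule recruiting normal ones which then recruit more, can be shown with a toy calculation. The sketch below is purely illustrative: the starting counts, the per-step conversion probability and the number of steps are invented for the example and do not come from any prion study.

import random

def prion_conversion(normal=10_000, misfolded=1, p_convert=0.1, steps=120, seed=1):
    """Toy model: each misfolded molecule may convert one normal molecule per step."""
    rng = random.Random(seed)
    for step in range(steps + 1):
        if step % 20 == 0:
            print(f"step {step:3d}: normal = {normal:6d}, misfolded = {misfolded:6d}")
        converted = sum(1 for _ in range(misfolded) if rng.random() < p_convert)
        converted = min(converted, normal)   # cannot convert more molecules than remain
        normal -= converted
        misfolded += converted

prion_conversion()

The only point of the toy model is that growth is roughly exponential, so a vanishingly small seed of misfolded protein can eventually take over the whole pool, which fits with the long incubation times mentioned later in the essay.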
Bovine Spongiform Encephalopathy (BSE) or Mad Cow disease
This prion infection gained popularity again in the news headlines when it was discovered by the press, although it had been known to scientists since the early 1980s. The unusual attention gained by this disease arose because, even though it has not yet been scientifically proven, it may be transmissible to humans, either by eating beef or by drinking cow's milk. What is scientifically proven is that it can be transmitted to cats, mice and other ruminants by the ingestion of infected cow parts, especially the brain, which is a major point of infection since the PrP protein, the one believed to be infected or rather misfolded, is related to the nervous system, specifically to synapses. Even though other prion diseases such as Kuru are transmitted by brain ingestion, Kuru is a disease unique to humans, while BSE is so far related only to other animals. Other prion diseases of animals are scrapie, which attacks sheep; Transmissible Mink Encephalopathy, which attacks mink; and Chronic Wasting Disease, which attacks elk and mule deer.
It is known that BSE was acquired by British cows when they started consuming a prepared industrial feed made from what was left of sheep bones and meat, much of it from animals that had been infected with scrapie. This prion is known to survive pasteurization and all cooking methods such as frying and stewing. There is as yet no certain way to treat prion diseases, and the only way to avoid further infection is to kill the animals and dispose of their bodies completely, as prions can survive in the placenta, in the meat, and on the ground for a long time. It is not enough to get rid of the mother, as the disease is also hereditary.
Human prion diseases
CJD (Creutzfeldt-Jakob Disease): it occurs most frequently in older adults, who suffer involuntary trembling and jerking (ataxia) of the leg muscles, incoordination that then spreads to the arms, slurred speech and incontinence; finally they become incapable of making sounds or swallowing.
*Today human growth hormone is manufactured through biotechnology engineering (r-hGH) so transmission of the Creutzfeldt-Jakob prion is no longer a risk with these recombinant products.
GSS (Gerstmann-Straussler-Scheinker Syndrome).
FFI (Fatal Familial Insomnia).
Kuru.("laughing death)
Alpers Syndrome.
* Sporadic CJD occurs at a rate of about 1 case per million people per year.
GSS is rarer, occurring only about 2% as often as CJD.
1 out of 10,000 people are believed to be infected with CJD at the time of their death.
Other diseases not yet proven to be prion-related are Alzheimer's disease (a disease in which an increase in amyloid plaques is accompanied by rising mental dysfunction; amyloid is explained above), Parkinson's disease, amyotrophic lateral sclerosis and other mental diseases that arise with age.
Prion diseases in humans are usually associated with older people, since they generally appear after the age of 40; prions are known to take a long time to act on the human body, unlike in hamsters, which develop the disease rapidly. These diseases involve loss of motor control, dementia, paralysis and wasting, and they usually lead to death, often after an attack of pneumonia. The symptoms appear because of the attack the central nervous system (CNS) receives from the prions. As was said earlier, the PrP protein believed to mutate and cause prion disease is closely related to synapses, the connectors of the human nervous system. The mutation of the protein may therefore cause disorders in the transmission of electrical impulses, and, as usually happens in old people, replacement of this protein takes very long or does not take place at all. At autopsy the brain presents characteristic signs such as non-inflammatory lesions, vacuoles, amyloid protein deposits and astrogliosis, and the brain tissue has a spongy appearance. Most of these diseases are hereditary, but some, such as CJD, are known to appear sporadically.
What exactly prions are, we still do not know, but as new methods are used for research, things become clearer. Some solutions have already appeared, like human growth hormone manufactured through biotechnology (r-hGH), which removes the risk of transmitting the Creutzfeldt-Jakob prion, and many other diseases may be treatable in the future, including Alzheimer's disease, which affects a great part of the population, if it turns out to be related to prions. As Stanley B. Prusiner said:
"If the prion is indeed a single protein and the product of a gene native to the host organism, the time may have come for a reconsideration of what is meant by the concept of infection.
f:\12000 essays\sciences (985)\Biology\Properties of Water.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Water is essential for life as we know it on earth. It is used by plants and animals for basic biological processes which would be impossible without it. The origin of all life can be traced back to the water in the Earth's Precambrian seas. Water is also called the universal solvent: it dissolves more elements and compounds than any other substance known to man.
Water is a polar molecule made up of two atoms of hydrogen and one atom of oxygen. It is attracted to itself by hydrogen bonds. Hydrogen bonds are weaker than covalent bonds, but collectively these bonds hold water together and give it its cohesiveness. These bonds are also very important to water's ability to absorb heat; without hydrogen bonds, water would have a boiling point of about -80 degrees C and a freezing point of about -100 degrees C.
In reality, however, water has a boiling point of 100 degrees C and a freezing point of 0 degrees C. The amount of energy needed to raise the temperature of one gram of water by one Celsius degree is called a calorie. One calorie is about twice as much energy as is needed to warm one gram of most other fluids by the same amount. This makes water much better at regulating the temperatures of animals and the environment.
Water also has a very high heat of fusion and heat of vaporization. Melting one gram of ice into liquid water requires 80 calories of energy (and freezing releases the same amount), while converting one gram of very hot water into steam requires 540 calories. The large amounts of energy required to change water out of its liquid state make water tend to stay a fluid. The process of freezing water involves slowing down the activity of the water molecules; as water cools it contracts, but once it is cooled below about 4 degrees C it stops contracting, and when it freezes the hydrogen bonds lock into a rigid, open lattice, so the ice becomes less dense. Because ice is less dense than liquid water, it floats, and water freezes from the top down. Once the top freezes, it acts as an insulator, so the water beneath it takes a very long time to cool enough to freeze. This also traps just enough warmth to keep aquatic animals alive during the winter.
The process of turning water into steam is a different story. Because it requires the breaking of water's hydrogen bonds, this process takes far more energy than it does to turn water into ice. The extra energy that is used in converting water into steam helps keep the overall temperature from getting too hot. In this manner water regulates the temperature both of animals, when they sweat, and of the earth, through evaporation.
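A quick back-of-the-envelope calculation with the figures quoted above (1 calorie per gram per degree, 80 calories per gram for melting or freezing, 540 calories per gram for vaporization) makes the point concrete. The snippet below is only a worked example using those numbers.

SPECIFIC_HEAT = 1.0            # calories to warm 1 g of liquid water by 1 degree C
HEAT_OF_FUSION = 80.0          # calories to melt 1 g of ice (released again on freezing)
HEAT_OF_VAPORIZATION = 540.0   # calories to turn 1 g of liquid water into steam

# Energy budget for one gram of water taken from ice at 0 C all the way to steam:
melt = HEAT_OF_FUSION                # 80 cal
warm = SPECIFIC_HEAT * (100 - 0)     # 100 cal to go from 0 C to 100 C
boil = HEAT_OF_VAPORIZATION          # 540 cal

print(f"melt: {melt:.0f} cal, warm 0->100 C: {warm:.0f} cal, vaporize: {boil:.0f} cal")
print(f"evaporating a gram carries away {boil / warm:.1f} times the heat of warming it")

Roughly three-quarters of the total energy goes into the final evaporation step, which is why sweating and evaporation are such effective ways of shedding heat.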
Water affects the earth's ecosystems in very important ways as well. Water in the earth's saltwater bodies evaporates into the air; this water vapor then cools, becomes liquid again, and falls as rain or snow. The salt is left behind, and the resulting precipitation helps replenish the water in lakes, streams, rivers, and the groundwater supply. Eventually all of this water flows back down to the level of the oceans, and the cycle begins again. Because of this cyclical pattern, water is considered to be a renewable resource. However, some chemical impurities can remain with the water, even through the process of evaporation. These remain in the water and cause problems until they are either filtered out by natural or artificial processes, or until they are diluted enough that they are no longer a problem. Of all the water on the earth, only three percent is fresh, and of that three percent only about one third is considered safe for consumption.
The properties of water give it the ability to react with different elements and molecules in very interesting ways. Water's properties allow it to be the focal point of many cellular functions, primarily because of its reactive abilities.
Ionization is one example of these reactions. It occurs when a water molecule in a hydrogen bond with another one loses an atom of hydrogen; the remaining particle is a hydroxyl ion. Small molecules with charges different from water's can cause ionization to happen as well. During the process of ionization water releases an equal number of hydrogen ions (H+) and hydroxyl ions (OH-). This dissociation process involves only a few water molecules at once; the actual concentration of each ion is about 10^-7 moles per liter.
Acids [L. acidus, sour] are molecules that release the hydrogen ions in the dissociation process. Strong acids, such as hydrochloric, dissociate almost entirely in water. Bases are molecules that take up these extra hydrogen ions.
Water passes through pores easily. Cells take advantage of this by having "channels", tiny holes in the cell membrane. These are exactly the right size for water to get through, while larger particles are held inside.
Osmosis [Gk. osmo, pushing] is defined by the Sylvia Mader textbook as "the diffusion of water across a differentially permeable membrane". This process is caused by a fluid attempting to reach equilibrium by moving from a high-pressure situation into a lower-pressure one. The pressure that drives this process is known as osmotic pressure.
Another interesting state that water can be in is that of an isotonic solution. These are solutions in which water is neither gained nor lost, and the pressure is equal on both sides of the cell membrane. When this pressure is not equal, the degree of the inequality is described by the term tonicity.
When the pressure is very unequal, so that it causes water to flow inward, the solution is known as a hypotonic solution [hypo, less than]. The "less than" prefix refers to a solution with a lower percentage of solute, one which contains more water than the cell. The cell then swells, possibly even to the point where it bursts; this bursting is referred to as lysis. The pressure that causes the cell to pop in the first place is referred to as turgor [L. turg, swell] pressure.
The opposite state is referred to as a hypertonic solution [hyper, more than]. The "more than" prefix refers to a solution with a higher level of solute, so that the cell contains more water than the outside solution. Therefore, a cell in a hypertonic solution tends to shrivel up like a grapefruit in the sun.
Animals regulate the amount of water in their bodies in very individual ways, each suited to the environment in which it lives. Sharks and some fish are able to live in an environment nearly saturated with salt by having a sort of immunity to it; some sharks survive by keeping their blood as concentrated as the surrounding water.
Certain seaside animals have likewise developed ways to keep the salt in their water from dehydrating them. Some kinds of birds and reptiles have a nasal salt gland which allows them to excrete the large amounts of salt that they take in when they drink. Some mammals can also live in highly saline environments by producing very concentrated urine and very dry fecal material.
THE END
f:\12000 essays\sciences (985)\Biology\Protein Synthesis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Protein Synthesis
Protein synthesis is the process by which genetic information from the DNA stored in the nucleus is transferred to the ribosomes where it is used to arrange amino acids into proteins.
The DNA molecule in the nucleus is unzipped by the enzyme RNA polymerase, and from one of the resulting single strands an mRNA molecule is built. The enzyme travels along the portion of DNA being transcribed and attaches complementary bases to the growing mRNA backbone (a structure composed of phosphates and ribose sugars). The nitrogen bases of this new molecule are identical to those of the opposite strand of the original DNA, except that thymine is replaced with uracil. The formation of this molecule allows proteins to be constructed at the ribosome without exposing the DNA itself to the cytoplasm.
The mRNA travels through the cell to a ribosome. Here tRNA molecules carrying the appropriate anticodons deliver the amino acids coded for in the mRNA, and each amino acid is connected to the next. The mRNA thus traverses the ribosome, with each three-base codon selecting an amino acid. This continues until the ribosome encounters a stop codon, at which point the amino acid chain is released from the tRNA as a finished protein.
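As a rough illustration of the two steps just described, the short Python sketch below transcribes a DNA coding sequence into mRNA (thymine replaced by uracil) and then reads it codon by codon until a stop codon. The tiny codon table and the example sequence are invented for the illustration; a real genetic-code table has 64 entries.

# Minimal sketch of transcription and translation as described above.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(coding_strand_dna):
    """mRNA carries the same bases as the coding strand, with uracil replacing thymine."""
    return coding_strand_dna.upper().replace("T", "U")

def translate(mrna):
    """Read three-base codons in order, adding one amino acid per codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

example_dna = "ATGTTTGGCAAATAA"      # hypothetical gene fragment
mrna = transcribe(example_dna)       # -> AUGUUUGGCAAAUAA
print(mrna, translate(mrna))         # -> ['Met', 'Phe', 'Gly', 'Lys']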
This process allows for the genetic information stored in the DNA to be expressed in the physical and functional makeup of the cell.
f:\12000 essays\sciences (985)\Biology\Rabies Virus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Rabies
Rabies is an infectious disease of animals caused by a virus belonging to the family Rhabdoviridae. The virus particle is covered in a fatty membrane, is bullet-shaped, measures about 70 by 180 nanometres, and contains a single helical strand of ribonucleic acid (RNA).
Although rabies is usually spread among domestic dogs and wild carnivorous animals, all warm-blooded animals are susceptible to infection. The virus is often present in the salivary glands of infected animals, referred to as rabid, and is excreted in the saliva, so the bite of an infected animal easily introduces the virus into a fresh wound. In humans, rabies is not usually spread from person to person; rather, the majority of infections come from rabid dogs. After a person has been inoculated with the virus, it enters small nerve ends around the site of the bite and slowly travels up the nerve to reach the central nervous system (CNS), where it reproduces itself; it then travels down nerves to the salivary glands and replicates further. The time this takes depends on the length of the nerve it must travel: a bite on the foot will have a much lengthier incubation period than a facial bite would. This period may last from two weeks to six months, and often the original wound will have healed and been forgotten by the time symptoms begin to occur.
Symptoms in humans present themselves in one of two forms: 'furious rabies' or 'dumb rabies'. The former is so called because of the severe nature and range of the symptoms. The virus, upon reaching the CNS, presents the person with headache, fever, irritability, restlessness and anxiety. Progression may then occur to muscle pains, excessive salivation, and vomiting. After a few days or up to a week the person may go through a stage of excitement and be afflicted with painful muscle spasms, which are sometimes set off by swallowing saliva or water. Because of this the afflicted will drool and learn to fear water, which is why rabies in humans was sometimes called hydrophobia. The patients are also extremely sensitive to air or drafts blown on their face. This stage lasts only a few days before the onset of a coma, then death. Dumb rabies begins similarly to furious rabies, but instead of the symptoms progressing to excitement, a steady retreat into a quiet downhill state occurs. This may be accompanied by paralysis before death, and the diagnosis of rabies in such cases can be missed. Unfortunately, with both furious and dumb rabies, once the disease has taken hold clinically, rapid and relentless progression to death occurs despite all known treatments.
Treatment for the recently infected includes washing the wound with soap, detergent, and water; an anti-rabies serum can then be administered. As an alternative to the serum, an effective and intensive treatment after infection can be obtained through the use of a killed-virus vaccine, which works because of the unusually long incubation period. The vaccine, a Human Diploid Cell Vaccine (HDCV), is grown in human fibroblasts (the principal nonmotile cells of connective tissue) and is quite safe for human use. Where it has been used, the vaccine has dramatically cut the rabies death toll. Previous killed-virus vaccines, which had been made from infected neural tissue, were not completely effective at immunisation and caused adverse side effects.
Since contact with wild animals is the main source of infection for humans and their pets, avoidance of any direct contact with these animals reduces the risk of being bitten quite dramatically. Raccoons that are wandering in the daylight hours, or any animal that seems 'friendly' should be avoided as well. Other high-risk animals include skunks, foxes, jackals, wolves, as well as an odd association with bats.
f:\12000 essays\sciences (985)\Biology\rainforest.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acid rain is a serious problem with disastrous effects. Each day the problem grows, yet many people believe that the issue is too small to deal with right now. It should instead be met head on and solved before it is too late. In the following paragraphs I will discuss the impact acid rain has on wildlife and how our atmosphere is being destroyed by it.
CAUSES
Acid rain is a cancer eating into the face of Eastern Canada and the North Eastern United States. In Canada, the main sulphuric acid sources are non-ferrous smelters and power generation. On both sides of the border, cars and trucks are the main sources of nitric acid (about 40% of the total), while power generating plants and industrial, commercial and residential fuel combustion together contribute most of the rest. In the air, the sulphur dioxide and nitrogen oxides can be transformed into sulphuric acid and nitric acid, and air currents can send them thousands of kilometres from the source. When the acids fall to the earth in any form, they have a large impact on the growth or preservation of certain wildlife.
NO DEFENCE
In areas of Ontario, mainly southern regions near the Great Lakes, substances such as limestone or other natural antacids can neutralize acids entering a body of water, thereby protecting it. However, in large areas of Ontario near the Pre-Cambrian Shield, with quartzite- or granite-based geology and little topsoil, there is not enough buffering capacity to neutralize even small amounts of acid falling on the soil and the lakes. Therefore, over time, the environment shifts from an alkaline to an acidic one. This is why many lakes in the Muskoka, Haliburton, Algonquin, Parry Sound and Manitoulin districts could lose their fisheries if sulphur emissions are not reduced substantially.
ACID
The mean pH of rainfall in Ontario's Muskoka-Haliburton lake country ranges between 3.95 and 4.38, roughly 40 times more acidic than normal rainfall, while storms in Pennsylvania have produced rainfall with a pH of 2.8, almost the same as vinegar. Already 140 Ontario lakes are completely dead or dying, and an additional 48,000 are sensitive and vulnerable to acid rain because of the concentrated acidic soils that surround them.
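Because pH is a base-10 logarithmic scale, the "40 times more acidic" figure can be checked with a short calculation. The snippet below assumes normal, unpolluted rain has a pH of about 5.6 (a commonly cited value, not stated in this essay); only the pH figures of roughly 4.0 and 2.8 come from the text above.

# pH is logarithmic: relative acidity = 10 ** (reference pH - observed pH)
NORMAL_RAIN_PH = 5.6   # assumed value for unpolluted rain; not given in the essay

def acidity_factor(observed_ph, reference_ph=NORMAL_RAIN_PH):
    """How many times more hydrogen ions the observed sample has than the reference."""
    return 10 ** (reference_ph - observed_ph)

print(f"pH 4.0 rain is about {acidity_factor(4.0):.0f} times more acidic than normal")
print(f"pH 2.8 rain is about {acidity_factor(2.8):.0f} times more acidic than normal")

With those assumptions, pH 4.0 works out to roughly 40 times normal acidity, matching the figure quoted above, while the Pennsylvania storms at pH 2.8 come out to several hundred times normal.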
ACID RAIN CONSISTS OF....?
Canada does not have as many people, power plants or automobiles as
the United States, and yet acid rain there has become so severe
that Canadian government officials called it the most pressing
environmental issue facing the nation. But it is important to bear
in mind that acid rain is only one segment of the widespread
pollution of the atmosphere facing the world. Each year the global
atmosphere is on the receiving end of 20 billion tons of carbon
dioxide, 130 million tons of sulphur dioxide, 97 million tons of
hydrocarbons, 53 million tons of nitrogen oxides, more than three
million tons of arsenic, cadmium, lead, mercury, nickel, zinc and
other toxic metals, and a host of synthetic organic compounds
ranging from polychlorinated biphenyls(PCBs) to toxaphene and other
pesticides, a number of which may be capable of causing cancer,
birth defects, or genetic imbalances.
COST OF ACID RAIN
Interactions of pollutants can cause problems. In addition to contributing to acid rain, nitrogen oxides can react with hydrocarbons to produce ozone, a major air pollutant responsible in the United States for annual losses of $2 billion to $4.5 billion worth of wheat, corn, soybeans, and peanuts. A wide range of interactions, many still unknown, can occur with toxic metals.
In Canada, Ontario alone has lost the fish in an estimated 4,000 lakes, and provincial authorities calculate that Ontario stands to lose the fish in 48,500 more lakes within the next twenty years if acid rain continues at the present rate. Ontario is not alone; on Nova Scotia's easternmost shores, almost every river flowing to the Atlantic Ocean is poisoned with acid, further threatening a $2-million-a-year fishing industry.
Acid rain is killing more than lakes. It can scar the leaves of hardwood forests, wither ferns and lichens, accelerate the death of coniferous needles, sterilize seeds, and weaken forests to a state that is vulnerable to disease, infestation and decay. In the soil the acid neutralizes chemicals vital for growth, strips others from the soil and carries them to the lakes, and literally retards the respiration of the soil. The rate of forest growth in the White Mountains of New Hampshire declined 18% between 1956 and 1965, a time of increasingly intense acidic rainfall.
Acid rain no longer falls exclusively on the lakes, forests, and thin soils of the Northeast; it now covers half the continent.
EFFECTS
There is evidence that the rain is destroying the productivity of the once rich soils themselves, like an overdose of chemical fertilizer or a gigantic drenching of vinegar. The damage from such overdosing may not be repairable or reversible. On some croplands, tomatoes grow to only half their full weight, and the leaves of radishes wither. Naturally it rains on cities too, eating away stone monuments and concrete structures and corroding the pipes which channel the water away to the lakes, where the cycle is repeated. Paints, including automobile finishes, have their life reduced because the pollution in the atmosphere speeds up the corrosion process. In some communities the drinking water is laced with toxic metals freed from metal pipes by the acidity. As if urban skies were not already grey enough, typical visibility has declined from 10 to 4 miles along the Eastern seaboard as acid rain turns into smog. There are also now indications that the components of acid rain are a health risk, linked to human respiratory disease.
PREVENTION
The acidification of water supplies can also result in increased concentrations of metals from plumbing, such as lead, copper and zinc, which could result in adverse health effects. After any period of non-use, for example at summer cottages or ski chalets, the taps should be run for at least 60 seconds to flush out any excess debris.
STATISTICS
Although there is very little data, the evidence indicates that in
the last twenty to thirty years the acidity of rain has increased
in many parts of the United States. Presently, the United States
annually discharges more than 26 million tons of sulfur dioxide
into the atmosphere. Just three states, Ohio, Indiana, and Illinois,
are responsible for nearly a quarter of this total. Overall, two-thirds
of the sulfur dioxide released into the atmosphere over the United
States comes from coal-fired and oil-fired plants. Industrial
boilers, smelters, and refineries contribute 26%; commercial
institutions and residences 5%; and transportation 3%. The outlook
for future emissions of sulfur dioxide is not a bright one. Between
now and the year 2000, United States utilities are expected to
double the amount of coal they burn. The United States currently
pumps some 23 million tons of nitrogen oxides into the atmosphere
in the course of a year.
Transportation sources account for 40%; power plants, 30%;
industrial sources, 25%; and commercial institutions and residences,
5%. What makes these figures particularly disturbing is that
nitrogen oxide emissions have tripled in the last thirty years.
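Taken at face value, the totals and percentage shares quoted above can be turned into a rough annual tonnage by source. The short Python sketch below is only an illustration of that arithmetic; the totals, the shares (which do not sum to exactly 100%), and the source labels all come from the figures given in this section, not from any outside data.

# Rough conversion of the emission shares quoted above into annual tonnage
# by source. Totals and percentages are the essay's approximate figures.

SO2_TOTAL_TONS = 26_000_000   # annual U.S. sulfur dioxide emissions (figure above)
NOX_TOTAL_TONS = 23_000_000   # annual U.S. nitrogen oxide emissions (figure above)

so2_shares = {
    "coal- and oil-fired power plants": 2 / 3,
    "industrial boilers, smelters, refineries": 0.26,
    "commercial institutions and residences": 0.05,
    "transportation": 0.03,
}

nox_shares = {
    "transportation": 0.40,
    "power plants": 0.30,
    "industrial sources": 0.25,
    "commercial institutions and residences": 0.05,
}

def breakdown(total_tons, shares):
    # Print each source's share as millions of tons per year.
    for source, share in shares.items():
        print(f"  {source}: {share * total_tons / 1e6:.1f} million tons/year")

print("Sulfur dioxide:")
breakdown(SO2_TOTAL_TONS, so2_shares)
print("Nitrogen oxides:")
breakdown(NOX_TOTAL_TONS, nox_shares)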
FINAL THOUGHTS
Acid rain is very real and a very threatening problem. Action by
one government is not enough. In order for things to be done we
need to find a way to work together on this for at least a
reduction in the contaminants contributing to acid rain. Although
there have been steps in the right direction, the government
should be cracking down on factories that do not use the best filtering
systems when incinerating, or that give off any other
dangerous fumes. I would like to pose this question to you, the
public: WOULD YOU RATHER PAY A LITTLE NOW OR A LOT LATER?
f:\12000 essays\sciences (985)\Biology\respiration .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Mitochondria are responsible for energy production. They are also the location where respiration takes place. Mitochondria contain enzymes that help convert food material into adenosine triphosphate (ATP), which can be used directly by the cell as an energy source. Mitochondria tend to be concentrated near cellular structures that require large inputs of energy, such as the flagellum. The role of the mitochondrion in respiration is therefore very important.
In the presence of oxygen, pyruvate or fatty acids can be further oxidized in the mitochondria. Each mitochondrion is enclosed by two membranes separated by an intermembrane space. The intermembrane space extends into the folds of the inner membrane called cristae, which dramatically increase the surface area of the inner membrane. Cristae extend into a dense material called the matrix, an area which contains RNA, DNA, proteins, ribosomes and a range of solutes. This is similar to the contents of the chloroplast stroma, and like the chloroplast, the mitochondrion is a semi-autonomous organelle containing the machinery for the production of some of its own proteins. The main function of the mitochondrion is the oxidation of the pyruvate derived from glycolysis and related processes to produce the ATP required to perform cellular work. (Campbell 182-9)
Pyruvate, or fatty acids from the breakdown of triglycerides or phospholipids, pass easily through pores in the outer mitochondrial membrane made up of a channel protein called porin. The inner membrane is a more significant barrier and specific transport proteins exist to carry pyruvate and fatty acids into the matrix. Once inside the matrix, pyruvate and fatty acids are converted to the two carbon compound acetyl coenzyme A (acetyl CoA). For pyruvate this involves a decarboxylation step which removes one of the three carbons of pyruvate as carbon dioxide. The energy released by the oxidation of pyruvate at this stage is used to reduce NAD to NADH. (185)
The C2 acetyl CoA is then taken into a sequence of reactions known as the Krebs cycle, which completes the oxidation of carbon and regenerates an acceptor to keep the cycle going. The oxidation of the carbon is accompanied by the reduction of electron acceptors and the production of some ATP by substrate phosphorylation. The C2 acetyl CoA is coupled to oxaloacetate, a C4 acceptor in the cycle. The product is citrate, a C6 compound. This first product, citrate, is the reason the cycle is sometimes called the citric acid or tricarboxylic acid cycle; the name Krebs cycle honors the scientist whose lab most advanced our understanding of it, Sir Hans Krebs. (Comptons 160)
Two of the early reactions of the cycle are decarboxylations, which shorten citrate to succinate, a C4 compound. The CO2 lost during that turn of the cycle does not actually derive from acetyl CoA, but two carbons are lost which are the equivalent of the two introduced by acetyl CoA. The decarboxylation steps are again accompanied by the reduction of NAD to NADH. The formation of succinate also sees the formation of an ATP molecule by substrate phosphorylation. (Brit 1041)
The last part of the cycle converts C4 succinate back to C4 oxaloacetate. In the process, one reaction generates NADH while another reduces the electron acceptor FAD (flavin adenine dinucleotide) to FADH.
The final stage of respiration in the mitochondria involves the transfer of energy from the reduced compounds NADH and FADH to the potential energy store represented by ATP. The process is oxidative phosphorylation and it is driven by a chemiosmotic system analogous to that seen in chloroplasts. (Moore 88-9)
The inner membrane contains an electron transport chain that can receive electrons from reduced electron carriers. The energy lost as electrons flow between the components of the electron transport chain is coupled to the pumping of protons from the matrix to the intermembrane space. The matrix is alkalinized while the intermembrane space is acidified. The electrons are ultimately combined with molecular oxygen and protons to produce water. Respiration is aerobic when oxygen is the terminal electron acceptor. (Brit 1042)
The energy that was contained in the pyruvate molecule has at this point been converted to ATP by substrate phosphorylation in glycolysis and the Krebs cycle, and to a free energy gradient of protons across the inner membrane known as the proton motive force (PMF). The gradient of protons will tend to diffuse to equilibrium, but charged substances like protons do not easily cross membranes. Protein complexes in the inner membrane provide a channel for the protons to return to the matrix. Those protein complexes function as an ATPase, an enzyme that synthesizes ATP, because the energy liberated as the protons diffuse back to the matrix is used to push the equilibrium between ADP+Pi and ATP strongly toward ATP. (Campbell 182)
The electron transport chain has three sites along it that pump protons from the matrix. NADH donates its electrons to the chain at a point where the energy input is sufficient to drive all three proton pumping sites. FADH is less energetic than NADH and its electrons are donated at a point that drives two proton pumping sites. It is also possible for the NADH produced in glycolysis to enter the mitochondrial matrix and donate electrons to the electron transport chain. Depending on the system, NADH from glycolysis may be able to drive two or three proton pumping sites. For eukaryotes, only two pumping sites are driven; for prokaryotes, three. (184-5)
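As a rough illustration of the bookkeeping described above, the following Python sketch tallies an approximate ATP yield per molecule of glucose. The carrier counts per stage and the equivalences of roughly 3 ATP per NADH and 2 ATP per FADH are common textbook assumptions rather than figures taken from this essay, and the exact totals vary with the shuttle system used.

# Approximate ATP bookkeeping for one glucose molecule, using the classic
# textbook equivalences (~3 ATP per NADH, ~2 ATP per FADH). These values
# and the per-stage carrier counts are common assumptions, not figures
# taken from the essay itself.

ATP_PER_NADH = 3
ATP_PER_FADH = 2

stages = {
    # stage: (substrate-level ATP, NADH produced, FADH produced)
    "glycolysis":         (2, 2, 0),
    "pyruvate oxidation": (0, 2, 0),
    "Krebs cycle":        (2, 6, 2),
}

total = 0
for stage, (atp, nadh, fadh) in stages.items():
    stage_yield = atp + nadh * ATP_PER_NADH + fadh * ATP_PER_FADH
    total += stage_yield
    print(f"{stage}: ~{stage_yield} ATP")

print(f"total: ~{total} ATP per glucose")
# If glycolytic NADH drives only two pumping sites (as noted above for
# eukaryotes), subtract about 2 ATP, giving the familiar ~36 ATP figure.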
The mitochondrion is unmistakably a key element in the process of respiration. Of the three distinct stages of respiration, glycolysis, the Krebs cycle, and electron transport, most of the process takes place inside the mitochondrion, with only glycolysis occurring outside it.
f:\12000 essays\sciences (985)\Biology\Respiratory Diseases.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Respiratory Diseases
Respiration is the process of taking in and using oxygen. There are
three different phases of respiration: external respiration, internal
respiration, and cellular respiration. External respiration is the intake of
oxygen from the environment and the release of carbon dioxide. In internal
respiration, oxygen is carried to the cells and carbon dioxide is carried away
from the cells. In cellular respiration, oxygen is used in chemical reactions
within the cells.
Some diseases of the respiratory system are bronchial asthma, the
common cold, and diphtheria.
Bronchial asthma is a disease in which the bronchial passages are
made smaller and swelling of the mucous lining causes blockage of
breathing, usually due to dust, animal fur or feathers, or pollen. Many people
who have asthma caused by allergies, called extrinsic asthma, also
suffer from hay fever. Non-allergic asthma, which adults usually have, is
called intrinsic asthma. Intrinsic asthma is usually caused by respiratory
infections and emotional upsets. A typical asthma attack begins with
coughing, wheezing, and shortness of breath. Some people have dry
coughing as the only symptom. Attacks usually last only a couple hours. An
attack may happen again in hours to even years after the first attack.
Asthma attacks can be treated and prevented by the use of drugs.
Albuterol or terbutaline, which can bring relief within minutes, is the usual
treatment.
The common cold is another disease of the respiratory system. The
cold affects the mucous membranes of the nose and throat. It causes nasal
congestion, sore throat, and coughing. A cold usually lasts about
seven days on average.
There is no known cure for the common cold yet.
Diphtheria is another respiratory disease that, most of the time, affects
children. The disease-causing bacteria enter the body through the nose and mouth and attack
the mucous membranes, where they multiply and secrete a powerful poison.
The heart and central nervous system are damaged by the poison and it can
lead to death.
Toxoids, which are given to infants during the first year of life, are
harmless forms of the diphtheria poison which immunize the children against
serious infection.
f:\12000 essays\sciences (985)\Biology\Sacred Cow Vegetarianism.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hugh Buchanan
Growth problems. Animal population problems. Disease. These are all problems caused by being a vegetarian, that is, one who only eats vegetables. There are different degrees of being a vegetarian. At one extreme is a person who eats nothing associated with animals (no yogurt, ice-cream, or even anything that has come in contact with meat or another animal). At the opposite extreme are those who just eat vegan most of the time and will still eat animal by-products. Then there are others who are in between.
Being a vegetarian is not natural. Since the beginning of time, humans have been consuming animals. A vegetarian's diet lacks energy, calcium, zinc, and vitamins (B-12 and D). Without supplements, severe medical problems can arise. Also, those supplements are usually man-made and do not provide some of the substances, still unidentified, found in meat.
A carnivorous diet has always been part of American history. Turkey is eaten on Thanksgiving, not Tofu Surprise. Fish on Fridays, not salad. Pop and son would go hunting in the winter for fresh game to eat; they wouldn't go picking berries and roots. Those who could not or would not eat meat did not survive.
Studies have been done by M. J. Lentze, a German researcher who found that vegetarianism causes impaired growth in children five years old or younger. Vegan children fail to grow as well even with supplements that exceed the Recommended Daily Allowance.
It is true that many vegetables contain a high amount of protein, but the amount is not even close in comparison to meat. To get the same amount of protein from vegetables as you would from just one pound of meat (although most people don't eat this much meat anyway!), you would be eating for hours.
Most people become vegetarians because of the Animal Rights Movement (ARM). But in fact, the movement has "elevated ignorance about the natural world" according to Richard Coniff. If you give rights to animals, you should give rights to plants. Think of how they feel, often they are consumed before they even die!
If we all went vegetarian, the animal population would increase dramatically. In fact, if there were too many animals, we humans would starve because the animals require a large amount of plant-based food.
The Greeks knew that moderation is good. Too much of anything is bad. That is how many Americans live, they have a little bit (or a lot of) everything.
I am not saying that the government should ban being a vegan, but I do think education is important. Few know of the ramifications of being a vegetarian. Sadly, when one parent becomes vegetarian, not only does the spouse end up having to follow along, but so do the children. This in effect causes growth to be impaired in the children.
Doctors should talk to their patients about their dietary habits. To save money is not a good reason to be vegetarian.
A human being can receive a diet that reduces the risk for chronic degenerative diseases without becoming a vegetarian. The best thing to do is to reduce the amount of fat in one's diet, although not totally eliminate fat since your body needs it.
Thanksgiving is often the only exception for vegans, except the occasional trip to McDonalds and/or Burger King in secret.
As for the young bachelor vegan, being a vegetarian inconveniences the hostess of the night. Not all people (in fact, very few) know how to cook vegan style. When little Johnny goes to Jane's house for dinner, Jane's mom will cook pot roast. What will little Johnny do for dinner? How could he not eat the food? He wants to make himself look good in front of Jane and her family... what a problem.
It was also an "in" thing at one point, it may still be, for teens to be vegan. We are still growing, we should not deprive ourselves of meat, and the vital nutrients in meat.
It would probably be going too far to treat it as an eating disorder, but in some cases, it should be. I knew a girl who found herself over-weight all the time. At one point she decided to not eat meat or fat, but she still ate salads and other assorted vegan meals. Immediately we saw a decrease in her weight, but we also noticed the lack of energy she had, and the depression. It got to a point where she even looked at or smelled fat/meat, she would vomit. I do not know if she ever obtained counseling, but in her case, she had a problem. I am sure there are many others out there.
It is not normal to be vegetarian. It is not healthy to be a vegetarian. It is not the American way to be vegetarian.
f:\12000 essays\sciences (985)\Biology\Schizophrenia.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SCHIZOPHRENIA
Child schizophrenia, like other psychopathologies, has many documented and several uncertain causes. Some scientists have evidence that pregnant mothers have experienced an immune reaction that presents dangers to the unborn child. Schizophrenia is a disorder in which the body's immune system attacks itself. Schizophrenia is not present at birth but develops during adolescence or young adulthood. "Schizophrenia is a biological brain disease affecting thinking, perception, mood, and behaviour. Its exact cause is unknown but overwhelming evidence points to faulty chemistry or structural abnormalities in the brain. In some cases schizophrenia is genetic. Schizophrenia strikes one in 100 people at some point in his/her lifetime." (Compiled by Ontario Friends of Schizophrenia, Oct 94).
Schizophrenia worsens and becomes better in cycles, also known as relapses and remission. People who are suffering from schizophrenia look relatively normal. Schizophrenics suffer from such symptoms as: delusions, hallucinations, and thought disorders. Delusions are false beliefs that aren't based on reality. Schizophrenics may believe that someone is following them, or planning to harm them. Schizophrenics believe that others can hear their thoughts, also known as "broadcasting", and even change them. "...hear their thoughts, insert thoughts into their minds, or control their feelings, actions or impulses. Patients might think they are Jesus, Napoleon, or Franklin D. Roosevelt." (American Psychiatric Association Annual '90 page 1)
In pregnant women who experience an immune reaction that presents danger to their unborn children, this reaction sharply raises the rates of schizophrenia in the child. Severe malnutrition in the early months of fetal development may also contribute to schizophrenia. It is also known that schizophrenia runs in families. "The probability of developing schizophrenia as the offspring of one parent with the disease is approximately 13%. The probability of developing schizophrenia as the offspring of both parents with the disease is approximately 35%." (Pamphlet by: American Psychiatric Association Annual '90 page 7)
Hallucinations are another symptom that schizophrenic patients suffer from. Hallucinations may be seen or heard. The most common hallucinations are those heard by the schizophrenic. The schizophrenic may hear voices that tell them what to do; these voices may warn them of danger, tell them how to feel, or describe one's actions.
A schizophrenic's thought process is very "loose". Their thoughts may shift rapidly from one unrelated topic to the next. They may make up their own words or use sounds or grunts to substitute for words. These symptoms do not mean that people with schizophrenia are completely out of touch with the world; they know that roads are used for driving cars, and that people eat three meals a day.
Schizophrenia affects both men and women equally. Along with delusions, hallucinations, and thought disorders, schizophrenics also suffer from paranoia, high anxiety, low stress tolerance, low motivation, lack of energy and the inability to feel pleasure. This makes work, leisure, relationships and even everyday tasks difficult, sometimes impossible. These are concerns not only for the people diagnosed with this psychopathology but for their friends and family. Family is looked upon for support in not only everyday tasks, but in dealing with this disorder whether it is in remission or relapse. With schizophrenia there is the risk of suicide. "Ten percent of all people with schizophrenia commit suicide, either to escape the torment of their illness or because their 'voices' command them to." (Compiled by Ontario Friends of Schizophrenics, Oct. 1994) Many schizophrenics also are incarcerated for crimes that they have committed while in a psychotic state, or are living on the streets, without any treatment. Schizophrenics may become violent while in a psychotic state, and may lose all sense of who they are and who others are around them.
Symptoms such as social withdrawal, inappropriate or blunted emotions, and extreme apathy may persist for years; however, many schizophrenics have recovered enough to be able to live on their own. "Ten years after their first schizophrenic episode, 25 percent of people with schizophrenia have recovered completely. Another 25 percent are much improved and living fairly independent lives; 25 percent, although improved, still need extensive support; 15 percent are hospitalized and show no improvement, and 10 percent have killed themselves." (Compiled by Ontario Friends of Schizophrenics, October 1994)
Schizophrenia appears when the body is undergoing hormonal and physical changes in adolescence, like other genetically related illnesses. Schizophrenia is said to lie "dormant" during childhood, some researchers have suggested. "Genes govern the body's structure and biochemistry. Because structure and biochemistry change dramatically in the teen and young adult years, some researchers suggest that schizophrenia lies 'dormant' during childhood. It emerges as the body undergoes changes during puberty." (Pamphlet by: National Alliance for the Mentally Ill, June '90 page 2)
The symptoms of schizophrenia appear gradually during adolescence or young adulthood. Friends and family may not notice the signs as the illness takes initial hold. The young person often feels tense, cannot sleep or concentrate, and withdraws socially. But at some point loved ones will begin to notice the changes. The person's work performance, appearance and social relationships begin to deteriorate. As this illness progresses the symptoms become more and more bizarre; they develop peculiar behaviours and begin talking nonsense.
Drug therapy is the most common form of treatment; however, it is not the only form. "Current treatment programs for schizophrenia include combinations of medication, psychotherapy, education, and social-vocational rehabilitation." (Pamphlet by Deborah Dauphinais: U.S. Department of Health and Human Services Annual '92 page 1). The primary medications for treatment of schizophrenia are the antipsychotic medications, also known as neuroleptics. These medications do not cure schizophrenia but reduce the symptoms. All widely used antipsychotic medications are equal in treating the symptoms of schizophrenia; however, individuals may prefer one medication to another due to their experience of different side effects. Medication may be increased or decreased depending on the state that the patient is in. During a psychotic episode the medication will be increased, and as the episode subsides so will the amount of medication; however, this is a slow and lengthy process. The medication will be tapered off to the lowest possible dosage without the symptoms returning. Some side effects of medication may be nausea, vomiting, abdominal cramps, diarrhea, or sweating. Medication is used to inhibit the action of dopamine, which is a "neurotransmitter", or chemical in the brain that helps cells to communicate with one another.
Hospitalization is also an option in treatment. During a psychotic episode a hospital stay is often necessary. "Schizophrenics occupy more hospital beds than people with cancer, heart disease, diabetes and arthritis combined." (Compiled by Ontario Friends of Schizophrenics, October 1994)
Another form of treatment, which goes hand in hand with medication, is counselling, both for the patient and the family. "Supportive counselling or psychotherapy may be appropriate for these individuals as a source of friendship, encouragement, and practical advice during this process. Relatives and friends can also assist in rebuilding the person's social skills. Such support is very important." (Pamphlet by: American Psychiatric Association Annual '90 page 1)
"Schizophrenia, a disease of the brain, is one of the most disabling and emotionally devastating illnesses known to man. But because it has been misunderstood for so long, it has received relatively little attention and its victims have been undeservingly stigmatized. Schizophrenia is not a split personality, a rare and very different disorder." (Pamphlet by: National Alliance for the Mentally Ill June '90 page 1)
As funding is increasingly cut, so is research into schizophrenia, leaving many unanswered questions. As Child and Youth Workers we need to provide support to parents and children with this illness, and we ourselves need to have a better understanding of schizophrenia. The most important message for us to convey is "you are not alone."
References
a. More than two authors
Ontario Friends of Schizophrenics. (1994). The facts, schizophrenia: Compassion through understanding, risk of suicide [Pamphlets].
b. More than two authors
National Alliance for the Mentally Ill. (1990). Schizophrenia [Pamphlet].
c. One author
Bower, B. (1996). New culprits cited for schizophrenia. Science News, vol. 149, 68.
d. More than two authors
American Psychiatric Association Annual. (1990). Schizophrenia [Pamphlet], 1-10.
e. More than two authors
National Institute of Mental Health Annual. (1990). You are not alone: Facts about mental health and mental illness [Pamphlet], 1-9.
f. One author
Dauphinais, D. (1992). Medications for the treatment of schizophrenia: Questions and answers [Pamphlet], 1-4.
f:\12000 essays\sciences (985)\Biology\Scoliosis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Scoliosis"
Everyone's spine has curves. These curves produce the normal rounding of the
shoulder and the sway of the lower back. A spine with scoliosis has abnormal curves with
a rotational deformity. This means that the spine turns on its axis like a corkscrew.
Scoliosis is a curvature of the spine which may have its onset in infancy but is
most frequently discovered in adolescence. It is more common in females by a 2:1 ratio.
However, when curves in excess of 30 degrees are evaluated, females are more
frequently affected, by a ratio of approximately 10:1. The cause of the most common
form of scoliosis, idiopathic scoliosis, is unknown, but hereditary factors
have been discovered to be present.
Scoliosis causes shoulder, trunk and waistline "asymmetry". In mild forms, the
condition may be barely noticed; however, in severe forms there is significant
disfigurement, back pain and postural fatigue, and it may be associated with heart failure.
Fortunately the majority of scoliosis cases need only close follow-up to watch for
worsening of the curve. Some cases require more aggressive treatment which could
include surgery.
The non-operative treatment of scoliosis involves observing the deformity with
examinations and repeated x-rays. Under certain circumstances, when spinal growth
remains, a brace may be used in combination with follow-up x-rays. Physical therapy
exercises have not been shown to be effective treatment for scoliosis.
The most common surgical treatment for scoliosis is a spine fusion using special
stainless steel rods, hooks, and a bone graft. The rods are attached to the spine with
hooks and the curved portion of the spine is carefully straightened. Then, small strips of
bone graft are placed over the spine to fuse it in a straight position. As the bone graft
heals over the next several months, the spine becomes solid and will not curve again. But
the part of the spine that has not been fused will still be flexible, and allow nearly normal
overall movement.
f:\12000 essays\sciences (985)\Biology\Seasonal Affective Disorder.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Seasonal Affective Disorder
Seasonal Affective Disorder, or SAD, is a common problem for people living in the northern United States. People who are affected by this disorder commonly suffer from depression, lethargy, inability to concentrate, overeating and weight gain. People from the north tend to suffer more from this disorder because of the shortened days. It appears that, due to the deficiency of sunlight, some people suffer from these symptoms. The shortened days have a hormonal effect on the body that causes these symptoms, and the use of artificial sunlight is the best way to relieve the disorder.
It was not until recently that SAD was discovered. It was discovered by Peter Mueller, who was reviewing the case of a 29-year-old woman. He had noticed a pattern: the woman's depression came in the winter and left in the spring. Over the course of years the woman moved from city to city. Mueller noticed that the farther north she moved, the earlier the depression began. Mueller began to speculate that the lack of sunlight had contributed to the woman's depression. In order to confirm this he exposed the patient to artificial sunlight. He found that over a period of time the patient had recovered from the depression. Today light therapy is the most commonly used method of treating SAD.
The two hormones that are affected by the sunlight, and are thought to be the cause of SAD, are melatonin and serotonin. Both of these chemicals "are influenced by photoperiodism, the earth's daily dark-light cycle" (Wurtman 1989). Melatonin is the chemical that affects mood and energy levels. In the human body melatonin is at its highest at night and is lowest in the day. There has been a study done to see if sunlight has a direct effect on suppressing melatonin. It is known that melatonin levels in urine are five times higher at night than they are in the day. It was not until a 1980 study that it was known that melatonin levels could be directly suppressed with light. In an experiment, subjects were woken up at two in the morning and exposed to half an hour of artificial sunlight. The findings were that melatonin levels were greatly decreased. The decline in melatonin usually happens in the early morning, but in a SAD patient this does not occur until about two hours later. In order to suppress the levels the patient needs to be exposed to sunlight. It is found that when the patient is exposed to the light there is a significant decrease in depression and the craving for carbohydrates. It is not known if SAD is directly caused by melatonin. We are still not sure what the direct cause of the depression in SAD is. We do know why people who suffer from SAD crave carbohydrates.
Serotonin is the chemical that regulates a person's appetite for carbohydrate-rich foods. Giving a patient who suffers from SAD an artificial serotonin-boosting drug called d-fenfluramine "leads to a decrease in stress-induced eating" (Scientific American 1986). In each person's bloodstream there is a substance known as tryptophan that is a precursor of serotonin. When it enters the central nervous system and reaches a group of cells called the raphe nuclei, it is converted into serotonin. The amount of tryptophan in the blood is increased when carbohydrates are consumed. This may explain why many people who suffer from SAD increase their intake of carbohydrates. In tests, when patients who suffer from SAD were given "an 800-calorie, high-carbohydrate meal (six cookies), they reported feeling vigorous and energized" (Health 1989).
A large consumption of carbohydrates is one of the symptoms of SAD. In a number of cases, it has been noted that a person suffering from SAD will increase their intake of carbohydrates in the fall and decrease it in the summer. It is also found that those who crave carbohydrates tend to consume most of them in the late afternoon or early evening. A carbohydrate craver is found to eat 1,940 calories at a mealtime. The average for an adult female is 1,500 to 2,000 calories, and for an adult male 2,200 to 2,700 calories. It is in the early evening that the craver consumes "an additional 800 or more calories per person per day" (Wurtman 1989). The increase of carbohydrates leads to an increase in the levels of serotonin that relieves the symptoms of SAD.
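As a rough illustration of how the extra evening calories described above could add up to the seasonal weight gain mentioned later in the essay, the short Python sketch below assumes the commonly cited figure of roughly 3,500 kilocalories per pound of body fat and a ninety-day winter; both are assumptions for illustration, not numbers from the sources quoted here, and the estimate ignores any offsetting activity.

# Back-of-the-envelope estimate of seasonal weight gain from the extra
# evening carbohydrate calories. The 3,500 kcal-per-pound conversion and
# the 90-day winter are rule-of-thumb assumptions, not essay figures.

EXTRA_KCAL_PER_DAY = 800      # essay's figure for the added evening intake
KCAL_PER_POUND_FAT = 3500     # common rule-of-thumb conversion (assumption)
WINTER_DAYS = 90              # roughly one season (assumption)

extra_kcal = EXTRA_KCAL_PER_DAY * WINTER_DAYS
pounds_gained = extra_kcal / KCAL_PER_POUND_FAT
print(f"~{extra_kcal} extra kcal over the winter, roughly {pounds_gained:.0f} pounds gained")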
The evidence all seems to point towards lack of daylight as the reason for SAD. Little research, however, has been done on the effects of temperature and barometric pressure on mental health. These factors change during different times of the year. In his lifetime Abraham Lincoln suffered two severe bouts of depression. Both of them came "during the two largest barometric pressure changes recorded at that time" (Davis 1994). There is the possibility that the symptoms of SAD are caused by other seasonal changes besides daylight.
Evidently, the best treatment for those who suffer from SAD is light therapy. Although we still are not sure what the cause of SAD is, we do know that light therapy suppresses the symptoms. The human body needs to be exposed to daylight, and our everyday lives may hinder our exposure. For example, most people work inside in either an office or some type of building. This significantly reduces our exposure to daylight. It was found in San Diego that males spent an average of seventy-five minutes a day in the sun, and females an average of twenty. The human body needs more exposure to the light to regulate the secretion of serotonin and melatonin.
There is still much testing that needs to be done on SAD. We still are not sure of the cause, but we do have a solution. The best way to help suppress SAD is with artificial sun lamps, but these lamps are expensive. Many of the bulbs that are used cost over one hundred dollars. Carbohydrates are an alternate way to suppress the symptoms, but the side effect is weight gain. It has been shown in studies that those who crave carbohydrates also put on significant weight, which is lost in the spring. Exposure to daylight will suppress the symptoms without any side effects. The best method to regulate the hormonal balance is regular exposure to daylight.
f:\12000 essays\sciences (985)\Biology\sharks.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"It's tail swayed slowly from side to side, pushing the hunters body through the murky
water. All signs of motion were non-existant, except for the rhythmic movement of the
water over the five gill slits on either side of it's head. Slowly gaining speed, the shady
figures unmoving eyes fixed on it's target, a lost harbor seal pup. As the distance between
the predator and it's prey grew closer, the jaws of the massive fish drew forward,
exposing nearly eight rows of razor sharp teeth. Strings of it's previous meal hung in
rows from between it's teeth. Sensing danger, the harbor seal frantically tried to find a
place to seek refuge, but it was too late. The jaws of the shark closed around the seal with
an astounding 14,000 pounds of pressure, cutting the seal in half. The Great White shark
claims another victim.1"
Anyone who's seen the famous movie series "Jaws" may look at the Great White
Shark in a similar manner. Perhaps it's the way that Hollywood uses a mix of fact and
fiction in the series. This may have frightened many people into hating the Great White
for its ferocity. It might have also been the size of the shark in the movie that's kept
thousands of people off the beaches and out of the water. Better yet, it could have been
the overall storyline: A Great White shark with an eating disorder and a taste for human
flesh. Perhaps that's what is keeping vacationers from grabbing their trousers and
snorkels.
Overall, there have been 1026 attacks on humans by sharks in the last ten years.
Only 294 of these attacks have been linked to Great White sharks. That's roughly the
number of people who drown each year in swimming accidents. Of these 294 attacks, less
than eighteen percent were fatal. Of the fatal incidents, more than seventy
percent were attributed to loss of blood. This means that the shark didn't kill the victim.
The shark bit the victim and then released them (also known as the taste test). The shark
samples the victim by nibbling on an appendage or two, often resulting in a severed artery
or other major blood vessel. Therefore, the Great White should be considered a man-taster,
not a man-eater.
This intrigued scientists, considering the size of the shark's brain. The Great White's
brain is about one half the size of a dog's. Over seventy percent of the brain is used for
tracking prey. The other thirty percent is used for body functions. Studies show that the
shark's main purpose is to eat. People think that the shark's main purpose is to kill. This is
incorrect; sharks only eat when they are hungry. Impulses from the brain are sent to the
jaws and the stomach telling the shark that it is time to hunt for food.
"Why do sharks not follow a basic attack pattern on a human? In a human attack, the
primary strike is usually the only contact, as though the shark finds us (humans) to be
unpalatable. There is a theory on this as well, involving the differences in our anatomy
and the pinnipeds (seals, sea lions). We are mostly muscle where the pinniped body has a
great deal of fat. It is theorized that the shark somehow senses this and abandons us as a
potential meal because our bodies are not as energy-rich as the pinnipeds. Of course, this
is often enough to kill us - or at least, really screw up our day!"
Cold Hard Facts
The Great White shark has remained unchanged for 250 million years. Its Greek
name is Carcharodon carcharias. This is derived from carcharos meaning "ragged" and
odon meaning "tooth". There isn't a defined size range for the Great White, but most
experts agree that the length of the shark is usually between 12 and 16 feet, with the
maximum figure being about 19 to 21 feet. (The 21-foot figure is an actual record from 1948,
the largest ever recorded!) If the Great White is that big, try to imagine the size of those
massive jaws and teeth, not to mention the enormous power behind those jaws.
These huge eating machines used to be even bigger! The Great White was once known as
Carcharodon megalodon. The only difference between the Great White and this
previous model is size. The Carcharodon megalodon was MASSIVE compared to the
modern day Great White. Averaging forty to forty-five feet in length, it is theorized that
this giant of the deep could swallow a city bus whole. There are many scientists who
theorize that there may still be some of these giants down there... down deep enough
where the bodies would never wash up on shore.
The Great White's teeth are serrated like a bread knife, averaging about one to
two inches in length and about one-half to one inch in width. These teeth are so ragged
and so sharp that old native spears have been found with these teeth on the end of them.
Scientists think the natives used these spears as saws!
The most mysterious aspect of the Great White is its life span. No one in
history has recorded the life span of one individual shark. There was one shark though,
that was tagged and observed returning every two weeks or so to feed. This observation
went on for some eighteen years!! Is this shark young? Is it old? No one can say for
sure.
Of all the animals with a good sense of smell, the Great White tops them all. One
Great White can sniff out one drop of blood more than a mile away. This is after the drop
of blood has been diluted by billions of gallons of water. All of this is possible because of
fluid-filled sacs on both sides of the fish called lamellae. These sacs run the length of the
fish. The walls of the tubing are so sensitive that vibrations as far away as eight miles can be
felt. Many people think that if they don't have any cuts, lacerations, or abrasions, they'll
be safe in the water. WRONG.
The shark's nose has twenty to thirty little black "freckles". These freckles can not only
pick up the scent of blood, they can also detect electrical fields as tiny as .005 microvolt.
That's the same as someone feeling the electrical jolt of a D-sized battery through a
1,000-mile-long copper wire (that's not very big). Every living thing and most non-living things
put out a small electrical field when in the water. The main reason sharks attack the
bleeding victim first is because the blood in the water releases more ions, thus magnifying
the electrical field as well as the scent.
The Great White can swim at incredible speeds, sometimes as fast as thirty-five
knots (roughly 25 miles per hour). No human alive could stand a chance at outswimming
a Great White shark. The fastest human swimming record is held at a little over two miles
per hour.
Great Whites have enormous appetites. In one meal, a Great White can eat almost
eight hundred pounds of seal meat. Because of the amount of meat the shark consumes in
this meal they can go without eating anything else for nearly a month. The Great White's
diet consists of mainly lingcod, salmon, tuna, squid, other sharks, cetaceans (dolphins and
whales), and pinnipeds. They also show a preference for carcasses, especially large
whales. Research done off the South Farallon Islands, located off the coast of
San Francisco, indicates that most of the shark attacks take place at the same time of day.
This is supported by the fact that Great Whites' eyes are really sensitive to daylight viewing. The
time of day for the attacks is the same because the seals are forced each day to go into the
water because of the tides.
"The attack stratergies of the Great White were different on each species of the
prey. The Seal is usually attacked on the surface of the water, by the shark rising from
below. A large flowing blood stain at the surface indicates that the Great White carries the
seal underwater before removing a bite then releasing the carcass, that floats to the
surface.The shark almost always aims for the head, since the seal has alot of blood vessels
in that area.3"
The seal's death is brought on by loss of blood or decapitation. For the sea lion, a different
type of attack method is used. The first attack is usually the most brutal.
The shark attacks while the sea lion is on the surface; the strike propels the shark out of
the water while the sea lion is still held in the beast's powerful jaws. It is then released to
float to the surface and bleed to death, and the shark returns later to feed on the carcass.
Great Whites are considered fish, but that doesn't make them entirely like other
fish. To start off, their skeleton is made completely of cartilage. This is the reason that no
shark bones have been found. Cartilage is a soft, flexible material that is lightweight and
floats in water (you can find it in your nose if you move it from side to side). The
cartilage plays an important role in a shark's survival and buoyancy. Sharks have no gill
muscles, so they must continue to swim in order to breathe. They even swim when they
are asleep! One exception is the Nurse shark. This shark has gill muscles and spends
most of its time on the bottom of the ocean waiting for food to scuttle by. When a fish
must swim to survive, its skeleton should be lightweight so the water pressure doesn't
push the fish down towards the bottom. Sharks also hunt. The cartilage helps relieve the
stress of the water resistance as the shark swims. The shark's skeleton is hollow on the
inside. Each bone is filled with a mixture of air and fluid. The shark can regulate the
amount of air in these bones by secreting a fluid that makes the body of the fish heavier.
In turn, the shark slowly descends into the depths of the ocean. When the shark wants to
come back up, it drains the fluid into its urinary bladder. The gills filter the air from the
water and fill the bones with air once again. The shark becomes lighter and can come up
to the surface.
Anyone with enough courage to actually get close enough to touch the skin
surface of a shark will notice that it's very smooth if you rub it one way, but if you rub
it the wrong way, watch out! Thousands upon thousands of sharp "spikes" cover the
shark. These spikes are called denticles; sandpaper rough along the side, but razor sharp
on the tip. Scientists believe that the denticles are used to cut the victim if the shark's eyesight is
failing or it's dark. Once the shark cuts the victim, the blood gives a more accurate
position on the prey. The shark can then home in on the wounded animal and make its
initial strike.
Sharks are considered fish, but what differentiates sharks from the ordinary
guppy? One difference is shape. The normal fish is not as broad in the frontal area. Sharks
have broad heads and oversized mouths evolved for eating. Their tail fins are not
identical in shape to each other. The upper fin is bigger in size and shape than the lower
fin. Another difference lies inside the jaws. The fish has teeth used for grasping prey.
The shark however has larger, more broad teeth used for shredding and tearing meat.
The third difference is, of course, in the skeletal system. Sharks have the cartilaginous
skeleton while the everyday guppy has bones. Why? The answer lies in the size
difference between the two animals. In the ordinary fish, there is less surface area. Less
force is needed to move the fish in a forward direction. Because of the shark's broad
frontal area and immense size, a much greater force is needed to move the shark. Having
the light skeletal system reduces the weight tremendously and less force is exerted. A less
obvious difference is the extra set of fins on the shark. The pelvic fins are located just
below the dorsal fin on either side of the shark. The fish does not have the extra set of
fins. It is determined that the extra fins are used to stabilize the shark during migratory
swimming and hunting.
f:\12000 essays\sciences (985)\Biology\siberian tiger.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SIBERIAN TIGER
The Siberian Tiger, sometimes referred to as the
Manchurian Tiger, is an endothermic quadruped in the kingdom
of Animalia. Its phylum is Chordata and its class is
Mammalia. Its order is Carnivora and its family is
Felidae. Its genus is Panthera and its species is tigris
altaica.
The Siberian Tiger is a mobile creature and it lives in
northern Asia and is found as far north as the Arctic Circle.
Its territory is more than four thousand square miles and it
will keep that territory indefinitely, as long as the food
supply lasts.
The Siberian Tiger hunts frequently, but only about one
tenth of its hunts are successful. It requires more than
twenty pounds of meat per day. It is heterotrophic and its
diet consists mainly of deer, boar, bear and fish.
The Siberian Tiger is a solitary animal. Males and
females are only together during mating season. Females will
only stay with their litter of two or three cubs for less
than two years.
The Siberian Tiger is the largest cat in the world
today. It measures thirteen feet long and weighs in at seven
hundred pounds. Its cousin, the Bengal Tiger, is only
seventy-five percent its size.
The Siberian Tiger's thick coat is normally yellow with
black stripes, however in winter, the yellow fades to near
white for better camouflage. Its body is heavily muscled
for great strength. Its hearing and sight are excellent,
and its night vision is five times better than a human's.
The Siberian Tiger is now very rare and on the
endangered species list, mainly because poachers illegally
slaughter them for their fur. Currently, there are less than
two hundred animals in existence.
f:\12000 essays\sciences (985)\Biology\Silverfish.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
John Smith
3-22-97
3rd Period
NAME OF INSECT: Silverfish
HABITAT AND RANGE: Silverfish normally live outdoors under rocks, bark and leaf mold, in the nests of birds and mammals, and in ant and termite nests. However, many are found in houses and are considered a pest, or at least a nuisance, by homeowners. Usually they are found trapped in a bathtub, sink, or washbasin.
DIET: The Silverfish will feed on almost anything. A partial list includes dried beef, flour, starch, paper, gum, glue, cotton, linen, rayon, silk, sugar, molds, breakfast cereals, book bindings, and wallpaper paste.
MOUTH PART TYPE: The Silverfish has a concealed chewing mouth part.
LIFE SPAN: Most Silverfish bugs live for about two to 2 1/2 years.
COMPLETE SCIENTIFIC CLASSIFICATION: Kingdom: Animalia Phylum: Arthropoda Sub-phylum: Chelicerata Class: Insecta Order: Thysanum Family: Lepismati Genus: Metalus Species: Gerrainus
LIFE CYCLE: Silverfish undergo incomplete metamorphosis. Adults lay eggs in small groups containing a few to 50 eggs. The eggs are very small and deposited in cracks and crevices. A female normally lays less than 100 eggs during her life span. Under ideal conditions, the eggs hatch in two weeks, but may take up to two months to hatch.
The young nymphs are very much like the adults except for size. Several years are required before they are sexually mature, and they must mate after each molt if viable eggs are to be produced. Populations do not build up rapidly because of their slow development rate and the small number of eggs laid.
ECONOMIC IMPORTANCE: Silverfish sometimes feed on book bindings and the paste that holds on wallpaper, causing damage in many houses.
IS IT A SOCIAL INSECT? Yes, I think that the Silverfish is social because it lives in colonies.
3 INTERESTING FACTS: Some can live up to one year without food, some can live up to eight years, and they can jump up to 1 1/2 feet.
f:\12000 essays\sciences (985)\Biology\Spiders.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
My essay is on spiders. I have chosen a few spiders to report about. I also have some basic info about spiders in general. Spiders comprise a large, widespread group of carnivorous arthropods. They have eight legs, can produce silk, and usually have poison glands associated with fangs. More than 30,000 species of spiders are found on every continent except Antarctica in almost every kind of terrestrial habitat and a few aquatic ones as well. Spiders range in body size from about 0.5 mm (0.02 in) to 9 cm (3.5 in). The term spider is derived from the Old English spinnan ("to spin"), referring to the group's use of silk. Spiders make up the order Araneae in the class Arachnida, which takes its name from the mythological character Arachne, a peasant girl who challenged the weaving skill of the goddess Athena. Arachne equaled Athena's skill in a contest, and in response to Athena's anger she hanged herself. In belated remorse Athena changed the body of Arachne into a spider and allowed her to retain her weaving skill.
My first selection is the brown recluse spider. The brown recluse spider (Loxosceles reclusa) is a poisonous spider found in the United States. It is mostly found from Kansas and Missouri, south to Texas, and west to California. Found in sheltered places indoors and outdoors, it is about 10 mm (0.4 in) long and has an orange-yellow body with a dark violin-shaped design on its back. Its bite isn't usually fatal to humans, but the venom destroys the skin and the wound may take a few months to heal. The brown recluse is mostly active at night. It feeds on small insects that it paralyzes with its poison.
The Black Widow
The black widow, Latrodectus mactans, is a poisonous spider of the family Theridiidae, order Araneida. The female, about 1.3 cm (0.5 in) long, is glossy black, densely clothed with microscopic hairs, and has a red hourglass mark on the underside of the abdomen.
The male, which is rarely seen, is smaller than the female and has four pairs of red marks along the sides of the abdomen. The black widow is found worldwide in the warmer regions and in every state in the United States except Alaska and Hawaii; it lives in a variety of natural and domestic habitats. Generally, the females are not aggressive unless agitated, although they are prone to bite when guarding an egg sac. The venomous bite of the black widow spider, Latrodectus mactans, causes muscle spasms and breathing difficulty in humans and may be fatal. The female is distinguished by a red hourglass marking on its underside. The black widow eats a diet of insects, spiders and centipedes captured in its web. After mating, the female may ensnare and feed upon her mate--hence the name black widow. Its venomous bite causes muscle spasms and difficulty in breathing.
Tarantulas
In common American usage, tarantulas are the large, hairy, long-lived spiders that make up the family Theraphosidae. Related forms such as funnel-web spiders and trap door spiders are also often called tarantulas. The name came from a smaller wolf spider of Europe but was then applied by explorers of the New World to the giant spiders that they encountered. No North American species has a venom that is dangerous to humans, but tarantula body hairs may induce an allergic reaction. Tarantulas can give you a painful bite. Tarantulas occur in warmer regions, where they feed on both invertebrates and small vertebrates. Many grow to about 2.5 to 7.5 cm (1 to 3 in) long, with a 13-cm (5-in) legspan, and some South American bird-catching species are larger. Some tarantulas reach an age of 20 years.
f:\12000 essays\sciences (985)\Biology\Steriods and their Affects on the Human Body.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Steroids
Drugs have been used in sports almost as long as sports themselves have been around. The ancient Incas discovered that the ashes from burned leaves of the Coca tree gave the people great stores of energy, and made sleep unnecessary for hours or even days; the active ingredient was later discovered to be the stimulant cocaine. They would take it before long hunts, battles, and even found it useful in ancient sport competitions. It wasn't until 1886 that the first drug-related death in sports occurred. A bicyclist took a mixture of cocaine and heroin, called the "speedball," and died from it. Little were the doctors aware of the epidemic that would follow in the next century.
Anabolic steroids, developed in the 1930's in Europe, are drugs that help to build new body tissue quickly, but with drastic side effects. Anabolic means the ability to promote body growth and repair body tissue. It comes from the Greek word anabolikos meaning "constructive." Steroids are basically made up of hormones.
Picture: One woman training to make the 1984 US women's basketball team used them; her muscles started to bulge, her voice grew deeper, and she even had the beginnings of a mustache. These are all the usual symptoms of anabolic steroids.
Steroids were not always used for sports; they started out the same way most drugs did, for medicinal purposes. Victims of starvation and severe injury profited from their ability to build new tissue quickly. They also helped prevent muscle tissue from withering in patients who had just had surgery. Steroids are used to treat Addison's disease.
Anabolic steroids are drugs that come from hormones or from combinations of chemicals that achieve the same result as hormones. Hormones may be given to an individual in their natural state, or in a synthetic one. The synthetic state is sometimes more potent than the natural one. Testosterone and progesterone are hormones used in steroids, another kind comes from the adrenal glands, which secrete various necessary bodily chemicals. The steroids themselves can be taken orally, as tablets or powders, and can also be liquids that are injected into the muscles.
The steroids taken by athletes contain testosterone or chemicals that act in similar way to testosterone. Testosterone is found in men and women, but in women it is present in much smaller amounts, mainly because it is produced in the testicles in men. More than one hundred and twenty steroids are based on the hormone testosterone. There are many brand names, such as Durabolin, Winstrol, Pregnyl, and Anavar.
Basically, anabolic steroids control the bodily functions that are normally under the control of the body's natural testosterone. As well as turning women into men and men into manly men, they have a stimulative effect on skeletal muscle mass, some visceral organs, the hemoglobin concentration, and the red blood cell number and mass.
Of course, most people take anabolic steroids illegally to stimulate growth in muscle cells. Once a person is born, he/she will not grow any more muscle cells throughout their life. So when muscle mass increases, it is the individual cells growing in girth to compensate for either an increase in work or the release of androgen hormones (found in all anabolic steroids). Exercise alone can stimulate the girth of muscle cells to increase by anywhere from thirty to sixty percent. The presence of androgen hormones allows for even greater growth. Anabolic steroids act like our natural androgen hormones in that they stimulate anabolic metabolism in the muscles. Anabolic metabolism involves the buildup of larger molecules from smaller ones and includes all the constructive processes used to manufacture the substances needed for cellular growth and repair. As a result of steroids stimulating anabolic metabolism, muscles grow to a substantially greater size than they would have reached if the individual only exercised.
Doctors take different views on prescribing steroids. Most dislike the use of them in sports, and some will not prescribe them at all for use in sports. They see them as dangerous for healthy individuals, and they see the taking of drugs to get a winning edge as cheating. Others don't like steroids, but will prescribe them, knowing their patient, if not given them by their doctor, will get them from somewhere else. This way they can regulate them, tell the patient the correct way to use them, and keep an eye on them. Still other doctors consider steroids safe when administered under medical supervision, which includes carefully regulating dosages and watching for the first signs of trouble.
A fourth view doctors take is recognizing the possibility that although sometimes steroids do serious harm, the same can be said of minor drugs, such as aspirin. Millions of people take aspirin daily, because the benefits greatly outweigh the risks, and suffer no harm as a consequence, and the doctors feel the same is true about steroids. When under medical supervision, doctors feel their patients are safe because of their good physical condition and the drugs can be stopped if trouble begins to show. They feel that with steroids, much like with aspirin, the benefits greatly outweigh the risks.
None of these views can be proven correct or incorrect, but one thing is certain: steroids used without medical supervision do the greatest harm. Athletes generally do not know how much to take and often use doses that are far too large right from the start.
Many doctors believe that steroids can lead to heart attacks and even strokes. Steroids cause extreme bloating because they create an imbalance of chemicals in the body, and to regain that balance the body retains water. This extra fluid raises the blood pressure and can bring on strokes and heart attacks. Steroids are also suspected of causing liver and kidney failure. They seem just as capable of destroying tissue as of creating it.
Women are seen as especially endangered by steroids because of the increased amounts of testosterone. Testosterone steroids are androgenic drugs, which means they promote masculinity, as seen in the young basketball player mentioned above. Although women produce small amounts of testosterone naturally, it is a male hormone, and the amount present is kept in balance with estrogen, the female hormone; just as testosterone does for males, estrogen gives females their feminine characteristics. A woman taking steroids may go bald, grow excess body hair including a moustache, lose the gentle curves of her body, develop rougher skin, gain weight, and find her voice deepening. Unborn children are also endangered: a female fetus may develop such male traits as extra hair, and according to a few doctors any unborn child is at risk of handicap or deformity.
Men are also endangered. They may experience a shrinking of the testicles, called atrophy, accompanied by a lowered sperm count, a lessening of sexual desire, infertility, and an enlargement of the prostate gland of a kind that men under fifty usually do not suffer from. Men will often develop breasts like those of a woman.
Steroids are dangerous when used incorrectly and should be taken only under medical supervision. They have undesired side effects for men, women, and even the unborn. When abused, steroids are no longer anabolic; they stop building the body's tissue and start tearing it down, as almost anything will when used in excess.
Bibliography
Davidson, Julian M. "Steroids." Grolier's Encyclopedia. New York: Grolier, Inc., 1993.
Dolan, Edward F., Jr. Drugs in Sports. New York: Franklin Watts, 1986.
Strizak, Alan Marc, MD. "Sports Medicine." Grolier's Encyclopedia. New York: Grolier, Inc., 1993.
Taylor, William N., MD. Macho Medicine. North Carolina: McFarland & Company, Inc., 1991.
f:\12000 essays\sciences (985)\Biology\Sudden Infant Death Syndrome.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Sudden infant death syndrome, better known as SIDS, is one of the leading contributors to the high infant mortality rate in this country today. It is often misunderstood or goes unrecognized, and for the most part its causes are unknown to the general public, although this is changing as public awareness increases. The purpose of this paper is to explain sudden infant death syndrome and its known or suggested causes. The history of SIDS, the problems and emotional suffering that result from the loss of a child, the toll it takes on surviving siblings, and the counseling and other help available to parents who have lost a child to SIDS will also be explored. Overall, I hope to reach a better understanding of each of these topics within the body of the paper.
SIDS is also commonly referred to as crib death. It is said to claim roughly 6,000 to 7,000 babies a year within the continental United States alone, with a slight increase each year (Bergman xi). This seems an astounding figure, but compared with the total number of babies born in the United States, the number of deaths due to SIDS accounts for only a small percentage. It is, however, a percentage that hopefully can be reduced, and to any parent the loss of even one child is one too many, whatever the statistics say. Although deaths associated with prematurity dominate the first week of life, SIDS is the leading cause of death among infants under one year of age, according to Bergman, and it ranks second only to injuries as a cause of death in children less than fifteen years of age. It is not widely known that SIDS takes more lives than more familiar diseases such as leukemia, heart disease, or cystic fibrosis (Bergman 24). Ironically, it was not until the middle of the 1970s that SIDS was no longer ignored as a cause of death. For the most part no research was being conducted, leaving the families of victims to wonder why their babies had died (Mandell 129). For the family and the family's friends, who are also victims, this was a tragedy: not knowing the cause of death caused physical and emotional distress in their lives, and self-blame was inevitable, even though there was nothing most of these parents could have done.
Today, although more research in this area is needed, researchers are making strides in combating this disease, but understanding of the crucial aspects of SIDS and how to prevent it is still limited. The leaders in this field hope to improve understanding of the disease by providing direction and opportunities for more intensive, higher-quality research. According to L. Stanley James, MD, chair of neonatology at Columbia-Presbyterian Medical Center in New York City, "The government is now having a rejuvenation of SIDS research, and over the next five years, they are going to be putting in thirty to forty million dollars." The direction will be supplied through a five-year research plan proposed by a panel of experts from the National Institute of Child Health and Human Development in Bethesda, Maryland (Zylke 1565). In response to a Senate request, representatives from the fields of epidemiology, neonatology, cardiorespiratory and sleep research, neuroscience, behavioral medicine, pathology, infectious disease, immunology, and metabolism will meet and release a report on current knowledge and research recommendations (Zylke 1565). It was important to this group that people have a definition of SIDS acceptable to all. The current definition, developed in 1969, describes SIDS as "the sudden death of any infant or young child which is unexpected by history and in which a thorough postmortem examination fails to demonstrate an adequate cause of death" (Bosma 5).
Much has been learned through research in recent years. Some findings are now considered facts: for example, that the peak incidence occurs at about ten weeks of age and that SIDS is uncommon at less than three weeks and greater than nine months of age (Zylke 1566). It is also commonly known that death usually occurs during sleep and that most victims do not exhibit any notable illness at the time. It is important, though, to realize what complications might arise from such a broad generalization: it could be used by some in the medical profession to cover up what might otherwise be considered malpractice. With the good comes the bad as well.
Therefore, the National Institutes of Health assembled a group of experts to come up with a new definition of SIDS: "The sudden death of an infant under one year of age which remains unexplained after a complete postmortem examination, including an investigation of the death scene and a review of the case history. Cases failing to meet the standards of this definition, including those without postmortem examinations, should not be diagnosed as SIDS. Cases that are autopsied and carefully investigated but which remain unresolved may be designated as undetermined, unexplained, or the like" (Zylke 1566). A few conclusions can be drawn from this definition. One is that it is more precise and operational, defining SIDS in terms of age. Another is that it provides room for cases that do not show all the expected features to be classified as unexplained or ruled out as SIDS. It also takes abuse and neglect into account by requiring an examination of the scene of death: obvious conclusions can be drawn if a child's environment involved poor living conditions and the child was not well cared for, which could well have contributed to the death. It must be remembered that this definition is only meant to serve as a benchmark for further research and cannot be applied to every situation in which a death attributed to SIDS is considered.
There are also socioeconomic and demographic factors associated with an increased risk of SIDS, but few exact causes have been identified. There have been studies, however, that suggest a correlation between cigarette smoking and SIDS. It has not been determined whether the link with maternal smoking during pregnancy is biological in nature or simply a proxy for maternal behavior (Malloy 1380). Research by Haglund and Cnattingius has shown that infants born to women who smoked during pregnancy die earlier of SIDS than infants whose mothers did not smoke during pregnancy (Malloy 1381), and their report supports the plausibility of a biological mechanism. They did not find it possible to conclude that there was a relationship between age at death and a history of maternal smoking during pregnancy, but they did find a relationship between the quantity of cigarettes smoked and an increased risk of SIDS (Malloy 1381). These effects of maternal smoking on the SIDS baby have not gone unnoticed by others. According to other researchers, respiratory disorders during sleep are thought to be one of the major causes of SIDS, and with a distinct link to breathing abnormalities in many SIDS cases, suffocation has also been linked to mothers who smoke during pregnancy.
Another study has shown that chronic fetal hypoxia, due to a low hematocrit during pregnancy, may also predispose infants to SIDS (Raub 2731). This study was supported by the National Institute of Child Health and Human Development. Researchers analyzed 130 SIDS cases and 1,930 control infants who survived the first year of life, and found that mothers who smoked ten or more cigarettes a day increased their infants' chance of SIDS by almost 70% (Raub). According to this research, then, the more cigarettes a mother smoked per day while pregnant, the greater her infant's risk of SIDS. These researchers also suggest that maternal smoking may predispose infants to SIDS by impairing the normal development of the fetal central nervous system (Raub). The central nervous system controls such bodily functions as breathing, which brings us back to the theory of suffocation during sleep in SIDS babies (Martin 194). If breathing disorders have been theorized to cause SIDS, and maternal smoking has been shown to impair development of the fetal central nervous system, there is an obvious link between the two. Mothers should become increasingly aware of smoking as a contributor to SIDS, along with other drugs and carcinogens as well. Sometimes the best solution boils down to the obvious, which is prevention; in this case, prevention of smoking during pregnancy.
Another possible cause of SIDS may be due in part to a defect in the autonomic nervous system. Increases in cardiac sympathetic activity may induce malignant arrhythmias even in the absence of heart disease (Stramba 1514). There has been a consensus that SIDS may be multifactorial and that in most SIDS cases, death may be attributed to either cardiac or respiratory problems (Stramba 1515). There are still no preventive measures for SIDS at this time.
It is known that the development and maturation of cardiac and respiratory functions continue after birth, and that an infant's chance of having malignant arrhythmias during this time differs from that of an adult (Stramba 1514). To understand the mechanisms that cause SIDS, a fuller understanding of what goes on in this postnatal period is crucial. There is also the possibility that SIDS victims have a cardiac instability during the first months of life (Stramba 1521), an idea that supports the notion of heart-rate problems in such infants. According to recent data, the risk of SIDS increases by almost 30% for babies with heart rates that deviate from the mean (Stramba 1541). All of these ideas open up a new area in the understanding of SIDS. Perhaps there is a way to predict or test for SIDS by checking such measures as heart and breathing rates, but physicians cannot yet be confident in such tests, as they have not proved reliable in accurately predicting SIDS. This is why further research and testing must be done in this and all other areas.
There has also been recent research on the risk of SIDS associated with vaginal breech delivery. A study by Germain M. Buck, PhD, clinical assistant professor in the Department of Social and Preventive Medicine in Buffalo, NY, has also shown that the risk of SIDS more than doubles when mothers were in labor for approximately sixteen hours or longer (Bergman 214). According to Buck, "The majority of breech SIDS infants were single footling deliveries (a rarer type of breech presentation with the baby emerging with one foot first). The more common form of delivery called 'frank' presentation, with the baby exiting buttocks first, was not associated with an increase in SIDS" (Bergman 215). What can be concluded is that a breech birth may be an indicator of an earlier problem in the development of the fetus, including problems in the development of proper heart rate and breathing; oxygen and blood flow may be restricted to the fetus, which can be a contributing factor in improper fetal development (Bosma 107). It is important to realize that a breech delivery is not the direct cause of this syndrome, which is a false conclusion that might otherwise be drawn.
Although SIDS is today essentially a diagnosis of exclusion, there is currently no consensus about the extent of the investigation that must be undertaken in order to eliminate other possible causes of death (Thatch 126). There is supposed to be a thorough examination of the death scene by a medical examiner, as stated previously, but this is at the examiner's own discretion and does not happen very often (Gregory 2731). Simply put, most coroners either do not have the time or are not always willing to go out and investigate the death scene for other possible causes of death. By examining the death scene, they also bring themselves into conflict with the parents of the child as well as with outside support groups (Thatch 127). The purpose of those who counsel families coping with the loss of a child to SIDS is to diminish the pain and guilt associated with the death (Cruan 53), and any outside investigation by the police or a medical examiner does nothing but induce guilt, which is extremely hard on the parents, especially if they are not truly at fault.
Such an investigation may also yield unwanted results. It will from time to time reveal potentially preventable causes of death that might otherwise have been diagnosed as SIDS; the causes most often mistaken for "true SIDS" are overeating, overlying, and, most often, accidental suffocation (Thatch 126). The harsh reality is that nobody is a perfect parent, and no matter how much care is given, accidents do happen. Another complication is that accidental suffocation by overlying during sleep can rarely, if ever, be conclusively proved by an examination (Bergman 152): once the parent moves the baby from the sleeping position, the evidence is destroyed. The problems that arise from this are clear, making it likely that death by suffocation is unprovable, which raises the question of whether SIDS actually exists at all. It is equally hard to conclude that suffocation was not the cause of death, however.
There has been a presumed association between SIDS and apnea, which has led to the use of home apnea monitors for "diagnostic and preventive" purposes (Ahmann 719). They are placed in homes where the infant is thought to be at risk of SIDS. According to the Congressional Office of Technology Assessment, as many as 45,000 infants are on home apnea monitors, which translates into 11.5 infants on monitors per every 1,000 births (Ahmann). The problem with the home apnea monitor is that it is likely to cause distress within the family unit: it has been suggested to cause parental fatigue, anxiety, social isolation, and depression (Defrain 215). This in turn leads to conflicts with others outside the family, such as friends, relatives, and people in the workplace.
A study by Elizabeth Ahmann examined how home apnea monitors disrupt family life. Data from telephone interviews and mailed questionnaires were used to examine twelve aspects of family life, such as parental depression, health, and attachment to the infant, in ninety-three families with infants considered at high risk for SIDS and on home apnea monitors, together with a matched comparison group of eighty-six families whose infants did not require monitoring. The results showed that mothers of monitored infants were in poorer health than those in the control group, reporting poor health, fatigue, and somatic complaints (Ahmann 722). Prior mental health was not considered in this study, which may or may not account for the complaints and poorer health among mothers of monitored infants; this could have swayed the results, but the evidence still shows that the parents of monitored infants clearly exhibited more stress. The results are easier to understand from the point of view of the mother of a monitored infant: it must be difficult to have a child who needs to be monitored in the home because of a possible chance of death. Such parents likely find it hard to rest while the infant sleeps, feeling that if they do not keep a constant eye on the child and it dies, the fault will be theirs.
When a baby dies, each person in the family experiences it in a unique way. When a child dies of SIDS, the event can be even more tragic because for the most part the death goes unexplained. The death of a baby due to SIDS is said to be extremely hard on the parents, for they feel a great amount of self-blame, and it takes approximately three years or more for them to recover (Defrain 229). What is clear is that people are never really the same after a death due to SIDS, or after a stillbirth or miscarriage. The parents must learn that they can heal emotionally and that they can and must go on for their own future and their own good (Defrain 229). They need to learn that life will get better, even though the memory of the child will always exist in their hearts and minds. Seeking professional help to cope with such an event is a good idea: deep emotional feelings that are bottled up need to be expressed and brought out into the open, which benefits not only the parents but the entire family as well.
The death of an infant due to SIDS may also cause unconscious conflicts in the parents. Parents have been shown to become preoccupied with death in their own dreams and in those of their spouses (Cruan 53), and they may reject their surviving child's aliveness, independence, or uniqueness (Cruan 54). These reactions can be attributed to the obvious stress that SIDS puts on the family, and they show that the pain of losing a child reaches deep into the parents' emotions. During this time husband and wife may become closer to one another and show more feeling and compassion for each other. These are defense mechanisms used to ease the pain of the infant's passing; denial may bring the parents closer as they concentrate on other matters in order to lessen the sense of loss. Such responses can be considered psychologically normal as long as they do not get out of hand to the point of drastic measures such as suicide. This is a harsh reality that is sometimes best dealt with by seeking professional help.
The surviving child in the SIDS family is an important consideration, and the mental health of a child who is part of such a loss matters greatly. Children grieve, often deeply, and the unexpected loss of a sibling to SIDS elicits feelings from other family members that change the family structure (Mandell 217). It is of the utmost importance to bring the child's feelings out into the open and to see how he or she feels about what has happened; negative feelings kept inside may hurt the child's development and how he or she grows up. It is important to remember that a child is being discussed: an older child or teenager still feels great sorrow but is more understanding of and realistic about what has transpired.
It is now obvious that the impact sudden infant death syndrome has on family and friends is tragic and shocking, to say the least. Health professionals, too, are at times struck by how suddenly SIDS can take an infant's life. The family's doctor and other health care professionals play an important role in coping with this loss (Limerick 147). Providing early explanations and reassurance to the family, along with the support of counselors and parents' organizations, is helpful, especially when there are legal investigations and no clear cause of death. It is up to such health professionals to provide families with the support and advice they need in order to cope with their loss. Losing an infant to SIDS can be one of the most devastating events in the lives of many parents, especially when they feel the death was their fault when in fact it was often due to circumstances beyond their control. There are some things parents can have no control over, and SIDS is one of these tragic events that can happen to a family.
Overall, I feel that I have explained SIDS, both its causes and its psychological impact, to a full extent. SIDS is a real problem in our society today, and it is one that can and must be dealt with, especially in the USA: we have a very high infant mortality rate for a country of our stature, and although our health care is top notch, our babies are still not surviving. This can be prevented. The emotional strain that SIDS puts on parents is unbelievable and, in my opinion, cannot be understood to its fullest extent unless one experiences it first hand. A great deal of research is being done to try to combat the causes of SIDS, which many times is incorrectly recorded as a cause of death among infants; this is a shame, because it may lead researchers who study these cases in the wrong direction. However, new research combined with good counseling offers new hope.
Works Cited
Ahmann, Elizabeth, et al. "Home Apnea Monitoring and Disruptions in Family Life." American Journal of Public Health 2 (1992): 719-722.
Bergman, Abraham B. The Discovery of Sudden Infant Death Syndrome. New York: CBS Educational and Professional Publishing, 1986.
Bosma, James F. Development of Upper Respiratory Anatomy and Function. Washington, D.C.: National Institutes of Health, 1974.
Cruan, Arno. "The Relationship of Sudden Infant Death Syndrome and Parental Unconscious Conflicts." Pre- and Peri-natal Psychology Journal 2 (1987): 50-56.
Defrain, John. "Learning About Grief From Normal Families: SIDS, Stillbirth, and Miscarriage." Journal of Marital and Family Therapy 12 (1991): 215-232.
Gregory, Geoff. "The Discovery of Sudden Infant Death Syndrome." The Journal of the American Medical Association 264 (1990): 2731.
Kahn, A., et al. "Problems in Management of Infants With an Apparent Life Threatening Event." Annals of the New York Academy of Sciences 533 (1988): 78-88.
Limerick, Sylvia. "Family and Health Professional Interactions." Annals of the New York Academy of Sciences 533 (1988): 145-154.
Malloy, Michael H. "Sudden Infant Death Syndrome and Maternal Smoking." American Journal of Public Health 82 (1992): 1380-1382.
Mandell, Frederick, et al. "The Sudden Infant Death Syndrome." Annals of the New York Academy of Sciences 533 (1988): 129-131.
Mandell, Frederick, et al. "The Surviving Child in the SIDS Family." Pediatrician 15 (1988): 217-221.
Martin, Richard J. Respiratory Disorders During Sleep in Pediatrics. New York: Futura Publishing Co., 1990.
Powell, Maria. "The Psychological Impact of SIDS on Siblings." Irish Journal of Psychology 12 (1991): 235-247.
Raub, William. "Chronic Fetal Hypoxia May Predispose Infants to Sudden Infant Death Syndrome." The Journal of the American Medical Association 264 (1990): 2731.
f:\12000 essays\sciences (985)\Biology\Tapeworm Infestation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tapeworm Infestation
Causative agent: Tapeworms are parasitic helminths of the Phylum Platyhelminthes (flatworms) and Class
Cestoda. They include Taenia saginata (beef tapeworm), Taenia solium (pork tapeworm), Diphyllobothrium
latum, and Echinococcus granulosus.
Anatomy: Scolex (head) with suckers and sometimes hooks. Proglottids - body segments continuously produced by
the scolex; each contains both testes and ovaries, which mature as the segment moves away from the scolex to
produce eggs. Taenia tapeworms can live up to 25 yrs. and grow to 18 ft.; D. latum has been found as long as
32 ft. and can produce millions of eggs per day. Tapeworms lack a digestive system but obtain food by absorbing
it through their cuticle.
Diseases and Symptoms: Usually none at all - in fact, the infection might never be noticed. Can cause blockage of
the digestive tract or appendicitis. If the eggs hatch in a human, the larvae may cross the intestinal lining, enter the
bloodstream, migrate to different organs in the body, and develop into cysticerci (5 mm - 8 in.). D. latum larvae that
infect people are called plerocercoids. Depending on the location and number of cysticerci, pathology can result. Ex:
Cysticercosis (Taenia genus): eyes - blindness; spinal cord - paralysis; brain - neurocysticercosis, with symptoms
similar to those of a brain tumor, causing traumatic neurological damage. Persons of Scandinavian heritage are susceptible.
Diphyllobothriasis (D. latum): Abdominal distention, flatulence, cramping, diarrhea, and sometimes anemia
(the parasite has a high affinity for vitamin B12).
Hydatidosis (E. granulosus): Instead of cysticerci, the egg develops into a hydatid cyst. Cysts have been found large
enough to contain four gallons of fluid. Damage is due to the cyst's large size in vulnerable areas (brain, bone) or rupture
of the cyst, leading to development of many daughter cysts. Rupture may cause anaphylactic shock. Infection is most
often seen in people who raise sheep or hunt/trap animals.
Diagnosis: Identification of eggs or proglottids in feces, immunologic tests, radiologic tests (CAT, MRI) to diagnose
presence of cysticerci.
Prophylaxis: meat inspection, cooking meat thoroughly, treating non-pathogenic cases therapeutically to avoid spread of
the disease, better personal hygiene, avoiding the use of human sewage as fertilizer.
Treatment: T. saginata, T. solium, and D. latum - niclosamide or praziquantel; E. granulosus - albendazole or
surgical removal of the hydatid cyst.
Tapeworm         Definitive host        Intermediate host
T. saginata      humans                 mainly cattle
T. solium        humans only            humans and pigs
D. latum         mostly humans, bears   fish
E. granulosus    dogs, coyotes          humans, deer, sheep
Questions
What is the intermediate host in Taenia solium?
1) dogs
2) cattle
3) pork
4) fish
Which is not a disease of tapeworms?
1) neurocysticercosis
2) anemia
3) hydatid cyst disease
4) severe weight loss/ excessive hunger
True or False : Ingesting the cysticercus of a tapeworm is more likely to cause disease than ingesting tapeworm
eggs.
What are three ways to diagnose tapeworm infestation?
radiologic tests
immunologic tests
fecal examination
True or False : Tapeworm infestation can be an example of commensalism.
Charlotte Cox
f:\12000 essays\sciences (985)\Biology\The Application of Fractal Geometry to Ecology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Term paper: Principles of Ecology 310L
New Ecological Insights:
The Application of Fractal Geometry to Ecology
Victoria Levin
7 December 1995
Abstract
New insights into the natural world are just a few of the results from the use of fractal geometry.
Examples from population and landscape ecology are used to illustrate the usefulness of fractal
geometry to the field of ecology. The advent of the computer age played an important role in the
development and acceptance of fractal geometry as a valid new discipline. New insights gained from
the application of fractal geometry to ecology include: understanding the importance of spatial and
temporal scales; the relationship between landscape structure and movement pathways; an increased
understanding of landscape structures; and the ability to more accurately model landscapes and
ecosystems. Using fractal dimensions allows ecologists to map animal pathways without creating an
unmanageable deluge of information. Computer simulations of landscapes provide useful models for
gaining new insights into the coexistence of species. Although many ecologists have found fractal
geometry to be an extremely useful tool, not all concur. With all the new insights gained through the
appropriate application of fractal geometry to the natural sciences, it is clear that fractal geometry is a
useful and valid tool.
New insight into the natural world is just one of the results of the increasing popularity and use of
fractal geometry in the last decade. What are fractals and what are they good for? Scientists in a
variety of disciplines have been trying to answer this question for the last two decades. Physicists,
chemists, mathematicians, biologists, computer scientists, and medical researchers are just a few of
the scientists that have found uses for fractals and fractal geometry.
Ecologists have found fractal geometry to be an extremely useful tool for describing ecological
systems. Many population, community, ecosystem, and landscape ecologists use fractal geometry as
a tool to help define and explain the systems in the world around us. As with any scientific field, there
has been some dissension in ecology about the appropriate level of study. For example, some
organism ecologists think that anything larger than a single organism obscures the reality with too
much detail. On the other hand, some ecosystem ecologists believe that looking at anything less than
an entire ecosystem will not give meaningful results. In reality, both perspectives are correct.
Ecologists must take all levels of organization into account to get the most out of a study. Fractal
geometry is a tool that bridges the "gap" between different fields of ecology and provides a common
language.
Fractal geometry has provided new insight into many fields of ecology. Examples from population
and landscape ecology will be used to illustrate the usefulness of fractal geometry to the field of
ecology. Some population ecologists use fractal geometry to correlate the landscape structure with
movement pathways of populations or organisms, which greatly influences population and
community ecology. Landscape ecologists tend to use fractal geometry to define, describe, and
model the scale-dependent heterogeneity of the landscape structure.
Before exploring applications of fractal geometry in ecology, we must first define fractal geometry.
The exact definition of a fractal is difficult to pin down. Even the man who conceived of and
developed fractals had a hard time defining them (Voss 1988). Mandelbrot's first published
definition of a fractal was in 1977, when he wrote, "A fractal is a set for which the
Hausdorff-Besicovitch dimension strictly exceeds the topological dimension" (Mandelbrot 1977).
He later expressed regret for having defined the word at all (Mandelbrot 1982). Other attempts to
capture the essence of a fractal include the following quotes:
"Different people use the word fractal in different ways, but all agree that fractal objects
contain structures nested within one another like Chinese boxes or Russian dolls." (Kadanoff
1986)
"A fractal is a shape made of parts similar to the whole in some way." (Mandelbrot 1982)
Fractals are..."geometric forms whose irregular details recur at different scales." (Horgan
1988)
Fractals are..."curves and surfaces that live in an unusual realm between the first and
second, or between the second and third dimensions." (Thomsen 1982)
One way to define the elusive fractal is to look at its characteristics. A fundamental characteristic of
fractals is that they are statistically self-similar: a fractal looks like itself at any scale. A statistically
self-similar object does not have to look exactly like the original, but must look similar. An example of
self-similarity is a head of broccoli. Imagine holding a head of broccoli. Now break off a large floret;
it looks similar to the whole head. If you continue breaking off smaller and smaller florets, you'll see
that each floret is similar to the larger ones and to the original. There is, however, a limit to how small
you can go before you lose the self-similarity.
Another identifying characteristic of fractals is that they usually have a non-integer dimension. The fractal
dimension of an object is a measure of its space-filling ability and allows one to compare and categorize
fractals (Garcia 1991). A straight line, for example, has the Euclidean dimension of 1; a plane has the
dimension of 2. A very jagged line, however, takes up more space than a straight line but less space
than a solid plane, so it has a dimension between 1 and 2. For example, 1.56 is a fractal dimension.
Most fractal dimensions in nature are about 0.2 to 0.3 greater than the Euclidean dimension (Voss
1988).
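To make the idea of a fractional, space-filling dimension concrete, the short Python sketch below estimates a
box-counting dimension: at several grid sizes it counts how many boxes of side s contain at least one point of an
object and fits the slope of log N(s) against log(1/s). This example is not taken from the sources cited in this
paper; the function name and the NumPy-based implementation are illustrative assumptions only.

import numpy as np

def box_counting_dimension(points, box_sizes):
    # Estimate the box-counting dimension of a set of 2-D points.
    # For each box size s, count the number N(s) of grid cells that contain
    # at least one point; the dimension is the slope of log N(s) vs. log(1/s).
    points = np.asarray(points, dtype=float)
    counts = []
    for s in box_sizes:
        cells = np.floor(points / s)                 # assign each point to a grid cell of side s
        counts.append(len(np.unique(cells, axis=0))) # number of occupied cells
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A straight line of points should give an estimate close to the Euclidean value of 1.
line = np.column_stack([np.linspace(0, 1, 2000), np.linspace(0, 1, 2000)])
print(box_counting_dimension(line, box_sizes=[0.1, 0.05, 0.025, 0.0125]))

A genuinely jagged object (a digitized coastline, a root system) measured the same way would give a slope
noticeably above 1, which is exactly what a non-integer fractal dimension expresses.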
Euclidean geometry and Newtonian physics have been deeply rooted traditions in the scientific
world for hundreds of years. Even though mathematicians as early as 1875 were setting the
foundations that Mandelbrot used in his work, early mathematicians resisted the concepts of fractal
geometry (Garcia 1991). If a concept did not fit within the boundaries of the accepted theories, it
was dismissed as an exception. Much of the early work in fractal geometry by mathematicians met
this fate. Even though early scientists could see the irregularity of natural objects in the world around
them, they resisted the concept of fractals as a tool to describe the natural world. They tried to force
the natural world to fit the model presented by Euclidean geometry and Newtonian physics. Yet we
all know that "clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is
not smooth, nor does lightning travel in a straight line" (Mandelbrot 1982).
The advent of the computer age, with its sophisticated graphics, played an important role in the
development and acceptance of fractal geometry as a valid new discipline in the last two decades.
Computer-generated images clearly show the relevance of fractal geometry to nature (Scheuring and
Riedi 1994). A computer-generated coastline or mountain range demonstrates this relevance. Once
mathematicians and scientists were able to see graphical representations of fractal objects, they
could see that the mathematical theory behind them was not freakish but actually describes natural
objects fairly well. When explained and illustrated to most scientists and non-scientists alike, fractal
geometry and fractals make sense on an intuitive level.
Examples of fractal geometry in nature are coastlines, clouds, plant roots, snowflakes, lightning, and
mountain ranges. Fractal geometry has been used by many sciences in the last two decades; physics,
chemistry, meteorology, geology, mathematics, medicine, and biology are just a few.
Understanding how landscape ecology influences population ecology has allowed population
ecologists to gain new insights into their field. A dominant theme of landscape ecology is that the
configuration of spatial mosaics influences a wide array of ecological phenomena (Turner 1989).
Fractal geometry can be used to explain connections between populations and the landscape
structure. Interpreting spatial and temporal scales and movement pathways are two areas of
population ecology that have benefited from the application of fractal geometry.
Different tools are required in population ecology because the resolution or scale with which field
data should be gathered is attuned to the study organism (Wiens et al. 1993). Insect movements, like
plant root growth, follow a continuous path that may be punctuated by stops, but the tools required
to measure these continuous pathways are very different. Plant movement is measured by observing
root growth through photographs, insect movement by tracking insects with flag placement, and
animal movement by attaching tracking devices to larger animals (Gautestad and Mysterud 1993,
Shibusawa 1994, Wiens et al. 1993).
Spatial and temporal scale are important when measuring the home range of a population and when
tracking animal movement (Gautestad and Mysterud 1993, Wiens et al. 1993). Animal paths have
local, temporal, and scale-specific fluctuations in tortuosity (Gautestad and Mysterud 1993) that are
best described by fractal geometry. The mapping of insect movement also requires use of the proper
spatial or temporal scale. If too long a time interval is used to map the insect's progress, the
segments will be too long and the intricacies of the insect's movements will be lost. The use of very
short intervals may create artificial breaks in behavioral moves and might increase the sampling effort
required until it is unmanageable (Wiens et al. 1993).
Movement pathways are one of the main characteristics influenced by the landscape. Movement
pathways are influenced by the vegetation patches and patch boundaries (Wiens et al. 1993). Root
deflection in a growing plant is similar to an animal pathway being changed by the landscape
structure. Paths of animal movement have fractal aspects.
In a continuously varying landscape, it is difficult to define the area of a species' habitat (Palmer
1992). Application of fractal geometry has given new insights into animal movement pathways. For
example, animal movement determines the home range. Because animal movement is greatly
influenced by the fractal aspect of the landscape, home range is directly influenced by the landscape
structure (Gautestad and Mysterud 1993). Animal movement is not random but greatly influenced by
the landscape of the home range of the animal (Gautestad and Mysterud 1993). Structural
complexity of the environment results in tortuous animal pathways (Gautestad and Mysterud 1993),
which in turn lead to ragged home range boundaries.
Gautestad and Mysterud (1993) found that home range can be more accurately described by its
fractal properties than by the traditional area-related approximations. Since demarcation of home
range is a difficult task and home range can't be described in traditional units like square meters or
square kilometers, they used fractal properties to better describe the home range area as a complex
area utilization pattern (Gautestad and Mysterud 1993). Fractals work well to describe home range
because as the sample of location observations increases, the overall pattern of the position plots
takes the form of a statistical fractal (Gautestad and Mysterud 1993).
Fractal dimensions are used to represent the pathways of beetle movement because the fractal
dimension of insect movement pathways may provide insights not available from absolute measures
of pathway configurations (Wiens et al. 1993). Using fractal dimensions allowed ecologists to map
the pathway without creating an unmanageable deluge of information (Wiens et al. 1993).
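Wiens et al. (1993) do not publish code, but one standard way to estimate the fractal dimension of a movement pathway is the divider (or ruler) method, in which the measured path length L(d) shrinks as the ruler length d grows and D is estimated from L(d) being proportional to d raised to the power (1 - D). The sketch below is a hedged, generic illustration of that idea; the simulated beetle path and all names are assumptions, not the authors' actual procedure.

import numpy as np

def divider_dimension(path, ruler_lengths):
    """Estimate a pathway's fractal dimension with the divider (ruler) method.

    path          : (n, 2) array of successive positions along the pathway
    ruler_lengths : divider (step) lengths to walk along the path with
    Uses L(d) ~ d**(1 - D), so D = 1 - slope of log L against log d.
    """
    path = np.asarray(path, dtype=float)
    measured = []
    for d in ruler_lengths:
        length, anchor = 0.0, path[0]
        for p in path[1:]:
            # Advance the divider whenever we are at least d away from the anchor.
            step = np.linalg.norm(p - anchor)
            if step >= d:
                length += step
                anchor = p
        measured.append(max(length, d))  # guard against log(0) for very coarse rulers
    slope, _ = np.polyfit(np.log(ruler_lengths), np.log(measured), 1)
    return 1.0 - slope

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    beetle_path = np.cumsum(rng.normal(scale=0.5, size=(5000, 2)), axis=0)
    print("pathway D ~", round(divider_dimension(beetle_path, [1, 2, 4, 8, 16]), 2))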
Insect behavior such as foraging, mating, population distribution, predator-prey interactions or
community composition may be mechanistically determined by the nature of the landscape. The spatial
heterogeneity in environmental features or patchiness of a landscape will determine how organisms
can move around (Wiens et al. 1993). As a beetle or another insect walks along the ground, it does
not travel in a straight line. The beetle might walk along in a particular direction looking for something
to eat. It might continue in one direction until it comes across a bush or shrub. It might go around the
bush, or it might turn around and head back the way it came. Its path seems to be random but is
really dictated by the structure of the landscape (Wiens et al. 1993).
Another improvement in population ecology through the use of fractal geometry is the modeling of
plant root growth. Roots, which also may look random, do not grow randomly. Reproducing the
fractal patterns of root systems has greatly improved root growth models (Shibusawa 1994).
Landscape ecologists have used fractal geometry extensively to gain new insights into their field.
Landscape ecology explores the effects of the configuration of different kinds of environments on the
distribution and movement of organisms (Palmer 1992). Emphasis is on the flow or movement of
organisms, genes, energy, and resources within complex arrangements of ecosystems (Milne 1988).
Landscapes exhibit non-Euclidean density and perimeter-to-area relationships and are thus
appropriately described by fractals (Milne 1988). New insights on scale, increased understanding of
landscape structures, and better landscape structure modeling are just some of the gains from
applying fractal geometry.
Difficulties in describing and modeling spatially distributed ecosystems and landscapes include the
natural spatial variability of ecologically important parameters such as biomass, productivity, soil and
hydrological characteristics. Natural variability is not constant and depends heavily on spatial scale.
Spatial heterogeneity of a system at any scale will prevent the use of simple point models
(Vedyushkin 1993).
Most landscapes exhibit patterns intermediate between complete spatial independence and complete
spatial dependence. Until the arrival of fractal geometry it was difficult to model this intermediate
level of spatial dependence (Palmer 1992, Milne 1988).
Landscapes present organisms with heterogeneity occurring at a myriad of length scales.
Understanding and predicting the consequences of heterogeneity may be enhanced when
scale-dependent heterogeneity is quantified using fractal geometry (Milne 1988). Landscape
ecologists usually assume that environmental heterogeneity can be described by the shape, number,
and distribution of homogeneous landscape elements or patches. Heterogeneity can vary as a
function of spatial scale in landscapes. An example of this is a checkerboard. At a very small scale,
a checkerboard is homogeneous because one would stay in one square. At a slightly larger scale,
the checkerboard would appear to be heterogeneous since one would cross the boundaries of the
red and black squares. At an even larger scale, one would return to homogeneity because of the
repeating pattern of red and black squares (Palmer 1992).
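Palmer's checkerboard example can be made quantitative: the variation seen within a sampling window, and the degree to which windows differ from one another, both change with window size. The small sketch below is purely illustrative and is not drawn from Palmer (1992); it simply measures variance within and among windows on a simulated checkerboard.

import numpy as np

def checkerboard(n_squares=8, cells_per_square=16):
    """A checkerboard of 0/1 'habitat types' on a fine grid of cells."""
    idx = np.arange(n_squares * cells_per_square) // cells_per_square
    return ((idx[:, None] + idx[None, :]) % 2).astype(float)

def heterogeneity(grid, window):
    """Return (mean variance inside windows, variance among window means)."""
    n = grid.shape[0] - grid.shape[0] % window          # trim so windows tile evenly
    g = grid[:n, :n].reshape(n // window, window, n // window, window)
    g = g.transpose(0, 2, 1, 3)                         # -> (rows, cols, window, window)
    within = g.var(axis=(2, 3)).mean()                  # low when each window is uniform
    among = g.mean(axis=(2, 3)).var()                   # low when all windows look alike
    return within, among

if __name__ == "__main__":
    board = checkerboard()
    for w in (4, 24, 64):   # smaller than, straddling, and larger than one square
        within, among = heterogeneity(board, w)
        print(f"window {w:3d}: within-window var = {within:.3f}, among-window var = {among:.3f}")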
An increased understanding of landscape structures results from using the fractal approach in the
field of remote sensing of forest vegetation. Specific advantages include the ability to extract
information about spatial structure from remotely sensed data and to use it to discriminate among these
data; the compression of this information to a few values; the ability to interpret fractal dimension
values in terms of the factors that determine the actual spatial structure; and the robustness of
fractal characteristics (Vedyushkin 1993).
Computer simulations of landscapes provide useful models for gaining new insights into the
coexistence of species. Simulated landscapes allow ecologists to explore some of the consequences
of the geometrical configuration of environmental variability for species coexistence and richness
(Palmer 1992). A statistically self-similar landscape is an abstraction but it allows an ecologist to
model variation in spatial dependence (Palmer 1992). Spatial variability in the environment is an
important determinant of coexistence of competitors (Palmer 1992). Spatial variability can be
modeled by varying the landscape's fractal dimension.
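Palmer's paper describes statistically self-similar landscapes whose fractal dimension can be varied; one generic way to generate such surfaces is spectral synthesis of a fractional Brownian surface, sketched below. The method, the parameter names, and the D = 3 - H convention used here are illustrative assumptions, not Palmer's actual procedure.

import numpy as np

def fractal_landscape(size=129, fractal_dim=2.3, seed=0):
    """Generate a statistically self-similar surface by spectral synthesis.

    Uses the common convention D = 3 - H for a surface, filtering white noise
    so that spectral power falls off as f**-(2H + 2).  Purely illustrative.
    """
    h = 3.0 - fractal_dim                      # Hurst exponent from the target dimension
    beta = 2.0 * h + 2.0
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(size, size))
    f = np.fft.fftfreq(size)
    radial = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)
    radial[0, 0] = np.inf                      # suppress the zero-frequency term
    spectrum = np.fft.fft2(noise) * radial ** (-beta / 2.0)
    surface = np.real(np.fft.ifft2(spectrum))
    return (surface - surface.min()) / (surface.max() - surface.min())

if __name__ == "__main__":
    smooth = fractal_landscape(fractal_dim=2.1)   # close to a plain Euclidean surface
    rough = fractal_landscape(fractal_dim=2.9)    # highly convoluted landscape
    print("smooth landscape, std of local relief:", np.diff(smooth, axis=0).std().round(4))
    print("rough  landscape, std of local relief:", np.diff(rough, axis=0).std().round(4))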
The results of Palmer's computer simulation of species in a landscape show that an increase in the fractal
dimension increases the number of species per microsite and increases species habitat breadth.
Other results show that environmental variability allows the coexistence of species, decreases beta
diversity, and increases landscape undersaturation (Palmer 1992). Increasing the fractal dimension of
the landscape allows more species to exist in a particular area and in the landscape as a whole;
however, extremely high fractal dimensions cause fewer species to coexist on the landscape scale
(Palmer 1992).
Although many ecologists have found fractal geometry to be an extremely useful tool, not all concur.
Even scientists who have used fractal geometry in their research point out some of its shortcomings.
For example, Scheuring and Riedi (1994) state that "the weakness of fractal and multifractal
methods in ecological studies is the fact that real objects or their abstract projections (e.g.,
vegetation maps) contain many different kinds of points, while fractal theory assumes that the natural
(or abstract) objects are represented by points of the same kind."
Many scientists agree with Mandelbrot when he said that fractal geometry is the geometry of nature
(Voss 1988), while other scientists think fractal geometry has no place outside a computer simulation
(Shenker 1994). In 1987, Simberloff et al. argued that fractal geometry is useless for ecology
because ecological patterns are not fractals. In a paper called "Fractal Geometry Is Not the
Geometry of Nature," Shenker says that Mandelbrot's theory of fractal geometry is invalid in the
spatial realm because natural objects are not self-similar (1994). Further, Shenker states that
Mandelbrot's theory is based on wishing and has no scientific basis at all. He concedes, however, that
fractal geometry may work in the temporal realm (Shenker 1994). The criticism that fractal
geometry is only applicable to exactly self-similar objects is addressed by Palmer (1992). Palmer
(1992) points out that Mandelbrot's early definition (Mandelbrot 1977) does not mention
self-similarity and therefore allows objects that exhibit any sort of variation or irregularity on all
spatial scales of interest to be considered fractals.
According to Shenker, fractals are endless geometric processes, and not geometrical forms (1994),
and are therefore useless in describing natural objects. This view is akin to saying that we can't use
Newtonian physics to model the path of a projectile because the projectile's exact mass and velocity
are impossible to know at the same time. Mass and velocity, like fractals, are abstractions that allow
us to understand and manipulate the natural and physical world. Even though they are "just"
abstractions, they work quite well.
The value of critics such as Shenker and Simberloff is that they force scientists to clearly understand
their ideas and assumptions about fractal geometry, but the critics go too far in demanding precision
in an imprecise world.
With all the new insights and new knowledge that have been gained through the appropriate
application of fractal geometry to the natural sciences, it is clear that it is a useful and valid tool.
The new insights gained from the application of fractal geometry to ecology include: understanding
the importance of spatial and temporal scales; the relationship between landscape structure and
movement pathways; an increased understanding of landscape structures; and the ability to more
accurately model landscapes and ecosystems.
One of the most valuable aspects of fractal geometry, however, is the way that it bridges the gap
between ecologists of differing fields. By providing a common language, fractal geometry allows
ecologists to communicate and share ideas and concepts.
As the information and computer age progress, with better and faster computers, fractal geometry
will become an even more important tool for ecologists and biologists. Some future applications of
fractal geometry to ecology include climate modeling, weather prediction, land management, and the
creation of artificial habitats.
Literature Cited
Garcia, L. 1991. The Fractal Explorer. Dynamic Press. Santa Cruz.
Gautestad, A. O., Mysterud, I. 1993. Physical and biological mechanisms in animal
movement processes. Journal of Applied Ecology. 30:523-535.
Horgan, J. 1988. Fractal Shorthand. Scientific American. 258(2):28.
Kadanoff, L. P. 1986. Fractals: Where's the physics? Physics Today. 39:6-7.
Mandelbrot, B. B. 1982. The Fractal Geometry of Nature. W. H. Freeman and Company.
San Francisco.
Mandelbrot, B. B. 1977. Fractals: Form, Chance, and Dimension. W. H. Freeman. New
York.
Milne, B. 1988. Measuring the fractal geometry of landscapes. Applied Mathematics and
Computation. 27: 67-79.
Palmer, M.W. 1992. The coexistence of species in fractal landscapes. Am. Nat.
139:375-397.
Scheuring, I. and Riedi, R.H. 1994. Application of multifractals to the analysis of
vegetation pattern. Journal of Vegetation Science. 5: 489-496.
Shenker, O.R. 1994. Fractal geometry is not the geometry of nature. Studies in History
and Philosophy of Science. 25(6): 967-981.
Shibusawa, S. 1994. Modeling the branching growth fractal pattern of the maize root
system. Plant and Soil. 165: 339-347.
Simberloff, D., P. Betthet, V. Boy, S. H. Cousins, M.-J. Fortin, R. Goldburg, L. P.
Lefkovitch, B. Ripley, B. Scherrer, and D. Tonkyn. 1987. Novel statistical analyses in
terrestrial animal ecology: dirty data and clean questions. pp. 559-572 in Developments in
Numerical Ecology. P. Legendre and L. Legendre, eds. NATO ASI Series. Vol. G14.
Springer, Berlin.
Thomsen, D. E. 1980. Making music--fractally. Science News. 117: 187-190.
Turner, M. G. 1989. Landscape ecology: the effect of pattern on process. Annual Rev.
Ecological Syst. 20: 171-197.
Vedyushkin, M. A. 1993. Fractal properties of forest spatial structure. Vegetatio. 113:
65-70.
Voss, R. F. 1988. Fractals in nature: from characterization to simulation. pp. 21-70 in
The Science of Fractal Images. H.-O. Peitgen and D. Saupe, eds. Springer-Verlag, New
York.
Wiens, J. A., Crist, T. O., Milne, B. 1993. On quantifying insect movements.
Environmental Entomology. 22(4): 709-715.
f:\12000 essays\sciences (985)\Biology\The Cambrian Period.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Cambrian Period
Science
10/26/95
During the Cambrian Period a lot of things happened both geologically and biologically. The Cambrian Period was the first period of the Paleozoic Era; it lasted from 570 million years ago to 500 million years ago, a span of 70 million years. It is very important because it corresponds with the first appearance of abundant fossils, especially trilobites, which characterize this span of time. During the Cambrian Period the Iapetus Ocean appeared, the predecessor of the Atlantic Ocean, which separated the young North American and Eurasian continents. Also, Gondwanaland was in the final stages of development. Gondwanaland was a very large continent made up of what is now South America, Africa, Arabia, Madagascar, India, Australia, and Antarctica. Large shelled organisms first emerged during the Cambrian. Also, Earth's atmosphere contained the same amount of oxygen as it does now, enough to sustain the metabolic rate of a complex organism. Running down the middle of the North American continent was a large inland sea called the Sauk Sea. Inhabiting this sea, and the other seas of the primitive world, were trilobites and brachiopods. Trilobites were three-lobed arthropods; unlike other arthropods, trilobites had primitive, unspecialized appendages. Brachiopods are simple bivalve shellfish that resemble clams. During the Cambrian, corals and crinoids emerged. Crinoids are sea animals of the class Crinoidea, like the sea lily and the feather star. The only plant life of the Cambrian Period was primitive algae.
f:\12000 essays\sciences (985)\Biology\The Canada Goose.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Branta canadensis, better known as the Canada Goose, is a magnificent
bird which can be found all over North America. People from all over North
America look towards the sky when the Canada Geese go honking overhead in
their trademark "V" formation, and because they nest all over Canada and some of
the United States, many people have a chance to witness the birds' migration to the
nesting grounds and back to the wintering grounds. The Canada Goose is
respected by so many of us because of its dignity, courage, and refusal to give
up. Over the years the Canada Goose has picked up many slang names, some of
which are: Canadian Goose, Canadian Honker, Honker, Honker Goose, Big
Honker, Old Honker, Boy Goose, Bernache (French for Barnacle Goose), Big
Mexican Goose, Blackee, Blacknecked Goose, Brant, French Goose, Northern
Goose, Reef Goose, Ringneck, Wavy, and White-cheeked Goose (Wormer).
The Canada Goose has excellent eyesight, which makes it difficult to hunt
because the goose can see the hunter well before the hunter ever sees the goose
(Wormer). This eyesight is also essential for flying; a Canada Goose can see
three quarters of a sphere without moving its head (Wormer). The Canada Goose
also has an acute sense of hearing; its ears are positioned on the sides of its head
(Wormer). They have either no sense of smell or a very poor one, but this does not
impede the goose in any way (Wormer). Although there is a large variation in size,
all subspecies of Canada Geese look the same physically (Wormer). The male and
female Canada Goose look almost exactly the same, except the female can usually
be recognized because it is smaller and less aggressive (Wormer). Colors also vary,
but the color pattern is generally the same for all the subspecies (Godfrey). The
head and neck are dark black with a large white patch on each cheek, the two
patches meeting under the chin; this is the Canada Goose's most easily recognized characteristic
because it is unique to the Canada Goose (Wormer). The upper parts of the body
as well as the wings are greyish brown, the feathers tipped with brownish white
(Godfrey). The tail is black with the upper tail coverts white and the under tail
coverts are white also (Godfrey). The under body is brownish grey with paler
feather tips, the sides being the darkest and the lower belly is white (Godfrey).
The feathers of the breast commonly called down are broad and square tipped
(Godfrey). The bill and legs are dark black, and the iris of the eye is brown with a
black pupil (Wormer). The Canada Goose in its first autumn and winter is
similar to the adults, but its breast feathers are narrower, softer and more rounded; the
outer primaries, on the other hand, are less rounded than those of a mature adult
(Godfrey). The Canada Goose's color pattern works as a great disguise: when lying
flat with the neck outstretched, the Canada Goose looks like a clump of grass and
dirt and is difficult to distinguish as a goose even on snow or ice (Wormer). All
goslings of all subspecies of the Canada Goose look identical (Breen). Goslings
are bright yellow and weigh less than one pound when hatched; after two weeks
they weigh two pounds; after one month their weight is three to four pounds and
their color is a dull grey; after six weeks a color pattern can be seen, along with an
inclination to fly (i.e., running on top of the water flapping their wings); after eight weeks they
look like adults and weigh six to seven pounds, and some are able to fly while others begin
to fly in their ninth week; further growth depends on the subspecies (Breen).
There are eleven subspecies of the Canada Goose but the characteristics that
separate them usually cannot be seen from a distance (Wormer). Branta
Canadensis Minima, also known as the Cackling Canada Goose is the smallest of
all subspecies weighing only two and a half to four pounds (Wormer). It is the
darkest in color and has the highest pitch call (Wormer). Branta Canadensis
Hutchinsii, also known as the Richardson Canada Goose weighs three to seven
pounds and is light in color; its call has a pitch slightly deeper than that of the
Cackling Canada Goose (Wormer). Branta Canadensis taverneri, also known as
Taverner's Canada Goose weighs three and a half to five pounds and is dark in
color (Wormer). Branta Canadensis leucopareia, also known as the Aleutian
Canada Goose also weighs three and a half pounds and is identical to Taverner's
Canada Goose except it has a narrow white ring separating the black neck from the
dark grey-brown body (Wormer). Branta Canadensis Parvipes, also known as the
Lesser Canada Goose weighs six pounds and is light colored (Wormer). Branta
Canadensis Occidentalis, also known as the Dusky Canada Goose, weighs five to
twelve pounds and is dark brown, almost chocolate colored (Wormer). Branta
Canadensis, also known as the Atlantic Canada Goose weighs six to eleven pounds
and is light colored (Wormer). Branta Canadensis Interior, also known as Todd's
Canada Goose also weighs six to eleven pounds and is medium colored (Wormer).
Branta Canadensis Moffiti, also known as the Western Canada Goose weighs
twelve to fifteen pounds and is medium colored (Wormer). Branta Canadensis
Fulva, also known as the Vancouver Canada Goose weighs six to thirteen pounds
and is dark in color, ninety percent of this species do not migrate and live in British
Columbia all year round (Wormer). Branta Canadensis Maxima, also known as
the Giant Canada Goose is said to be the most beautiful of all the subspecies but it
is known that they are the most easily domesticated (Wormer). Giant Canadas
weigh eighteen to twenty pounds and are medium colored. Their diagnostic
feature is a small backward-projecting hook on the white cheek patch
(Wormer). The Canada Goose has ten vocalizations or calls which it uses to
communicate with other Canada Geese: honking, long distance call, greeting,
alarm, short distance call of mate, short distance call to goslings, special greeting
for female, adult distress, gosling distress, and gosling contentment call, as well as a
scream of pain when the bird is bitten (Wormer).
It takes a female goose a day to a day and a half to lay an egg (Wormer).
Each goose lays an average of five to six eggs, sometimes only two, and
sometimes one goose may lay eleven to twelve eggs (Wormer). With sixty percent
of all eggs laid hatching, two Canada Geese produce an average of three goslings
per year (Wormer). Male to female births are split down the middle, 50-50
(Wormer). The eggs are dull white and 2.86 by 1.89 inches to 3.43 by 2.34 inches
(Godfrey) and weigh 3.5 to 7.5 ounces (Breen). The incubation period lasts
twenty five to twenty eight days with an average incubation temperature of 100.4
degrees Fahrenheit to 101.3 degrees Fahrenheit (Wormer). Most of the Canada
Geese killed by hunting are twelve to twenty-three years old (Wormer). Canada
Geese in captivity, however, live an average of twenty to thirty years and
sometimes even over forty (Wormer). The Canada Goose has a very rapid growth
rate; in fact, if an average human baby were to grow as fast as a gosling, it would
weigh one hundred and thirty-eight pounds by the time it was eight weeks old
(Wormer). Goslings begin to develop feathers after their third week, and after their
fifth week the feathers are the color of an adult's (Breen). The adult geese begin
molting when the goslings are two weeks old and are unable to fly for five to six
weeks (Breen). After the molting period the goslings are eight to nine weeks old
and are ready to fly with their parents (Breen).
The Canada Goose has two types of habitat: breeding grounds and
wintering grounds (Ross). Canada Geese migrate north to their breeding grounds
and south to their wintering grounds (Ross). During migration north and south the
geese follow four main flyways: the Atlantic Flyway, Pacific Flyway, Mississippi
Flyway and the Central Flyway (Breen). Within these flyways are migration
corridors (refer to maps 1 and 2); biologists are not sure how the geese follow the same
corridor year after year (Breen). There are three main theories of how a Canada
Goose navigates to the same breeding and wintering grounds each migration
(Breen). One theory is that they rely on landscape cues, another theory is that they
use the position of the sun and stars, and the third theory is that they have iron rich
tissue in their brains, like that of a pigeon and they use the earth's magnetic field to
navigate, but exactly how Canada Geese navigate is unknown (Breen). Some
ducks may fly as fast as eighty miles per hour, but the Canada Goose flies at a much
more graceful speed of forty-two to forty-five miles per hour during migration and
can fly as fast as sixty miles per hour. Canada Geese always take off into the wind
and usually fly at an altitude between one thousand and three thousand feet, but in
bad weather they will fly as low as a couple hundred feet, and when traveling over short
distances they prefer walking because it uses less energy (Breen). When flying
in flocks, Canada Geese fly in their trademark "V" formation. This formation is
created because each goose flies behind and to the side of the goose in front of it,
which allows them to take advantage of the slipstream created; this technique is known to
automobile racers as drafting, and it lets the Canada Goose fly seventy-one percent
further than it could just going by itself (Breen). Another skill Canada Geese use to land in
heavy wind is whiffling; to do this the goose turns its body sideways so that its
wings are perpendicular to the ground, and the bird loses its lift and basically falls out
of the sky; this technique is known to glider pilots as side slipping because you slip
out of the sky (Breen). Most people believe that the migration north and the
migration south are the same but actually they are different (Breen). The migration
north to the breeding grounds is a slower and more relaxed one than that of the one
moving south (Wormer). The migration north sometimes begins in late January for
Canada Geese that are wintering far south, but the majority of movement occurs in
March (Resource Reader). The female chooses the breeding grounds and nesting
site; the breeding grounds are those where she was hatched (Breen). Ideal
breeding grounds have the following characteristics: a browsing area for use prior to the
nesting season, firm foundations, excellent visibility in all directions, isolated,
brooding area of open water, aquatic feeding area, cover of emergent plants for
protection during molting, and a browsing area for brood after they learn to fly
(Wormer). Some areas with these characteristics are: swamps, marshes, meadows,
rivers, lakes, ponds, islands, tundra and coastal plain (Wormer). Preferred places
to build the nest are small islets, muskrat houses, and other birds' abandoned or
sometimes still-occupied nests; in the case where the nest is still occupied, the
female goose will incubate the other bird's eggs as well as her own. Canada Geese,
especially the Giant Canada, will also use man-made nests like washtubs, old tires
and haystacks (Wormer). Nest size varies from four inches deep by ten inches
wide to fifteen inches deep and forty-four inches wide (Wormer). After the female
has chosen the breeding grounds, nesting site and built the nest the male guards
while she incubates the eggs (Wormer). Canada Geese breed all over Canada and
in ideal breeding areas there may be many geese per acre but some territories may
be as much as thirty five acres (Wormer) (See maps 1 and 2 for breeding areas and
densities of geese). The migration south to the wintering grounds is a much faster
paced migration than the one north and done in much larger flocks (Breen). Each
flock usually consists of a group of families (Breen). October and mid-November
is when the greatest numbers of Canada Geese can be seen moving south
(Resource Reader). Popular wintering grounds have a good food supply and suitable
resting grounds near a lake, river or reservoir; the body of water should be large and
have low banks or shorelines for loafing, and the climate should not be too cold
(Wormer). It is often on the wintering grounds that the geese choose their mate
whom they will pair with for life, unless one is killed (Obee). Some Canada Geese
migrate as far as Mexico, others stop further north, some don't migrate at all and
some even migrate across the ocean to Japan (Ross) (Refer to maps 1 and 2 for
wintering areas and densities of geese).
Canada Geese like to feed mid-morning and just before sunset leaving the
mid-day for relaxing. Canada Geese graze cord grass, spike rush, naiad, glasswort,
bullrush, salt grass, seepweed, Bermuda grass, golden dock, lycium, brome grass,
wild barley, rabbit-foot grass, pepper grass, saltbush, cattail, alkali grass, and tansy
mustard (Wormer). They will eat Ladino or Dutch white clover if it is mixed with
other grasses that the goose normally eats; they will not eat alfalfa unless it is
young and tender (Wormer). Canada Geese also feed on all human-grown grains,
but their favorite of all foods is corn (Breen). The most popular foods are corn,
which forty-three percent of geese feed on; small grain, fed on by twenty-four
percent of geese; pasture, fed on by twenty-two percent; and soybeans, which account for
the other nine percent (Breen). Apart from dry-land grazing, Canada Geese also
feed on some aquatic growth (Wormer). Canada Geese are mostly vegetarian but
they do feed on some small insects, insect larvae, mollusks and small crustaceans
(Wormer).
Dogs will chase and kill Canada Geese for fun, and coyotes and wolves will
also kill Canada Geese for food, but most of the time geese are much too fast for
land mammals unless they are hurt or wounded or it is during molting season
(Wormer). Molting season is the most dangerous time of the Canada Goose's life
because it cannot fly; however, even without its flight feathers a Canada Goose
can still outrun a man over land and may even be able to fight off an attacker with
strong blows from its wings, using its beak as a weapon (Breen). Humans are
the largest predator of the Canada Goose (Wormer). However due to strict
management of hunting of Canada Geese the population has not been decreased by
hunting (Wormer). In 1995 the goose hunting season for the North Game Bird District
opened on September first and closed December ninth, with a daily bag limit of nine,
of which not more than six may be dark geese, and of these not more than four
may be whitefronts (Wiens). In the South Game Bird District of Saskatchewan the
season for goose hunting opened on September eleventh and also closed on
December ninth, with a bag limit of eighteen, of which not more than twelve may be
dark geese, and of these not more than six may be whitefronts (Wiens).
Parasites are not responsible for too many adult goose deaths, but they do
cause some (Wormer). Most of the damage parasites do is killing goslings two to
three days old (Wormer). Some internal parasites of Canada Geese include both
worm and blood parasites (Wormer). Externally the Canada Goose also has
various kinds of lice (Wormer).
Some times a female Canada Goose will nest in a nest that has already been
made by an eagle or hawk and may still be occupied (Wormer). If the nest
contains the eggs of the bird that built the nest, the female Canada Goose will
incubate the other bird's eggs as well as her own (Wormer). This benefits both
birds because it leaves the other bird more time to rest and eat and the Canada
Goose gets to use a nest (Wormer). Canada Geese frequently nest on top of
muskrat houses because they are on open water where the eggs are safe from other
birds and foxes, this does not disturb the muskrat in any way (Wormer). The
Canada Goose will also nest in an abandoned nest of a hawk, eagle or other large
bird (Wormer). There have been cases reported of small songbirds seen riding on
the backs of Canada Geese on their migration route, or of hunters who have shot a
goose and found a smaller bird tucked away in its feathers; however, there is no
scientific documentation of this (Breen).
The Canada Goose's largest competition is usually other Canada Geese
(Wormer). Canada Geese do not mind if other waterfowl such as ducks are
nesting nearby but they will fight other Canada Geese for their territory if it is
necessary (Wormer). It is important that Canada Geese do not build nests too close
together, because when the goslings are first hatched they cannot recognize their
parents, nor can their parents recognize them, and the goslings can easily become
mixed up and follow a different set of parents (Wormer).
Humans have had a strong effect on the population of the Canada Goose,
good and bad effects. Agricultural waste water kills many geese each year; another
human-caused killer is geese ingesting spent ammunition along with gravel, so that the
geese die of lead poisoning, a very painful death that is more common than
most people think (Wormer). Urban growth, industry and draining land for
farming contribute to the four hundred thousand acres of wetland lost each year in
the United States, which has had a tremendous effect on some waterfowl; however,
this does not directly affect the Canada Goose's birth rate, because most Canada
Geese breed far enough north that they are isolated from progress (Breen). The
disappearance of wetlands does affect them indirectly, though, because wetlands are used
for resting and feeding along the migration route and are important for safety
(Breen). Nesting sites in the north aren't totally safe from humans, though; the
Exxon oil spill has damaged Canada Goose habitat (Breen). Plans to dam the
Yukon River could also ruin the nesting grounds for over two hundred thousand
Canada Geese (Breen).
The number of people who are trying to protect wetlands has become quite
large (Breen). The largest and best-known group is probably Ducks Unlimited
Canada, which was founded in 1937 and has over one hundred thousand members,
most of whom are hunters (Breen). In 1973 it expanded into the United States,
where there are now over five hundred and fifty thousand members, also mostly
hunters, and one year later, in 1974, Ducks Unlimited de Mexico joined the
other two groups in wetland protection (Breen). Since their founding, the Ducks
Unlimited organizations have raised nearly one half billion dollars, ninety-three percent
of which has been invested in projects to aid waterfowl such as the Canada Goose (Breen).
As long as the Canada Goose's private northern breeding grounds are not
disturbed, this magnificent bird should be with us for a long time. For most people the
Canada Goose symbolizes autumn, when we see the geese gracefully soaring through
the air to their warm winter home, and it also symbolizes springtime when they
come back from their winter home. The Canada Goose is a bird with dignity and
pride and is a bird that is loved by all who see and hear it.
Western Canada, moffiti 68 000
Dusky Canada, occidentalis 16 000
Vancouver Canada, fulva 14 000
Todd's Canada, interior/Atlantic Canada, Canadensis 1 000 000
Giant Canada, Maxima 55 000
Cackling Canada, Minima 172 000
Aleutian Canada, leucopareia remnant
Taverner's Canada, Taverneri 57 000
Lesser Canada, Parvipes/Richardson's Canada, Hutchinsii 300 000
Total estimated population of Canada Geese 1 682 000
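As a quick arithmetic check, the subspecies figures listed above can be re-added to confirm the stated total (treating the Aleutian "remnant" as zero); the short sketch below does just that, using the figures exactly as given.

# Re-add the subspecies estimates listed above to confirm the stated total.
populations = {
    "moffiti (Western)": 68_000,
    "occidentalis (Dusky)": 16_000,
    "fulva (Vancouver)": 14_000,
    "interior/Canadensis (Todd's/Atlantic)": 1_000_000,
    "Maxima (Giant)": 55_000,
    "Minima (Cackling)": 172_000,
    "leucopareia (Aleutian)": 0,          # listed only as a remnant population
    "Taverneri (Taverner's)": 57_000,
    "Parvipes/Hutchinsii (Lesser/Richardson's)": 300_000,
}
print(sum(populations.values()))  # 1 682 000, matching the stated total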
f:\12000 essays\sciences (985)\Biology\The Dog.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE DOG
Domestic dog, carnivorous mammal, generally considered the first domesticated animal. The domesticated dog has coexisted with human beings as a working partner and household pet in all eras and cultures since the days of the cave dwellers. It is generally believed that the direct ancestor of the domestic dog is the wolf, originally found throughout Europe, Asia, and North America. Remains of a dog, estimated to be 10,500 years old, have been found in Idaho.
TAXONOMY
Kingdom: Animalia
Phylum: Chordata
Class: Mammalia
Order: Carnivora
Family: Canidae
Genus: Canis
Species: Canis familiaris
ECOLOGY & HABITAT
Little is known about wild dogs of the past except that they were carnivores: hunters and scavengers. This means that they are secondary consumers in food webs. Even though they are carnivores, they sometimes accept eating green plants. The ecological role of dogs today is that they help humans in many fields of life. Since the times of the cave dwellers, dogs have been domesticated by humans and have helped them in hunting, herding, protection, etc. The dog has been very important as a work animal and as a psychological support for humans. The habitat of the dog is where its owner lives. Different dogs have different adaptations to their ancestral habitats, but nowadays this is not very applicable.
ANATOMY
SKELETON
The skeleton of the dog is the articulated structure, moved by the muscles, that supports the dog's body and protects some organs and the nervous system. It also functions as a mineral and blood reserve for the body. The skeleton of a dog is made up of approximately 321 bones: 134 form the axial skeleton (skull, vertebrae, ribs, etc.), and 186 form the appendicular skeleton (appendages). An extra bone has to be added for male dogs, the penile bone. The dog is a digitigrade animal (it walks on its toes). It rests on its third phalanges, which are protected by pads. The dog's toes are arranged at an angle, which makes resting easier after running or other activities. The dentition of the dog is composed of 42 teeth, which include canines, molars, incisors, etc.
JOINTS
Joints permit the movement of the bones. There are three types of joints in a dog: fixed joints, movable joints, and semi-movable joints. Fixed joints, such as the ones in the skull, don't permit any movement but keep the bones together. The semi-movable joints are those that permit a little movement. They are represented in the spinal column. The movable joints are those present in the rest of the bones. Within this group of joints there are various types: the hinge, the ball-and-socket, the pivot, and the gliding joints. The most movable joints are present in the appendages. Joints are held together by a fibrous wrapping, the joint capsule, which is reinforced by ligaments. Muscles and tendons also help keep the bones together.
MUSCLES
There are three types of muscle in a dog: skeletal muscle, cardiac muscle, and smooth muscle. The skeletal muscles work in pairs, a flexor and an extensor. They permit the movement of the skeleton and also move the skin of the dog (the cutaneous muscle is very developed in dogs). The cardiac muscle is the muscle found exclusively in the heart. The smooth muscle is the one present in the walls of the digestive organs, arteries, veins, and some other internal organs. (The diaphragm, which separates the two cavities of the body, the thorax and the abdomen, is actually a sheet of skeletal muscle.)
DIGESTIVE SYSTEM
The digestive system of a dog is very similar to a human one. It ensures the ingestion of food and its transformation (by mechanical and chemical means) into simple substances which the dog's body can absorb and assimilate. It all starts in the mouth, where food is broken down mechanically and also a little chemically (teeth and saliva). The food then passes through the esophagus to the stomach, then to the small intestine (only about 3 meters long, but with very strong digestion), and on to the large intestine, where the feces are formed. Excretion then occurs through the rectum and finally the anus. A series of accessory glands produce substances which are used in digestion and perform various important jobs. The most important one is the liver (which is an organ).
RESPIRATORY SYSTEM
In the respiratory system, air enters through the nostrils in the snout. In the nasal cavity air is purified, moistened and warmed. It then passes through the pharynx and larynx (where the vocal cords used for barking are located), and then into the trachea. Air is then channeled through the two bronchi and into the lungs by the bronchioles, which are subdivisions of the bronchi. The last structures are the alveoli, where oxygen diffuses into the blood.
URINARY SYSTEM
The urinary system is composed of the kidneys and the urinary structures. The blood is purified of toxins and excess water by the kidneys. The toxic substances are then diluted in urine, which passes through the ureters to the urinary bladder (the deposit for urine). It is then released through the urethra to the exterior of the body.
GENITAL SYSTEM
It is composed of the genital glands (gonads) that produce the reproductive cells, the genital conducts that ensure the transport of the sex cells, and the copulation organs that permit the encounter of the gametes.
FEMALE
The gonads of the female dog are the ovaries, which are located under the kidneys. The ovaries produce the eggs. The ovaries become active after the dog is 4 to 6 months old, and the cycle occurs roughly every 6 months (this is normally called being "in heat"). After the egg is produced it goes to the oviduct, where it waits for a sperm cell to fertilize it. If a zygote is formed, it stays in the uterus for two weeks until it attaches to the wall. The vagina, which is quite long in dogs, is used as both the birth and copulation canal.
MALE
The gonads of the male are the testicles, which are held inside a sac, the scrotum. The sperm cells are produced here. The prostate is the gland that produces the liquid in which sperm cells are carried. Sperm cells exit through the urethra, which is surrounded by the penile bone, just as urine does. The penis of the dog has tissue around the urethra which is capable of dilating when extra blood is pumped in (for copulation).
CIRCULATORY SYSTEM
It includes a four-chambered heart, arteries, veins, and lymphatic glands and vessels. The circulation of blood provides the dog's body with oxygen and removes carbon dioxide from it. Oxygenated blood and deoxygenated blood circulate separately.
NERVOUS SYSTEM
The nervous system is composed of the central nervous system, the peripheral nervous system, and the autonomic nervous system. The central nervous system is composed of the brain and the spinal cord. The peripheral nervous system is made up of the nerve cells, and the autonomic one is made up of the sympathetic and the parasympathetic systems. The autonomic nervous system is connected to the spinal cord, and the peripheral nervous system to the spinal cord or the brain.
SENSORY ORGANS
The eye of a dog consists of the cornea, the iris, the lens, the retina, the choroid coat, the sclera, and the optic nerve which sends the image absorbed by the retina to the brain.
The ear of the dog is long and curved. It is composed of the same structures as that of a human: the tympanic membrane, the series of tiny bones such as the hammer, the semi circular canals, etc.
The sense of touch of the dog is well developed, especially in the legs and the tongue.
PHYSIOLOGY
CIRCULATION OF BLOOD
Deoxygenated blood from the right atrium goes to the right ventricle. It then goes through the pulmonary arteries to the capillaries of the alveoli, where oxygen diffuses into the blood. It then returns to the heart, to the left atrium, by the pulmonary veins. From the left ventricle, the blood, now rich in oxygen, goes to the aorta, the principal artery, which divides again and again into arterioles to reach every cell of the dog's body. Blood pressure is maintained in the arteries by the smooth muscle surrounding them. Through the capillaries, oxygen and carbon dioxide are exchanged. Blood moves through the veins because of the movement of the skeletal muscles. In the veins, blood is prevented from flowing backward by valves that open and close. Through the veins, to the vena cava, and then to the right atrium, the blood reaches the heart again and the cycle is repeated. The lymphatic system protects the organism from dangerous microorganisms (it produces and contains white blood cells which produce antibodies against intruders) and drains the intercellular spaces. The heart rate of dogs varies depending on the size of the dog and its training, but it is faster than that of a human.
RESPIRATORY SYSTEM
Air reaches the lungs by the same mechanism as in humans. The diaphragm contracts, enlarging the chest cavity, and the resulting drop in pressure in the lungs compared to the outside forces air in, so the lungs inflate. When the diaphragm relaxes, the lungs shrink again and the air is expelled. As with the heart rate, the respiration rate is faster than in humans.
DIGESTION
INGESTION
The dog needs to drink huge amounts of water for its body's needs, since a lot of water is lost in excretion, urine, and the evaporation of water in the respiratory passages. The dog drinks by moving its tongue back and forth, curled in the form of a spoon; in this way it scoops the water into its mouth. The dog holds big chunks of food still with its front paws, which serve as "hands". To eat solid food, the dog pushes its head into the food source, taking the food into its mouth. The teeth perform an indispensable job in mechanical digestion, even though it is not very effective in the dog since the mandible does not move laterally. The secretion of saliva can be produced by taste or smell stimuli or when the dog perceives that food is present.
GASTRIC DIGESTION
The dog's stomach only contracts when food is present, to mix the food intensively. The daily production of gastric juice is between 2 and 3 liters. This mixture of hydrochloric acid and enzymes breaks down proteins and separates the connective tissue of meat. The production of gastric juice begins when food is ingested or when it is eating time in the dog's schedule.
INTESTINAL DIGESTION
The chyme, the product from the stomach, passes to the small intestine. The pancreatic juice in the intestine contains enzymes that attack fats, proteins, and starch. The volume of bile secreted by the liver is about 25 milliliters per kilogram of the dog's weight each day. Bile helps to set favorable conditions for the pancreatic juice to work, and it eliminates different wastes. The intestinal juices finish the breaking down of the food. The products are then absorbed in the small intestine. Some go to the blood (water, minerals, sugars, amino acids, and some fats) and some go to the lymph (fats). In puppies, digestion of maternal milk is restricted to the stomach. A special enzyme, which disappears after lactation ends, is present for the breakdown of milk.
EXCRETION
The wastes pass through the large intestine where the feces are produced. Then these feces go through the rectum and then to the anus where they finally leave the body.
REPRODUCTION
The male reproductive system, as well as the female one, is controlled by hormones coming from the hypophysis (pituitary gland) and nerve signals coming from the hypothalamus.
THE MALE DOG
The reproductive organs of the male dog function year-round. One milliliter of semen contains 100,000 to 200,000 sperm cells, and the volume and concentration of these decrease if the dog copulates too many times in a row. Testosterone, the sex hormone of the male dog, is vital for the production of sperm cells, the definition of secondary sex characteristics (bigger size and weight, and a lower-pitched bark), and sexual behavior.
THE FEMALE DOG
The reproductive system of the female dog is much more complex than that of the male dog. The female dog comes into estrus roughly every six months. There are four periods in the sexual cycle: proestrus, estrus, metestrus, and anestrus. Anestrus is the period in which the dog is not ready for reproduction. In proestrus, which lasts for about 9 days, the follicles grow in the ovaries. The dog's vulva swells and discharges a mucus-like substance, later blood. Estrus comes next and lasts for 4 to 8 days; there is no more blood coming from the vagina, but the dog is very nervous. Ovulation starts 3 to 5 days after estrus begins and lasts 12 to 72 hours. Metestrus occurs if the dog is not pregnant and lasts for about 2 months; the walls of the uterus thicken because of the progesterone secreted in the ovary. Anestrus lasts about three and a half months and is the resting phase of the female reproductive system.
MATING AND PREGNANCY
Copulation permits the contact of eggs and sperm cells to form zygotes. The penis of the dog becomes erect through the filling of erectile tissue with blood and with the support of the penile bone. It is then introduced into the relaxed and lubricated vagina of the female. The penis then expands and the muscles of the female's vagina contract, trapping the penis in the vagina for a long time. During this tie the male produces three ejaculations, the middle one containing more sperm than the other two. The female dog then releases the male dog, and there is a possibility of offspring. If the female is fertilized, it takes about 2 months until the parturition of the puppies. In the parturition, the female dog expels each puppy after releasing the amniotic liquid and the fetal sacs. The female cleans the puppy with her tongue to familiarize it with its mother and to stimulate its physiological functions, and then she licks her vagina carefully. Afterward, the placenta is released and is eaten by the mother. The period between puppies is between half an hour and one hour. The puppies' suckling at the mother's breast is essential for the further production of milk. The puppies' bumping of their heads against their mother's breasts is important for the production of pituitary hormones in the dog.
THERMO-REGULATION
The dog is an endothermic (warm-blooded) animal and its body temperature is about 38.5 °C. The temperature regulation of young puppies is still imperfect, so they need the heat of the mother or of their siblings. The puppy regulates its temperature, but it is higher than that of an adult dog. When temperatures are low the dog uses two different strategies: lowering the loss of heat or producing more heat. The dog can lower the loss of heat by constricting the blood vessels of peripheral regions (extremities, ears, skin). This lowers the amount of heat exposed to the environment. The fur also insulates the body from cold. Little muscles at the bases of the hairs make the hairs stand stiffer, increasing the layer of warm air between the fur and the skin. The dog also curls its body or huddles near other dogs to prevent heat loss. If this is not enough, the dog uses more energy to keep its body temperature stable: the metabolism works more intensely (especially in the liver) and fat and other energy reserves are used. The contrary situation, too much heat, is also possible. The dog uses the blood vessels to release extra heat, dilating them in the peripheral regions to release the extra energy to the surroundings. When the heat is too much, the dog seeks cool and humid places to rest and release its heat. In other mammals the evaporation of sweat coming from sweat glands alleviates the heat, but the sweat glands in dogs are few and located in the foot pads, which makes them quite inefficient. As compensation for the scarce sweat released, the dog has a cooling system based in the mouth: the water of the mouth cavity, bronchi, and trachea evaporates with an intense respiratory rate (panting). This cools the body of the dog, but the water losses are huge.
SENSE ORGANS
Smell is the most developed sense in the dog. A dog is capable of detecting a single drop of acetic acid that has been diluted in a thousand liters of water. The olfactory capability of a dog depends on its breed. As a comparison with human smell, an average dog has about 147 million smelling units; the human has only 500,000. The nose and the receptors function like those of other mammals such as humans.
The ear is also quite developed in dogs. Dogs can hear very high-pitched sounds, even ultrasonic ones. The auditory range of dogs is between 20 Hz and 60,000 Hz; the human's is from 16 Hz to 20,000 Hz. The functioning of the ear is the same as in a human.
The vision of the dog is not comparable to that of a human. Dogs can perceive changes of light, but their ability to see clear forms is limited. Concerning color vision, not much study has been done. Dogs have a special reflective coating in the eyeball that permits them to see more light when it is darker.
LOCOMOTION
(Look at the diagram to see how dogs move)
Locomotion in dogs is the same as in most complex animals. Flexor and extensor muscles work in pairs to move the structures.
BEHAVIOR
TERRITORIALITY
The male dog marks its "hunting" territory with odor signals produced by the anal glands. The smell also attracts females into the territory. When dogs, male and female, urinate at a certain place, that means that it is a common territory. The dog instinctively defends its territory when an intruder that is not a companion appears. This defensive instinct gives it a guardian quality.
THE CANINE HIERARCHY
Dogs that live in groups have hierarchies, inherited from their wild ancestors. There are two types of dogs: dominant and submissive. This hierarchy does not just divide dogs into two categories; ranks within canine society also exist. Dogs do not have to live in groups to have these ranks; it is a natural instinct that exists even in dogs living with owners in cities. In the relationship with the human owner, the owner is always dominant, except in cases where the dog has mental disorders.
SEXUAL BEHAVIOR
Dogs are polygamous; they don't have a fixed mate. When a female dog is in heat, she is nervous and searches desperately for a male. She releases pheromones in her urine that a male can smell from kilometers away. When they meet, the female exposes her vulva to the male. The sexual act follows a series of ritual games and a thorough familiarization by smell. Then comes the copulation. If various dogs follow a female, disputes settle the hierarchy and the most dominant one gets to reproduce.
SEXUAL PROBLEMS OF DOGS
Female and male dogs may feel frustrated when they can't find a mating partner. Dogs can get depressed and may tend to escape from their home. Male dogs relieve themselves by masturbating with an object. Female dogs may get very nervous and develop a hysterical (false) pregnancy, in which the dog produces milk as an expression of its need to have puppies.
MATERNAL INSTINCT
Days before giving birth, a dog looks for a safe, warm, and comfortable place for her puppies to be born, a "nest". The mother looks after her puppies, but those that are deformed are killed by the mother herself (natural selection). A mother cuts the umbilical cord and licks her newborn puppies to stimulate their physiological functions. If a mother does not do this, then she lacks maternal instinct. A mother is aggressive if she sees that something (an intruder) is menacing her puppies. If the person is familiar, then she is pleased to have them near her puppies. Very few male dogs care for puppies, but some do.
THE DOG'S LANGUAGE
OLFACTORY COMMUNICATION
The dog uses its urine, which contains glandular secretions as well as urine itself, to transmit messages to other dogs and to mark its territory. To identify one another, dogs smell each other thoroughly, especially the snout, the genitals, and the anus. Smell also plays an important role in sexual reproduction, as explained above, and a dog can identify prey by its smell.
THE VOICE
Dogs bark to warn others of danger. A howl in the middle of the night is a gathering call. When a female is in heat, a male dog can howl for hours on end in desperation, and the female responds with the same howl. Two dogs that meet and bark at each other are simply trying to impose themselves; it gives them more confidence. The nature of dog sounds can be described as follows: a hoarse, clipped bark expresses a threat (a challenge to another animal, or a warning to an intruder to leave the territory); a happy bark greets the owner or precedes a walk; a special bark indicates prey; and so on. Then there are the more obvious sounds: cries, laments, howls, screams, and grunts.
THE EYES
A dog's gaze can tell a great deal. A fixed stare signals fear or menace. A dog's gaze directed at sheep or cattle can control them and keep them in order.
ATTITUDES AND MIMIC
A dog shows happiness by constantly wagging its tail, while a tail between the legs means fear. A dog in an aggressive posture shapes and moves its body to look bigger: it stretches out, its hair stands on end, and so on. To show submission, a dog often lies on the floor and, with its tail between its legs, exposes its genitals. When a dog wants to play with another, it lies in a sphinx position, "dances" around the other dog, gently bites the other's tail, and so on.
f:\12000 essays\sciences (985)\Biology\The Double Helix.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Biology 100, section 1, Summer Term 1996
The Double Helix
A review of Watson, James D. The Double Helix. New York: Atheneum, 1968.
James Watson's account of the events that led to the discovery of the structure of deoxyribonucleic acid (DNA) is a very witty narrative, and it sheds light on the nature of scientists. Watson describes the many key events that led to the eventual discovery of the structure of DNA in a scientific manner, while also including many experiences from his life at the time that had no great impact on the discovery itself.
The Double Helix begins with a brief description of some of the individuals that played a significant role in the discovery of DNA structure. Francis Crick is the one individual that may have influenced Watson the most in the discovery. Crick seemed to be a loud and outspoken man. He was never afraid to express his opinions or suggestions to others. Watson appreciated Crick for this outspoken nature, while others could not bear him because of it. Maurice Wilkins was a much calmer and quieter man who worked at King's College in London. Wilkins was the person who first excited Watson about DNA research. Wilkins had an assistant, Rosalind Franklin (also known as Rosy). Initially, Wilkins thought that Rosy was supposed to be his assistant in researching the structure of DNA because of her expertise in crystallography; however, Rosy did not want to be thought of as anybody's assistant and let her feelings be known to others. Throughout the book there is a drama between Wilkins and Rosy, a struggle for power between the two.
Watson's "adventure" begins when he receives a grant to leave the United States and go to Copenhagen to do his postdoctoral work with a biochemist named Herman Kalckar. Watson found that studying biochemistry was not as exciting as he had hoped it would be; fortunately, he met up with Ole Maaloe, another scientist doing research on phages (Watson had studied phages intensively while in graduate school). He soon found himself helping Ole with his experiments more than he was helping Herman with his. At first, Watson felt he was deceiving the board of trustees by not studying the material the board had sent him to study. However, he felt justified because Herman was becoming less and less interested in teaching him, owing to Herman's personal affairs (Herman and his wife had decided to divorce). With Herman's lack of interest in teaching biochemistry, Watson found himself spending the majority of the day working with Ole on his experiments.
While Watson was in Copenhagen, Herman suggested that he go on a spring trip to the Zoological Station at Naples. It was in Naples that Watson first met Wilkins. It was also in Naples that Watson first became excited about X-ray work on DNA. The spark that ignited Watson's fire was a small scientific meeting on the structures of the large molecules found in living cells. Watson had been interested in DNA ever since he was a senior in college. Now that he had learned of some new research on how to study DNA, he had the craving to discover the structure of the mysterious molecule that he believed to be the "stuff of life". Watson never had the chance to discuss DNA with Wilkins that spring; however, that did not kill Watson's desire to learn about its structure.
Watson's fire was further kindled by Linus Pauling, an incredibly intelligent scientist out of Cal Tech. Pauling had partly solved the structure of proteins. He discovered that proteins have an alpha-helical shape. Watson thought this was an incredible discovery! He was excited to research and learn about the DNA structure.
Watson was worried about where he could learn more about DNA and how to solve X-ray diffraction pictures so the structure of DNA could be understood. He knew he could not do this at Cal Tech with Pauling, because Pauling was too great a man to waste time on Watson, and Wilkins continually put Watson off. Soon Watson became aware that Cambridge was the place where he could get the experience to solve the DNA problem. It was about this time that Watson's grant was about to expire. He decided to write Washington and request that his grant be renewed so he could continue his studies in Cambridge rather than Copenhagen. Thinking that Washington would not deny his request, Watson packed up and went to Cambridge. He had worked several months in Cambridge when he finally received a return letter from Washington. The letter stated that his grant would not be continued. Nevertheless, Watson decided to remain in Cambridge and continue his stimulating intellectual experience.
It was in Cambridge that Watson first met Francis Crick. Here, Watson discovered the fun of talking to Crick. In addition, Watson was elated that he found someone in the lab that thought DNA was more important than proteins. Soon Watson and Crick found themselves having a daily lunch break together discussing many scientific topics, in particular, the unique aspects of DNA.
As reports came to Watson and Crick about Pauling's efforts to discover the structure of DNA, they began to feel pressure to discover the structure before Pauling did. However, Watson and Crick were at a disadvantage because they did not have access to some valuable research done by Wilkins and Rosy. This did not discourage them. With the limited information they had, they began to puzzle over the possible structures of DNA. So far all the evidence they had (and also their intuition) indicated that DNA was a helical structure, like proteins, with either one, two, or three strands. Pauling had been able to discover the alpha-helix by fiddling with models; by trial and error he came up with the correct structure. Watson and Crick decided to try model building as a method of solving the structure of DNA.
Over a period of weeks to months, Watson and Crick fumbled around with DNA models. All did not go smoothly. One of the difficulties was that Watson and Crick did not have all the materials available to build a model containing the inorganic ions found in DNA. With some manipulation of on-hand material they were able to create a model to their liking.
Watson and Crick had constructed a beautiful three-chain helix representing DNA. The next obvious step would be to check the parameters against Rosy's quantitative measurements. To their knowledge the model would certainly fit the general locations of the X-ray reflections. Upon completion, Watson and Crick were ecstatic about their accomplishment. To be the first to discover the structure of such an important molecule as DNA would make a major impact on the world.
A phone call was made to Wilkins asking that he come to Cambridge to view the model and give his opinion on its validity. The next day both Wilkins and Rosy came to Cambridge to view the model. Watson and Crick had their presentations prepared. They planned to dazzle their audience as they explained how they had solved the complexity of the DNA structure. As their discussion went forth, Wilkins was skeptical of many aspects of the model. Rosy was completely dissatisfied with it, especially with the fact that the model had Mg++ ions holding together the phosphate groups of the three-chain structure. She noted that the Mg++ ions would be surrounded by tight shells of water molecules, which contradicted the results on the water content of DNA molecules she had gained from her experiments.
The rest of the day was spent trying to salvage what little argument Watson and Crick had. Lunch brought no success, nor did they prevail when they returned to the lab. Soon the day was over and Wilkins and Rosy returned to London. When Watson and Crick's supervisors heard of the failure with the model, they ruled that no further research would be done at Cambridge on DNA. For over a year Watson and Crick let DNA alone, pondering it only while not working on other projects.
That year Watson worked on researching the tobacco mosaic virus (TMV). A vital component of TMV is its nucleic acid, so it was the perfect front to mask his continued interest in DNA. Over time and with hard work, Watson was able to show that some parts of TMV were helical in shape and thus decided to return to work on the structure of DNA.
With more knowledge and expertise the research went forward with passion. Watson had seen an X-ray picture taken by Rosy that, to him, gave sure evidence that DNA was helical. Wilkins' data only furthered his conviction. Watson and Crick were back at it again with a new fervor. They knew that there was a sugar-phosphate backbone to the structure and that it was held together somehow by the bases (adenine, thymine, cytosine, and guanine). Watson had a hunch that the shape was going to be a double helix. At first Watson thought the two backbones were held together by a like-with-like arrangement (adenine-adenine, thymine-thymine, etc.), the bases held to each other with hydrogen bonds. After about a day Watson realized that a like-with-like structure just was not possible.
Watson knew that the amount of adenine always equaled that of thymine and the amount of cytosine equaled that of guanine. With the help of Crick, they tried to construct a model by pairing adenine with thymine and guanine with cytosine. This fell together very nicely. After obtaining several opinions on the validity of their work, they placed a call to Wilkins. Wilkins and Rosy came down and, to the surprise of Watson and Crick, were immediately pleased with the model. After comparing results and measuring the model, they decided that Wilkins and Rosy would publish a paper at the same time Watson and Crick published theirs, announcing the discovery.
This was indeed an incredible discovery for the world, especially for the world of biology. The structure of the "stuff of life" was finally discovered. Watson and Crick went on to win the Nobel Prize for their work. Pauling, who had worked so hard to discover the structure, was not disgruntled by the fact that someone had beaten him to the discovery, but rather pleased that the problem was finally solved. Everyone was enthusiastic about the new discovery.
This was excellent reading. Watson not only told the story of how the structure of DNA was discovered but also let us in on parts of his personal life. He would speak of how he tried to have dinners at a school that was teaching young, pretty French girls English. He also spoke much of his relationship with Crick and Crick's wife, Odile. He made the book come alive and made science seem more fun, breaking the stereotype of the scientist. I especially enjoyed how he described Rosy and her firm, dedicated feminist attitude. The reader could feel sympathy for the tribulations Wilkins had to go through working with her.
The book was an excellent account of the discovery of the structure of DNA. Throughout the text, Watson mostly alluded to the greatness of others rather than to his own. Even though he played perhaps the most significant part in the discovery of DNA's structure, he gave credit to those who had inspired him.
f:\12000 essays\sciences (985)\Biology\The Downy Woodpecker.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Downy Woodpecker
Habitat
Downies make their home in the United States and southern Canada. They have been recorded at elevations of up to 9,000 feet. Downies are not deep-forest birds, preferring deciduous trees. Open woodlands, river groves, orchards, swamps, farmland, and suburban backyards are all favorite haunts of the downy. Downies will also nest in city parks. About the only place you won't find them is deserts. The most attractive sites near human dwellings are woodlands broken up by logged patches near water. Downies also enjoy open shrubbery with groves of young deciduous trees.
Call(s)
Like the hairy woodpecker, the downy beats a tattoo on a dry resonant tree branch. This drumming is the downy's song, though they do make some vocal noises. They have several single-syllable call notes which include tchick, an aggressive social note; a tick and a tkhirrr, which are alarm notes. There is also a location call, known as a "whinny", made up of a dozen or more tchicks all strung together.
Scientific Names
The downy woodpecker's scientific name is Picoides pubescens. There are also six regional forms of the downy, each with its own scientific name, from different regions of the United States and southern Canada, which I have listed below:
southern downy / Dryobates pubescens
Gairdner's woodpecker / Gairdneri pubescens
Batchelder's woodpecker / Leucurus pubescens
northern downy / Medianus pubescens
Nelson's downy / Nelsoni pubescens
willow woodpecker / Turati pubescens
The downy woodpecker is sometimes referred to as "little downy."
Behavior Towards Humans
The downy is unquestionably the friendliest woodpecker. A bird lover in Wisconsin described downies at their feeding station: "The downies will back down to the suet container on the basswood tree while I sit only a few feet away on the patio. Even when I walk right up to them, most downies will not fly away, but will simply scoot around the backside of the tree trunk and peek around to see what I am doing. If I press them, they will hop up the backside of the tree trunk and then fly to a higher branch."
Food
Besides being friendly, downy woodpeckers are our good friends for another reason. Most of the insects they eat are considered destructive to man's orchards and forest products. About 75% of their diet is made up of animal matter gleaned from bark and crevices where insect larvae and eggs lie hidden. While standing on that unique tripod of two legs and a tail, downies hitch up and down tree trunks in search of a whole laundry list of insect pests. With their special chisel-like bills and horny, sticky tongues, downies are adept at plucking out great numbers of beetle grubs, insect cocoons, or batches of insect eggs. They also eat spiders, snails, ants, beetles, weevils, and caterpillars, along with other local insects. The remaining 25% of a downy's diet is plant matter: the berries of poison ivy, mountain ash, Virginia creeper, serviceberry, tupelo, and dogwood. Downies also eat the seeds of oaks, apples, hornbeams, sumac, hickory, and beech. Acorns, beechnuts, and walnuts are particular favorites.
Dr. John Confer and his students at Ithaca College have studied the downy woodpecker's use of goldenrod galls as a source of food. They discovered the downy's little jackhammer is just the tool needed to drill a hole in the side of the one to two inch goldenrod gall and extract the tiny grub contained inside. In fact, Confer's studies show that the goldenrod grubs form an important part of the woodpecker's winter diet.
Plumage
Tap, tap, tap! Tap, tap, tap! It is interesting how the downy woodpecker props itself with those stiff tail feathers while clinging to the bark. The tail supports the bird's weight. This unique tripod allows the downy to hop up the tree trunk with ease, but it must back down in the same position, a more awkward motion.
The downy woodpecker gets its name because of its soft, fine feathers. The downy, smallest of the woodpecker clan, is not even as big as a robin. It is only about the size of a house sparrow, at six inches long. The downy can be separated from all other woodpeckers, except the hairy, by the broad white stripe down its back. The downy and the hairy are often confused since their markings are quite similar. Both range across the same territory except the lower Southwest, where the downy is less often seen. There are really only two ways to distinguish the downy from the hairy. (1) Look at the bill of the two birds; the downy will have a much shorter, stubbier bill. (2) The downy is about 2/3 the size of the hairy. That is another good clue to look for.
The downy is most likely to be the one that you see at the feeder, since the hairy keeps more to the forest than the downy. However, both will feed at feeders in the winter months, on suet especially.
The tail, wings, and back of both the downy and hairy woodpeckers have a black hue intermingled with white spots. A black cap adorns each, below which there is a white stripe. A small scarlet patch appears on the lower back of the head. Another black stripe is below this. The downies have barred outer tail feathers not found on the hairies.
Courtship
Regardless of the elevation, downy woodpeckers begin thinking about nesting earlier than most birds and several months before they actually nest. After spending the winter alone, the downies seem to come to life in early February, moving more quickly and taking more interest in their own species. Their normal tap, tap, tap becomes a quite different unbroken trrrrrrrrrrrrrrrrrrrrrrrr, lasting several seconds. The tapping is no longer simply an effort to find food but a means of communicating to other downies that this is "my" territory. It is also the first attempt to attract a mate. Both sexes drum. So early does this drumming begin that it is not unusual to hear it on sub-zero mornings.
Some ornithologists believe that downy woodpeckers retain the same mate as long as they live. In this case, all the pair has to do in the spring is to renew their pair bonds. This fidelity, however, seems to result from an attachment to the nesting site rather than from an attachment between the birds.
After the drumming has united the pair, the actual courtship begins with a curious dance or "weaving" action by both sexes. With their necks stretched out and bills pointed in line with head and body, the birds sway from side to side, balancing on the tips of their tails. Their entire bodies are elongated. There is also a lot of flitting and chasing from one branch to another, and more waving and weaving of head and body, sometimes with wing and tail feathers spread. Considerable chattering accompanies these gyrations.
Nesting
Sometime during the courting period the actual selection of a nesting cavity occurs. The female is usually, though not always, the dominant bird and selects the nesting site. Once the site is selected, both birds dig the hole. Downies will characteristically place the nesting cavity 3-50 feet above the ground on the underside of an exposed dead limb. The pair will alternate digging because only one bird at a time can fit into the cavity. As the hole is cut deeper, the working bird may disappear into the hole and remain out of sight for 15-20 minutes, appearing only long enough to throw out chips. (Unlike chickadees, which carry their chips away from the nesting site, downies are not concerned about predators finding chips at the base of the nesting tree.) Then the pair will change shifts for 15 or 20 minutes while the other bird digs. Though the female does most of the work, this may vary with individual pairs. Regardless, the cavity is finished in about a week.
When the cavity is completed, sometime in mid-May, it is shaped much like a gourd. The entrance is 1 1/4 inches in diameter. It is dug straight in about four inches, then curves down 8-10 more inches and widens to about three inches in diameter. At the very bottom, the cavity narrows to about two inches, where a few chips are left to serve as a nest. It is believed that woodpeckers have been nesting in cavities so long in evolutionary time that nesting material is no longer used. Chickadees and bluebirds have been nesting in cavities for a shorter period of time, and still build a nest at the bottom of the cavity as they did when they built their nests in the open.
The eggs, too, reflect this. Species that have been using cavities for many thousands of years, like the woodpeckers, lay pure white eggs. No protective coloration is needed when they are hidden in a cavity. Bluebirds and chickadees, on the other hand, still lay eggs with some protective coloration on them: specks in the case of chickadees and pale blue in bluebirds' eggs.
Downy woodpeckers lay four to five pure white eggs, which are incubated by both parents through the 12 days required for hatching. They take turns during the daylight hours; the male incubates at night.
The downy, like other woodpeckers, will seldom use the same nesting cavity year after year. Instead, the site is taken over the next year by chickadees, titmice, tree swallows, wrens, and sometimes bluebirds. This forces the downy couple to drill another nesting cavity each year.
Young Downies
When the young hatch, they are naked, blind, helpless, red-colored, and quite unattractive. During the first few critical days after hatching, the adults take turns in the cavity, one brooding the young while the other bird is gathering food. The male usually broods at night.
Downies swallow and regurgitate their food to the young for only four to five days. After that they carry insects and other small prey, primarily spiders, ants, and moths, to the youngsters in their bills. The older the chicks get, the more food the adults must provide. It isn't long before the young can be heard chippering in the cavity and both parents are feeding from daylight until dark. At times they are feeding as often as once a minute!
A few days after hatching, feathers start to grow on the young, and by the time they are 14 days old, their tail feathers are long enough to support their weight. It is then that they make their first appearance at the cavity entrance. For the next week, the youngsters spend a great deal of their time taking turns at the cavity entrance, heads out, chippering loudly, awaiting the next meal. At 21 to 24 days, the young are ready to leave the cavity on their first flight. A New York observer gave a good account of a downy family's last few days in the cavity: "The young chattered most of the time during the last two days of nest life. One at a time they looked out a great deal at the strange outer world. They left the nest on the eleventh of June. The last two, a male and a female, left during the afternoon, each after being fed at the entrance and seeing the parent fly away. The young male flew from the nesting hole straight to a tree 60 feet away. His sister quickly followed, lighting on the trunk of the same tree and following her parent up the bole in the hitching manner of their kind as though she had been practicing this vertical locomotion all of her life."
The observer could distinguish male youngsters from female because they already had a slightly different appearance. Like their adult counterparts, the young males have red on their heads and the females do not. The red on the head of the juvenile male is not a small spot on the back of the head as in the adult male, but a much larger area of red and pink on the whole crown. The youngsters are also somewhat fluffy or "downy" looking. The juvenile female looks like the juvenile male, without the red crown.
This juvenile plumage will be worn but a short time, for all downies, young and adult, molt into winter plumage in September.
Once the young have fledged, the parents divide the brood and each takes care only of its own charges. The male will usually take one or two of the young, while the female takes the others. According to research, young downies become independent at the age of 41 days. Many people have seen youngsters on suet feeders in late summer with no apparent adult escort, nor any interest in other downies in the area. In fact, the adults will drive off the youngsters at the suet feeders.
Downy woodpeckers have only one brood a year in the north, but sometimes two in the south.
Winter for a Downy
By September the downy woodpecker family has broken up, the young of the year look like adults, and all become solitary and quiet.
As cold weather approaches, the first order of business is to locate a winter roosting cavity. Apparently, downies do not use their nesting cavities as winter roosts; most birds drill fresh roosts in anticipation of the long winter ahead.
These preparations, however, are not made at the fast pace of most other birds in autumn. The species that must migrate to warmer climates seem to be restless and in a hurry about everything. But not the downy. It remains calm in the midst of the hustle. Such is the personality of the permanent resident. Despite this, there are some studies which indicate that some downies, particularly females, do leave the breeding territory; others don't. The reasons for these variations are not clear.
The downy's winter is spent quietly and alone, searching the dormant woodland for food. The pace of life has slowed, and often its tap, tap, tap is the only sound to be heard above the wind in the trees. The downy is well equipped to survive the coldest weather. It even takes playful baths in the snow piled high on branches. A woman in Canada described one such incident: "This morning a female downy flew to a horizontal branch and proceeded vigorously to bathe in the loose snow lying there. Like a robin in a puddle, Mrs. Downy ducked her head, ruffled her feathers and fluttered her wings, throwing some of the snow over her back and scattering the rest to the winds."
The downy woodpecker's winter food is not unlimited. The insects upon which it survives stopped multiplying when cold weather arrived. As time passes, the bird must search more and more diligently to feed itself. It gets some help from the bands of chickadees, titmice, and nuthatches with whom it shares the winter woods. Downies will often stay loosely associated with these species as they cruise the woodlands in search of hidden morsels. But the downy is tied somewhat to the area near its roosting hole, since it will return to it every evening at sunset. Therefore, the feeding area surrounding the roosting cavity becomes a downy's individual winter feeding territory, which it will defend against other downies.
Backyard feeding stations are the exception. For some unexplained reason, feeding stations are a "common ground" for all birds in all seasons. Usually (in the right conditions) there will be between six and ten downies at suet feeders at various times every day during the winter. There will be fewer during the summer. That is probably because there is more natural food in the summer and breeding territories are more rigorously defended. Regardless, the downies take turns at feeders, abiding by some kind of truce at the suet, though there are often fights over who feeds first.
Territorial Disputes
When two males or two females come face to face over a territorial dispute, they spread their wings, raise their crests and assume a challenging attitude and scold each other. Most of this is bluff, of course, for they soon settle down, unless one or the other advances toward a female.
Flight
Like the other members of the woodpecker clan, the downy has a distinct undulating flight that is most evident when it crosses open areas or swoops through woodlands. The dips are not as deep as those of a goldfinch, but as ornithologist Arthur Cleveland Bent said, "It gives the effect of a ship pitching slightly in a heavy sea. A few strokes carry the bird up to the crest of the wave, the wings clapping close to the side of the body; then, at the crest, with the wings shut, the bird tilts slightly forward, and slides down into the next trough."
Enemies & Camouflage
Though no songbird is totally safe from predators, not many downy woodpeckers fall prey to hawks, owls, and other winged hunters. When attacked, downies are quite adroit at dodging raptors by flitting around the branches of their natural habitat. They can also flatten themselves against the bark of a tree trunk and become almost invisible to any pursuer. Maurice Thompson described a downy's defense against a goshawk: "The downy darted through the foliage and flattened itself against a large oak bough, where it remained motionless as the bark itself. The hawk lit on the same bough within a few feet of its intended victim, and remained sitting there for a few moments, searching in vain. The black and white feathers of the downy blended perfectly with the bark and lichen on the tree."
Other enemies, strangely, include house wrens, which have been known to wait until downies have completed work on their nesting cavities before appropriating the site for themselves. Unbelievable as it may sound, the house wren can be aggressive enough to attack a pair of downies and drive them from their own nesting site to procure the cavity for its own.
Squirrels, particularly red squirrels, will destroy the eggs and young of downy woodpeckers.
Attracting Downies
Food, cover, and water are the three basic needs of all wildlife and downy woodpeckers are no exception. Food and cover definitely take priority over water, as downies seldom drink at birdbaths.
Mature trees in an open woodland are the preferred habitat, but any kind of natural cover is better than none at all. A mixed stand of oaks, basswood, maples, and willows will suit downies perfectly.
Food is simple. Downy woodpeckers love beef suet. Be sure that you get real beef suet at the butcher shop. Often a butcher will give or sell you beef fat, which downies will eat only reluctantly in the winter. They prefer real suet, which is the hard, white, opaque fat surrounding the beef kidney. Regular beef fat has a greasier, translucent appearance. It will also decompose in warm weather and attract flies. Suet will not. That is why beef suet is recommended all year long. It is every bit as successful with downies in summer as in winter. Plus, the suet feeder is the place where most baby downies are first seen by humans. They are so cute with their red caps and roly-poly appearance. At first a parent bird feeds the youngster suet. Then it tries to get the youngster to feed itself. All that free entertainment is yours to enjoy if you put up a suet feeder.
Other feeding station foods that downies will eat include peanut butter (it's a fallacy that peanut butter sticks in the throats of birds), doughnuts, nutmeats, sunflower seeds, corn bread, and cracked corn kernels. But beef suet is by far the most popular with all the woodpeckers.
Will a downy woodpecker nest in a bird house? Though most books on attracting birds or building birdhouses give dimensions for downy woodpecker houses, there does not appear to be any record of a downy nesting in a man-made house. However, there are records of downies using birdhouses as winter roosts.
Special Adaptations
The downy has many adaptations, ranging from the tail feathers to the tongue.
First of all, the downy's toes are different from those of most other birds. Instead of having three toes in front and one in back, the downy has two toes in front and two in back. This arrangement makes the downy's unique tripod of two feet and stiff tail feathers more effective. The toes have adapted in another way as well: the outer hind toe is longer than the rest of the toes, which helps keep the bird from swaying.
The downy's tail is also special. Unlike those of most birds, the downy's tail feathers are long and stiff. This helps support the bird's weight as it stands vertically on a tree.
Another adaptation of the downy woodpecker is its unusual bill. It is not pointed like those of most other birds, but chisel-shaped. A chisel-shaped bill makes the downy's work of carving a nesting or roosting cavity easier. The bill also helps the downy chip away the wood around insects buried in a tree. The tongue is also worth noting. At twice the length of the downy's head, the tongue easily spears small morsels with a horny tip of recurved barbs.
Yes, even the skull has changed to fit the downy's needs. The skull of the downy is stronger and thicker than that of most other birds, and so logically it is also heavier. This extra weight makes the little jackhammer more effective.
But most amazing is not how the downy has adapted; it is its ability to adapt. When European settlers invaded the downy woodpecker's territory 200 to 300 years ago, the birds did not retreat as did many of our native species. Instead, they accepted as a home the orchards and shade trees with which man replaced the forests. Our early ornithologists were in agreement when they characterized the bird. Audubon remarked in 1842 that it "is perhaps not surpassed by any of its tribe in hardiness, industry, or vivacity."
Alexander Wilson said ten years earlier that "the principal characteristics of this little bird are diligence, familiarity, perseverance," and spoke of a pair of downies working at their nest "with the most indefatigable diligence."
And so it is today. The downy woodpecker remains unspoiled and unconcerned by the threats of man. It just quietly flits around the backyard woodland, tap, tap, tap-ing its way through life.
f:\12000 essays\sciences (985)\Biology\The Ebola virus 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTRODUCTION
The most deadly killers on this earth are too small to see with the naked eye. These microscopic predators are viruses. In my report, I will answer many basic questions concerning one of the fastest-killing viruses, the Ebola virus: questions such as "How does it infect its victims?", "How are Ebola victims treated?", "How are Ebola outbreaks controlled?", and many others related to this deadly virus.
GENERAL INFORMATION
The Ebola virus is a member of the negative-stranded RNA viruses known as filoviruses. There are four different strains of the Ebola virus: Zaire (EBOZ), Sudan (EBOS), Tai (EBOT) and Reston (EBOR). They are very similar except for small serological and gene-sequence differences. The Reston strain is the only one which does not affect humans. The Ebola virus was named after the Ebola River in Zaire, Africa, after its first outbreak there in 1976.
STRUCTURE
When magnified by an electron microscope, the Ebola virus resembles a long filament and is threadlike in shape. It is usually found bent into a "U" shape. Many 7 nm spikes, spaced about 10 nm apart, are visible on the surface of the virus. The average length and diameter of the virion are about 920 nm and 80 nm. The virions are highly variable in length (polymorphic), some attaining lengths as long as 14,000 nm. The Ebola virus consists of a helical nucleocapsid, which is a protein coat together with the nucleic acid it encloses, and a host-cell membrane, a lipoprotein envelope that surrounds the virus and is derived from the host cell's membrane. The virus is composed of seven polypeptides: a nucleoprotein, a glycoprotein, a polymerase and four other undesignated proteins. These proteins are synthesized from mRNAs transcribed from the RNA of the virus. The genome consists of a single strand of negative-sense RNA, which is not infectious by itself. Its order is as follows: 3' untranslated region, nucleoprotein, the viral structural proteins VP35 and VP40, glycoprotein, VP30, VP24, polymerase (L), 5' untranslated region.
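The gene order just described can also be written out as a simple ordered list. The sketch below is only an illustrative Python representation of the arrangement given in this section; the segment labels and the dimension constants are shorthand taken from the figures above, not an official annotation.

    # Illustrative only: the Ebola genome arrangement described above,
    # listed from the 3' end to the 5' end of the negative-sense RNA strand.
    GENOME_ORDER = [
        "3' untranslated region",
        "nucleoprotein (NP)",
        "VP35",
        "VP40",
        "glycoprotein (GP)",
        "VP30",
        "VP24",
        "polymerase (L)",
        "5' untranslated region",
    ]

    # Approximate virion dimensions quoted in this section.
    AVG_LENGTH_NM = 920     # average virion length
    AVG_DIAMETER_NM = 80    # average virion diameter
    MAX_LENGTH_NM = 14000   # longest virions observed

    if __name__ == "__main__":
        for position, segment in enumerate(GENOME_ORDER, start=1):
            print(position, segment)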
HOW IT INFECTS
Once the virus enters the body, it travels through the bloodstream and is replicated in many organs. The mechanism it uses to penetrate cell membranes and enter the cell is still unknown. Once the virus is inside a cell, its RNA is transcribed and replicated. Transcription produces mRNAs, which are used to make the virus's proteins. The RNA is replicated in the cytoplasm, mediated by the synthesis of an antisense positive RNA strand that serves as a template for producing additional Ebola genomes. As the infection progresses, the cytoplasm develops "prominent inclusion bodies", meaning that it contains viral nucleocapsids that have become highly ordered. The virus then assembles and buds off from the host cell, obtaining its lipoprotein coat from the outer membrane. This destruction of the host cell occurs rapidly, with large numbers of viruses budding from it.
WHAT IT INFECTS
The Ebola virus mainly attacks cells of the lymphatic organs, liver, kidney, ovaries, testes, and the cells of the reticuloendothelial system. The massive destruction of the liver is the trademark of Ebola. The victim loses vast amounts of blood, especially into the mucosa, abdomen, pericardium and vagina. Capillary leakage and bleeding lead to a massive loss of intravascular volume. In fatal cases, shock and acute respiratory disorder can also be seen along with the bleeding. Numerous victims are delirious due to high fevers and many die of intractable shock.
SYMPTOMS
During the onset of Ebola, the host will experience weakness, fever, muscle pain, headache and sore throat. As the infection progresses, vomiting (usually black), limited kidney and liver function, chest and abdominal pain, rash and diarrhoea begin. External bleeding from skin and injection sites and internal bleeding from organs occur due to failure of blood to clot.
TRANSMISSION
How "patient zero" (the first person to be infected) acquires the natural infection is still a mystery. After the first person is infected, further spread of Ebola to other humans (secondary transmission) is due to direct contact with bodily fluids such as blood, secretions and excretions. It is also spread through contact with the patient's skin, which carries the virus. Spread can be accomplished by person-to-person transmission, needle transmission or sexual contact. Person-to-person transmission occurs when people have direct contact with Ebola patients and do not have suitable protection. Family members and doctors who contract the virus usually obtain it from this type of transmission. Needle transmission occurs when needles which have been used on Ebola patients are reused. This happens frequently in developing countries such as Zaire and Sudan because the health care system is underfinanced. A lucky person who has recovered from the Ebola virus can also infect another person through sexual contact, because the person may still carry the virus in his or her genital secretions. A fourth method of transmission is airborne transmission. This route has not been conclusively proven, although several experiments suggest that this type of transmission is quite possible. The time between the invasion of Ebola and the appearance of its symptoms (the incubation period) is 2-21 days.
HOW IT IS DIAGNOSED
Diagnosing the Ebola virus may take up to 10 days. The methods used to detect the virus are very slow compared to how rapidly Ebola can kill its victims. Blood or tissue samples are sent to a high-containment laboratory designed for working with infected substances and are tested for specific antigens, antibodies or the virus's genetic material itself. Recently, a skin test has been developed which can detect infections much faster. A skin biopsy specimen is fixed in a chemical called formalin, which kills the virus, and is then safely transported to a lab. It is processed with chemicals and, if the dead Ebola virus is present, the specimen will turn bright red.
TREATMENT
No treatment, vaccine, or antiviral therapy exists. Roughly ninety percent of all Ebola's victims die. The patient can only receive intensive supportive care and hope that they can be one of the fortunate ten percent who survive.
In November of 1995, Russian scientists claimed that they had discovered a cure for Ebola. It uses an antibody preparation called immunoglobulin G (IgG). They immunized horses and challenged them with live Ebola Zaire virus, then took the horses' blood and used it as antiserum. With the antiserum, they have developed Ebola-immune sheep, goats, pigs and monkeys. USAMRIID (the U.S. Army Medical Research Institute of Infectious Diseases) received some of the equine immunoglobulin and had some success, but fell short of the Russians' great claims. This discovery does give grounds for optimism that an effective cure for Ebola can be found.
CONTROL OF THE OUTBREAK
To control an outbreak of Ebola, you must prevent further spread of the virus. The CDC (Centers for Disease Control and Prevention) usually sends a team of medical scientists to the area of the outbreak, where they provide advice and assistance to prevent additional cases. To limit the spread, they collect specimens, study the course of the virus, and look for others who may have been in contact with the virus. Anyone who has been exposed to the virus is put under close surveillance and sprayed with disinfectant chemicals. The patients are isolated to interrupt person-to-person spread at the hospitals. This is called the "barrier technique":
1) All hospital personnel in contact with the patient must wear protective gear such as gowns, masks, gloves, and goggles.
2) Visitors are not allowed.
3) Disposable materials and wastes are removed or burned after use.
4) Reusable materials, such as syringes and needles, are sterilized.
5) All surfaces are cleaned with sanitizing solution.
6) Fatal cases are buried or cremated.
The outbreak is officially over when two maximum incubation periods (42 days) have passed without any new cases.
PAST OUTBREAKS
In the past, there have been four major outbreaks. The first occurred in 1976 in Zaire, Africa, where there were 280 fatalities out of 318 cases. The second also occurred in 1976, but in the nearby country of Sudan, where 150 additional victims out of 250 cases died. In total, there were 430 deaths out of the 568 people who were infected in 1976, a death rate of about 75%. A smaller outbreak arose in 1979, also in Sudan; there were only 34 cases and 22 fatalities. Tiny outbreaks occurred periodically in Africa up until 1995. In 1995, after 16 years of hiding, Ebola made its fourth major appearance and devastated Africa once again, this time in Kikwit, Zaire. The first patient was discovered on January 6th and the outbreak was officially declared over on August 24th (see chart for the death distribution of each month during its peak - 212 deaths). There was a total of 315 cases and 244 deaths, a 77% fatality rate.
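To make the fatality figures above easier to compare, here is a minimal Python sketch that recomputes the case-fatality rate of each outbreak from the case and death counts quoted in this paragraph; the numbers are the essay's own and are not independently verified.

    # Case and death counts as quoted above (not independently verified).
    outbreaks = [
        ("Zaire, 1976",  318, 280),
        ("Sudan, 1976",  250, 150),
        ("Sudan, 1979",   34,  22),
        ("Kikwit, 1995", 315, 244),
    ]

    for place, cases, deaths in outbreaks:
        rate = 100.0 * deaths / cases  # case-fatality rate in percent
        print(f"{place}: {deaths}/{cases} = {rate:.0f}% fatal")

    # Combined 1976 total: 430 deaths out of 568 cases, roughly 76%.
    total_cases = 318 + 250
    total_deaths = 280 + 150
    print(f"1976 combined: {total_deaths}/{total_cases} = "
          f"{100.0 * total_deaths / total_cases:.0f}%")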
THE ANIMAL RESERVOIR
The animal species which carries the Ebola virus has not been found. Since outbreaks begin when man comes in contact with the animal reservoir, scientists made several attempts during the outbreaks of the 1970s to find it, but were unsuccessful. The 1995 outbreak gave scientists a perfect opportunity to search for the source once again. After locating "patient zero", a charcoal-maker named Gaspard Menga, they decided to search the jungle where he probably came in contact with Ebola. They collected over 18,000 animals and 30,000 insects, including mosquitoes, hard ticks, rodents, birds, bats, cats, small bush antelope, snakes, lizards and a few monkeys. After collection, the specimens are tested for antibodies to Ebola or for Ebola itself. The scientists will continue searching until the end of the year, hoping that they will find the animal reservoir.
f:\12000 essays\sciences (985)\Biology\THE EBOLA VIRUS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Ebola Virus
A virus is an ultramicroscopic infectious organism that, having no independent metabolic activity, can replicate only within a cell of another host organism. A virus consists of a core of nucleic acid, either RNA or DNA, surrounded by a coat of antigenic protein and sometimes by a lipid layer as well. The virus provides the genetic code for replication, and the host cell provides the necessary energy and raw materials. There are more than 200 viruses that are known to cause disease in humans. The Ebola virus, which dates back to 1976, has four strains, each from a different geographic area, but all give their victims the same painful, often lethal symptoms.
The Ebola virus is a member of a family of RNA viruses known as Filoviridae, falling under one genus, Filovirus. "The Ebola virus and Marburg virus are the two known members of the Filovirus family" (Journal of the American Medical Association 273: 1748). Marburg is a relative of the Ebola virus. The four strains of Ebola are Ebola Zaire, Ebola Sudan, Ebola Reston, and Ebola Tai. Each is named after the geographical location in which it was discovered. These filoviruses cause hemorrhagic fever, which is actually what kills victims of the Ebola virus. Hemorrhagic fever is defined in Mosby's Medical, Nursing, and Allied Health Dictionary as a group of viral aerosol infections, characterized by fever, chills, headache, malaise, and respiratory or GI symptoms, followed by capillary hemorrhages and, in severe infection, oliguria, kidney failure, hypotension, and, possibly, death. The incubation period for Ebola hemorrhagic fever ranges from 2-21 days (JAMA 273: 1748). The blood fails to clot, and patients may bleed from injection sites and into the gastrointestinal tract, skin and internal organs (Ebola Info. from the CDC 2). The Ebola virus has a tropism for liver cells and macrophages, the cells that engulf bacteria and help the body defend against disease. Massive destruction of the liver is a hallmark feature of Ebola virus infection. This virus does in ten days what it takes AIDS ten years to do. It also requires biosafety level four containment, the highest and most dangerous level; HIV, the virus that causes AIDS, requires only biosafety level two. In reported outbreaks, 50%-90% of cases have been fatal (JAMA 273: 1748).
Ebola can be spread in a number of ways, and replication of the virus occurs at an alarming rate. Ebola replication in infected cells takes about eight hours. Hundreds to thousands of new virus particles are then released over periods of a few hours to a few days, before the cells die. Several cycles of replication occur in a primate before the onset of the fever and other clinical manifestations (Ornstein, Matthews and Johnson 7). In most outbreaks, transmission from patient to patient within hospitals has been associated with the reuse of unsterile needles and syringes. High rates of transmission in outbreaks have occurred from patients to health-care workers and to family members who provide nursing care without appropriate precautions to prevent exposure to blood, other body fluids, vomitus, urine and stool. The risk of transmitting the infection appears to be highest during the later stages of illness, which are often characterized by vomiting, diarrhea, shock, and frequently hemorrhaging (JAMA 274: 374). Even a person who has recovered from the symptoms of the illness may have the virus present in the genital secretions for a brief period afterward. This makes it possible for the virus to be spread by sexual contact. Complete recovery is reached only when no particles of the virus are left in the body fluids; this, however, is rarely attained. For humans the disease is not airborne, capable of being passed on through the air, but for nonhuman primates airborne spread has been a possibility in a few cases.
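As a rough illustration of why the viral load builds so quickly under the replication figures quoted at the start of the previous paragraph, the sketch below assumes an eight-hour replication cycle and a burst size of 1,000 particles per infected cell, a figure chosen from the middle of the "hundreds to thousands" range stated above; both numbers are assumptions for illustration, not measured values.

    # Back-of-envelope growth under the assumptions stated in the lead-in:
    # each cycle takes ~8 hours and each infected cell releases ~1,000 particles.
    CYCLE_HOURS = 8
    BURST_SIZE = 1_000

    particles = 1  # start from a single infecting particle
    for cycle in range(1, 5):
        particles *= BURST_SIZE
        print(f"after cycle {cycle} (~{cycle * CYCLE_HOURS} h): "
              f"up to {particles:,} particles")
    # After only three or four cycles (roughly a day to a day and a half),
    # the upper bound is already in the billions to trillions, ignoring the
    # immune response and the limited number of available host cells.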
Ebola Zaire was identified in 1976 in northern Zaire and was the first documented appearance of the virus. This strain of the virus affects humans and nonhuman primates. Close contact and dirty needles spread the Ebola virus. The center of the epidemic in Zaire was a missionary hospital where needles and syringes were reused without sterilization. Most of the staff of the hospital got sick and died. This outbreak infected 318 people with a death rate of 93% (Le Guenno et al. 1271). Another fatal case was reported one year later in Zaire but nothing major ever became of it. The most recent outbreak recorded was the infamous one in Kikwit, Zaire, which had the world in an uproar about the possibility of the virus spreading globally. This outbreak appeared to have started with a patient who had surgery in Kikwit on April 10, 1995. Members of the surgical team then developed symptoms similar to those of a viral hemorrhagic fever disease (Ebola Info. from the CDC 2). From there, the disease spread to more than 300 others. The most frequent symptoms at the onset were fever (94%), diarrhea (80%), and severe weakness (74%); other symptoms included dysphagia (41%) and hiccups (15%). Clinical signs of bleeding occurred in 38% of cases (JAMA 274: 373). The World Health Organization declared on August 24, 1995 that the outbreak of Ebola Zaire in Kikwit was officially over after killing 244 of its 315 known victims ("Ebola Outbreak Officially Over" 1). This outbreak had a death rate of over 75%.
Ebola Sudan also appeared in 1976, at about the same time as Ebola Zaire. The number of cases was 284, with a death rate of 53% (Le Guenno et al 1271). The outbreak occurred in a hospital setting. In 1979 a small epidemic was recognized in the same town in Sudan; of the thirty-four recorded cases there were twenty-two fatalities (Ebola Info. from the CDC 1). Again the epidemic occurred in a hospital, one with inadequate supplies and unsanitary conditions.
Ebola Reston was isolated in 1989 during an outbreak among cynomolgus monkeys held in a quarantine facility in Reston, Virginia (Le Guenno et al 1271). These monkeys had been imported to the U.S. from the Philippines. This was the only outbreak of the virus to occur outside the continent of Africa. The Reston strain of Ebola appears to be highly pathogenic for some monkey species but not for man (Le Guenno et al 1271); no humans fell victim to it or even contracted the virus. It is also the only known strain thought capable of being transmitted through the air.
Ebola Tai, named after the forest in which it was found, is the newest strain of the Ebola family. It was contracted by a Swiss zoologist who performed an autopsy on a wild chimpanzee infected with the virus. This occurred in the Ivory Coast, West Africa, in mid-November of 1994. It is the only known case of Ebola Tai and the first recorded case in which infection of a human has been linked to naturally infected monkeys anywhere on the African continent. It is also not clear how the chimpanzee contracted the disease.
The usual hosts for viruses that cause hemorrhagic fever are rodents, ticks, or mosquitoes. The natural reservoir for Ebola viruses has not been identified, and ... because of the high mortality rate seen in apes they are unlikely to be the reservoir (Le Guenno et al 1271). Thousands of animals captured near outbreak areas have been tested for the virus, but these efforts have so far been unsuccessful.
Ebola might never pose a problem to the world community, but the virus itself is armed with several advantages. It has the ability to mutate into new strains, as has been seen over time. The fact that there are no known hosts, which means there is no way to create a vaccine, coupled with the fact that poor sanitary conditions and a lack of medical supplies worsen the spread of the disease, means there is a chance that the virus could become an international problem.
Even if an international crisis were to begin, the virus has too many weaknesses for the disease to spread on a massive scale. First, the virus is easily destroyed by disinfectants (Ebola Info. from the CDC 3). The virus also falls apart under ultraviolet light, which damages its genetic material and leaves it unable to replicate. Ebola's virulence may also serve to limit its spread: its victims die so quickly that they do not have a chance to spread the infection very far. In order for the virus to become airborne it would have to mutate in such a way that its outer protective coating of proteins, the capsid, could resist the forces to which it is subjected in air, such as dryness and heat. It would also probably need to change structure to allow infection through the respiratory system. There are no exact measures of the rate of Ebola mutation, but the probability of the required mutations happening is very low (Ornstein, Matthews and Johnson 4).
There is no cure or vaccine, and it is still unclear whether blood from survivors, which contains antibodies, can be used to produce a serum to treat the disease. Some patients have had symptoms subside after transfusions of survivors' blood, but no connection between the antibodies and the relief of the illness has been proven. There is a good chance that a vaccine may never be synthesized. The kind of research needed to develop a modified live virus vaccine simply could not be done, given the scope of the problem. That is, only the few people working with the virus in laboratories would need to be vaccinated, and a stockpile might be wanted in the event of an epidemic; but these are not circumstances on a scale that could justify the cost of developing a vaccine (Dr. F.A. Murphy 3).
f:\12000 essays\sciences (985)\Biology\THE EFFECTS OF ALTITUDE ON HUMAN PHYSIOLOGY.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE EFFECTS OF ALTITUDE ON HUMAN PHYSIOLOGY
Changes in altitude have a profound effect on the human body. The body
attempts to maintain a state of homeostasis or balance to ensure the optimal
operating environment for its complex chemical systems. Any change from this
homeostasis is a change away from the optimal operating environment. The body
attempts to correct this imbalance. One such imbalance is the effect of
increasing altitude on the body's ability to provide adequate oxygen to be
utilized in cellular respiration. With an increase in elevation, a typical
occurrence when climbing mountains, the body is forced to respond in various
ways to the changes in external
environment. Foremost of these changes is the diminished ability to obtain
oxygen from the atmosphere. If the adaptive responses to this stressor are
inadequate, the performance of body systems may decline dramatically. If
prolonged, the results can be serious or even fatal. In looking at the effect
of altitude on body functioning we first must understand what occurs in the
external environment at higher elevations and then observe the important
changes that occur in the internal environment of the body in response.
HIGH ALTITUDE
In discussing altitude change and its effect on the body, mountaineers
generally define altitude according to the scale of high (8,000 - 12,000
feet), very high (12,000 - 18,000 feet), and extremely high (18,000+ feet)
(Hubble, 1995). A common misperception of the change in external environment
with increased altitude is that there is decreased oxygen. This is not
correct as the concentration of oxygen at sea level is about 21% and stays
relatively unchanged until over 50,000 feet (Johnson, 1988).
What is really happening is that the atmospheric pressure is decreasing and
subsequently the amount of oxygen available in a single breath of air is
significantly less. At sea level the barometric pressure averages 760 mmHg
while at 12,000 feet it is only 483 mmHg. This decrease in total atmospheric
pressure means that there are 40% fewer oxygen molecules per breath at this
altitude compared to sea level (Princeton, 1995).
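As a rough, purely illustrative check of that figure, the two barometric pressures quoted above are all that is needed, since the oxygen fraction itself is the same at both elevations. A minimal sketch in Python, assuming only the 760 mmHg and 483 mmHg values given:

# Illustrative check of the "fewer oxygen molecules per breath" figure.
# The oxygen fraction (~21%) is the same at both elevations, so only the
# total barometric pressures quoted above matter.
sea_level_pressure = 760.0   # mmHg
pressure_12000_ft = 483.0    # mmHg

remaining = pressure_12000_ft / sea_level_pressure
print(f"Oxygen per breath at 12,000 feet: {remaining:.0%} of sea level")
print(f"Reduction: {1 - remaining:.0%}")

The simple ratio works out to roughly a 36% reduction; the 40% figure cited above is the commonly quoted, slightly rounder estimate.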
HUMAN RESPIRATORY SYSTEM
The human respiratory system is responsible for bringing oxygen into the
body and transferring it to the cells where it can be utilized for cellular
activities. It also removes carbon dioxide from the body. The respiratory
system draws air initially either through the mouth or nasal passages. Both
of these passages join behind the hard palate to form the pharynx. At the
base of the pharynx are two openings. One, the esophagus, leads to the
digestive system while the other, the glottis, leads to the lungs. The
epiglottis covers the glottis when swallowing so that food does not enter the
lungs. When the epiglottis is not covering the opening to the lungs air may
pass freely into and out of the trachea.
The trachea, sometimes called the "windpipe," branches into two bronchi, each
of which leads to a lung. Once in the lung the bronchi branch many times into
smaller bronchioles which eventually terminate in small sacs called alveoli.
It is in the alveoli that the actual transfer of oxygen to the blood takes
place.
The alveoli are shaped like inflated sacs and exchange gas through a
membrane. The passage of oxygen into the blood and carbon dioxide out of the
blood is dependent on three major factors: 1) the partial pressure of the
gases, 2) the area of the pulmonary surface, and 3) the thickness of the
membrane (Gerking, 1969). The membranes in the alveoli provide a large
surface area for the free exchange of gases. The typical thickness of the
pulmonary membrane is less than the thickness of a red blood cell. The
pulmonary surface and the thickness of the alveolar membranes are not
directly affected by a change in altitude. The partial pressure of oxygen,
however, is directly related to altitude and affects gas transfer in the
alveoli.
GAS TRANSFER
To understand gas transfer it is important to first understand something
about the
behavior of gases. Each gas in our atmosphere exerts its own pressure and
acts independently of the others. Hence the term partial pressure refers to
the contribution of each gas to the entire pressure of the atmosphere. The
average pressure of the atmosphere at sea level is approximately 760 mmHg.
This means that the pressure is great enough to support a column of mercury
(Hg) 760 mm high. To figure the partial pressure of oxygen you start with the
percentage of oxygen present in the atmosphere which is about 20%. Thus
oxygen will constitute 20% of the total atmospheric pressure at any given
level. At sea level the total atmospheric pressure is 760 mmHg so the partial
pressure of O2 would be approximately 152 mmHg.
760 mmHg x 0.20 = 152 mmHg
A similar computation can be made for CO2 if we know that its concentration
is approximately 0.04%. The partial pressure of CO2 would then be about 0.304
mmHg at sea level.
760 mmHg x 0.0004 = 0.304 mmHg
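The same computation can be written for any gas once its fractional concentration is known. A minimal sketch in Python (illustrative only, using the 20% and 0.04% concentrations given above):

# Partial pressure = total atmospheric pressure x fractional concentration.
def partial_pressure(total_pressure_mmHg, gas_fraction):
    return total_pressure_mmHg * gas_fraction

SEA_LEVEL = 760.0                           # average pressure at sea level, mmHg
print(partial_pressure(SEA_LEVEL, 0.20))    # oxygen:          152.0 mmHg
print(partial_pressure(SEA_LEVEL, 0.0004))  # carbon dioxide:  0.304 mmHg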
Gas transfer at the alveoli follows the rule of simple diffusion. Diffusion
is movement of molecules along a concentration gradient from an area of high
concentration to an area of lower concentration. Diffusion is the result of
collisions between molecules. In areas of higher concentration there are more
collisions. The net effect of this greater number of collisions is a movement
toward an area of lower concentration. In Table 1 it is apparent that the
concentration gradient favors the diffusion of oxygen into and carbon dioxide
out of the blood (Gerking, 1969). Table 2 shows the decrease in partial
pressure of oxygen at increasing altitudes (Guyton, 1979).
Table 1
                    ATMOSPHERIC AIR        ALVEOLUS            VENOUS BLOOD
OXYGEN              152 mmHg (20%)         104 mmHg (13.6%)    40 mmHg
CARBON DIOXIDE      0.304 mmHg (0.04%)     40 mmHg (5.3%)      45 mmHg

Table 2
ALTITUDE   BAROMETRIC        Po2 IN AIR   Po2 IN ALVEOLI   ARTERIAL OXYGEN
(ft.)      PRESSURE (mmHg)   (mmHg)       (mmHg)           SATURATION (%)
0          760               159*         104              97
10,000     523               110          67               90
20,000     349               73           40               70
30,000     226               47           21               20
40,000     141               29           8                5
50,000     87                18           1                1
*This value differs from Table 1 because the author used 21% for the
concentration of O2, while the author of Table 1 chose to use 20%.
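The Po2-in-air column of Table 2 follows directly from the barometric pressure column. A short sketch (Python, illustrative only, using the 21% concentration noted in the footnote) reproduces it to within rounding:

# Recompute the "Po2 in air" column of Table 2 from the barometric pressures.
barometric = {0: 760, 10000: 523, 20000: 349, 30000: 226, 40000: 141, 50000: 87}
O2_FRACTION = 0.21   # oxygen concentration used for the Po2-in-air column

for altitude_ft, pressure_mmHg in barometric.items():
    po2 = pressure_mmHg * O2_FRACTION
    print(f"{altitude_ft:>6} ft: Po2 in air ~ {po2:.0f} mmHg")
# Prints approximately 160, 110, 73, 47, 30, and 18 mmHg, matching the table.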
CELLULAR RESPIRATION
In a normal, non-stressed state, the respiratory system transports oxygen
from the lungs to the cells of the body where it is used in the process of
cellular respiration. Under normal conditions this transport of oxygen is
sufficient for the needs of cellular respiration. Cellular respiration
converts the energy in chemical bonds into energy that can be used to power
body processes. Glucose is the molecule most often used to fuel this process
although the body is capable of using other organic molecules for energy.
The transfer of oxygen to the body tissues is often called internal
respiration (Grollman, 1978). The process of cellular respiration is a
complex series of chemical steps that ultimately allow for the breakdown of
glucose into usable energy in the form of ATP (adenosine triphosphate). The
three main steps in the process are: 1) glycolysis, 2) Krebs cycle, and 3)
electron transport system. Oxygen is required for these processes to function
at an efficient level. Without the presence of oxygen the pathway for energy
production must proceed anaerobically. Anaerobic respiration, sometimes called
lactic acid fermentation, produces significantly less ATP (2 instead of 36/38)
and due to this great inefficiency will quickly exhaust the available supply
of glucose. Thus the anaerobic pathway is not a permanent solution for the
provision of energy to the body in the absence of sufficient oxygen.
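The cost of the anaerobic pathway can be made concrete with a quick calculation (illustrative only, using the 2 and 36 ATP figures just quoted):

# Compare ATP yield per glucose molecule for the two pathways described above.
ATP_AEROBIC = 36     # approximate yield with oxygen (some sources cite 38)
ATP_ANAEROBIC = 2    # yield from glycolysis alone (lactic acid fermentation)

ratio = ATP_AEROBIC / ATP_ANAEROBIC
print(f"Anaerobic respiration must consume about {ratio:.0f} times as much "
      "glucose to produce the same amount of ATP.")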
The supply of oxygen to the tissues is dependent on: 1) the efficiency with
which blood is oxygenated in the lungs, 2) the efficiency of the blood in
delivering oxygen to the tissues, 3) the efficiency of the respiratory
enzymes within the cells to transfer hydrogen to molecular oxygen (Grollman,
1978). A deficiency in any of these areas can result in the body cells not
having an adequate supply of oxygen. It is this inadequate supply of oxygen
that results in difficulties for the body at higher elevations.
ANOXIA
A lack of sufficient oxygen in the cells is called anoxia. Sometimes the
term hypoxia, meaning less oxygen, is used to indicate an oxygen debt. While
anoxia literally means "no oxygen" it is often used interchangeably with
hypoxia. There are different types of anoxia based on the cause of the oxygen
deficiency. Anoxic anoxia refers to defective oxygenation of the blood in the
lungs. This is the type of oxygen deficiency that is of concern when
ascending to greater altitudes with a subsequent decreased partial pressure
of O2. Other types of oxygen deficiencies include: anemic anoxia (failure of
the blood to transport adequate quantities of oxygen), stagnant anoxia (the
slowing of the circulatory system), and histotoxic anoxia (the failure of
respiratory enzymes to adequately function).
Anoxia can occur temporarily during normal respiratory system regulation of
changing cellular needs. An example of this would be climbing a flight of
stairs. The increased oxygen demand of the cells in providing the mechanical
energy required to climb ultimately produces a local hypoxia in the muscle
cell. The first noticeable response to this external stress is usually an
increase in breathing rate. This is called increased alveolar ventilation.
The rate of our breathing is determined by the need for O2 in the cells and
is the first response to hypoxic conditions.
BODY RESPONSE TO ANOXIA
If increases in the rate of alveolar respiration are insufficient to supply
the oxygen needs of the cells the respiratory system responds by general
vasodilation. This allows a greater flow of blood in the circulatory system.
The sympathetic nervous system also acts to stimulate vasodilation within the
skeletal muscle. At the level of the capillaries the normally closed
precapillary sphincters open allowing a large flow of blood through the
muscles. In turn the cardiac output increases both in terms of heart rate and
stroke volume. The stroke volume, however, does not substantially increase in
the non-athlete (Langley, et al., 1980). This demonstrates an obvious benefit
of regular exercise and physical conditioning particularly for an individual
who will be exposed to high altitudes. The heart rate is increased by the
action of the
adrenal medulla which releases catecholamines. These catecholamines work
directly on the myocardium to strengthen contraction. Another compensation
mechanism is the release of renin by the kidneys. Renin leads to the
production of angiotensin which serves to increase blood pressure (Langley,
Telford, and Christensen, 1980). This helps to force more blood into
capillaries. All of these changes are a regular and normal response of the
body to external stressors. The question with altitude change becomes: what
happens when these normal responses can no longer meet the cells' demand for
oxygen?
ACUTE MOUNTAIN SICKNESS
One possibility is that Acute Mountain Sickness (AMS) may occur. AMS is
common at high altitudes. At elevations over 10,000 feet, 75% of people will
have mild symptoms (Princeton, 1995). The occurrence of AMS is dependent upon
the elevation, the rate of ascent to that elevation, and individual
susceptibility.
Acute Mountain Sickness is labeled as mild, moderate, or severe dependent on
the presenting symptoms. Many people will experience mild AMS during the
process of acclimatization to a higher altitude. In this case symptoms of AMS
would usually start 12-24 hours after arrival at a higher altitude and begin
to decrease in severity about the third day. The symptoms of mild AMS are
headache, dizziness, fatigue, shortness of breath, loss of appetite, nausea,
disturbed sleep, and a general feeling of malaise (Princeton, 1995). These
symptoms tend to increase at night when respiration is slowed during sleep.
Mild AMS does not interfere with normal activity and symptoms generally
subside spontaneously as the body acclimatizes to
the higher elevation.
Moderate AMS includes a severe headache that is not relieved by medication,
nausea and vomiting, increasing weakness and fatigue, shortness of breath,
and decreased coordination called ataxia (Princeton, 1995). Normal activity
becomes difficult at this stage of AMS, although the person may still be able
to walk on their own. A test for moderate AMS is to have the individual
attempt to walk a straight line heel to toe. The person with ataxia will be
unable to walk a straight line. If ataxia is indicated it is a clear sign
that immediate descent is required. In the case of hiking or climbing it is
important to get the affected individual to descend before the ataxia reaches
the point where they can no longer walk on their own.
Severe AMS presents all of the symptoms of mild and moderate AMS at an
increased level of severity. In addition there is a marked shortness of
breath at rest, the inability to walk, a decreasing mental clarity, and a
potentially dangerous fluid buildup in the lungs.
ACCLIMATIZATION
There is really no cure for Acute Mountain Sickness other than
acclimatization or
descent to a lower altitude. Acclimatization is the process, over time, where
the body adapts to the decrease in partial pressure of oxygen molecules at a
higher altitude. The major cause of altitude illnesses is a rapid increase in
elevation without an appropriate acclimatization period. The process of
acclimatization generally takes 1-3 days at the new altitude. Acclimatization
involves several changes in the structure and function of the body. Some of
these changes happen immediately in response to reduced levels of oxygen
while others are a slower adaptation. Some of the most significant changes
are:
Chemoreceptor mechanism increases the depth of alveolar ventilation. This
allows for an increase in ventilation of about 60% (Guyton, 1969). This is an
immediate response to oxygen debt. Over a period of several weeks the
capacity to increase alveolar ventilation may increase 600-700%.
Pressure in pulmonary arteries is increased, forcing blood into portions of
the
lung which are normally not used during sea level breathing.
The body produces more red blood cells in the bone marrow to carry oxygen.
This process may take several weeks. Persons who live at high altitude often
have red blood cell counts 50% greater than normal.
The body produces more 2,3-bisphosphoglycerate, a compound that facilitates
the release of oxygen from hemoglobin to the body tissues (Tortora, 1993).
The acclimatization process is slowed by dehydration, over-exertion, alcohol
and other depressant drug consumption. Longer-term changes may include an
increase in the size of the alveoli and a decrease in the thickness of the
alveolar membranes. Both of these changes allow for more gas transfer.
TREATMENT FOR AMS
The symptoms of mild AMS can be treated with pain medications for headache.
Some physicians recommend the medication Diamox (Acetazolamide). Both Diamox
and headache medication appear to reduce the severity of symptoms, but do not
cure the underlying problem of oxygen debt. Diamox, however, may allow the
individual to metabolize more oxygen by breathing faster. This is especially
helpful at night when respiratory drive is decreased. Since it takes a while
for Diamox to have an effect, it is advisable to start taking it 24 hours
before going to altitude. The recommendation of the Himalayan Rescue
Association Medical Clinic is 125 mg.
twice a day. The standard dose has been 250 mg., but their research shows no
difference with the lower dose (Princeton, 1995). Possible side effects
include tingling of the lips and finger tips, blurring of vision, and
alteration of taste. These side effects may be reduced with the 125 mg. dose.
Side effects subside when the drug is stopped. Diamox is a sulfonamide drug,
so people who are allergic to sulfa drugs should not take it. Diamox has also
been known to cause severe allergic reactions in people with no previous
history of Diamox or sulfa
allergies. A trial course of the drug is usually conducted before going to a
remote location where a severe allergic reaction could prove difficult to
treat. Some recent data suggests that the medication Dexamethasone may have
some effect in reducing the risk of mountain sickness when used in
combination with Diamox (University of Iowa, 1995).
Moderate AMS requires advanced medications or immediate descent to reverse
the problem. Descending even a few hundred feet may help and definite
improvement will be seen in descents of 1,000-2,000 feet. Twenty-four hours
at the lower altitude will result in significant improvements. The person
should remain at lower altitude until symptoms have subsided (up to 3 days).
At this point, the person has become acclimatized to that altitude and can
begin ascending again. Severe AMS requires immediate descent to lower
altitudes (2,000 - 4,000 feet). Supplemental oxygen may be helpful in
reducing the effects of altitude sicknesses but does not overcome all the
difficulties that may result from the lowered barometric pressure.
GAMOW BAG
This invention has revolutionized field treatment of high altitude
illnesses. The Gamow bag is basically a portable sealed chamber with a pump.
The principle of operation is identical to the hyperbaric chambers used in
deep sea diving. The person is placed inside the bag and it is inflated.
Pumping the bag full of air effectively increases the concentration of oxygen
molecules and therefore simulates a descent to lower altitude. In as little
as 10 minutes the bag creates an atmosphere that corresponds to that at 3,000
- 5,000 feet lower. After 1-2 hours in the bag, the
person's body chemistry will have reset to the lower altitude. This lasts for
up to 12 hours outside of the bag which should be enough time to travel to a
lower altitude and allow for further acclimatization. The bag and pump weigh
about 14 pounds and are now carried on most major high altitude expeditions.
The Gamow bag is particularly important where immediate
descent is not feasible.
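The size of the simulated descent can be estimated with the same partial pressure logic used earlier. The sketch below is illustrative only: the roughly 2 psi (about 103 mmHg) of added pressure is an assumed typical inflation value, not a figure from this essay, and barometric pressure is approximated as falling off exponentially with an assumed scale height of about 8,000 meters.

# Rough estimate of the "equivalent descent" produced by a pressurized bag.
import math

SCALE_HEIGHT_M = 8000.0      # assumed atmospheric scale height
SEA_LEVEL_MMHG = 760.0
FT_PER_M = 3.281

def pressure_at(altitude_ft):
    return SEA_LEVEL_MMHG * math.exp(-(altitude_ft / FT_PER_M) / SCALE_HEIGHT_M)

def altitude_for(pressure_mmHg):
    return -SCALE_HEIGHT_M * math.log(pressure_mmHg / SEA_LEVEL_MMHG) * FT_PER_M

camp_ft = 14000                       # hypothetical camp altitude
ambient = pressure_at(camp_ft)
inside_bag = ambient + 103            # assumed ~2 psi of added pressure
print(f"Ambient: {ambient:.0f} mmHg, inside the bag: {inside_bag:.0f} mmHg")
print(f"Equivalent altitude: about {altitude_for(inside_bag):.0f} ft, "
      f"a simulated descent of roughly {camp_ft - altitude_for(inside_bag):.0f} ft")

Under these assumptions the result lands in the same few-thousand-foot range of simulated descent described above.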
OTHER ALTITUDE-INDUCED ILLNESS
There are two other severe forms of altitude illness. Both of these happen
less
frequently, especially to those who are properly acclimatized. When they do
occur, it is usually the result of an increase in elevation that is too rapid
for the body to adjust properly. For reasons not entirely understood, the
lack of oxygen and reduced pressure often results in leakage of fluid through
the capillary walls into either the lungs or the brain. Continuing to higher
altitudes without proper acclimatization can lead to potentially serious,
even life-threatening illnesses.
HIGH ALTITUDE PULMONARY EDEMA (HAPE)
High altitude pulmonary edema results from fluid buildup in the lungs. The
fluid in the lungs interferes with effective oxygen exchange. As the
condition becomes more severe, the level of oxygen in the bloodstream
decreases, and this can lead to cyanosis, impaired cerebral function, and
death. Symptoms include shortness of breath even at rest, tightness in the
chest,
marked fatigue, a feeling of impending suffocation at night, weakness, and a
persistent productive cough bringing up white, watery, or frothy fluid
(University of Iowa, 1995). Confusion and irrational behavior are signs
that insufficient oxygen is reaching the brain. One of the methods for
testing for HAPE is to check recovery time after exertion. Recovery time
refers to the time after exertion that it takes for heart rate and
respiration to return to near normal. An increase in this time may mean fluid
is building up in the lungs. If a case of HAPE is suspected an immediate
descent is a necessary life-saving measure (2,000 - 4,000 feet). Anyone
suffering
from HAPE must be evacuated to a medical facility for proper follow-up
treatment. Early data suggests that nifedipine may have a protective effect
against high altitude pulmonary edema (University of Iowa, 1995).
HIGH ALTITUDE CEREBRAL EDEMA (HACE)
High altitude cerebral edema results from the swelling of brain tissue from
fluid leakage. Symptoms can include headache, loss of coordination (ataxia),
weakness, and decreasing levels of consciousness including disorientation,
loss of memory, hallucinations, psychotic behavior, and coma. It generally
occurs after a week or more at high altitude. Severe instances can lead to
death if not treated quickly. Immediate descent is a necessary life-saving
measure (2,000 - 4,000 feet). Anyone suffering from HACE must be evacuated
to a medical facility for proper follow-up
treatment.
CONCLUSION
Oxygen is critical to the functioning of the human body.
Thus the effect of decreased partial pressure of oxygen at higher altitudes
can be pronounced. Each individual adapts at a different speed to exposure to
altitude and it is hard to know who may be affected by altitude sickness.
There are no specific factors such as age, sex, or physical condition that
correlate with susceptibility to altitude sickness. Most people can go up to
8,000 feet with minimal effect. Acclimatization is often accompanied by fluid
loss, so the ingestion of large amounts of fluid to remain properly hydrated
is important (at least 3-4 quarts per day). Urine output should be copious
and clear.
From the available studies on the effect of altitude on the human body, it
appears important to recognize symptoms early and
take corrective measures. Light activity during the day is better than
sleeping because respiration decreases during sleep, exacerbating the
symptoms. The avoidance of tobacco, alcohol, and other depressant drugs,
including barbiturates, tranquilizers, and sleeping pills, is important.
These depressants further decrease the respiratory drive during sleep
resulting in a worsening of the symptoms. A high carbohydrate diet (more than
70% of your calories from carbohydrates) while at altitude also
appears to facilitate recovery.
A little planning and awareness can greatly decrease the chances of altitude
sickness. Recognizing early symptoms can result in the avoidance of more
serious consequences of altitude sickness. The human body is a complex
biochemical organism that requires an adequate supply of oxygen to function.
The ability of this organism to adjust to a wide range of conditions is a
testament to its survivability. The decreased partial pressure of oxygen with
increasing altitude is one of the conditions to which it can adapt.
Sources:
Electric Differential Multimedia Lab, Travel Precautions and Advice,
University of Iowa Medical College, 1995.
Gerking, Shelby D., Biological Systems, W.B. Saunders Company, 1969.
Grolier Electronic Publishing, The New Grolier Multimedia Encyclopedia, 1993.
Grollman, Sigmund, The Human Body: Its Structure and Physiology, Macmillian
Publishing Company, 1978.
Guyton, Arthur C., Physiology of the Human Body, 5th Edition, Saunders
College Publishing, 1979.
Hackett, P., Mountain Sickness, The Mountaineers, Seattle, 1980.
Hubble, Frank, High Altitude Illness, Wilderness Medicine Newsletter,
March/April 1995.
Hubble, Frank, The Use of Diamox in the Prevention of Acute Mountain
Sickness, Wilderness Medicine Newsletter, March/April 1995.
Isaac, J. and Goth, P., The Outward Bound Wilderness First Aid Handbook,
Lyons & Burford, 1991.
Johnson, T., and Rock, P., Acute Mountain Sickness, New England Journal of
Medicine, 1988:319:841-5
Langley, Telford, and Christensen, Dynamic Anatomy and Physiology,
McGraw-Hill, 1980.
Princeton University, Outdoor Action Program, 1995.
Starr, Cecie, and Taggart, Ralph, Biology: The Unity and Diversity of Life,
Wadsworth Publishing Company, 1992.
Tortora, Gerard J., and Grabowski, Sandra, Principles of Anatomy and
Physiology, Seventh Edition, Harper Collins College Publishers, 1993.
Wilkerson, J., Editor, Medicine for Mountaineering, Fourth Edition, The
Mountaineers, Seattle, 1992.
f:\12000 essays\sciences (985)\Biology\The Fabry Disease.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Fabry Disease
Classification
Fabry disease is a hereditary disorder caused by a lack of the enzyme alpha-galactosidase A. It follows an X-linked recessive pattern of inheritance; therefore it is females who carry it, while males are the ones most affected by the disease. Female carriers, though, may develop angiokeratomas and may have problems with burning pains, and very few of the carriers may also have kidney or heart problems. This disease occurs in 1 of 40,000 people.
Descriptions
A person with Fabry disease develops angiokeratomas, which are clusters of raised, dot-like lesions. Appearing during childhood or puberty in the genital and thigh areas, these angiokeratomas increase in size and number. Other symptoms of this disease are burning pains in the hands or feet, nausea, vomiting, abdominal pains, dizziness, headaches, and generalized weakness. Swelling of the legs, caused by the gathering of lymph, a yellowish body fluid, under the skin may also occur. The skin will show telangiectases, inflated intra-epidermal (intra - within, epidermal - outer layer) spaces filled with blood. Vessel walls without telangiectases contain deposits of glycolipids. These deposits are also found in the heart, muscles, renal tubules and glomeruli, central nervous system, spleen, liver, bone marrow, lymph nodes, and cornea. Retarded growth, delayed puberty, and ocular abnormalities are also common symptoms. These symptoms are mostly found in males because they display the full-blown syndrome, while females display a partial form.
Diagnosis
Diagnosis begins with a urine sample, which is the first place where abnormalities would be found. This is followed by blood, bone marrow, and ophthalmologic examinations. Prenatal diagnosis by way of amniocentesis or chorionic villus sampling is also available.
Prognosis
People affected by this disorder usually die by the age of 40-50 from kidney failure or cerebrovascular complications.
Treatment
Treatment is limited to relieving the pain of the symptoms. Researchers are working toward the possibility of replacing the missing enzyme.
Bibliography
The Encyclopedia of Genetic Disorder and Birth Defects, By James Wynbrandt and Mark D. Ludman, M.D., F.R.C.P.C.
f:\12000 essays\sciences (985)\Biology\The Genetics of Violence.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Genetics of Violence
Introduction
We, in the 1990's, are slowly and inevitably being faced with the sociological and biological implications of impending genetic power. This power is analytical, in such cases as the Human Genome Project, which will hopefully succeed in mapping out the genetic code for the entire human genetic composition. Moreover, this power is preventative and participatory in that it can be, and is being, used to control the behavior of humans and other animals. This new power, in the eyes of many, is as risky and potentially hazardous as atomic energy: it must be treated carefully, used under close supervision, and performed with professional consent and observation; otherwise, people will begin to see this new genetic power as a dangerous drawback, rather than an advancement of human culture.
One of the most highly contested and objectionable topics of genetic power is the analysis of crime, violence, and impulsivity. Doubtless, most will agree that children are not born with a natural affinity for violence and crime; yet new genetic studies are starting down a long road toward finding the hereditary basis for impulsivity. While these studies continue to search for the genetic source of aggression, child testing programs, drug manufacturers, civil rights activists, lawyers, and anxious citizens await the resulting testimony of the scientists. The social implications of the genetic search for aggressive tendencies are seen by some as a great step forward, by others as a dangerous power with the ability to give birth to another Holocaust, and by still others as racist.
At one time, it was believed that one's character could be determined from the bumps in one's skull. Much later, in the 1960's, as science marched on in its regular pace, it was theorized that carriers of an extra Y (male) chromosome were predisposed to criminality. Today, we are faced with the power to determine and alter one's character through genetics. We must collectively decide whether the ultimate price, not of money but of natural evolution, is worth the ultimate result.
Behavioral Genetics and Aggression
One day in 1978 a woman entered the University Hospital of Nijmegen, the Netherlands, with complaints regarding the men in her family. Many of the men seemed to have some sort of mental debility, including her brothers and her son. In time, a pattern of strange behavior of the men emerged: one had raped his sister, and, upon being institutionalized, stabbed a warden in the chest with a pitchfork; another tried to run over his boss in an automobile after he had criticized the man's work; a third had a regular habit of making his sisters undress at knife point, and two more were convicted arsonists. Additionally, the known IQ's of the men were typically around 85. The history of this sort of behavior was found to be typical, as nine other males in the family, tracing back to 1870, had the same type of disorder. It became evident that there was something wrong in the lineage of the family. Hans Brunner, a geneticist at the University Hospital, has been studying the family since 1988.
It was discovered that the men had a defect in a gene on the X chromosome that helps regulate aggressive behavior. Brunner was cued to the fact that the defect was on the X chromosome because the trait was passed on from mother to son, and none of the women, with two X chromosomes, were afflicted. The gene normally codes for the production of the enzyme monoamine oxidase A (MAOA), which breaks down three important neurotransmitters that trigger or inhibit the transmission of nerve impulses. One of these neurotransmitters is norepinephrine, which raises blood pressure and increases alertness as part of the body's "fight or flight" mechanism. Brunner believes that a failure to break down this neurotransmitter could cause an excess of chemical messages to the brain in times of stress, provoking the victims' fury. Analysis of the men's urine found extremely low levels of the breakdown products of the three neurotransmitters, the products that remain after MAOA has done its work.
Another of the chemicals is serotonin, which inhibits the effects of spontaneous neuronal firing, and consequently exerts a calming effect. The lack of this inhibitor is held responsible for the "Jekyll and Hyde" personalities of the afflicted men, and may be responsible for their low IQ's.
Over the course of four years, Brunner was the first to ever link and pinpoint a single gene to aggression. Also, he analyzed the X chromosomes of 28 members of the family, compiling sufficient evidence to prove his discovery. However, Brunner never studied the influence of a shared environment on the men.
Many other genetic and biochemical signals have been shown to greatly influence behavior. In humans, impulsive aggression has been linked to low concentrations of a chemical known as 5-HIAA in the cerebrospinal fluid. Scientists have found a human gene lying on chromosome 6 that creates a 25 percent higher susceptibility to schizophrenia. Also, MAOA has been found responsible for REM sleep deprivation in rats, which increases the incidence of fighting among the animals. Testosterone levels in repeat sex offenders are, almost without exception, extremely high. The National Research Council (NRC) reports that female mice and rhesus monkeys which have been injected with testosterone, in utero or at birth, repeatedly show more aggression at adulthood than others of their kind. Girls exposed to androgenic steroids in utero have an increased tendency to be more aggressive than their peers, whereas boys injected with anti-androgenic drugs were not as aggressive as their peers. The neurotransmitter gamma-aminobutyric acid has been shown not only to inhibit aggression but may stimulate the brain as well. This may be the reason that the IQ's of the afflicted Dutch men were so low. In any case, all of these chemicals, in a natural setting, are ultimately determined by the genetic composition of the individual, and ample evidence exists that instances of aggressive behavior and crime are closely related to genetics.
However, the relationships among environment, genetics, and aggression have not yet been worked out. Psychology and behavioral genetics, unfortunately, are not combined as they sensibly should be. We know that Brunner never studied the effects of the environment on the Dutch men; yet experimentation with animals has shown that, for example, aggressively bred mice can act non-aggressively if placed in the right social environment. Only through the study of social and environmental influences is "behavioral genetics" finally beginning to live up to the literal meaning of its name.
Parental Aggression and Genetics
While there is very little known about the combined effects of genetics and the environment, there is much to be said about the social tendency toward violence with regard to the genetics of offspring. For example, parents are 60 to 70 times more likely to kill their children under the age of two if they are not their genetic children. Fewer children are murdered by their stepparents as the age of the children increases, but, nonetheless, a much higher number of stepchildren are killed than genetic children.
Moreover, male animals in the wild, such as mice and monkeys, often kill the offspring of their mate if the offspring is the product of another liaison. In humans, tribal men in Venezuela and Paraguay simply refuse to feed the children of their wives if the children are from another union, or simply demand that the children be put to death.
Few conclusions can be derived from these tendencies. Certainly, in humans, the tendency to murder stepchildren cannot be determined to be purely genetic. One could say that the cause is primarily social, as stepchildren often come from broken families where there is likely more tension and parental hostility toward children. Nor can animals' desire to kill offspring of their mate that are not their own genetic children be fully explained. Whether the desire to kill non-biological offspring is based on biology, sociology, or simple emotion, this example displays the difficulty of pinning any sort of aggressive or criminal behavior to a gene. It is also an example of the difficulty of using social and genetic evidence together to track the source of any animal behavior.
Society and Genetics
Among the ten leading causes of death, violence kills more children than disease. In 1988, 8,150 US children between the ages of one and fourteen died; 840 of the deaths were clearly determined to be homicide and 237 were suicide. Homicide is the fourth leading cause of death for children between one and nine years old, and in the fifteen to twenty-four age group it is the second leading cause of death. Obviously, crime and violence do a considerable amount of damage to many American lives. Consequently, limited amounts of genetic and other biological research are being performed in order to find a genetic link, if any, to aggression resulting in violence and crime. In 1989, $20 million in funds were dedicated to violence research; 5% of those funds were allocated to the biology of violence. There is so much conflict over the use of funds dealing with the genetics of violence that the National Institutes of Health (NIH) funds no specific studies that attempt to link genes and violence.
In August of 1992, the NIH allocated $78,000 to fund a controversial conference in an effort to assess the social implications of the Human Genome Project. The support was immediately withdrawn after black political leaders and psychologists charged that the conference's agenda was racist. The main opposition to the conference came from the Black Caucus, who argued that the roots of crime lie in social causes, such as poverty, racism, and unemployment, and that these call for social solutions, not biological ones.
Finally, in September of 1995, some 70 biologists, criminologists, historians, and philosophers gathered at a remote conference center in the Chesapeake Bay region. It was an NIH-sponsored conference that had been carefully planned for over three years, made possible by $133,000 from the NIH. Some of the scientists contended that if genes mold physiology, then they must mold psychology, and thus antisocial behavior, including violent crime, must have a genetic component. Others at the conference pressed that evidence for a genetic link to crime is circumstantial and a "racist pseudoscience".
Behind the tensions that seemed to dominate the conference were the horrors of past eugenics: the early twentieth-century campaign in the United States, and later in Germany, to purify the human gene pool by sterilizing the "feeble-minded." The leaders of the eugenics movement in the United States, although they acted out of a sincere desire to build a better society, could do little when their ideas took root in Nazi Germany in the 1930's and soon fed into the Holocaust; this is where much of the tension and fear surrounding genetics stems from. One of the researchers, David Wasserman, a soft-spoken legal scholar, was shouting at the top of his lungs that, "There are a hell of a lot of people attending this conference who think the dangers of genetic research are as great in the long term as the dangers of atomic energy!" Many critics argued that the genetic studies are worse than inconclusive; they are racist and dangerous, as they generally fail to recognize social issues. William Schneider, an Indiana University historian, in a formal protest statement, wrote, "Scientists as well as historians and sociologists must not allow themselves to be used to provide academic responsibility for racist pseudoscience."
Flag-waving demonstrators, including self-described communists, members of the Progressive Labor Party, and representatives of Support Coalition International (an alliance of psychiatric survivors endorsing a program against psychiatric medication), stormed the auditorium and seized the microphones. A student from Rutgers University proclaimed, "You might think that you have the right to do the research that you are doing, but the bottom line is that it will be used to subjugate people." It took two hours to clear out the protesters and another eight hours to bring the proceedings to a close. A few researchers admitted that they had needed an eye-opener to see the social implications of behavioral genetics dealing with violence and crime, realizing that "Only historians have never had their results misused."
Other federal research agencies have proposed a variety of monetary packages to promote this research, and it is estimated that these funded projects will cost taxpayers as much as $50 million. However, this is not the main concern of the opponents of this research. It is assumed that very little is, at present, known about the human mind and its tendencies. Many believe that there is an over-reliance on drug therapy in psychiatry, and that genetic violence research is cloaking the real problem. For example, overwhelming numbers of black children with problems with violence and aggressiveness are sent to psychiatrists, where they are prescribed pacifying drugs such as Ritalin or Prozac. Many black leaders feel that it is impossible to believe that the genetic studies are not attempting to find a link between violence and race. The conference, while ultimately displaying the public's fear of genetic assessment and engineering, made little headway in determining the course of the future of genetic research with regard to crime. It was, however, a critical step in beginning to assess the risks and concerns, along with the positive aspects, of behavioral genetics.
Conclusion
Genetic research and engineering, like any other new technology, has to be put to use carefully, and in the right hands. It seems impossible to dismiss genetic research dealing with violence simply because it has the possibility of becoming dangerous and falling into the wrong hands. Like nuclear research, genetics can be used for many positive deeds and for the advancement of man. While I think that genetic research dealing with violence could have many positive aspects, it seems necessary to perform genetic research on all varieties of people: criminals, white-collar businessmen, the White House staff, and used car salesmen. Criminals cannot be singled out as the group that needs "healing"; genetic research can ultimately benefit all people, and therefore it must be performed on a variety of people. I, like many others, fear and foresee a day when, with the widespread use of psychotherapeutic drugs such as Prozac and Ritalin, designer drugs are used by all in order to help them deal with society. This is, personally, the most frightening possibility resulting from behavioral genetic research.
A time will never come when all are avid proponents of genetic engineering for the betterment of society. People need to decide for themselves whether research should continue, and to what degree. In the end, it will be the common people who decide the course of genetic research, not the scientists. And, in the event of genetic developments, it should not only be the personal decision of each individual how they will use the new development, but also the individual's responsibility to form a solid opinion of their moral, ethical, and biological feelings regarding the employment of behavioral genetics in the future.
Bibliography
Brunner, H. G., et al., Abnormal Behavior Associated with a Point Mutation in the Structural Gene for Monoamine Oxidase A, Science, Vol. 161, 22 October 1993.
Goldberg, Jeff, The Bad Seed: Amid Controversy, Scientists Hunt for the "Aggression" Gene, Omni, Vol.17, Iss. 5, February 1995.
Hilts, Philip J., Evolutionists Take the Long View on Sex and Violence, Science, Vol 261, 20 August 1993.
Holden, Constance, NIH Kills Genes and Crime Grant, Science, Vol 260, Iss. 5108, 30 April, 1993.
McBeath, Michael K., Genetic Hint to Schizophrenia, Nature, Vol 340, No. 6321, May 13, 1995.
Oberbye, Dennis, Born to Raise Hell, Time, Vol. 143, Iss. 8, 21 February, 1994.
Palca, Joseph, NIH Wrestles with Furor over Conference, Science, Vol. 257, Iss. 5071, 7 August, 1992.
Richardson, Sara, Violence in the Blood, Discover, Vol. 355, No. 4553, October 1993.
Roush, Wade, Conflict Marks Crime Conference, Science, Vol. 269, Iss. 5232, 29 September, 1995
Stone, Richard, HHS 'Violence and Initiative' Caught in a Crossfire, Science, Vol. 258, Iss. 5080, 9 October, 1992.
Stephens, Jane Ellen, The Biology of Violence, Bioscience, Vol. 44, Iss. 5, May 1994.
f:\12000 essays\sciences (985)\Biology\The Grasslands.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
" The Grasslands "
Picture yourself being able to see from horizon to horizon. The land is flat, and covered with different kinds of crops and small bunches of trees. You can see a village near the river. Most houses are made of brick, with some being wood. Power lines run up and down the street.
Close your eyes and the scene changes to a less familiar place. The land is flat with some steep hills nearby. In this scene, instead of brick and wood houses you see houses made out of dung. The ground is dry and barely alive.
Now close your eyes and imagine yet another scene. The sky is almost the only thing you see with gentle rolling hills all around you. Even rows of wheat stretch into the distance. You are near a white picket fenced farm with big cottonwoods shading it from the scorching sun.
You have just visited a collective farm in the Soviet Union, a Masai village in Africa, and Abilene, Kansas, in the U.S. These three places are part of the world's mid-latitude grassland region. Grasslands are usually found in the interior parts of most continents. The world's grasslands are vast areas covered with grass and leafy plants. They generally have a dry climate and sparse vegetation; most grasslands receive only about twenty to thirty inches of rain each year, with most of it coming in the same season, though some grassland areas may receive up to thirty to forty inches of rain a year. For example, since the grasslands of the United States have hot summers and mild winters, most of the rain comes from summer thunderstorms. With this limited amount of rain, only grasses and shrubs can grow, but some grassland areas have enough rain to support a few trees such as cottonwood.
With this kind of climate and vegetation, it is no wonder that grasslands have low human population densities. Because so few people live in this kind of environment, traveling from one part of the grassland to another is time-consuming as well as difficult.
The wildlife in the grasslands is diversified and plentiful. Since the grasslands are full of grasses and shrubs, countless animals inhabit them to graze on the dense foliage, and some animals migrate to the grasslands for temporary lodgings. The resident wildlife must be adapted to distinct wet and dry seasons, temperature extremes, drying winds, and prolonged droughts, and usually migrates in search of food and water.
These animals include:
1. Pronghorn 2. Rabbits 3. Rodents 4. Coyotes
5. Bobcats 6. Badgers 7. Snakes 8. Lark Bunting
9. Meadowlarks 10. Plovers 11. Hawks 12. Owls
13. Ducks 14. Geese 15. Coots 16. Bison
17. Elk 18. Mountain Lions 19. Wolves 20. Prairie Dogs
To be able to stay in the grasslands for any period of time, these animals have had to undergo some adaptations. Here are just a few of them:
1. Prairie Dogs- Very small, living in burrows, prairie dogs often travel in
large groups, so as to defend one another from the many
predators and the threat of invaders entering the burrows.
2. Mountain Lions- The mountain lion has learned to hunt at night. It has
learned to climb well, is an excellent jumper, and has
learned the technique of surprising prey by dropping
from tree limbs onto its prey.
3. Bobcats- Learned to live on rodents and rabbits, which thrive in the
grasslands. Are small, so that they can have long periods of no
food after only one meal, since meat in the grasslands is met with
fierce competition.
4. Pronghorn- Their brownish fur lies flat as an insulator in cold weather and
springs erect to cool the skin in summer. One of the fastest of
New World mammals, it can run up to 72 km/h (45 mph),
which is essential for evading predators.
5. Rodents- Learn to live in burrows and search for food at night during hot
summers, search for food daytime during mild winters. Learn to
stay hidden in shrubs, in fear of air predators.
6. Plovers- These birds have stout bodies with a short neck and tail. Bills in
most species are short and stout and are swollen at the tip. Many species also have bands or rings around the neck. Plovers are swift in flight and forage actively on the ground or in shallow water for insects.
And while there are many species of animals in the grasslands, there is even a greater abundance of, of course, grasses. Here are just a few types of grasses:
1. In the wetter plains: Blue Stems, Indiangrass, Switchgrass, Needlegrass
2. In the drier plains: Grama Grasses, Buffalograss
While the grasses of the grasslands are the dominant plant life, there are some flowers about in the grasslands, although they are few and far between. Members of the sunflower and legume plant families provide the grasslands with the largest number of colorful flowers.
Although the grasslands are beautiful and abound with wildlife, human influence has lessened some of the beauty and tranquillity that was there before humans decided to settle in the grasslands. Since humans occupied some of these grasslands, several species of plants and animals have been threatened with extinction. Their once abundant habitat is being used for cropland, grazing of livestock, and living space, which has put considerable pressure on grassland life.
This overgrazing reduces vegetative cover, and with prolonged drought, desert conditions are easily formed and spread. Overgrazing also allows unpalatable species and nonnative weeds to take hold; as grazers avoid these, they begin to dominate the area and crowd out the original species. Many once fertile grasslands are being wiped out.
The grasslands are one of the many biomes of our Earth's biosphere. They are beautiful in their own way, and the wildlife contained therein helps to magnify the beauty of the rich grasslands. These grasslands are teeming with animals and plants, and the soil is very rich and fertile. This, unfortunately, is what draws humans to inhabit the grasslands. Humans are slowly destroying the grasslands and all of the wildlife contained therein. If the grasslands are destroyed, a whole chunk of the Earth will go with them. So, in closing, you must remember to keep the beautiful grasslands alive. If they go, we will soon be next.
f:\12000 essays\sciences (985)\Biology\The Human Brain vs the Computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Human Brain vs. the Computer
Over the millennia, Man has come up with countless inventions, each more ingenious than the last. However, it is only now, as the computer arises, that mankind's sentience itself is threatened. Ridiculous, some may cry, but I say look about you! The computer has already begun to hold sway over so many of the vital functions that man has prided himself upon before. Our lives are now dependent upon the computer and what it tells us. Even now, I type this essay upon a computer, fully trusting that it will produce a result far superior to what I can manage with my own two hands and little else.
It has been commonly said that the computer can never replace the human brain, for it is humans that created it. Is this a good reason why the computer must be inferior to humans? Is it always true that the object cannot surpass its creator? How can this be true? Even if we just focus on a single creation of man, say the subject of this essay, the computer, there are many ways in which the computer has the edge over man. Let us start with basic calculation. The computer has the capability to evaluate problems that man can hardly even imagine, let alone approach. Even if a man can calculate the same problems as a computer, the computer can do it far faster than he can possibly achieve. Let us go one step further. Even if this man could calculate as fast as a computer, could he, as the computer can, achieve a 100% rate of accuracy in his calculation? Why do we now go over the human data entry into a computer when a mistake is noticed instead of checking the computer? It is because computers now possess the ability to hold no error in their operation, whereas mankind has not advanced in this area by any noticeable margin. Why do you think the phrases 'human error' and 'to err is human' have become so popular in recent years? It is because the failings of the human race are becoming more and more exposed as the computer advances and becomes more and more powerful.
Perhaps the computer is not truly a competitor with the human brain but rather its ideal. After all, the computer is far superior to the human brain in those aspects where the brain is weakest. It is perhaps the attempt of the human brain to attain perfection after realising its own weaknesses. If you think about it carefully, do those who use the computer not use it to supplement their own creative input? Maybe it is a subconscious attempt by us at reaching the next stage of evolution of our minds, creating a machine to do all the dirty work for us while we sit back and allow our brains to focus on creating, or destroying, as the case may be. This machine is the compensation for the human brain's weaknesses.
The human brain has flaws in abundance, yet it also has many an edge over the computer. It has the capacity to create, unlike the computer, and it can work without full input, making logical assumptions about problems. A person can work with a wide variety of methods, seeing new, more efficient ways of handling problems. It can come up with infinite ways of getting around problems encountered in day to day life, whilst a computer has a limited repertoire of new tricks it can come up with, limited by its programming. Should improved programming be introduced, it is the human brain that figures out the programming that will allow leeway for any improvements as vaguely conceived by the human brain. It is the human brain that conceptualises the formulae and methods by which the computer goes about its work. The human brain, given the time, can learn to understand anything, it can grasp the central concept of any concept, whilst the computer tends to take all things in their entirety, which makes some proble
Emotions too are an asset. Emotions allow the human brain to have evolved beyond a problem-solving machine. In truth, one characteristic of sentience, as we know it, is emotional maturity! Even a one-year-old baby knows infinitely more about emotions than the most sophisticated computers. Emotions open the mind to vast, new realms of possibilities. The reason why computers cannot create is their lack of emotions. Anger allows the imagination to roam, inventing concepts of new, ever more powerful weapons of destruction. Discontent induces the mind to conceive of new methods of fulfilment that could be expanded into something more. Puzzlement causes the mind to think of solutions. Curiosity leads to attempts to satisfy it, producing new discoveries and revelations.
The computer, on the other hand, though lacking in many aspects, is clearly superior in many others. In sheer speed of computation and retrieval of data, the computer is obviously by far the stronger. It has the capacity to handle things on a far grander scale than the human brain could ever conceive. Its capacity to organise is massively improved compared to the human brain's. Measurements, results, and applications can all be handled down to the tiniest detail, far beyond the human brain's capabilities. Calculations can be done with an accuracy nearly impossible to achieve manually. A certain uniformity can be achieved in its functions, something a human can hardly hope to match.
The human brain has many flaws just as it has advantages. The random mindset of the human brain allows many mistakes to be made. Though the potential is technically there, this potential is never realised; I refer to the potential to compute and store memories as efficiently as, or even more efficiently than, a computer. If potential cannot be realised, it is useless, and the true capability of the object is its present capability. The human brain can never perform tasks as efficiently or as tirelessly as the computer. This is because the human brain gets bored quite easily and tends to stray from the task at hand. The computer does not get tired or bored; it just sits there and works, no problems. The human brain is a constant: its ability has not changed at any point in recorded history, only the knowledge of man has changed, and this knowledge is invested in the computer anyway. The computer, by contrast, has altered drastically for the better in an incredibly short period of time.
It has had improvements added to it almost non-stop, growing from a simple calculation device into a marvel of modern science, whilst the human brain cannot do anything but stay there, not changing, not improving. Emotions, too, can be a liability as well as an asset. Emotions make the mind dangerously unstable, its performance subject to moods and emotional disruption. The computer suffers no such problems. The human brain is easily stressed by events and loses effectiveness when tired. Emotions blur the human brain's capacity to make clear, logical decisions, even when the facts are thrown before its eyes, and impair problem-solving capabilities. Age also has a devastating effect on the function of the human brain. Once senility sets in, the brain is of little use to anyone, and the person becomes a liability.
Computers are far from perfect themselves. Computers have only a limited capacity for learning, and even this is usually not entirely accurate, for the computer lacks the common sense of the human brain and thus cannot reliably recognise its own mistakes, if any. For example, a computer may send a $10 million tax bill to a person earning $30,000 a year and not blink an eye, for if there is a bug in the program, it cannot go in by itself and change it. It would not even realise that it was making a mistake until a human spots it and corrects it. Also, a computer cannot create, for creation requires curiosity and the capacity for independent thought, which is something the computer will not have, at least in the near future. Lacking the ability to create, it cannot truly pose a threat to mankind, but once it does acquire this ability, it will then be set to take over from the human brain.
The human brain is as incredible as it is flawed, whilst the computer is a fantastic machine, but seriously lacking in many aspects. While neither is perfect on its own, together they complement each other so perfectly that it is a heck of a potent combination. (1436 words)
f:\12000 essays\sciences (985)\Biology\The Kidney.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Kidneys
In vertebrates, kidneys are the two major organs of excretion. Excess water and toxic waste products of metabolism, such as urea, uric acid, and inorganic salts, are disposed of by the kidneys in the form of urine. Kidneys are also largely responsible for maintaining the water balance of the body and the pH of the blood. Kidneys play important roles in other bodily functions as well, such as releasing the hormone erythropoietin and helping to control blood pressure.
Kidneys are paired, reddish-brown, bean-shaped structures. They are about eleven centimeters long. Kidneys are located on each side of the spine, just above the waist. They are loosely held in place by a mass of fat and two layers of fibrous tissue. It is believed that the kidney first evolved in the earliest vertebrates, freshwater organisms that needed some means of pumping excess water out of the body. The kidney became adept at reabsorbing glucose, salts, and other materials which would have been lost had they simply been pumped out of the body by a simpler organ.
The cut surface of the kidney reveals two distinct areas: the cortex, a dark band along the outer border about one centimeter in thickness, and the inner medulla. The medulla is divided into 8 to 18 cone-shaped masses of tissue named renal pyramids. The apex of each pyramid, the papilla, extends into the renal pelvis, through which urine is released from the kidney tissue. The cortex arches over the bases of the pyramids (cortical arches) and extends down between each pyramid as the renal columns.
Urine passes through the body in a fairly complex way. The initial site of urine production in the body is the glomerulus. The arterial blood pressure drives a filtrate of plasma containing salts, glucose, amino acids, and nitrogenous wastes such as urea and a small amount of ammonia through the glomerulus. Proteins and fats are not filtered; they remain in the normal bloodstream. The filtered plasma is now called glomerular filtrate. One hundred to one hundred forty milliliters of this filtrate are formed each minute!
The filtrate passes along a convoluted tubule. The majority of the water content and some of the dissolved materials are reabsorbed through the walls of the tubule and back into the blood. Water, sodium, chloride, bicarbonate, and all of the glucose are reabsorbed into the bloodstream, yet products such as urea and ammonia remain in the tubule. During the final stage of the passage process, most of the remaining filtrate is selectively reabsorbed until only about one percent of the original filtrate is to be excreted as urine.
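To put those figures together (a rough, illustrative calculation rather than a quoted one): at roughly 120 milliliters of filtrate per minute, the kidneys form on the order of 170 liters of glomerular filtrate per day, and if only about one percent escapes reabsorption, that works out to roughly 1.5 to 2 liters of urine daily, which matches typical daily urine output.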
Urine is eventually collected in the renal pelvis, a funnel-like structure contained inside each kidney. The urine then passes into a hollow tube, called the ureter, which is forty to forty-five centimeters long. The ureter extends downward, emptying into the urinary bladder. A single tube, called the urethra, eventually carries urine out of the bladder.
When excessive amounts of fluid are lost from the body, or when the blood pressure of the body falls below normal, the kidneys release the enzyme renin into the blood. This enzyme promotes the formation of angiotensin. Within minutes, the angiotensin causes vasoconstriction. Vasoconstriction raises blood pressure, and stimulates the secretion of aldosterone, eventually bringing the body's fluid levels to equilibrium.
The kidney is an extraordinary organ. Without its processes, human life would be virtually impossible.
f:\12000 essays\sciences (985)\Biology\The Koala.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Koala
Introduction
The koala is the Australian jewel. It has very furry, ash-colored
hair, a rubbery black nose, sharp claws, fuzzy ears, and a grizzly
personality, or should I say, koalality. Kill a koala and you could make a
million off its fur; the fur would be sold to coat companies and made into coats. Sadly enough, too many people have been making
millions on koalas. That's why they're an endangered species.
Habitat
The koala is distributed along the eastern coastal semi-tropical
forests of Australia, ranging from north Queensland through New South
Wales and Victoria to a small area of South Australia.
Breeding
The male koala and the female koala have two very different
mating calls. While humans reach sexual maturity at around the age of 14 or 15, koalas reach theirs at the age of two. The female produces one baby every other year. The koala almost never produces twins. In the female's pouch, there are two nipples. The female koala gives birth after a 20-35 day gestation period. When the joey (baby koala) is born, it is no longer than 2 cm and weighs no more than half a gram. The joey stays in its mother's pouch for 5-7 months. The term "joey" is used when you are talking about a baby marsupial. The mother gives "pap" to the joey, a liquid from the caecum (an organ similar to the human appendix). This is thought to give the joey the ability to eat only eucalyptus leaves. When the joey emerges from the pouch, it clings to its mother for another seven months. The joey stays with its mother for another three or four years, until it is fully grown.
Diet
Koalas eat eucalyptus and don't drink water. I guess that's how they
got their name. In the aboriginal language, "koala" means "no drink
water." The koala does, however, drink water when it is ill. Out of the 350
species of eucalyptus, the koala can eat only 20, will usually eat only 5, and
prefers a certain one.
Population and Extinction
Since the koala population has dropped by 50% since the turn of
the century, the Australian government passed a law banning anything
harmful to the koalas. At first, in the 1920's, they were killed for
their fur. Then, somehow, a high percentage of them became infected
with a very contagious disease, Chlamydia psittaci. Chlamydia psittaci
causes blindness, pneumonia, and, in females, sterility. The disease has slowed down a lot since it was first introduced to the koala population, but it is still going around (Chlamydia psittaci is, strictly speaking, a bacterium rather than a virus). The koala population is also still falling due to destroyed habitats. Developers are coming into koala habitat, cutting down eucalyptus, selling it and building homes. By the 2030's, the koalas will have no place to live. Now, sadly, in Sydney, koalas must cross the street to get to another eucalyptus tree. They have to move to another eucalyptus tree every once in a while because a single tree cannot feed them indefinitely. A lot of koalas are now becoming roadkill.
General Information
The koala is a one-of-a-kind animal. It is the only one of its kind.
There are two sub-species of koala, though, between which there is the
tiniest difference. The southern one has darker fur than the northern because it is colder in the south. The koala has only one relative, the wombat. Koalas and wombats share a common ancestor from some 25 million years ago on Gondwana (an ancient landmass that separated to make South America, Africa, and Australia). Most people think that the koala is related to bears; that is not true. The koala is an arboreal (tree-dwelling) marsupial. Most people also think that the koala is not harmful; that's wrong too. The koala is more like a grizzly bear than a teddy bear: it is dangerous because of its extremely sharp claws. Its scientific name is Phascolarctos cinereus, meaning "ash-colored pouched bear." An adult four-year-old koala eats about 1.3 kg (3 lb) of eucalyptus per day. It weighs about 13.6 kg (30 lb) and is 60-85 cm (24-33 in) long. The koala sleeps a lot and is nocturnal; that's why the koala exhibit at the zoo is always boring. The eucalyptus that koalas eat is very important to them: changing to a different forest can be fatal, because they can usually eat only one species of eucalyptus.
Help the Furballs!
There are lots of ways that we can save the koalas. First of all, we
can stop cutting down the eucalyptus trees. Second of all, BRAKE FOR
KOALAS! Third, there are hundreds of save-the-koala associations worldwide.
Another reason koalas are dying is fires. Most of these fires come from careless smokers. If you go to Australia one day and you see someone smoking, tell them to make sure that they put it out. An association in South Queensland is building a 24-hour koala hospital. If you want more information on the Save the Koala Campaign, visit http://onthenet.com.au/~jbergh/koala1.htm. You can write the Australian Koala Hospital Association, Inc. at jbergh@onthenet.com.au or snail mail them at The Australian Koala Hospital Association, Inc., P.O. Box 2379, Nerang Mail Centre, QLD 4211 AUSTRALIA. A lot of people are also protesting against the developers that knock down the habitat of koalas.
Bibliography
Gaynor, Beth. Columbus Zoo: The Animals: Koala, http://ourworld.compuserve.com/homepages/BGaynor/docent.htm (Columbus: compuserve.com)
Encyclopædia Britannica, 1992 ed., 6:922, 23:357, 8:642-3. (Chicago: Encyclopædia Britannica, Inc.)
Bahr, Lauren S., ed., & Johnston, Bernard, ed. in chief. Collier's Encyclopædia, 1992 ed., 3:253, 14:129. (New York: Macmillan Educational Company)
Encyclopædia Americana, 1996 ed., 16:526, 18:371. (Danbury: Grolier, Inc.)
Harris, William H., ed., & Levey, Judith S., ed. The New Columbia Encyclopædia, 1975 ed., p. 1491. (New York: Columbia University Press)
Bergh, John. Australian Koala Hospital Association, Inc.: Koala Facts Sheet, http://onthenet.com.au/~jbergh/koala2.htm (Sydney: onthenet.com)
Bergh, John. Australian Koala Hospital Association, Inc.: Koala Facts Sheet, http://onthenet.com.au/~jbergh/koala4.htm (Sydney: onthenet.com)
Bergh, John. Australian Koala Hospital Association, Inc.: Koala Facts Sheet, http://onthenet.com.au/~jbergh/koala1.htm (Sydney: onthenet.com)
World Book Encyclopædia, 1996 ed., 11:361 (Chicago: World Book, Inc.)
Payne, Oliver. "Koala: Out on a Limb," National Geographic Magazine, April 1995. (Washington, D.C.: National Geographic Society Press)
Academic American Encyclopædia, 1994 ed., 12:103. (Danbury: Grolier, Inc.)
f:\12000 essays\sciences (985)\Biology\The Lymphatic System.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Lymphatic System
The Lymphatic System is very important. It helps
with the cardiovascular system and our immune system.
The Lymphatic System is made up of two semi-independent
parts. One is a network of lymphatic vessels. The other
part is various lymphoid tissues and organs all over the
body. The functions of the Lymphatic System are transporting
fluids that have escaped from the blood vascular system
and housing phagocytic cells and lymphocytes in its organs.
Lymphatic vessels are an elaborate system of drainage
vessels that collect the excess protein-containing fluid and
return it to the bloodstream. Once interstitial fluid
enters the lymphatics it is called lymph. The lymphatic
vessels form a one-way system in which lymph flows only
towards the heart. This entire transport system starts in the
lymph capillaries. These are very common, usually occurring
in the places blood capillaries occur. Lymph capillaries are
not found in bone, teeth, bone marrow, or the central
nervous system. Lymphatic capillaries are very permeable.
The endothelial cells that make up the walls of the
capillaries are not tightly joined. Filaments anchor the
endothelial cells so they can expand. Pathogens can
spread through the body through the lymphatic stream.
There are many cells in the lymphoid tissue. One type
is lymphocytes, which are often referred to as T or B cells.
Plasma cells are antibody-producing offspring of B cells.
Macrophages are phagocytes that help out with immunity.
Reticular cells are cells that form the lymphoid tissue
stroma. These cells are very important parts of the immune
system.
The Lymphatic System also contains tissues. The
tissue of the Lymphatic System is reticular connective
tissue. It holds the macrophages and changes the number of
lymphocytes. It is an important part of the immune system.
The Lymphoid tissue can be found in the follicles.
Lymphoid organs are discrete and encapsulated. The main
lymphoid organs are the spleen, tonsils, thymus and lymph
nodes.
The lymph nodes are placed along the lymphatic
vessels. Each node has a fibrous capsule, a cortex, and a
medulla. The lymph nodes circulate fluids. The lymph
enters the lymph nodes through afferent lymphatic vessels
and exits through the efferent vessels. (afferent=enter,
efferent=exit)
Most lymphoid organs contain both macrophages and
lymphocytes. The spleen is a place for immune function,
and it kills defective or aged red blood cells and
blood-borne pathogens. The spleen also stores platelets,
products of hemoglobin, and acts as a hematopoietic site in
the fetus. The thymus produces hormones and is mostly
functional in youth. Peyer's patches in the intestinal wall,
together with the tonsils, the lymphatic nodules of the appendix
and the nodules of the respiratory tract, make up the
mucosa-associated lymphoid tissue (MALT).
The many functions of the lymphatic system help the
body to maintain body homeostasis. Some of the functions
are removing foreign matter, maintaining correct blood
volume, and supporting immunity. I never really
realized that the lymphatic system had so many functions.
It seems to be a very important part of the body. The
immunity part is very important.
f:\12000 essays\sciences (985)\Biology\The Mitochondrion.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Mitochondria are the power plants of a cell. They are structures within a cell that turn nutrients into a simpler, usable form of chemical energy for the cell.
These sausage-shaped organelles were perhaps not originally true organelles, but more like parasites that invaded primordial cells and evolved along with them. A mitochondrion's main purpose is to burn nutrients through a slow form of combustion, which consumes oxygen much as a fire does, in order to break them down into simpler substances. The energy released by this breakdown is captured by enzymes in the mitochondrion, which use it to bond a phosphate group onto a carrier molecule, forming ATP, or adenosine triphosphate. ATP has an adenosine core and three phosphates attached to it, hence its name.
These phosphates store the new energy. ATP can travel throughout the cell freely, which allows the stored energy to be distributed evenly in the cell. Other organelles find the ATP and break off the phosphates full of ready-to-use energy. Once the molecule has been stripped of a phosphate, it travels back to the mitochondrion to be reloaded with a new one.
f:\12000 essays\sciences (985)\Biology\The Plague.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Since the reign of Emperor Justinian in 542 A.D., man has had one unwelcome organism along for the ride: Yersinia pestis. This is the bacterium more commonly known as the Black Death, the plague. Plague is divided into three biotypes, each associated with one of the three major pandemics occurring in history. Each of these biotypes is then divided into three distinct types, classified by method of infection.
The most widely known is bubonic, an infection of plague that resides in the lymph nodes, causing them to swell. The Black Death of the 14th century was mainly of this type. Bubonic plague is commonly spread through fleas that have made a meal from an infected Rattus rattus. In the American and Canadian west, from Texas and Oklahoma in the east to the Pacific Ocean in the west, it is most often transmitted from species of squirrels. The last occurrence of transmission from rats to people, or people to people, in the United States occurred in 1924 in Los Angeles. In that epidemic there were 32 cases of pneumonic plague with 31 fatalities. Since then there have been around 16 cases a year in the United States, most connected with rock squirrels and their common flea, Oropsylla montana.
The most dangerous type of plague is pneumonic. It can be spread through aerosol droplets released through coughs and sneezes, or through fluid contact. It may also develop as a secondary result of a case of untreated bubonic or septicemic plague. Although not as common as the bubonic strain, it is more deadly. It has an untreated mortality rate of nearly 100%, as compared to 50% untreated mortality for bubonic plague. It attacks the respiratory tract, furthering the cycle.
The third type of plague is septicemic. It is spread by direct bodily-fluid contact. It may also develop as a secondary result of untreated bubonic or pneumonic plague.
A LITTLE HISTORY
As mentioned before, the most known incidence of bubonic plague was in 14th century Europe. In 1346 reports of a terrible pestilence in China, spreading through Mesopotamia and Asia Minor had reached Europe, but caused no concern until two years later. In January of 1348 the plague had reached Marseille in France and Tunis in Africa. By the end of the next year the plague had reached as far as Norway, Scotland, Prussia, Iceland, and Italy. In 1351 the infection had spread to include Russia.
The plague was an equal-opportunity killer. In Avignon nine bishops were killed, King Alfonso XI of Castile succumbed, and peasants died wherever they lay. Though the plague had, for the most part, ceased less than ten years after it started, it killed nearly one third of the European population. In many towns the dead outnumbered the living. Bodies piled in the streets faster than nuns, monks, and relatives could bury them. Many bodies were interred in mass graves, overflowing with dead, or dumped into nearby rivers. Domesticated cats and dogs, along with wolves, dug the dead out of shallow graves, and sometimes attacked the still living. Many animals died either from plague or from lack of care. Henry Knighton noted more than 5,000 dead sheep in one field alone.
The death of a very large portion of the work force aided those that were still living. The sheer scarcity of workers enabled the remainder to make demands for higher wages and better conditions. Farms located on poor soil were abandoned because the demand for grain had decreased, enabling fewer farms, located on the better tracts of land, to feed the population.
f:\12000 essays\sciences (985)\Biology\The process of mitosis .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The process of Mitosis
---------------------------------------
Mitosis is the term used to describe cell division for replication. The product at the end of mitosis is two daughter cells both genetically identical to the original (parent) cell. This process (mitosis) is used for growth and repair within an organism (and also for asexual reproduction).
There are five main stages to mitosis, called Interphase, Prophase, Metaphase, Anaphase and Telophase. Although the process has been divided up into these stages the process of mitosis is actually continuous.
Interphase
---------------------------------------
In this, the first stage, the cell will look just like any other 'normal' cell, although this is far from the case because a great deal is actually happening. All cell organelles are being produced in quantity and the chromosomes (DNA molecules) are being copied exactly. The two identical copies of DNA are called a "pair of chromatids" and they are linked together by a structure called a "centromere". During this stage a store of ATP is also built up.
[Best put a labelled diagram of a cell during Interphase here.]
Prophase
---------------------------------------
In this second stage changes to the cell become visible. The chromosomes condense, coiling up to about 5% of their original length, now clearly visible when a stain is added. The centrioles move to the opposite poles of the cell and small microtubules around the centrioles become visible (called "Asters"). The nuclear membranes and nucleolus disintegrate after passing their nucleic acids to certain pairs of chromatids. Now a spindle forms, this is also made out of microtubules.
[A labelled diagram of the end of the Prophase stage of a cell here would be great.]
Metaphase
---------------------------------------
During this stage the chromosomes move towards the equator of the spindle, attaching themselves horizontally by the centromere to the spindle's filaments. The chromatids then pull slightly away from each other at the centromere towards the opposite poles of the cell.
[A labelled diagram of Metaphase here, and put a note next to it saying "Note that some spindle fibres run from pole to pole while others from pole to equator."]
Anaphase
---------------------------------------
Now this stage is very quick. The pairs of chromatids are separated and each chromatid is pulled towards its opposite pole by the spindle fibres by a ratchet-like mechanism. This process requires energy, so the ATP store is now used up.
[A labelled diagram of Anaphase. Write a note underneath saying "They split apart by the centromere breaking into two. Each centromere divides into two so that each chromatid has its own centromere."]
Telophase
---------------------------------------
The chromatids are destined to become the new chromosomes of the daughter cells. Once the chromatids are at the poles of the cell they unwind into chromatin again, now becoming hard to see once again.
The spindle fibres now disintegrate and new nuclear membranes form around the new groups of chromatin, making two new nuclei. The centrioles also replicate and distribute themselves evenly around each nucleus. Now the cell is ready to divide.
[Draw a diagram of Telophase stage.]
Mitosis is now over.
The cell is now ready to divide, by the process known as "Cytokinesis".
f:\12000 essays\sciences (985)\Biology\The Relationship between yeast fermentation and food concentr.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In this experiment different concentrations of sucrose were tested to determine which leads to the most respiratory activity in yeast. Yeast is a heterotrophic anaerobic fungus which lacks chlorophyll. Yeast is used commercially to ferment the sugars of wheat, barley, and corn to produce alcohol, and in the baking industry to raise or expand dough. Yeast or alcoholic fermentation is the anaerobic process of respiration by which sugars, such as glucose and sucrose, are converted into ethanol and carbon dioxide (CO2 ). This process is illustrated in the following equation:
yeast
C12H22O11 + H2O ---> 4 CH3CH2OH + 4 CO2
sucrose + water (yields) ethanol + carbon dioxide
In order to determine what concentration of sucrose and water leads to the most respiratory activity, ten large test tubes were set with different concentrations by the process of serial dilution. The first test tube was filled with 40 ml of 60% sucrose solution. Then, the nine remaining test tubes were serially diluted, so that the sucrose concentration ranged from 30% to 0.12%.
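To make the dilution series concrete, here is a minimal sketch in Python (my own illustration, not part of the original procedure) of how a repeated two-fold dilution starting from the 60% stock produces the concentrations described above:

# Two-fold serial dilution: each tube after the first holds half the
# sucrose concentration of the tube before it.
stock = 60.0            # percent sucrose in the first large test tube
tube_count = 10         # total number of large test tubes

concentration = stock
for tube in range(1, tube_count + 1):
    print(f"tube {tube}: {concentration:.2f}% sucrose")
    concentration /= 2.0
# Tubes 2 through 10 come out at 30%, 15%, 7.5%, ... down to about 0.12%,
# matching the 30% to 0.12% range reported above.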
The hypothesis in this experiment was that the most respiratory activity would take place at the 60% sucrose concentration. Since yeast fermentation requires both sucrose and water, approximately equal proportions of the two were expected to yield the most respiratory activity.
Once the sucrose concentration was serially cut to the desired level, the experimenter added 5 ml of yeast suspension to each one of the ten test tubes.
Then, ten small test tubes were inverted and placed one into each of the large test tubes, making sure no air bubbles remained within the small tubes.
The test tubes were left for 24 hours to allow fermentation to take place, but no respiratory activity was detected.
In previous experimentation, it was found that yeast fermentation did take place in different molasses concentrations. Since molasses contains large quantities of sucrose, it was assumed that different concentrations of pure sucrose would yield similar results when mixed with yeast. However, this was not the case.
The probable explanation is that in order for fermentation to take place, an enzyme is needed to break down sucrose, a disaccharide, into glucose and fructose, which are monosaccharides. This enzyme is present in molasses, but it is absent in the pure sucrose solution.
The Relationship Between Food Concentration, and Respiratory Activity
September 25, 1996
Bibliography
1) Encarta Encyclopedia, CD-Rom Edition, Microsoft, 1994.
2) Biological Science, Prentice Hall, 1983.
3) Grolier Encyclopedia, CD-Rom Edition, Grolier Publishing, 1995.
f:\12000 essays\sciences (985)\Biology\The Results Of Aging.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE RESULTS OF AGING
Prepared for
Ms. Ferguson
by
Mark Trolley
Abstract
This report presents several aspects of aging. The report looks at a number of theories of why we
age, the physical and mental changes we undergo as we age, and several ways of caring for the
elderly.
March 7, 1997
TABLE OF CONTENTS
LIST OF ILLUSTRATIONS
INTRODUCTION
THEORIES OF WHY WE AGE
Genetics
Cellular
Physiological
PHYSICAL CHANGES
MENTAL CHANGES
Alzheimer's Disease
Senile Dementia
CARING FOR THE OLD
Retirement Communities
Life-care Facilities
House Sharing
Group Homes
Low-cost, Government Subsidized Housing
Foster Care
Nursing Homes
CONCLUSIONS
WORKS CITED
LIST OF ILLUSTRATIONS
Tables
1. The results of aging
INTRODUCTION
The purpose of this report is to discuss several aspects of aging.
Several theories of why we age, based on genetic research, cellular research, and
physiological research will be examined, along with physical and mental changes that are the result
of aging. Specific mental changes that will be explored are Alzheimer's Disease and Senile
Dementia. The final aspect to be looked at will be the care of the elderly in retirement
communities, life-care facilities, house sharing, group homes, low-cost government subsidized
housing, foster care, and nursing homes.
THEORIES OF WHY WE AGE
Since research into aging is not guided by any one universally accepted theory, genetic,
cellular, and physiological studies have yielded several hypotheses.
Genetics
The most popular genetic theory, the Error Theory, assumes that aging is the result of the
accumulation of random genetic damage, or from small errors in the flow of genetic information.
The damage or errors would reduce or prevent proper cell function.
Cellular
The best known theory of aging in cellular research is called the Hayflick Effect, which is
named after the American microbiologist Leonard Hayflick. He found that certain human cells
could only divide a limited number of times before they die. This may suggest that aging is
"programmed" into cells. This could also account for the differences in the life spans of different
animal species, and the differences in the life spans between the sexes within the same species.
Physiological
These theories focus on organ systems and their interrelationships. One area currently
being investigated is the immune system. As we age the immune system gradually loses its
capacity to fight off infections and other invaders. As a result, antibodies are produced that
cannot tell the difference between "friendly" cells and "enemy" cells. Most experts now believe
that aging represents many phenomena working together (Miller and Keane 97).
PHYSICAL CHANGES
The physical changes that accompany aging are not necessarily incapacitating, even
though they may be discomforting or limiting.
The body has less strength and endurance as it ages. The rate of energy production in the
body cells is gradually lowered so that people tire more easily and are more sensitive to weather
changes. Sexual desire and ability lower although they never entirely end for either sex. The
capacity to bear children ends in women with menopause, which is the time when the ovaries stop
functioning, causing the menstrual cycle to stop. Men retain their reproductive function into the
late years. The use of eyeglasses may become necessary, even if they were not necessary earlier in
life. Old people can hear low tones fairly well, but their ability to hear high tones decreases. The
capacity of tissue and bone to repair itself is slowed, as is cellular growth and division. Bones
become brittle and skin loses its thickness and elasticity, causing wrinkles. As brain cells die some
capacity for memorization and learning is lost. Breathing becomes more difficult, and hardening
arteries cause circulation to worsen and blood pressure to rise. Joints lose their mobility and deteriorate
from constant wear and pressure. Finally, the liver filters toxins from the blood less efficiently
(Microsoft Encarta "Aging").
These are not all of the changes to the body that are brought about by aging, but they are
the major ones. There is hope in modern medicine, though. Through the use of new technologies
and drugs some of these changes can be slowed or prevented.
Table 1. The results of aging

SKIN
  Results of aging: loses thickness and elasticity (wrinkles appear); bruises more easily as blood vessels near the surface weaken.
  Contributing factors: process accelerated by smoking, excessive exposure to sun.

BRAIN/NERVOUS SYSTEM
  Results of aging: loses some capacity for memorization and learning as cells die; becomes slower to respond to stimuli (reflexes dull).
  Contributing factors: process accelerated by overuse of alcohol and other drugs, repeated blows to the head.

SENSES
  Results of aging: become less sharp with loss of nerve cells.
  Contributing factors: process accelerated by smoking, repeated exposure to loud noise.

LUNGS
  Results of aging: become less effective as elasticity decreases.
  Contributing factors: process accelerated by smoking, poor air quality, insufficient exercise.

HEART
  Results of aging: pumps less efficiently, making exercise more difficult.
  Contributing factors: process accelerated by overuse of alcohol and tobacco, poor eating habits.

CIRCULATION
  Results of aging: worsens, and blood pressure rises, as arteries harden.
  Contributing factors: process accelerated by insufficient exercise, smoking, poor eating habits.

JOINTS
  Results of aging: lose mobility (knee, hip) and deteriorate from constant wear and pressure (disappearance of cartilage between vertebrae results in old-age "shrinking").
  Contributing factors: process accelerated by injury, obesity.

MUSCLES
  Results of aging: lose bulk and strength.
  Contributing factors: process accelerated by insufficient exercise, starvation.

LIVER
  Results of aging: filters toxins from blood less efficiently.
  Contributing factors: process accelerated by alcohol abuse, viral infection.

Source: Microsoft Encarta, "Aging."
MENTAL CHANGES
Along with the loss of the ability of memorization and learning due to brain cells dying
(Microsoft Encarta "Aging"), elderly people can be affected by Alzheimer's Disease and Senile
Dementia.
Alzheimer's Disease
This disease is a progressive degenerative disease of the brain, now considered to be a
leading cause of dementia among the old. It affects an estimated 2.5 to 3 million people in the
U.S. The incidence of this disease increases with advancing age, but there is no evidence that it is
caused by the aging process. The average life expectancy of a person with Alzheimer's is five to
ten years.
Alzheimer's patients show nerve cell loss in the parts of the brain associated with
cognitive functioning. The disease also includes the formation of abnormal proteins known as
neurofibrillary tangles and neuritic plaques. Alzheimer's is also identified by defects in the brain's
neurotransmitters, chemicals that transmit nerve impulses, particularly acetylcholine, which has
been linked with memory function. Recent findings show that a small percentage of Alzheimer's
cases may have been inherited, and there has been a link between the disease and high amounts of
aluminum in the brain (Microsoft Encarta "Alzheimer's Disease").
Senile Dementia
This form of intellectual impairment is observed in elderly people. Approximately 10
percent of all people over 65 years of age have clinically important intellectual impairment.
Although 20 percent of these cases are treatable, such as toxic drug reactions, most cases are
Alzheimer's Disease. Senile Dementia begins with failing attention and memory, loss of
mathematical ability, irritability and loss of sense of humor, and poor orientation in space and time
(Microsoft Encarta "Senile Dementia").
CARING FOR THE OLD
There is a wide variety of living arrangements available for the elderly. (Social Issues
"Ways & Means: Options For Aging")
Retirement Communities
Most retirement communities offer private housing in houses or apartments, recreational
facilities, and sometimes housekeeping services. The housing unit is usually bought, and other
services are paid for monthly. Retirement communities are offering partial home care,
transportation, and other services.
Life-care Facilities
These are intended for elderly people who are in good health when they move in.
They are charged an entrance fee (which can be as high as $100 000) and a monthly maintenance
fee. Meals and housekeeping are usually included. In case the resident becomes ill, medical and
nursing care are provided, and some life-care facilities contain built-in nursing homes. Some offer
unlimited nursing care, while others set a limit. A contract, which should be read carefully, is
signed before moving in.
House Sharing
House sharing is arranged by local agencies. A private house, which may be too big for
the older person living alone, is shared with someone else, such as another elderly person, a
student, or a single mother with a child. The success of this arrangement depends on the
personalities and flexibility of those involved.
Group Homes
These are basically communes for the old. They are usually sponsored by voluntary or
religious agencies, who provide various services--including shopping and cooking, laundry, and
financial management--to the residents. The residents pay the sponsoring agency. Each resident
has a private bedroom and shares the rest of the house.
Low-cost, Government Subsidized Housing
This type of housing for the elderly is available in some communities. Apartments are
usually designed with the needs of the elderly in mind, such as wide doorways and ramps, and
often good security systems. No services are provided, but Meals-On-Wheels and other local
organizations pay special attention to these housing clusters.
Foster Care
Foster care for the elderly is when the family shops, cooks, and cares for the elderly
person with the help of a government subsidy. This type of care is not widely available. It can
provide a family atmosphere for a person who needs supervision, but who is fairly capable of
taking care of themselves.
Nursing Homes
Nursing homes are one of the most popular options for taking care of the elderly. Some
homes offer total medical care, including rehabilitation facilities, for those who require twenty-
four-hour treatment by nurses. There are several things to look for when looking at a nursing
home: the general atmosphere and cleanliness; the attitude of the staff toward the patients and
visitors; openness of administrators to your questions and concerns; comfort and privacy of living
quarters; quality of food; availability of medical care and nursing and emergency services;
recreational and social programs; residents' participation in programs and input into
administration; and up-to-date licences. Nursing homes can cost as much as $35 000-$50 000 per
year, so that even people with reasonable savings cannot afford to stay for any long period of
time. Probably the most unfortunate aspect of these homes is the focus in the news on abuse of
the patients. This is the most important thing to research when you are looking at a nursing
home.
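As a purely illustrative calculation (the savings figure here is invented, not from the source): at roughly $40 000 per year, a resident with $120 000 in savings could pay for only about three years of care, which is why long stays quickly become unaffordable for most families.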
CONCLUSIONS
1. There is no one theory about why we age, but the subject is currently being researched in
several areas.
2. The body goes through many changes as it ages, some of which can be slowed or
prevented through the use of modern medicine.
3. Alzheimer's Disease is probably the most prominent mental disorder in elderly people, but
research has found what it does to the brain, so a cure may be in the future.
4. There is a large range of establishments where elderly people can spend the later years of
their life, depending on how self-sufficient they are, and how much they are willing to
spend.
WORKS CITED
Microsoft Encarta. Computer Software. "Alzheimer's Disease." Microsoft, 1993.
---. Computer Software. "Senile Dementia." Microsoft, 1993.
Miller, Benjamin F., M.D., and Claire Brackman Keane, R.N., B.S., M.Ed.. Encyclopedia and
Dictionary of Medicine and Nursing. U.S.A.: W. B. Saunders, 1972.
Riley, Matilda White. "Aging." Microsoft Encarta. Computer Software. Microsoft, 1993.
Social Issues Resources Series. "Ways & Means: Options for Aging." Article #39, Vol. 3. Aging.
f:\12000 essays\sciences (985)\Biology\The Theory of Evolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BODY
INTRODUCTION TO EVOLUTION
What is evolution? Evolution is the process by which all living things have developed from primitive organisms through changes occurring over billions of years, a process that includes all animals and plants. Exactly how evolution occurs is still a matter of debate, but there are many different theories, and that it occurs is a scientific fact. Biologists agree that all living things have come through a long history of changes shaped by physical and chemical processes that are still taking place. It is possible that all organisms can be traced back to the origin of life from one-celled organisms.
The most direct proof of evolution is the science of Paleontology, or the study of life in the past through fossil remains or impressions, usually in rock. Changes occur in living organisms that serve to increase their adaptability, for survival and reproduction, in changing environments.
Evolution apparently has no built-in direction or purpose. A given kind of organism may evolve only when it occurs in a variety of forms differing in hereditary traits that are passed from parent to offspring. By chance, some varieties prove to be ill adapted to their current environment and thus disappear, whereas others prove to be adaptive, and their numbers increase. The elimination of the unfit, or the "survival of the fittest," is known as Natural Selection because it is nature that discards or favors a particular being. Evolution takes place only when natural selection operates on a population of organisms containing diverse inheritable forms.
HISTORY
Pierre Louis Moreau de Maupertuis (1698-1759) was the first to propose a general theory of evolution. He said that hereditary material, consisting of particles, was transmitted from parents to offspring. His opinion of the part played by natural selection had little influence on other naturalists.
Until the mid-19th century, naturalists believed that each species was created separately, either through a supreme being or through spontaneous generation, the concept that organisms arose fully developed from soil or water. The work of the Swedish naturalist Carolus Linnaeus in advancing the classifying of biological organisms focused attention on the close similarity between certain species. Speculation began as to the existence of a sort of blood relationship between these species. These questions, coupled with the emerging sciences of geology and paleontology, gave rise to hypotheses that the life-forms of the day evolved from earlier forms through a process of change. Extremely important was the realization that different layers of rock represented different time periods and that each layer had a distinctive set of fossils of life-forms that had lived in the past.
Lamarckism
Jean Baptiste Lamarck was one of several theorists who proposed an evolutionary theory based on the "use and disuse" of organs. Lamarck stated that an individual acquires traits during its lifetime and that such traits are in some way put into the hereditary material and passed to the next generation. This was an attempt to explain how a species could change gradually over time. According to Lamarck, giraffes, for example, have long necks because for many generations individual giraffes stretched to reach the uppermost leaves of trees; in each generation the giraffes added some length to their necks, and they passed this on to their offspring. New organs arise from new needs and develop to the extent that they are used, while disuse of organs leads to their disappearance. Later, the science of genetics disproved Lamarck's theory: it was found that acquired traits cannot be inherited.
Malthus
Thomas Robert Malthus, an English clergyman, through his work An Essay on the Principle of Population, had a great influence in directing naturalists toward a theory of natural selection. Malthus proposed that environmental factors such as famine and disease limited population growth.
Darwin
After more than 20 years of observation and experiment, Charles Darwin proposed his theory of evolution through natural selection to the Linnaean Society of London in 1858. He presented his discovery along with another English naturalist, Alfred Russel Wallace, who independently discovered natural selection at about the same time. The following year Darwin published his full theory, supported with enormous evidence, in On the Origin of Species.
Genetics
The contribution of genetics to the understanding of evolution has been the explanation of the inheritance of variation in individuals of the same species. Gregor Mendel discovered the basic principles of inheritance in 1865, but his work was unknown to Darwin. Mendel's work was "rediscovered" by other scientists around 1900. From that time to 1925 the science of genetics developed rapidly, and many of Darwin's ideas about the inheritance of variations were found to be incorrect. Only since 1925 has natural selection again been recognized as essential in evolution. The modern theory of evolution combines the findings of modern genetics with the basic framework supplied by Darwin and Wallace, creating the basic principle of Population Genetics. Modern population genetics was developed largely during the 1930s and '40s by the mathematicians J. B. S. Haldane and R. A. Fisher and by the biologists Theodosius Dobzhansky, Julian Huxley, Ernst Mayr, George Gaylord Simpson, Sewall Wright, Bernhard Rensch, and G. Ledyard Stebbins. According to the theory, variability among individuals in a population of sexually reproducing organisms is produced by mutation and genetic recombination. The resulting genetic variability is subject to natural selection in the environment.
POPULATION GENETICS
The word population is used in a special sense to describe evolution. The study of single individuals provides few clues as to the possible outcomes of evolution because single individuals cannot evolve in their lifetime. An individual represents a store of genes that participates in evolution only when those genes are passed on to further generations, or populations. The gene is the basic unit in the cell for transmitting hereditary characteristics to offspring. Individuals are the units upon which natural selection operates, but the trend of evolution can be traced through time only for groups of interbreeding individuals. Populations can be analyzed statistically and their evolution predicted in terms of average numbers.
The Hardy-Weinberg law, which was discovered independently in 1908 by a British mathematician, Godfrey H. Hardy, and a German physician, Wilhelm Weinberg, provides a standard for quantitatively measuring the extent of evolutionary change in a population. The law states that the gene frequencies, or ratios of different genes in a population, will remain constant unless they are changed by outside forces, such as selective reproduction and mutation. This discovery reestablished natural selection as an evolutionary force. Comparing the actual gene frequencies observed in a population with the frequencies predicted by the Hardy-Weinberg law gives a numerical measure of how far the population deviates from a nonevolving state called the Hardy-Weinberg equilibrium. Given a large, randomly breeding population, the Hardy-Weinberg equilibrium will hold true, because it depends on the laws of probability. Changes are produced in the gene pool through mutations, gene flow, genetic drift, and natural selection.
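A small numerical sketch in Python may make the law more concrete (this example is mine, with made-up allele frequencies, and is not part of the essay). With two alleles A and a at frequencies p and q = 1 - p, the predicted genotype frequencies are p squared, 2pq, and q squared, and under random mating the allele frequency is unchanged in the next generation:

# Hardy-Weinberg prediction for one gene with two alleles, A and a.
p = 0.7                     # frequency of allele A (an arbitrary, illustrative value)
q = 1.0 - p                 # frequency of allele a

AA = p * p                  # expected frequency of genotype AA
Aa = 2 * p * q              # expected frequency of genotype Aa
aa = q * q                  # expected frequency of genotype aa
print(AA, Aa, aa)           # 0.49 0.42 0.09 -- the three frequencies sum to 1

# Recovering the allele frequency from the genotypes shows no change,
# i.e. a population in Hardy-Weinberg equilibrium is not evolving.
p_next = AA + Aa / 2
print(p_next)               # 0.7 (within floating-point rounding)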
Mutation
A mutation is an inheritable change in the character of a gene. Mutations most often occur spontaneously, but they may be induced by some external stimulus, such as irradiation or certain chemicals. The rate of mutation in humans is extremely low; nevertheless, the number of genes in every sex cell is so large that the probability is high for at least one gene to carry a mutation.
Gene Flow
New genes can be introduced into a population through new breeding organisms or gametes from another population, as in plant pollen. Gene flow can work against the processes of natural selection.
Genetic Drift
A change in the gene pool due to chance is called genetic drift. The frequency of loss is greater the smaller the population. Thus, in small populations there is a tendency for less variation because mates are more similar genetically.
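The claim that chance losses matter more in small populations can be illustrated with a simple simulation (a sketch of my own, not from the essay; the population sizes and the starting frequency of 0.5 are arbitrary choices):

import random

def generations_until_fixation(n_individuals, p=0.5, seed=1):
    """Follow one allele until it is either lost or fixed by chance alone."""
    random.seed(seed)
    copies = int(p * 2 * n_individuals)        # copies of the allele in a diploid gene pool
    generations = 0
    while 0 < copies < 2 * n_individuals:
        freq = copies / (2 * n_individuals)
        # every gene copy in the next generation is drawn at random from the current pool
        copies = sum(1 for _ in range(2 * n_individuals) if random.random() < freq)
        generations += 1
    return generations

for n in (10, 100, 1000):
    print(n, generations_until_fixation(n))
# Small populations reach loss or fixation (and so lose variation) in far fewer generations.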
Natural Selection
Over a period of time natural selection will result in changes in the frequency of alleles in the gene pool, or greater deviation from the nonevolving state represented by the Hardy-Weinberg equilibrium.
NEW SPECIES
New species may evolve either by the change of one species to another or by the splitting of one species into two or more new species. Splitting, the predominant mode of species formation, results from the geographical isolation of populations of a species. Isolated populations undergo different mutations and selection pressures and may evolve along different lines. If the isolation is sufficient to prevent interbreeding with other populations, these differences may become extensive enough to establish a new species. The evolutionary changes brought about by isolation include differences in the reproductive systems of the group. When a single group of organisms diversifies over time into several subgroups by expanding into the available niches of a new environment, it is said to undergo Adaptive Radiation.
Darwin's Finches, in the Galapagos Islands, west of Ecuador, illustrate adaptive radiation. They were probably the first land birds to reach the islands, and, in the absence of competition, they occupied several ecological habitats and diverged along several different lines. Such patterns of divergence are reflected in the biologists' scheme of classification of organisms, which groups together animals that have common characteristics. An adaptive radiation followed the first conquest of land by vertebrates.
Natural selection can also lead populations of different species living in similar environments or having similar ways of life to evolve similar characteristics. This is called convergent evolution and reflects the similar selective pressure of similar environments. Examples of convergent evolution are the eye in cephalopod mollusks, such as the octopus, and in vertebrates; wings in insects, extinct flying reptiles, birds, and bats; and the flipperlike appendages of the sea turtle (reptile), penguin (bird), and walrus (mammal).
MOLECULAR EVOLUTION
An outpouring of new evidence supporting evolution has come in the 20th century from molecular biology, an unknown field in Darwin's day. The fundamental tenet of molecular biology is that genes are coded sequences of the DNA molecule in the chromosome and that a gene codes for a precise sequence of amino acids in a protein. Mutations alter DNA chemically, leading to modified or new proteins. Over evolutionary time, proteins have had histories that are as traceable as those of large-scale structures such as bones and teeth. The further in the past that some ancestral stock diverged into present-day species, the more evident are the changes in the amino-acid sequences of the proteins of the contemporary species.
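As a toy illustration of that idea (my own sketch; the two short "sequences" below are invented, not real protein data), the number of positions at which two aligned amino-acid sequences differ gives a rough measure of how long ago two species diverged:

def count_differences(seq_a, seq_b):
    """Count the aligned positions at which two amino-acid sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must already be aligned"
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Invented, pre-aligned fragments standing in for the same protein in two species.
species_one = "MVLSPADKTNVKAAW"
species_two = "MVLSGEDKTNIKAAW"
print(count_differences(species_one, species_two))   # 3 differing positions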
PLANT EVOLUTION
Biologists believe that plants arose from the multicellular green algae (phylum Chlorophyta) that invaded the land about 1.2 billion years ago. Evidence is based on modern green algae having in common with modern plants the same photosynthetic pigments, cell walls of cellulose, and multicell forms having a life cycle characterized by Alternation of Generations. Photosynthesis almost certainly developed first in bacteria. The green algae may have been preadapted to land.
The two major groups of plants are the bryophytes and the tracheophytes; the two groups most likely diverged from one common group of plants. The bryophytes, which lack complex conducting systems, are small and are found in moist areas. The tracheophytes are plants with efficient conducting systems; they dominate the landscape today. The seed is the major development in tracheophytes, and it is most important for survival on land.
Fossil evidence indicates that land plants first appeared during the Silurian Period of the Paleozoic Era (425-400 million years ago) and diversified in the Devonian Period. Near the end of the Carboniferous Period, fernlike plants had seedlike structures. At the close of the Permian Period, when the land became drier and colder, seed plants gained an evolutionary advantage and became the dominant plants.
Plant leaves have a wide range of shapes and sizes, and some variations of leaves are adaptations to the environment; for example, small, leathery leaves found on plants in dry climates are able to conserve water and capture less light. Also, early angiosperms adapted to seasonal water shortages by dropping their leaves during periods of drought.
EVIDENCE FOR EVOLUTION
The Fossil Record has important insights into the history of
life. The order
of fossils, starting at the bottom and rising upward in stratified rock,
corresponds to
their age, from oldest to youngest.
Deep Cambrian rocks, up to 570 million years old, contain
the remains of
various marine invertebrate animals, sponges, jellyfish, worms,
shellfish, starfish,
and crustaceans. These invertebrates were already so well developed
that they must
have become differentiated during the long period preceding the
Cambrian. Some
fossil-bearing rocks lying well below the oldest Cambrian strata contain
imprints of
jellyfish, tracks of worms, and traces of soft corals and other animals
of uncertain
nature.
Paleozoic waters were dominated by arthropods called
trilobites and large
scorpionlike forms called eurypterids. Common in all Paleozoic periods
(570-230
million years ago) were the nautiloids, which are related to the modern
nautilus, and
the brachiopods, or lampshells. The odd graptolites, colonial animals
whose
carbonaceous remains resemble pencil marks, attained the peak of their
development in the Ordovician Period (500-430 million years ago) and
then
abruptly declined. In the mid-1980s researchers found fossil animal
burrows in
rocks of the Ordovician Period; these trace fossils indicate that
terrestrial
ecosystems may have evolved sooner than was once thought.
Many of the Paleozoic marine invertebrate groups either
became extinct or
declined sharply in numbers before the Mesozoic Era (230-65 million
years ago).
During the Mesozoic, shelled ammonoids flourished in the seas, and
insects and
reptiles were the predominant land animals. At the close of the Mesozoic
the once-
successful marine ammonoids perished and the reptilian dynasty
collapsed, giving
way to birds and mammals. Insects have continued to thrive and have
differentiated
into a staggering number of species.
During the course of evolution plant and animal groups have
interacted to
one another's advantage. For example, as flowering plants have become
less
dependent on wind for pollination, a great variety of insects have
emerged as
specialists in transporting pollen. The colors and fragrances of flowers
have evolved
as adaptations to attract insects. Birds, which feed on seeds, fruits,
and buds, have
evolved rapidly in intimate association with the flowering plants. The
emergence of
herbivorous mammals has coincided with the widespread distribution of
grasses,
and the herbivorous mammals in turn have contributed to the evolution of
carnivorous mammals.
Fish and Amphibians
During the Devonian Period (390-340 million years ago) the vast
land areas
of the Earth were largely devoid of animal life, save for rare
creatures like
scorpions and millipedes. The seas, however, were crowded with a variety
of
invertebrate animals. The fresh and salt waters also contained
cartilaginous and
bony fish. From one of the many groups of fish inhabiting pools and
swamps
emerged the first land vertebrates, starting the vertebrates on their
conquest of all
available terrestrial habitats.
Among the numerous Devonian aquatic forms were the Crossopterygii,
lobe-finned fish that possessed the ability to gulp air when they rose
to the surface.
These ancient air-breathing fish represent the stock from which the
first land
vertebrates, the amphibians, were derived. Scientists continue to
speculate about
what led these fish to venture onto land. The crossopterygians that migrated onto
land were
only crudely adapted for terrestrial existence, but because they did not
encounter
competitors, they survived.
Lobe-finned fish did, however, possess certain characteristics
that served
them well in their new environment, including primitive lungs and
internal nostrils,
both of which are essential for breathing out of the water.
Such characteristics, called preadaptations, did not develop because the
fish were
preparing to migrate to the land; they were already present by accident
and became
selected traits only when they imparted an advantage to the fish on
land.
The early land-dwelling amphibians were slim-bodied with fishlike
tails, but
they had limbs capable of locomotion on land. These limbs probably
developed
from the lateral fins, which contained fleshy lobes that in turn
contained bony
elements.
The ancient amphibians never became completely adapted for
existence on
land, however. They spent much of their lives in the water, and their
modern
descendants (the salamanders, newts, frogs, and toads) still must return
to water to
deposit their eggs. The elimination of a water-dwelling stage, which was
achieved
by the reptiles, represented a major evolutionary advance.
The Reptilian Age
Perhaps the most important factor contributing to the emergence of
reptiles
from the amphibians was the development of a shell-covered egg that
could be laid
on land. This development enabled the reptiles to spread throughout the
Earth's
landmasses in one of the most spectacular adaptive radiations in
biological history.
Like the eggs of birds, which developed later, reptile eggs
contain a
complex series of membranes that protect and nourish the embryo and help
it
breathe. The space between the embryo and the amnion is filled with an
amniotic
fluid that resembles seawater; a similar fluid is found in the fetuses
of mammals,
including humans. This fact has been interpreted as an indication that
life originated
in the sea and that the balance of salts in various body fluids did not
change very
much in evolution. The membranes found in the human embryo are
essentially
similar to those in reptile and bird eggs. The human yolk sac remains
small and
functionless, and the allantois shows little development in the human embryo.
Nevertheless, the presence of a yolk sac and allantois in the human
embryo is one
of the strongest pieces of evidence documenting the evolutionary
relationships
among the widely differing kinds of vertebrates. This suggests that
mammals,
including humans, are descended from animals that reproduced by means of
externally laid eggs that were rich in yolk.
The reptiles, and in particular the dinosaurs, were the dominant
land
animals of the Earth for well over 100 million years. The Mesozoic Era,
during
which the reptiles thrived, is often referred to as the Age of Reptiles.
In terms of evolutionary success, the larger the animal, the
greater the
likelihood that the animal will maintain a constant body temperature
independent
of the environmental temperature. Birds and mammals, for example,
produce and
control their own body heat through internal metabolic activities (a
state known as
endothermy, or warm-bloodedness), whereas today's reptiles are thermally
unstable
(cold-blooded), regulating their body temperatures by behavioral
activities (the
phenomenon of ectothermy). Most scientists regard dinosaurs as
lumbering,
oversized, cold-blooded lizards rather than large, lively animals with
fast metabolic
rates; some biologists, however--notably Robert T. Bakker of The Johns
Hopkins
University--assert that a huge dinosaur could not possibly have warmed
up every
morning on a sunny rock and must have relied on internal heat
production.
The reptilian dynasty collapsed before the close of the Mesozoic
Era.
Relatively few of the Mesozoic reptiles have survived to modern times;
those
remaining include the crocodile, lizard, snake, and turtle. The cause of
the decline
and death of the large array of reptiles is unknown, but their
disappearance is
usually attributed to some radical change in environmental conditions.
Like the giant reptiles, most lineages of organisms have
eventually become
extinct, although some have not changed appreciably in millions of
years. The
opossum, for example, has survived almost unchanged since the late
Cretaceous
Period (more than 65 million years ago), and the horseshoe crab,
Limulus, is not
very different from fossils 500 million years old. We have no
explanation for the
unexpected stability of such organisms; perhaps they have achieved an
almost
perfect adjustment to an unchanging environment. Such stable forms,
however, are
not at all dominant in the world today. The human species, one of the
dominant
modern life forms, has evolved rapidly in a very short time.
The Rise of Mammals
The decline of the reptiles provided evolutionary opportunities
for birds and
mammals. Small and inconspicuous during the Mesozoic Era, mammals rose
to
unquestionable dominance during the Cenozoic Era (beginning 65 million
years
ago).
The mammals diversified into marine forms, such as the whale,
dolphin,
seal, and walrus; fossorial (adapted to digging) forms living
underground, such as
the mole; flying and gliding animals, such as the bat and flying
squirrel; and
cursorial animals (adapted for running), such as the horse. These
various
mammalian groups are well adapted to their different modes of life,
especially by
their appendages, which developed from common ancestors to become
specialized
for swimming, flight, and movement on land.
Although there is little superficial resemblance among the arm of
a person,
the flipper of a whale, and the wing of a bat, a closer comparison of
their skeletal
elements shows that, bone for bone, they are structurally similar.
Biologists regard
such structural similarities, or homologies, as evidence of evolutionary
relationships.
The homologous limb bones of all four-legged vertebrates, for example,
are
assumed to be derived from the limb bones of a common ancestor.
Biologists are
careful to distinguish such homologous features from what they call
analogous
features, which perform similar functions but are structurally
different. For
example, the wing of a bird and the wing of a butterfly are analogous;
both are
used for flight, but they are entirely different structurally. Analogous
structures do
not indicate evolutionary relationships.
Closely related fossils preserved in continuous successions of
rock strata
have allowed evolutionists to trace in detail the evolution of many
species as it has
occurred over several million years. The ancestry of the horse can be
traced
through thousands of fossil remains to a small terrier-sized animal with
four toes on
the front feet and three toes on the hind feet. This ancestor lived in
the Eocene
Epoch, about 54 million years ago. From fossils in the higher layers of
stratified
rock, the horse is found to have gradually acquired its modern form,
eventually
evolving into a one-toed horse almost like modern horses and finally into
the modern
horse, which dates back about 1 million years.
CONCLUSION TO EVOLUTION
Although we are not totally certain that evolution is how we got
the way we
are now, it is a strong belief among many people today, and scientists
are finding
more and more evidence to back up the evolutionary theory.
f:\12000 essays\sciences (985)\Biology\The Theory That Shook The World.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Theory That Shook The World
Other than Mendel and his studies in genetics,
Darwin has by far contributed the most to our modern science.
From his theories on variation of species to his explanation of
natural selection, Charles Darwin shocked the world by showing the
world to be older than previously thought and creatures not
immutable. In this present day these theories are as common
belief as a simple mathematical equation such as two plus two
equals four; but in the year eighteen hundred and fifty nine
Darwin not only risked his reputation with these far-fetched
findings but also risked being excommunicated from the
church. Previous to Darwin the thought had been that the world
itself was only a few thousand years old and that all creatures
were made by God in seven days, exactly as they live today
(Campbell p 421). Aside from past resistance, Darwin also comes
under scrutiny still today because fossils that would have
bridged two related species have not yet been found
(Hitching p 3). Whatever the reason of belief or disbelief in
Darwin's theories, he astounded the scientific world as well as
the public and was able to convince many despite the weight of
misguided past beliefs. This fact alone makes him one of the most
important people of science ever.
Charles Darwin was born in Shrewsbury, Shropshire,
England on Feb 12, 1809 (GEA & RBi p 42). He was the fifth child
in a wealthy English family with a history of scientific
achievement; his paternal grandfather, Erasmus Darwin, was
a physician and a savant in the eighteenth century (GEA & RBi p
42). As a young boy Darwin already showed signs of his love for
nature. When he was not reading about nature and its quirks he
was out in the forest looking for wild game, fish, and insects
(Campbell p 424). His father, although noting his son's interest
in nature, felt that all the discoveries of the natural branch of
science had been accomplished so he sent his son to medical
school at Edinburgh instead (Bowler p 62). While Darwin was
there, he could not keep his mind on his medical studies and
decided to go and study at the University of Cambridge and become
a clergyman. It was here that he was to meet two people who
would change his future forever: Adam Sedgwick and John Stevens
Henslow. Out of these two, Henslow turned into his second father
and taught him to be meticulous in his observations of natural
phenomena (GEA & RBi p 42). Upon Darwin's graduation in 1831, Henslow
suggested that he go on the Beagle as an unpaid naturalist on the
scientific expedition (GEA & RBi p 43). Darwin gladly took
Henslow's advice and set out on his voyage to South America to
analyze and collect data that would later back up his
evolutionary theories (Campbell p 424).
Even as Darwin collected his data pertaining to what
would become his theory on natural selection, many pre-existing
views still had a hold on the scientific world as well as the
public. The earliest recorded were those of Plato and Aristotle.
Plato (427-347 BC) believed in two worlds: an illusory one which
was perceived only through our senses and a real world which was
ideal and eternal (Campbell p 422). Aristotle (384-322 BC), on
the other hand, believed in a "scala naturae" in which each being
has its own rung on a ladder which was permanent (Campbell p
422). Also, there were the present religious views that had to
be dealt with as well as the ancient ideals. At that time many
believed that animals and plants did not evolve because they were
made holy and immutable by God on those seven days (GEA & RBi p
43). A person who was widely respected and also took some
beliefs from Aristotle and the religion of the day was Carolus Linnaeus
(1707-1778). He believed species to be immutable and later became
known as the father of modern taxonomy (Campbell p 422). Perhaps
the largest barrier Darwin faced was to convince the present-day
scientists of his findings in contrast to their pre-existing
theories. The most common of the time was the catastrophist
theory. The definition of this theory was that "a violent and
sudden change in the earth" had destroyed all creatures and each
time this happened, God would come back down and recreate all the
life in a separate seven days (Webster p 131). This theory in
itself seemed created for the sole purpose of explaining away the
existence of fossils and sustaining the mistaken belief of the species
being immutable (Campbell p 423).
After his voyage on the Beagle, Darwin began to
develop his own theory of evolution. His personal definition of
evolution was "in biology, the complex of processes by which
living organisms originated on earth and have been diversified
and modified through sustained changes in form and function" (JWV
p 20). In the course of his research he found evidence of
evolution not only in the wild but in the domesticated sphere as well.
Darwin held that all related organisms descended from a common
ancestor and he found examples easily in common life (GEA & RBi p
43). One such example was the domesticated pigeon.
Darwin studied the skeletal and the live forms of the pigeons he
had found. In doing so, he found them all to be related but for
a small change in their phenotype. The phenotype is defined as
follows "the actual appearance of an organism" (GEA & RBi-2 p
77). This small difference had been procured through the use of
breeding and mutation. Perhaps the most notable would be the
number of feathers in the fantail which ranged from twelve to
forty feathers (Darwin p 42). Another example Darwin found in
speciation by domesticated breeding was cattle and horses. By the
definition of a gene pool, "large random assortment of genes that
may be rearranged", the farmers were able to produce a better
breed of race horse or milk cow by breeding the best they had
together (JWV p 21). This selective breeding was just seen by the
public as a way to produce the necessary end but Darwin held it
as important evidence of evolution accessible for all to witness.
And to back up this finding in the domesticated breeds as well as
the wild, he came up with his principle of variability within a species. The
definition of variability within a species held that 1) the
offspring resemble the parents, but were not identical, and 2)
some differences in the parents were due solely to the
environment but were often inheritable (JWV p 20). These two
statements as well as the backup with clinical data helped to
show that his theory was correct.
Another area of variability was that of species in the
wild. Perhaps Darwin's most famed findings to back his theory
are "Darwin's finches". During his voyage on the Beagle he had
observed thirteen different types of finches (Campbell p 425).
These finches were found on separate Galapagos Islands. Here
each species of finch had at one time migrated to another island.
In doing so the founder effect had been put into action. The
founder effect being described as "when a few individuals of a
population migrate and form a new colony having only a small gene
pool causing a new species" (JWV p 23). Due to the diverse
surroundings and limited gene pool the thirteen species had
evolved from the original species that had migrated from the
mainland to the islands. Darwin also observed other animals on
these islands that were not found anywhere else in the world and
began to doubt the church's teaching that species were immutable
(Darwin p 29).
The most controversial of Darwin's theories was that of
natural selection. The term evolution was so controversial even
Darwin did not use it but the phrase "origin of species" instead
(Darwin p 27). Even though he did not term it evolution his
views were definitely concrete and were laid out in a few simple
sentences. These were the reasons why natural selection was a
way of life and always had been. First, Darwin proposed that
food supply was too little to support the large population, thus
eliminating those who were not strong enough to find food and
survive. Second, parents well adapted to a certain environment
would pass on favorable traits that would help the next
generation survive; those without the trait would not survive.
Third, each generation would become better adapted and if
remaining in the same environment would become more capable of
surviving. Finally, even with all the above working there were
also factors of mutation, genetic drift, and bottleneck effects
which contributed to the survival of the fittest (GEA & RBi p
43). Mutation, being the most effective in changing a species, had
four factors by itself: 1) size of a population, 2) the length of
a generation's life span, 3) the degree to which the mutation was
favorable, and 4) the rate at which the same mutation appears in
descendants (JWV p 21). Although most mutations are harmful, they
are key in changing the genetic makeup of an individual.
Genetic drift is described as when a species for some reason
begins to drift apart or come together to create one or more new
species. This is typically seen in today's fossil record when a
present species is related to an extinct animal. [see fig. 1]
Another of the factors in natural selection is the bottleneck
theory. Here a population has been destroyed to such an extent
that only a few survive. This limited population will recreate a
new species based on its extremely limited gene pool and have a
higher chance of carrying a fatal gene. All these factors
working together simultaneously create the phenomena of natural
selection.
Darwin was not going to publish his findings but was
forced to by a young man, Alfred Russel Wallace, who had come to
the same conclusion after twenty years had passed. Although both
scientists' names were on the original copies of the Origin of
Species, Wallace regarded Darwin as the sole author. Within a
year of writing, Darwin published what would be twenty years of
research in 1859. Although thoroughly backed up with
painstaking research, it was still referred to as "the book that
shook the world" and in its first day of sales had sold out (GEA
& RBi p 43). The immediate reaction in the science world was one
of disbelief. The leading scientists of the day said that Darwin
could not prove his hypothesis and the concept of variation could
not be proved. Darwin was to be doubted for the next seventy
years until the rediscovery of Mendel's pea plant experiments
(GEA & RBi p 43). With these new findings on genetics, many
scientists would take into account Darwin's work. Some of these
people were to be a German zoologist named Ernst Mayr, a botanist
named G. Ledyard Stebbins, and a paleontologist named George Simpson
(JWV p 21).
f:\12000 essays\sciences (985)\Biology\The Thread of Life DNA.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The "thread of life", is deoxyribonucleic acid, otherwise known as DNA. It is the spiral shaped molecule found in the nucleus of cells. Scientists have known since 1952 that DNA is the basic substance of heredity. This was hypothesized, and later confirmed by James D. Watson and Francis Crick. They also know that it acts like a biological computer program over 3 billion bits long that "spells" out instructions for making the basic building blocks of life.
DNA carries the body's genetic code, controls the development of an embryo, is capable of duplicating itself, and is able to repair damage to itself. DNA can be manipulated to change all kinds of things.
All DNA molecules consist of a linked series of units called nucleotides. Each DNA nucleotide is composed of 3 subunits: a 5-carbon sugar called deoxyribose, a phosphate group that is joined to one end of the sugar molecule, and one of several different nitrogenous bases linked to the opposite end of the deoxyribose. There are 4 nitrogen bases called adenine, guanine, thymine, and cytosine. In DNA adenine pairs with thymine and guanine with cytosine.
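The base-pairing rule just stated can be shown mechanically. The short Python sketch below is purely illustrative and not drawn from any source used in this essay; it derives the complementary strand for a given sequence of bases.

    # Illustrative sketch of DNA base pairing: A pairs with T, G pairs with C.
    PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand):
        """Return the complementary strand for a sequence of A/T/G/C bases."""
        return "".join(PAIRS[base] for base in strand.upper())

    print(complement("ATGC"))  # prints TACG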
Medicine's ability to diagnose continues to exceed its ability to treat or cure. For example, Huntington's chorea, an inherited disease that develops between the ages of 30 and 45, can be diagnosed before any symptoms appear. This can be hard for both the individuals with the disease and their families.
There is a 3 billion dollar project underway right now called the Human Genome Project, a 15-year program to make a detailed map of every single gene in human DNA. With automated cloning equipment to steer scientists through the DNA, scientists are finding human genes at the rate of more than one a day. This may not sound like very much, but as technology improves the rate of finding them will increase. From January 1993 to January 1994 scientists located the genes for Huntington's disease, Lou Gehrig's disease, and the "bubble-boy" disease. Scientists are expected to find the first breast cancer gene any week now.
Even with the best tools of today, the progress is full of surprises. Human DNA is not like that of plants, in which the trait of color of a flower is determined by one gene. Even the color of a human eye can involve the interaction of several genes. Some complex genes, such as the cystic fibrosis gene, can go wrong in any number of places. Scientists have already accounted for 350 places where the cystic fibrosis gene mutates, and more are being uncovered weekly.
Many environmental factors, some physical, others chemical, can alter the structure of a DNA molecule. A mutation occurs when such alterations lead to a permanent change in the base sequence of a DNA molecule. Mutations result in an inherited change in protein synthesis. DNA is damaged by exposure to ultraviolet (UV) light. The DNA does have the ability to repair itself, however.
DNA can also be used to match suspects in a crime. Each person's DNA is different from everyone else's, except in the case of identical twins, whose DNA is identical. By comparing substances left at a crime scene (blood or semen samples) law enforcement agencies are able to match the DNA at the crime scene with a certain suspect. A recent example of this is the O.J. Simpson case, in which the lawyers are trying to match up O.J.'s DNA with the DNA in the blood found at the crime scene.
Many questions have been raised by a number of people and scientists about the ethics of DNA research. It was once feared that the insertion of a disease-causing substance could cause a deadly epidemic in the general population upon accidental release. But since 1973, when the technique was first used, genetic material has been transferred thousands of times without any of the feared catastrophes occurring. Still, there are many questions remaining to be answered. Is it right for people to change their babies' eye color, or any other aspect of their baby? Should employers be allowed to see your DNA, to see if you are at risk for a certain disease?
f:\12000 essays\sciences (985)\Biology\The Worlds Fight Against Microbes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Worlds Fight Against Microbes
Many infectious diseases that were nearly eradicated from the industrialized world, and newly emerging diseases are now breaking out all over the world due to the misuse of medicines, such as antibiotics and antivirals, the destruction of our environment, and shortsighted political action and/or inaction.
Viral hemorrhagic fevers are a group of diseases caused by viruses from four distinct families of viruses: filoviruses, arenaviruses, flaviviruses, and bunyaviruses. The usual hosts for most of these viruses are rodents or arthropods, and in some viruses, such as the Ebola virus, the natural host is not known. All forms of viral hemorrhagic fever begin with fever and muscle aches, and depending on the particular virus, the disease can progress until the patient becomes deathly ill with respiratory problems, severe bleeding, kidney problems, and shock. The severity of these diseases can range from a mild illness to death (CDC I).
The Ebola virus is a member of a family of RNA (ribonucleic acid) viruses known as filoviruses. When these viruses are magnified several thousand times by an electron microscope they have the appearance of long filaments or threads. Filoviruses can cause hemorrhagic fever in humans and animals, and because of this they are extremely hazardous. Laboratory studies of these viruses must be carried out in special maximum containment facilities, such as the Centers for Disease Control (CDC) in Atlanta, Georgia and the United States Army Medical Research Institute of Infectious Diseases (USAMRIID), at Fort Detrick in Frederick, Maryland (CDC I,II).
The Ebola hemorrhagic fever in humans is a severe, systemic illness caused by infection with Ebola virus. There are four subtypes of Ebola virus (Ebola-Zaire, Ebola-Sudan, Ebola-Ivory Coast, and Ebola-Reston), which are not just variations of a single virus, but four distinct viruses. Three of these subtypes are known to cause disease in humans, and they are the Zaire, Sudan, and Ivory Coast subtypes. Out of all the different viral hemorrhagic fevers known to occur in humans , those caused by filoviruses have been associated with the highest case-fatality rates. These rates can be as high as 90 percent for epidemics of hemorrhagic fever caused by Ebola-Zaire virus. No vaccine exists to protect from filovirus infection, and no specific treatment is available (CDC II).
The symptoms of Ebola hemorrhagic fever begin within 4 to 16 days after infection. The patient develops chills, fever, headaches, muscle aches, and a loss of appetite. As the disease progresses vomiting, diarrhea, abdominal pain, sore throat, and chest pain can occur. The blood fails to clot and patients may bleed from injection sites as well as into the gastrointestinal tract, skin, and internal organs (CDC I).
The Ebola virus is spread through close personal contact with a person who is very ill with the disease, such as hospital care workers and family members. Transmission of the virus can also occur from the reuse of hypodermic needles in the treatment of patients. This practice is common in developing countries where the health care system is underfinanced (CDC I).
Until recently, only three outbreaks of Ebola among people had been reported. The first two outbreaks occurred in 1976. One was in western Sudan, and the other in Zaire. These outbreaks were very large and resulted in more than 550 total cases and 340 deaths. The third outbreak occurred in Sudan in 1979. It was smaller, with only 34 cases and 22 deaths. Three additional outbreaks were identified and reported between 1994 and 1996: a large outbreak in Kikwit, Zaire with 316 cases and 244 deaths; and two smaller outbreaks in the Ivory Coast and Gabon. Each one of these outbreaks occurred under the challenging conditions of the developing world. These conditions, including a lack of adequate medical supplies and the frequent reuse of needles, played a major part in the spread of the disease. The outbreaks were controlled quickly when appropriate medical supplies were made available and quarantine procedures were used (CDC I).
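The case-fatality rates implied by the outbreak figures above can be checked with simple arithmetic. The Python sketch below uses only the case and death counts quoted from the CDC sources; the percentages it prints are derived for illustration, not reported figures.

    # Illustrative sketch: case-fatality rate = deaths / cases, using the
    # outbreak counts quoted above (the first two 1976 outbreaks combined).
    outbreaks = {
        "Sudan and Zaire, 1976": (550, 340),
        "Sudan, 1979": (34, 22),
        "Kikwit, Zaire": (316, 244),
    }

    for name, (cases, deaths) in outbreaks.items():
        print(f"{name}: {100.0 * deaths / cases:.0f}% case fatality")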
Ebola-Reston, the fourth subtype, was discovered in 1989. The virus was found in monkeys imported from the Philippines to a quarantine facility in Reston, Virginia which is only about ten miles west of Washington, D.C. (Preston 109). The virus was also later detected in monkeys imported from the Philippines into the United States in 1990 and 1996, and in Italy in 1992. Infection caused by this subtype can be fatal in monkeys; however, the only four Ebola-Reston virus infections confirmed in humans did not result in the disease. These four documented human infections resulted in no clinical illness. Therefore, the Ebola-Reston subtype appears less capable of causing disease in humans than the other three subtypes. Due to a lack of research of the Ebola-Reston subtype there can be no definitive conclusions about its pathogenicity (CDC II).
Staphylococcus is a genus of nonmotile, spherical bacteria. Some species are normally found on the skin and in the throat, and certain species can cause severe life-threatening infections, such as staphylococcal pneumonia (Mosby 1477). Even in the age of antibiotics, staph infections remain potentially lethal. By 1982 fewer than 10 percent of all clinical staph cases could be cured with penicillin, which is a dramatic shift from the almost 100 percent penicillin susceptibility of Staphylococcus in 1952. Most strains of staph became resistant to penicillin by changing their DNA structure (Garrett 411).
The fight against staph switched from using the mostly ineffective penicillin to using methicillin in the late 1960's. By the early 1980's, clinically significant strains of Staphylococcus emerged that were not only resistant to methicillin, but also to its antibiotic cousins, such as nafcillin. In May 1982 a newborn baby died at the University of California at San Francisco's Moffit Hospital. This particular strain was resistant to penicillins, cephalosporins, and nafcillin. The mutant strain infected a nurse at the hospital and three more babies over the next three years. The only way further cases could be prevented was to aggressively treat the staff and babies with antibiotics to which the bacteria were not resistant, close the infected ward off to new patients, and scrub the entire facility with disinfectants. This was not an isolated case, unfortunately. Outbreaks of resistant bacteria inside hospitals were commonplace by the early 1980's. The outbreaks were particularly common on wards that housed
the most susceptible patients, such as burn victims, premature babies, and intensive care patients. Outbreaks of methicillin resistant Staphylococcus aureus (MRSA) increased in size and frequency worldwide throughout the 1980's (Garrett 412).
By 1990, super-strains of staph that were resistant to a huge number of drugs existed naturally. For example, an Australian patient was infected with a strain that was resistant to cadmium, penicillin, kanamycin, neomycin, streptomycin, tetracycline, and trimethoprim. Since each of these drugs operated biomechanically the same as a host of related drugs, the Australian staph was resistant, to varying degrees, to some thirty-one different drugs (Garrett 413).
A team of researchers from the New York City Health Department, using genetic fingerprinting techniques, traced back in time over 470 MRSA strains. They discovered that all of the MRSA bacteria descended from a strain that first emerged in Cairo, Egypt in 1961, and by the end of that decade the strain's descendants could be found in New York, New Jersey, Dublin, Geneva, Copenhagen, London, Kampala, Ontario, Halifax, Winnipeg, and Saskatoon. Another decade later they could be found world wide (Garrett 414).
New strains of bacteria were emerging everywhere in the world by the late 1980's, and their rates of emergence accelerated every year. In the U.S. alone, an estimated $200 million a year was spent on medical bills because of the need to use more exotic and expensive antibiotics, and longer hospitalization for everything from strep throat to life-threatening bacterial pneumonia. These trends, by the 90's, had reached the level of universal, across-the-board threats to humans of all ages, social classes, and geographic locations (Garrett 414).
Jim Henson, famed puppeteer and creator of the Muppets, died in 1990 of a common, and supposedly curable, bacterial infection. A new mutant strain of Streptococcus struck that was resistant to penicillins and possessed genes for a deadly toxin that was very similar to a strain of S. aureus discovered in Toxic Shock Syndrome. This new strain of strep was later dubbed strep A-produced TSLS (Toxic Shock-Like Syndrome). Only a year after its discovery lethal human cases of TSLS had been reported from Canada, the U.S., and several countries in Europe. Streptococcal strains of all types were showing increasing levels of resistance to antibiotics. According to Dr. Harold Neu, who is a Columbia University antibiotics expert, a dose of 10,000 units of penicillin a day for four days was more than adequate to cure strep respiratory infections in 1941. By 1992 the same illness required 24 million units a day, and could still be lethal (Garrett 415).
The emergence of highly antibiotic resistant strains of Streptococcus pneumoniae, or Pneumococcus, was even more serious. The bacteria normally inhabited human lungs without causing harm; however, if a person were to inhale a strain that differed enough from those to which he or she had been previously exposed, the individual's immune system might not be able to keep it in check (Garrett 415).
By 1990, a third of all ear infections occurring in young children were due to Pneumococcus, and nearly half of those cases involved penicillin resistant strains. The initial resistances were incomplete. This means that only some of the organisms would die off and the child's ears would clear up, and both parents and doctor would believe the illness gone. The organisms that did not die off would multiply, and in a few weeks the infection would be back. Then if the parents used any leftover penicillin, they would possibly see another apparent recovery, but this time the organisms were more resistant, and the ear infection returned quickly with a vengeance (Garrett 415-16).
In poor and developing countries the prevention of pediatric respiratory diseases had to be handled with scarce resources, available antibiotics, and little or no laboratory support to identify the problem. Health officials then defined the disease process not in terms of the organisms involved but according to where the infection was taking place, and the severity of the infection. In general, upper respiratory infections were milder and usually viral, while deep lung involvement indicated a potentially lethal bacterial disease. In 1990 the World Health Organization (WHO) said that the best policy for developing countries was to assume that pediatric pneumonias were bacterial, and treat with penicillin in the absence of laboratory proof of a viral infection. This process was shown to have reduced the number of child deaths in the test areas by more than a third, and even more surprising was that there was a 36 percent reduction in child deaths due to all other causes. This was only the good news. The bad news was that penicillins and other antibiotics offered no more benefit to children with mild and usually viral respiratory infections than not taking any drugs at all and staying home. This was due to the fact that antibiotics have no effect on viruses. Another key danger was that village doctors, who lacked training and laboratory support, would overuse antibiotics, which would in turn promote the emergence of new antibiotic resistant S. pneumoniae (Garrett 417).
Because of drug use policies in both wealthy and poor countries, antibiotic resistant strains of pneumococci soon turned up all over the world. Some of these strains were able to withstand exposure to six different classes of antibiotics simultaneously. This emergence of drug resistance usually occurred in communities of social and economic deprivation. Poor people were more likely to self-medicate using antibiotics purchased off the black market, or borrowing leftovers from relatives (Garrett 417-19). "Whether one looked in Spain, South Africa, the United States, Romania, Pakistan, Brazil, or anywhere else, the basic principle held true: overuse or misuse of antibiotics, particularly in small children and hospitalized patients, prompted emergence of resistant mutant organisms" (Garrett 419).
Infectious diseases thought to be common and relatively harmless are now becoming lethal to people of all ages, races, and socioeconomic statuses because of the misuse of medicines, which makes the diseases ever more drug resistant, and because of shortsighted political policies. It seems that the microbes now have the macrobes on the run.
Consider the difference in size between some of the very tiniest and the very largest creatures on Earth. A small bacterium weighs as little as 0.00000000001 grams. A blue whale weighs about 100000000 grams. Yet a bacterium can kill a whale ... Such is the adaptability and versatility of microorganisms as compared with humans and other so called "higher" organisms, that they will doubtless continue to colonise and alter the face of the Earth long after we and the rest of our cohabitants have left the stage forever. Microbes, not macrobes, rule the world.
- Bernard Dixon, 1994
WORKS CITED
CDC(I).Ebola Virus Hemorrhagic Fever: General Information. http://www.cdc.gov/ncidod/diseases/virlfv/ebolainf.htm[1996, November 20].
CDC(II). Filoviruses in Nonhuman Primates: Overview of the Investigation in Texas. http://www.cdc.gov/ncidod/diseases/virlfvr/ebola528.htm[1996, November 20].
Garrett, Laurie. The Coming Plague. Farrar, Straus and Giroux: New York, 1994.
Mosby's Medical, Nursing, and Allied Health Dictionary, 4th Ed. Mosby-Year Book, Inc.: St. Louis, 1994.
Preston, Richard. The Hot Zone. Random House Inc.: New York, 1994.
Roizman, Bernard. Infectious Diseases in an Age of Change. National Academy Press: Washington,D.C., 1995.
Top, Franklin H. . Communicable and Infectious Diseases. C.V. Mosby Company: St.Louis, 1964.
f:\12000 essays\sciences (985)\Biology\Tiger Subspecies.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tiger Subspecies
I am writing a report on the subspecies of tigers. Many of these tigers will not survive the next forty years due to the killing that we humans have caused. Tiger subspecies have not been as important to us humans as whether or not the species, the tiger Panthera tigris, can survive either in the wild or in captivity for the next forty years. Nevertheless, a great deal of information on the future of the tiger can be learned from a study of subspecies, which is what my report will be based on.
It is amazing to me that people want numbers of tigers. Process is the important aspect. If I say that the number of Sumatran tigers today is between 300 and 400, it doesn't tell one that the 1975 census was 1500. Therefore saying that the Bali tiger, the Caspian tiger, the Javan tiger, the Manchurian tiger, and the Southwest Chinese tiger are now extinct doesn't give you a portrait of the process of extinction. The Javan tiger became extinct in the 1970's in a special national park set aside under full protection.
Politicians and bureaucrats seem to be obsessed with numbers and not trends. Let me illustrate this with tigers.
There are frequently requests as to the exact number of tigers, or of a tiger subspecies, left in the world. That tells you that there are people that care. But there are so few tigers left that we cannot even keep track of them.
We should look at the trend that the population is taking, rather than the number as a slice in time. Just as you might say of a young member of the Hunt family that they were very wealthy: a hypothetical individual is 24 years old and has $1,000,000. What isn't available in this one-time analysis is that this Hunt inherited $24,000,000 at age 21, has no education, and has never worked. At age 22 Hunt had $9,000,000 and at 23 had $4,000,000. Now instead of saying Hunt is rich, we would say Hunt is in
trouble. Tigers are a great deal like Hunt.
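A minimal Python sketch of the same arithmetic, using the hypothetical Hunt figures above, shows how year-over-year change reveals a trend that a single snapshot hides; the same calculation can be applied to tiger census counts. The sketch itself is illustrative and not part of the Carnivore Preservation Trust's material.

    # Illustrative sketch: year-over-year change in the hypothetical Hunt fortune.
    wealth_by_age = {21: 24_000_000, 22: 9_000_000, 23: 4_000_000, 24: 1_000_000}

    ages = sorted(wealth_by_age)
    for earlier, later in zip(ages, ages[1:]):
        change = wealth_by_age[later] - wealth_by_age[earlier]
        percent = 100.0 * change / wealth_by_age[earlier]
        print(f"age {earlier} to {later}: {change:,} ({percent:.0f}%)")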
The following estimates of tiger numbers are from the Carnivore Preservation Trust, which has arrived at its own figures; they are highly educated guesses:
Bengal tigers probably number fewer than 1,000 in India. In the majority of that country the
population is hopelessly fragmented. It is, overall, actively poached. Fewer than 200 exist in Nepal
and under 1,000 exist in Myanmar (Burma). Indochinese tigers number between 500 and 2,000; CPT's
guess is about 700 amid heavy poaching. In the early sixties, when the South China tiger had a
population of about 4,000, Mao instituted a tiger eradication program. After Mao's death in 1976,
the South China tiger population was reduced to 400. The Chinese government then instituted a
"Save the Tiger Program!" The South China tiger population is now about 25, but the wild number is
so inbred that the effective population is more like four. The Siberian tiger number is between
125 and 175. (Closer to 125, according to the Russian scientist who was recruiting for the genetic
management of the free-ranging Siberian tiger program in October 1995.)
We must do something to stop the extinction of one of our most beautiful animals. We have lost too many species of animals and we cannot afford to lose any more. As we can see, at the rate that we are losing animals in the world, pretty soon there are not going to be any left for us to enjoy.
f:\12000 essays\sciences (985)\Biology\Tigers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Tiger is often described as a particularly dangerous, sly, and invincible predator. The Tiger is the largest of the cat family. They have powerful bodies, large paws, and very sharp claws. The head of the Tiger is rounded and has a convex profile. The ears are black with white in the middle. The Tiger's eyes are a yellowish-orange color, but at night they almost look green. Coloration of the Tiger is reddish yellow or rust-brown on the upper side, and a whitish underside. There is a prominent beardlike growth of hair on the cheeks, and they may have a short neck mane. Their body is covered with heavy black stripes (Grzimek's Animal Life Encyclopedia 1972).
During the day a Tiger may rest in the shade, or lie in a quiet pool of water to escape the heat. Tigers like water, and are very good swimmers. Northern Tigers undertake long migrations. These migrations occur when epidemics wipe out the prey populations. This type of migration happens often. Temperatures down to -45 degrees Celsius are not harmful and do not dampen their activities (Mammals Multimedia Encyclopedia 1990).
Tigers usually live and hunt alone. When they hunt they can leap 5 to 6 meters or jump as far as 10 meters. Tigers do not usually prey on people, but some do become man eaters. If a Tiger becomes a man eater it is because of a wound, weakness, or just because it is too old. The young accompany their mother on the hunt when they are 5 to 6 months of age. Tigers begin to hunt alone when they are just eleven months old. Before the young can hunt alone, the mother will demonstrate how it is done (Compton's Interactive Encyclopedia 1993).
Tigers usually prey on deer, wild cattle called gaur, and wild pigs. Whenever humans have domestic animals, Tigers will feed on cattle, horses, sheep, and goats. A cattle-eating Tiger will kill an ox about every 5 days, or from 60 to 70 a year. If a tiger has trouble finding food it will eat birds' eggs or berries. If a Tiger cannot find any kind of food at all, it will eat any kind of flesh it can find (Grzimek's Animal Life Encyclopedia 1972).
Tigers have only three major requirements: they need large prey, water, and cover. An adult
tiger requires 12 pounds at a time, but it may eat as much as 60 pounds in one night. A tigress
with three young requires 280 kg of meat every twenty days. At times a tiger must go without food,
or will have to make do with small animals. In many areas tigers have to kill much more than this
amount because people often chase them away from kills. The tiger attempts to pull dead prey near
water. As a tiger feeds, it often interrupts the meal by going over to a body of water and
drinking a large amount. (Grzimek's Animal Life Encyclopedia 1972)
Tigers live in Asia, and cannot be found on any other continent. Tigers prefer damp, thickly
overgrown places such as dense jungles and river banks covered with reeds or brush. They like to
prowl rainforests, wooded hillsides, and swamps in many parts of Asia. They only survive in parts
of that range. (Compton's Interactive Encyclopedia 1993)
The limit of hunting territories permits sufficient distance between individuals, lowering the
frequency of fights between males competing for females. Males will not tolerate another male
staying in his territory, but he will allow other males to pass through the area. A female will
not defend her territory, because to a female a territory is just a hunting area. Tigers stay in
their dens when they are not hunting. The dens are constructed beneath fallen trees or rocks, in
earth and stone cavities, or in rotting thickets. A tiger usually has several dens in a territory,
using whichever one is closest when one is needed. The den is not visited daily or regularly. A
tiger mother with cubs will stay at a den until the young can walk with her on hunting raids.
(Mammals Multimedia Encyclopedia 1990)
Tigers can breed at any time of the year, but they mostly breed in winter or spring. After 95-112
days the tigress bears 2 to 4 young, or at the most 7. The young usually weigh anywhere from
800-1500 grams at birth. Their eyes open by the tenth day. The cubs first leave their den when
they are 2 months old. After six months of age the young begin going along with their mother on
hunts. The cubs have their permanent teeth when they are about one year old. Tigers first hunt
alone when they are eleven months old. A tigress usually bears young every three to four years.
In the north they bear young every four to five years. The young normally stay with their mother
for two or three years, and then go out on their own. (Grzimek's Animal Life Encyclopedia 1972)
When a male tiger is full grown it can weigh as much as 500 lbs. Males can grow to be as long as
7 ft., and they may also have a tail as long as 3 ft. The total body length of a full-grown male
is around 11 ft. (Compton's Interactive Encyclopedia 1993)
The tiger needs sufficient cover, so that it can creep to within 10-25 meters of its prey
unnoticed, after which the tiger surprises its prey with a sudden attack. If possible, the tiger
creeps to within 2-4 meters and grabs the prey after a quick leap; otherwise it will pursue its
prey for 100-200 meters. If it has not caught the prey by then it gives up, and most of the time
the tiger fails to catch the animal it was after. Small prey is killed by a nape bite. Larger
prey, which have heavier vertebrae, powerful horns, or dangerous antlers, are killed by a bite to
the throat either from the side or from below. Powerful prey such as water buffalo are bitten on
the skin and scratched with the paws. Tigers kill elephants, when they are able to accomplish
such a feat, by jumping on the animal's back, biting and pawing at the head, neck, and back, and
then killing it by biting through the throat. (Grzimek's Animal Life Encyclopedia 1972)
Beauty, mystery, and strength are all qualities for which the tiger has been admired and
feared. In some myths, the tiger, with its striking pattern of stripes, is the king of beasts.
But in most stories, the tiger is a demon. Because of the animal's reputation as a dangerous
foe, those who hunted the tiger were respected for their bravery.
f:\12000 essays\sciences (985)\Biology\Tourettes syndrome.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tourette Syndrome
Tourette Syndrome was named for Georges Gilles de la Tourette, who first described the syndrome in 1885. Although the disease was identified in 1885, today in 1996 there is still a mystery surrounding Tourette Syndrome, its causes, and possible cures. Tourette Syndrome is a neurological disorder that researchers believe is caused by an abnormal metabolism of the neurotransmitters dopamine and serotonin. It is genetically transmitted from parent to child. There is a fifty percent chance of passing the gene on from parent to child (Gaffy, Ottinger). Those most at risk are sons of mothers with Tourette Syndrome. About three-quarters of Tourette Syndrome patients are male. Males with the disorder have a ninety-nine percent chance of displaying symptoms. Females have a seventy percent chance of displaying symptoms. This ratio of 3-4:1 for males and females may be accounted for by referral bias. Also, there is a frequent number of reported cases within the Mennonite religious isolate population in Canada. The specific genetic transmission, however, has not been established. Some researchers believe that the gene is carried as an autosomal dominant trait. Some cases, however, are sporadic, and there may not be a link to family history involved. These cases are mild, however, and not full blown. The onset of Tourette Syndrome must be before the age of fifteen, and usually occurs after the age of two. The mean age of onset of motor tics is seven. The mean age of onset for vocal tics is nine. In order for a person to be classified as having Tourette Syndrome they must have both multiple motor tics and vocal tics. These tics, however, do not have to occur every day. In fact, affected individuals may rarely exhibit all of the symptoms, or all of the tics. The vocal and motor tics must also occur within the same year for a person to be classified as having Tourette Syndrome. Symptoms can disappear for weeks or months at a time. However, if people afflicted with the syndrome try to suppress their tics, they will recur with increased fervor. Tics increase as a result of tension or stress, and decrease with relaxation or concentration on an absorbing task.
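The inheritance figures quoted above can be combined in a rough, illustrative calculation. The Python sketch below multiplies the fifty percent chance of the gene being passed on by the sex-specific chance of displaying symptoms; treating the two figures as independent probabilities is an assumption made only for this illustration and is not stated in the cited sources.

    # Illustrative sketch: rough chance that a child of an affected parent
    # displays symptoms, assuming independence of the two figures quoted above.
    P_INHERIT_GENE = 0.50
    P_SYMPTOMS_GIVEN_GENE = {"son": 0.99, "daughter": 0.70}

    for child, penetrance in P_SYMPTOMS_GIVEN_GENE.items():
        print(f"{child}: {P_INHERIT_GENE * penetrance:.1%} chance of displaying symptoms")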
Tics are classified into two groups: complex and simple tics. Simple tics are movements or vocalizations which are completely incomprehensible and meaningless to those not suffering from the disorder (Peiss). Complex tics are movements or vocalizations which make use of more than one muscle group and appear to be meaningful (Peiss). Simple motor tics are: eye blinking, head jerking, shoulder shrugging, or facial grimacing. Simple vocal tics are: throat clearing, coughing, snorting, barking, and yelping. Examples of complex motor tics include: jumping, touching other people and/or things, smelling, stomping loudly, making obscene gestures, and hitting or biting oneself. Complex vocal tics are any understandable words given out of context, and may include echoing and repetition.
Other problems associated with Tourette Syndrome include Attention-Deficit Disorder, Hyperactivity Disorder, disinhibition, obsessive-compulsive disorder, dyslexia and other various learning disabilities, and various sleep disorders. People with Tourette Syndrome do tend to present more Axis I disorders than the rest of the population not afflicted with the syndrome. People with Tourette Syndrome are also afflicted with obsessions of contamination, disease, sexual impulses, self-harm, being "just right", and death.
Sixty percent of those who are diagnosed as having Tourette Syndrome will also display some type of learning disorder. Such disorders include: having difficulty organizing work, having difficulty playing quietly, talking excessively, interrupting and intruding on others, having a shorter attention span, losing necessary materials for school and home, and engaging in physically dangerous activity with no thought given to the ramifications of their actions. Attention-Deficit/Hyperactivity Disorder is also found in sixty percent of those with Tourette Syndrome. Those with ADHD are easily distracted, have difficulty getting along in groups, shift from activity to activity, often blurt out answers before being asked, and fidget with their hands and feet or squirm in their seats. Although these symptoms may seem fairly similar to Tourette Syndrome, it is important to remember that Tourette Syndrome is a genetically inherited disease. These other complexes are merely brought on by the neurological imbalance which affects the brain of those afflicted.
Tourette Syndrome cannot be treated as a whole. Medications must be issued for the different aspects of the disease. For example, tics and movements are treated with neuroleptics, Clonidine, and serotonin drugs, which are Prozac-like. These drugs are very good for treating muscle spasms as well as tremors. However, the side effects may be unpleasant; therefore, patients on such drugs must have their liver and heart monitored. The medical treatment for OCD is augmenting dopamine agents (Orap) or Klonopin. These drugs help curtail depression, but have genital-urinary side effects. The ADHD in Tourette Syndrome is treated with Ritalin, because the tics may not increase if it is used in reasonable dosages. Hyperactivity is also curtailed. The side effects of Ritalin are urinary problems and skin changes, and EEG and EKG monitoring are needed as well. The tics may also be controlled by visits to the doctor's office, talking to friends, staying away from social gatherings, and learning to deal with emotional trauma.
Help, however, is available for Tourette Syndrome. The goals of health professionals concerning this disorder are to clarify reasons for school problems and to develop an individualized multimodality treatment program.
f:\12000 essays\sciences (985)\Biology\Transitions in Water.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Keys To Unlocking Transitions in Water
When examining water's transition from fresh to salt as well as from salt to
fresh, one quickly finds the importance of estuaries. In terms of geology,
present-day estuaries are young and ephemeral coastal features. Today's estuaries
began to take their current form during the last interglacial period, when sea level
rose about 120 m (Braun 36). However, the relatively high sea levels and extensive
estuaries found today have been characteristic of only about 10 to 20 percent of the
last million years. When sea level was lower, during glaciation periods, estuaries
were much smaller than they are at present and were located on what is now the
continental slope. Unless sea level rises, estuaries tend to fill with sediments and
become much smaller. The sediments come from riverborne terrestrial materials
from the eroding continents and from sand transported upstream by the tides from
the continental shelf (Braun 55).
It is in estuaries that most of the world's freshwater runoff encounters the
oceans. Because fresh water is lighter, or less dense, than salt water, unless the
two are mixed by the tides or winds, the fresh water remains at the surface,
resulting in a salinity gradient. Tides force seawater inland as a countercurrent
and produce a saltwater wedge below the freshwater surface waters (Bellamy
62).
Estuaries are always in a state of change and hardly ever in a steady state.
The principal energy source is the tides, which cause estuarine mixing, but wind, wave
motions, and river runoff can also be important locally (Braun 45). Salt water and
fresh water mix to form brackish water. The three main estuarine
zones (saltwater, brackish, and freshwater) can shift seasonally and vary greatly
from one area to another because of changes in river flow. Also, an area of an
estuary can change from stratified to well-mixed during the spring neap-tide
cycles.
The most highly stratified estuaries are the ones that receive a large
amount of fresh water but that have a relatively low tidal range. Partially mixed
estuaries have moderate freshwater inflow and tidal range. The brackish zone of
such estuaries may have a salinity of 2 to 10 parts per thousand (ppt), compared
with the salinity of salt water, which is about 35 ppt. Where there is a large tidal
range but little freshwater inflow mixing is more complete. In coastal lagoons,
where there are large open waters, small tidal range, and low freshwater inputs,
wind is usually a more important mixing agent than tides. The important role estuaries
play in the transition between salt and fresh water is truly evident.
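(A rough numerical sketch of the salinity figures above: the short Python fragment below treats brackish water as a simple conservative mixture of roughly 35 ppt seawater and 0 ppt river water. The mixing fractions are assumed for illustration and are not taken from Braun or Bellamy.)

def mixture_salinity(seawater_fraction, sea_ppt=35.0, fresh_ppt=0.0):
    # Salinity (ppt) of a parcel that is part seawater, part river water,
    # assuming the two simply mix without gaining or losing salt.
    return seawater_fraction * sea_ppt + (1.0 - seawater_fraction) * fresh_ppt

for fraction in (0.06, 0.15, 0.29):
    print(f"{fraction:.0%} seawater -> about {mixture_salinity(fraction):.1f} ppt")
# Roughly 6% to 29% seawater corresponds to the 2-10 ppt brackish range
# described above.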
f:\12000 essays\sciences (985)\Biology\Twinning in Cattle.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Mac Winslow
Dr. Farin
ANS 220
3 December 1996
Twinning in Cattle
Due to the continual fluctuation of the cattle market, cattle producers have been searching for ways to improve their production and increase their profits any way possible. For years genetic engineers have been working hard on improving economic efficiency in cattle. It is their hope that through genetic research they can improve the yield and the income of cattle producers around the world. Research has shown that twinning is one way that farmers can increase their yield. Twinning has a significant influence on producers as well as people who are involved in all realms of agriculture. The reason for this large impact at this time is the fact that the occurrences are limited. However, many producers have a vision that twinning can be more than a once-in-a-blue-moon occurrence. These producers see twinning as a way to dramatically increase their yield per calving season. Producers will increase their income due to more weight per year per cow. It is necessary, however, that the producer be well educated on how to handle twinning in order for it to be successful.
Many agencies see twinning as an economic move upward. The American Breeders Service has made efforts to make semen as well as embryos with high predicted breeding values available to producers. They have been recorded based on
twinning probabilities and ovulation rates. A large amount of work on twinning has also been done by the Meat and Animal Research Center. Since the early eighties, they have located cattle with a high frequency of twinning and been forming a breeding foundation based on this characteristic. "We believe the time has come to make some of these unique genetic resources available to the beef industry through artificial insemination and embryo transfer" (Gregory 23). An extensive amount of research has been done using embryo transfer in cattle. In one study recipients were implanted with either a single embryo, two embryos in one uterine horn, or one embryo in each uterine horn. It is also possible to split embryos using a micromanipulator and implant each half to produce identical twins. On average, about 16% of the cows implanted with two embryos produced twins. When two embryos were implanted and one was placed in each horn, conception rates were comparable with the prior method; however, the twinning rate was much higher when the embryos were in separate horns (73% vs. 45%). For the most part, when one embryo was split in an attempt to produce identical twins, only one of the offspring survived birth (Davis 302).
Many producers see twinning as a possible advancement in economic prosperity for themselves. Scientists have increased the possibility of successful twinning through extensive genetic research. They are now also able to inform the producer of twins through the use of proper palpation techniques as well as ultrasound. Blood can be analyzed in labs to determine fetal weight gain. In addition, nutritious feeds and technology that aids in calf survival have made the possibility of high twinning success rates closer to being a reality. These factors enable the genetic possibilities to be an asset to producers (Gregory 23-24).
"Increased frequency of twinning should increase efficiency of beef production" (Davis 301). Results from twinning are very appealing to a farmer who can use one brood cow to produce two calves per year. Reports show that beef cattle can wean a higher total weight per cow. A twin's average daily gain depends on the environment as well as genetics (Cady 950-956). Single-born calves are reported to have birth weights about 25% higher than twin calves. Over time, however, the twin calves approach the weight of the single calves. At weaning the weight gap decreases to only about 15%. Despite this seemingly large difference in weaning weights, it should be realized that there are two calves to sell from a set of twins as compared to one from a single birth. In addition to their size, twin calves consume smaller rations of feed than their counterparts. From these conditions, promise for economic stimulus is easily seen, especially in beef cattle (Cundiff 3133-3135).
Despite all of these drawbacks, work is being done to help twinning become profitable instead of problem-causing. A gene has been researched that causes twinning in cows. This gene could be selected for through expected progeny difference scores, just as someone might select for birth weight. This gene would not only make the offspring of the bull more likely to have twins, but it would also help the daughters be maternal to both of the offspring instead of nurturing one of them and abandoning the other one.
"At the present, selection for more twin births in dairy cattle results in deleterious effects on the dams" (Beerepoot 1044). Economic calculations have mainly been done on beef cattle so far. The calculations for beef cattle are mainly centered around final sale weight per calving season, rather than milk production as in dairy cattle (1044). Dairy cattle producers usually discourage twinning because of milk loss. Twinning may be directly related to high lactation. Dairy cattle that have superior milk production tend to have higher twinning rates. Even though these cattle were superior in milk, they gave less total milk. An increase in hormones which inhibit lactation may explain the decline in milk production. Thankfully, this milk decrease does not affect the lactation results of the dam in future parturitions. Since the return to estrus takes longer in these dams, there is added milk loss due to loss of productivity (Syrstad 255-261). "[I]n general, there were so many disadvantages that attempts to select for more twin calves in dairy cattle herds should be discouraged" (Beerepoot 1051).
Twinning in cattle has many positive and negative effects. These effects depend on the breed of cattle and the purpose for which the cattle are raised. Producers can move forward in today's economy through the successful use of twinning. However, the producer must be ready, willing, and able to deal with the difficulties that come along with twinning, in order to ensure the survival and success of not only the calves but of the dams. Selective breeding methods can be utilized to choose a base herd for a twinning program. At this time, many producers believe that the negative effects outweigh the benefits. Through continuing research in the area, twinning may become a successful and economical way to raise beef cattle. Since twinning research began, the percentage of beef cattle giving birth to twins has risen by nearly twenty percent. Through research and education of producers, twinning could be one of the beef industry's greatest reproductive achievements.
Twinning is often associated with major management problems, such as an increased frequency of dystocia, retained placenta, and longer rebreeding intervals. "Dystocia is defined as all calvings for which personal assistance is needed, and dystocia depends on the size of the calf, its sex, and the age of the dam" (Beerepoot 1048). "Dystocia accounts for most calf deaths within the first 24 hours of calving" (Taylor 233). Twin calves have a 15% greater chance of undergoing dystocia, and freemartin offspring are likely (Hays and Mozzola 7). Twins have only an 8% lower chance of survival, even when there is dystocia. "Twinning has not been considered [in the past as] desirable in cattle because of increased incidence of retained placenta, reduction in future reproductive efficiency, weaker calves that are more difficult to raise, and reduced milk production by the cows after twinning" (Bearden 100). A cow that retains her placenta has a greater chance of infection and a longer duration before returning to estrus. Cattle producing twin calves are estimated to remain open 19-22 days longer than single calvers (Chapin 1-6). The length of gestation is, on average, seven days shorter in cows birthing twins than in cows birthing singles (Gregory 3135). This can result in a significant loss in the number of offspring and the quantity of milk a cow can produce in her lifetime.
Twin calvers can also be costly due to the fact that they are subject to different postpartum nutritional needs (Cundiff 3133). It has also been observed that there is an increased incidence of abortions during late pregnancy among cows that carry twin fetuses.
"The heritability of twinning is low. A higher incidence of twinning has been reported for certain cow families, but long term selection studies to increase twinning have not greatly increased the twinning rate" (Bearden 100). In many analyses, repeatability was estimated to be less than heritability; this is assumed to be due to small negative environmental covariances in adjacent gestation or estrous cycles (Gregory 3214). The genetic correlation between ovulation rate and twinning was found to be 80% in cattle. Yet in heifers it was about 10% higher. Research recording consecutive ovulation rates can help when establishing a base herd with emphasis on twinning. Using these records, producers can get a handle on relative twinning rates. Sires may also be selected based on the same records from their daughters (Gregory 3212-3218). Ovulation rate in heifers can be used to predict breeding values for twinning. To compute breeding values, a producer should use the average ovulation rate from several estrous cycles. Estrous cycles can be observed at 3-week intervals between puberty and breeding. In a recent test analysis, the genetic correlation with twinning proved to be high. The analysis was not independent because it included many cows and several estrous cycles.
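(The record-keeping idea described above can be sketched in a few lines of Python; the heifer identifiers, ovulation counts, and the number of animals retained below are all invented for illustration.)

# Average each heifer's ovulation count over several estrous cycles
# (observed at roughly 3-week intervals) and keep the highest-ranking
# animals for a twinning-oriented base herd.
ovulation_records = {
    "heifer_01": [1, 1, 2, 1],
    "heifer_02": [2, 2, 1, 2],
    "heifer_03": [1, 1, 1, 1],
}

def mean_ovulation_rate(counts):
    return sum(counts) / len(counts)

ranked = sorted(ovulation_records,
                key=lambda h: mean_ovulation_rate(ovulation_records[h]),
                reverse=True)

herd_size = 2   # number of heifers to retain (assumed)
for heifer in ranked[:herd_size]:
    rate = mean_ovulation_rate(ovulation_records[heifer])
    print(f"{heifer}: mean ovulation rate {rate:.2f}")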
"Adjustments in management practices are required to exploit full potential of twinning to increase efficiency of beef production" (Gregory 3134). Twin carriers need
a great deal of care to ensure a safe gestation period and a safe delivery. Recently more producers have begun to use ultrasound to detect the number of embryos fairly early in gestation. This saves the producer a great deal of money that would otherwise be lost, because paying a veterinarian is much more economical than losing two calves. More postpartum care is also required for the mother and the offspring by the producer. Many times when a cow gives birth to a pair of twins, her maternal instinct only tells her to take care of one of the calves. Because of this, one of the offspring is abandoned and given no care from the dam. This leads to the death of the abandoned offspring.
Even though good breeding practices have proven to be a major factor, the environment will also have a large influence on twinning. Parity seems to have the largest effect, not considering heritability. One percent twinning was displayed in cows in their first parity, yet 6% twinning was displayed in cattle in their third parity. This could be directly related to the cattle's age and the ability of the cow to maintain a biparous pregnancy. Time is a large factor in beginning and maintaining a herd that is prone to having a large twinning percentage. Genetically, twinning is not affected largely by additive variation (Cady 952-956). Age of the mother does not usually affect the proportion of twins born alive; however, the frequency of natural twinning increases with age and parity of the dam (Davis 306). Most twinning research has been done on crossbreeds, which is not a true estimate of all cattle because of possible hybrid vigor concerning certain traits. Not much research has been done on
inbreeding or on differences between breeds. More will be learned about the genetic variation responsible for twinning once these ideas have been researched further.
Works Cited
Davis, M.E.; W.R. Harvey; M.D. Bishop; W.W. Gearheart. "Use of Embryo Transfer To Induce Twinning in Beef Cattle: Embryo Survival Rate, Gestation Length, Birth Weight and Weaning Weight of Calves." J. of Anim. Science, 1989. 67: 301-310.
Cundiff, L.V.; Gregory, K.E.; Echternkamp, S.E.; Dickerson, G.E. "Twinning in Cattle III. Effects of Twinning on Dystocia, Reproductive Traits, Calf Survival, Calf Growth, and Cow Productivity." J. of Anim. Science, 1990. 68: 3133-3144.
Bearden, J.W.; M.D. Holland; K.L. Hossner; J.D. Tatum. "Serum Insulin-Like Growth Factor I Profiles In Beef Heifers With Single and Twin Pregnancies." J. of Anim. Science, 1988. 66: 3190-3196.
Cady, R.A.; L.D. Van Vleck. "Factors Affecting Twinning and Effects of Twinning on Holstein Dairy Cattle." J. of Anim. Science, 1978. 46: 950-956.
Taylor, Robert E. Beef Production and the Beef Industry. 1984. Burgess Publishing Company: Minneapolis.
Gregory, J.E. Reproduction in Farm Animals. 1980. Lea & Febiger: Philadelphia.
Beerepoot, R.H. Reproduction of Farm Animals. 1982. Longman Inc: New York.
Russell, Peter J. Genetics. 1996. Library of Congress: Washington DC.
f:\12000 essays\sciences (985)\Biology\Two Brains .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Two Brains?
Your brain has two sides. And each has a distinctly different way of
looking at the world.
Do you realize that in order for you to read this article, the two
sides of your brain must do completely different things? The more we
integrate those two sides, the more integrated we become as people.
Integration not only increases our ability to solve problems more
creatively, but to control physical maladies such as epilepsy and migraines,
replace certain damaged brain functions and even learn to "think" into the
future. Even more startling is evidence coming to light that we have
become a left-brain culture.
Your brain's right and left side have distinctly different ways of
looking at the world. Your two hemispheres are as different from each
other as, oh, Michael Wilson and Shirley MacLaine. The left brain controls
the right side of the body (this is reversed in about half of the 15
percent of the population that is left-handed) and, in essence, is logical,
analytical, judgemental and verbal. It's interested in the bottom line, in
being efficient. The right brain controls the left side of the body and
leans more to the creative, the intuitive. It is concerned more with the
visual and emotional side of life.
Most people, if they thought about it, would identify more with
their left brain. In fact, many of us think we are our left brains. All
of that non-stop verbalization that goes on in our heads is the dominant
left brain talking to itself. Our culture- particularly our school system
with its emphasis on the three Rs (decidedly left-brain territory) -
effectively represses the intuitive and artistic right brain. If you don't
believe it, see how far you get at the office with the right brain activity
of daydreaming.
As you read, your left side is sensibly making connections and
analysing the meaning of the words, the syntax and other complex relation-
ships while putting it into a "language" you can understand. Meanwhile,
the right side is providing emotional and even humorous cues, decoding
visual information and maintaining an integrated story structure.
While all of this is going on, the two sides are constantly
communicating with each other across a connecting fibre tract called the
corpus callosum. There is a certain amount of overlap but essentially
the two hemispheres of the brain are like two different personalities
that working alone would be somewhat lacking and overspecialized, but
when functioning together bring different strengths and areas of expertise
to make an integrated whole.
"The primitive cave person probably lived solely in the right
brain," says Eli Bay, president of Relaxation Response Inc., a Toronto
organization that teaches people how to relax. "As we gained more control
over our environment we became more left-brain oriented until it became
dominant." To prove this, Bay suggests: "Try going to your boss and saying
"I've got a great hunch." Chances are your boss will say, "Fine, get me
the logic to back it up."
The most creative decision making and problem solving come about
when both sides bring their various skills to the table: the left brain
analysing issues, problems and barriers; the right brain generating fresh
approaches; and the left brain translating them into plans of action.
"In a time of vast change like the present, the intuitive side of
the brain operates so fast it can see what's coming," says Dr. Howard
Eisenberg, a medical doctor with a degree in psychology who has studied
hemispheric relationships. "The left brain is too slow, but the right
can see around corners."
Dr. Eisenberg thinks that the preoccupation with the plodding left
brain is one reason for the analysis paralysis he sees affecting world
leaders. "Good leaders don't lead by reading polls," he says. "They have
vision and operate to a certain extent by feel."
There are ways of correcting our cultural overbalance. Playing
video games, for example, automatically flips you over to the right brain,
Bay says. "Any artistic endeavour, like music or sculpture, will also do
it."
In her best-selling book "Drawing on the Right Side of the Brain"
(J.P. Tarcher Inc., 1979), Dr. Betty Edwards developed a series of exercises
designed to help people tap into the right brain, to actually see or process
visual information differently. She cites techniques that are as old as
time, and modern high-tech versions such as biofeedback.
An increasing number of medical professionals believe that being in
touch with our brain, especially the right half, can help control medical
problems. For example, Dr. Eisenberg uses what he calls "imaginal
thinking" to control everything from migraines to asthma to high blood
pressure. "We have found," he says, "that by teaching someone to raise
their temperature - by imagining they are sunbathing or in a warm bath
- they can control their circulatory system and therefore the migraine."
Knowledge of our two-sided brain began in the mid-1800's when
French neurologist Paul Broca discovered that injuries to the left side of
the brain resulted in the loss of speech. Damage to the right side,
however did not. Doctors speculated over what this meant. Was the brain
schizophrenically divided and non-communicative?
In the early 1960s, Nobel Prize winner Dr. Roger Sperry proved that
patients who had their corpus callosum severed to try and control epileptic
seizures could no longer communicate between their hemispheres. The
struggle can be seen quite clearly in the postoperative period when the
patient is asked to do a simple block design. This is a visual, spatial
task that the left-hand (controlled by the right brain in most of us) can
do very well but the right hand (controlled by the language-oriented left
brain) does poorly. The right hand may even intervene to mix up the
design.
Some people with epilepsy can control their seizures by concentrating
activity on the hemisphere that is not affected. In the case of left lobe
epilepsy, this can be done by engaging in a right-brain activity such as
drawing.
One intriguing question is why we have two hemispheres at all? "In
biology you always have the same thing on one side as the other - ears,
lungs, eyes, kidneys, etc." explains Dr. Patricia De Feudis, director of
psychology at Credit Valley Hospital in Mississauga, Ont. "But with the
brain there is more specialization. You can have something going on one
side and not be aware of it in the other."
Our knowledge of the brain in general is only beginning. We know
even less about how the hemispheres operate. Getting in touch with how the
two sides work can only do us good, if just to keep us from walking around
"half-brained".
f:\12000 essays\sciences (985)\Biology\Variations in High Altitude Populations.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Some ten to twenty-five million people (that is less than 1% of the earth's population) currently make it[high altitude zones] their home(Moran,143)." The adjustment high altitude populations must make are firstly physical and secondly cultural. Although most people adapt culturally to their surroundings, in a high altitude environment these cultural changes alone aren't enough. Many physical adaptations that reflect "the genetic plasticity common to all of mankind(Molinar,219)" have to be made to survive and even more than that thrive in this type of environment.
In this paper I will describe the high altitude stresses, along with the adaptations made by the populations living in them. The two high altitude populations which I will examine in this paper are the Tibetan people of the Asian Himalayas and the Quechua of the South American Andes.
The Quechua are an Indian people who inhabit the highlands of Peru and Bolivia. They speak Quechua, which is a branch of the Andean-Equatorial stock. They show many remnants of Inca heritage in their houses, music, and religion, which has pagan rites under the Roman Catholic surface. Their villages consist of kin groups. Their marriage partners are taken from within each village. Agriculture is the dominant subsistence pattern in the central Andean region, but the Nunoa region where the Quechua reside can only support a few frost-resistant crops, which include bitter potato, sweet potato, and a few grain crops of quinoa and canihua. The rest of the fruits and vegetables of the Quechua come from the eastern mountains on their way to the markets. The most important subsistence pattern for the Quechua is stock raising, which is limited to the few animals that do well at high altitudes. Their stock includes alpacas, llamas and sheep.
In the Himalayas only "5% of the geographical area(Baker,36)" can be used for agriculture. The main crops are barley, wheat and buckwheat. The crops are grown between 3,500 and 4,300 meters. These few crops are threatened by drought, hail, frost, snow and erosion. The Himalayas also have extensive pasture areas which are used by the nomadic and sedentary peoples. The higher regions have pastures where yak, sheep, and goats are the main animals used.
At high altitude there are many environmental stresses that the people must endure. They include hypoxia, intense ultraviolet radiation, cold, aridity, and a limited nutritional base. The people adapt to these stresses in many ways.
Hypoxia, or low oxygen pressure, is the most prominent stress which populations living at high altitudes must deal with. "Hypoxia results whenever either physiological,pathological, or environmental conditions cannot deliver adequate supply of oxygen to the tissues. Since air is compressible, air at high altitudes is less concentrated and under less pressure. At 4500 meters the partial pressure of oxygen is decreased by as much as 40%, in comparison to pressure at sea level. This reduces the amount of oxygen finally available to the tissue(Moran,147-148)." The adaptations to hypoxia are all geared towards increasing the oxygen to the tissues.
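(As a rough numerical check on the pressure figure quoted above, the Python sketch below uses the standard exponential-atmosphere approximation P(h) = P0 * exp(-h / H). The 7,400 m scale height and 20.9% oxygen fraction are textbook values, not figures taken from Moran.)

import math

SEA_LEVEL_PRESSURE_MMHG = 760.0   # standard sea-level barometric pressure
OXYGEN_FRACTION = 0.209           # oxygen fraction of dry air
SCALE_HEIGHT_M = 7400.0           # approximate atmospheric scale height

def partial_pressure_o2(altitude_m):
    # Barometric pressure falls off roughly exponentially with altitude;
    # the oxygen fraction stays the same, so pO2 falls in proportion.
    pressure = SEA_LEVEL_PRESSURE_MMHG * math.exp(-altitude_m / SCALE_HEIGHT_M)
    return OXYGEN_FRACTION * pressure

for altitude in (0, 4500):
    po2 = partial_pressure_o2(altitude)
    print(f"{altitude:>5} m: pO2 about {po2:.0f} mmHg "
          f"({po2 / partial_pressure_o2(0):.0%} of the sea-level value)")
# At 4,500 m this simple model gives roughly 54% of the sea-level pO2,
# broadly consistent with the "as much as 40%" reduction quoted above.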
One adaptation to hypoxia is an increase of red blood cells in circulation. A person living in high altitude conditions is likely to have "30% more red blood cells(Molinar,218)" than a person living at sea level. "This greater number of red blood cells increases the hemoglobin concentration, which in turn increases the oxygen-carrying capacity per unit volume of blood(Molinar,219)." This then increases the oxygen sent to the tissues. Respiration and cardiac output are also increased. There is an increase in the capillary network to aid diffusion of oxygen to the tissues. There have also been cellular changes that increase the resistance of the tissues to low oxygen.
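(The oxygen-carrying-capacity argument above can be put in numbers. The binding capacity of about 1.34 mL of oxygen per gram of hemoglobin is a standard physiological constant; the 15 g/dL sea-level hemoglobin concentration and the 30% high-altitude increase are assumed here for illustration.)

ML_O2_PER_G_HB = 1.34        # oxygen bound per gram of fully saturated hemoglobin
sea_level_hb_g_dl = 15.0     # assumed sea-level hemoglobin concentration (g/dL)
high_altitude_hb_g_dl = sea_level_hb_g_dl * 1.30   # roughly 30% more red cells

for label, hb in (("sea level", sea_level_hb_g_dl),
                  ("high altitude", high_altitude_hb_g_dl)):
    capacity = ML_O2_PER_G_HB * hb   # mL of oxygen per dL of blood
    print(f"{label:>13}: about {capacity:.1f} mL O2 per dL of blood")
# More hemoglobin per unit volume means more oxygen carried per unit of blood,
# which is the point of the adaptation described above.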
Many other effects are felt from hypoxia. Growth and development are one of the many areas affected. Kruger and Arias-Stella compared two populations, at 4,570 meters and at 200 meters, and found the mean placental weight of the high population to be 561 grams, compared to 500 grams for the low-land population. Placenta volumes did not differ, showing that the placenta at the high altitude was denser. The denser placenta offers the fetus more protection and greater oxygen. The birth weights at high altitudes are uniformly lower than those at low altitude. This is probably due to hypoxia, but the nutritional status of the mothers must also be taken into account. A study by Frisancho, Velasquez, and Sanchez demonstrated that subjects with short stature attained a greater maximal aerobic capacity than their counterparts of a larger body size when tested under identical conditions. It is known that "birth weight is said to be correlated with maternal size particularly stature(Baker,95)"; therefore small birth weight is an effect of the adaptation of body size to deal with hypoxia.
Growth and development in high altitude populations is considerably slower than low-land growth. This may be due to the growth of their large chests and the extra production of red blood cells by the bone marrow. This extra-large chest growth increases the lung capacity to take in more oxygen. In the Himalayas, however, this increased chest size is not a factor.
Baker shows that growth in stature occurs until the twenty-second year. Sexual dimorphism isn't defined until the sixteenth year. Growth spurts also take longer to occur: fifteen to nineteen for boys and fourteen to seventeen for girls. The mean weight for Sherpas and Quechua is 54 kilograms. Height is 140-160 centimeters. The onset of menses differs between the Sherpas and the Quechua. The mean age for Sherpa women to begin menses is eighteen. For the Quechua it is thirteen, although this is compared to an Andean lowland mean of eleven.
Cold is another stress people of high altitudes must contend with. Three things help these populations: "one is a lack of dramatic fall in core temperature 2) is a slightly elevated metabolic rate and 3) consistently high extremity surface temperatures(Baker,277)." The elevated metabolic rate generates greater body heat, and greater blood flow to the extremities helps maintain a warmer skin surface. This is necessary because of the heat lost through the extremities. Keeping the core temperature high is another adaptation which keeps the bodies of these people warm even while at rest in their harsh environment.
There are many non-physical adaptations that people make to help them adjust to the high altitude environment. Clothing is one of these adaptations. Andean men wear "woolen homespun pants which are mid-calf in length. Worn over one or more layers of loosely knit woolen underwear. A knitted,sleeveless undershirt is used under a cotton shirt with long sleeves...A colorful jacket, matching the pants, is also used. The outfit is completed with a felt hat and a poncho.(Baker,263)." "Women wear several woolen skirts and a long sleeved jacket of similar material. They also may use knitted underwear but like the men wear a manufactured cotton blouse. Women carry over their shoulders a shawl which is similar in construction to the poncho...Skirts are usually dark red or black as are jackets(Baker,263)." Footwear is normally not worn. Shoes would be detrimental during the rainy season because of the extra loss of heat. Also in the hot weather the feet would sweat. In the Himalayas "Women are shown wearing long-sleeved
cotton blouses which are covered by woolen jackets and ankle-length skirts. Men's dress also seems substantial, with long jackets, long pants and heavy coats. As among Quechua Indians, most Sherpas [the name of one group living in the Himalayas] seem to walk barefoot(Baker,261)." There have been no detailed studies of Sherpa clothing.
Houses are another adaptation people have made. In the Andes there are two basic house designs. "The first uses adobe or sod and is a permanent building. This type is usually found in towns and represents a major investment. The second design is constructed of piled fieldstone, is semipermanent, and is cheap to construct. It is more characteristic of areas where the population is largely pastoral. The adobe building has a rectangular floor plan with average dimensions of 5 meters by 10 meters. The roof is gabled with a peak of 4 meters to 5 meters from the ground. Frequently the first meter of the walls will be made of stone to resist erosion due to rain draining from the roof. The roof is typically constructed of tile, grass, or in more affluent families, corrugated tin. The door is small and its height seldom exceeds 1.3 meters. Doors are usually wooden, but in some cases blankets or old ponchos may be used to cover the openings. Walls are usually plastered with mud to form an air-tight structure. The roof is tightly fitted, regardless of the material used. In some cases a wooden floor may be added but usually a natural dirt floor is preferred. Rooms may be employed for cooking, sleeping, or storage(Baker,260)."
For the semipermanent "The floor plan of these houses is circular or rectangular with the upper walls sloping slightly inward. The roof is always constructed of grass and supported by tree limbs. The diameter is quite variable as is the height. The walls are made of fieldstone. If the house is to be occupied for an extended period of time the stones are carefully piled to eliminate cracks. Those large holes which remain and those at eye level are used as windows; that is, they serve to admit daylight and provide for observation of the surrounding terrain. These houses may have either a wooden door or a piece of old cloth may be used to cover the entrance(Baker,259)."
Baker measured the interior temperatures of adobe houses during the cold and dry season and found that well constructed adobe houses could maintain an interior nighttime temperature 7 degrees Celsius above the ambient temperature. The thermal protection offered by stone houses seems to be less than that of the adobe structures. Baker reports that there is only 3.7 degrees Celsius difference between indoor and outdoor temperatures.
The houses of the Himalayas are "constructed of heavy stone and have wooden roofs which are held in place by stones. Most are two story structures whose interior dimensions are rather small. They are apparently tightly constructed because the first floor is reserved as animals quarters while the second is reserved for human habitation. Cooking is done indoors with the smoke escaping from a small hole in the roof. The use of the first floor as animal quarters might add to the insulation between the floor of the human section and the ground.(Baker,261)" There are no climatological data on these houses.
Scheduling of work is another way people cope. They normally rise after dawn and spend the day "outside taking advantage of the solar radiation" (Moran,160) while working and playing. At sunset everyone goes to sleep. Also, at night most families sleep together in bed to share body heat.
The adaptations which have been made by these groups also have a down side to them. There is, as we've seen, slower growth, higher infant mortality, and even an increased frequency of respiratory diseases. Along with this there is a decrease in many micro-organisms which cause infectious diseases. As we consider this give and take, and whether we would ever subject ourselves to these things, we must appreciate what these people go through. High altitude has many stresses to which people must adapt. Although this life is hard, the people would have it no other way, and we should respect and commend them for that.
Bibliography
Allan, Nigel, Gregory Knapp, and Christoph Stadel, eds. 1988. Human Impact in Mountains. Rowman & Littlefield: New Jersey.
Baker, Paul, and Michael Little, eds. 1976. Man in the Andes. Dowden, Hutchinson and Ross: Pennsylvania.
Baker, Paul, ed. 1978. The Biology of High Altitude Peoples. Cambridge University Press: London.
Gibbons, Ida. 1996. Andean Cultures Web Page. Gibbons@andes.org.
Molinar, Stephen. 1992. Human Variation. Prentice Hall: New Jersey.
Monge, Carlos. 1948. Acclimatization in the Andes. The Johns Hopkins Press: Maryland.
Moran, Emilio. 1982. Human Adaptability. Westview Press: Colorado.
Occasional Papers in Anthropology. 1968. High Altitude Adaptation in a Peruvian Community. Pennsylvania State University: Department of Anthropology.
U.S. Department of Health and Human Services. 1983. Adjustment to High Altitude.
References
Baker, Paul, ed. 1978. The Biology of High Altitude Peoples. Cambridge University Press: London.
Gibbons, Ida. 1996. Andean Cultures Web Page. Gibbons@andes.org.
Molinar, Stephen. 1992. Human Variation. Prentice Hall: New Jersey.
Moran, Emilio. 1982. Human Adaptability. Westview Press: Colorado.
Outline
Thesis:The purpose of this paper is to describe the high altitude stresses and the general adaptations made by the Tibetan population in the Himalayas and the Quechua in the Andes.
I Introduction
II Background
A Quechua People
B Tibetan People
III General Adaptations
A Physical
1 Growth
2 Development
3 Core temperature
4 Extremity temperature
B Non-Physical
1 Clothing
2 Houses
3 Schedule
IV Conclusion
High Altitude Adaptations
Jessyca Caumo
26 November 1996
Human Variation
f:\12000 essays\sciences (985)\Biology\Vertebrate adaptations for terrestrial Life .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AP-Biology Essay on vertebrate structural adaptations for terrestrial life. (From an actual past AP-BIOLOGY test)
The problems of survival of animals on land are very different from those of survival of animals in an aquatic environment. Describe four problems associated with animal survival in terrestrial environments but not in aquatic environments. For each problem, explain a physiological or structural solution.
Four problems faced by animals on land are breathing (respiration), water conservation in excretions, successful reproduction, and producing an egg which can survive outside of the water.
All animals need to respire, but I have no idea why. Maybe you would like to answer that? Aquatic animals use gills, which are outgrowths from the body which increase surface area over which gas exchange can occur. Inside the gills of aquatic animals, the circulatory system removes oxygen, and delivers waste carbon dioxide. Land vertebrates have developed a different approach to the problem of gas exchange, as water is not present in all of the terrestrial environment. Terrestrial vertebrates have developed lungs to solve this problem. Air enters through the nasal passages, or the mouth, passes through the trachea, then branches off at the two bronchi, and goes through many branching passages called bronchioles, which end in alveoli. Alveoli are sack-like structures where the circulatory system meets the respiratory system.
Since terrestrial vertebrates do not live in water, they need to develop a means of conserving water. One way we do this is through our excretions. Nitrogen forms a major waste product in animals. When amino acids and nucleic acids are broken down, they release toxic ammonia (NH3). To rid the body of this toxin, several mechanisms have evolved, each appropriate to the habitat or survival of the animal. Aquatic animals secrete NH3 directly into the surrounding water. Land animals cannot do this because of the toxicity of NH3. Instead, NH3 is converted into urea in our livers. Urea is significantly less toxic than NH3, and thus requires less water to excrete in the urine. The reason we need the water to excrete this is because the water is needed to dilute the urea (or NH3 if we did excrete it in that form), in order to make it less toxic. Birds excrete ammonia in the form of uric acid; that's what they're always dropping on our heads. Those mangy little rats with wings... have you ever wondered why we let those little pests run free in the cities, but we won't let dogs and cats free, even though most people consider the birds more of a nuisance? I didn't think so; anyway:
A third adaptation to terrestrial life is internal fertilization. In aquatic animals, many eggs are laid, usually allowing the water, and chance, to fertilize the eggs. We can't do this on land, because the eggs and sperm would dry out, and would stay in the same place, unless they could walk (he he he). To solve this problem, we have developed a system of internal fertilization. The sperm are released directly inside the female, providing an increased chance of fertilization.
The amniotic egg of birds and reptiles represents a transition to terrestrial life. The egg provides conditions similar in some ways to the aquatic environment. In the aquatic environment, eggs have soft, usually permeable shells, which do not have to worry about losing water. The amniotic cavity formed by the amnion is fluid-filled, protecting the embryo. The egg case, often leathery in reptiles and calcified in birds, protects the contents while permitting gas to be exchanged with the surroundings. This egg also prevents the evaporation of water from the embryo; since the egg cannot walk to the store and buy some Evian, it needs all the water it has.
f:\12000 essays\sciences (985)\Biology\Viruses 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ebola
With a ninety percent mortality rate, high mutation capability, and opportunities for genetic re-assortment, Ebola Zaire is one of the most deadly and unforgiving viruses in the known world. A new family of viruses, termed filoviruses, was first discovered in 1967 in Marburg, West Germany. Ebola Zaire was first isolated in 1976 at the Centers for Disease Control, at Porton Down in the UK, and at the Institute for Tropical Diseases in Antwerp, Belgium. Its immunological uniqueness was established in the laboratory of Dr. Karl Johnson at the Centers for Disease Control in Atlanta. Since then, five more viruses have been included in this family.
It is a biosafety level 4 pathogen, meaning there is no known cure. It is one of the hardest and most deadly viruses to work with and study. There are only two labs in the world that are effectively capable of, and authorized for, handling the hot virus. Both of these labs are in the United States: the United States Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland, and the Centers for Disease Control (CDC) in Atlanta, Georgia.
Ebola Zaire is great at what it does, too well. It kills so quickly that the index case, the first person to start an outbreak, is usually dead before the proper authorities can show up and try to trace back where it came from, defying a decent strategy to keep people away from its natural reservoir. However, it destroys the body so quickly that it doesn't have a chance to spread very far, at least in humans. This virus is a true paradox.
Ebola Zaire is a nasty little virus with no known cure. The natural reservoir for the virus is still unknown. If the host could be found, a serum could be made from the antibodies in its blood. It must have a stable host, one with which it has reached equilibrium. Collection of animal specimens is currently underway to determine the source. The possible species in tropical Africa are so numerous that a long and lucky search is likely to be required.
The virus itself can be destroyed by ultraviolet light, gamma irradiation, lipid solvents, detergents, and common disinfectants.
Moving quickly thanks to modern technology, and with no known hosts, this virus could one day become a worldwide problem. Filoviruses have an increased potential for rapid evolution because of the high error rate of the ribonucleic acid polymerases they use to replicate their genomes, and so could easily become a problem. With major airports and fast planes, the virus could incubate in a body and spread all over the world very easily.
f:\12000 essays\sciences (985)\Biology\Viruses Complex Molecules or Simple Life Forms .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Viruses: Complex Molecules or Simple Life Forms?
Viruses have been defined as "entities whose genomes are elements
of nucleic acid that replicate inside living cells using the cellular
synthetic machinery, and cause the synthesis of specialised elements that
can transfer the genome to other cells." They are stationary and are unable
to grow. Because of all these factors, it is debatable whether viruses
are the most complex of molecules or the simplest life forms. While the
definition of living organisms must be adapted, the majority of evidence
leads to the classification of viruses as living organisms.
Viruses are composed of a nucleic acid core, a protein capsid, and
occasionally a membranous envelope. The nucleic acid core is composed of
either DNA or, in the case of retroviruses, RNA, but never both. In
retroviruses, the RNA gets transcribed to DNA by the enzyme reverse
transcriptase. The protein capsid is a protein layer that wraps around
the virus. There are four basic shapes of viruses. The tobacco mosaic virus,
adenovirus, influenza virus, and T-even bacteriophage are each examples of
a different virus structure. Each individual protein subunit composing the
capsid is a capsomere.
The tobacco mosaic virus has a helical capsid and is rod shaped.
The adenovirus is polyhedral and has a protein spike at each vertex. The
influenza virus is made of a flexible, helical capsid. It has an outer
membranous envelope that is covered with glycoprotein spikes. The T-even
bacteriophage consists of a polyhedral head and a tail. The tail is used
to inject DNA into a bacterium while the head stores the DNA.
Basic life is defined as the simplest form capable of displaying
the most essential attributes of a living thing. This makes the only real
criterion for life the ability to replicate. Only systems containing nucleic
acids are capable of this phenomenon. With this reasoning, a better
definition is the unit element of a continuous lineage with an individual
evolutionary history. Because of viruses' inability to survive when not in
a host, they must have evolved from other forms of life. The origin of
viruses is easy to theorize about, so many hypotheses have been made.
One such hypothesis is that viruses were once complete living
parasites. Over time they have lost all other cellular components. This is
backed up by the idea that all cells degenerate over time.
Some people think along very similar lines that viruses are
representatives of an early "nearly living" stage of life. This goes along
with the first hypothesis in that it accounts for a loss of components. All
creatures that become parasitic can be seen losing their obsolete functions
and structures. An example of this is the flea. Fleas evolved from
winged ancestors but have discarded their unneeded wings.
This theory, when applied, suggests that at least some branches of
viruses have evolved from bacteria because of their similar natures.
Scientists say that at one point viruses could have been independent
organisms. As they slowly became parasitic, the unused structures for protein
and energy synthesis were lost, along with the inhibiting cell wall. While
viruses do need a host cell to complete many important functions of living
organisms, they should still be considered living themselves.
The ability to replicate is important to the classification of an
item as living. Within the host, viruses are able to replicate, evolve, and
even mutate. They are deeply intertwined in the life process by this
dependency on a host.
Viruses are very specific about what they can use as a host. Despite
this specificity, many viruses can infect members of different species,
genera, and even phyla. A lock-and-key fit determines the host, or host
range. This works very much like an enzyme's active site.
Once the virus has found a host cell, the virus uses the host's
nucleotides and enzymes to replicate its DNA. Other materials and machinery
of the host cell produce the virus's capsid proteins. The viral DNA and
proteins then join to make a new copy of the virus.
While viruses are inactive when in transport between hosts, the
arguments are overwhelmingly in favor of considering viruses living organisms.
Through their parasitic nature, they are able to fulfill most qualities of
living organisms. Their behavior and complexity also lead to this
classification. While they are not the textbook example of living organisms,
it is generally agreed that there will always be exceptions to the rules.
Viruses deserve to take their rightful place among the ranks of living
organisms.
f:\12000 essays\sciences (985)\Biology\Viruses.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A flagellum is a whiplike tail that helps organisms living in moist places to move.
The characteristics of protists are that they are eukaryotic organisms, they are one-celled or many-celled but do not have the complex organization found in plants and animals, and some make their own food while others can't. Protists are plantlike, animallike and funguslike.
A virus is a microscopic particle made up of either a DNA or RNA core covered by a protein coat. Viruses are so small that an electron microscope is needed to see them. Viruses are not classified in any kingdom because they are not cells. They show almost none of the characteristics of living things. The classification of a virus is based on its shape, the kind of nucleic acid it contains, and the type of organism the virus infects. The protein coat of a virus gives the particle its shape. Some viruses have tails, some are many-sided, looking like a soccer ball, and others look like rods. Others are sphere shaped. An active virus goes through these steps:
Attach--- A specific virus attaches to the surface of a specific bacterial cell.
Invade--- The nucleic acid of the virus injects itself into the cell
Copy--- The viral nucleic acid takes control of the cell, and the cell begins to make new virus particles.
Release--- The cell bursts open and hundreds of new virus particles are released from the cell. These new viruses go on to infect other cells.
Algae are plantlike protists. Some are one-celled and others are many-celled. All algae can make their own food because they have a pigment called chlorophyll in their chloroplasts. These are the phyla of algae:
f:\12000 essays\sciences (985)\Biology\We are not alone.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
OUTLINE
I. THINGS IN THE SKY
A. THE FIRST DOCUMENTED SIGHTING
B. THE FEVER SPREADS
1. PILOT ENCOUNTERS
2. THE LIGHTS IN THE SKY
II. DENTS IN THE EARTH
III. UNEXPLAINED PHENOMENON
A. THE WRITING ON THE WALL
B. GEODES
IV. WHAT ABOUT RELIGION?
A. THE CHRISTIAN BIBLE
B. THE ANCIENT GREEKS
C. THE AMERICAN INDIAN
V. CONCLUSION
WE ARE NOT ALONE
WE ONCE BELIEVED THAT THE EARTH IS THE ONLY PLANET IN THE
UNIVERSE THAT SUPPORTS LIFE. TODAY THERE IS OVERWHELMING
EVIDENCE THAT NOT ONLY SUGGESTS, BUT SUPPORTS THE VERY REAL
POSSIBILITY THAT WE MAY SHARE THE UNIVERSE WITH OTHER
INTELLIGENT BEINGS.
ON JUNE 24TH, 1947, WHILE SEARCHING FOR THE REMAINS OF A
DOWNED MARINE C-46 TRANSPORT LOST SOMEWHERE IN THE MOUNT
RANIER AREA, A YOUNG IDAHOAN BUSINESSMAN NAMED KENNETH
ARNOLD SPOTTED SOMETHING THAT WOULD CHANGE HIS LIFE FOREVER.
JUST NORTH OF HIS POSITION HE SPOTTED NINE CIRCULAR AIRCRAFT
FLYING IN FORMATION AT AN ALTITUDE OF 9,500 FEET AND AN
UNPRECEDENTED ESTIMATED AIRSPEED OF 1,700 MPH. ACCORDING TO HIS
ESTIMATE THE AIRCRAFT WERE APPROXIMATELY THE SIZE OF A DC-4
AIRLINER (JACKSON 4).
THIS ACCOUNT WAS THE FIRST SIGHTING TO EVER RECEIVE A
GREAT DEAL OF MEDIA ATTENTION. THIS SIGHTING GAVE BIRTH TO THE
PHRASE "FLYING SAUCER" COINED BY A REPORTER NAMED BILL
BEGRETTE. ALTHOUGH NOT THE FIRST UFO SIGHTING IN HISTORY,
KENNETH ARNOLD'S ACCOUNT IS CONSIDERED TO BE THE FIRST
DOCUMENTED UFO SIGHTING. THE FOLLOWING DAY MR. ARNOLD
DISCOVERED THAT IN ADDITION TO HIS SIGHTING THERE WERE SEVERAL
OTHERS IN THE MOUNT RANIER AREA THAT SAME DAY (JACKSON 6).
WHEN MOST OF US THINK OF UFO SIGHTINGS, WE PICTURE AN
UNEMPLOYED, HALF-CRAZED, ALCOHOLIC HICK LIVING IN A TRAILER
PARK IN THE MIDDLE OF SMALL TOWN, USA. OFTEN TIMES THIS
DESCRIPTION, ALTHOUGH A LITTLE EXAGGERATED, SEEMS TO FIT FAIRLY
WELL. IN THE PAST, WHEN THE AVERAGE PERSON SPOTTED A UFO THEY
WERE QUICKLY DISCOUNTED AS A KOOK OR CON-ARTIST IN SEARCH OF
EITHER ATTENTION OR MONETARY REWARD. IT WASN'T UNTIL MORE
REPUTABLE FIGURES IN OUR SOCIETY BEGAN TO COME FORWARD THAT
WE STARTED LOOKING AT THIS ISSUE A LITTLE MORE SERIOUSLY. A 1957
ARTICLE ENTITLED "STRANGE LIGHTS OVER GRENADA," WRITTEN BY
AIME' MICHEL, DESCRIBES JUST SUCH AN ACCOUNT:
AT 10:35 P.M. ON SEPTEMBER THE 4TH, 1957, CPT.
FERREIRA ORDERED HIS WING TO ABANDON A
PLANNED EXERCISE AND EXECUTE A 50 DEGREE
TURN TO PORT. FERREIRA WAS ATTEMPTING TO
GET A CLOSER LOOK AT WHAT HE DESCRIBED AS
BRILLIANT, PULSATING LIGHT HANGING LOW
OVER THE HORIZON. WHEN THE TURN WAS
COMPLETED HE NOTICED THAT THE OBJECT HAD
TURNED TOO. IT WAS STILL DIRECTLY OVER HIS
LEFT. THERE WAS ABSOLUTELY NO DOUBT
THAT THE ORANGE LIGHT WAS SHADOWING THE
F-84S. FOR ANOTHER 10 MINUTES, IT FOLLOWED
THE JETS WITHOUT CHANGING DIRECTION OR
APPEARANCE. THE PILOTS WATCHED AS FOUR
SMALL YELLOW DISCS BROKE AWAY FROM THE
LARGE RED OBJECT AND TOOK UP A FORMATION
ON EITHER SIDE OF IT. ALL AT ONCE THE LARGE
LUMINOUS DISC SHOT VERTICALLY UPWARD
WHILE THE SMALLER DISCS SHOT STRAIGHT
TOWARDS THE F-84S. IN AN INSTANT THE FLAT
DISC SPED OVERHEAD IN A HAZY BLUR AND
VANISHED. WHEN CPT. FERREIRA WAS
QUESTIONED BY PORTUGUESE AIR FORCE
INVESTIGATORS, HE WAS QUOTED AS SAYING:
"PLEASE DON'T COME OUT WITH THE OLD
EXPLANATION THAT WE WERE BEING CHASED
BY THE PLANET VENUS, WEATHER BALLOONS,
OR FREAK ATMOSPHERIC CONDITIONS. WHAT
WE SAW UP THERE WAS REAL AND
INTELLIGENTLY CONTROLLED AND IT SCARED
THE HELL OUT OF US." (32)
THIS IS ONLY ONE OF LITERALLY HUNDREDS OF PILOT ACCOUNTS
THAT HAVE BEEN DOCUMENTED AND CROSS VERIFIED BY OTHER
SOURCES. TO DATE THE PORTUGUESE GOVERNMENT HAS TAKEN NO
OFFICIAL POSITION AS TO WHAT THE LUMINOUS DISCS WERE.
THE UNITED STATES HAS HAD MORE THAN ITS FAIR SHARE OF
UNEXPLAINED AERIAL OBJECTS. IN FEBRUARY OF 1960 THE N.A.A.D.S.
(NORTH AMERICAN AIR DEFENSE SYSTEM) SPOTTED A SATELLITE OF
UNKNOWN ORIGIN ORBITING THE EARTH. THEY KNEW THAT IT WASN'T A
SOVIET SATELLITE BECAUSE IT WAS ORBITING PERPENDICULAR TO THE
TRAJECTORY PRODUCED BY A SOVIET LAUNCH. IT ALSO HAD A MASS
ESTIMATED AT 15 METRIC TONS, NO EVIDENCE OF BOOSTER ROCKETS
AND TRAVELED AT A SPEED THREE TIMES FASTER THAN ANY KNOWN
SATELLITE. THE SATELLITE ORBITED FOR TWO WEEKS AND
DISAPPEARED WITHOUT A TRACE. BEFORE ITS DISAPPEARANCE, THE
OBJECT WHICH APPEARED TO GIVE OFF A RED GLOW, WAS
PHOTOGRAPHED OVER NEW YORK SEVERAL TIMES (JACKSON 19).
LIGHTS IN THE SKY AREN'T THE ONLY EVIDENCE THAT SUGGESTS
WE MAY HAVE COSMIC COMPANY. IN THE BOOK "A HISTORY OF UFO
CRASHES," THE AUTHOR KEVIN D. RANDAL GIVES DETAILED ACCOUNTS
OF NUMEROUS UFO CRASHES IN HISTORY. PERHAPS THE MOST FAMOUS
OF THESE CRASHES OCCURRED ON JULY 4TH, 1947 IN ROSWELL, NEW
MEXICO. THE CRASH AT ROSWELL WAS WITNESSED FROM AFAR BY OVER
A HUNDRED PEOPLE. UNTIL JUST RECENTLY, NO ONE WHO WAS
INVOLVED IN THE RECOVERY OPERATION WAS TALKING, BUT THANKS TO
CONTINUED PRESSURE FROM UFO ENTHUSIASTS OUR GOVERNMENT HAS
BEGUN TO DECLASSIFY MUCH OF ITS UFO-RELATED MATERIAL. PERHAPS
MORE STARTLING ARE THE GOVERNMENT DOCUMENTS CONTAINING THE
ACCOUNTS GIVEN BY LOCAL POLICE AND MEMBERS OF THE RECOVERY
TEAM. ONE UNNAMED WITNESS, A MEMBER OF THE
ROSWELL RECOVERY TEAM, STATED:
"THE CRASH SITE WAS LITTERED WITH PIECES OF
AIRCRAFT. SOMETHING ABOUT THE SIZE OF A
FIGHTER PLANE HAD CRASHED, THE METAL WAS
UNLIKE ANYTHING I'D EVER SEEN BEFORE. I
PICKED UP A PIECE THE SIZE OF A CAR FENDER
WITH ONE HAND, IT COULDN'T HAVE BEEN MORE
THAN A QUARTER OF A POUND AND NO MATTER
HOW HARD I TRIED I COULDN'T EVEN GET IT TO
BEND." (10)
IN MY OPINION THE MOST FASCINATING PIECE OF EVIDENCE TO
COME OUT OF THE ROSWELL CRASH IS THE ALIEN AUTOPSY FILM.
APPARENTLY THERE WAS MORE THAN BITS AND PIECES OF A SPACESHIP
RECOVERED AT ROSWELL. THERE IS AN AIR FORCE VIDEO ACCOUNT OF
AN AUTOPSY BEING PERFORMED ON A LIFE FORM THAT DOESN'T SHARE
THE COMMON CHARACTERISTICS OF ORGAN DEVELOPMENT FOUND IN
LIFE FORMS ON THIS PLANET. THE FILM IS SILENT AND LABELED
"AUTOPSY, ROSWELL, JULY 1947" (RANDAL 17).
AS DIFFICULT AS THE ROSWELL EVIDENCE IS TO EXPLAIN OR
DISCOUNT, IT PALES IN COMPARISON TO THE PHYSICAL EVIDENCE LEFT
BY OUR ANCESTORS. AN ILLUSTRATION TAKEN FROM A NUREMBURG
BROADSHEET TELLS HOW MEN AND WOMEN "SAW A VERY FRIGHTFUL
SPECTACLE." AT SUNRISE APRIL 14TH, 1561, "GLOBES, CROSSES, AND
TUBES BEGAN TO FIGHT ONE ANOTHER." THE EVENT CONTINUED FOR
ABOUT AN HOUR. AFTERWARD THEY FELL TO THE GROUND IN FLAMES,
MINUTES LATER A "BLACK, SPEAR LIKE OBJECT APPEARED." IN A BASAL
BROADSHEET DATED AUGUST 7TH, 1566, LARGE, BLACK, AND WHITE
GLOBES ARE SEEN OVER DASEL, SWITZERLAND. BOTH EVENTS
OCCURRED IN A TIME PERIOD WHEN THERE SHOULD HAVE BEEN
NOTHING MORE THAN BIRDS AND BEES FILLING OUR SKIES. THEY EACH
CONSIDERED TO BE DIVINE WARNINGS AT THE TIME (GOULD 95-96).
ANCIENT PHYSICAL EVIDENCE ISN'T LIMITED TO NEWSPAPER
ILLUSTRATIONS AND SKETCHES ON CAVE WALLS. PERHAPS THE MOST
ASTOUNDING AND UNEXPLAINABLE PIECES OF PHYSICAL EVIDENCE ARE
A PAIR OF GEODES. BOTH ARE BELIEVED TO BE APPROXIMATELY 1,800
YEARS OLD AND WHEN CAREFULLY EXAMINED WERE IDENTIFIED AS
ELECTRICAL CELLS. ONE OF THE CELLS WHICH WAS DISCOVERED IN IRAQ
WAS TESTED AND PRODUCED 2 VOLTS OF ELECTRICITY. THE OTHER,
WHICH WAS DISCOVERED BY A PAIR OF ARIZONA ROCK HOUNDS, WAS
DAMAGED WHEN THE SEDIMENTARY ENCRUSTATION WAS BEING
REMOVED AND THEREFORE COULDN'T BE TESTED (MONTGOMERY 221).
SINCE THE DAWN OF TIME, MAN HAS TOLD STORIES OF HEAVENLY
AND DEMONIC BEINGS COMING TO RULE, TEACH, TORMENT, SEDUCE AND
PROVIDE SALVATION. EVERY CULTURE HAS MYTHS OF ANCIENT GODS
WHO STRODE THROUGH THE HEAVENS. THE AMERICAN INDIANS HAD
THE CACHINAS WHO TAUGHT THEM TO FARM AND SAVED THEM FROM
NUMEROUS CATACLYSMS. GREECE HAD ZEUS WHO THREW LIGHTNING
BOLTS FROM HIS FINGER TIPS AND APOLLO CROSSED THE SKY IN HIS
GOLDEN CHARIOT. THE CHRISTIANS HAVE ECCLESIASTES WHO
ENCOUNTERED THE "ANT PEOPLE" AND RODE THROUGH THE SKIES WITH
THEM FROM BABYLON TO ISRAEL. ACROSS THE ENTIRE GLOBE WE FIND
DRAWINGS ON CAVE WALLS THAT RESEMBLE MEN IN SPACE SUITS AND
OBJECTS THAT GREATLY RESEMBLE FLYING SAUCERS. THE SACRED
ARTWORK OF THE HOPI INDIANS IS WITHOUT A DOUBT A
REPRESENTATION OF THE WAVES PRODUCED BY MODERN DAY
OSCILLOSCOPES (MONTGOMERY 225-237). THE HOPIS ARE ALSO NATIVE
TO THE AREA WHERE ONE OF THE ELECTRICAL CELLS WAS FOUND. IT
COULD BE THAT THESE THINGS ARE NO MORE THAN MERE COINCIDENCE,
BUT I DOUBT IT. MAN IN HIS ARROGANCE IS RELUCTANT TO BELIEVE
THAT WE MAY SHARE GODS IN THIS VAST, GLORIOUS UNIVERSE WITH
OTHER BEINGS OF INTELLIGENCE. WE SOMETIMES FAIL TO REALIZE
THAT, "IF THE EARTH WERE A DAY OLD, THE RACE OF MAN WOULD ONLY
HAVE BEEN THERE FOR 13 MINUTES." IF YOU COUPLE THAT WITH THE
FACT THAT THERE ARE BLACK HOLES AND WHITE DWARFS MILLIONS OF
YEARS OLDER THAN OUR SUN, IT INCREASES THE IMPROBABILITY THAT
WE ARE THE ONLY ONES OUT HERE. I WILL CLOSE THIS PAPER ON A
QUOTE FROM ECCLESIASTES 1:9 "THERE IS NO NEW THING UNDER THE
SUN," AND THAT INCLUDES INTELLIGENT LIFE.
WORKS CITED
ECCLESIASTES. HOLY BIBLE. NASHVILLE, TENNESSEE: THOMAS
NELSON, 1976.
GOULD, ROBERT. ODDITIES. NEW YORK: BELL PC, 1965.
JACKSON, ROBERT. UFO'S: SIGHTINGS OF STRANGE PHENOMENA IN
THE SKIES. NEW JERSEY: CHARTWELL BOOKS, 1995.
MICHEL, AIME'. STRANGE LIGHTS OVER GRENADA. FATE
MAGAZINE. AUG. 1957. 29-32
MONTGOMERY, RUTH. ALIENS AMONG US. NEW YORK: G.P.
PUTNAM'S SONS, 1985.
RANDLE, KEVIN. A HISTORY OF UFO CRASHES. AVON BOOKS, 1995.
f:\12000 essays\sciences (985)\Biology\What is Angina.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
QUESTION: What is Angina? And
what is the cure?
================================
RESPONSE:
Angina refers to the pain arising from lack of adequate blood supply to the
heart muscle. Typically, it is a crushing pain behind the breastbone in the
center of the chest, brought on by exertion and relieved by rest. It may at
times radiate to or arise in the left arm, neck, jaw, left chest, or
back. It is frequently accompanied by sweating, palpitations of the
heart, and generally lasts a matter of minutes. Similar pain syndromes
may be caused by other diseases, including esophagitis, gall bladder
disease, ulcers, and others.
Diagnosis of angina begins with the recognition of the consistent
symptoms. Often an exercise test with radioactive thallium is performed
if the diagnosis is in question, and sometimes even a cardiac
catheterization is done if the outcome is felt necessary to make
management decisions. This is a complex area which requires careful
judgment by physician and patient.
Angina is a manifestation of coronary artery disease, the same
disease leading to heart attacks. Coronary artery disease refers to
those syndromes caused by blockage to the flow of blood in those
arteries supplying the heart muscle itself, i.e., the coronary arteries.
Like any other organ, the heart requires a steady flow of oxygen and
nutrients to provide energy for movement, and to maintain the delicate
balance of chemicals which allow for the careful electrical rhythm
control of the heart beat. Unlike some other organs, the heart can
survive only a matter of minutes without these nutrients, and the rest
of the body can survive only minutes without the heart--thus the
critical nature of these syndromes.
Causes of blockage range from congenital tissue strands within or
over the arteries to spasms of the muscular coat of the arteries
themselves. By far the most common cause, however, is the deposition of
plaques of cholesterol, platelets and other substances within the
arterial walls. Sometimes the buildup is very gradual, but in other
cases the buildup is suddenly increased as a chunk of matter breaks off
and suddenly blocks the already narrowed opening.
Certain factors seem to favor the buildup of these plaques. A strong
family history of heart attacks is a definite risk factor, reflecting
some metabolic derangement in either cholesterol handling or some other
factor. Being male, for reasons probably related to the protective
effects of some female hormones, is also a relative risk. Cigarette
smoking and high blood pressure are definite risks, both reversible in
most cases. Risk also increases with age. Elevated blood cholesterol
levels (both total and low density types) are risks, whereas the high
density cholesterol level is a risk only if it is reduced. Possible,
but less well-defined factors include certain intense and hostile or
time-pressured personality types (so-called type A), inactive lifestyle,
and high cholesterol diets.
Medications are increasingly effective for symptom control, as well
as prevention of complications. The oldest and most common agents are
the nitrates, derivatives of nitroglycerine. They include
nitroglycerine, isosorbide, and similar agents. Newer forms include
long acting oral agents, plus skin patches which release a small amount
through the skin into the bloodstream over a full day. They act by
reducing the burden of blood returning to the heart from the veins and
also by dilating the coronary arteries themselves. Nitrates are highly
effective for relief and prevention of angina, and sometimes for
limiting the size of a heart attack. Used both for treatment of
symptoms as well as prevention of anticipated symptoms, nitrates are
considered by many to be the mainstay of medical therapy for angina.
The second group of drugs are called "beta blockers" for their
ability to block the activity of the beta receptors of the nervous
system. These receptors cause actions such as blood pressure elevation,
rapid heart rate, and forceful heart contractions. When these actions
are reduced, the heart needs less blood, and thus angina may be reduced.
The newest group of drugs for angina is called the calcium channel
blockers. Calcium channels refer to the areas of the membranes of heart
and other cells where calcium flows in and out, reacting with other
chemicals to modulate the force and rate of contractions. In the heart,
they can reduce the force and rate of contractions and electrical
excitability, thereby having a calming effect on the heart. Although
their final place in heart disease remains to be seen, they promise to
play an increasingly important role.
When medications are unsuccessful, or if there is concern about an
impending or potential heart attack, coronary bypass surgery is highly
successful in reducing symptoms. Whether or not it prolongs survival is
questionable for most patients.
Angina which is new or somehow different from previous episodes in
any way is termed unstable angina, is a medical emergency, and
requires urgent attention. Research is active, and careful medical
follow-up is important.
f:\12000 essays\sciences (985)\Biology\Wolf Predations.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hypotheses of the Effects of
Wolf Predation
John Feldersnatch
December 1st, 1995
Abstract: This paper discusses four hypotheses to explain the effects of wolf predation on prey populations of large ungulates.
The four proposed hypotheses examined are the predation limiting hypothesis, the predation regulating hypothesis, the predator
pit hypothesis, and the stable limit cycle hypothesis. There is much research literature that discusses how these hypotheses can
be used to interpret various data sets obtained from field studies. It was concluded that the predation limiting hypothesis fit
most study cases, but that more research is necessary to account for multiple predator - multiple prey relationships.
The effects of predation can have an enormous impact on the ecological organization and structure of communities. The
processes of predation affect virtually every species to some degree or another. Predation can be defined as when members of
one species eat (and/or kill) those of another species. The specific type of predation between wolves and large ungulates
involves carnivores preying on herbivores. Predation can have many possible effects on the interrelations of populations. To
draw any correlations between the effects of these predator-prey interactions requires studies of a long duration, and statistical
analysis of large data sets representative of the populations as a whole. Predation could limit the prey distribution and decrease
abundance. Such limitation may be desirable in the case of pest species, or undesirable to some individuals as with game
animals or endangered species. Predation may also act as a major selective force. The effects of predator prey coevolution can
explain many evolutionary adaptations in both predator and prey species.
The effects of wolf predation on species of large ungulates have proven to be controversial and elusive. There have been many
different models proposed to describe the processes operating on populations influenced by wolf predation. Some of the
proposed mechanisms include the predation limiting hypothesis, the predation regulating hypothesis, the predator pit
hypothesis, and the stable limit cycle hypothesis (Boutin 1992). The purpose of this paper is to assess the empirical data on
population dynamics and attempt to determine if one of the four hypotheses is a better model of the effects of wolf predation
on ungulate population densities.
The predation limiting hypothesis proposes that predation is the primary factor that limits prey density. In this non-equilibrium
model recurrent fluctuations occur in the prey population. This implies that the prey population does not return to some
particular equilibrium after deviation. The predation limiting hypothesis involves a density independent mechanism. The
mechanism might apply to one prey - one predator systems (Boutin 1992). This hypothesis predicts that losses of prey due to
predation will be large enough to halt prey population increase.
Many studies support the hypothesis that predation limits prey density. Bergerud et al. (1983) concluded from their study of
the interrelations of wolves and moose in the Pukaskwa National Park that wolf predation limited, and may have caused a
decline in, the moose population, and that if wolves were eliminated, the moose population would increase until limited by
some other regulatory factor, such as food availability. However, they go on to point out that this upper limit will not be
sustainable, but will eventually lead to resource depletion and population decline. Seip (1992) found that high wolf predation
on caribou in the Quesnel Lake area resulted in a decline in the population, while low wolf predation in the Wells Gray
Provincial Park resulted in a slowly increasing population. Wolf predation at the Quesnel Lake area remained high despite a
fifty percent decline in the caribou population, indicating that mortality due to predation was not density-dependent within this
range of population densities. Dale et al. (1994), in their study of wolves and caribou in Gates National Park and Preserve,
showed that wolf predation can be an important limiting factor at low caribou population densities, and may have an
anti-regulatory effect. They also state that wolf predation may affect the distribution and abundance of caribou populations.
Bergerud and Ballard (1988), in their interpretation of the Nelchina caribou herd case history, said that during and immediately
following a reduction in the wolf population, calf recruitment increased, which should result in a future caribou population
increase. Gasaway et al. (1983) also indicated that wolf predation can sufficiently increase the rate of mortality in a prey
population to prevent the population's increase. Even though there has been much support of this hypothesis, Boutin (1992)
suggests that "there is little doubt that predation is a limiting factor, but in cases where its magnitude has been measured, it is no
greater than other factors such as hunting."
A second hypothesis about the effects of wolf predation is the predation regulating hypothesis, which proposes that predation
regulates prey densities around a low-density equilibrium. This hypothesis fits an equilibrium model, and assumes that following
deviation, prey populations return to their pre-existing equilibrium levels. This predator regulating hypothesis proposes that
predation is a density-dependent mechanism affecting low to intermediate prey densities, and a density-independent
mechanism at high prey densities.
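To make the contrast between the limiting and regulating hypotheses concrete, the simple projection below simulates a hypothetical prey population under a density-independent predation offtake and under a density-dependent per-capita predation rate. The model structure, parameter values, and names are purely illustrative assumptions added for this discussion; they are not drawn from any of the studies cited here.

# Toy prey-population projection (illustrative only; not from the cited studies).
def simulate(years, n0, growth_rate, predation):
    """Project prey numbers; predation(n) returns the animals killed that year."""
    n = n0
    trajectory = [n]
    for _ in range(years):
        births = growth_rate * n          # intrinsic increase
        kills = min(predation(n), n)      # predation offtake cannot exceed the herd
        n = max(n + births - kills, 0.0)
        trajectory.append(n)
    return trajectory

# Predation limiting hypothesis: a roughly constant, density-independent offtake can
# halt an increase or drive a decline, and the population does not return to any
# particular equilibrium after a deviation.
limiting = simulate(30, n0=1000, growth_rate=0.15, predation=lambda n: 160)

# Predation regulating hypothesis: per-capita predation mortality rises with density
# at low densities, so the population settles back toward a low-density equilibrium
# (here, 500 animals) after a deviation.
regulating = simulate(30, n0=1000, growth_rate=0.15,
                      predation=lambda n: 0.15 * n * (n / 500))

print(round(limiting[-1]), round(regulating[-1]))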
Some research supports predation as a regulating mechanism. Messier (1985), in a study of moose near Quebec, Canada,
draws the conclusion that wolf-ungulate systems, if regulated naturally, stabilize at low prey and low predator population
densities. In Messier's (1994) later analysis, based on twenty-seven studies where moose were the dominant prey species of
wolves, he determined that wolf predation can be density-dependent at the lower range of moose densities. This result
demonstrates that predation is capable of regulating ungulate populations. Even so, according to Boutin (1992) more studies
are necessary, particularly at high moose densities, to determine if predation is regulatory.
A third proposal to model the effects of wolf predation on prey populations is the predator pit hypothesis. This hypothesis is a
multiple equilibria model. It proposes that predation regulates prey densities around a low-density equilibrium. The prey
population can then escape this regulation once prey densities pass a certain threshold. Once this takes place, the population
reaches an upper equilibrium. At this upper equilibrium, the prey population densities are regulated by competition for (and/or
availability of) food. This predator pit hypothesis assumes that predator losses are density-dependent at low prey densities, but
inversely density-dependent at high prey densities. Van Ballenberghe (1985) states that wolf population regulation is needed
when a caribou herd population declines and becomes trapped in a predator pit, wherein predators are able to prevent caribou
populations from increasing.
The final model that attempts to describe the effects of predation on prey populations is the stable limit cycle hypothesis. This
hypothesis proposes that vulnerability of prey to predation depends on past environmental conditions. According to this theory,
individuals of a prey population born under unfavorable conditions are more vulnerable to predation throughout their adult lives
than those born under favorable conditions. This model would produce time lags between the proliferation of the predator and
the prey populations, in effect generating recurring cycles. Boutin (1992) states that if this hypothesis is correct, the effects of
food availability (or the lack of it) should be more subtle than outright starvation. Relatively severe winters could have long-term
effects by altering growth, production, and vulnerability. Thompson and Peterson (1988) reported that there are no
documented cases of wolf predation imposing a long-term limit on ungulate populations independent of environmental
influences. They also point out that summer moose calf mortality was high whether predators were present or not, and that
snow conditions during the winter affected the vulnerability of calves to predation. Messier (1994) asserts that snow
accumulation during consecutive winters does not create a cumulative impact on the nutritional status of deer and moose.
All of the four proposed theories mentioned above could describe the interrelationships between the predation of wolves and
their usual North American prey of large ungulate species. There has been ample evidence presented in the primary research
literature to support any one of the four potential models. The predation limiting hypothesis seems to enjoy wide popular
support, and seems to most accurately describe most of the trends observed in predator-prey populations. Most researchers
seem to think that more specific studies need to be conducted to find an ideal model of the effects of predation. Bergerud and
Ballard (1988) stated "A simple numbers argument regarding prey:predator ratios overlooks the complexities in
multi-predator-prey systems that can involve surplus killing, additive predation between predators, enhancement and
interference between predator species, switch over between prey species, and a three-fold variation in food consumption rates
by wolves." Dale et al. (1994) stated that further knowledge of the factors affecting prey switching, such as density-dependent
changes in vulnerability within and between prey species, and further knowledge of wolf population response is needed to
draw any firm conclusions. Boutin (1992) also proposed that the full impact of predation has seldom been measured because
researchers have concentrated on measuring losses of prey to wolves only. Recently, bear predation on moose calves has
been found to be substantial, but there are few studies which examine this phenomenon (Boutin 1992). Messier (1994) also
pointed out that grizzly and black bears may be important predators of moose calves during the summer. Seip (1992), too,
states that bear predation was a significant cause of adult caribou mortality. These points emphasize that multiple-predator and
multiple-prey systems are probably at work in the natural environment, and we must not overgeneralize a one predator - one
prey hypothesis in the attempt to interpret the overall trends of the effects of predation of wolves on large ungulate populations.
Literature Cited
Bergerud, A. T., W. Wyett, and B. Snider. 1983. The role of wolf predation in limiting a moose population. Journal of
Wildlife Management. 47(4): 977-988.
Bergerud, A. T., and W. B. Ballard. 1988. Wolf predation on caribou: the Nelchina herd case history, a different
interpretation. Journal of Wildlife Management. 52(2): 344- 357.
Boutin, S.. 1992. Predation and moose population dynamics: a critique. Journal of Wildlife Management. 56(1): 116-
127.
Dale, B. W., L. G. Adams, and R. T. Bowyer. 1994. Functional response of wolves preying on barren-ground caribou
in a multiple prey ecosystem. Journal of Animal Ecology. 63: 644- 652.
Gasaway, W. C., R. O. Stephenson, J. L. Davis, P. E. K. Shepherd, and O. E. Burris. 1983. Interrelationships of
wolves, prey, and man in interior Alaska. Wildlife Monographs. 84: 1- 50.
Messier, F.. 1985. Social organization, spatial distribution, and population density of wolves in relation to moose
density. Canadian Journal of Zoology. 63: 1068-1077.
Messier, F.. 1994. Ungulate population models with predation: a case study with the North American moose. Ecology.
75(2): 478-488.
Seip, D.. 1992. Factors limiting woodland caribou populations and their interrelationships with wolves and moose in
southeastern British Columbia. Canadian Journal of Zoology. 70: 1494-1503.
Thompson, I. D., and R. O. Peterson. 1988. Does wolf predation alone limit the moose population in Pukaskwa Park?:
a comment. Journal of Wildlife Management. 52(3): 556-559.
Van Ballenberghe, V.. 1985. Wolf predation on caribou: the Nelchina herd case history. Journal of Wildlife
Management. 49(3): 711-720.
f:\12000 essays\sciences (985)\Biology\Wolverine.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE
WOLVERINE
BY:William Cline
I am doing my report on the wolverine, which is classified in the kingdom Animalia, the phylum Chordata and the class Mammalia. The scientific name for the wolverine is Gulo gulo. Their common name is glutton.
The Wolverine ranges from northern Europe and Siberia through northern North America. Their distribution once extended as far south as Colorado, Indiana, Pennsylvania, and perhaps Michigan.
It looks like a weasel, maybe because it is in the weasel family. It has black and brown fur and long claws. Its legs are short but strong. The Wolverine is usually solitary except for members of the opposite sex and a female's young. After the females give birth they hide with their young. The mother defends her territory and intruders are not tolerated. This territorial behavior continues until the young are ready to hunt on their own.
The Wolverine has a diet that can include anything from small eggs to full-sized deer. The Wolverine is capable of bringing down prey that is five times bigger than itself. It is equipped with large claws and with pads on its feet that allow it to chase down prey in deep snow. Some prey species include reindeer, roe deer, wild sheep, and elk. The wolverine can be very swift when it is on the attack, reaching speeds of over thirty miles an hour. It also uses its claws to protect itself.
In total I can truly say that the wolverine is one mean guy and he can kick some major butt. So if you see one, remember to stay away, or be brave (stupid) and confront it.
Bibliography
1)The internet at The University of Michigan-Museum of Zoology
2)The World Book Encyclopedia 1969 edition
3)Last but not least the TELEVISION.
f:\12000 essays\sciences (985)\Biology\Woolly Mammoth.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Extinct Animals Research: Woolly Mammoth
We have learned much about the Woolly Mammoth, almost more than any other
extinct animal that has been identified. Because the Woolly Mammoth so closely
resembles today's elephants, caring for one would most probably require most of the same
factors to keep it alive. Since the Woolly Mammoth has been extinct for about 4000 years, it is
difficult to tell exactly what they lived on, but we can hypothesize.
The Woolly Mammoth lived during the Ice Age, so if alive today, it must be kept
in a tundra environment. For food, only basic tundra vegetation is necessary. Due to the
thick pelt that the Woolly Mammoth has, any known Ice Age temperatures would suffice
since the thick fur protects the animal in any extreme temperatures.
Large enclosures would not be needed as they would be for a normal elephant
since the Woolly Mammoth is only three meters high. The huge tusks would allow it to
scavenge for its own food, so no special feedings would be necessary. Feedings would
also be needed on a less frequent basis since the Woolly Mammoth, much like today's
camels, keeps under its sloping back a thick layer of blubber as nutrition for when food is
not available.
The problem in keeping a creature such as the Woolly Mammoth in a zoo-like
surrounding would be poachers. Due to the endangerment of such a magnificent species,
poachers of pelts and ivory would most certainly be after its huge tusks and thick fur, so
it would be necessary to post guards around its cage at all times.
A large-scale habitat would be constructed for this creature since, during the
period it lived, the Pleistocene, there were no restrictions on the places it could roam to.
There was nothing stopping this beast from stomping along to wherever it wanted to go.
A Woolly Mammoth might find it peculiar to be stuck in a twenty foot ice field with no
predators or other animals whatsoever.
To solve this problem, it would be possible to include other animals from the
Woolly Mammoth's time period, but that is another project.
Bibliography
Carlberg, Ulf. "The Mammoth." November 22, 1996.
(3/1/97)
Dixon, Douglas. The Illustrated Dinosaur Encyclopedia. New York: Gallery Books.
1988. pp.10,31
Kitson, Kenneth. "Zoobooks: Elephants" San Diego: Wildlife Education, Ltd., 1986.
Norman, David. The Prehistoric World of the Dinosaur. New York: Gallery Books.
1988. pp. 6-7
Preistland, Neal. "The Types of Mammoths."
(3/1/97)
"Woolly Mammoth: Symbol of the Ice Age." The Late, Great, Mammals of Canada.
December, 1994. (3/1/97)
f:\12000 essays\sciences (985)\Biology\Your Brain.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Your brain has two sides. And each has a distinctly different way of
looking at the world.
Do you realize that in order for you to read this article, the two
sides of your brain must do completely different things? The more we
integrate those two sides, the more integrated we become as people.
Integration not only increases our ability to solve problems more
creatively, but also to control physical maladies such as epilepsy and
migraines, replace certain damaged brain functions and even learn to
"think" into the future. Even more startling is evidence coming to light
that we have become a left-brain culture.
Your brain's right and left side have distinctly different ways of
looking at the world. Your two hemispheres are as different from each
other as, oh, Michael Wilson and Shirley MacLaine. The left brain controls
the right side of the body (this is reversed in about half of the 15
percent of the population that is left-handed) and, in essence, is logical,
analytical, judgemental and verbal. It's interested in the bottom line, in
being efficient. The right brain controls the left side of the body and
leans more to the creative, the intuitive. It is concerned more with the
visual and emotional side of life.
Most people, if they thought about it, would identify more with
their left brain. In fact, many of us think we are our left brains. All
of that non-stop verbalization that goes on in our heads is the dominant
left brain talking to itself. Our culture- particularly our school system
with its emphasis on the three Rs (decidedly left-brain territory) -
effectively represses the intuitive and artistic right brain. If you don't
believe it, see how far you get at the office with the right brain activity
of daydreaming.
As you read, your left-side is sensibly making connections and
analysing the meaning of the words, the syntax and other complex relation-
ships while putting it into a "language" you can understand. Meanwhile,
the right side is providing emotional and even humorous cues, decoding
visual information and maintaining an integrated story structure.
While all of this is going on, the two sides are constantly
communicating with each other across a connecting fibre tract called the
corpus callosum. There is a certain amount of overlap but essentially
the two hemispheres of the brain are like two different personalities
that working alone would be somewhat lacking and overspecialized, but
when functioning together bring different strengths and areas of expertise
to make an integrated whole.
"The primitive cave person probably lived solely in the right
brain," says Eli Bay, president of Relaxation Response Inc., a Toronto
organization that teaches people how to relax. "As we gained more control
over our environment we became more left-brain oriented until it became
dominant." To prove this, Bay suggests: "Try going to your boss and saying
"I've got a great hunch." Chances are your boss will say, "Fine, get me
the logic to back it up."
The most creative decision making and problem solving come about
when both sides bring their various skills to the table: the left brain
analysing issues, problems and barriers; the right brain generating fresh
approaches; and the left brain translating them into plans of action.
"In a time of vast change like the present, the intuitive side of
the brain operates so fast it can see what's coming," says Dr. Howard
Eisenberg, a medical doctor with a degree in psychology who has studied
hemispheric relationships. "The left brain is too slow, but the right
can see around corners."
Dr. Eisenberg thinks that the preoccupation with the plodding left
brain is one reason for the analysis paralysis he sees affecting world
leaders. "Good leaders don't lead by reading polls," he says. "They have
vision and operate to a certain extent by feel."
There are ways of correcting our cultural overbalance. Playing
video games, for example, automatically flips you over to the right brain,
Bay says. "Any artistic endeavour, like music or sculpture, will also do
it."
In her best-selling book "Drawing on the Right Side of the Brain"
(J.P. Tarcher Inc., 1979), Dr. Betty Edwards developed a series of exercises
designed to help people tap into the right brain, to actually see or process
visual information, differently. She cites techniques that are as old as
time, and modern high-tech versions such as biofeedback.
An increasing number of medical professionals believe that being in
touch with our brain, especially the right half, can help control medical
problems. For example, Dr. Eisenberg uses what he calls "imaginal
thinking" to control everything from migraines to asthma, to high blood
pressure. "We have found," he says, "that by teaching someone to raise
their temperature - by imagining they are sunbathing or in a warm bath
- they can control their circulatory system and therefore the migraine."
Knowledge of our two-sided brain began in the mid-1800's when
French neurologist Paul Broca discovered that injuries to the left side of
the brain resulted in the loss of speech. Damage to the right side,
however did not. Doctors speculated over what this meant. Was the brain
schizophrenically divided and non-communicative?
In the early 1960s, Nobel Prize winner Dr. Roger Sperry proved that
patients who had their corpus callosum severed to try and control epileptic
seizures could no longer communicate between their hemispheres. The
struggle can be seen quite clearly in the postoperative period when the
patient is asked to do a simple block design. This is a visual, spacial
task that the left-hand (controlled by the right brain in most of us) can
do very well but the right hand (controlled by the language-oriented left
brain) does poorly. The right hand may even intervene to mix up the
design.
Some people with epilepsy can control their seizures by concentrating
activity on the hemisphere that is not affected. In the case of left lobe
epilepsy, this can be done by engaging in a right-brain activity such as
drawing.
One intriguing question is why we have two hemispheres at all. "In
biology you always have the same thing on one side as the other - ears,
lungs, eyes, kidneys, etc." explains Dr. Patricia De Feudis, director of
psychology at Credit Valley Hospital in Mississauga, Ont. "But with the
brain there is more specialization. You can have something going on one
side and not be aware of it in the other."
Our knowledge of the brain in general is only beginning. We know
even less about how the hemispheres operate. Getting in touch with how the
two sides work can only do us good, if just to keep us from walking around
"half-brained".
f:\12000 essays\sciences (985)\Chemistry\Acetylation of Ferrocene.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
17. October 1996
Experiment #7
Acetylation of Ferrocene
Introduction
In this lab we will be utilizing the Friedel-Crafts process of acetylation of ferrocene. Ferrocene is an atom of iron sandwiched between two aromatic rings. We will use reagents that cause the ferrocene to add either one acetyl group to one aromatic ring or two acetyl groups, one to each of the aromatic rings. In order to determine how well this process worked, we employed IR spectral analysis, column chromatography, and a little TLC. This experiment is relevant in today's highly industrialized world. By utilizing many of the techniques we employ in this lab, a company can synthesize new types of materials or composites that could revolutionize an industry.
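As a rough sense of scale (this is not part of the original pre-lab), the theoretical yield can be sketched with a minimal calculation, assuming the roughly 10 g of ferrocene mentioned later in the Methods & Procedure section, excess acetic anhydride, and monoacetylation only; the molar masses are standard values and the variable names are mine.

# Illustrative theoretical-yield estimate, not the author's pre-lab calculation.
# Assumes ~10 g ferrocene, excess acetic anhydride, and monoacetylation only.
M_FERROCENE = 186.04         # g/mol, Fe(C5H5)2
M_ACETYLFERROCENE = 228.07   # g/mol, C12H12FeO

def theoretical_yield(mass_ferrocene_g):
    """Mass of acetylferrocene (g) if every mole of ferrocene is monoacetylated (1:1)."""
    moles_ferrocene = mass_ferrocene_g / M_FERROCENE   # 10 g -> about 0.054 mol
    return moles_ferrocene * M_ACETYLFERROCENE

print(round(theoretical_yield(10.0), 1))   # about 12.3 g of acetylferrocene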
Background
When we react the ferrocene with phosphoric acid and acetic anhydride, we obtain many disparate products. Not only do we get acetylferrocene, but we also get diacetylferrocene, some unreacted ferrocene reactant, and acetic acid as well. We will use thin layer chromatography (TLC), column chromatography (CG), and IR spectral analysis in order to determine what proportions of each of these compounds are present in the final product.
Both TLC and CG are excellent methods of detecting the presence of a given substance. Both methods turn on a compound's polarity. As one recalls, polarity arises from differences in electronegativity between bonded atoms; specifically, in this lab we are talking about the difference in polarity between oxygen and carbon atoms. Ferrocene has little to no polarity. Acetylferrocene, because of its carbonyl functional group, is more polar than ferrocene. Moreover, diacetylferrocene, because it carries two carbonyl groups to acetylferrocene's one, is the most polar of the lot.
As stated above, both TLC and CG take advantage of polarity. Both methods have an extremely polar stationary phase; specifically, silica or alumina gel is used. Through this polar stationary phase, a mobile liquid phase is passed. Now, one can think of a polar stationary phase as a bully that waits in the high school halls for his hooligan friends. His hooligan friends, hooleys as I like to call them, always stay back to talk to him; the rest of the normal student body simply keep walking and pass him. The idea here is: like stays with like. Analogously, those compounds which are most similar to the stationary substrate will stay behind to "hang out". In this case, the more polar a compound is, the more it will stay behind as the rest of the product moves forward in its liquid mobile phase. TLC works by capillary action, where the mobile phase is drawn up and across a polar TLC plate. CG, on the other hand, works by having gravity pull the liquid mobile phase down a column packed with the polar stationary phase. The joyous wonder of TLC and CG, then, is that they are thus able to separate each constituent contained in the product.
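The separations described above show up later in the Results & Observations section as Rf values. As a reminder of where those numbers come from, here is a minimal sketch of the standard Rf calculation (the distance the spot travels divided by the distance the solvent front travels, both measured from the origin); the distances below are made-up examples, not measurements from this lab.

# Minimal Rf calculation for a TLC plate; the example distances are hypothetical.
def rf(spot_distance_cm, solvent_front_cm):
    """Rf = spot distance / solvent-front distance, both measured from the origin."""
    if not 0 < spot_distance_cm <= solvent_front_cm:
        raise ValueError("the spot must lie between the origin and the solvent front")
    return spot_distance_cm / solvent_front_cm

# On a polar silica plate, a more polar compound such as acetylferrocene clings to the
# stationary phase, runs a shorter distance, and so shows a lower Rf than ferrocene.
print(round(rf(3.0, 4.0), 2))   # hypothetical ferrocene spot       -> 0.75
print(round(rf(1.5, 4.0), 2))   # hypothetical acetylferrocene spot -> 0.38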
Methods & Procedure
The procedure for this lab may be found in the pre-lab notes for this experiment contained in the appendix. I will only remark on the important features of the procedure. The amount of starting material for this lab was ca. 10 g. The calculation for this may also be found in the pre-lab. I first added acetic anhydride to ferrocene (FC) and then warmed the mixture to add in the H3PO4 catalyst. I observed a reddish-violet color in this mix of reactants. I then did a TLC and noted that the majority of the sample was not the original ferrocene starting material. Please see the pre-lab for reproductions of the TLC plates used in this lab. Also see table 1.2 for Rf values.
As one can see, this crude's Rf is half that of the starting material. This indicates that a reaction has definitely occurred. Next we performed an extraction on this sample with methylene chloride (MeCl) and sodium hydroxide (OH-). Please see the pre-lab for a picture of what the extraction looked like. Then we transferred the lower organic portion into another vial with a little sodium sulfate for drying. Then we transferred this to a tared vial and dried off the MeCl in a nitrogen stream. MeCl is a great solvent because it evaporates easily (bp ca. 40°C). Moreover, we used a nitrogen stream so that we could minimize the amount of moisture from regular air being reintroduced into the sample. This was our second crude sample and we did a TLC on it against FC starting material. See tables 1.1 and 1.2 for the amount of crude sample obtained and the Rf values. We allowed this to dry until the next week's lab period; we then performed a CG on this sample.
We placed this crude into the CG column and then added three mobile solvents in order to separate it. We used hexane (non-polar), ethyl acetate (medium polarity) and methanol (nice and polar). The sample flowed down the column and into separate tared vials, one for each colored material. The first was bright yellow. The second was a deep reddish color. The third was a dark violet. I placed each vial into the N2 stream for concentration and then re-weighed each sample, called F1, F2, and F3, respectively. The results are presented in table 1.1 at the end of the next section. After yet another class period on this experiment, I tried to take 5 mg of each sample and press it into KBr for IR spectra. However, after waiting until the next period for the IR, hardly any of the material was present in each vial. This was quite inexplicable. I did manage to get melt-temps, but I was only able to scrape together enough of the F2 for an IR spectrum. See the appendix for the results of the IR spectrum and see table 1.3 for the melt-temp values.
I also took a TLC on each one as well; the values are presented in table 1.2 at the end of the next section.
Results & Observations
The results of this experiment are pretty straightforward and are summarized in tables 1.1 through 1.3 in this section.
Table 1.1  Weights and Measures
                            F1         F2         F3
DRAM vial weight (g):       4.6480     12.1914    12.2362
Vial & mix (g):             4.6871     12.2177    12.2439
Amount present (g):         0.00391    0.00236    0.0077
Amount present (mg):        3.91       2.36       7.7
The next table presents the TLC results.
Table 1.2  TLC Rf Values
              First Crude   Second Crude   FC Mix   AFC Mix    DAFC Mix
Spot A:       .38           .43, .84       .67      .54, .75   .61
Spot B:       .75           .84            .74      .77        .36
Cospot A/B:   .38, .75      .43, .84       .74      .77        .36, .88
The melt-temp values are:
Table 1.3  Melt-Temp Values
                     F1          F2         F3
Sample melt-temp:    195-200°C   90-94°C    75-80°C
Actual melt-temp:    173°C       85-86°C    130-131°C
The IR spectra may be found in the appendix.
Discussion
From the TLC taken of the first crude sample, one can see that roughly half of the material present is composed of acetylferrocene and diacetylferrocene. Thus we continued along with the procedure. The CG, which was performed next, separated the constituent products enough that more TLCs were able to be taken. The results of these TLC Rf's tell us that our separation was pretty successful. As one can see from F1 spot A's Rf, we have at least 90% FC in the separated mixture. This is great. It means the extraction was a success. The F2 percent difference is 2.6%. This means that over 97% of this material is indeed AFC. OUTSTANDING results. This was a success, one of the very few in this lab; thus, I am quite happy. The percent difference for the DAFC, however, is quite disappointing. Only about 30% of this mix is indeed DAFC. This is not that good of a separation. The reasons for this are explainable though. I believe it is due to inaccurate CG technique. I did not wait for all of the AFC to finish flowing out of the column before I placed the vial down to collect the DAFC. Thus, as the TLC shows, most of this material is not DAFC. Comparing its Rf to that of the AFC TLC plates, it is probably AFC. Moreover, it could also be some acetic acid, if that side-product formed. After all, acetic acid is quite polar; thus it would be one of the last products out along with the DAFC. Table 1.4 shows the calculations.
Table 1.4  Percent Difference Calculations
%D = |Actual Rf - Starting Material Rf| / (Starting Material Rf) x 100
F1: %D = |0.67 - 0.74| / 0.74 x 100 = 9.5%
F2: %D = |0.75 - 0.77| / 0.77 x 100 = 2.6%
F3: %D = |0.61 - 0.36| / 0.36 x 100 = 69.4%
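The same percent-difference arithmetic in a form that can be rerun with other Rf values; the function name is mine, and the inputs are the Rf values from table 1.2.

# Percent difference between a fraction's Rf and its reference Rf, as in table 1.4.
def percent_difference(actual_rf, reference_rf):
    return abs(actual_rf - reference_rf) / reference_rf * 100

print(round(percent_difference(0.67, 0.74), 1))   # F1 vs. FC mix   -> 9.5
print(round(percent_difference(0.75, 0.77), 1))   # F2 vs. AFC mix  -> 2.6
print(round(percent_difference(0.61, 0.36), 1))   # F3 vs. DAFC mix -> 69.4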
The IR spectrum for F2 shows us that we have mostly AFC formed. The peak in the 1700 cm-1 range indicates that C=O bonds are present in this sample. AFC does indeed have a carbonyl bond present. Furthermore, since the size of the peak indicates the amount of C=O bonds present, we would expect it to be smaller than the corresponding peak in an F3 IR for DAFC. But I do not have an IR for F3 for comparison. Nevertheless, it probably has a big peak there in proportion to the F2 anyway. Hmmmmm. A large peak around the 2900 cm-1 mark indicates the presence of C-H bonds. Our IR spectrum for F2 does indeed show a peak in this range. So we conclude that this sample is acetylferrocene.
Conclusions
In sum, this lab was successful. It taught us how to correctly test the accuracy of a synthesis reaction; specifically, the acetylation of ferrocene. Our results show that we accurately synthesized and separated out the ferrocene and acetylferrocene from the reaction mixture. The separation of the diacetylferrocene was not as successful as that of the other two, but the explanation for this seems valid.
f:\12000 essays\sciences (985)\Chemistry\Acid Rain 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THOUGHTS ON ACID RAIN
Acid rain is a serious problem with disastrous effects. Each day this problem grows worse; many people believe the issue is too small to deal with right now, but it should be met head on and solved before it is too late. In the following paragraphs I will be discussing the impact acid rain has on wildlife and how our atmosphere is being destroyed by it.
CAUSES
Acid rain is a cancer eating into the face of Eastern Canada and the North Eastern United States. In Canada, the main sulphuric acid sources are non-ferrous smelters and power generation. On both sides of the border, cars and trucks are the main sources of nitric acid (about 40% of the total), while power generating plants and industrial, commercial and residential fuel combustion together contribute most of the rest. In the air, the sulphur dioxide and nitrogen oxides can be transformed into sulphuric acid and nitric acid, and air currents can send them thousands of kilometres from the source. When the acids fall to the earth in any form, they have a large impact on the growth or the preservation of certain wildlife.
NO DEFENCE
In areas of Ontario, mainly southern regions near the Great Lakes, substances such as limestone or other known antacids can neutralize acids entering a body of water, thereby protecting it. However, in large areas of Ontario on the Pre-Cambrian Shield, with quartzite or granite based geology and little topsoil, there is not enough buffering capacity to neutralize even small amounts of acid falling on the soil and the lakes. Therefore, over time, the environment shifts from an alkaline to an acidic one. This is why many lakes in the Muskoka, Haliburton, Algonquin, Parry Sound and Manitoulin districts could lose their fisheries if sulphur emissions are not reduced substantially.
ACID
The mean pH of rainfall in Ontario's Muskoka-Haliburton lake country ranges between 3.95 and 4.38, about 40 times more acidic than normal rainfall, while storms in Pennsylvania have produced rainfall with a pH of 2.8, almost the same as vinegar.
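Because pH is a logarithmic scale, each unit of pH represents a tenfold change in acidity, so the "about 40 times" figure can be checked with a quick calculation. The sketch below assumes that clean rain has a pH of roughly 5.6 (slightly acidic from dissolved carbon dioxide); that reference value and the code are my illustration, not part of the original article.

# Relative acidity from pH: hydrogen-ion concentration scales as 10**(-pH).
def times_more_acidic(observed_ph, reference_ph=5.6):
    """How many times more acidic the observed rain is than clean rain at pH 5.6."""
    return 10 ** (reference_ph - observed_ph)

print(round(times_more_acidic(3.95)))   # about 45 times more acidic than clean rain
print(round(times_more_acidic(4.38)))   # about 17 times
print(round(times_more_acidic(2.8)))    # about 630 times, for the pH 2.8 storms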
Already 140 Ontario lakes are completely dead or dying. An additional 48 000 are sensitive and vulnerable to acid rain due
to the surrounding concentrated acidic soils.
ACID RAIN CONSISTS OF....?
Canada does not have as many people, power plants or automobiles as the United States, and yet acid rain there has become so severe that Canadian government officials called it the most pressing environmental issue facing the nation. But it is important to bear in mind that acid rain is only one segment of the widespread pollution of the atmosphere facing the world. Each year the global atmosphere is on the receiving end of 20 billion tons of carbon dioxide, 130 million tons of sulphur dioxide, 97 million tons of hydrocarbons, 53 million tons of nitrogen oxides, more than three million tons of arsenic, cadmium, lead, mercury, nickel, zinc and other toxic metals, and a host of synthetic organic compounds ranging from polychlorinated biphenyls (PCBs) to toxaphene and other pesticides, a number of which may be capable of causing cancer, birth defects, or genetic imbalances.
COST OF ACID RAIN
Interactions of pollutants can cause problems. In addition to contributing to acid rain, nitrogen oxides can react with hydrocarbons to produce ozone, a major air pollutant responsible in the United States for annual losses of $2 billion to $4.5 billion worth of wheat, corn, soybeans, and peanuts. A wide range of interactions, many of them unknown, can occur with toxic metals.
In Canada, Ontario alone has lost the fish in an estimated 4000 lakes, and provincial authorities calculate that Ontario stands to lose the fish in 48 500 more lakes within the next twenty years if acid rain continues at the present rate. Ontario is not alone: on Nova Scotia's easternmost shores, almost every river flowing to the Atlantic Ocean is poisoned with acid, further threatening a $2 million a year fishing industry.
THE DYING
Acid rain is killing more than lakes. It can scar the leaves of hardwood forests, wither ferns and lichens, accelerate the death of coniferous needles, sterilize seeds, and weaken the forests to a state that is vulnerable to disease, infestation and decay. In the soil the acid neutralizes chemicals vital for growth, strips others from the soil and carries them to the lakes, and literally retards the respiration of the soil. The rate of forest growth in the White Mountains of New Hampshire declined 18% between 1956 and 1965, a time of increasingly intense acidic rainfall.
Acid rain no longer falls exclusively on the lakes, forests, and thin soils of the Northeast; it now covers half the continent.
EFFECTS
There is evidence that the rain is destroying the productivity of the once-rich soils themselves, like an overdose of chemical fertilizer or a gigantic drenching of vinegar. The damage of such overdosing may not be repairable or reversible. On some croplands, tomatoes grow to only half their full weight, and the leaves of radishes wither. Naturally it rains on cities too, eating away stone monuments and concrete structures, and corroding the pipes which channel the water away to the lakes, and the cycle is repeated. House paints and automobile finishes have their lives reduced because pollution in the atmosphere speeds up the corrosion process. In some communities the drinking water is laced with toxic metals freed from metal pipes by the acidity. As if urban skies were not already grey enough, typical visibility has declined from 10 to 4 miles along the Eastern seaboard as acid rain turns into smog. There are also now indications that the components of acid rain are a health risk, linked to human respiratory disease.
PREVENTION
The acidification of water supplies could also result in increased concentrations of metals from plumbing, such as lead, copper and zinc, which could result in adverse health effects. After any period of non-use at summer cottages or ski chalets, water taps should be run for at least 60 seconds to flush out any excess debris.
STATISTICS
Although there is very little data, the evidence indicates that in the last twenty to thirty years the acidity of rain has increased in many parts of the United States. Presently, the United States annually discharges more than 26 million tons of sulphur dioxide into the atmosphere. Just three states, Ohio, Indiana, and Illinois, are responsible for nearly a quarter of this total. Overall, two-thirds of the sulphur dioxide emitted into the atmosphere over the United States comes from coal-fired and oil-fired power plants. Industrial boilers, smelters, and refineries contribute 26%; commercial institutions and residences 5%; and transportation 3%. The outlook for future emissions of sulphur dioxide is not a bright one. Between now and the year 2000, United States utilities are expected to double the amount of coal they burn. The United States currently pumps some 23 million tons of nitrogen oxides into the atmosphere in the course of a year.
Transportation sources account for 40%; power plants, 30%; industrial sources, 25%; and commercial institutions and residences, 5%. What makes these figures particularly disturbing is that nitrogen oxide emissions have tripled in the last thirty years.
FINAL THOUGHTS
Acid rain is a very real and very threatening problem. Action by one government is not enough. For things to be done, governments need to find a way to work together on at least a reduction in the contaminants contributing to acid rain. Although there have been steps in the right direction, governments should also be cracking down on factories that do not use the best available filtering systems when incinerating, or that give off other dangerous fumes. I would like to pose this question to you, the public: WOULD YOU RATHER PAY A LITTLE NOW OR A LOT LATER?
f:\12000 essays\sciences (985)\Chemistry\Acid Rain.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTRODUCTION: Acid rain is a great problem in our world. It causes fish
and plants to die in our waters. It also causes harm to our own race,
because we eat these fish, drink this water and eat these plants. It
is a problem that we must all face together and try to get rid of. However,
acid rain on its own is not the biggest problem. It causes many other
problems, such as aluminum poisoning. Acid rain is deadly.
WHAT IS ACID RAIN?
Acid rain is all the rain, snow, mist, etc. that falls from the sky onto
our planet with an unnatural acidity. It is not to be confused with the
uncontaminated rain that falls, for that rain is naturally slightly acidic.
Acid rain is caused by today's industry. When products are manufactured,
many chemicals are used to create them. However, because of the difficulty
and cost of properly disposing of these wastes, they are often emitted into
the atmosphere with little or no treatment.
The term was first considered important about 20 years ago, when
scientists in Sweden and Norway first came to believe that acidic rain
might be causing great ecological damage to the planet. The problem was
that by the time the scientists found the problem, it was already very
large. Detecting an acid lake is often quite difficult. A lake does not
become acidic overnight. It happens over a period of many years, sometimes
decades. The changes are usually too gradual to be noticed early.
At the beginning of the 20th century most rivers/lakes like the river
Tovdal in Norway had not yet begun to die. However by 1926 local inspectors
were noticing that many of the lakes were beginning to show signs of death.
Fish were found dead along the banks of many rivers. As the winter's ice
began to melt off, hundreds upon hundreds more dead fish (trout in
particular) were being found. It was at this time that scientists began
to search for the reason. As the scientists continued to work, they
found many piles of dead fish, up to 5000 in one pile, further up the
river. Divers were sent in to examine the bottom of the rivers. What they
found were many more dead fish. Many live and dead specimens were taken
back to labs across Norway. When the live specimens were examined they were
found to have very little sodium in their blood. This is a typical
symptom of acid poisoning. The acid had entered the gills of the fish and
poisoned them so that they were unable to extract salt from the water to
maintain their bodies' sodium levels.
Many scientists said that this acid poisoning was due to the fact that it
was just after the winter and that all the snow and ice was running down
into the streams and lakes. They believed that the snow had been exposed to
many natural phenomena that gave it its high acid content. Other
scientists were not sure that this theory was correct, because when the
melting snow was added to the lakes and streams the pH levels would change
from around 5.2 to 4.6, roughly a fourfold increase in hydrogen-ion
concentration. They believed that such a large jump could not be
attributed to natural causes. They believed that it was due to air
pollution. They were right. Since the beginning of the Industrial
Revolution in England, pollution had been affecting all the trees, soil and
rivers in Europe and North America.
However, until recently the losses of fish were confined to the southern
parts of Europe. Because of the constant onslaught of acid rain, lakes and
rivers began to lose their ability to counteract its effects. Much of
the alkaline material, such as calcium and limestone, in the soil had been
washed away. It is these lakes that we must worry about, for they will
soon die.
A fact that may please fishermen is that in acidified lakes and rivers
they tend to catch older and larger fish. This may please them in the
short run; however, they will soon have to change lakes, for the fish
supply will die quickly in these lakes. The problem is that acid causes
difficulties in the fish's reproductive system. Often fish born in acid
lakes do not survive, for they are born with birth defects such as twisted
and deformed spinal columns. This is a sign that they are unable to
extract enough calcium from the water to fully develop their bones. These
young soon die. With no competition, the older, stronger fish can grow
easily. However, their food is contaminated as well by the acid in the
water. Soon they do not have enough food for themselves and turn to
cannibalism. With only an older population left, there are no young left
to regenerate the population. Soon the lake dies.
By the late 1970s many Norwegian scientists began to suspect that it
was not only the acid in the water that was causing the deaths. They had
shown that most fish could survive in a stream that had up to a 1-unit
difference in pH. After many experiments and much research they found that
the missing link was aluminum.
Aluminum is one of the most common metals on earth. It is stored in a
combined form with other elements in the earth. When it is combined it
cannot dissolve into the water and harm the fish and plants. However, the
acid from acid rain can easily dissolve the bond between these elements.
The aluminum is then converted by the acid into a more soluble form. Other
metals such as copper (Cu), iron (Fe), etc. can have similar effects on
the fish, but it is aluminum that is the most common. For
example: CuO + H2SO4 ----------> CuSO4 + H2O
In this form it is easily absorbed into the water. When it comes in
contact with fish it causes irritation to the gills. In response, the fish
creates a film of mucus in the gills to stop this irritation until the
irritant is gone. However, the aluminum does not go away, and the fish
continues to build up more and more mucus to counteract it. Eventually
there is so much mucus that it clogs the gills. When this happens the fish
can no longer breathe. It dies and then sinks to the bottom of the lake.
Scientists now see acid, aluminum and shortages of calcium as the three
determining factors in the extinction of fish.
There is also the problem of chlorine. In many parts of the world
it is commonly found in the soil. If it enters the fish's environment it
can be deadly. It affects many of the fish's organs and causes it to
die. It also interferes with the photosynthesis process in plants.
NaOH + HCl ----> NaCl + H2O
The carbonate in the water can become very dangerous for fish and plants
in the water if the following reaction happens:
CaCO3 + 2HCl ---> CaCl2 + H2CO3 then
H2CO3 ---> H2O + CO2
The salt created by this reaction can kill. It interferes directly with
the fish's nervous system.
Acid lakes are deceivingly beautiful. They are crystal clear and have a
luscious carpet of green algae on the bottom. The reason that these lakes
are so clear is that many of the decomposers are dead. They cannot break
down material such as leaves and dead animals. These materials
eventually sink to the bottom instead of going through the natural process
of decomposition. In acid lakes decomposition is very slow. "The whole
metabolism of the lake is slowed down."
During this same period of time the Canadian Department of Fisheries
spent eight years dumping sulfuric acid (H2SO4) into an Ontario lake to see
the effects of the decrease in pH over a number of years. At a pH of
5.9 the first organisms began to disappear. They were shrimps. They started
out at a population of about seven million, but at a pH of 5.9 they were
totally wiped out. Within a year the minnows died because they could no
longer reproduce.
At this time the pH was 5.8. New trout were failing to be produced
because many smaller organisms that served as their food had been wiped out
earlier. With not enough food, the older fish did not have the energy to
reproduce. Upon reaching a pH of 5.1 it was noted that the trout became
cannibals. It is believed this was due to the fact that the minnows were
nearly extinct.
At a pH of 5.6 the external skeletons of crayfish softened and they
were soon infected with parasites, and their eggs were destroyed by fungi.
When the pH went down to 5.1 they were almost gone. By the end of the
experiment none of the major species had survived the trials of the acid.
The next experiment conducted by the scientists was to try to bring the
lake back to life. They cut in half the amount of acid that they dumped to
simulate a large-scale cleanup. Soon the suckers and minnows began to
reproduce again. The lake eventually did come back to life, to a certain
extent.
THE NEW THEORY:
A scientist in Norway had a problem believing that it was the acid
rain on its own that was affecting the lakes in such a deadly way. This
scientist was Dr Rosenqvist.
"Why is it that during heavy rain, the swollen rivers can be up to
fifteen times more acid than the rain? It cannot be the rain alone that is
doing it, can it?" Many scientists shunned him for this; however, they
could not come up with a better answer. Soon the scientists were forced to
accept this theory.
Sulfuric acid is composed of two parts, known as ions. The hydrogen ion
is what makes a substance acidic. The other ion is sulphate. The more
hydrogen ions there are, the more acidic a substance is. It is the sulphate
ion that we are interested in here. When the rain causes rivers to overflow
onto their banks, the river water passes through the soil. Since the
Industrial Revolution in Britain there has been an increasing amount of
sulphur in the soil. In the river there is not enough sulphur for the acid
to react in great quantities. However, in the soil there is a great store
of sulphur to aid the reaction. When it joins the water, the pH becomes
much lower. This is the most deadly effect of acid rain on our water! The
water itself does not contain enough sulphur to kill off its population of
fish and plants. But with the sulphur in the soil it does.
CONCLUSION:
Acid rain is a big problem. It causes the death of our lakes, our rivers,
our wildlife and, most importantly, us. It also causes other very serious
problems, such as the release of aluminum and lead into our water supplies.
We are suffering because of it. In Scotland many birth defects are being
attributed to it. We must cut down the releases of the chemicals that cause
it. But it will take time; even if we were to stop today we would have the
problem for years to come because of the build-up in the soil. Let's hope
we can do something.
BIBLIOGRAPHY
Pearce, Fred. Acid Rain: What Is It and What Is It Doing to Us? Penguin Publishing House, 1987.
Stone, William. Acid Rain: Fiend or Foe? New York Publishers, 1989.
Steward, Gail. Acid Rain. Lucent Books, Inc., 1990.
f:\12000 essays\sciences (985)\Chemistry\AcidBase Titration.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chemistry
Acid-Base Titration
Purpose:
The objectives of this experiment were: a) to review the concept of simple acid-base reactions; b) to review the stoichiometric calculations involved in chemical reactions; c) to review the basic lab procedure of a titration and introduce the student to the concept of a primary standard and the process of standardization; d) to review the calculations involving chemical solutions; e) to help the student improve his/her lab technique.
Theory:
Titration was used to study an acid-base neutralization reaction quantitatively. In this acid-base titration experiment, a solution of NaOH of unknown concentration was added gradually to a solution containing an accurately known amount of KHP until the chemical reaction between the two solutions was complete. The equivalence point was the point at which the acid was completely reacted with, or neutralized by, the base. The point was signaled by a change in color of an indicator that had been added to the acid solution. An indicator is a substance that has distinctly different colors in acidic and basic media. Phenolphthalein was the indicator used; it is colorless in acidic and neutral solutions, but turns reddish pink in basic solutions. A strong acid (which supplies H+ ions) and a strong base (which supplies OH- ions) are essentially 100% ionized in water, and both are strong electrolytes.
Procedure:
Part A.
Investigating solid NaOH for use as a possible primary standard
First, the weight of a piece of weighing paper was measured on the analytical balance; two pellets of NaOH were then added and the total was reweighed. At the end of the lab the combination was reweighed, and all results were recorded in the lab manual.
Part B.
Preparation and standardization of a solution of sodium hydroxide
A clean beaker, a burette, three 250 ml Erlenmeyer flasks, and a Florence flask were rinsed with soap and distilled water. 1.40 g of NaOH was poured into the Florence flask and 350 ml of distilled water was added; the flask was then swirled and inverted five times with Parafilm over its mouth. Next, a vial of KHP was obtained from the instructor, and about 0.408 g was weighed on the analytical balance into each of three different Erlenmeyer flasks. About 25 ml of distilled water and 3 drops of phenolphthalein were then added to each flask and mixed well with a glass rod. All solutions were labeled to prevent mixing them up. Before the titration began, the buret was rinsed with the NaOH solution and the initial buret reading was recorded. The solutions were titrated until the reddish pink color appeared. The final reading was recorded, and the change in volume was calculated.
Part C.
Determination of the molar mass of unknown acid
The procedure above was repeated, but this time the KHP was replaced with an acidic solution of unknown identity and concentration. The number of replaceable hydrogens was obtained from the instructor.
Conclusion and Discussion:
From the titration results of the three trials, the average molarity of the NaOH is 0.1021 M. The percentage deviation in the molarity of the NaOH was 0.20%. The possible sources of error in this experiment were: error in taking the buret readings, error in measuring the amounts of reagents, and the fact that NaOH is not stable in air.
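The stoichiometry behind results like the 0.1021 M average can be reproduced with a short calculation. The sketch below is illustrative only: it assumes the usual molar mass of KHP (204.22 g/mol), a 1:1 reaction of KHP with NaOH, and made-up buret volumes, since the actual readings are recorded in the lab manual rather than in this report.

#include <stdio.h>

/* Standardization of NaOH against KHP, assuming a 1:1 reaction:
   moles NaOH = moles KHP = mass KHP / 204.22 g/mol
   M(NaOH)    = moles NaOH / litres of NaOH delivered from the buret.
   The 0.408 g sample mass comes from the procedure; the volumes below
   are hypothetical placeholders for actual buret readings. */
int main(void)
{
    double khp_mass_g  = 0.408;    /* mass of KHP weighed into the flask  */
    double khp_molar_g = 204.22;   /* molar mass of KHP, g/mol            */
    double naoh_vol_ml = 19.6;     /* hypothetical buret volume delivered */

    double mol_khp = khp_mass_g / khp_molar_g;
    double m_naoh  = mol_khp / (naoh_vol_ml / 1000.0);
    printf("Molarity of NaOH = %.4f M\n", m_naoh);

    /* Part C: molar mass of an unknown monoprotic acid titrated with this NaOH */
    double acid_mass_g = 0.350;    /* hypothetical sample mass             */
    double vol_used_ml = 25.0;     /* hypothetical volume of NaOH consumed */
    double mol_naoh    = m_naoh * vol_used_ml / 1000.0;
    printf("Molar mass of unknown acid = %.1f g/mol\n", acid_mass_g / mol_naoh);
    return 0;
}

With these assumed volumes the calculation returns a molarity of about 0.102 M, matching the order of the reported average; the same pattern extends to Part C once the number of replaceable hydrogens is known.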
f:\12000 essays\sciences (985)\Chemistry\AcidbaseExtraction.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The purpose of this laboratory assignment was two-fold: first, we were to
demonstrate the extraction of acids and bases, determining what unknowns
were present; second, we were to extract caffeine from tea. These two
assignments will be documented as two separate parts.
Introduction: Acid/base extraction involves carrying out simple acid/base reactions in
order to separate strong organic acids, weak organic acids, neutral organic compounds
and basic organic substances. The procedures for this laboratory assignment are on the
following pages.
3) Separation of a Carboxylic Acid, a Phenol and a Neutral Substance
The purpose of this acid/base extraction is to separate a mixture of equal parts of
benzoic acid (a strong acid), 2-naphthol (a weak acid) and
1,4-dimethoxybenzene (neutral) by extracting from tert-butyl methyl ether (which is
very volatile). The goal of this experiment was to identify the three components in the
mixture and to determine the percent recovery of each from the mixture.
4) Separation of a Neutral and a Basic Substance
A mixture of equal parts of a neutral substance (either naphthalene or
benzoin) and a basic substance (either 4-chloroaniline or ethyl 4-aminobenzoate)
was to be separated by extraction from an ether solution. Once the separation took
place and crystallization was carried out, it became possible to determine which
components were in the unknown mixture by means of a melting point determination.
Results
Procedure / Observation / Inference
1. Procedure: Dissolve 3.05 g of the phenol/neutral/acid mixture in 30 ml t-butyl methyl ether in an Erlenmeyer flask and transfer the mixture to a 125 ml separatory funnel, using a little ether to complete the transfer.
   Observation: Mixture was a golden-brown/yellow color.
2. Procedure: Add 10 ml of water.
   Observation: Organic layer = mixture; aqueous layer = water (clear).
3. Procedure: Add 10 ml of saturated aqueous sodium bicarbonate solution to the funnel and mix cautiously with the stopper on.
   Inference: Sodium bicarbonate (NaHCO3) dissolves in water.
4. Procedure: Vent the liberated carbon dioxide and shake the mixture thoroughly with frequent venting of the funnel.
   Observation: Carbon dioxide gas was released three times.
5. Procedure: Allow the layers to separate completely and draw off the lower layer into a 50 ml Erlenmeyer flask (flask 1).
   Observation: Lower layer = H2O + NaHCO3.
6. Procedure: Add 10 ml of 1.5 M aqueous NaOH (5 ml of 3 M and 5 ml H2O) to the separatory funnel, shake the mixture, allow the layers to separate and draw off the lower layer into a 25 ml Erlenmeyer flask (flask 2). Add an additional 5 ml of water to the funnel and shake as before.
   Observation: Flask 2 = H2O + NaOH.
7. Procedure: Add 15 ml of saturated NaCl to the funnel. Shake the mixture, allow the layers to separate and draw off the lower layer, which is discarded.
   Observation: Bottom layer is white and gooey.
   Inference: NaCl was added to wash the ether layer and to remove the NaOH and NaHCO3 from the organic substances.
8. Procedure: Pour the ether layer into a 50 ml Erlenmeyer flask from the top of the separatory funnel, not allowing any water droplets to be transferred (flask 3).
9. Procedure: Add anhydrous Na2SO4 to the ether extract until it no longer clumps together, and set it aside.
10. Procedure: Acidify the contents of flask 2 by dropwise addition of concentrated HCl while testing with litmus paper, and cool in ice.
    Observation: Litmus went from blue to pink. Flask 2 = creamy color.
    Inference: Acidification was now complete.
11. Procedure: Acidify the contents of flask 1 by adding HCl dropwise while testing with litmus paper, and cool in ice.
    Observation: Litmus went from blue to pink. Flask 1 = white solution.
    Inference: Acidification was now complete.
12. Procedure: Decant the ether from flask 3 into a tared flask.
13. Procedure: Boil the ether with boiling chips.
14. Procedure: Do a vacuum filtration and recrystallize the residue by dissolving it in 5 ml, taking out the boiling chips, adding drops of ligroin until the solution is cloudy, and cooling it in ice.
    Observation: Solution turns to a solid.
    Inference: Crystallization is now complete.
15. Procedure: Isolate the crystals from flask 2 by vacuum filtration, wash with a small amount of ice water, and recrystallize from boiling water.
    Observation: Crystals = creamy-white powder.
    Inference: Dried crystals are now ready for melting point determination.
16. Procedure: Repeat the above for flask 1.
    Observation: Crystals = white powder.
Flasks number 4 and 5 were done by the following diagram.
Results:
As a result of this acid/base experiment, the following results were obtained:
Flask 1: 31.113 g - 30.223 g = 0.890 g
Flask 2: 36.812 g - 36.002 g = 0.810 g
Flask 3: 90.789 g - 90.114 g = 0.675 g
% yield = (experimental weight / theoretical weight) x 100%
Flask 1: 0.890 g / 1.00 g x 100% = 89%
Flask 2: 0.810 g / 1.00 g x 100% = 81%
Flask 3: 0.675 g / 1.00 g x 100% = 67.5%
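The percent-recovery arithmetic above is simple enough to script. A minimal sketch, using the flask masses reported in this section and the 1.00 g theoretical weight assumed in this report:

#include <stdio.h>

/* Percent recovery = (mass recovered / mass expected) x 100%.
   The recovered masses are the flask differences reported above. */
int main(void)
{
    double recovered[] = {0.890, 0.810, 0.675};  /* g, flasks 1-3                */
    double theoretical = 1.00;                   /* g, assumed per-component mass */
    int i;
    for (i = 0; i < 3; i++)
        printf("Flask %d: %.1f%% recovery\n", i + 1,
               100.0 * recovered[i] / theoretical);
    return 0;
}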
When taking the melting points of the unknowns in flasks 4 and 5, I came to the
conclusion that the samples contained benzoin, melting point 136-137 degrees (C), and
4-chloroaniline, melting point 67-70 degrees (C), respectively.
Flask 4: 90.912 g - 89.174 g = 1.738 g
% yield = 1.738 g / 1.922 g x 100% = 90.4%
Flask 5: 87.833 g - 86.064 g = 1.769 g
% yield = 1.769 g / 2.027 g x 100% = 87.3%
Conclusion:
After each procedure was complete, it became apparent that flask number 4 and
flask number 5 contained benzoin and 4-chloroaniline, respectively. The melting point
range that was experimentally determined for each was 136-137 for benzoin and 67-70
for 4-chloroaniline. As you can see, this experiment was not error-free, as my percentage
yield was not 100%. This is expected for any experiment, for there is no way that, under
these conditions, the experiment can be free of error. The error could have occurred for
many reasons. The most likely reason, I feel, is that not all of the substance was
transferred from the flask to the vacuum filter, giving a slight error. Also, some residue
could have been left in the vacuum funnel when transferring the crystal substances.
Questions
2) It is necessary because otherwise nothing would come out of the stopcock; the reason
for this is pressure. Leaving the stopper on would decrease the pressure pushing down
on the liquid, and the pressure pushing upward would prevail, allowing nothing to escape.
3) I would not expect p-nitrophenol (pKa = 7.15) to dissolve in NaHCO3 (pKa = 6.4)
because, with a weak acid and a weak base, the reaction would favor the reactants, not
the products; hence, the reaction would not proceed forward. I would expect
2,5-dinitrophenol (pKa = 5.15) to dissolve in NaHCO3, because that reaction would
proceed forward.
5) a) 1 g benzoic acid x (1 mol / 143 g benzoic acid) = .00699 mol benzoic acid
b) 1 ml of 10% NaHCO3 solution x (1 g / 4 ml) x (1 mol / 96 g NaHCO3) = .00116 mol NaHCO3
.00699 moles of benzoic acid
Introduction:
The purpose of the second part of this laboratory assignment was to extract
caffeine from tea using dichloromethane and then to confirm the identity of it by
preparing a derivative of the extracted caffeine which has a sharp melting point, unlike
caffeine itself. Once the extraction was complete, we were to test for melting point and
get an HPLC reading for our derivative.
Discussion:
Tea leaves contain acidic, colored compounds as well as a small amount of
undecomposed chlorophyll, which is soluble in dichloromethane. Caffeine can be easily
extracted from tea. This procedure can be done using conventional methods. Simply
pouring hot water on the tea bags and steeping the bags for about 5-7 minutes would
extract most of the caffeine that the tea contains. Pure caffeine itself is a white, bitter,
odorless crystalline solid, therefore, it is obvious that more than just caffeine is in the
liquid tea solution since tea is a brown color. Because of this, dichloromethane is used to
dissolve the caffeine that is in the tea, which leaves the other constituents in the aqueous
layer. Using a separatory funnel, it becomes possible to extract the dissolved caffeine
from the aqueous layer and the extraction is now ready for further procedure.
Results
Procedure / Observation / Inference
1. Procedure: To a 250 ml beaker containing 7 tea bags, add 100 ml of boiling water.
2. Procedure: Allow the mixture to stand for 5-7 minutes while steeping the tea from the bags.
   Observation: Brown aqueous solution containing caffeine and other impurities.
3. Procedure: Decant the mixture into another flask.
4. Procedure: Cool the solution to near room temperature and extract twice with 15 ml portions of dichloromethane, using a gentle rocking motion and venting.
   Observation: Dichloromethane = water soluble, clear, heavier than water.
5. Procedure: Drain off the dichloromethane layer on the first extraction; include the emulsion layer on the second extraction.
   Observation: The dichloromethane organic layer, in which the caffeine is dissolved, is found on the bottom of the funnel; the aqueous solution is on top.
   Inference: Evaporation of the solvent leaves crude caffeine, which on sublimation yields a relatively pure product.
6. Procedure: Drain extractions 1 and 2 back into the funnel.
7. Procedure: Dry the combined dichloromethane solutions and any emulsion layer with sodium sulfate.
   Observation: The solvent layer is yellow.
8. Procedure: Wash the drying agent with further portions of solvent and steam-bath the solvent.
   Observation: Residue of greenish-white crystalline solid weighs 50 mg.
9. Procedure: To 5 mg of the sublimed caffeine in a beaker, add 7.5 mg of salicylic acid and 0.5 ml of dichloromethane.
   Observation: Salicylic acid is water soluble.
10. Procedure: Heat the mixture to a boil and add a few drops of petroleum ether until the mixture turns cloudy.
    Observation: Petroleum ether is a poor solvent for the product.
11. Procedure: Insulate the beaker and allow it to cool slowly to room temperature, then cool in an ice bath.
12. Procedure: Remove the solvent with a Pasteur pipette while the beaker is in the ice bath, then vacuum filter.
    Observation: Needle-like crystals are isolated (white color).
    Inference: Caffeine salicylate is formed.
Caffeine beaker: 51.61 g - 51.56 g = 0.05 g = 50 mg
% yield = 0.05 g / 0.25 g x 100% = 20%
Caffeine salicylate: 17.198 g - 17.036 g = 0.062 g
% yield = 0.062 g / 0.25 g x 100% = 25%
Conclusion
According to the HPLC graph that follows, my product was very pure. The actual
melting point of caffeine salicylate is 137 degrees (C); my product was found to have a
melting point of 138 degrees (C). As before, this experiment was of course not done
completely error-free; the error is due almost entirely to human error.
f:\12000 essays\sciences (985)\Chemistry\Aerosol Spray Can.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Spray cans produce an aerosol, the technical term for a very
fine spray. They do this by means of a pressurized propellant, which
is a liquid that boils at everyday temperatures. Inside the can, a
layer of gaseous propellant forms above the liquid, and its pressure
eventually becomes so high that boiling stops. When the nozzle is
pressed, the gas pressure forces the product up the tube in the can
and out of the nozzle in a spray or foam. The propellant may emerge
as well but, now under less pressure, it immediately evaporates.
First patented in the US in 1941, aerosol spray cans have been
used as convenient packages for an ever increasing range of
products including paints, insecticides, and shaving cream to name
a few. The can is filled with the product to be sprayed and the
propellant, a compressed gas such as butane or Freon. The gas is
partly liquefied by the pressure in the can, but there is a layer of free
gas above the liquid. As the can empties liquefied gas vaporizes to
fill the space.
The valve is normally held shut by the pressure in the can, and
by the coil spring directly below the valve stem. When the push
button is pressed, it forces the valve stem down in its housing,
uncovering a small hole which leads up through the stem to
the nozzle in the button. This allows the product to be forced up the
dip tube by the gas pressure in the can. The nozzle is shaped to give
a spray or a continuous stream.
To produce a fine mist, a propellant is used which mixes with
the product. The two leave the nozzle together and the propellant
evaporates as soon as it reaches the air, breaking the product into
tiny droplets. The same technique used with a more viscous liquid
and a wider nozzle results in a foam. For a continuous stream of
liquid or more viscous material, a nonmixing propellant is used, and
the dip tube reaches into the product.
The widespread use of aerosol cans using Freon as the
propellant led scientists to believe by the late 1970s that the ozone
layer in the upper atmosphere, which filters out harmful ultraviolet
radiation from the sun, could be destroyed by the large quantities of
fluorocarbons being released into the air. Federal controls
were introduced to ban the use of Freon, and other propellants are
now employed, notably butane, which, however, is dangerously
flammable.
Among young people in the United States, conventional drug and
alcohol abuse has given way, for an increasing number of
teenagers, to a practice called 'huffing': inhaling chemicals found in
aerosol sprays and other common household items such as cigarette
lighters, paint thinner, and gasoline. Inhalant abuse is becoming
increasingly common among young middle-class teenagers. It is a
cheap, and sometimes deadly, thrill.
Yoon, Byung
Period 1
Aerosol Spray Cans
Bibliography:
Aylesworth, T.G. It Works Like This. Garden City: Doubleday & Company,
1968.
Casey, Maura. "When a quick high may be quick death." The New York
Times 30 July 1995 sec:cn p:4 col:5
Flexner, Bob. "Finishes for small projects." Workbench March 1994
Kaplan, Justine. "Continuum: Are the Ninja Turtles misinformed?" Omni
June 1993: p27
Macaulay, David. The Way Things Work. Boston: Houghton Mifflin
Company, 1988.
Pierson, John. "Form plus function: ... The battle between pumps and
aerosols." The Wall Street Journal 28 Feb. 1994 sec:B P:1 col:1
Stepp, Laura Sessions. "Ringing the alarm on aerosols: Inhalants & Poisons.
Awareness Week." The Washington Post 21 March 1994 sec:C p:5
col:5
Trebilcock, Bob. "The new high kids crave." Redbook March 1993
f:\12000 essays\sciences (985)\Chemistry\Alchemy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Science
Alchemy
Alchemy, ancient art practiced especially in the Middle Ages, devoted chiefly
to discovering a substance that would transmute the more common metals into
gold or silver and to finding a means of indefinitely prolonging human life.
Although its purposes and techniques were dubious and often illusory, alchemy
was in many ways the predecessor of modern science, especially the science of
chemistry.
The birthplace of alchemy was ancient Egypt, where, in Alexandria, it began to
flourish in the Hellenistic period; simultaneously, a school of alchemy was
developing in China. The writings of some of the early Greek philosophers might
be considered to contain the first chemical theories; and the theory advanced
in the 5th century BC by Empedocles-that all things are composed of air, earth,
fire, and water-was influential in alchemy. The Roman emperor Caligula is said
to have instituted experiments for producing gold from orpiment, a sulfide of
arsenic, and the emperor Diocletian is said to have ordered all Egyptian works
concerning the chemistry of gold and silver to be burned in order to stop such
experiments. Zosimus the Theban (about AD 250-300) discovered that sulfuric
acid is a solvent of metals, and he liberated oxygen from the red oxide of
mercury.
The fundamental concept of alchemy stemmed from the Aristotelian doctrine that
all things tend to reach perfection. Because other metals were thought to be
less "perfect" than gold, it was reasonable to assume that nature formed gold
out of other metals deep within the earth and that with sufficient skill and
diligence an artisan could duplicate this process in the workshop. Efforts
toward this goal were empirical and practical at first, but by the 4th century
AD, astrology, magic, and ritual had begun to gain prominence.
A school of pharmacy flourished in Arabia during the caliphates of the Abbasids
from 750 to 1258. The earliest known work of this school is the Summa
Perfectionis (Summit of Perfection), attributed to the Arabian scientist and
philosopher Geber; the work is consequently the oldest book on chemistry proper
in the world and is a collection of all that was then known and believed. The
Arabian alchemists worked with gold and mercury, arsenic and sulfur, and salts
and acids, and they became familiar with a wide range of what are now called
chemical reagents. They believed that metals are compound bodies, made up of
mercury and sulfur in different proportions. Their scientific creed was the
potentiality of transmutation, and their methods were mostly blind gropings;
yet, in this way, they found many new substances and invented many useful
processes.
From the Arabs, alchemy generally found its way through Spain into Europe. The
earliest authentic works extant on European alchemy are those of the English
monk Roger Bacon and the German philosopher Albertus Magnus; both believed in
the possibility of transmuting inferior metals into gold. This idea excited the
imagination, and later the avarice, of many persons during the Middle Ages.
They believed gold to be the perfect metal and that baser metals were more
imperfect than gold. Thus, they sought to fabricate or discover a substance,
the so-called philosopher's stone, so much more perfect than gold that it could
be used to bring the baser metals up to the perfection of gold.
Roger Bacon believed that gold dissolved in aqua regia was the elixir of life.
Albertus Magnus had a great mastery of the practical chemistry of his time. The
Italian Scholastic philosopher St. Thomas Aquinas, the Catalan churchman
Raymond Lully, and the Benedictine monk Basil Valentine (flourished 15th
century) also did much to further the progress of chemistry, although along
alchemical lines, in discovering the uses of antimony, the manufacture of
amalgams, and the isolation of spirits of wine, or ethyl alcohol.
Important compilations of recipes and techniques in this period include The
Pirotechnia (1540; trans. 1943), by the Italian metallurgist Vannoccio
Biringuccio; Concerning Metals (1556; trans. 1912), by the German mineralogist
Georgius Agricola; and Alchemia (1597), by Andreas Libavius, a German
naturalist and chemist.
Most famous of all was the 16th-century Swiss alchemist Philippus Paracelsus.
Paracelsus held that the elements of compound bodies were salt, sulfur, and
mercury, representing, respectively, earth, air, and water; fire he regarded as
imponderable, or nonmaterial. He believed, however, in the existence of one
undiscovered element common to all, of which the four elements of the ancients
were merely derivative forms. This prime element of creation Paracelsus termed
alkahest, and he maintained that if it were found, it would prove to be the
philosopher's stone, the universal medicine, and the irresistible solvent.
After Paracelsus, the alchemists of Europe became divided into two groups. One
group was composed of those who earnestly devoted themselves to the scientific
discovery of new compounds and reactions; these scientists were the legitimate
ancestors of modern chemistry as ushered in by the work of the French chemist
Antoine Lavoisier. The other group took up the visionary, metaphysical side of
the older alchemy and developed it into a practice based on imposture,
necromancy, and fraud, from which the prevailing notion of alchemy is derived.
"Alchemy," Microsoft (R) Encarta. Copyright (c) 1994 Microsoft Corporation.
Copyright (c) 1994 Funk & Wagnall's Corporation.
f:\12000 essays\sciences (985)\Chemistry\Analysis of a Vapor Power Plant.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Analysis of a Vapor Power Plant
8/20/96
ME1361 Thermo II
3.0 Abstract
The objective of this study is to construct a computer model of a water vapor power plant. This model will be used to calculate the state properties at all points within the cycle. Included is an analysis of the ideal extraction pressures based on the calculated values of net work, energy input, thermal efficiency, moisture content, and effectiveness.
4.0 Body
4.1 Introduction
System to be Analyzed
Steam enters the first turbine stage at 120 bar, 520 C and expands in three stages to the condenser pressure of .06 bar. Between the first and second stage, some steam is diverted to a closed feedwater heater at P1, with saturated liquid condensate being pumped ahead into the boiler feedwater line. The Terminal Temperature Difference of the feedwater heater is 5C. The rest of the steam is reheated to 500C, and then enters the second stage of expansion. Part of the steam is extracted between the second and third stages at P2 and fed into an open feedwater heater operating at that pressure. Saturated liquid at P2 leaves the open feedwater heater. The efficiencies of all pumps are 80%, and the efficiencies of all turbines are 85%.
Throughout this report the states will be referenced as depicted above with the numbers 1-13.
The analysis of the system will involve the use of the Energy Rate Balance to isolate the specific enthalpies and associated values of temperature, pressure, specific volume, and steam quality. The Entropy balance equation will be used to calculate the specific entropy at all the above noted states.
Energy Rate Balance (assume KE&PE=0)
dEcv/dt = Qcv - Wcv + Σ mi·hi - Σ me·he
Entropy Rate Balance
dScv/dt = Σ (Qj/Tj) + Σ mi·si - Σ me·se + σcv
For simplicity, it is assumed in all calculations that kinetic and potential energy have a negligible effect. It is also assumed that each component in the cycle is analyzed as a control volume at steady state; and that each control volume suffers from no stray heat transfer from any component to its surroundings. The steam quality at the turbine exits will also be constrained to values greater than or equal to 90% (Moran, 337).
4.2 Code Development
The C program finalproject.c was developed to calculate the state values given the constraints listed in section 4.1. The program structure consists of three parts:
Header/variable declaration
Calculation section
Data Report section
The Header section includes all the variable declarations, the functions to include, and the system definitions. To obtain accurate data values, this program uses floating point values. The Calculation section is the function that is used to calculate all the state values. In essence this section consists of two nested while() loops that are used to vary the extraction pressures from 12000 kPa down to 300 kPa. The while loops are set to terminate when the steam quality becomes less than 90%, as defined in the constraints in section 4.1. The Data Reporting section is found within the nested while() loops and is used to report the values found in the preceding Calculation section.
4.3 Results and Discussion
T-s diagram
The T-s diagram above shows how specific entropy changes with temperature. At state 1 the water vapor has just left the boiler and is superheated. It then undergoes an expansion through the turbine. Since the efficiency of the turbine is not 100%, the entropy increases, as denoted by the point labeled 2. During the reheat the pressure remains constant but the entropy increases to point 3. Then another two expansions occur and the fluid reaches states 4 and 5, respectively. The fluid then condenses at constant pressure to a saturated liquid at state 6. The working fluid then enters a pump of efficiency 80%, reaching state 7. The fluid is then heated in an open feedwater heater at constant pressure until it is a saturated liquid at state 8. The fluid is then sent through a pump to a pressure equal to that at state 1, reaching point 9. The closed feedwater heater heats the fluid at constant pressure to state 10, and the fluid is then heated again by mixing with the feedwater at state 13. The fluid is then heated back to point 1 in the boiler.
To find the optimum extraction pressure it is necessary to analyze the net work, energy input, thermal efficiency and moisture content at various extraction pressures. These results can be found in the appendix and are graphed on the following pages. In the Plot of Net Work as a Function of Extraction Pressure P2, P4, it is evident that net work is maximized when extraction pressure P2 is 300 kPa and pressure P4 is 100 kPa. The corresponding minimum value occurs when P2 is 3100 kPa and P4 is 100 kPa. In the following graph the total energy input to the system is plotted as a function of extraction pressure. The minimum value is obtained when P4 is 100 kPa and P2 is 3100 kPa. However, it is noteworthy that for all values of P2 the energy input is minimized when P4 is 100 kPa. In the Plot of Thermal Efficiency as a Function of Extraction Pressure P2, P4, the thermal efficiency is maximized when P4 is 100 kPa and P2 is 300 kPa. This corresponds directly to the results obtained for the net work of the cycle. The moisture content at the exit of the third-stage turbine can then be analyzed to see which extraction pressure combination will be less damaging to the system. The graph shows that the value of steam quality is maximized when P4 is 100 kPa and P2 is 300 kPa. From this it can be concluded that an extraction pressure P4 equal to 100 kPa and P2 equal to 300 kPa would be optimal for cycle performance. Not only is net work maximized, but damage to system components is minimized by having a high steam quality in the turbine. For this, energy input to the system is sacrificed; however, the net result is a more efficient and lower-maintenance power plant.
5.0 Conclusion
The computer model of the vapor power plant was used to construct a table of data corresponding to every state of the system cycle. Using this data, an optimal solution could be found for a combination of extraction pressures that optimizes the performance of the power plant. In particular a solution was found that maximized net work and efficiency while minimizing the need for equipment replacement. The graphs of net work, thermal efficiency and moisture content versus extraction pressure, depict this contrast in values.
#include "h2osuperc.c"
#include "h2osaturc.c"
#include <stdio.h>   /* FILE, fopen, fprintf, printf */
#include <math.h>    /* log() */
#define P1 12000 /* kPa */
#define T1 793.15 /* degrees Kelvin */
#define T3 773.15 /* degrees Kelvin */
#define P5 6 /* kPa */
#define EFF_TURB .85 /* Isentropic Turbine Efficiency */
#define EFF_PUMP .8 /* Isentropic Pump Efficiency */
#define MIN_QUALITY .9 /* Lowest allowable Turbine Exit Quality */
#define INCREMENT 100 /* kPa */
#define TTD 5.0 /* degrees Celsius */
#define C 4.179 /* specific heat */
#define T_ENV 293.15 /* Environment temp, deg. Kelvin */
#define T_BOUND 1123.15 /* Boundary temp, deg. Kelvin */
void main()
{
/*** FUNCTION DECLARATION ***/
float h2osuper(int,float,float,int);
float h2osatur(int,float,int);
/*** VARIABLE DECLARATION ***/
float h1, s1, v1, t1, p1;
float p2, t2, s2, h2, h2s, x2, hg2, hf2, s2s, sf2, sg2, vf2, vg2, v2;
float p3, s3, h3, v3, t3;
float p4, t4, s4, h4, h4s, x4, s4s, sf4, sg4, hf4, hg4, vf4, vg4, v4;
float t5, s5, h5, h5s, s5s, sf5, sg5, x5, hf5, hg5, p5, vf5, vg5, v5;
float p6, t6, s6, h6, h6s, v6;
float p7, t7, s7, h7, h7s, v7;
float p8, t8, s8, h8, h8s, v8;
float p9, t9, s9, h9, h9s, v9;
float p10, t10, s10, h10, h10s, hf10, vf10, v10;
float p11, t11, s11, h11, h11s, v11;
float p12, h12, t12, v12, s12;
float p13, h13, t13, v13, s13;
float y2, y1, psat, w_turb_1, w_turb_2, w_turb_3;
float w_pump_1, w_pump_2, w_pump_3, w_turb_total, w_pump_total;
float w_net, q_boiler, q_reheat, thermal_eff, irrev;
/* OPEN A FILE FOR DATA */
FILE *fp;
fp=fopen("/bitbucket/ashoemak/proj.out","w");
fprintf(fp, "|______________________STATE_1______________________");
fprintf(fp, "|________________________STATE_2____________________________");
fprintf(fp, "|______________________STATE_3______________________|");
fprintf(fp, "|________________________STATE_4____________________________");
fprintf(fp, "|________________________STATE_5____________________________");
fprintf(fp, "|______________________STATE_6______________________|");
fprintf(fp, "|______________________STATE_7______________________|");
fprintf(fp, "|______________________STATE_8______________________|");
fprintf(fp, "|______________________STATE_9______________________|");
fprintf(fp, "|______________________STATE_10_____________________|");
fprintf(fp, "|______________________STATE_11_____________________|");
fprintf(fp, "|______________________STATE_12_____________________|");
fprintf(fp, "|______________________STATE_13_____________________|");
fprintf(fp, "|______________________Misc_Data____________________|\n");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) | x ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) | x ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) | x ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| T(C) | P(kPa) | v(m^3/kg) | h(kj/kg) | s(kj/kg-K) ");
fprintf(fp, "| W_T1 | W_T2 | W_T3 | W_P1 | W_P2 | W_P3 | WTT | WPT | Wnet | q_b | q_r | T_eff | Irrev. |\n");
fclose(fp);
p2 = P1;
x2 = 1;
x4 = 1;
x5 = 1;
s2 = 0;
s4 = 0;
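/* Outer loop: sweep the first extraction pressure p2 downward from the
   boiler pressure P1 in steps of INCREMENT, recomputing every cycle state.
   The inner loop below does the same for the second extraction pressure p4.
   Each loop stops once the corresponding turbine-exit quality falls below
   MIN_QUALITY (90%), as required by the constraints in section 4.1. */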
while( x2 >= MIN_QUALITY )
{
/*** TURBINE INLET ***/
p1=P1;
h1 = h2osuper(12,T1,P1,4);
s1 = h2osuper(12,T1,P1,5);
v1 = h2osuper(12,T1,P1,3);
t1 = T1-273.15;
h2s = h2osuper(25,p2,s1,4);
if (h2s == 0)
{
s2s = s1;
sf2 = h2osatur(2,p2,7);
sg2 = h2osatur(2,p2,8);
x2 = (s2s-sf2)/(sg2-sf2);
if(x2 < MIN_QUALITY || p2 <= 0)
{
break;
}
hf2 = h2osatur(2,p2,5);
hg2 = h2osatur(2,p2,6);
h2s = hf2 + x2*(hg2-hf2);
h2 = h1 - (EFF_TURB*(h1 - h2s));
s2 = ((sg2-sf2)/(hg2-hf2)*(h2-hf2)) + sf2;
t2 = h2osatur(2,p2,1)-273.15;
vf2 = h2osatur(2,p2,3);
vg2 = h2osatur(2,p2,4);
v2 = ((vg2-vf2)/(hg2-hf2)*(h2-hf2)) + vf2;
}
else
{
h2= h1-(EFF_TURB*(h1-h2s));
s2= h2osuper(24,p2,h2,5);
t2= h2osuper(24,p2,h2,1);
v2= h2osuper(24,p2,h2,3);
}
p3 = p2;
h3 = h2osuper(12,T3,p3,4);
s3 = h2osuper(12,T3,p3,5);
v3 = h2osuper(12,T3,p3,3);
t3 = T3-273.15;
x5=1;
x4=1;
p4=p3;
while( x4 >= MIN_QUALITY )
{
h4s = h2osuper(25,p4,s3,4);
if (h4s == 0)
{
s4s = s3;
sf4 = h2osatur(2,p4,7);
sg4 = h2osatur(2,p4,8);
x4 = (s4s-sf4)/(sg4-sf4);
if(x4 < MIN_QUALITY || p4 <= 0)
{
goto a1;
}
hf4 = h2osatur(2,p4,5);
hg4 = h2osatur(2,p4,6);
h4s = hf4 + x4*(hg4-hf4);
h4 = h3 - (EFF_TURB*(h3 - h4s));
s4 = ((sg4-sf4)/(hg4-hf4)*(h4-hf4)) + sf4;
t4 = h2osatur(2,p4,1);
vf4 = h2osatur(2,p4,3);
vg4 = h2osatur(2,p4,4);
v4 = ((vg4-vf4)/(hg4-hf4)*(h4-hf4)) + vf4;
}
else
{
h4= h3-(EFF_TURB*(h3-h4s));
s4= h2osuper(24,p4,h4,5);
t4= h2osuper(24,p4,h4,1);
v4= h2osuper(24,p4,h4,3);
}
/* THIRD TURBINE */
p5=P5;
printf("***\n");
h5s = h2osuper(25,P5,s4,4);
printf("***\n");
if (h5s == 0)
{
s5s = s4;
sf5 = h2osatur(2,P5,7);
sg5 = h2osatur(2,P5,8);
x5 = (s5s-sf5)/(sg5-sf5);
printf("x5 = %f\n",x5);
printf("p2= %f\tp4= %f\n",p2,p4);
if(x5 < MIN_QUALITY || p5 <= 0)
{
goto a2;
}
hf5 = h2osatur(2,P5,5);
hg5 = h2osatur(2,P5,6);
h5s = hf5 + x5*(hg5-hf5);
h5 = h4 - (EFF_TURB*(h4 - h5s));
s5 = ((sg5-sf5)/(hg5-hf5)*(h5-hf5)) + sf5;
t5 = h2osatur(2,P5,1);
printf("t5 = %f\n",t5);
vf5 = h2osatur(2,P5,3);
vg5 = h2osatur(2,P5,4);
v5 = ((vg5-vf5)/(hg5-hf5)*(h5-hf5)) + vf5;
}
else
{
h5= h4-(EFF_TURB*(h4-h5s));
printf("h5 = %f\n",h5);
printf("h5s = %f\n", h5s);
printf("h4 = %f\n",h4);
s5= h2osuper(24,P5,h5,5);
t5= h2osuper(24,P5,h5,1);
printf("t5 = %f\n",t5);
v5= h2osuper(24,P5,h5,3);
}
/* CONDENSER */
p6=P5;
h6=h2osatur(2,p6,5);
s6=h2osatur(2,p6,7);
v6=h2osatur(2,p6,3);
t6=h2osatur(2,p6,1);
/* PUMP 1 EXIT */
p7=p4;
h7=h6+((v6*(p7-p6))/EFF_PUMP);
t7=h2osatur(5,h7,1);
v7=h2osatur(1,(t7+273.15),3);
s7=(C*log((273.15+t7)/(273.15+t6)))+s6; /* page 213 */
/* OPEN FEEDWATER EXIT */
p8=p4;
h8=h2osatur(2,p8,5);
s8=h2osatur(2,p8,7);
v8=h2osatur(2,p8,3);
t8=h2osatur(2,p8,1);
/* PUMP 2 EXIT */
p9=P1;
h9=h8+((v8*(p9-p8))/EFF_PUMP);
t9=h2osatur(5,h9,1);
v9=h2osatur(1,(t9+273.15),3);
s9=(C*log((273.15+t9)/(273.15+t8)))+s8; /* page 213 */
/* CLOSED FEEDWATER HEATER EXIT-CONDENSATE */
p11=p2;
h11=h2osatur(2,p11,5);
s11=h2osatur(2,p11,7);
v11=h2osatur(2,p11,3);
t11=h2osatur(2,p11,1);
/* CLOSED FEEDWATER HEATER EXIT */
t10=t11-TTD;
p10=P1;
psat=h2osatur(1,t10+273.15,2);
hf10=h2osatur(2,p10,5);
vf10=h2osatur(2,p10,3);
h10=hf10+(vf10*(p10-psat));
s10=(C*log((273.15+t10)/(273.15+t9)))+s9;
v10=-1;
/* EXIT PUMP 3 */
p12=P1;
h12=h11+((v11*(p12-p11))/EFF_PUMP);
t12=h2osatur(5,h12,1);
v12=h2osatur(1,(t12+273.15),3);
s12=(C*log((273.15+t12)/(273.15+t11)))+s11; /* page 213 */
/* CALCULATE Y-VALUES */
y1=(h10-h9)/(h2-h11);
y2=(h8-h7+(y1*h7))/(h4-h7);
/* BOILER INLET */
p13=P1;
h13=((1-y1-y2)*h10)+(h12*y1);
t13=h2osatur(5,h13,1);
v13=h2osatur(1,(t13+273.15),3);
s13=-1;
/* PUMP & TURBINE WORK */
w_turb_1=(h1-h2);
w_turb_2=((1-y1)*(h3-h4));
w_turb_3=((1-y1-y2)*(h4-h5));
w_pump_1=((1-y1-y2)*(h7-h6));
w_pump_2=((1-y1)*(h9-h8));
w_pump_3=(y1*(h12-h11));
w_turb_total=(w_turb_1+w_turb_2+w_turb_3);
w_pump_total=(w_pump_1+w_pump_2+w_pump_3);
w_net=(w_turb_total-w_pump_total);
q_boiler=(h1-h13);
q_reheat=((1-y1)*(h3-h2));
thermal_eff=(w_net/(q_boiler+q_reheat));
irrev=((1.0-(T_ENV/T_BOUND))*(q_boiler+q_reheat))-w_net+((h1-h13)-(T_ENV*(s1-s13)));
/* PRINT OUTPUT */
fp=fopen("/bitbucket/ashoemak/proj.out", "a");
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t1,p1,v1,h1,s1);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f | %.4f ",t2,p2,v2,h2,s2,x2);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t3,p3,v3,h3,s3);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f | %.4f ",t4,p4,v4,h4,s4,x4);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f | %.4f ",t5,p5,v5,h5,s5,x5);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t6,p6,v6,h6,s6);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t7,p7,v7,h7,s7);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t8,p8,v8,h8,s8);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t9,p9,v9,h9,s9);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t10,p10,v10,h10,s10);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t11,p11,v11,h11,s11);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t12,p12,v12,h12,s12);
fprintf(fp, "|%.1f | %.0f | %.7f | %.3f | %.8f ",t13,p13,v13,h13,s13);
fprintf(fp, "|%f |%f |%f |%f |%f |%f |%f |%f |%f |%f |%f |%f |%f |\n",w_turb_1,w_turb_2,w_turb_3,w_pump_1, w_pump_2, w_pump_3, w_turb_total, w_pump_total,w_net, q_boiler, q_reheat, thermal_eff,irrev);
fclose(fp);
printf("p2= %f\tp4= %f\n",p2,p4);
a2: p4 = p4 - INCREMENT;
}
a1: p2 = p2 - INCREMENT;
}
}
f:\12000 essays\sciences (985)\Chemistry\Analytical Chemistry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Analytical Chemistry
Analytical chemistry is the branch of chemistry principally concerned with determining the chemical composition of materials, which may be solids, liquids, gases, pure elements, compounds, or complex mixtures. In addition, chemical analysis can characterize materials by determining their molecular structures and measuring such physical properties as pH, color, and solubility. Wet analysis involves the study of substances in solution, and microanalysis uses substances in very small amounts.
Qualitative chemical analysis is used to detect and identify one or more constituents of a sample. This process involves a wide variety of tests. Ideally, the tests should be simple, direct, and easily performed with available instruments and chemicals. Test results may be an instrument reading, an observation of a physical property, or a chemical reaction. Reactions used in qualitative analysis may attempt to cause a characteristic color, odor, precipitate, or gas to appear. Identification of an unknown substance is accomplished when a known one is found with identical properties. If none is found, the unknown substance must be a newly identified chemical. Tests should not use up excessive amounts of the material to be identified. Most chemical methods of qualitative analysis require a very small amount of the sample. Advanced instrumental techniques often use less than one millionth of a gram. An example of this is mass spectrometry.
Quantitative chemical analysis is used to determine the amounts of constituents. Most work in analytical chemistry is quantitative; it is also the most difficult. In principle the analysis is simple: one measures the amount of sample and then the amount of analyte it contains. In practice, however, the analysis is often complicated by interferences among sample constituents, and chemical separations are necessary to isolate the analyte or remove interfering constituents.
The choice of method depends on a number of factors: speed, cost, accuracy, convenience, available equipment, number of samples, size of sample, nature of sample, and expected concentration. Because these factors are interrelated, any final choice of analytical method involves compromises, and it is impossible to specify a single best method to carry out a given analysis in all laboratories under all conditions. Since analyses are carried out on small amounts, one must be careful when dealing with heterogeneous materials. Carefully designed sampling techniques must be used to obtain representative samples.
Preparing solid samples for analysis usually involves grinding, to reduce particle size and ensure homogeneity, and drying. Solid samples are weighed using an accurate analytical balance. Liquid or gaseous samples are measured by volume using accurately calibrated glassware or flowmeters. Many, but not all, analyses are carried out on solutions of the sample. Solid samples that are insoluble in water must be treated chemically to dissolve them without any loss of analyte. Dissolving intractable substances such as ores, plastics, or animal tissue is sometimes extremely difficult and time consuming.
A most demanding step in many analytical procedures is isolating the analyte or separating from it those sample constituents that otherwise would interfere with its measurement. Most of the chemical and physical properties on which the final measurement rests are not specific. Consequently, a variety of separation methods have been developed to cope with the interference problem. Some common separation methods are precipitation, distillation, extraction into an immiscible solvent, and various chromatography procedures. Loss of analyte during separation procedures must be guarded against. The purpose of all earlier steps in an analysis is to make the final measurement a true indication of the quantity of analyte in the sample. Many types of final measurement are possible, including gravimetric and volumetric analysis. Modern analysis uses sophisticated instruments to measure a wide variety of optical, electrochemical, and other physical properties of the analyte.
Methods of chemical analysis are frequently classified as classical or instrumental, depending on the techniques and equipment used. Many of the methods currently used are of relatively recent origin and employ sophisticated instruments to measure physical properties of molecules, atoms, and ions. Such instruments have been made possible by spectacular advances in electronics, including computer and microprocessor development. Instrumental measurements can sometimes be carried out without separating the constituents of interest from the rest of the sample, but often the instrumental measurement is the final step following separation of the sample's components, frequently by means of one or another type of chromatography.
Among the most useful instrumental methods are the various types of spectroscopy.
All materials absorb or emit electromagnetic radiation to varying extents, depending on their electronic structure. Therefore, studies of the electromagnetic spectrum of a material yield scientific information. Many spectroscopic methods are based upon the exposure of a sample substance to electromagnetic radiation. Measurements are then made of how the intensity of radiation absorbed, emitted, or scattered by the sample changes as a function of the energy, wavelength, or frequency of the radiation. Other important methods are based upon using beams of electrons or other particles to excite a sample to emit radiation, or using radiation to induce a sample to emit electrons. In conjunction with the related techniques of mass spectrometry and X-ray or neutron diffraction, spectroscopy has almost completely replaced classical chemical analysis in studies of the structure of materials.
Classical chemical procedures, such as determination by volume as in titrations, are also used. A titration is a procedure for analyzing a sample solution by gradually adding another solution and measuring the minimum volume required to react with all of the analyte in the sample. The titrant contains a reagent whose concentration is accurately known; it is added to the sample solution using a calibrated volumetric burette to measure accurately the volume delivered.
When a precisely sufficient volume of titrant has been added, the equivalence point, or endpoint, is reached. An endpoint can be located either visually, using a suitable chemical indicator, or instrumentally, using an instrument to monitor some appropriate physical property of the solution, such as pH or optical absorbance, that changes during the titration. Ideally, the experimental endpoint coincides with the true equivalence point, where an exactly equivalent amount of the titrant has been added, but in practice some discrepancy exists. Proper choice of the endpoint-detection method minimizes this error.
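To make the arithmetic behind a titration concrete, a short sketch like the following can be used; the reagents, concentrations, and volumes here are illustrative assumptions (a simple 1:1 acid-base reaction), not values from any particular analysis. At the endpoint, the moles of titrant delivered equal the moles of analyte, so the analyte concentration follows from the measured titrant volume.

# Hypothetical titration calculation: a 25.00 mL sample of HCl titrated with
# 0.1000 M NaOH, reaching the endpoint after 21.40 mL of titrant.
# Reaction assumed: HCl + NaOH -> NaCl + H2O (1:1 stoichiometry).

titrant_molarity_M = 0.1000     # mol/L NaOH, accurately known
titrant_volume_mL = 21.40       # volume delivered from the burette at the endpoint
sample_volume_mL = 25.00        # volume of the analyte solution
stoich_ratio = 1                # mol analyte per mol titrant for this reaction

moles_titrant = titrant_molarity_M * titrant_volume_mL / 1000.0
moles_analyte = moles_titrant * stoich_ratio
analyte_molarity_M = moles_analyte / (sample_volume_mL / 1000.0)

print(f"Moles of analyte: {moles_analyte:.5f} mol")
print(f"Analyte concentration: {analyte_molarity_M:.4f} M")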
Analytical chemistry has widespread useful applications. For example, the problem of ascertaining the extent of pollution in the air or water involves qualitative and quantitative chemical analysis to identify contaminants and to determine their concentrations. Diagnosing human health problems in a clinical chemistry laboratory is facilitated by quantitative analyses carried out on samples of the patient's blood and other fluids. Modern industrial chemical plants rely heavily on quantitative analyses of raw materials, intermediates, and final products to ensure product quality and provide information for process control. In addition, chemical analyses are essential to research in all areas of chemistry as well as such related sciences as biology and geology.
f:\12000 essays\sciences (985)\Chemistry\aristotle.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aristotle
One of the greatest thinkers of all time was Aristotle (384-322 BC), the Ancient Greek philosopher. He has influenced practically every area of present-day thinking. His main focal points were the natural and social sciences. Aristotle was born in 384 BC in Stagira, a town on the northwest coast of the Aegean Sea. He grew up a wealthy boy. His father was a friend of the king of Macedonia, and as a young man Aristotle spent the majority of his time at the Macedonian court. At the age of seventeen, he was sent away to study in Athens. It was there that he became a disciple of Plato. Over time, Aristotle became the "mind of the school". Later in his life, he followed his mentor and became a teacher in a school on the coast of Asia Minor. Aristotle was the tutor of the young prince Alexander, who went on to become the ruler Alexander the Great.
Aristotle was the first known person to make major advances in the fields of logic, the physical works (such as physics, meteorology, etc.), the psychological works, and natural history (modern-day biology). His most famous studies are in the field of philosophical works. His studies play an important role in the early history of chemistry. Aristotle was one of the first people to propose ideas about atoms, matter, and other grand questions.
Aristotle made the first major advances in the field of the philosophy of nature. He saw the universe as lying on a scale between two extremes: form without matter at one end and matter without form at the other. One of the most important aspects of Aristotle's philosophy was the development of potentiality into actuality, that is, the passage of a thing from what it could become to what it actually is. The relation of the actual state to the potential state is explained in terms of the causes which act on things. The four causes are the material cause, the efficient cause, the formal cause, and the final cause. First, the material cause is defined as the elements out of which a thing is created. The way in which the thing is created is known as the efficient cause. The formal cause is the expression of what the material actually is. The last cause, appropriately named the final cause, is the end or purpose of the thing.
An example of actual compared to potential can be as simple as a bronze statue. The material cause is plainly the bronze. Its efficient cause is the sculptor. The formal cause is the idea of the statue, as the sculptor envisions it. The final cause is the perfection of the statue. These four stages, from creation through completion, exist throughout nature. Aristotle's vision of early chemistry created a strong foundation for the chemists of today.
Works Cited
Aristotle (Internet Encyclopedia of Philosophy). (Online) Available http://utm.edu/research/iep/a/aristotl/htm
Aristotle's Page. (Online) Available http://eng.ox.ac.uk/jdr/aristo/html
Compton's Interactive Encyclopedia. 1995 Compton's NewMedia, Inc.
f:\12000 essays\sciences (985)\Chemistry\Art for all .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Rutherford's Gold Foil Experiment
Rutherford started his scientific career with much success in local schools, leading to a scholarship to Nelson College. After achieving more academic honors at Nelson College, Rutherford moved on to Cambridge University's Cavendish Laboratory. There his mentor, J.J. Thomson, convinced him to study radiation. By 1898 Rutherford was ready to earn a living and sought a job. With Thomson's recommendation, McGill University in Montreal accepted him as a professor of chemistry. Upon performing many experiments and making new discoveries at McGill University, Rutherford was awarded the Nobel Prize for chemistry. In 1907 he succeeded Arthur Schuster at the University of Manchester. He began pursuing alpha particles in 1908. With the help of Geiger he found the number of alpha particles emitted per second by a gram of radium. He was also able to confirm that alpha particles cause a faint but discrete flash when striking a luminescent zinc sulfide screen. These great accomplishments are all overshadowed by Rutherford's famous Gold Foil experiment, which revolutionized the atomic model.
This experiment was Rutherford's most notable achievement. It not only disproved Thomson's atomic model but also paved the way for such discoveries as the atomic bomb and nuclear power. The atomic model he arrived at from the findings of his Gold Foil experiment has yet to be disproven. The following paragraphs will explain the significance of the Gold Foil Experiment as well as how the experiment contradicted Thomson's atomic model.
Rutherford began his experiment with the philosophy of trying "any damn fool experiment" on the chance it might work.1 With this in mind he set out to test the current atomic model. In 1909 he and his partner, Geiger, decided Ernest Marsden, a student at the University of Manchester, was ready for a real research project.2 This experiment's apparatus consisted of polonium in a lead box emitting alpha particles towards a gold foil. The foil was surrounded by a luminescent zinc sulfide screen to detect where the alpha particles went after contacting the gold atoms. Because of Thomson's atomic model this experiment did not seem worthwhile, for it predicted all the alpha particles would go straight through the foil. However unlikely it may have seemed for the alpha particles to bounce off the gold atoms, they did, leaving Rutherford to say, "It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you." Soon he came up with a new atomic model based on the results of this experiment. Nevertheless, his findings and the new atomic model were mainly ignored by the scientific community at the time.
In spite of the views of other scientists, Rutherford's 1911 atomic model was backed by the scientific proof of his Gold Foil Experiment. When he approached the experiment he respected and agreed with the atomic theory of J.J. Thomson, his friend and mentor. This theory proposed that the electrons were evenly distributed throughout an atom. Since an alpha particle is about 8,000 times as heavy as an electron, one electron could not deflect an alpha particle at an obtuse angle. Applying Thomson's model, a passing particle could not hit more than one electron at a time; therefore, all of the alpha particles should have passed straight through the gold foil. This was not the case - a notable few alpha particles reflected off the gold atoms back towards the polonium. Hence the mass of an atom must be condensed in a concentrated core. Otherwise the mass of the alpha particles would be greater than that of any part of an atom they hit. As Rutherford put it:
"The alpha projectile changed course in a
single encounter with a target atom. But
for this to occur, the forces of electrical
repulsion had to be concentrated in a region
of 10-13cm whereas the atom was known to
measure 10-8cm."
He went on to say that this meant most of the atom was empty space with a small dense core. Rutherford pondered for much time before announcing in 1911 that he had made a new atomic model - this one with a condensed core (which he named the "nucleus") and electrons orbiting this core. As stated earlier, this new atomic model was not opposed but originally ignored by most of the scientific community.
Rutherford's experiment shows how scientists must never simply accept the current theories and models; rather, those theories must constantly be put to new tests and experiments. Rutherford was truly one of the most successful scientists of his time, and yet his most renowned experiment was done expecting no profound results. Currently, chemists are still realizing the uses for atomic energy thanks to early findings from scientists such as Rutherford.
f:\12000 essays\sciences (985)\Chemistry\Asimov on Chemistry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Asimov on Chemistry
The book Asimov on Chemistry by Isaac Asimov is a collection of seventeen essays that he wrote for The Magazine of Fantasy and Science Fiction. This book is one of ten that were published by Doubleday & Company, Inc. Not all of the books centered on chemistry and similar sciences; most just covered anything Isaac Asimov wondered about. These essays date back quite a ways, ranging from January 1959 to April 1966.
INORGANIC CHEMISTRY
The Weighting Game
This I found to be the most boring essay in the whole book. It covers chemical atomic weight and physical atomic weight. It also gives chemical methods that determine the atomic weight.
Slow burn
This is a description of how Isaac Newton contributed to the field of chemistry, along with what civilizations thought of chemistry. Then he talks about a pathologically shy, absentminded, stuffy, women-hating chemist. This man did make some discoveries about inflammable gas and proved water to be an oxide.
The Element of Perfection
Asimov talks about astronomers in the mid-1800s, and how they made the spectroscope. Only then does he start to mention an element a French chemist believed to be new, or maybe just a heavier form of nitrogen. Inert gases and their liquefaction points are then listed, along with when they were first liquefied by a chemist.
Welcome, Stranger!
This talks about the rarest of the stable inert gases, xenon. It also tells why in 1962 so many experiments were done involving this gas. First it defines the word gas, and talks about different types in about four pages. Then he talks about how it is combined with fluorine to form a poison.
Death in the Laboratory
Here Asimov talks about how scientists have died due to poor lab conditions and other matters. He also tells you a few ways to poison yourself in a lab, such as mixing xenon and fluorine. He then goes off and explains how fluorine was used and discovered, along with who died in this process. A few other poisonous chemical compounds are also mentioned.
To Tell a Chemist
This is Isaac Asimov's way of telling if someone is a chemist or not. The two questions are: (1) How do you pronounce UNIONIZED? and (2) What is a mole? He feels that if you can say un-EYE-on-ized and talk for hours about molecular weight to define a mole, then you must be a chemist.
NUCLEAR CHEMISTRY
The Evens Have It
Discussed here is how isotopes are impractical and how to identify them. He then describes how an isotope is constructed. Also, he says an element with an even atomic number is without stable or semi-stable isotopes, except for nine elements. Thus the Earth is of the even/even form, having isotopes with an even number of neutrons.
ORGANIC CHEMISTRY
You, Too, Can Speak Gaelic
Here you are given basic instrucions on how to pronouce seventeen
sylable words. His example is para-dimethylaminobenzaldehyde (PA-ruh-dy-METH-
il-a-MEE-noh-ben-ZAL-duh-hide). He then tells the origin and evolution of the
different words for methyl and ethyl alcohol along with there atomic
structure.
BIOCHEMISTRY
The Haste-makers
Asimov talks about catalysts and the origins. He tells how a catalyst
works and what causes it to. Also, he proves that a catalyst is in no way
magical after having a lecture about this from his editor. In the end catalyst
are made of enzymes that cause life.
Life's Bottleneck
This deals with how man is dumping phosphorus into the ocean through plumbing. This is upsetting the balance between the ocean and the sea floor, causing phosphorus to stay at the bottom of the ocean instead of circulating. Also, sewage dumping is a major pollutant for the ocean.
The Egg and Wee
Asimov talks about and contemplates how all the makings of a life can be placed in an egg, which is so small. He ends by talking about the volume and number of atoms certain viruses contain, along with who died studying them.
That's Life
Explanations of life are given here. Asimov talks about people's theories and definitions of life over the years. People tried to say what was living and what was dead with definitions that were their own counterexamples. A lengthy definition is presented at the end with no detectable loopholes.
Not as We Know It
Covered here are different possible chemical backgrounds for sustaining life. Water is the background we live on. The closest alternative he compares is ammonia, due to the similarities it shares with water. Asimov, a detailed science fiction author, also mentions such alternatives as vaporous, metal, energy, and mental beings. These would live in space, in energy, in stars, or in hyperspace.
GEOCHEMISTRY
Recipe for a Planet
This was written when the united states and the former Soviet Union
were attempting to drill into the center of the Earth. This project, Mohole
,has long since been abandoned. Ideas are presented about possible center of
the Earth such as iron or olivine (a magnesium iron silicate). The percentages
by wieght are given for the different substance that make up the Earth in
these people's theories. At the end is a recipe to construct a planet as
somewhat of a joke.
No More Ice Ages?
This deals with the fact that by having coal- and oil-burning power plants, we are giving off too much carbon dioxide. That may cause another ice age or a worldwide tropical climate. It also deals with how nuclear waste from power plants is being disposed of.
GENERAL
The Noblemen of Science
Isaac Asimov decided to write this essay after he was called by a reporter wanting to know who three Frenchmen were that had just won the Nobel Prize. Since he didn't know, he decided to make a list of all the people who had won the Nobel Prize in the fields of physics, chemistry, and medicine. He also supplied the country in which each scientist received his or her undergraduate training.
The Isaac Winners
There are people that Isaac Asimov feels are the best in the field of science, so he made the Isaac Winners. This award is named after Isaac Newton (besides, who else could it be?). Asimov made a list of seventy-two people he feels are the contenders. Then, after listing and giving a brief description of the candidates, he gives you a list of how many of them thought in each language. Most thought in English, while the fewest thought in Russian. Then he lists his top ten in alphabetical order, giving their religion and nationality.
f:\12000 essays\sciences (985)\Chemistry\Asymmetric Epoxidation of Dihydronaphthalene with a Synthesiz.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Asymmetric Epoxidation of Dihydronaphthalene with a Synthesized Jacobsen's Catalyst
Justin Lindsey
12/08/96
Chem 250 GG
Professor Tim Hoyt
TA: Andrea Egans
Abstract. 1,2 diaminocyclohexane was reacted with L-(+)-tartaric acid to yield (R,R)-1,2-diaminocyclohexane mono-(+)-tartrate salt. The tartrate salt was then reacted with potassium carbonate and 3,5-di-tert-butylsalicylaldehyde to yield (R,R)-N,N'-Bis(3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediamine, which was then reacted with Mn(OAc)2*4H2O and LiCl to form Jacobsen's catalyst. The synthesized Jacobsen's catalyst was used to catalyze the epoxidation of dihydronaphthalene. The products of this reaction were isolated, and it was found that the product yielded 1,2-epoxydihydronaphthalene as well as naphthalene.
Introduction
In 1990, Professor E.N. Jacobsen reported that chiral manganese complexes had the ability to catalyze the asymmetric epoxidation of unfunctionalized alkenes, providing enantiomeric excesses that regularly reach 90% and sometimes exceed 98%. The chiral manganese complex Jacobsen utilized was [(R,R)-N,N'-Bis(3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediaminato-(2-)]-manganese (III) chloride (Jacobsen's Catalyst).
(R,R) Jacobsen's Catalyst
Jacobsen's catalyst opens up short pathways to enantiomerically pure pharmacological and industrial products via the synthetically versatile epoxy function .
In this paper, a synthesis of Jacobsen's catalyst is performed (Scheme 1). The synthesized catalyst is then reacted with an unfunctionalized alkene (dihydronaphthalene) to form an epoxide that is highly enantiomerically enriched, as well as an oxidized byproduct.
Jacobsen's work is important because it presents both a reagent and a method to selectively guide an enantiomeric catalytic reaction of industrial and pharmacological importance. Very few reagents, let alone methods, are known to be able to perform such a function, which indicates the truly groundbreaking importance of Jacobsen's work.
Experimental Section
General Protocol. 99% L-(+)- Tartaric Acid, ethanol, dihydronaphthalene and glacial acetic acid were obtained from the Aldrich Chemical Company. 1,2 diaminocyclohexane (98% mix of cis/trans isomers) and heptane were obtained from the Acros Chemical Company. Dichloromethane and potassium carbonate were obtained from the EM Science division of EM Industries, Inc. Manganese acetate was obtained from the Matheson, Coleman and Bell Manufacturing Chemists. Lithium chloride was obtained form the JT Baker Chemical Co. Refluxes were carried out using a 100 V heating mantle (Glas-Col Apparatus Co. 100 mL, 90 V) and 130 V Variac (General Radio Company). Vacuum filtrations were performed using a Cole Parmer Instrument Co. Model 7049-00 aspirator pump with a Büchner funnel. For Thin Layer Chromatography (TLC) analysis, precoated Kodak chromatogram sheets (silica gel 13181 with fluorescent indicator) were used in an ethyl acetate/hexane (1:4) eluent. TLC's were visualized using a UVP Inc. Model UVG-11 Mineralight Lamp (Short-wave UV-254 nm, 15 V, 60 Hz, 0.16 A). Masses were taken on a Mettler AE 100. Rotary evaporations were performed on a Büchi Rotovapor-R. Melting points were determined using a Mel-Temp (Laboratory Devices, USA) equipped with a Fluke 51 digital thermometer (John Fluke Manufacturing Company, Inc.). Optical rotations ([a]D) were measured on a Dr. Steeg and Renter 6mbH, Engel/VTG 10 polarimeter. Solid IR's were run on a Bio-Rad (DigiLab Division) Model FTS-7 (KBr:Sample 10:1, Res. 8 cm-1, 16 scans standard method, 500cm-1 - 4000cm-1). Flash Chromatography was carried out in a 20 mm column with an eluant of ethyl acetate (25%) in hexane.
(R,R)-1,2 Diaminocyclohexane mono-(+)-tartrate salt. 99% L-(+)-Tartaric Acid (7.53g, 0.051mol) was added in one portion to a 150 mL beaker equipped with distilled H2O (25 mL) magnetic stir bar, and thermometer. Once the temperature had dropped to 17.8 °C, 1,2 diaminocyclohexane (11.89 g, 12.5 mL, 0.104 mol) was added with stirring in one portion. To the resultant amber solution was added glacial acetic acid (5.0 mL, 0.057 mol). The frothy orange product was cooled in an ice water bath for 30 minutes. The product was washed with 5 °C distilled H2O (5.0 mL) and ambient temperature methanol (5.0 mL) and isolated by vacuum filtration. 8.37 grams of an orange slush were obtained. The product was further purified by recrystallization of the salt from H2O (1:10 w/v, 84 mL of H2O) and again isolated by vacuum filtration, yielding an off-white crystalline product (1.2015g; 0.00415 mol; 8.9 % yield; mp=270.4-273.8 °C Lit. Value mp=273 °C )
(R,R)-N,N'-Bis (3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediamine. Distilled H2O (6.0 mL), (R,R)-1,2 diaminocyclohexane mono-(+)-tartrate salt (1.1087 g, 0.0042 mol) and K2CO3 granules (1.16 g, 0.0084 mol) were added to a 100 mL RB flask equipped with a magnetic stir bar. The mixture was stirred until complete dissolution occurred, and then ethanol (22 mL, 0.376 mol) was added. The solution was then brought to reflux, and then a solution of 3,5-di-tert-butylsalicylaldehyde (2.0g, 0.0037 mol) dissolved in ethanol (10 mL, 0.1713 mol) was added with a Pasteur pipette. The solution refluxed for 45 minutes. H2O (6.0 mL) was added to the yellow solution, and the mixture cooled in an ice bath for 30 minutes. The resultant yellow solid was collected by vacuum filtration and washed with ethanol (5 mL, 0.856 mol). The yellow solid was dissolved in CH2Cl2 (25 mL, 0.4 mol) and washed with H2O (2 x 5.0 mL) and saturated aqueous NaCl (5.0 mL) The organic layer was dried (Na2SO4) and then decanted into an RB flask. Methanol was removed in vacuo, yielding a yellow crystalline powder (1.56g; 0.00285 mol; 77 % yield; mp=202.9-205.4 °C, Lit. Value mp=205-207 °CIII; IR (KBr) 2800, 2100, 1631.7, 1506.7, 1173.5, 828, 545cm-1; [a]D20 =-314°, Lit. Value [a]D20 =-315°III)
[(R,R)-N,N'-Bis (3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediaminato (2-)]-manganese (III) Chloride. Absolute ethanol (25 mL, 0.429 mol) was added to (R,R)-N,N'-bis (3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediamine (1.01 g, 0.001788 mol) in a 50 mL RB equipped with a magnetic stirrer, mantle, claisen adapter and reflux condenser. The pale yellow mixture was brought to reflux, and MnO(OAc)2*4H2O (2.0 equivalents, 0.881 g, 0.0036 mol) was added. The orange mixture refluxed for 30 minutes, and then the reaction flask was equipped with a glass bleed tube allowing air to bubble through at a slow rate. The progress of the reaction was monitored by TLC until the starting material ((R,R)-N,N'-bis (3,5-di-tert-butylsalicylidene)-1,2 cyclohexane diamine)) faded from the TLC readings (Rf=0). At this point, the air was discontinued and granular LiCl (3 equivalents, 0.24 g, 0.0054 mol) was added to the caramel brown mixture. The mixture was refluxed for an additional 39 minutes, and the ethanol was removed in vacuo. The brown solid was redissolved in CH2Cl2 (25 mL), washed with H2O (2x 10 mL) and saturated aqueous NaCl (15 mL). The organic phase was dried (Na2SO4) and redissolved in heptane (30 mL, 0.205 mol). The CH2Cl2 was removed in vacuo, and the brown slurry was cooled in an ice bath for 57 minutes. The brown solid (0.22g; 0.000354 mol; 19% yield; mp=331.4 -333.6 °C, Lit. Value mp=324-326 °CIII) was collected by vacuum filtration, and left to air dry for 1 week.
1,2-Epoxydihydronaphthalene. 0.05 M Na2HPO4 (5 mL, 0.037g 2.5*10-4 mol) was added to household Clorox bleach (12.5 mL), and the resultant clear liquid was adjusted to pH 11.3 by adding 1M NaOH (1 drop). [(R,R)-N,N'-Bis(3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediaminato (2-)]-manganese (III) chloride (0.2 g, 0.00031 mol) was added to a solution of 4-phenylpyridine N-oxide (0.13 g, 0.00076 mol) and dihydronaphthalene (0.51 g, 0.0038 mol) in CH2Cl2 (5 mL, 0.076 mol). The brown liquid was stirred vigorously for 2 hours. The progress of the reaction was monitored by TLC until the starting material (dihydronaphthalene, Rf=) faded from the TLC readings (Rf=0). Once the starting material was gone, the stir bar was removed and dichloromethane (50 mL, 0.76 mol) was added. The brown organic phase was separated, washed twice (NaCl aq) and dried (Na2SO4). The brown organic layer was then isolated by vacuum filtration, and then the dichloromethane was removed in vaccuo. The dark brown, oily solid (0.4 g; 0.0027 mol; 71% yield; IR (NEAT) 2964.0, 2857.1, 1747.0, 1373.0, 1239.0, 1048.7cm-1; GC Retention Times (minutes) 3.75 (70%, naphthalene), 6.75 (29%, 1,2-epoxydihydronaphthalene)) was stored for one week and then purified by flash chromatography.
Results and Discussion
Synthesis of (R,R) Jacobsen's Catalyst (Scheme 1). The first step in the synthesis of Jacobsen's catalyst was the selective crystallization of one of three stereoisomers present in 1,2-diaminocyclohexane. The yield from this reaction was 8.9% (Appendix 1). The reaction produced 1.2015 g of an off-white crystal (Product 1) with a melting point of 270.4-273.8 °C, which was identified as (R,R)-1,2-diaminocyclohexane mono-(+)-tartrate salt (Table 1).
Table 1. Selected Data Utilized in Identification of Product 1
Compound: Product 1 | (R,R)-1,2-diaminocyclohexane mono-(+)-tartrate saltIII
Physical Description: Off-white crystals | Off-white to beige crystalline solid
Melting Point (°C): 270.4-273.8 | 273
The percent yield was so low (8.9%) largely because of experimental error. An unknown amount of Product 1 was lost because it was not retrievable from the reaction flask, and a further unspecified amount was lost when a portion of the product recrystallized on the filter paper during a vacuum filtration. This recrystallization occurred because the funnel and filter flask were not heated properly. The second step of the Jacobsen synthesis involved the reaction of the isolated diamine salt (Product 1, (R,R)-1,2-diaminocyclohexane mono-(+)-tartrate salt) with an aldehyde (3,5-di-tert-butylsalicylaldehyde) to produce the organic backbone of the catalyst. The percent yield from this reaction was 77%. This reaction produced 1.56 g of an oily, yellow powder (Product 2) with a melting point of 202.9-205.4 °C and an optical rotation ([a]D20) of -314° that was identified as (R,R)-N,N'-Bis (3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediamine (Table 2).
Table 2. Selected Data Used in Identification of Product 2
Compound: Product 2 | (R,R)-N,N'-Bis (3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediamineIII
Physical Description: Oily, yellow powder | Yellow powder
Melting Point (°C): 202.9-205.4 | 205-207
[a]D20: -314° | -315°
Product was lost during transfers between containers and in the separatory funnel when the reaction material was washed. It is also possible that product was lost because the reaction was not allowed to reflux to completion and was cut short by fifteen minutes. The fourth and final step of the Jacobsen catalyst synthesis involved the insertion of the oxidizing metal (in the form of Mn(OAc)2*4 H2O followed by 2 equivalents of LiCl) into the organic backbone (Product 2) of the catalyst. The percent yield for this reaction was 19%. The reaction produced 0.22 g of a brown, oily solid (Product 3) with a melting point of 331-333.6 °C that was identified as Jacobsen's catalyst; [(R,R)-N,N'-Bis (3,5-di-tert-butylsalicylidene)-1,2-cyclohexanediaminato (2-)]-manganese (III) Chloride (Table 3).
Table 3. Selected Data Used in Identification of Product 3
Compound: Product 3 | Jacobsen's Catalyst
Physical Description: Brown, oily solid | Brown solid
Melting Point (°C): 331.4-333.6 | 324-326
Again, product was lost because the reflux was cut short and not allowed to run to completion. Additional product was either lost or left unreacted when the air bleed tube was inserted, causing some of the material to splash out of the reaction flask. These experimental errors may very well have led to a high amount of impurities in Product 3, which would account for the difference between the experimental melting point and the literature value. The net percent yield for the synthesis of Jacobsen's catalyst was 1.9% (Appendix 1).
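For readers following the arithmetic behind these figures, a rough sketch of the percent-yield calculation is given below; the masses, molar masses, mole quantities, and step yields in it are illustrative placeholders, not values taken from Appendix 1 or from this report. The yield of a step is the moles of product isolated divided by the theoretical moles expected from the limiting reagent, and the net yield of a linear multistep synthesis is the product of the fractional step yields.

# Hedged sketch of percent-yield arithmetic (illustrative numbers only).

def percent_yield(product_mass_g, product_molar_mass, limiting_moles, product_per_limiting=1.0):
    """Percent yield for one step, assuming the stated reagent is limiting."""
    moles_product = product_mass_g / product_molar_mass
    theoretical_moles = limiting_moles * product_per_limiting
    return 100.0 * moles_product / theoretical_moles

def net_yield(step_yields_percent):
    """Overall yield of sequential steps, each given in percent."""
    overall = 1.0
    for y in step_yields_percent:
        overall *= y / 100.0
    return 100.0 * overall

# Hypothetical example: 1.50 g of a product (molar mass 250 g/mol) from 0.012 mol
# of limiting reagent, and a three-step sequence with 50%, 80%, and 40% step yields.
print(percent_yield(product_mass_g=1.50, product_molar_mass=250.0, limiting_moles=0.012))  # 50.0
print(net_yield([50.0, 80.0, 40.0]))  # 16.0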
Asymmetric Epoxidation of Dihydronaphthalene. The synthesized Jacobsen's catalyst (Product 3) was used to run an enantiomerically guided epoxidation of an unfunctionalized alkene (dihydronaphthalene). The percent yield for this reaction was 71%. The reaction yielded 0.4 g of a dark brown, oily solid (Product 4) that was purified by flash chromatography and analyzed by GC/MS and IR (NEAT) (Figure 1, Table 4).
Table 4. Selected IR Data for Identification of Epoxidation of Dihydronaphthalene Products
Compound: Product 4 (Fig.) | 1,2-epoxydihydronaphthalene | Naphthalene
Prominent IR Peaks (cm-1), Product 4: 2964.0 (C-H, alkane); 1747.0 (C=C, alkene); 1239.0 (C-O, ether); 1048.7 (C=C-H, alkene)
Prominent IR Peaks (cm-1), 1,2-epoxydihydronaphthalene: 2970-2850 (C-H, alkane); 1750-1620 (C=C, alkene); 1300-1000 (C-O, ether); 1050-675 (C=C-H, alkene)
Prominent IR Peaks (cm-1), Naphthalene: 2970-2850 (C-H, alkane); 1750-1620 (C=C, alkene); 1050-675 (C=C-H, alkene)
GC Retention Times (min.) and Corresponding Mass Spec (m/z), Product 4: 3.75 min. (128); 6.75 min. (146)
Structure, Physical Properties: 1,2-epoxydihydronaphthalene, Mass=146.18; Naphthalene, Mass=128.17
Product 4 displays properties of both 1,2-epoxydihydronaphthalene and naphthalene. The peaks seen in the IR (NEAT) of Product 4 at 2964.0, 1747, 1239, and 1048.7 cm-1 (FIG 1) could be interpreted to represent the presence of just 1,2-epoxydihydronaphthalene. The GC that was run on Product 4, however, indicated that naphthalene was also present (FIG 2-4). This leads to the conclusion that the final product of this Jacobsen-catalyzed epoxidation was a mixture of 1,2-epoxydihydronaphthalene (30%) and naphthalene (70%) (FIG 2-3, Scheme 2). The presence of an oxidized product (naphthalene) indicates that the solution in which the reaction took place was probably too basic. Such a situation could be corrected by either adding less Clorox or by adding NaOH that is less concentrated than 1 M. It is also possible that not all of the epoxidized product was isolated, and that much of it remained stuck in the silica gel of the flash chromatography column. To remedy this situation, a solvent more polar than the 25% ethyl acetate in hexane used in this experiment should be employed for the flash chromatography.
Conclusion
The synthesized Jacobsen's catalyst did not guide this enantiomeric epoxidation as was hoped; however, both the reagent and the mechanism showed that it is possible to produce a significant amount of an enantiomerically enriched epoxide. The problem with the reaction described above was not the reagent or the mechanism of the reaction; it was the conditions under which the reaction was carried out. In order for the Jacobsen-catalyzed epoxidation to produce highly enantiomerically enriched epoxides as was hoped, more care must be taken in the transferring and washing of products, and reactions must be allowed to run to completion. If this is successfully done, then the impurities that were present in the final product will be effectively minimized, and the results that were obtained by Dr. E.N. Jacobsen may be repeated.
Literature Cited
f:\12000 essays\sciences (985)\Chemistry\Atcp.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acetontricykloperoxid
Acetontricykloperoxid ATCP
Ketonperoxider är cykliska, organiska ämnen som med lätthet
sönderfaller med hög hastighet under bildandet av en stor mängd gaser.
Acetontricykloperoxid är den bästa och med lätthet den enklaste att
tillverka (Andra peroxiderbara ketoner innefattar cyklopentanon och
cyklohexanon). Med acetontricykloperoxid avses den trimära formen av
acetonperoxid, C9H18O6. Det finns även en dimär form; acetonDIcykloperoxid,
som efter hand bildas vid lagring av ämnet i ett lösningsmedel.
Det finns många uppfinningsrika namn på denna peroxid;
Peroxyaceton, Acetonperoxid, Acetontricykloperoxid, Triacetontricyklo-
peroxid, Triperoxyaceton, Acetontriperoxid, Acetonit, Cyklokarbon B osv...
Inget av dessa namn uppfyller några av IUPACs nomenklaturregler, jag har
dock valt acetontricykloperoxid som trivialnamn, med förkortningen ATCP.
Det kemiskt korrekta namnet blir enligt "The Chemistry Of Peroxides"
följande: 1,2,4,5,7,8-hexaoxa-3,3,6,6,9,9-hexametylnonan, vilket faktiskt
är det längsta tänkbara namnet, men samtidigt mycket beskrivande. Namnet
Cyklokarbon B kan tyda på att det har använts militärt/industriellt någon
gång i historien, jag har dock inte kommit fram till något angående denna
användning. Detonationshastigheten ligger på 3750 m/s vid 0.92 g/cm2 och på
5300 m/s vid 1.18 g/cm2. Enligt Tim Lewis skall ATCP vara mycket flyktigt,
och därigenom skall hälften av en friliggande mängd ha avdunstat på tio
dagar. Vi har själva inte märkt av denna effekt, så något större problem är
det inte, men du kan ju alltid förvara ämnet i en tättslutande burk för att
vara på den säkra sidan.
Den grundläggande reaktionen vid tillverkning av ketonperoxider är en keton
som behandlas med väteperoxid i en sur lösning. Kravet på surheten är inte
så viktigt, bara det faktum att, ju surare lösning, ju snabbare går
reaktionen. Vid pH 7 händer reaktionen omärkligt långsamt, ingenting händer
på upp till några månader (med undantag av förvaring i överskott av
aceton, då kan några procent kristalliseras i acetonen. Detta kan säkert
skyllas på vattens autohydrolys). Däremot, vid pH < 2-3, går reaktionen så
pass fort att lösningen börjar koka av den vid reaktionen utvecklade
värmen, och en sats på några deciliter är klar på några minuter om man
lyckas kyla lösningen under dess kokpunkt. Man kan följaktligen katalysera
reaktionen med många olika syror, men saltsyra och svavelsyra är de
huvudsakliga vi använder, en mycket lättåtkomlig (HCl) och en mycket
effektiv (H2SO4) (konc. HCl är vätejonfattigare än konc. H2SO4). 25%-ig
hushållsättiksyra exempelvis, är för svag för att vara praktiskt användbar.
Acetontricykloperoxids smältpunkt ligger på 97°C. Vid temperaturer under
-162°C är denna peroxid inaktiv, och kan utan risk studeras. En forskare
(Groth) som studerat detta ämne, fann att ämnet kristalliserar i monoklin
kristallstruktur. Acetontricykloperoxid är praktiskt taget olöslig i
vatten. Vid temperaturer uppemot 100°C, löser sig dock peroxiden dåligt
till bra. ATCP är lättlösligt i aceton. Om man löser tillräckligt med ATCP
i aceton och låter densamma avdunsta kan man lätt framställa ovan nämnda
kristaller, vilka kan vara tämligen snygga, men de är dock inte kraftigare.
Densiteten ökar dock, och vill man ha en kompaktare laddning är
kristallisering att föredra framför packande av ämnet, pga dess
stötkänslighet.
En liter 100% aceton blandad med en liter 35% väteperoxid ger efter
syrabehandling runt ett halvt kilo färdig acetontricykloperoxid.
Ämnet är stötkänsligt, så var försiktig med det. Skrapar man några korn
mellan ett par sandpapper, tar ämnet eld. Slår man på det med en hammare
exploderar det. Mal man ämnet (ganska löst) i en mortel inträffar en serie
småexplosioner. Slänger man en plastburk innehållande ATCP i marken händer
inget. Gnuggar man ATCP mellan handflatorna händer inget. Jag hoppas detta
ger dig en bild av dess stötkänslighet. Enligt den tio cm tjocka boken
"Lange's Handbook of Chemistry", så ska man helst inte blanda väteperoxid
med aceton, speciellt inte i sur miljö, men det står INTE varför... :-)
Icke utjämnad reaktionsformel vid framställning av acetontricykloperoxid:
-----------
H C CH
3 \ / 3
C
H C Syra / \
3 \ H+ O O
C=O + H O ====> / \ + H O
/ 2 2 O O 2
H C / \
3 H C-C---O---O---C-CH
3 \ / 3
CH CH
3 3
Aceton Väteperoxid Acetontricykloperoxid Vatten
Praktiskt tillvägagångssätt vid tillverkning av Acetontricykloperoxid:
--------
Du behöver:
-----------
Aceton
Väteperoxid
Saltsyra/svavelsyra
Kaffefilter (gärna flergångs- av plast)
Termometer
Bägare (Glas)
Gör så här:
-----------
1. Häll 4 delar väteperoxid i en glas- eller plastbägare.
2. Tillsätt 3 delar aceton. Rör om ordentligt och kontrollera att
temperaturen inte överstiger 45°C. Det hela kan för säkerhets skull
(rekommenderas) nedsänkas i ett vattenbad, vilket underlättar kylning.
Var beredd på att du kan vara tvungen att byta vatten i vattenbadet,
eftersom det hela tiden värms upp. Blir det för varmt kokar det över,
vilket inte är trevligt, därför att gaserna svider ganska mycket i
ögonen och dessutom försvinner hela din kemikalieomgång.
3. Tillsätt 0.1-1 del konc. syra droppvis. Syran fungerar som katalysator,
dvs den snabbar upp reaktionen. Om temperaturen skulle stiga över 45°C,
låt temperaturen sjunka innan du tillsätter mer syra. Annars avdunstar
acetonen (kokpunkt 56°C). Rör hela tiden om ordentligt. (Det fräser till
när konc. svavelsyra droppas i, det är ingen fara, bara du rör om
ordentligt hela tiden.) Du kan med fördel ha även denna reaktion i
kylande vattenbad, då reaktionen är mycket exoterm.
4. Efter en tid, varierande med val av syra och dess koncentration och
mängd, framträder den i vatten olösliga acetontricykloperoxiden.
Låt detta stå ett tag. Som du ser så bildas det hela tiden mer vita
kristaller.
5. Filtrera blandningen genom att hälla över blandningen i ett flergångs
kaffefilter, och spola igenom med mycket vatten. På detta sätt så
avlägsnar du kvarblivna syrarester som alltid finns kvar bland
acetontricykloperoxidkristallerna (phew!). Syra som annars skulle vara
kvar gör ämnet mer instabilt, därför att det kan angripa metaller och
därigenom värmas upp. Det är också behagligare att handskas med syrafria
ämnen. De är då varken frätande eller kraftigt luktande. Kontrollera
gärna med en pH-mätare att dina kristaller håller omkring pH 7.
6. Lyft upp filtret och pressa ur så mycket vätska du kan. Ju mer vätska du
pressar ut, ju snabbare går sedan torkningsprocessen.
7. Låt filtratet torka på några lager tidningsspapper/hushållspapper eller
annat liknande material, så sugs överskottsvätskan snabbt upp. Låt det
åtminstone stå över natten. För att du ska få ut något av ditt ämne så
ska kristallerna vara helt torra. För att snabba upp torkningsprocessen
så kan du rikta några glödlampor/spotlights mot det torkande ämnet.
Se dock till att lamporna inte på något tänkbart sätt kan falla ned i
pulvret och antända det.
-------------
Lästips om ketonperoxider:
The Chemistry Of Peroxides, Saul Patai 198X, ISBN 0-471-10218-0
Advanced Organic Chemistry, Jerry March 1992, ISBN 0-471-60180-2 (s 1048ff)
-------------
f:\12000 essays\sciences (985)\Chemistry\Atom Book.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
STATEMENT: These were meant to be cut out as a children's book; each paragraph is a new page.
Hey kids! Today I'm going to introduce
you to the world of atoms. Atoms are
little things that neither you nor anybody else has ever seen.
They make up things like trees, cars, paper, even you.
So let's shrink down to size and see what it's like.
We're going into the helium atom today.
An atom is made of little things called protons, neutrons, and
electrons. Protons have a positive charge, neutrons have no charge,
and electrons have a negative charge. Electrons travel around the center of
the atom, which is called the nucleus, kind of like how Earth revolves around the sun.
Protons and neutrons make up the center of the atom.
The atom has an atomic number. Scientists find that number by counting how many
protons are in the nucleus. In this case Helium has two protons.
Scientists find the average atomic mass by counting the protons and neutrons in many, many
atoms of the element, adding up all those counts, and dividing by the number of atoms they
counted. This time helium has an average of 4 protons and neutrons in the center.
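For older readers who want to see the arithmetic, a short sketch like the following shows how that averaging works: the average atomic mass is a weighted average of the isotope masses, weighted by how common each isotope is. The helium isotope masses and abundances used below are approximate illustrative values.

# Weighted-average atomic mass, illustrated with helium's two stable isotopes.
# Masses are in atomic mass units (u); abundances are approximate fractions.
isotopes = [
    (3.016, 0.000002),   # helium-3 (very rare)
    (4.003, 0.999998),   # helium-4 (almost all helium atoms)
]

average_mass = sum(mass * abundance for mass, abundance in isotopes)
print(f"Average atomic mass of helium: {average_mass:.3f} u")  # about 4.003 u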
The electron arrangement is the number of electrons in each ring, or shell.
In this case helium has 1 ring with 2 electrons in it. An atom can have up to
only 7 rings.
An atomic symbol is the letter(s) that stand for the element. For example, He means
helium, but if you wrote it HE it could mean something totally different, so be
careful.
An isotope has fewer or more neutrons than the usual atom of the element,
so the number of neutrons isn't always the same as the number of protons.
Well now that I'm big again I hope you learned everything
you wanted to know about atoms.
f:\12000 essays\sciences (985)\Chemistry\Atomic Structure.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Yazan Fahmawi Sept. 30, 1995
T3 IBS Chemistry
Ms. Redman
Historical Development of
Atomic Structure
The idea behind the "atom" goes back to the Ancient Greek society, where scientists
believed that all matter was made of smaller, more fundamental particles called elements.
They called these particles atoms, meaning "not divisible." Then came the chemists and
physicists of the 16th and 17th centuries who discovered various formulae of various salts
and water, hence discovering the idea of a molecule.
Then, in 1766, a man named John Dalton was born in England. He is known as the father
of atomic theory because he is the one who made it quantitative, meaning he discovered
many masses of various elements and, in relation, discovered the different proportions
which molecules are formed in (i.e. for every water molecule, one atom of oxygen and two atoms of hydrogen are needed). He also discovered the noble, or inert, gases, and their
failure to react with other substances. In 1869 a Russian chemist, best known for his
development of the periodic law of the properties of the chemical elements (which states
that elements show a regular pattern ("periodicity") when they are arranged according to
their atomic masses), published his first attempt to classify the known elements. His name
was Mendeleyev, and he was a renowned teacher. Because no good textbook in chemistry was
available at the time, he wrote the two-volume Principles of Chemistry (1868-1870), which
later became a classic. During the writing of this book, Mendeleyev tried to classify the
elements according to their chemical properties. In 1871 he published an improved version
of the periodic table, in which he left gaps for elements that were not yet known. His
chart and theories gained acceptance by the scientific world when three elements he
"predicted"-gallium, germanium, and scandium-were subsequently discovered In 1856 another
important figure in atomic theory was born: Sir Joseph John Thomson. In 1906, after
teaching at the University of Cambridge and Trinity University in England, he won the
Nobel Prize in physics for his work on the conduction of electricity through gases. He
discovered what an electron is using cathode rays. An electron is the smallest particle in
an atom, whose mass is negligible compared to the rest of the atom, and whose charge is
negative. Though scientists did not know it at the time, electrons were located in an
electron cloud rotating around the nucleus, or center of the atom.
Another prominent figure in nuclear physics is a man called Ernest Rutherford, born in
1871. He also was a professor at the University of Cambridge, the University of Manchester
(both of which are in England), and at McGill College in Montreal, Canada. His importance
comes after the discovery of radioactivity in 1896 by a French scientist named Becquerel.
Rutherford identified the three main components of radioactivity: alpha, beta, and gamma
particles. He also found the alpha particle to be a positively charged helium atom. Also,
Rutherford was the first one to discover the true structure of an atom, it having a
central, heavy nucleus with an electron cloud surrounding it. It was Rutherford that,
through experiments such as passing alpha particles through a thin gold foil and watching
some repel, discovered the second constituent of the atom (also the first component of the
nucleus): the proton. The proton has a relative atomic mass of one and has a positive
charge. Rutherford also went down in history as the first man to artificially cause a
nuclear reaction when, in 1919, he bombarded nitrogen gas with radioactive alpha
particles, which resulted in atoms of an oxygen isotope and protons. A unit of
radioactivity, the rutherford, was named in his honor. A colleague of Rutherford's at
Cambridge University was a man named James Chadwick, who discovered the third fundamental particle that makes up the atom: the neutron. This discovery led immediately to the discovery of nuclear fission and the atom bomb. The neutron has a relative atomic mass of
one, and has no positive or negative charge (i.e. it is neutral). It is found in the
nucleus of atoms, along with the proton. Chadwick was one of the first British
scientists to stress the development of a possible atom bomb. His name was strongly
associated with the British atomic bomb effort, especially during World War II. During the
last two years of W.W.II (1943-1945) Chadwick moved to New Mexico, where he spent much of
his time researching at the Los Alamos Scientific Laboratory, a site chosen by the US
government for nuclear weapon research. The first atomic bomb was developed here with the
help of James Chadwick. Chadwick earned the Nobel Prize for physics in 1935. In the same
era of the development of the atom lived a man, just across the North Sea from these three
learned individuals, in Denmark. Niels Henrik Bohr, born in 1885, was also a considerable
man when it came to nuclear and atomic physics. He moved to Cambridge University in 1911,
working under J. J. Thomson, but soon moved to Manchester to work under Rutherford's
supervision. He won the Nobel Prize in physics in 1922 for his theory on atomic structure
(also known as the Quantum Theory), which was published in papers between 1913 and 1915.
He based his work around Rutherford's conception of the atom. This theory, that suggests
that electrons only emit electromagnetic energy when they jump from one quantum level to
another, contributed tremendously to future developing of theoretical atomic physics. His
work helped lead to the notion that electrons exist in shells and that the electrons in
the outermost shell certify an atom's chemical properties. He later illustrated that
uranium-235 is the singular isotope of uranium that undergoes nuclear fission. The Bohrs
moved to England, and then to the US, where Bohr went to work for the government at Los
Alamos, New Mexico, along with James Chadwick, until the first bomb's detonation in 1945.
He disapproved of complete secrecy around the nuclear bomb, and believed that its consequences would revolutionize the modern world. He wanted some sort of international law to watch
over the use of nuclear devices. In 1945 Bohr returned to the University of Copenhagen in
Denmark, where he began developing peaceful uses for atomic energy, such as power plants
using nuclear resources as opposed to fossil fuels such as coal, oil, and natural gases.
Bohr died in Copenhagen on November 18, 1962. In Austria in 1887 a man by the name of
Erwin Schrödinger was born. He became a physicist best known for his mathematical studies
of the wave mechanics of orbiting electrons. His most famous and important contribution to
the understanding of atomic structure is a meticulous and precise mathematical description
of the standing waves orbiting electrons follow. His theory was published in 1926, and
along with a German physicist's theory of matrix mechanics, their theories became the
basis of quantum mechanics. Schrödinger shared the 1933 Nobel Prize in physics with the
British physicist Paul A. M. Dirac for his contribution to the development of quantum
mechanics. Through the centuries that have passed, minds have been boggled, countless
questions have been answered, and many great minds conceived; however, there is no doubt
that there is still much to discover about the atom, such as sub-atomic, elementary
particles.
A whole new generation of great scientists is still to come, to explore and unlock the
universe's secrets.
f:\12000 essays\sciences (985)\Chemistry\AZT.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The AIDS virus is one of the most deadly and most widespread diseases in the
modern era. The disease was first found in 1981 as doctors around the United States
began to report groups of young, homosexual men developing a rare pneumonia
caused by an organism called Pneumocystis carinii. These patients then went on to
develop many other new and rare complications that had previously been seen only in
patients with severely damaged immune systems. The Center for Disease Control in
the United States named this new epidemic the acquired immunodeficiency syndrome
and defined it by a specific set of symptoms. In 1983, researchers finally identified the
virus that caused AIDS. They named the virus the human immunodeficiency virus, or
HIV. AIDS causes the immune system of the infected patient to become much less
efficient until it stops working altogether.
The first drug that was approved by the American Food and Drug
Administration for use in treating the AIDS virus is called AZT, which stands for azido-thymidine. AZT was released under the brand name of Retrovir and its chemical
name is Zidovudine, or ZDV. The structural name of AZT is 3'-azido-3'-
deoxythymidine. AZT works by inhibiting the process of copying DNA in cells. More
specifically, AZT, inhibits the reverse transcriptase enzyme, which is involved in the
DNA replication process. When DNA is replicating in a cell, there is a specific enzyme
that works along one side of the original DNA strand as the DNA is split into two
strands, copying each individual nucleotide. This enzyme is only able to work in one
direction along the nucleotide string, therefore a different enzyme, or rather a series of
different enzymes is required to work in the opposite direction. Reverse transcriptase
is one of the enzymes that is required to work in the opposite direction. AZT works by
bonding to the reverse transcriptase enzyme, thereby making it unable to bond with
the nucleotide string and making it unable to fulfill its role. This whole process is used
by the HIV virus to replicate itself so that it can continue to infect more cells.
AZT was originally developed over 20 years ago for the treatment of leukemia.
The concept behind this was that the AZT was supposed to terminate the DNA
synthesis in the growing leukemia lymphocytes, thereby stopping the disease. AZT was
rejected at this point because it failed to lengthen the lives of test animals.
The problem with the AZT drug is that it is not perfect. First of all, AZT will
not bond to each and every reverse transcriptase enzyme in the body, and therefore it
cannot shut down HIV production completely. The reason for this is that putting enough AZT in the patient to completely shut down HIV production would probably kill the patient. The second, and most serious, problem with AZT is that it
also goes into normal, healthy cells and will inhibit their reverse transcriptase enzyme
and will therefore inhibit their ability to produce new, healthy cells. However, AZT
does have an ability to specifically target HIV infected cells to a certain degree so that
it does not kill each and every cell it gets into. However, it does kill a high proportion
of the cells that it gets into, thereby giving it a high toxicity level.
The formula for AZT is C10H13N5O4. The molar mass of AZT is 267.24 grams per mole. AZT's melting point is between 106 °C and 112 °C. AZT is soluble in water,
which is important so that it may dissolve into the human blood and be distributed to
the cells. AZT is usually taken in a pill format, but it is absorbed by the skin, which can
make it dangerous for people handling the drug.
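As a check on the figures quoted in the preceding paragraph, a short sketch like the following (assuming the standard molecular formula C10H13N5O4 for zidovudine and rounded standard atomic masses) reproduces the stated molar mass:

# Molar mass of AZT (zidovudine), assuming the formula C10H13N5O4.
# Atomic masses are rounded standard values in g/mol.
atomic_mass = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
azt_formula = {"C": 10, "H": 13, "N": 5, "O": 4}

molar_mass = sum(atomic_mass[element] * count for element, count in azt_formula.items())
print(f"Molar mass of AZT: {molar_mass:.2f} g/mol")  # about 267.2 g/mol, matching the quoted value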
There is quite a bit of controversy about the effectiveness of AZT. Most
experts agree that AZT delays the progression of HIV disease; the drug may also
prolong the disease-free survival period. However, many doctors still disagree with
using AZT as a treatment for AIDS. Peter Duesberg, a professor of molecular biology
at the University of California, Berkeley, says that "In view of this, [the cytotoxicity
level of AZT] there is no rational explanation of how AZT could be beneficial to AIDS
patients, even if HIV were proven to cause AIDS." This comment stems from the fact
that AZT has a very high cytotoxicity level, which means that while it kills the infected
cells, it will also kill perfectly healthy cells. According to Dr. Duesberg, AZT will kill
approximately nine hundred and ninety nine healthy cells for each infected cell that it
kills. Most of this opposition to AZT stems from the fact that the initial testing for the
drug had severe problems associated with it. These initial tests were performed with
two groups of AIDS patients. The volunteering patients were secretly divided into two
groups using a double-blind system, where neither the patients nor the doctors are
aware of who is in the placebo, or control group, and who is in the AZT group. These
tests were performed by the FDA at twelve medical centers throughout the United
States. The study actually became unblinded almost immediately as some patients
discovered a difference in taste between the placebo and AZT caplets and other
patients took the capsules to chemists to have them analyzed. The doctors found out
the differences between AZT patients and the placebo patients by very obvious
differences in blood profiles. An FDA meeting was convened and the decision was
made to keep all of the data, useless or not, and therefore the bad data was thrown in with the
good data and it ended up making all of the data virtually useless. In fact, according to
some sources, AZT ended up shortening the lifespans of many of the patients taking it.
AZT is also thought to be a possible carcinogen, although it has not been around long
enough for any conclusive results to be obtained. After AZT was approved for use,
mortality statistics were taken; they showed a mortality rate of 10% after 17 weeks,
with the original number of patients being 4805. The FDA tests, with their skewed
statistics, showed only a 1% mortality rate. AZT also had some strange side-effects
that were reported with its use, such as raising the IQs of 21 children who took the
drug by 15 points; 5 of the children died.
The newest treatments with AZT are combining AZT with other drugs, such as
ddI. These tests were being performed, once again in the double-blind format, just like
the original FDA tests. Three different groups were tested, ones taking only AZT,
ones taking only ddI and ones taking a combination of both ddI and AZT. The Data
Safety Monitoring Board (DSMB), an organization that monitors all testing in the
United States secretly unblinded the test, as they do with all double-blind tests, and
found that the AZT patients had a much higher mortality rate than those in the straight
ddI and the ddI and AZT tests. The DSMB found the difference in the tests to be high
enough to stop the trials early.
In August of 1994, the FDA approved AZT for use by pregnant, AIDS
infected women. Once again it was conducted in a double-blind method and was
placebo controlled. The therapy was begun 14-34 weeks into the pregnancy. However, in
this testing it was found that in the AZT mothers, the AIDS transmission rate to the
babies was about 8.3% while the placebo group was about 25.5%. Therefore the AZT
was reducing the AIDS transmission by two thirds.
It is still not clear how effective AZT is at stopping or hindering the progress
of the AIDS virus. Most experts today consider AZT to be a valid way to treat AIDS
and HIV infection, but they are constantly experimenting with new combinations of
different drugs such as ddI and AZT to try to better treat AIDS patients. The massive
administrative errors in the initial testing have set the AZT research back and have
fostered unlooked-for antipathy. As the treatments become more sound and more
reliable, AZT will find its place in AIDS treatments.
EndNotes
Lauritsen, John. Poison by Prescription - The AZT Story. New York; Asklepios Publishing, 1990. pg.7.
Lauritsen, John. Poison by Prescription - The AZT Story. New York; Asklepios Publishing, 1990. pg.7.
Lauritsen, John. Poison by Prescription - The AZT Story. New York; Asklepios Publishing, 1990. pg.23.
Lauritsen, John. Poison by Prescription - The AZT Story. New York; Asklepios Publishing, 1990. pg.49.
Whitmore, Arthur. AZT Approved for Preventing Maternal-Fetal HIV Transmission. Internet: http://www.hivpositive.com/f-DrugAdvisories/II- FDA/4.htm. August 8, 1994.
Bibliography
Lauritsen, John. Poison by Prescription - The AZT Story. New York: Asklepios Publishing, 1990.
Pinsky, Laura, Paul Harding Douglas, and Craig Metroka. The Essential HIV Treatment Fact Book. New York: Simon & Schuster Inc., 1992.
Kaiser, Jon D. Immune Power - A Comprehensive Treatment Program for HIV. New York: St. Martin's Press, 1993.
Whitmore, Arthur. AZT Approved For Preventing Maternal-Fetal HIV Transmission. Internet: http://www.hivpositive.com/f-DrugAdvisories/II-FDA/4.htm, August 8, 1994.
Whitmore, Arthur. FDA Grants Accelerated Approval For 3TC With AZT To Treat AIDS. Internet: http://www.hivpositive.com/f-DrugAdvisories/II-FDA/17.htm, November 20, 1995.
Clark, Martina. AZT: Pediatric Study Changed. Internet: http://www.out.org/HIV/AZT_pediatric_study_changed.htm, "W.O.R.L.D. - A Newsletter about Women & HIV" April 22, 1995.
f:\12000 essays\sciences (985)\Chemistry\Biolgical and Chemical Warfare.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chemical and Biological Weapons
Chemical and biological weapons are among the most dangerous threats that our soldiers face today. But just how much do most of us know about them? The American public has been bombarded by stories of how our government keeps secret weapons, runs secret experiments, and maintains everlasting conspiracies. And many accept it all. Rather than simply trusting our government (which is perhaps as foolish as believing several unsubstantiated theories), I've compiled several simple facts regarding recent and historic developments in chemical and biological warfare.
Chemical weapons are defined as chemical substances, whether gas, liquid, or solid, that are used for their directly toxic effects upon humans, animals, or plants. Biological weapons are living organisms, whatever their nature, or the materials derived from them; they are used to cause disease or death in living organisms, and they depend for their effect on their ability to multiply inside the organism they attack. Even though the two kinds of weapons are closely related, chemical weapons are used far more commonly because they are inexpensive to make and use. Chemical weapons are more dangerous to America because of the conflicts we have involved ourselves in. Iraq, for example, has a long and extensive history of using chemical weapons. In the 1980's, Iraq released poisonous gases against Iranian troops. Iraq has even used chemical weapons against its own Kurdish citizens to subdue rebellions. As one of the aftermaths of the Persian Gulf War, however, Iraq agreed to give up all materials and equipment for making chemical and biological weapons. An organization called UNSCOM, the United Nations Special Commission on Iraq, was formed to ensure that Iraq followed through upon its promises. However, when Lt. Gen. Hussein Kamel, Saddam Hussein's son-in-law and the director of Iraq's weapons program, defected, it was found that Iraq had been dishonest in its reports to UNSCOM for four years. Today, nearly everyone has heard at least a passing reference to Gulf War Syndrome. In 1994, a Congressional report examined eyewitness accounts and declassified operation logs and concluded that United States troops had been exposed to chemical and biological weapons eleven times. Yet two other reports concluded the opposite: the DSB and IOM reports found that there was no reliable evidence that American troops were exposed to chemical or biological weaponry. Unfortunately, Iraq is not the only nation using chemical weapons. A former CIA director, William Webster, has revealed that nearly 20 other nations have the chemical industry needed to make chemical weapons; in fact, many of these countries have even stockpiled such weapons for future use. Several nations, including the United States, have conventional arms and nuclear weapons. Numerous Middle Eastern nations feel that since they do not have the same capabilities or funds, they have the right to make and use chemical weapons in order to counter our advanced weaponry. Because so many third-world countries feel the need to make chemical weapons, the chemical weapon is frequently called "the poor man's atomic bomb."
Unlike chemical weapons, biological weapons have not been used in modern-day warfare. But in today's technologically advanced world, genetic engineering is quickly making biological weapons a greater threat. Scientists are using genetics to develop new, deadly diseases that could be used to harm an opposing country. The new bacteria and viruses that scientists already have the ability to develop could be used against hostile countries. Bacteria and viruses could be used to kill crops and damage a country's environment, thereby destroying its food supply, or, even more effectively, to spread deadly diseases among the country's citizens and soldiers. By February 14, 1970, the United States Department of Defense had been ordered to draw up a plan to dispose of all biological agents and toxins. Sweden, the United Kingdom, and Canada all followed the United States' example. However, a number of nations, including the former Soviet Union and its allies, have instead favored comprehensive international agreements on biological and chemical weapons, agreements focused upon biological and chemical weapons control as a whole. Unfortunately, worldwide control of these weapons is impractical; such an agreement would allow for too many loopholes.
All through history, both chemical and biological weapons have been used. Biological weapons were used in 1346, when the Tartars laid siege to the port city of Caffa, now Feodosiya in Ukraine, on the coast of the Black Sea. While the Tartars were besieging the walled city, a deadly and infectious disease broke out among them. As the Tartars began to die, they realized that they were fighting a losing battle. In desperation they took the disease-ridden cadavers and hurled them over the walls into the city, spreading the disease. The citizens of Caffa fled the city in panic and spread the disease to the rest of Europe. More recently, in 1763, the British gave blankets as gifts to the Native American Indians. What they neglected to mention to the Indians was that the blankets came from a smallpox hospital. By deliberately infecting the Indians, the British soon conquered America.
Chemical weapons were used extensively during World War I and, to a lesser extent, during World War II. Germany used them first, against Allied troops during World War I. The use of mustard gas and similar poisonous gases has since been limited by the fear of retaliation. The gases soon led to the use of the gas mask. The gas mask was not always effective, because some of the gases had delayed reactions, while others would penetrate the masks and make the soldiers nauseated; as a soldier removed his mask in order to vomit, the gas would take its full effect. The gases frequently caused severe burns and nausea, ate away at soldiers' nostrils, and at times did fatal damage to the respiratory system. Chemical weapons are not used exclusively for their harmful effects on humans. During the Vietnam War, Agent Orange was used by the Americans to destroy the rainforest, to make sure that enemy troops would be unable to hide in the dense plant growth.
Chemical and biological weapons have been the subject of international debate for over 70 years, and I believe for good reason. Of course, the government should, and does, participate in the conventions and foreign events that relate to these weapons. As citizens, we should be concerned because chemical weapons are so easily accessible to terrorists, and one result of living in such a powerful country is being a prime target for terrorists. Mustard gas, for example, is made with two very commonly used chemical compounds, thiodiglycol and hydrochloric acid. Thiodiglycol is used in textile dyes and in almost all pens. Hydrochloric acid is often used here at school in experiments. While it would be impossible to completely stop the use of chemical and biological weapons everywhere, America can use its position as a world leader to influence other countries by setting an example of peace and strong defense, instead of offense.
f:\12000 essays\sciences (985)\Chemistry\Bunsin Burner.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Bunsen Burner
In class on Monday, we learned how to use a Bunsen burner. We had to determine the hottest and coolest parts of the flame. To do that, we took a wire and moved it up and down in the flame to see which part was the hottest. After that, we took an evaporating dish, put it into the flame, and observed what collected on the dish.
To hook the Bunsen burner up, we had to connect it to a gas jet with the rubber hose attached to the burner. Then we had to make sure that both the needle valve and the rotating barrel were closed, so that no gas or air was going through the burner. To light the burner, we opened the needle valve so the gas could flow through it, then brought a lighted match up alongside the top to light it. Do not let in any air while lighting the burner. When it is lit you will see a yellow flame. Next, hold the evaporating dish in the flame for a few minutes to see what collects on it. After that, turn the barrel until you can no longer see the yellow flame, then put the dish in the hottest part of the flame and see what happens. Finally, shut off the burner by closing the needle valve and the barrel, and then turn off the gas.
In conclusion, the hottest part of the flame was the top of the flame, and the coolest part was the blue cone in the middle. Soot collected on the bottom of the dish the first time; when the dish was put back into the hottest part of the flame, the soot burned off and the bottom of the dish was cleaned. That is how you use a Bunsen burner.
f:\12000 essays\sciences (985)\Chemistry\Calcium Transport in SF9 and Bull Frog Ganglion Cells.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Calcium Transport in SF-9 and Bull Frog Ganglion Cells
Kenny Yu, University of Toronto, Faculty of Pharmacy, 19 Russell Street, Toronto, Ontario M5S 2S2
PHM499 Research Project. Supervisors: Dr. P. S. Pennefather, Dr. S. M. Ross
Calcium transport study of SF-9 lepidopteran cells and bull frog sympathetic ganglion cells
Kenny Yu, Faculty of Pharmacy, University of Toronto, 19 Russell Street, Toronto, Ontario M5S 2S2
ABSTRACT
The intracellular calcium level and the calcium efflux of bull-frog sympathetic ganglion (BSG) cells and SF-9 lepidopteran ovarian cells were investigated using the calcium-sensitive fluorescence probe fura-2. It was found that the intracellular calcium levels were 58.2 and 44.7 nM for the BSG cells and SF-9 cells respectively. The calcium effluxes following zero-calcium solution were 2.02 and 1.33 fmole·cm-2·s-1 for the BSG cells and SF-9 cells. The calcium effluxes following sodium orthovanadate (Na2VO4) in zero-calcium solution were 6.00 and 0.80 fmole·cm-2·s-1 for the BSG cells and the SF-9 cells. The SF-9 cells also lost the ability to extrude intracellular calcium after 2-3 applications of Na2VO4, while the BSG cells showed no apparent loss of calcium-extruding ability for up to 4 applications of Na2VO4.
INTRODUCTION
Spodoptera frugiperda clone 9 (SF-9) cells are a cultured insect cell line derived from moth ovarian tissue. SF-9 cells are used by molecular biologists for studies of gene expression and protein processing (Luckow and Summers, 1988). However, not much is known about these cells' basic biophysiology. Since calcium is involved in many cell activities, such as acting as a second messenger, it is important for cells to control their intracellular calcium level. This study was aimed at some of the basic properties of the SF-9 cells, such as the resting calcium concentration and the rate of calcium extrusion after the calcium level had been raised by the ionophore 4-bromo-A23187. The effect of sodium orthovanadate (an active transport inhibitor) on calcium extrusion was also examined. Microspectrofluorescence techniques and the calcium-sensitive probe fura-2 were used to measure the intracellular calcium concentration of these cells. In addition, BSG cells were used as a comparison with the SF-9 cells for the parameters that were studied. It was found that the SF-9 cells appeared to have a calcium concentration similar to the BSG cells. Moreover, the calcium extrusion rates of the two cell types with no Na2VO4 added seemed to be the same. However, due to insufficient data, the effects of Na2VO4 could not be statistically analyzed. The data available suggested that the BSG cells' rate of calcium extrusion was enhanced by Na2VO4 and was greater than the SF-9 cells'. More importantly, the calcium-extruding capability of the SF-9 cells seemed to be impaired after two to three applications of Na2VO4, whereas Na2VO4 had no apparent effect on the BSG cells even after 4 applications. After obtaining these basic parameters, many questions were raised, such as how do the SF-9 cells extrude their calcium, and why did Na2VO4 affect the calcium efflux of the SF-9 cells but not the BSG cells? The SF-9 cells may have a calcium pump or exchanger to extrude their calcium, and they may be very sensitive to the ATP (adenosine 5'-triphosphate) supply. This is apparently different from the BSG cells, since their calcium extrusion was not affected by Na2VO4.
It may be useful to find the mechanism(s) of the actions of Na2VO4 on the SF-9 cells, because this may have possible applications in agriculture, such as pest control.
MATERIALS AND METHODS
Chemicals and solutions
4-bromo-A23187 and fura-2/AM were purchased from Molecular Probes (Eugene, OR). Na2VO4 was purchased from Alomone Labs (Jerusalem, Israel). Dimethyl sulfoxide (DMSO) was obtained from J. T. Baker Inc. (Phillipsburg, NJ). All other reagents were obtained from Sigma (St. Louis, MO). The normal Ringer's solution (NRS) contained (mM): 125 NaCl, 5.0 KCl, 2.0 CaCl2, 1.0 MgSO4, 10.0 glucose, 10.0 N-[2-hydroxyethyl]piperazine-N'-[2-ethanesulfonic acid] (HEPES). The calcium-free Ringer solution (0CaNRS) was the same as the NRS except that the CaCl2 was replaced with 2.0 mM ethylene glycol-bis(β-aminoethyl) ether N,N,N',N'-tetraacetic acid (EGTA). Fura-2/AM solution was prepared as follows: a stock solution of 1 mM fura-2/AM in DMSO was diluted 1:500 in NRS containing 2% bovine albumin, sonicated for 10 minutes, and then kept frozen until the day of the experiment. The 20 µM 4-bromo-A23187 solution was prepared by diluting a 5 mM stock of 4-bromo-A23187 in DMSO 1:250 with NRS. The Na2VO4 solution (VO4NRS) contained 100 mM Na2VO4 in 0CaNRS. All experiments were performed at room temperature, 22-26 °C. The above solutions were adjusted to pH 7.3 with NaOH.
Cells
BSG cells were obtained as described by Kuffler and Sejnowski (1983). BSG cells were plated and incubated at 3-10 °C for up to 4 days before the experiments. The cells were plated on custom-made 3.5 cm plastic culture dishes. A circular hole about half the diameter of the dish was cut out of the middle and fitted with a piece of aclar. The aclar dishes were then treated with poly-D-lysine for one hour before plating. SF-9 cells (non-transfected) were cultured as described by Summers and Smith (1987). The SF-9 cells were plated and incubated (at 37 °C) on the same type of custom-made dishes as the BSG cells one day prior to the experiments. They were not kept for more than two days, to avoid an overgrowth of cells that might cause difficulties in the experimental measurements. Each dish contained approximately 100 µl of cell suspension. To load the cells with fura-2/AM, 100 µl of the fura-2/AM/BSA solution was added for 30 minutes.
Intracellular calcium measurements
Fura-2 is a fluorescence indicator of calcium that is used to determine the free intracellular calcium concentration. Fura-2/AM was used in the experiments instead of fura-2. Fura-2/AM is an ester derivative of fura-2 which has the advantage of being permeable to the cell membrane (whereas fura-2 itself is not permeable to any great extent); it is subsequently broken down into fura-2 intracellularly by esterases. The apparatus included a fluorescence microscope unit and a spectrofluorometer system. The fluorescence microscope unit consisted of a 75 W xenon arc lamp and a Zeiss inverted microscope with a Zeiss Neofluor 63X objective. In addition, a pipette was placed close to the sample cells (within 5 mm) for perfusion. The pipette delivered the solutions at a rate of 2-3 ml/min and could switch among five different solutions. This allowed rapid switching of solutions and improved the speed of the responses. The PTI Deltascan 4000 microscope system (Photon Technology International Inc., South Brunswick, NJ) was used to make the fluorescence measurements.
Emitted fluorescence signal was detected by a photomultiplier tube (PMT) and recorded via a NEC 286 microcomputer. The software used was the PTI Instrument Control Program from Photon Technology International Inc. (South Brunswick, NJ). The experimental methods of calcium measurement were similar to the one described by Schwartz et al. (1991). In brief, the intracellular free calcium concentration can be determined through the following formula (Grynkiewicz et al. 1985):
[Ca2+]i = Kd · (Fmin/Fmax) · (R - Rmin)/(Rmax - R)
where Kd is the effective dissociation constant for the Ca2+-fura-2 complex, Fmin and Fmax are the fluorescence intensities at λ = 380 nm obtained from a calcium-free fura-2 sample and a calcium-bound fura-2 sample respectively, R is the ratio of fluorescence intensities obtained with excitation at 340 and 380 nm (R = F340/F380), and Rmin and Rmax are the F340/F380 ratios of the calcium-free and calcium-saturated fura-2 samples respectively. One average-sized cell from each dish was randomly selected for the measurement. NRS was initially perfused to wash out the fura-2/AM in the cell suspension. When the intracellular calcium level had stabilized, the perfusion was switched to 4-bromo-A23187 to raise the intracellular calcium concentration. 0CaNRS was used to decrease the calcium concentration once the calcium level exceeded 200 nM. Once the calcium concentration had decreased and stabilized with the 0CaNRS, 4-bromo-A23187 was added again and the whole procedure was repeated two to four times. Then 4-bromo-A23187 was used once again to bring the intracellular calcium concentration up, and VO4NRS was now used instead of 0CaNRS to lower the calcium concentration. This procedure was used for two to four cycles, or until the cell showed no response and was unable to lower the calcium concentration to the previous resting level.
Statistical Analysis
Statistical analysis was performed using The Student Edition of Minitab release 8 (Minitab Inc., 1991).
RESULTS
It was found that the intracellular calcium concentration in the SF-9 cells was 44.7 ± 8.3 nM (mean ± S.E., n = 8) in NRS. The calcium concentration in the BSG cells was found to be 58.2 ± 9.0 nM (n = 4). Student's t test did not indicate a significant difference between the intracellular calcium concentrations of the SF-9 and the BSG cells (P = 0.31). The rates of active transport of calcium out of the cells following 0CaNRS were also calculated. They were determined by performing a linear regression on the linear portion (ranging from 20-50 seconds) of the decline following the maximum calcium concentration. It was found that the rates of calcium depletion (ΔC/Δt) of BSG and SF-9 cells were 3.92 ± 0.81 nM/s (mean ± S.E., n = 10) and 4.12 ± 0.81 nM/s (n = 7) respectively. However, the BSG cells and the SF-9 cells were generally of different sizes: the SF-9 cells (about 15-20 µm in diameter) were usually smaller than the BSG cells (about 25-40 µm in diameter). It is therefore important to take the size of the cells into account in the analysis of the calcium flux. The calcium flux (J) out of the cell can be determined by adjusting the rate of calcium depletion by the volume-to-surface-area ratio of the cell (assuming the cells were spherical in shape). The flux can be found from:
J = -(ΔC/Δt) · (V/S)
where J is the flux, -(ΔC/Δt) is the rate of calcium depletion, and V/S is the volume-to-surface-area ratio of the cell (V/S simplifies to r/3, where r is the radius of the cell).
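The following short Python sketch illustrates the two calculations just described. It is not code from the study; the fura-2 calibration values are illustrative placeholders, while the example depletion rate and radius are taken from the ranges reported here.

    # Illustrative sketch of the fura-2 ratio equation and the spherical-cell flux conversion (Python).
    # The Kd and the fluorescence/ratio values below are placeholders, not values measured in this study.

    def calcium_from_fura2(R, Kd, Fmin, Fmax, Rmin, Rmax):
        # [Ca2+]i = Kd * (Fmin/Fmax) * (R - Rmin) / (Rmax - R), in the same units as Kd
        return Kd * (Fmin / Fmax) * (R - Rmin) / (Rmax - R)

    def efflux_from_depletion(rate_nM_per_s, radius_um):
        # J = (dC/dt) * V/S, with V/S = r/3 for a spherical cell.
        # 1 nM equals 1000 fmole per cm^3, and the radius is converted from micrometres to cm,
        # so the result comes out in fmole per cm^2 per second.
        rate_fmole_per_cm3_s = rate_nM_per_s * 1000.0
        radius_cm = radius_um * 1e-4
        return rate_fmole_per_cm3_s * (radius_cm / 3.0)

    print(calcium_from_fura2(R=1.2, Kd=224.0, Fmin=4.0, Fmax=1.0, Rmin=0.8, Rmax=8.0))  # about 53 nM
    print(efflux_from_depletion(rate_nM_per_s=4.10, radius_um=9.5))                     # about 1.3

With a depletion rate of 4.10 nM/s and a radius of 9.5 µm (a 19 µm SF-9 cell), the conversion gives about 1.3 fmole·cm-2·s-1, in line with the SF-9 entries in Table 1.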
The calculated calcium effluxes of the BSG cells and the SF-9 cells were 2.02 ± 0.44 fmole·cm-2·s-1 (n = 10) and 1.33 ± 0.26 fmole·cm-2·s-1 (n = 7) respectively (Table 1). There was no significant difference between the two efflux values (P = 0.2) as shown by t-test. Similarly, the rates of calcium depletion of the BSG cells and the SF-9 cells following VO4NRS were 9.24 ± 0.22 nM/s (n = 2) and 2.46 ± 0.75 nM/s (n = 3) respectively. The adjusted calcium effluxes of the BSG cells and the SF-9 cells were 6.00 ± 0.14 fmole·cm-2·s-1 (n = 2) and 0.80 ± 0.24 fmole·cm-2·s-1 (n = 3) respectively (Table 2). In addition, it was observed that the SF-9 cells lost the ability to extrude calcium after two to three cycles of VO4NRS applications (Figure 1). On the other hand, the BSG cells did not appear to lose their ability to extrude calcium after up to three to four VO4NRS applications (Figure 2).
Table 1. Rate of calcium depletion of BSG and SF-9 cells after the addition of 0CaNRS
(rates of depletion in nM·s-1; calcium effluxes in fmole·cm-2·s-1)
BSG rate    BSG efflux    SF-9 rate    SF-9 efflux
2.23        1.01          4.67         1.51
0.54        0.24          4.10         1.33
4.36        1.98          3.19         1.03
8.58        3.89          7.74         2.51
5.88        2.67          5.55         1.80
1.28        5.81          2.01         0.65
5.28        2.40          1.56         0.50
7.02        4.55
2.22        1.44
2.27        1.47
Intracellular calcium concentration of a single sample cell was raised using 4-bromo-A23187 and was subsequently lowered by introducing 0CaNRS. These data represent the rates of decline (ΔC/Δt) of the initial linear portion after the maximum calcium concentration.
Table 2. Rate of calcium depletion of BSG and SF-9 cells after the addition of VO4NRS
(rates of depletion in nM·s-1; calcium effluxes in fmole·cm-2·s-1)
BSG rate    BSG efflux    SF-9 rate    SF-9 efflux
9.02        5.85          1.05         0.34
9.47        6.14          3.59         1.16
-           -             2.74         0.89
Similar to Table 1, except that VO4NRS was used instead of 0CaNRS to lower the calcium concentration.
Figure 1. Intracellular calcium concentration of an SF-9 cell
A time-course calcium recording of a single SF-9 cell (19 µm) with successive applications of 4-bromo-A23187, NRS, 0CaNRS and VO4NRS. Note that after 2 applications of VO4NRS, the cell was impaired in its ability to extrude calcium. Abbreviations: A, 4-bromo-A23187; N, NRS; 0, 0CaNRS; V, VO4NRS.
Figure 2. Intracellular calcium concentration of a BSG cell
In contrast to the SF-9 cell in Figure 1, the BSG cell (39 µm) still maintained its ability to extrude (or decrease) calcium after three applications of VO4NRS, even at a high calcium concentration. Abbreviations: same as in Figure 1.
DISCUSSION
At the beginning of the experiments, both transfected and non-transfected SF-9 cells were used, although only the non-transfected SF-9 cells are reported here. It was found that the transfected cells had unusually low calcium concentrations (less than 20 nM; results are not included in this report). However, it was later found that the cells had not been very successfully transfected. The t-test did not show any significant difference between the calcium levels in the BSG cells and the SF-9 cells, which leads to the question of whether the transfecting process might cause certain biophysiological changes in the cells that lead to low intracellular calcium concentrations. Moreover, it was learned during the experiments that it was not necessary to apply 4-bromo-A23187 in every cycle to raise the calcium level. It was only necessary to apply it once, at the beginning of the experiment, to raise the calcium concentration.
NRS was then used to raise the calcium concentration in the subsequent cycles. This is probably due to the high lipophilicity of 4-bromo-A23187, which enables it to partition into the cell membrane and the internal organelles. Hence one application of 4-bromo-A23187 would allow it to partition into and remain in the cell membrane and act as an ionophore without the need for further additions. The effect of the NRS at raising the calcium concentration appeared to be similar to that of 4-bromo-A23187. This technique was more economical and also reduced the effects of DMSO (which was used to dissolve the 4-bromo-A23187) on the cells. A general discussion of ionophores can be found in an article by Pressman (1976). More specific treatments of 4-bromo-A23187, its use with fluorescent probes and its action on calcium, can be found in Deber et al. (1985) and Reed and Lardy (1972). The calcium efflux after VO4NRS for the BSG cells appeared to be greater than the SF-9 cells' (see the Results section), but there were insufficient data to perform a reliable statistical test to support this view. Vanadate is referred to as an active transport inhibitor. It acts as a phosphate substitute for ATP and thus stops or slows ATP production. Without ATP, active transport cannot be carried out. In the case of calcium, the addition of VO4NRS should leave the cells unable to extrude calcium after the application of 4-bromo-A23187. This was indeed what was observed for the SF-9 cell (Figure 1). It was noted that the calcium concentration remained at a high level and became unstable after 2 applications of VO4NRS. This suggests that calcium mobilization in the SF-9 cells is closely linked to ATP production. Without ATP, the SF-9 cells were unable to regulate their intracellular calcium level in a normal manner. However, the BSG cells showed a different response to VO4NRS (Figure 2) compared to the SF-9 cells. After 3 applications of VO4NRS, the BSG cell was still able to extrude calcium, despite the abnormally high calcium concentration after the third VO4NRS application. This result was not anticipated, because the BSG cells had higher calcium effluxes relative to the SF-9 cells, and hence calcium extrusion in the BSG cells should have been more dependent on ATP production. One possible explanation would be that the BSG cells had extra organelles in which to store calcium instead of extruding it. Since the SF-9 cells are commonly used for gene expression, it is important to know the basic biophysiology of these cells. However, much is still unknown about them. Studying these cells in greater detail will improve our understanding of the calcium transport system, and it may help molecular biologists improve the techniques of gene expression using the SF-9 cells.
Acknowledgments
I thank Dr. S. M. Ross for his academic and technical support throughout this study, and for kindly reading this manuscript. Dr. P. S. Pennefather was invaluable in providing excellent advice during this study. I also thank B. Clark for preparing the BSG culture dishes and Dr. D. R. Hampson for his kind gift of SF-9 cells.
References
Deber, C. M.; Tom-Kun, J.; Mack, E.; Grinstein, S. Bromo-A23187: a nonfluorescent calcium ionophore for use with fluorescent probes. Anal. Biochem. 146(2):349-352; 1985.
Grynkiewicz, G.; Poenie, M.; Tsien, R. Y. A new generation of Ca2+ indicators with greatly improved fluorescence properties. J. Biol. Chem. 260:3440-3450; 1985.
Kuffler, S. W.; Sejnowski, T. J.
Peptidergic and muscarinic excitation at amphibian sympathetic synapses. J. Physiol. 341:257-278; 1983. Luckow, V. A.; Summers, M. D. Trends in the development of baculovirus expression vectors. Biotechnology. 6:47-55; 1988. Pressman, B. C. Biological applications of ionophores. Ann. Rev of Biochem. 45:501-530; 1976. Reed, P. W.; Lardy, H. A. A23187: A divalent cation ionophore. J. Biol. Chem. 247:6970-7; 1972. Schwartz, J.-L.; Garneau, L.; Masson, L.; Brousseau, R. Early response of cultured lepidopteran cells to exposure to d-endotoxin from Bacillus thuringiensis: involvement of calcium and anionic channels. Biochem. Biophys. Acta 1065:250-260; 1991. Summers, M. D.; Smith, G. E. A manual of methods for baculovirus vectors and insect cell culture procedures. Texas Agric. Exper. Sta. Bull. no 1555; 1987.
f:\12000 essays\sciences (985)\Chemistry\Cancer The costs the causes and the cures.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cancer is a major killer of people all around the globe. We do not have a definite cure, but research on this one disease costs an average of $1.2 billion annually, and another $20 billion is spent annually on the care of cancer patients.
What is Cancer?
Cancer is a broad ranging term that is used by many people, including medical
professionals such as doctors. Cancer, in its most fatal and aggressive form, belongs to a larger class of diseases known as neoplasms. There are two forms of neoplasm: benign and malignant. A benign neoplasm is encapsulated, or surrounded, so that its growth is restricted, whereas a malignant neoplasm is not closed in. Malignant tumors grow much more quickly than benign forms and spread into the surrounding normal tissue, virtually destroying it (Grolier Electronic Encyclopedia, Cancer).
The question is, what exactly is cancer? Cancer is the breakdown and mutation of the cells of the body, occurring when the DNA (deoxyribonucleic acid) sequences in those molecules are disrupted and errors form in their structures (Grolier, Genetic Code). This mutation spreads through surrounding tissue until it disrupts major systems in the body (such as the respiratory, digestive, and waste-management systems), causing those systems to fail.
What causes Cancer to become active?
Since it is believed that almost all people have some type of cancer in their body (although benign), contact with a carcinogen (any cancer-causing agent) can cause these benign cells to become malignant. It is when the cells become malignant that cancer actually occurs. Cancer, in this context, can be caused by many different agents: chemical, biological, or physical.
Chemical Agents
Chemicals that can cause a benign cell to become active include things such as complex
hydrocarbons, aromatic amines, certain metals, drugs, hormones, and naturally occurring
chemicals in plants and molds. Hydrocarbons and nitrosamines can be found in cigarette smoke and may contribute to the condition called "lung cancer". Other chemicals that seem to cause incidents of "bladder cancer", such as 2-naphthylamine, were used in the dye industry for dyeing cloth, but when a number of cases of cancer turned up, their use was discontinued. Vinyl chloride, a chemical gas, has also been implicated, seeming to cause "liver cancer" (Grolier, Cancer).
Drugs, such as some cancer-treating alkylating agents, are also carcinogens. These agents are used to break the DNA strands in the cells, thereby killing the cells, but they also affect the cells surrounding the tumor, actually making them malignant. When these chemicals are used to treat cancer in this way, they must be given in exact proportions for each person, and if the dosage is incorrect the chemical will create a cancerous effect. Estrogens, a group of female hormones usually administered to women after menopause, seem to cause an increased incidence of cancer of the uterus. This has been alleviated today by administering estrogen in combination with progesterone. Certain salts that contain arsenic are suspected of being causally related to cancer of the skin and liver (Grolier, Cancer).
The suggestion that cancer is caused by an alteration of DNA within the benign cell was proposed by James and Elizabeth Miller in the 1960s, who demonstrated that chemical carcinogens must be metabolized and broken down so that they may interact directly with the DNA of the cells in question (Grolier, Cancer).
Biological Agents
Our own bodies, in conjunction with parasites found in different parts of the world, have been related to the causes of many types of cancer. Among the most clearly established biological agents are the oncogenic (cancer-causing) viruses that commonly cause the formation of neoplasms in lower animals; these have been linked to some human cancers, and at least one has been definitely proven to cause cancer of the blood (leukemia) (Grolier, Cancer).
Physical Agents
High-energy and ultraviolet radiation are two of the major causes of human and animal cancer. It has been proven that there is a relation between the sun's ultraviolet rays and the development of skin cancer in humans. Cancers caused by radiation include just about every known variety, including leukemia and cancer of the thyroid, breast, stomach, uterus, and bone (Grolier, Cancer).
It has also been suggested that electromagnetic fields can pose a risk of cancer when the field strength is extremely high. Power lines running through cities and through the country, including the large hydro towers that carry power through the countryside, create such electromagnetic fields. These fields are strong enough, in most cases, to light a fluorescent tube that a person is holding just by walking underneath the power lines. In several instances in the United States, especially in Washington and New York, the fields are so strong that people fear the exposure may be hurting their children (Fortune, p. 80).
It is recommended that pregnant women not use electric heating blankets for extended periods (sleeping, etc.), as the blankets create a low electromagnetic field that may actually create cancer in the fetus during development.
Inherited Cancer
As was stated earlier, everyone has cancer, although benign. It is passed from one generation to the next, and the amount of carcinogens the previous generation has come into contact with bears on the risk passed to their offspring. Some carcinogens can be stored in the body unused and can be replicated in the next generation, because the body has the ability to create DNA and RNA sequences that represent those carcinogens.
This is not necessarily the cause for inherited cancer, nor why it affects some offspring and not others. It has been shown that cancer can remain dormant in several generations, and then suddenly become active in a healthy generation.
Stages
Cancer does not jump out of the woodwork in a day; in most cases, it takes a long time for cancer to become detectable, depending on the type and where it is growing. It has been shown that cancer detected in the earliest stages of its growth is far easier to stop, and so the American Cancer Society has begun to promote public awareness of the seven warning signs to look for:
(1) a change in bowel or bladder function; (2) a sore that does not heal; (3) unusual bleeding or discharge; (4) a thickening or lump in the breast or elsewhere; (5) indigestion or difficulty in swallowing; (6) an obvious change in a wart or a mole; and (7) a nagging cough or hoarseness. Should anyone exhibit any of these symptoms, they should see a physician immediately (Grolier, Cancer).
Initiation and Promotion
One of the characteristics in the development of cancer in an organism is the amount of
time between initial exposure to a carcinogen, and the actual development of a malignant
cancerous tumor:
Beginning in the late 1940s, a number of investigators defined the early stages in the development, or natural history, of cancer. In a classical experiment performed on the skin of mice, a single application of an agent induced no neoplasms, but when it was followed by several applications of a second agent, termed the promoter, neoplasms developed. (Grolier, Cancer)
It was found that initiation by the first agent is irreversible once the reaction has begun; however, it was also noted that if the promoter was given in several doses spread over a long period of time, no neoplasms would occur, even though the total dosage of the promoter was the same. In humans, for instance, alcoholic beverages, dietary fat, and many of the components of cigarette smoke have been shown to be effective promoting agents (Grolier, Cancer).
Progression
Once a tumor has been created by initiation and promotion, it can progress from a benign to a malignant form, or from a slowly reproducing tumor to a rapidly growing malignant tumor. This progression has been shown to be related to the number of abnormalities within a cell's DNA. The cells surrounding the tumor are assimilated into the tumor as it grows. It has also been shown that tumors can suddenly stop growing and then resume their growth at a later time. There is no firm evidence as to why this happens; scientists believe it is related to unused portions of the DNA strands in cells that have been transformed into instructions regarding the tumor's growth.
Treatment
Treatment of cancer seems to be more of an art than a science at the moment, especially in the areas of surgery, radiation treatment, and chemotherapy.
Surgery
Surgical removal of a cancer from the body is the oldest and sometimes most effective means of disrupting and stopping cancer growth. Surgery can be used to remove malignant or benign tumors within the body, although benign tumors are generally not removed, because of the possibility of making them active. Complete removal of a malignant tumor is often successful in halting cancer growth in that particular region when it is followed by radiation therapy. It is also possible to remove parts of tumors, to reduce the amount of cancerous tissue in the body as a whole. The one major drawback to surgery is that quite often a tumor is not accessible to a surgeon, or it may be attached to a major organ of the body, in which case removing the tumor may cause serious side effects and even death. So long as a cancerous tumor has not spread to a major organ or tissue, removal will be safe and successful in most cases.
Surgical removal of a cancerous tumor may give people the extra months or years to carry out things they want to do, especially those people who cannot be made totally free of cancerous tissue. Surgery to remove a tumor may give people the comfort in which to live out their lives, even though it may not be the complete solution (Encyclopedia Britannica, p. 541).
Radiation
Radiation treatments are normally conducted after surgery if there was a large affected
area, or treatments can be used on small tumors when surgery is not possible. Irradiating a large
area of the body for a large tumor can create other types of cancer within the body.
Treatments can be carried out using gamma rays emitted by cobalt-60, a radioactive isotope, by focusing high-powered X-rays (many times the strength of the normal X-rays used to scan the body), or by using particles (electrons and neutrons). Although some of the surrounding cells are killed in the radiation process, the effect is minimized by shielding surrounding areas with dense materials, such as lead and gold. The source of the radiation and the sensitivity of the tumor together determine the overall effect on the tumor.
Radiation treatments are extremely effective on leukemia and carcinomas (solid tumors which form in the skin and the linings of most glands and organs), as these are extremely sensitive to the radiation.
If radiation therapy is unsuccessful in the first few treatments, it is unlikely to have any
significant impact on the cancer after this stage, and may cause more damage than it does good.
Chemotherapy
Chemotherapy is the process by which chemicals are administered into the body to fight
and destroy cancerous cells and tumors. At present, at least 10 types of human cancer can be
treated and cured by chemotherapy alone or in conjunction with surgery and/or radiation,
(Britannica, p. 558).
Chemotherapy has been proven successful against some strains of cancer such as
lymphocytic leukemia in children, Hodgkin's Disease, sarcomas (connective tissue such as bone
and fat) and kidney tumors. Chemotherapy is usually not a complete cure, but has helped to
drastically increase the useful lifetime of many patients with these diseases.
There is one major point to note about chemotherapy, and that is that it has been shown
that some chemicals used in the treatment of cancer will actually create other forms of cancer, or
speed the growth of those already malignant, if dosages or administration of the chemicals is
incorrect:
Compounds that have been effective in the chemotherapy of human cancer include certain hormones, especially the steroid sex hormones and those from the adrenal cortex; antibiotics produced naturally by a variety of microorganisms; plant alkaloids, including vinblastine and vincristine, derived from the periwinkle flower; alkylating agents--chemicals that react directly with DNA; and antimetabolites, which resemble normal metabolites (metabolic compounds) in structure and compete with them for some metabolic function, thus preventing further utilization of normal metabolic pathways. (Grolier, Cancer)
It has also been noted that as chemotherapy damages some of the surrounding tissue
around a tumor, chemotherapy can have some serious side effects. Some patients develop severe nausea and vomiting, become very tired, and lose their hair temporarily. Special drugs are given to alleviate some of these symptoms, particularly the nausea and vomiting, (Compton's Multimedia Encyclopedia, Cancer - Chemotherapy).
Immunotherapy
While still a rather new form of treatment, immunotherapy is looked upon as having great promise.
Immunotherapy is where the body's own immune system is used to combat the neoplasms situated in the body, with the help of "engineered" antibodies that are added to the patient's immune system. The immune system will then replicate the antibody and send it out to destroy any cancerous cells matching the DNA and RNA sequences it was designed to track, while attaching itself to healthy cells to prevent assimilation by cancerous cells. This process has worked on a single-case basis with good results, but it is expected to be a while before its use is widespread. (How it Works, p. 415)
Recent Trends
Recently, therapies combining less radical forms of surgery with radiation, chemotherapy, and/or preventive medicine have come into use:
Such therapy has been especially useful in the treatment of breast cancer, where the traditional radical mastectomy, involving removal of the breast, lymph nodes, and parts of the arm and chest muscles is becoming less common. It is being replaced by relatively simple surgery involving removal of only the lump itself or the breast, followed by chemotherapy or the use of preventive drugs. An example of the latter is tamoxifen, an anti-estrogen that prevents the growth of cancer cells with little or no toxicity to the host and remaining normal cells. (Grolier, Cancer)
Remission
Rehabilitation is an important and ongoing process after cancer treatment of any kind. The majority of cancers are considered cured if there is no recurrence within five years after the last treatment. Other types of cancer must be monitored for ten years after treatment to be sure that no recurrence happens. It should also be noted that many types of leukemia may seem to be nonexistent for several years and then appear again; it is generally harder to treat such a recurrence than the original case.
Present and Future Research
Cancer is an elusive and stubborn disease that can turn up in almost any system of the body. With this in mind, there must be a common reason for malignant cancer cells to continue to plague a large percentage of our population:
In 1983, for example, U.S. and British researchers determined that at least two genetic changes may be needed to transform cells into cancerous cells under laboratory conditions: one stage enables the cell to grow indefinitely; the other stage enables the cell to ignore signals from surrounding cells that would otherwise halt its growth. Also, because the means to identify most
of the carcinogenic agents in the environment are now available, a major program of cancer prevention is within reach. (Grolier, Cancer)
With this theory in mind, it will be relatively easy to control cancer once we are intelligent and wise enough to know how to directly manipulate the DNA sequences in cells and place that information in the bodies of the patients in question. It will be a glorious day when we can eliminate cancer from this world, or will it?
My Thought and Ideas about the Future of Cancer
In the present day, our technology increases ten-fold each year. We are able to find out more, faster and more efficiently than in any other time in history. With our new knowledge that is forthcoming, I would predict the end of most major diseases early in the next century. Once we are able to read and modify the data and instructions found in our own DNA, we can directly access the way we as living beings will grow and evolve. However, we will have another problem, and that is of population. If there are no diseases to disrupt the growth of our population on this planet, we will soon overcrowd, and we may not yet have the technology to leave this world. However, I think we will still be better off without cancer.
References
Tetzeli, R. (1990). Can Power Lines Give You Cancer? FORTUNE Magazine, 49, 80-85
Pitot, H.C. M.D. et al. (1992) Cancer. Grolier Electronic Encyclopedia,1992 ed. Search
phrases: CANCER, GENETIC CODE, DNA, RNA
Clarke, D. & Dartford, M. ( 1992). Cancer Treatment. How It Works: The New Illustrated
Science and Invention Encyclopedia, 414-418
Abeloff, M.D. et al (1991) Cancer. Encyclopedia Britannica: Macropedia, 534-542
Drill, V.A. et al (1991) Drugs and Drug Action - Chemotherapy. Encyclopedia Britannica:
Macropedia, 553-560
American Cancer Society et al (1992) Cancer. Compton's Multimedia Encyclopedia,1992 ed.
Search phrases: CANCER, CHEMOTHERAPY, GENETICS
f:\12000 essays\sciences (985)\Chemistry\Catalytic Converter.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Notice when a vehicle drives by nowadays that it is so much quieter than those loud oldies that pour out the blue smoke. Ever wonder just what is underneath a vehicle that makes the new ones so much cleaner? It is called a catalytic converter. The main function of a catalytic converter is to decrease the pollution emitted from a vehicle's exhaust. The concept behind this is to add a catalyst and force a reaction between the automobile's exhaust and oxygen inside the converter. To see just how this happens, let's look inside a catalytic converter.
A catalytic converter is made up mainly of a muffler-like chamber which contains porous, heat-resistant materials coated with either platinum or palladium. These materials are known as catalysts. A catalyst is a substance which, although it causes a reaction to occur, does not itself change at all during the reaction. This is the idea behind a catalytic converter. The carbon monoxide gas and hydrocarbons emitted from the engine travel along the exhaust system until they reach the catalytic converter. There they come into contact with the described catalyst. This forces a reaction of the carbon monoxide and hydrocarbons with the oxygen inside the converter, creating carbon dioxide and water vapor as products. The reaction which occurs inside the converter is as follows:
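The equation itself does not appear in this copy of the essay. Representative balanced equations for the two oxidations described, using octane (C8H18) purely as an example hydrocarbon, would be:

    2 CO + O2 -> 2 CO2
    2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O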
The main compounds involved are carbon monoxide and hydrocarbons
(compounds of hydrogen and carbon), as well as oxygen. When these three are
combined with the provided catalyst, a reaction occurs as above. During the reaction the
oxygen splits apart the carbon monoxide and the hydrocarbons and allows them to
combine with its elements forming the aforementioned products.
The catalytic converter first made an appearance in vehicles in 1975, after the government of the United States of America had established a law controlling auto emissions. There was one important detail outlined in the use of a catalytic converter, however: only lead-free gasoline may be used. The reasoning behind this is that if leaded gasoline were used, the lead would coat the platinum and palladium pellets, rendering them ineffective and thereby stopping the reaction. Phosphorus has much the same effect on the pellets, so the gasoline must contain minimal amounts of it as well.
A catalytic converter can be found in every new vehicle today, unless the vehicle runs on diesel fuel. In case you are interested in finding the catalytic converter nearest you, you may want to take a look under the nearest vehicle. It looks like the muffler, only it is a little bit larger and sits closer to the front of the exhaust system. I advise against the unauthorized removal of it, however, for it may result in the breakdown of our atmosphere.
Threats like these to our atmosphere spurred on the creation of anti-pollutant
components in vehicles and the trend for a pollution free environment still continues. It
will be a great struggle as inventors come up with new and bizarre ways to keep our
atmosphere intact. Already, electric and solar powered test cars have taken to the
highways to test their durability, effectiveness, and convenience. The catalytic converter
was definitely the original spark that started the new "safe" auto craze, and was an
ingenious invention. I'm sure that as long as I live such inventions will never outlive
their usefulness.
A. Texts
i) Chemistry - Heath
pgs. -
B. Computer
i) The Internet - "Catalytic Converters"
www.generalmotors.com
ii) "Catalysis." Encyclopedia Encarta. 1994 ed.
C. Encyclopedia
i) "Catalytic Converter." The World Book
Encyclopedia. 1985 ed.
f:\12000 essays\sciences (985)\Chemistry\cfcs.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chlorofluorocarbons were discovered in the 1920's by Thomas Midgley, an organic chemist at the General Motors Corporation. He was looking for inert, non-toxic, non-flammable compounds with low boiling points that could be used as refrigerants. He found what he was looking for in the form of two compounds: dichlorodifluoromethane (CFC-12) and trichloromonofluoromethane (CFC-11). In both compounds, different amounts of chlorine and fluorine take the place of hydrogen in methane, which is a combination of carbon and hydrogen. These two CFCs were eventually manufactured by E. I. du Pont de Nemours and Company and, under the trade name "Freon," constituted 15% of the market for refrigerator gases.
CFCs were the perfect answer for cooling refrigerators and air conditioners. They were easily turned into liquid at room temperature with application of just a small amount of pressure, and they could easily then be turned back into gas. CFCs were completely inert and not poisonous to humans. They became ideal solvents for industrial solutions and hospital sterilants. Another use found for them was to blow liquid plastic into various kinds of foams.
In the 1930's, household insecticides were bulky and hard to use, so CFCs were put to work as propellants, because they could be kept in liquid form in an only slightly pressurized can. Thus, in 1947, the spray can was born, selling millions of cans each year. Insecticides were only the first application for CFC spray cans; they soon delivered a range of products from deodorant to hair spray. In 1954, 188 million cans were sold in the U.S. alone, and four years later the number jumped to 500 million. CFC-filled cans were so popular that, by 1968, 2.3 billion spray cans were sold in America.
The hopes for a seemingly perfect refrigerant were diminished in the late 1960's, when scientists studied the decomposition of CFCs in the atmosphere. What they found was startling. Chlorine atoms are released as the CFCs decompose, and these destroy ozone (O3) molecules in the high stratosphere. It became clear that human use of CF2Cl2, CFCl3, and similar chemicals was having a negative impact on the chemistry of high-altitude air.
When CFCs and other ozone-degrading chemicals are emitted, they mix with the atmosphere and eventually rise to the stratosphere. CFCs themselves do not actually affect the ozone, but their decay products do. After the CFCs are photolyzed, the chlorine eventually ends up in "reservoir species" - compounds such as hydrogen chloride, HCl, or chlorine nitrate, ClONO2, which do not themselves react with ozone. These then further decompose into ozone-destroying substances. The simplest such cycle is as follows (How do CFCs Destroy the Ozone):
Cl + O3 -----> ClO + O2
ClO + O -----> Cl + O2
O3 + O ------> 2 O2
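To make the catalytic nature of this cycle concrete, here is a small illustrative Python sketch (the quantities are arbitrary counts chosen only to show that the chlorine atom is regenerated each cycle and can therefore destroy ozone over and over):

    # Toy model of the catalytic cycle shown above; counts are arbitrary, not real concentrations.
    cl = 1            # a single chlorine atom
    o3 = 1000         # a pool of ozone molecules
    o = 1000          # a pool of free oxygen atoms
    destroyed = 0

    while o3 > 0 and o > 0:
        # Cl + O3 -> ClO + O2, then ClO + O -> Cl + O2: the chlorine atom comes back unchanged
        o3 -= 1
        o -= 1
        destroyed += 1

    print(cl, destroyed)   # still 1 chlorine atom, yet 1000 ozone molecules removed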
The depletion of the ozone layer leads to higher levels of ultraviolet radiation reaching Earth's surface. This can lead to a greater number of cases of skin cancer, cataracts, and impaired immune systems, and is expected to reduce crop yields, diminish the productivity of the oceans, and possibly contribute to the worldwide decline of amphibian populations. Besides CFCs, carbon tetrachloride, methyl bromide, methyl chloroform, and halons also destroy the ozone.
In 1985, the degradation of the ozone layer was confirmed when a large hole in the layer over Antarctica was reported. The hole's existence is due to industrial chemicals that are carried there through the atmosphere. During September and October of 1985, up to 60 percent of the ozone there had been destroyed. Since then, smaller yet significant stratospheric decreases have been seen over more populated regions of the Earth.
Worldwide monitoring has shown that stratospheric ozone has been decreasing for more than 20 years. The average loss across the globe totaled about five percent since the mid-1960's with cumulative losses of about ten percent in the winter and spring. A five percent loss occurs in the summer and autumn over North America, Europe, and Australia.
The world has been forced to address this issue. Thus, the major powers of the world created a global treaty, the Vienna Convention for the Protection of the Ozone Layer. The agreement was put into effect in 1988, and the subsequent Montreal Protocol on Substances that Deplete the Ozone Layer went into effect in 1989. To date, 140 countries have acknowledged the Montreal Protocol. These countries decided on a timetable to reduce and eventually end their production and consumption of eight major halocarbons. The timetable was accelerated in 1990 and 1992, and various amendments were adopted in response to scientific evidence that stratospheric ozone was depleting at a much faster rate than had been predicted.
On the home front, the U.S. Environmental Protection Agency (EPA), under the authority of the U.S. Clean Air Act, has issued regulations for the phase-out of production and importation of ozone-depleting chemicals. The EPA established various policies such as refrigerant recycling in both cars and stationary units, a ban on nonessential products, labeling requirements, and a requirement to revise federal procurement specifications.
One of the largest single uses of CFCs is as a refrigerant (CFC-12) used in automobile air conditioners. Since a big source of this CFC-12 is leaking automobile air conditioners, many new environmental rules have an impact on the auto service and repair industry.
(1) Anyone repairing or servicing motor vehicle air conditioners must recover and/or recycle CFCs on-site or recover CFCs and send them off-site for recycling.
(2) Everyone dealing with A/C must be certified to use CFC recovery and recycling equipment. The shop must own EPA- approved recovery and recycling equipment.
(3) Retailers can only sell cans of automotive refrigerant (less than 20 pounds) to certified technicians. This discourages do-it-yourselfers from topping off their own A/C.
The fines for violating any of these rules can run as much as $25,000 per violation.
If someone wants to keep working on A/C, they will have to make an investment in equipment. Is it worth it? Recovery-only units cost about $500 and recovery/recycling units run from $1,800 to $5,000. People working on air conditioning units must pass an EPA-approved CFC recycling course. CFC-12 used in motor vehicles was phased out of production at the end of 1995. No more will be made.
CFCs, when first developed, were thought to be a miracle compound. They made excellent refrigerants, pesticides, deodorants, packaging foam, and had many other uses. Unfortunately, a hole in the stratosphere was found over Antarctica in 1985, and CFCs were to blame. Since 1989, numerous laws and restrictions have been made to stop production of CFCs and allow the ozone in the stratosphere to replenish itself. Fortunately, the laws and restrictions have been effective and the ozone layer is slowly but surely filling in. The future of the ozone layer looks good indeed.
Bibliography
Atmospheric Civic Homepage [on-line] http://www.epa.gov/Ozone/defns.html#cfc
Dolan, Edward. Our Poisoned Sky. Cobblehill Books, New York: 1991.
EPA's Stratospheric Ozone Protection Program World Wide Web Site [on-line] http://www.epa.gov/docs/ozone/index.html, March 8, 1997.
Gay, Kathlyn. Ozone. Impact, New York: 1989.
Hoff, Mary and Mary Rodgers. Our Endangered Planet: Atmosphere. Lerner Publications Company, Minneapolis: 1995.
Preston, James. Air Conditioning and Refrigeration Technician's EPA Certification Guide. Quality Books, New York: 1994.
Roan, Sharon. Ozone Crisis. Wiley Science Editions, New York: 1989.
f:\12000 essays\sciences (985)\Chemistry\Chemist.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Becoming a chemist takes a lot of hard work and discipline. One very important aspect of being a chemist is English; communication is of the utmost importance (Murphy). As well as having good communication skills, you also need a lot of patience. There are many other qualities you will need as well, such as an excellent learning ability and mathematical skills. You will also need to be able to perceive concepts or objects. Once you get into college you need to know what kind of degree to get in order to have a fulfilling and successful career. For most entry-level jobs a BS degree is sufficient. However, for a college teaching job a Ph.D. is required (Choices).
After obtaining a degree, your next step would be to find a job. According to Jerry Murphy, if you want an easy way into the chemistry field you need to know someone already in that occupation. For the most part, employment in Missouri is increasing. Nevertheless, if you are not restricted to finding a job in Missouri, employment in the United States as a whole is expected to increase 21% (Choices).
After finding a job in the chemistry field that you will enjoy, another question arises: money. On average, if you begin working at an entry-level job with a bachelor's degree, your salary will be somewhere around $24,000 a year. If you start work with a master's degree you can expect about $32,000, and with a Ph.D. as much as $60,000 ("Chemists").
Research and development is the subcareer most chemists choose. In this subfield your primary goal would be to look for and use information about chemicals ("Chemists"). A chemist also spends a considerable amount of time in an office, storing information and writing reports about the research he or she has done. There are two different types of research: basic research and applied research. In basic research a chemist studies the qualities and composition of matter. In applied chemistry a chemist takes information obtained from basic research and puts it to practical use ("Chemists"). Chemistry includes many other subfields; some of these are analytical chemistry, organic chemistry, and physical chemistry. Analytical chemists ascertain the nature, structure, and composition of a substance. Organic chemistry involves organic substances. A physical chemist, however, studies the attributes of atoms and molecules, as well as why and how chemical reactions occur (Choices).
After researching this career thoroughly, I have concluded that this occupation, even though it is not my first choice, would be a good career to pursue. According to the information I have obtained, employment should increase over the next couple of years, allowing for a fairly lucrative life if I obtain a good degree. I will attempt to pursue this career even though my English skills are lacking.
f:\12000 essays\sciences (985)\Chemistry\chemistry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
* DO wants hvnsgtxt.zip (the book zipped) msg me...
winecoolers
if you know what i mean
heh
take 1 and drink 2 beers and it feels like drank a 12 pack
*** salt_pepr (Jenn@Cust52.Max1.Houston2.TX.MS.UU.NET) has left #highersource
*** ROK (Rok@172-226-116.ipt.aol.com) has left #highersource
check http://www.washingtonpost.com/wp-srv/national/longterm/documents/heavensgate/contents.htm
*** beks (email@170-246-210.ipt.aol.com) has left #highersource
*** nads (bleh@ppp-1.ts-1.ptn.idt.net) has left #highersource
I did.
It was ignorant
http://www.lotek.org/highersource
and you catch some serious packetloss if you drink too much with them
*** LadyDani^ (Trinity@ppp0168.connect.ab.ca) has joined #highersource
*** NuSer (~schourp@shellx.best.com) has joined #HigherSource
*** alxxx has quit IRC (irc.cdc.net irc.voicenet.com)
heh. that sounds like the cult writings...
*** Pachinko (rein0062@pub-11-b-152.dialup.umn.edu) has joined #highersource
hehe
*** RobberBar changes topic to "My nextlevel hardrive does not grok"
* DO wants hvnsgtxt.zip (the book zipped) msg me...
if you want transcript logs goto ftp://www.lotek.org/incoming/hs
*** RobberBar sets mode: +v DethCrush
*** RobberBar sets mode: +v LadyDani^
*** RobberBar sets mode: +v Pachinko
*** weezr (zero@dialin102-6.iwsc.com) has joined #highersource
*** RobberBar sets mode: +v NuSer
f:\12000 essays\sciences (985)\Chemistry\'Chemistry'.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CHEMISTRY
I am chemistry. I am mysterious and mature, malodorous, yet vivacious. I am a heaving search for answers to all kinds of interesting questions. I am so broad that I overlap with all the other natural sciences. I am the fundamental unit of matter, the atom, seen only by the most effective microscope. I prosper in the dashing, fiery flames in a fragile glass beaker over a Bunsen burner and develop powerful rocket fuels. I am a clamorous explosion of two flammable chemicals intermixed in a laboratory. I am liquid flowing from one tube to another, "volumous" gas, and clustered solids. I am the most abundant element in the Earth's crust, a thick blanket of gas enveloping the Earth, providing gases necessary for the support of plants and animals. I am the colorless air that is breathed in constantly. I am the plunging rain that pours when the sun is shining, the white, damp snow that drifts in children's dreams, and the darting hail that prevents little boys and girls from attending school. I am the vivid part of life. I shield those below me from the sun's intense heat. I am fed and drunk by those who go through hunger. As acid contained in the human body, I digest food and clean out wastes. I am a desperate hope for a cure, yet poison, slowly spreading in the air and along the land. I am the colorful fireworks "sprocketed" across the dark sky when two soft lips are gently pressed together. I am the warm tears trickling slowly down a child's face when the thought of going to school all alone crosses his mind. I am the pure exhilaration every time two people fall in love. I am rubbed roughly against a filthy body and massaged along a baby's butt. Ancient or modern, I am vigorous.
f:\12000 essays\sciences (985)\Chemistry\Chevron.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chevron is the second largest producer of oil in the Gulf of Mexico. It is the third largest producer of oil in the United States and 24 other countries. Their production worldwide has been quoted as 1.4 million barrels of oil and gas a day. Chevron's products are transported over land by pipeline and tankers, and over water by barges. The headquarters for this huge corporation is in Houston, Texas, but they also have offices in California, London, Singapore, Mexico, and Moscow, to name a few. They have pipelines that extend across the United States and also in Africa, Australia, Indonesia, New Guinea, Europe, and the Middle East. In addition to oil and natural gas, Chevron is also one of the leading coal producers in the United States. The company is very interested in the environment, and more than half of the company's reserves are of low-sulfur coal. Chevron's latest accomplishment is geared toward capturing much of the oil reserves waiting to be found in Russia. Before the fall of the Soviet Union in 1991, Chevron had nearly signed a deal with the government to buy Tenghiz, the biggest oil field to become available in twenty years. Huge reserves of oil, approximately 250 billion barrels, were waiting to be taken from the earth. After the uprising in Russia, Chevron feared that the deal would be off. Fortunately, they were able to bargain with the new-found government and enter into a joint agreement to produce oil from the fields in Tenghiz. At this time, Chevron is planning to export the oil from Russia by pipeline to the Black Sea, where it will be transported out by oil tanker. The cost of this entire deal will be somewhere in the area of $10 billion. In 1991 Chevron had revenues of $40 billion, with a net income for the year of $1.3 billion.
f:\12000 essays\sciences (985)\Chemistry\Chlorine 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chlorine
Chlorine is (at room temperature) a greenish-yellow gas that can be readily liquefied at 5,170 torr, or 6.8 atmospheres, at 20 C (68 F), and it has a very disagreeable odor. Its element symbol is Cl, its atomic number is 17, and its atomic mass is 35.453. Chlorine's melting point is -101 C (-149.8 F). The boiling point is -34.05 C (-29.29 F) at one atmosphere of pressure. Chlorine is a member of the halogen group. Chlorine was discovered by the Swedish scientist Carl Wilhelm Scheele in 1774, but he first thought it was a compound rather than an element. In 1810, Sir Humphry Davy named it chlorine, from the Greek word meaning "greenish-yellow".
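As a quick arithmetic check on the figures above, here is a minimal Python sketch of the pressure and temperature conversions (1 atmosphere = 760 torr; F = C x 9/5 + 32):

TORR_PER_ATM = 760.0

def c_to_f(celsius):
    # Convert degrees Celsius to degrees Fahrenheit.
    return celsius * 9.0 / 5.0 + 32.0

print(6.8 * TORR_PER_ATM)    # about 5,168 torr, matching the roughly 5,170 torr quoted
print(c_to_f(-101.0))        # about -149.8 F, the melting point
print(c_to_f(-34.05))        # about -29.3 F, the boiling point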
Chlorine is used in bleaching agents, disinfectants, monomers (plastics), solvents, and pesticides. It is also used for bleaching paper pulp and other organic materials, preparing bromine, (a poisonous element that at room temperature is a dark, reddish-brown), tetraethyl lead, and killing germs in water, particularly in swimming pools and hot tubs.
Like every member of the halogen group, chlorine has a tendency to gain one electron and become a chloride ion. Chlorine strongly reacts with metals to form mostly water-soluble chlorides. Chlorine also strongly reacts with nonmetals such as sulfur, phosphorus, and other halogens. If you were to mix hydrogen and chlorine gases and keep them in a cool dark place, the mixture would be stable, but if it were exposed to sunlight, it would cause a strong explosion. If a burning candle were placed in a sealed container of chlorine, it would keep burning, and it would produce thick, black, smoke, leaving behind soot. There are five oxides that chlorine can form: chlorine monoxide; dichloride monoxide; chlorine dioxide; chlorine heptoxide; and chlorine hexoxide.
Electron Dot Model: Cl with its seven valence electrons shown as dots (diagram not reproduced here).
Additional Information
Chlorine was the first substance used as a poison gas in World War I (1914-1918), along with gases like tear gas, phosgene (a lung irritant), and mustard gas. Flame-throwers were also tried, but at first were thought ineffective because of their short range; when napalm (made from palmitic and naphthenic acids), a sort of thick, sticky gasoline, was developed, flame-throwers became quite useful in World War II.
Most chlorine is made by electrolysis of a salt solution, with sodium hydroxide as a by-product. Some industrial chlorine is made by oxidizing hydrogen chloride (a colorless, corrosive, nonflammable gas with a penetrating, suffocating odor).
Bibliography
* Microsoft (r) Encarta. Copyright (c) 1994 Microsoft Corporation. Copyright (c) 1994 Funk & Wagnalls Corporation.
* Asimov, Isaac, Building Blocks of the Universe, rev. ed. (1974); Downs, A. J., The Chemistry of Chlorine, Bromine, Iodine and Astatine (1975); Hamilton, E. I., The Chemical Elements and Man (1978); Nechamkin, Howard, Chemistry of the Elements (1968); Ruben, Samuel, Handbook of the Elements, 2d ed. (1967; repr. 1985); Trifonov, D. N., Chemical Elements: How They Were Discovered (1985).
f:\12000 essays\sciences (985)\Chemistry\Chlorine.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Chlorine Debate:
How White Do
You Want It?
Chlorine is one of the world's most widely used chemicals, the building
element vital to almost every United States industry. We use chlorine and
chlorine-based products whenever we drink a glass of water, buy food wrapped in
plastic, purchase produce in the supermarket, pour bleach into a washing machine,
have a prescription filled, print out a computer document like this one, or even
drive a car. (Abelson 94)
Chlorine, a member of the halogen (salt-forming) group of nonmetallic elements, was first made by the Swedish chemist Carl Wilhelm Scheele in 1774, who treated hydrochloric acid with manganese dioxide. In 1810, the English chemist Sir Humphry Davy determined that chlorine was a chemical element and named it
from the Greek word meaning greenish-yellow. One hundred and eighty-five
years later, chlorine compounds are ubiquitous components in the manufacturing
of paper, plastics, insecticides, cleaning fluids, antifreeze, paints, medicines, and
petroleum products. The unfortunate and unavoidable by-product of these
manufacturing processes is dioxin, one of the most toxic substances on the planet
earth. Dioxins are also produced whenever chlorine containing substances, such
as PVC, are burned.
Life as we know it will change, if a Greenpeace campaign is successful.
The powerful environmental group has mounted a well-organized campaign that
has as its objective nothing less than a total, worldwide ban on chlorine. With the
public health and billions of dollars at stake, the debate over chlorine has become
one of the world's most contentious and controversial issues. "Is a chlorine-free
future possible?" asked Bonnie Rice, a spokesperson for Greenpeace's Chlorine
Free Campaign. "Yes, it can be done without massive disruption of the economy
and of society, if it is done in the right manner." (Gossen 94)
The chlorine industry and its allies say a total ban on chlorine would be
neither wise, possible, nor economically feasible. "We find the chlorine campaign
outrageous in its scope and purpose," explained Leo Anziano, the Chairman of the
Washington-based Chlorine Chemistry Council, an organization that lobbies on
behalf of the chlorine industry. "We believe it's based on pure emotion and not on
science. Without any real study, it's been determined that all organochlorines
(compounds containing chlorine) are harmful". The chlorine industry has
presented many statistics on what it says will be the cost to society of substituting
other substances for chlorine, and these figures are staggering. The net cost to
consumers would exceed $90 billion a year, about $1,440 a year for a family of
four, according to studies conducted by the Chlorine Institute. About 1.3 million
jobs depend on the chlorine industry, an amount equal to the number of jobs in the
state of Oregon. Wages and salaries paid to those employees totaled more than
$31 billion in 1990, approximately the same as the total payroll that year for all
state and local government employees in Oregon. (WHO 94-95)
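The jump from the $90 billion aggregate figure to $1,440 per family of four is simple division. A minimal Python sketch follows, assuming a U.S. population of roughly 250 million (an early-1990s estimate; the population figure is an assumption and is not part of the Chlorine Institute study):

total_cost = 90e9        # claimed net cost to consumers, dollars per year
population = 250e6       # assumed U.S. population, early 1990s
per_person = total_cost / population
per_family_of_four = 4 * per_person

print(per_person)            # -> 360.0 dollars per person per year
print(per_family_of_four)    # -> 1440.0, matching the quoted per-family figure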
"With its call for a total ban, Greenpeace has gone beyond common sense
and is jeopardizing the health and economic well-being of this country," Anziano
charged. Greenpeace is also well-armed with statistics. Their spokesmen argue
that, if implemented with careful planning, the transition to a chlorine-free
economy could save money, create new jobs, and be "economically and socially
just." Greenpeace puts the savings from phasing-out chlorine at $80 to $160
billion annually.
The phase out of chlorine would take place over a 30-year period and
would involve substituting what Greenpeace describes as "traditional materials and
non-chlorinated plastics." In the pulp and paper industry, for example, a totally
chlorine-free bleaching process would be implemented, while, in dry cleaning,
water based systems would replace chlorine-based solvents. Nothing is more
contentious in the chlorine debate than Greenpeace's firm position that all chlorine
and organochlorines threaten people and so should be banned. "Industry produces
more than 11,000 chlorine chemicals, each of which could take years of study,"
explains Jack Weinburg, a spokesperson for Greenpeace's Chlorine Campaign.
"Traditionally, we have looked at chemicals as being innocent until proven guilty.
We need to change that approach." (Greenpeace 94)
Industry warns that it is a big mistake not to distinguish among chlorinated
compounds because the mere presence of chlorine does not render a compound
carcinogenic or harmful. "Regulations should target specific substances whose
environmental harm has already been demonstrated through rigorous scientific
studies," says Anziano. "The sloppy reasoning used by Greenpeace and their
allies is no substitute for careful risk analysis."
Science aside, much of the chlorine debate has been emotional, and nothing
has made tempers flare more than the issue of whether a link exists between breast
cancer and chlorinated pesticides and other chlorine-based chemicals. Greenpeace
has released a report, "Chlorine, Human Health and the Environment: The Breast
Cancer Warning," which reviews "new scientific evidence" linking chlorine-based
chemicals to breast cancer, an epidemic that kills 50,000 women annually in the
United States alone. Not surprisingly, industry has produced its own "scientific
evidence." For example, a study released by CanTox, a Canadian environmental
consulting group, concluded that "it is evident ... the proposed causal association
(of breast cancer) to bio-accumulative chlorinated organic compounds should be
rejected." (CMR 4)
This illustrates an all-too-clear pattern: a group (namely Greenpeace) points a finger at a problem and then starts making generalizations about its causes and effects. This not only causes a public outcry for an answer to the problem, but also a united defense put up by the big companies in question. That defense could be taken as a sign that they have something to hide, but that is not very likely.
In the titanic struggle over chlorine's future, industry is clearly on the
defensive. Recognizing that the court of public opinion will be the final arbiter on
the issue, it has begun to shift its own public relations machine into gear. The
Chemical Manufacturers Association has established the Chlorine Chemistry Council, which has a multi-million dollar budget, while big chemical companies such as Dow Chemical have created full-time positions with names like "Director of Chlorine Issues." "We need to offer the public a different view of chlorine
chemistry than the one the anti-chlorine forces have been purveying for years,"
says Brad Lienhart, Managing Director of the Chlorine Chemistry Council.
The anti-chlorine camp, however, has garnered the support of several
influential scientific, environmental, and international organizations, including the
International Joint Commission on the Great Lakes, the Paris Commission on the
North Atlantic (a multinational-level meeting of 15 European governments and the
European Community), the 21-nation Barcelona Convention on the Mediterranean,
and the American Public Health Association.
Strong anti-chlorine sentiment exists in the White House, the United States
Environmental Protection Agency and in both the United States Senate and House
of Representatives. President Clinton's proposal for the Clean Water Act involves
a strategy for reducing or prohibiting chlorine use. Meanwhile, the chlorine
industry is worried that the Environmental Protection Agency watchers might
curtail or even ban the production of chlorine and organochlorines. These
developments are making many chemical companies such as Vulcan and Dow
Chemical look quietly for alternatives to chlorine and organochlorines. Dow, for
example, has created a new business called Advanced Cleaning Systems, or ACS
for short, which provides water-based cleaning technology for green industrial
niches. "In the future, we have to be more critical of irresponsible chlorine and
organochlorine use to protect the essential uses of both of these substances," Tom
Parrott, Vulcan's Director of Environmental Health and Safety, explained to
Chemical Week. (Lucas 94)
Though tempers flare over this seemingly undecidable debate, the core of the debate also points toward its solution. Banning or getting rid of chlorine, organochlorines, or almost any other chemical can only cause more problems than it solves unless a proven and effective alternative is developed to take the place of that chemical. Most everyday products would have to be drastically altered to accommodate a complete chlorine ban, and that would take a great deal of time, effort, and money.
If a ban on chlorine were implemented, who would be responsible for the cost and maintenance of switching the equipment: the consumer, the producer, Greenpeace and other environmental watch organizations, or the government? The brunt of the cost would most likely fall into the hands of consumers, which would hit most middle- and lower-class families hard.
Chlorine is a building block of most of our everyday conveniences and a major player in most chemical compounds. Until a sturdy and cost-effective alternative is developed, most everyday consumers will have to go on using the same chlorine- and organochlorine-based products that they have used for years.
f:\12000 essays\sciences (985)\Chemistry\cloning.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CLONING
Richard Pech
6-5-96
per. 7
Cloning: is it the thing of the future, or is it the start of a new generation? To some, cloning could give back a life: a life of fun, happiness, and freedom. For others it could mean destruction, evil, or power. Throughout this paper, you the reader should get a better concept of cloning, its ethics, the pros and cons, and the concerns it has brought up. You will hear the good of what cloning can do and the bad that comes with the good. Most of the information you will read about in this paper is what might become of the future. Even though the cloning of humans cannot yet be accomplished, when it is, the possibilities will be endless.
What is cloning? How did it get started? Well, it is like this. A clone is a genetic copy or replica of a living organism. But when you hear "cloning," doesn't a sci-fi movie come to mind? Like when they take a nucleus, place it in an egg, put the egg in an incubator, and when it hatches it's an exact replica of the original being (Lawren). Though this has been done with frogs, it has not yet been accomplished with mammals (Lawren). Another way to make a clone, as they do in the cattle business, is to split the cells of an early multi-celled embryo, which will form two new embryos (Lawren).
For it to get started into practice took more than fifty years of questioning and testing. The first successful cloning experiment involved a leopard frog. It took place in 1952 with a group of scientists from the Institute for Cancer Research in Philadelphia (Lawren). To clone the frog they used an embryonic frog cell nucleus (Margery). In 1962, John Gurdon of Cambridge University cloned a toad that survived through adulthood and was able to reproduce. He was also the first to take a nucleus from a fully differentiated tadpole intestinal cell and clone toads from it (Robertson). As you can see, we are getting close to the cloning of humans. In 1981, Steen Willadsen was the first to create an artificial chimera. He did this by mixing a sheep and a goat, getting the result of a "geep" (Lawren). It had the body shape and the head of a goat, and a dappled coat which had large patches of sheep's wool. In 1984, Willadsen cloned the first verifiable mammal, using an embryonic nucleus transplanted into an unfertilized sheep egg. Later, in 1986, while working for a Texas bioengineering company, he used embryonic nuclei to produce the first cloned calves from cattle (Lawren). The cloned cattle that were produced were super-elite, high-production dairy cows and bulls with a high breeding rate (Robertson). In 1987, James Robl of the University of Massachusetts was the first to clone rabbits, also using embryonic nuclei. But who can say when we will be able to clone human organs or complete "biocopies" of human beings by using just the nuclei taken from a skin sample (Lawren).
What's so good about cloning? Let's look at a different scenario. Ned and Stacey are in a hospital. Both of them have a kidney that is failing. For Ned this is no big deal, since he has a clone. All the doctor has to do is remove the cloned kidney and switch it with the bad one. With this cloned kidney you don't need to worry about the body rejecting it, because it is made from the same DNA and the cells will react to it as if it were the original one. On the other hand, Stacey doesn't have a clone. So all she can do is pray for a donor's kidney to arrive before she dies. Another good thing is that we could create farm or "pharm" animals genetically engineered to produce valuable drugs (Rensberger). For example, scientists are creating gene-spliced sheep and mice that will manufacture anti-clotting drugs for humans in their milk (Rensberger). With this, breeders could make formerly expensive drugs in large quantities at a low cost. Doesn't all of this sound good to have? Or are we just overlooking the bad possibilities? Let's just say some freak wants to make an army of one hundred Adolf Hitlers, or tries to clone Einstein. Also, people could go out and buy a son that will grow up to be Michael Jordan or Mike Tyson. But in a way this is good for people who are unable to have children. Another thought for the future is immortality. When you make a clone it is like being born again. You have a whole other body waiting for you. You could be 80 years old and switch into a 21-year-old's body (Lawren). If you lose a limb, you just get another one sewn back on (Lawren). These possibilities can go on and on.
Cloning can also produce doubles, triples, even quadruplets. In a way this is good for some families. Are you wondering how this could be good? Well, just think about this. A couple has a cloned son implanted in the wife. When the son is born he is just fine and normal like every other baby. After a few years the kid is able to walk and wanders off somewhere. The next thing you know, the kid is hit by a car and killed on the scene. Even though this is a tragic event, the mother and father can go back to the lab to get the exact same cloned baby. The new cloned baby has the same physical features, but mentally he will be different (Robertson). So the personalities of the two will be different. One or the other will learn different things and at a different rate. And the lab will always have a copy or clone for another child. In a bad sense, the company that is making the cloned embryo could also sell the same copy to another couple. In time you had one cloned son, and then a couple of years later someone else has the exact same cloned son, which might make the first son feel like he is in an awkward position, having a brother who is exactly like him physically. So, as you can see, cloning has its good and its bad (Robertson).
Though this may seem too good to be true, or the worst nightmare you have ever had, none of it can be accomplished yet, although many attempts have been made. The human embryo still does not want to develop into a clone, and so far it has taken years of painstaking research. Some people's opinions about it are good, some bad. For example, Marie DiBerardino, Ph.D., who researches animal cloning at the Medical College of Pennsylvania in Philadelphia, says, "The cloning of animals is certainly useful, but I'm morally against manipulating genetic material that would develop into a whole human being. We just don't have the right to manipulate the gene pool of human individuals." (Lawren).
As you can see, Marie is against the cloning of human beings, but John C. Fletcher, Ph.D., of the University of Virginia's Center for Biomedical Ethics believes in cloning for human parts, but not for human manipulation. He says, "I don't think any [ethics] committee would approve research that would mutilate an embryo by destroying the brain. I know if I were looking at such a proposal, I'd say no. But if you could grow me a liver from one of my cells, I wouldn't be opposed -- as long as you weren't growing me. It's certainly better than taking a liver from a kid." (Lawren).
Paul Segall is a biologist at BioTime Inc. in Berkeley, California, and the coauthor of Living Longer, Growing Younger. He says: "The aging surgeon's dexterity, the athlete's wind, the construction worker's muscles, the fashion model's face -- all restored. Complexions smooth as a baby's, joints and tendons as spry as a teenager's, hearts and lungs of an adult in his or her prime... Cloning will provide the raw materials to put us back together." Segall is definitely pro-cloning. One thing that Segall enthuses about is "the seventy-year-old transformed into a nineteen-year-old's body" (Robertson).
Doing this research in the United States is like selling drugs on the street, so seeing any progress in cloning seems remote. The main ethical problem is the fact that cloning deals with human embryos (Robertson), kind of like abortion, since so far no cloned embryo has lived. Back in 1975, the federal government declared that there could be no funding for experiments on human embryos until the government's Ethics Advisory Board gave its approval (Rensberger). And since this is so close to abortion, politicians may stay clear of cloning research for the foreseeable future (Rensberger). But right now the immediate concern is whether there should be any restrictions on research with embryos designed to improve or perfect techniques of embryo splitting (Margery). If researchers are able to establish the efficacy and safety of embryo splitting, then it will have to be a concern. The two biggest ethical issues are, first, whether they should be able to do research on normal human embryos, and second, whether the embryo created can be placed into a uterus and be born. Some of the families' concerns are the fear that cloning violates one's uniqueness and dignity, and that it will give the child unrealistic parental figures. Some feel discomfort with the manipulation and destruction of human embryos in research. They even fear that clones could be created to provide a sick or dying child with an organ or tissue transplant. Still, the worst fear is that cloned embryos will be produced and sold to certain parents looking for the desirable child of their dreams (Rensberger).
In conclusion, cloning could possibly bring us immortality; it could be the fountain of youth and the ultimate life insurance; it could bring back loved ones, give some couples their first child, and provide us with our own transplants. On the bad side, one could possibly conquer the world, bring back evil souls, create mind zombies, and sell your body without you even knowing it. So does all the good even out with the bad, or is it not worth the trouble? I personally do not know, because to me it is strange to think that they could make me again. It seems impossible to be born a second time in life. One thing is for sure: cloning is bringing a whole new set of ideas into people's heads.
Works Cited
Facklam, Margery. From Cell to Clone. San Francisco: W. H. Freeman, 1979.
Lawren, Bill. Bionic Body Building. --: Longevity Publications International, Ltd., 1991.
Rensberger, Boyce. The Frightful Invasion of the Body Doubles Will Have to Wait. Washington, D.C.: Washington Post, 1993.
Robertson, John A. The Question of Human Cloning. New York: Hastings Center Report, 1994.
f:\12000 essays\sciences (985)\Chemistry\Cobalt.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cobalt
My report is about the element cobalt. Cobalt is the 27th element on the periodic table, with an atomic number of twenty-seven and the symbol Co. Cobalt's atomic weight is 58.9332. It has a melting point of 1,490 C and a boiling point of 2,900 C. Cobalt looks almost exactly like iron and nickel. Cobalt sits between iron and nickel on the periodic table and makes up only .001-.002 percent of the earth's crust. Cobalt was first found in the Harz Mountains, where people working the silver mines dug up arsenic cobalt ores. Then, because they thought the ores contained copper, they heated the ores, releasing arsenic trioxides. Cobalt was named after the German kobold, said to be an underground goblin or demon. Cobalt was identified as an element in 1735. Cobalt is a white metal with a bluish cast. It is magnetic, very hard, and does not tarnish.
Cobalt has many uses, and I will talk about some of them. It is a very expensive metal that is used in the manufacture of many expensive alloys. Cobalt-iron alloys have unique magnetic properties; for example, Hyperco is used as the core in strong electromagnets. Alloys containing titanium, aluminum, cobalt, and nickel can be made permanently magnetic. One alloy, called Stellite, is an alloy of cobalt, chromium, tungsten, and molybdenum. This alloy is extremely hard and keeps its hardness at extreme temperatures. It has many uses: cutting tools are made of it, along with gas turbines. Zaire is the world's largest producer of cobalt, with 65% of the world's reserve.
Cobalt is a common trace element found in food. It is a component of vitamin B12. It is important to our health. But excessive amounts may cause nausea, damage to the heart, kidneys, and nerves, and even cause death.
I think that Cobalt is a neat element. Before I did this report I knew nothing of Cobalt. Now I know how they use it as an alloy and in other ways.
f:\12000 essays\sciences (985)\Chemistry\CODEINE.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CODEINE (C18H21NO3 · H3PO4 · 1/2 H2O)
Our team researched the drug Codeine.
We used several different sources to gather our information. We got information from Jay
Moser and Sue Peterson, our two local pharmacists. We researched medical encyclopedias,
journals, and magazines. Codeine is known medically as methylmorphine. It is a drug derived
from opium, a poppy plant. It was discovered in 1832 by French chemist Pierre-Jean
Robiquet. Codeine constitutes about 0.5 to 2.5 percent of this plant substance. The drug
has been in use since the early 1900's and it shares most of the pharmacologic
characteristics of morphine, the other alkaloid in opium. Codeine is classified as a
narcotic; it has the same painkilling effect as morphine but is only one-sixth to one-tenth as strong. Codeine occurs as colorless or white crystals or as a white, crystalline
powder and is slightly soluble in water and freely soluble in alcohol. The phosphate and
sulfate salts of codeine occur as white, needle-shaped crystals or white, crystalline
powders. Why is it used? Codeine is most useful in the relief of mild to moderate pain. It
is also used as a cough remedy because it suppresses the part of the brain that triggers
coughing, and as an anti-diarrheal drug, because it slows down muscle contractions in the
intestinal wall. There are possible adverse effects. The most frequently observed adverse
reactions include lightheadedness, dizziness, sedation, nausea, vomiting, and sweating.
These effects seem to be more prominent in ambulatory patients and in those who are not
suffering severe pain. Other adverse reactions include the following: (1) Central Nervous
System- Euphoria, dysphoria, weakness, headache, insomnia, agitation, disorientation, and
visual disturbances. (2) Gastrointestinal- Dry mouth, anorexia, constipation, and biliary
tract spasm. (3) Cardiovascular- Flushing of the face, abnormally slow heartbeat,
faintness, and syncope. (4) Genitourinary- Urinary retention of hesitancy, anti-diuretic
effect.
(5) Allergic- skin rashes.
Most drug manufacturers list specific warnings to be aware of when taking codeine. (1) Codeine sulfate can produce drug dependence of the morphine type and therefore has the potential for being abused; psychic dependence, physical dependence, and tolerance may develop upon repeated administration of codeine. (2) Codeine may impair the mental and/or
physical abilities required for the performance of potentially hazardous tasks such as
driving a car or operating machinery. (3) Patients receiving other narcotic painkillers,
general anesthetics, tranquilizers, or other central nervous system depressants, including
alcohol with codeine may exhibit an additive central nervous system depression. Who
shouldn't take codeine? Pregnant women should not use codeine because safe use in pregnancy
has not been established. Children below the age of three shouldn't be given this drug because safety for that age group hasn't been established. Codeine should be given with caution to certain patients such as the elderly or debilitated, and those with prostatic hypertrophy or urethral stricture. Codeine can be taken as a tablet, liquid, or by injection. A prescription is
needed for codeine in the United States and it is available as a generic. The usual
antitussive oral dosage of codeine, codeine phosphate, or codeine sulfate for adults and
children 12 years of age or older is 10-20 mg every 4-6 hours, not to exceed 60 mg daily.
The usual antitussive dosage for children 3 to younger than 6 years of age is 1 mg/kg daily given in 4 equally divided doses every 4-6 hours. What are the signs and symptoms of
overdosage? Serious overdose with codeine is characterized by respiratory depression,
extreme somnolence progressing to stupor or coma, skeletal muscle flaccidity, cold and
clammy skin, and sometimes abnormally slow heartbeat and hypotension. In severe overdosage,
circulatory collapse, cardiac arrest and death may occur. Our investigation of codeine
progressed from knowing very little about it in the beginning to acquiring much information
on its adverse effects, its warnings, and its dosage administration.
f:\12000 essays\sciences (985)\Chemistry\Contamination of Road Salt.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IL 3; Experiment 1
October 31, 1996
The Study of Alkali Metal Contamination in Roadside Soil
Abstract
Six soil samples were taken from a roadside that was expected to exhibit characteristics of road salt contamination. This contamination is characterized by the presence of magnesium, calcium, and sodium. The relationship between alkali metal concentration and distance from the pavement was examined and determined to be nonexistent. Additionally, atomic absorption and atomic emission spectroscopy were compared, and atomic absorption was found to be 1.89 times as sensitive as atomic emission.
Introduction
A common technique in snow and ice removal on roadways is the application of magnesium, calcium, and sodium chloride salts to the surface of the road. When the ice melts it dissolves these salts and causes them to migrate into soil that is adjacent to the pavement. Over time, the accumulation of alkali metal salts can change the chemical profile of the soil, which can lead to detrimental biological effects. Flame atomic spectroscopy provides a technique that can quantify metal concentrations in the extracts of the soil samples and consequently examine the relationship between distance from the point of road salt application and alkali metal concentrations.
Experimental
Soil preparation: Six surface soil samples were collected at the intersection of Cold Spring Lane and the exit ramp of Interstate 83, in northwest Baltimore city. These samples were collected at distances from the roadway of 0m, 2m, 4m, 6m, 10m, and 20m. These samples were dried in a convection oven at 110°C for over 24 hours then crushed. Aliquots of approximately one gram were weighed and then extracted with 10.0 mL of 1M ammonium acetate. The extract was filtered with an inline filter disc with a pore size of 5mm and then diluted to 100.0 mL.
Instrumental: The extracts were analyzed for Ca, Na, and Mg using a Varian model AA-3 flame atomization spectrophotometer with a diffraction grating monochromator. Data were collected with a Houston Instrument chart recorder. An acetylene/air reducing flame was used for all determinations (10 psi acetylene/7 psi air). Two replicates of each sample were made and averaged for both AA and AE. The analysis was separated into two methods: atomic absorption (AA) and atomic emission (AE). The emission intensities and absorbances were determined from the measured peak heights obtained from the chart recordings.
Atomic Emission: Na and Ca concentrations in the soil were determined using AE. The spectrophotometer was calibrated using the standard series method for both elements. Regression analysis was performed on the calibration data to provide a functional relationship between emission intensity and concentration.
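The regression step amounts to an ordinary least-squares fit of signal against standard concentration. A minimal Python/NumPy sketch is given below; the standard readings in it are made up for illustration only, since the actual calibration data are not reproduced in this report:

import numpy as np

# Hypothetical standard series (ppm) and chart-recorder peak heights; the real
# calibration readings are not listed in this write-up.
conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
na_signal = np.array([0.9, 4.5, 7.1, 8.6, 9.3])   # illustrative Na emission readings
ca_signal = np.array([0.6, 1.8, 3.0, 4.2, 5.4])   # illustrative Ca emission readings

# Na emission curves over at higher concentration, so fit a quadratic (cf. eq 1).
a, b, c = np.polyfit(conc, na_signal, deg=2)

# Ca emission is linear over this range, so fit a straight line (cf. eq 2).
slope, intercept = np.polyfit(conc, ca_signal, deg=1)

print(a, b, c)
print(slope, intercept)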
Results and Conclusions:
Sodium: The atomic line used in the analysis for sodium was at 589.0 nm. The relationship between emission intensity and concentration was found to be quadratic, as depicted in the chart below. The equation that describes intensity (I) as a function of concentration (C) is as follows:
eq (1): I = (-0.0207±0.0004)C² + (0.814±0.0168)C + (0.894±0.0242)
The fact that the relationship is quadratic shows the effects of self-absorption at higher concentrations, which suggests that the linear dynamic range is smaller than 20 ppm.
Chart 1:
Calcium: The atomic line used in the analysis of calcium was at 422.6 nm. The relationship between emission intensity and concentration was found to be linear, as depicted in the chart below. The equation that describes intensity (I) as a function of concentration (C) is as follows:
eq (2): I = ((0.243±0.0117)C) + (0.570±0.0430)
Chart 2:
Atomic Absorption: Mg and Ca concentrations in the soil were determined using AA. The source used was a Varian multielement (Mg/Ca) hollow cathode lamp running at 25 milliamperes. The spectrophotometer was calibrated using the standard series method for both elements. Regression analysis was performed on the calibration data to provide a functional relationship between absorbance and concentration.
Calcium: The atomic line used in the analysis of calcium was at 422.6 nm. The relationship between absorbance and concentration was found to be linear, as depicted in the chart below. The equation that describes absorbance (A) as a function of concentration (C) is as follows:
eq (3): A = ((0.459±0.0152)C) + (0.100±0.0181)
Chart 3:
Magnesium: The atomic line used in the analysis of magnesium was at 285.2 nm. The relationship between absorbance and concentration was found to be linear, as depicted in the chart below. The equation that describes absorbance (A) as a function of concentration (C) is as follows:
eq (4): A = ((10.4±0.420)C) + (0.238±0.0478)
Chart 4:
Soil Samples: The soil extracts were analyzed for Na, Ca, and Mg at the aforementioned wavelengths. To determine the unknown concentrations of the soils from the known emission intensities or absorbances, rearrangement of equations 1-4 was required; each new equation is denoted by the suffix A following the original equation number.
·Na Emission:
eq (1): I = (-0.0207)C² + (0.814)C + (0.894)
eq (1A): C = [-0.814 + sqrt((0.814)² - 4(-0.0207)(0.894 - I))] / (2(-0.0207))
Note: This follows from the fact that equation 1 is a quadratic equation of the general form y = ax² + bx + c, with a ≠ 0, where a, b, and c are constants. At any point in the domain of x, y takes on a constant value and the following equation can be written: 0 = ax² + bx + (c - y). Let k = (c - y). The difference of two constants is itself a constant, thus 0 = ax² + bx + k, and the quadratic formula gives x = [-b ± sqrt(b² - 4ak)] / (2a). Only the solution obtained by adding the square root of the discriminant was used in subsequent calculations.
·Ca Emission:
eq (2): I = (0.243)C + (0.570)
eq (2A): C = (I - 0.570)/0.243
·Ca Absorption:
eq (3): A = (0.459)C + (0.100)
eq (3A): C = (A - 0.100)/0.459
·Mg Absorption:
eq (4): A = (10.4)C + (0.238)
eq (4A): C = (A - 0.238)/10.4
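As a check on how equations 1A-4A are applied, here is a minimal Python sketch (the function names are illustrative, not part of the report). It returns concentrations in the calibration units; the dilution and dry-sample-mass corrections needed to reach mg/kg of soil are applied separately:

import math

def na_conc_from_emission(I):
    # Invert eq 1 (quadratic): I = a*C^2 + b*C + c; keep the root obtained by
    # adding the square root of the discriminant, as noted above.
    a, b, c = -0.0207, 0.814, 0.894
    disc = b * b - 4.0 * a * (c - I)
    return (-b + math.sqrt(disc)) / (2.0 * a)

def ca_conc_from_emission(I):
    # Invert eq 2 (linear): Ca by atomic emission.
    return (I - 0.570) / 0.243

def ca_conc_from_absorbance(A):
    # Invert eq 3 (linear): Ca by atomic absorption.
    return (A - 0.100) / 0.459

def mg_conc_from_absorbance(A):
    # Invert eq 4 (linear): Mg by atomic absorption.
    return (A - 0.238) / 10.4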
Solutions of the previous equations are tabulated as follows:
Table 1:
Distance (m) Na Conc.(mg/kg) Mg Conc.(mg/kg) Ca Conc. by AA(mg/kg) Ca Conc. by AE(mg/kg)
0 427 17.7 344 627
2 536 50.6 1840 2520
4 448 80.5 1590 2340
6 166 47.1 1080 4070
10 337 47.2 1020 1720
20 62.4 76.4 1940 2070
It would appear that there is no relationship between alkali metal concentration and distance from the roadway at the particular location from which the samples were obtained. The following charts illustrate this graphically.
Atomic Emission vs. Atomic Absorption in calcium determination: There did not appear to be much agreement between AA and AE for the soil samples, as demonstrated in Table 2.
On average, the AA values were 88.1% lower than the AE values (a mean percent difference of -88.1%), with a standard deviation of 87.8% and a relative standard deviation of -99.7%.
Table 2:
Distance (m) Ca Conc. by AA(mg/kg) Ca Conc. by AE(mg/kg) % difference
0 344 627 -82.3
2 1840 2520 -37.0
4 1590 2340 -57.0
6 1080 4070 -277
10 1020 1720 -68.6
20 1940 2070 -6.7
Average N/A N/A -88.1
Std. Dev N/A N/A 87.8
%RSD N/A N/A -99.7%
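The percent-difference column in Table 2 is (AA - AE)/AA x 100. A minimal Python sketch that recomputes it, and the summary statistics, from the tabulated concentrations:

# Ca concentrations (mg/kg) by atomic absorption (AA) and atomic emission (AE), from Table 2.
aa = [344, 1840, 1590, 1080, 1020, 1940]
ae = [627, 2520, 2340, 4070, 1720, 2070]

pct = [100.0 * (a - e) / a for a, e in zip(aa, ae)]
mean = sum(pct) / len(pct)
# Dividing by n (rather than n - 1) is what reproduces the quoted 87.8 from the tabulated column.
std = (sum((x - mean) ** 2 for x in pct) / len(pct)) ** 0.5

print([round(x, 1) for x in pct])
# -> [-82.3, -37.0, -47.2, -276.9, -68.6, -6.7]
#    (recomputing the 4 m row from the rounded concentrations gives about -47, versus the -57.0 tabulated)
print(round(mean, 1), round(std, 1))
# -> roughly -86 and 88 (the report quotes -88.1 and 87.8, computed from its own tabulated column)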
The sensitivities of the two methods were compared using the parameter defined as calibration sensitivity, which is the slope of the calibration curve. Analytical sensitivity was not determined because it is concentration dependent and the signal standard deviations were often zero, owing to the fact that only two replicates per standard were made. The ratio of the slopes of the two calcium calibration curves (AA:AE, 0.459/0.243) is 1.89, indicating that atomic absorption is almost twice as sensitive as atomic emission.
In conclusion, the dry-weight concentrations of magnesium, calcium, and sodium in roadside soil samples were determined by atomic spectroscopy, and no relationship between distance from the road and concentration was observed. Atomic absorption spectroscopy was compared to atomic emission spectroscopy, and emission spectroscopy was found to be 0.529 times as sensitive as atomic absorption. When the actual concentrations determined by the two techniques were compared, the AA values were, on average, 88% lower. This could be a result of matrix effects or spectral interferences in the soil extracts used for AE.
f:\12000 essays\sciences (985)\Chemistry\cool.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Radioactive wastes must, for the protection of mankind, be stored or disposed of in such a manner that isolation from the biosphere is assured until they have decayed to innocuous levels. If this is not done, the world could face severe threats to the living species on this planet.
Some atoms can disintegrate spontaneously. As they do, they emit ionizing radiation. Atoms having this property are called radioactive. By far the greatest number of uses for radioactivity in Canada relate not to the fission, but to the decay of radioactive materials - radioisotopes. These are unstable atoms that emit energy for a period of time that varies with the isotope. During this active period, while the atoms are 'decaying' to a stable state their energies can be used according to the kind of energy they emit.
Since the mid-1900s radioactive wastes have been stored in various ways, but in recent years new ways of disposing of and storing these wastes have been developed so that they may no longer be harmful. A very advantageous way of storing radioactive wastes is a process called 'vitrification'.
Vitrification is a semi-continuous process that enables the following operations to be carried out with the same equipment: evaporation of the waste solution mixed with the additives necessary for the production of borosilicate glass (a borosilicate is any of several salts derived from both boric acid and silicic acid, found in certain minerals such as tourmaline), calcination, and elaboration of the glass. These operations are
carried out in a metallic pot that is heated in an induction
furnace. The vitrification of one load of wastes comprises the following stages. The first step is 'Feeding'. In this step the vitrification pot receives a constant flow of the mixture of wastes and additives until it is 80% full of calcine. The feeding rate and heating power are adjusted so that an aqueous phase of several litres is permanently maintained at the surface of the pot. The second step is 'Calcination and glass elaboration'. In this step, when the pot is practically full of calcine, the temperature is progressively increased to 1100 to 1500 C and then maintained for several hours to allow the glass to form. The third step is 'Glass casting'. The glass is cast into a special container. Heating of the output of the vitrification pot causes the glass plug to melt, allowing the glass to flow into containers which are then transferred into storage. Although part of the waste is transformed into a solid product, there is still treatment of gaseous and liquid wastes. The gases that escape from the pot during feeding and calcination are collected and sent to ruthenium filters, condensers, and scrubbing columns. The ruthenium filters consist of a bed of
glass pellets coated with ferrous oxide and maintained at a temperature of 500 C. In the treatment of liquid wastes, the condensates (products of condensation) collected contain about 15% ruthenium. This is
then concentrated in an evaporator where nitric acid is destroyed by formaldehyde so as to maintain low acidity. The concentration is then neutralized and enters the vitrification pot.
Once the vitrification process is finished, the containers are stored in a storage pit. This pit has been designed so that the number of containers that may be stored is equivalent to nine years of production. Powerful ventilators provide air circulation to cool down the glass.
The glass produced has the advantage of being stored as a solid rather than a liquid. The advantages of the solid are its almost complete insolubility, chemical inertness, absence of volatile products, and good radiation resistance. The ruthenium that escapes is absorbed by a filter. The amount of ruthenium likely to be released into the environment is minimal.
Another method that is being used today to get rid of radioactive waste is the 'placement and self-processing of radioactive wastes in deep underground cavities'. This is the disposal of toxic wastes by incorporating them into molten silicate rock with low permeability. By this method, liquid
wastes are injected into a deep underground cavity with mineral treatment and allowed to self-boil. The resulting
steam is processed at ground level and recycled in a closed system. When waste addition is terminated, the chimney is allowed to boil dry. The heat generated by the radioactive wastes then melts the surrounding rock, thus dissolving the wastes. When waste and water addition stop, the cavity temperature rises to the melting point of the rock. As the molten rock mass increases in size, so does the surface area. This results in a higher rate of conductive heat loss to the surrounding rock. Concurrently, the heat production rate of the radioactivity diminishes because of decay. When the heat loss rate exceeds that of input, the molten rock will begin to cool and solidify. Finally the rock refreezes, trapping the radioactivity in an insoluble rock matrix deep underground. The heat surrounding the radioactivity would prevent the intrusion of ground water. After the steam and vapour are no longer released, the outlet hole would be sealed. To go a little deeper into this concept, the treatment of the wastes before injection is very important. To avoid breakdown of the rock that constitutes the formation, the acidity of the wastes has to be reduced. It has been established experimentally that pH values of 6.5 to 9.5 are the best for all receiving formations. With such a pH range, breakdown of the formation
rock and dissociation of the formation water are avoided. The stability of waste containing metal cations which become hydrolysed in acid can be guaranteed only by complexing agents which form 'water-soluble complexes' with cations in the
relevant pH range. The importance of complexing in the preparation of wastes increases because raising the waste solution pH to neutrality or slight alkalinity results in increased sorption by the formation rock of radioisotopes present in the form of free cations. The incorporation of such cations causes a pronounced change in their distribution between the liquid and solid phases and weakens the bonds between isotopes and formation rock. Preparation of the formation is equally important. To reduce the possibility of chemical interaction between the waste and the formation, the waste is first flushed with acid solutions. This operation removes the principal minerals likely to become involved in exchange reactions and the soluble rock particles, thereby creating a porous zone capable of accommodating the waste. In this case the required acidity of the flushing solution is established experimentally, while the required amount of radial dispersion is determined using the formula:
R = Qt / (2mn)
where:
R is the waste dispersion radius (metres)
Q is the flow rate (m3/day)
t is the solution pumping time (days)
m is the effective thickness of the formation (metres)
n is the effective porosity of the formation (%)
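Taken at face value, the dispersion radius follows directly from the four quantities just listed. A minimal Python sketch that simply evaluates the formula as printed; the input values are made up for illustration, and porosity is entered here as a fraction rather than a percentage:

def dispersion_radius(q, t, m, n):
    # R = Q*t / (2*m*n), exactly as the formula is printed above.
    # q: flow rate, t: pumping time in days, m: effective formation thickness in metres,
    # n: effective porosity as a fraction (e.g. 0.20 for 20%).
    return q * t / (2.0 * m * n)

# Illustrative values: injection at 50 units/day for 30 days into a formation
# 10 m thick with 20% effective porosity.
print(dispersion_radius(q=50.0, t=30.0, m=10.0, n=0.20))   # -> 375.0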
In this concept, the storage and processing are minimized. There is no surface storage of wastes required. The permanent binding of radioactive wastes in rock matrix gives assurance of its permanent elimination in the environment.
This is a method of disposal safe from the effects of earthquakes, floods, or sabotage.
With the development of new ion exchangers and the advances made in ion technology, the field of application of these materials in waste treatment continues to grow. Decontamination factors achieved in ion exchange treatment of waste solutions vary with the type and composition of the waste stream, the radionuclides in the solution and the type of exchanger.
Waste solution to be processed by ion exchange should have a low suspended solids concentration, less than 4 ppm, since this material will interfere with the process by coating the exchanger surface. Generally the waste solutions should contain less than 2500 mg/l total solids, since most of the dissolved solids would be ionized and would compete with the radionuclides for the exchange sites. Where the waste can meet these specifications, two principal techniques are used: batch operation and column operation.
The batch operation consists of placing a given quantity
of waste solution and a predetermined amount of exchanger in a vessel, mixing them well and permitting them to stay in contact until equilibrium is reached. The solution is then filtered. The extent of the exchange is limited by the selectivity of the resin. Therefore, unless the selectivity for the radioactive ion is very favourable, the efficiency of
removal will be low.
Column operation is essentially a large number of batch operations in series, which makes it more practical in many cases. In many waste solutions, the radioactive ions are cations, and a single column or series of columns of cation exchanger will provide decontamination. High-capacity organic resins are often used because of their good flow rate and rapid rate of exchange.
Monobed or mixed bed columns contain cation and anion exchangers in the same vessel. Synthetic organic resins of the strong acid and strong base type are usually used. During operation of mixed bed columns, cation and anion exchangers are mixed to ensure that the acids formed after contact with the H-form cation resin are immediately neutralized by the OH-form anion resin. The monobed or mixed bed systems are normally more economical for processing waste solutions.
Against a background of growing concern over the exposure of the population, or any portion of it, to any level of radiation, however small, the methods which have been successfully used in the past to dispose of radioactive wastes must be re-examined. There are two commonly used methods: the storage of highly active liquid wastes and the disposal of low-activity liquid wastes to a natural environment such as the sea, a river, or the ground. In the case of the storage of highly active wastes, no absolute guarantee can ever be given, because of possible vessel deterioration or a catastrophe which would cause a release of radioactivity. The only alternative to dilution and dispersion is concentration and storage, and this is implied for the low-activity wastes currently disposed of into the environment. The alternative may be to evaporate off the bulk of the waste to obtain a small concentrated volume; the aim is to develop more efficient types of evaporators. At the same time, the decontamination factors obtained in evaporation must be high to ensure that the activity of the condensate is negligible, though there remains the problem of accidental dispersion. Much current effort in many countries is directed at establishing ultimate disposal methods. These are defined as those which fix the fission-product activity in a non-leachable solid state, so that general dispersion can never occur. The most promising approaches in the near future are absorption on montmorillonite clay (natural clays that have a good capacity for chemical exchange of cations and can store radioactive wastes), fused salt calcination, which will neutralize the wastes, and high-temperature processing. Even though man has made many breakthroughs in the processing, storage, and disintegration of radioactive wastes, there is still much work ahead to render the wastes absolutely harmless.
f:\12000 essays\sciences (985)\Chemistry\Copper.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Blake Adams Period 3
Grade 8 2/5/97
Copper Report
Copper is a mineral. It is not a plant or an animal. Copper is a metallic element. It can never be broken down into different substances by normal chemical means. Copper was one of the first metals known to humans. People liked it because in its native condition, it could easily be beaten into weapons or tools. Copper has been one of the most useful metals for over 5000 years. Copper was probably used around 8000 B.C. by people living along the Tigris and Euphrates rivers. In 6000 B.C., Egyptians learned how to hammer copper into things they wanted. Around 3500 B.C., people first learned how to melt copper with tin to make bronze. So the period between 3000 B.C. and 1100 B.C. became known as the Bronze Age.
Today, some of the leading states in the copper industry are Arizona with 747,000 short tons, Utah with 187,000 short tons, and New Mexico with 161,000 short tons. Some of the leading countries are Chile with 1,422,000 short tons, the United States with 1,203,000 short tons, the Soviet Union with 650,000 short tons, and Zambia with 596,000 short tons.
When copper is being mined, both native copper and copper ore are usually found. The highest grade of copper ore is pale silvery gray. Miners always used to be in danger in copper mines. Today, we have reduced a fair amount of these hazards. Miners wear hats made of iron or very hard
plastic. This is to protect them from falling rocks. Lamps are also attached to
these helmets in case some of the lighting in the mine goes out leaving a
miner stranded in the dark. One of the biggest problems with mining is that in
some places dangerous gases may exist, like carbon monoxide. In the past we had very cruel and inhumane ways to detect harmful gases. One of these
ways was the use of canaries. Miners would let them fly into a part of the
mine where a poison gas was suspected. If there was a harmful gas, the bird
would fall over dead at the first scent of the gas. Today, we have better ways
to detect gases without having animals die. We now have detection machines
in all parts of mines. Mines also have top of the line fire alarms and water
systems. If a flammable gas ignites, like sulfur, the fire may not die for years,
which results in closing the mine. Another problem miners complain about is the rats. Mines will often have mine cats that hunt out the rats. These cats are
well fed and petted by most of the miners.
Most copper is found in seven ores. That means it's mixed
in with other metals like lead, zinc, gold, cobalt, bismuth, platinum, and
nickel. These ores will usually have only about 4% pure copper in them
though. Sometimes miners may only find 2%. One of the things that makes copper such a popular metal is its malleability, which is how easily it bends. Copper is highly malleable and won't crack when hammered or stamped. Ductility is
also a good property and is the ability to be molded or bent into a shape.
Copper can be pulled into very thin wire. For example, if you took a copper
bar, 4 inches (10 centimeters) square, you could draw it into wire thinner than a human hair. One of the most amazing things about copper is its resistance to
corrosion. Copper will not rust. However, when the air grows damp, copper
will go from reddish-orange to reddish brown. After being in damp air for
long periods of time, a green film will coat the copper, called patina, which
will protect it from further corrosion.
Since copper is one of the most widely used metals in the
world we use it for a lot of things. Copper gives us water heaters, boilers and
cooking utensils. It is used for outdoor power lines, cables, lamp cords, and house wiring. Electrical machinery like generators, motors, controllers, signaling devices, electromagnets, and communication devices all use
copper.
f:\12000 essays\sciences (985)\Chemistry\CostChem Analysis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Performance:
- Loading: Manual
- Precision: None
- Speed: Dependent upon user.
Environment:
- Temperature Range - Appropriate user working environment.
- Pressure Range - Appropriate user working environment
- Humidity - Appropriate user working environment
- Shock Loading: High resistance due to cast-iron material.
- Dirt - The screw and handle must be properly lubricated.
- Corrosion - Rust.
- Noise Level - Dependent upon user.
- Insects - Not applicable.
- Vibration: Dependent upon usage (cutting, drilling, pounding).
- Person type: Average individual
Service Life:
Service life of the bench vise should be quite long, due to the durable product at hand. Product life-span should be a minimum of 5 years with a noticeable turnover rate afterwards.
Maintenance:
The product is maintenance free; the only alternatives are purchasing a new item to replace broken parts or returning defective parts under warranty. This provides a more economical solution for the end user than the purchase of a new product. This type of resolution is not easily completed and must be initiated by the end user. Profitability on this basis would be at a bare minimum, with only the customer's satisfaction in mind.
Target Costs:
Original cost of the bench vise is $14.99 retail and $8.00 manufacturing cost. Initial startup costs would include machines for producing casted parts, grinding and polishing the anvil, and painting the parts.
Competition:
The competition has comparable initial products available, but after our redesigning process our product will be of higher quality. NOTE: We will provide detailed reports of other products for the future presentation.
Shipping:
In bulk, by land and sea, directly to the company's warehouse in enormous shipping quantities. Distribution will be handled through an external company; this will allow us not only to focus on our product alone, but (most likely) to reduce our overall overhead, since we manufacture only one product.
Product Volume (Quantity):
Projected annual sales: 10,000,000 products (worldwide dist.)
Method of construction: Casting, then minimal manual assembly.
Retooling efforts would be very minimal with only the changing of the dies for each part within the product.
Packing:
Placement into a sturdy box capable of handling the weight of the product. Due to the resistance of the product packaging can be at this minimum level.
Manufacturing Facility:
We are a startup company with very minimal, if any, current production levels. We will need to build a complete manufacturing line for our new product.
Size:
Various sizes of the bench vise will be available to apply to varying situations and needs (design constraints, work environment, etc.). Due to the work-environment nature of the product, portability will be held to a minimum.
Weight:
The weight factor will be tightly integrated into the design constraints related to clamping and external forces. Shipping costs will, on the whole, be largely unaffected because of the enormous quantities shipped per delivery.
Aesthetics and Finish:
The current product has a more "square" appearance; we would like to evolve the existing competitor product so that it has more rounded features. This will give it a "new age" appearance.
Materials:
None.
Product Life Span:
Product lifespan should be a minimum of 5 years with a noticeable turnover rate afterwards.
Standards, Specifications, and Legal Aspects:
We will tailor current acceptable design standards to meet our product specifications. Current legal ramifications of design standards and of our additional ones should not result in high liability suits.
Ergonomics:
The ergonomic factor of this product will be primarily handled by the end-user (workplace attributes of noise level, open area, reachability, etc.). We will take into account the factors of applied force and leverage to arrive at an acceptable clamping force.
Customer:
The customer may have several current ties with certain product lines (Sears Craftsman products, etc). We will have to overcome this disadvantage by attaining a higher level of quality over our competitors.
Quality & Reliability:
Due to the quick production scheme required, a larger number of defective products will be produced, but as long as we keep tolerance levels tight and continue with SPC (statistical process control), production levels should be quite high.
Shelf Life:
For our sake, unlimited (Note: item is corrosion resistant).
Processes:
None.
Timescales:
An acceptable design startup --> launch date would be approx. 6 months. This would include assembly-line setup and implementation of new design constraints. One note: the initial time spent on design and setup should be taken very seriously, as this will lead to savings of time and money in the future by avoiding any large re-designing of the process or parts. SPC should be taken into account at this stage.
Testing:
Testing should be completed by the end user. This is going to give us the widest range of environmental settings and a variance of customer complaints / compliments. The testing of the product will allow us to continue with the above SPC process.
Safety:
1. Properly secured product.
2. Keep fingers away from clamping jaws.
3. High stress / force factors may be involved with use.
Company Constraints
Startup project, the facilities / personnel are added as needed.
Market Constraints:
Comparative prices / functionality with competitive products.
Patents, Literature, and Product Data:
[ Research needed ]
Political and Social Implications:
None known.
Disposal:
Re-melting of product parts, when their use is no longer desired or available, will provide an afterlife for the material.
f:\12000 essays\sciences (985)\Chemistry\Dépistage des maladies thyroïdiennes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS
I. INTRODUCTION
A. PHYSIOLOGICAL ROLE OF THE THYROID HORMONES
B. BIOSYNTHESIS OF THE THYROID HORMONES
C. DIAGNOSTIC METHODS
D. SIGNS AND SYMPTOMS
E. THYROID TESTS
F. MORPHOLOGICAL TESTS OF THE THYROID GLAND
G. TREATMENTS
II. CONCLUSION
III. REFERENCES
INTRODUCTION
The thyroid gland plays an essential role in the control of overall metabolism, and in particular that of carbohydrates. The gland is located at the front of the neck, just below the larynx. Arranged in the form of sacs, the cells of the thyroid secrete several hormones, principally thyroxine and calcitonin. The thyroid synthesizes these hormones and stores them in the colloid. Normally, the thyroid releases these hormones slowly into the bloodstream, or it holds them in reserve for at least 100 days.
Physiological role of the thyroid hormones
The principal functions of the thyroid hormones in humans are protein synthesis and energy metabolism. These hormones are, however, also involved in several other physiological activities, such as the following. They increase lipolysis while lowering a person's cholesterol level. They promote growth by acting on the chondrocytes located in bone. They take part in thermoregulation and can accelerate the heart rate. They speed up the intestinal absorption of carbohydrates while increasing carbohydrate catabolism (glycogenolysis). They are involved in the relaxation phase of muscle contraction. Finally, they are able to increase diuresis and the urinary and fecal elimination of calcium (Matte, R., Bélanger, R., 1985).
Biosynthesis of the thyroid hormones
The main control over the concentration of thyroxine in the blood is exerted by thyroid-stimulating hormone (TSH), which comes from the adenohypophysis. The secretion of TSH is thus governed by negative feedback driven by the concentration of thyroxine in the blood. A second system is managed by a neurohormone called thyrotropin-releasing hormone (TRH). As soon as blood thyroxine falls, secretion of TSH and TRH follows. Once TSH reaches the thyroid gland, it brings about a release of thyroid hormones. (Matte, R., Bélanger, R., 1985).
In general, the synthesis of the thyroid hormones takes place in four steps. First, iodine from the foods and liquids we ingest is taken up by the thyroid gland. Second, the iodine is oxidized and organified so that it can be incorporated with thyroglobulin to form monoiodotyrosine (MIT) and diiodotyrosine (DIT). Third, these iodotyrosines are oxidized and coupled to form T4 and T3, which can then be stored in the colloid before their secretion. The secretion of these hormones constitutes the fourth step. (Matte, R., Bélanger, R., 1985).
To support this hormone synthesis, humans need a minimum intake of about 50 to 200 ug/day of iodine. Iodine from foods such as seafood and from beverages is absorbed by the intestine in the form of iodides. "Of the 25 mg of iodine in the human body, 30 to 50% is found in the thyroid gland (a concentration nearly 1300 times higher than that of the other tissues), that is, 9 to 12 mg" (Idleman, S., 1990, p.75). In the blood, the iodine concentration is of the order of 6 to 12 ug/100 ml; of that, 1 ug/ml is organic iodine, 5 to 7 ug/ml corresponds to MIT and DIT, and 95% is associated with T4 bound to the alpha-globulin TBG (thyroxine-binding globulin).
Diagnostic methods
Understanding the systems that control the secretion of the thyroid hormones makes it easier to establish a diagnosis of hyposecretion (hypothyroidism) or hypersecretion (hyperthyroidism) of this gland. The gland also exerts a decisive influence on growth, which is most evident when thyroid insufficiency appears early in life. In addition to the arrest of bodily growth, malformations of the face and of the brain cells are also observed, a condition called cretinism. This disease is often characterized by mental retardation in the individual. Although many complications are associated with these disorders, we will now examine in more detail the signs and symptoms related to hypothyroidism and hyperthyroidism. (Matte, R., Bélanger, R., 1985).
Signs and symptoms
The characteristic signs and symptoms of hypothyroidism are the following: fatigue, dry skin, constipation, menstrual disturbances, muscle cramps, weight gain, bradycardia, a dilated and flaccid heart, infertility, galactorrhea, carpal tunnel syndrome, apathy and anemia. On the other side, the signs and symptoms of hyperthyroidism are the following: tachycardia, fatigue, nervousness, tremors, osteoporosis, goitre, heat intolerance, ophthalmopathy, polyphagia, psychosis, onycholysis, cardiac decompensation and hepatosplenomegaly. (Idleman, S., 1990).
In the clinic, thyroid diseases sometimes present with principal signs and symptoms that are generally detected during a medical examination. The discovery of an abnormal nodule during a clinical examination or through a biochemical test is a common example. The physician therefore has two means of detecting disorders of the thyroid gland: the clinical examination and the thyroid tests.
To determine the morphology of the gland, the physician relies on inspection and palpation. These methods make it possible to define the shape, the volume and the consistency of the gland in question. (Idleman, S., 1990, p.75).
Thyroid tests
The thyroid tests make it possible to confirm the presence of hyperthyroidism or hypothyroidism in the client. Several tests evaluate the function and the morphology of the thyroid gland. The tests that assess function are: the plasma T4, T3 and TSH assays, the measurement of the degree of saturation of TBG, and the free T4 assay (FTI or FT4I). In a normal person the serum levels of T4, T3, TBG, TSH and free T4 generally fall in the following ranges: T4, 4.5 to 11.5 ug/100 ml; T3, 90 to 200 ng/100 ml; TBG, 25 to 35%; TSH, 0 to 6 uU/ml; free T4, 0.7 to 1.8 ng/100 ml. (Matte, R., Bélanger, R., 1985).
In the measurement of the degree of saturation of TBG, radioactive T3 is added to a sample of the client's blood. The serum TBG and a resin are placed in competition for the radioactive T3. The percentage taken up by the resin indicates the number of free sites on the TBG. In other words, a high radioactive-T3 uptake indicates either hyperthyroidism or a decrease in TBG, while a low radioactive-T3 uptake indicates either hypothyroidism or an elevation of TBG.
To determine a client's free T4 index (FT4I), the following formula is generally used (T U = resin uptake of radioactive T3):
FT4I = T4 x T3 U (patient) / T U (normal)
Hyperthyroidism: T4 elevated
Hypothyroidism: T4 lowered
Birth-control pill: T4 elevated
Nephrotic syndrome: T4 lowered
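A purely illustrative sketch of this index calculation, written out with hypothetical numbers rather than values from this report:

    # Hypothetical illustration of the free T4 index (FT4I) defined above:
    #   FT4I = total T4 x (patient T3 resin uptake / normal T3 resin uptake)
    # All numbers are made-up examples, not patient data.

    def ft4i(total_t4_ug_per_100ml, uptake_patient_pct, uptake_normal_pct):
        return total_t4_ug_per_100ml * (uptake_patient_pct / uptake_normal_pct)

    print(ft4i(8.0, 30.0, 30.0))   # -> 8.0, a euthyroid-looking index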
Another test is the I-131 uptake test. This test checks the uptake behaviour of the thyroid gland when a tracer dose of I-131 is administered to a client. Simply put, an elevated uptake indicates hyperthyroidism, whereas a reduced uptake indicates hypothyroidism. (Matte, R., Bélanger, R., 1985).
The TRH test is another measurement that can detect a thyroid disease. Intravenous administration of TRH normally produces an increase in serum TSH. In hyperthyroidism this response is absent, whereas in hypothyroidism it is exaggerated.
Measuring the client's cholesterol level is also useful in evaluating the kind of thyroid disease: it is generally elevated in hypothyroidism and reduced in hyperthyroidism. Measuring certain enzymes such as CPK and alkaline phosphatase also helps in formulating a diagnosis. CPK is often elevated in hypothyroidism and vice versa. Alkaline phosphatase is often elevated in hyperthyroidism and reduced in hypothyroidism. In addition, the relaxation time of the tendon reflexes is prolonged mainly in hypothyroidism, but this finding also occurs in other conditions such as diabetes, edema and vascular disorders, and does not prove a thyroid disease with certainty. (Matte, R., Bélanger, R., 1985).
Morphological tests of the thyroid gland
Several procedures allow the morphology of the thyroid gland to be evaluated. Scintigraphic mapping is a very popular test that establishes characteristics of the gland such as goitre, metastases and cancers. Ultrasound makes it possible to identify masses such as cysts; this method is also widely used because it is simple and non-invasive. Needle aspiration and biopsy are very effective at distinguishing between a solid mass and a cyst. Aspiration of a cyst permits an evaluation made by interpreting the slides in the laboratory. The level of antithyroid antibodies can also be measured in cases of Hashimoto's thyroiditis and in Graves' disease. For thyroid cancers, however, thyroglobulin is assayed once the thyroid has been destroyed. (Matte, R., Bélanger, R., 1985).
Treatments
Three general forms of treatment are available to clients affected by hyperthyroidism. First, these people can undergo a surgical thyroidectomy. Second, the disease can be treated by administering radioactive iodine to the patient each day until some of the thyroid tissue is destroyed. Third, antithyroid drugs such as propylthiouracil or methimazole, which inhibit the production of the thyroid hormones, can be administered.
Cases of hypothyroidism are generally treated by oral administration of thyroid hormone. The most widely used thyroid hormone is synthetic T4, called levothyroxine (SYNTHROID, LEVOTHROID, LEVOXIL), at about 0.15 mg/day. (Matte, R., Bélanger, R., 1985).
CONCLUSION
Patients affected by hypothyroidism or hyperthyroidism must generally undergo numerous diagnostic tests to establish the causes of the disease and the appropriate medical or surgical treatment. Several medications and interventions reduce the recurrence rate of these diseases; it is nonetheless essential to report any signs and symptoms before the condition progresses in a harmful way.
REFERENCES
Idleman, Simon, (1990). Endocrinologie: Fondements physiologiques. France: Presses Universitaires de Grenoble.
Matte, R., Bélanger, R., (1985). Endocrinologie. Montréal: Les Presses Universitaires de Montréal.
Rosenzweig, M. R., Leiman, A. L., (1991). Psychophysiologie: 2ième Édition. Québec: Décarie Éditions Inc.
UNIVERSITÉ LAURENTIENNE
Chemistry Project:
SCREENING FOR THYROID DISEASES
By: Luc Gervais
Presented to Dr. Vasu Apanna
For the course:
CHMI 2220 FA
Date submitted:
March 4, 1997
f:\12000 essays\sciences (985)\Chemistry\Decomposition.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Decomposition
12/09/96
Purpose:
In this lab we will observe the products of decomposition of potassium perchlorate (KClO4). We will then predict from our results the correct chemical reaction equation.
Procedure:
1. Weigh out about 4.0g of KClO4 in a test tube. Record the accurate weight below.
Product Weight Before Weight After
Mass of Test tube + KClO4 41.5g 39.8g
Mass of Test tube 37.5g 37.5g
Mass of KClO4 4.0g 2.3g
2. Set up the apparatus shown below.
3. Gently heat the test tube containing the potassium perchlorate. Gas should begin to collect in the collection bottle. Record all observation.
4. Once the reaction is complete and no more gas is given off, allow the test tube to cool. While the test tube is cooling, test the gas in the collection bottle with a glowing splint.
Caution: Do not leave the rubber tubing down in the water trough during cooling or you will experience back-up.
5. After the test tube has cooled weigh it on a balance. What is the change in mass?
Observations:
Oxygen flowed from the test tube into the bottle of water, forcing the water out.
Burning ember re-ignited when placed into the bottle of O2.
Calculations:
1. The number of moles of KClO4 that we began with is 0.03 moles: 4.0 g ÷ 138.6 g/mol = 0.03 moles.
2. The number of moles of O2 that were present in our sample of KClO4 was 0.06 moles: 1.9 g ÷ 32 g/mol = 0.06 moles.
3. The number of moles of O2 lost is 0.05 moles: 1.7 g ÷ 32 g/mol = 0.05 moles.
4. KClO4 → KCl + 2O2
   4.0 g ÷ 138.6 g/mol = 0.03 moles KClO4 × (2 mol O2 per mol KClO4) = 0.06 moles O2 × 32 g/mol = 1.9 g
5. Percent yield: 89% (O2 lost, 1.7 g ÷ O2 expected, 1.9 g)
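For reference, the same stoichiometry can be reworked in a few lines of code; the slight difference from the 89% figure comes only from rounding 0.029 mol up to 0.03 mol in the hand calculation.

    # Redoing the calculations above for KClO4 -> KCl + 2 O2 with the recorded masses.
    M_KCLO4 = 138.6          # g/mol, molar mass used above
    M_O2 = 32.0              # g/mol

    mass_kclo4 = 4.0         # g weighed out
    mass_lost = 41.5 - 39.8  # g of O2 driven off (1.7 g)

    moles_kclo4 = mass_kclo4 / M_KCLO4       # about 0.029 mol
    o2_expected = moles_kclo4 * 2 * M_O2     # about 1.85 g
    percent_yield = 100 * mass_lost / o2_expected

    print(round(moles_kclo4, 3), round(o2_expected, 2), round(percent_yield, 1))
    # -> 0.029 1.85 92.0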
f:\12000 essays\sciences (985)\Chemistry\Design of Structures in respect to heat efficiency.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
OUTLINE
Introduction
Problem
What materials are better for insulation?
What designs are better for insulation?
Purpose
Background
Organizations Researching Problem
Materials
lustrous
dull
dark
light
Design
Windows
Enclosed
Hypothesis
Materials
Procedure
Summary
Materials that Work Best in Heat Efficiency
Designs that Work Best in Heat Efficiency
References
Introduction
Heat efficiency in any architectural design is always a topic that must be addressed. Without this key element, structures would be totally inefficient to heat, not to mention extremely expensive. In order to design a heat efficient building you must first understand where heat is lost or where cold air enters the structure in question. My research will first be to determine what materials are best for insulation and which materials are not. Second, I will try to find where heat is most likely to escape in a structure by researching efficient designs. This, in turn, will provide information to where it is necessary to add more insulation to a particular structure.
Background
It has been proven time and time again that solar energy plays a crucial part in the heating of any structure regardless of its design. The intensity of solar energy is almost an exact constant, varying in energy by only about 0.2% every 30 years. This intensity on average is about 1.37 x 10^6 ergs per second per cm2, or about 2 calories per minute per cm2. This intensity can of course vary when the solar photons interact with different conditions in the atmosphere. This energy from the sun can be converted so that it is able to heat a structure in many different ways. During my experiment, though, I will only be testing the effects on a structure's heat related to passive solar energy as illustrated in figure 1. Passive solar energy is where the sun's heat is able to heat a structure without the use of specialized equipment such as a photovoltaic cell or other direct solar energy device.
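A quick check that the two quoted intensity figures are consistent with each other (using 1 calorie = 4.184 x 10^7 ergs):

    # Convert 1.37e6 erg per second per cm^2 into calories per minute per cm^2.
    ERG_PER_CAL = 4.184e7

    intensity_erg = 1.37e6                      # erg / (s * cm^2), value from the text
    intensity_cal_min = intensity_erg * 60 / ERG_PER_CAL

    print(round(intensity_cal_min, 2))          # -> 1.96, i.e. "about 2"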
Many organizations in such countries as Australia and England are conducting nationwide heat energy efficiency ratings that can be used as references for engineers and architects. These ratings can inform a designer as to which designs work better and which do not. The program in Australia is titled the "Nationwide House Energy Rating Scheme" (NatHERS) and became available to all designers who wished to use it early in 1995. A parallel program to the NatHERS is New Zealand's "Window Energy Rating Scheme" (WERS), which allows homeowners to make better decisions about the selection and design of window systems from an energy perspective. The WERS rating system will not be available, however, until late 1996. Great Britain and many other nations have just recently begun developing their own energy efficiency rating systems, which will not become available to the public until the early 21st century. So far, though, each of the research organizations has been making its own discoveries, which have already begun to affect the architectural design of structures.
Often a structure's ability to collect heat is directly associated with the materials it was built with. Depending on the material itself, it can either hinder or help the structure's ability to collect heat. A lustrous material such as a mirror, for example, would reflect light but retain the heated photons. This effect would heat the structure extremely well because of the lustrous surface's ability to attract light towards it and collect its heat. A dull material like natural wood has proven not to attract much light nor to collect a substantial amount of heat. Plain wood would not be a wise choice if the material used in the structure was going to be how the structure was heated. Most often structures are built with internal systems that produce heat. The color of the material used to build a structure is also a key element. For the most part, the darker the color the more heat it attracts and the more heat it can store. A structure that is entirely black in color will be far easier to heat than one of any other color. Exactly opposite to dark colors are light colors, which do not attract much heat at all and are not efficient at storing heat. The best combination is most often a dark, lustrous material if heat is the desired effect. A completely wrong choice would be materials that are dull and light in color, unless cooling was the purpose.
The design of a structure can contain an infinite number of different elements, each either helping to make the structure efficient or hindering its ability. In my experimentation I am only going to focus on two design elements. These two elements pertain to whether the structure has windows or whether it is enclosed. The false-color image in figure 2 shows heat emanating from a house in the form of infrared radiation. The black regions radiate away the least amount of heat, while the white regions, which coincide with the house's windows, radiate away the most heat (NASA, 1991). Because solar energy cannot be collected in any structure during the night, all of my experimentation will be conducted around noon in order to create a constant. Figure 2 shows heat escaping mainly through the windows but does not show that during the day windows are the most significant passive heat intakes for a structure. Windows are, however, a disadvantage if the structure is being placed in a highly shaded area such as a forest. In this case heat would have to be collected in some other way. A greenhouse is probably one of the best examples of passive heating. Without the aid of any other device, greenhouses are able to maintain a high temperature. As stated above, windows are also responsible for most heat loss in a directly heated home. In fact, windows account for 41% of heat loss in a typical US home. Double-pane windows are one way to decrease heat loss from a structure but do not solve the problem entirely. If the structure is to be built in an area where lack of sunlight is not a problem, then a structure with many windows should not be a problem for heat loss. An enclosed structure with no windows should in theory cut down on at least 41% of heat loss. This would also cut out a large amount of the heat gained during the day by passive energy through windows, which would cause heating efficiency to decrease. Even though the efficiency would decrease, the loss would probably still be less than 41%, making the enclosed design the better choice. Many designers do not choose to do this, however, because of the lack of view that not having any windows would cause. Also, windows serve as decoration in many designs. Research has shown that the best compromise is to have double-pane windows evenly placed throughout the structure in order to prevent one particular area from becoming too cold or too hot. Insulation in the walls, roof and floor is also a compromise. Too little insulation allows an excessive amount of heat to escape from a structure, while too much insulation allows almost no heat to enter. Most structures are directly heated from the inside, allowing more insulation to be applicable.
A combination of the right materials and correct design according to where the structure is to be placed is crucial. If a structure built in a cold, cloudy, climate was to be made purely of windows and white wood, the temperature inside the structure would be close to the temperature outside. Structural design must include many various factors. Only the three factors of luster, color and windows will be used in my experiment.
Hypothesis & Experiment
If different materials are used to build a scaled structure, then the structure with high luster will have a higher temperature than the structure with low luster.
If different materials are used to build a scaled structure, then the structure with a darker color will have a higher temperature then the structure with a light color.
If different materials are used to build a scaled structure, then the structure with an enclosed design (no transparent areas) will have a higher temperature than the structure with a transparent design.
MATERIALS:
1 sheet of standard sheet metal
1 sheet of brown box cardboard
2 sheets of black plexi-glass
1 sheet of white plexi-glass
1 sheet of clear plexi-glass
2 thermometers
1 stopwatch or alarm clock
1 jigsaw
1 roll of duct tape
PROCEDURE:
Using the jigsaw, cut the sheet metal into five 5" x 5" squares. Then do the same with the cardboard.
Using the duct tape, secure the 5 squares of sheet metal together, forming a cube with 1 side missing. Then do the same for the cardboard.
Place the 2 semi-cubes outside at about 11:00 a.m. with the missing side facing down. Place the thermometers inside the semi-cubes and use the stopwatch or alarm clock to time 2 hours.
At about 1:00 p.m. check the thermometers and record the two temperatures in a data table.
Using the 1st black sheet and the white sheet of plexi-glass, follow steps 1-4 the next day.
Using the 2nd black sheet and the clear sheet of plexi-glass, follow steps 1-4 the 3rd day.
SUMMARY
The experiment in this paper will probably support my hypothesis based on the research collected. The NatHERS rating organization expressed that lustrous materials are much more likely to collect and store heat better than dull materials. They also express that darker colored materials will more often than not collect heat at a higher rate than that of a color such as white. In addition to these theories, SOLARCH (National Solar Architecture Research Unit) has studied window advantages alongside WERS to support my own theory that enclosed structures store more heat than transparent structures. Further studies on my part could branch into the other areas of structural design and placement, providing a more detailed plan for experimentation.
REFERENCES
"The Integration of Window Labeling in the Nationwide House Energy Rating Scheme (NatHERS) for Australia".
John Ballinger, Deborah Cassell, Deo Prasad, Peter Lyons,.
SOLARCH- National Solar Architecture Research Unit
The University of New South Wales
Sydney 2052 Australia
Internet
Behrman, Daniel. "Solar Energy: the awaking science",
Little, 1980
Butti, Ken and Perlin, John. A Golden Thread
Van Nostrard, 1980. "2500 Years of Solar Architecture and Technology"
f:\12000 essays\sciences (985)\Chemistry\Determination of an unknown amino acid from a titration curve.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Abstract
Experiment 11 used a titration curve to determine the identity of an unknown amino acid. The initial
pH of the solution was 1.96, and the pKa's found experimentally were 2.0, 4.0, and 9.85. The accepted
pKa values were found to be 2.10, 4.07, and 9.47. The molecular weight was calculated to be 176.3 while
the accepted value was found to be 183.5. The identity of the unknown amino acid was established to be
glutamic acid, hydrochloride.
Introduction
Amino acids are simple monomers which are strung together to form polymers (also called proteins).
These monomers are characterized by the general structure shown in figure 1.
Fig. 1
Although the general structure of all amino acids follows figure 1, the presence of a zwitterion is made
possible due to the basic properties of the NH2 group and the acidic properties of the COOH group. The amine group (NH2) is a Lewis base because it has a lone electron pair which makes it susceptible to forming a coordinate covalent bond with a hydrogen ion. Also, the carboxylic group is a Lewis acid because it is able to donate a hydrogen ion (Kotz et al., 1996). Other forms of amino acids also exist. Amino acids may exist as acidic or basic salts. For example, if glycine reacted with HCl, the resulting amino acid would be glycine hydrochloride (see fig. 2). Glycine hydrochloride is an example of an acidic salt form of the amino acid. Likewise, if NaOH were added, the resulting amino acid would be sodium glycinate (see fig. 3), an example of a basic salt form.
Fig. 2
Fig. 3
Due to the nature of amino acids, a titration curve can be employed to identify an unknown amino acid.
A titration curve is the plot of the pH versus the volume of titrant used. In the case of amino acids, the
titrant will be both an acid and a base. The acid is a useful tool because it is able to add a proton to the
amine group (see fig. 1). Likewise the base allows for removal of the proton from the carboxyl group by
the addition of hydroxide. The addition of the strong acid or base does not necessarily yield a drastic
jump in pH. The acid or base added is unable to contribute to the pH of the solution because the protons
and hydroxide ions donated in solution are busy adding protons to the amine group and removing protons
from the carboxyl group, respectively. However, near the equivalence point the pH of the solution may increase or decrease drastically with the addition of only a fraction of a mL of titrant. This is due to the fact that at the equivalence point the number of moles of titrant equals the number of moles of acid or base originally present (dependent on if the amino acid is in an acidic or basic salt form). Another point of interest on a titration curve is the half-equivalence point. The half-equivalence point corresponds to the point in which the concentration of weak acid is equal to the concentration of its conjugate base. The region near the half-equivalence point also establishes a buffer region (Jicha, et al., 1991). (see figure 4).
Fig. 4
The half-equivalence point easily allows for the finding of the pKa values of an amino acid. A set
pKa values can be extremely helpful in identifying an amino acid. Through a manipulation of the
Henderson-Hasselbalch equation, the pH at the half-equivalence point equals the pKa. This is reasoned
because at the half-equivalence point the concentration of the conjugate base and the acid are equal.
Therefore the pH equals the pKa at the half-equivalence point (see figure 5.)
Fig. 5
pKa = pH - log([base]/[acid])
At the half-equivalence point [base] = [acid], so log([base]/[acid]) = log 1 = 0;
therefore, pH = pKa
However, many substances characteristically have more than one pKa value. For each value, the
molecule is able to give up a proton or accept a proton. For example H3PO4 has three pKa values. This is
due to the fact that it is able to donate three protons while in solution. However, it is much more difficult
to remove the second proton than the first. This is due to the fact that it is more difficult to remove a proton from an anion. Furthermore, the more negative the anion, the more difficult it is to remove the proton.
The trapezoidal method can be employed to find the equivalence points, as shown in figure 6. The
volume of titrant between two equivalence points is helpful in the determination of the molecular weight
of the amino acid.
Fig. 6
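The lab itself locates the equivalence points graphically with the trapezoidal construction; as a rough numerical cross-check (not the method used here), one can simply look for the steepest part of the curve, sketched below with a few of the points from Table 2.

    # Approximate an equivalence point as the volume where the pH rises fastest.
    # (volume mL, pH) pairs are a subset of the NaOH titration data in Table 2.
    volumes = [16.5, 17.0, 17.5, 17.7, 17.8, 18.0, 18.2, 18.4]
    ph      = [4.93, 5.13, 5.63, 5.99, 6.52, 7.93, 8.18, 8.50]

    slopes = [(ph[i + 1] - ph[i]) / (volumes[i + 1] - volumes[i])
              for i in range(len(ph) - 1)]
    i_max = max(range(len(slopes)), key=lambda i: slopes[i])
    midpoint = (volumes[i_max] + volumes[i_max + 1]) / 2

    print(round(midpoint, 1))   # -> 17.9 mL, near one of the equivalence points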
The purpose of experiment 11 is to determine the identity of an unknown amino acid by analyzing a
titration curve. The experiment should lend the idea that the following may be directly or indirectly
deduced from the curve-- the equivalence and half equivalence points, pKa values, the molecular weight
and the identity of the unknown amino acid.
Experimental
The pH meter was calibrated and 1.631 grams (.0089 moles) of the unknown amino acid was weighed and placed in a 250-mL volumetric flask. About 100 mL of distilled water was added to dissolve the solid. The flask was gently swirled and inverted to insure a complete dissolution of the solid. The solution was diluted with distilled water to the volume mark on the flask. Then, one buret was filled with 0.100 M HCl stock solution and another buret was filled with 0.100 M NaOH. A pipet was used to add 25.00 mL of the unknown amino acid solution to a 100-mL beaker. The solution's initial pH was established to be 1.96 by the pH meter. The electrode was left in 100-mL beaker with the unknown amino acid solution. In the accurate titration curve, the acid was added in 0.5 mL increments until the pH of the solution was 1.83. As the titrant was added the pH of the solution was recorded on a data sheet. Also, a graph of pH versus the mL of titrant added was plotted. After the addition of the acid, a new 25 mL aliquot of unknown solution was added to a clean 100-mL beaker. The base was then used to titrate the solution. It was added in 0.20 to 1.0 mL increments depending on the nature of the curve. (The nature of the curve was somewhat expected because previously an experimental titration curve was established. This curve used increments of up to 2.0 mL.) The base was added until the pH reached 12.03.
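As a rough check on these quantities, the amount of base each aliquot should consume per titratable proton follows directly from the figures above:

    # Bookkeeping for the solution preparation described above.
    moles_unknown = 0.0089            # mol dissolved in the 250-mL flask (as stated)
    aliquot_fraction = 25.0 / 250.0   # each titration uses a 25-mL aliquot
    naoh_molarity = 0.100             # mol/L

    moles_per_aliquot = moles_unknown * aliquot_fraction
    ml_naoh_per_proton = 1000 * moles_per_aliquot / naoh_molarity

    print(round(moles_per_aliquot, 5), round(ml_naoh_per_proton, 1))
    # -> 0.00089 mol and about 8.9 mL of 0.100 M NaOH per titratable proton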
Results
Table 1 shows the pH endpoints for both the titration with the acid as well as with the base. It also shows the initial pH. Table 1 also shows the experimentally determined and accepted molecular weight and pKa values for the glutamic acid, hydrochloride. Tables 2 and 3 show the amounts of base and acid added to the unknown solution (respectively) and the pH which corresponds to that amount. Figures 7 and 8 represent the exploratory titration and the accurate titration curves (respectively). Figure 9 represents the structure of the unknown amino acid, glutamic acid, hydrochloride.
Table 1
pH of endpoints: 1.83 (acid titration) and 12.03 (base titration)
Initial pH: 1.96
pKa values (experimental): 2.0, 4.0, 9.85
pKa values (accepted): 2.10, 4.07, 9.47
Molecular weight: 176.3 (experimental), 183.5 (accepted)
Identity of unknown: glutamic acid, hydrochloride
Table 2
Accurate Titration for NaOH Fig.9
total mL of 0.10 M NaOH pH of solution
0.00 1.96
1.0 2.05
3.0 2.26
5.0 2.5
7.0 2.84
9.0 3.28
10.0 3.53
11.0 3.77
13.0 4.14
14.0 4.39
15.0 4.56
15.5 4.66
16.0 4.78
16.5 4.93
17.0 5.13
17.5 5.63
17.7 5.99
17.8 6.52
18.0 7.93
18.2 8.18
18.4 8.50
18.5 8.56
19.0 8.83
21.0 9.44
22.0 9.62
23.0 9.82
23.5 9.93
24.0 9.98
24.5 10.12
25.0 10.21
25.5 10.37
26.0 10.52
26.5 10.69
27.0 10.86
27.5 11.06
28.0 11.22
28.5 11.37
29.0 11.41
29.5 11.53
30.0 11.58
31.0 11.71
33.0 11.85
36.0 12.03
Table 3
Accurate Titration for HCl
total mL of 0.10 M HCl pH of solution
0.00 1.96
0.5 1.93
1.0 1.91
1.5 1.87
2.0 1.85
2.5 1.83
Discussion
The initial pH of the unknown solution was 1.96. This information was helpful in determining the identity of the unknown amino acid because only three of the nine unknowns were acidic salts. (Acidic salt forms of amino acids are capable of having pH values of this degree.) However, more information was required before the determination could be conclusive. The unknown produced three equivalence points and therefore three pKa values. Therefore, one of the three remaining amino acids could be omitted from consideration, because it contained only two pKa values. After examining the pKa values of the unknown, it was apparent that they were remarkably similar to those of glutamic acid, hydrochloride. The unknown's pKa values were 2.0, 4.0, and 9.85, while the glutamic acid's pKa values were 2.10, 4.07, and 9.47. At this point, the identity of the amino acid was conclusive. However, as a precautionary measure, the molecular weight of the amino acid was calculated and found to be 176.3 amu. The calculated value corresponds reasonably well with the known value of 183.5 amu.
There are a few errors that can be held accountable for the small deviation from the accepted values.
First, the pH meters never reported a definite value; most times the meter would report a floating number. Therefore, one has no way of knowing which reported pH was more correct. Also, the method by which the equivalence points were found was extremely crude: it called for a series of rough estimations. These estimations led to the equivalence point, which was then used to determine the half-equivalence point, and that point was in turn used to find the pKa. The deviation from the accepted pKa values occurs because of this compounded series of crude estimates. Likewise, the deviation of the calculated molecular weight can be attributed to the same crude methods, because the change in volume between equivalence points was used in that calculation.
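The molecular-weight arithmetic referred to above can be written out as follows; the volume between equivalence points used here is a placeholder, since the lab's own reading is what produced the reported 176.3 amu.

    # Sketch of the molecular-weight calculation from the titration curve.
    mass_in_aliquot = 1.631 * (25.0 / 250.0)   # g of unknown in each 25-mL aliquot
    titrant_molarity = 0.100                    # mol/L NaOH

    delta_v_ml = 9.0      # placeholder volume between two equivalence points
    moles = titrant_molarity * delta_v_ml / 1000.0

    print(round(mass_in_aliquot / moles, 1))    # -> 181.2 g/mol with this placeholder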
Conclusion
The identity of an unknown amino acid was determined by establishing a titration curve. The
equivalence and half-equivalence point, the pKa values, and the molecular weight were directly or
indirectly found through the titration curve. The equivalence points were found through a crude method
known as the trapezoidal method. The establishment of the equivalence points gave rise to the half
equivalence points and the change in volume, delta V (used in calculating the molecular weight). The half-equivalence
points were directly used to find the pKa values of the unknown. The molecular weight could also be
calculated. This data led to the determination of the identity of the unknown amino acid--glutamic acid,
hydrochloride.
References
Jicha, D.; Hasset, K. Experiments in General Chemistry; Hunt: Dubuque, 1991:37-53.
Kotz, J. C.; Treichel, P., Jr. Chemistry and Chemical Reactivity; Harcourt Brace: Fort Worth, 1996; 816-837.
f:\12000 essays\sciences (985)\Chemistry\Determining the Ratio of the Circumference to the Diameter of.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Determining the Ratio of Circumference to Diameter of a Circle
In determining the ratio of the circumference to the diameter I began by measuring the diameter of one of the six objects which contained circles. Then, using a string, I wrapped the string around the circle and compared the length of the string, which measured the circumference, to a meter stick. With this method I measured all six of the circles. After I had this data, I went back and rechecked the circumferences with a tape measure, which allowed me to make a more accurate measure of the objects' circumferences by taking away some of the error that my method of using a string created.
After I had the measurements I laid them out in a table. The objects that
I measured were a small flask, a large flask, a tray from a scale, a roll of tape,
a roll of paper towels, and a spraycan.
By dividing the circumference of the circle by the diameter I was able to
calculate the experimental ratio, and I knew that the accepted ratio was pi. Then
I put both ratios in the chart.
By subtracting the accepted ratio from the experimental you find the error.
Error is the deviation of the experimental ratio from the accepted ratio. After I
had the error I could go on to find the percentage error. The equation I used was,
error divided by the accepted ratio, times 100. For example, I took the error of the experimental ratio for the paper towels, which was 0.12, and divided it by the accepted ratio, giving me .03821651. Then I multiplied that by 100, giving me about 3.8 percent. Using these steps I found the percentage error for all
of the objects measured.
The next step was to graph the results. I was able to do this very easily
with spreadsheet. I typed in all of my data and the computer gave me a nice
scatter block graph. I also made a graph by hand. I set up the scale by taking
the number of blocks up the side of my graph and dividing them by the number
of blocks across. I placed my points on my hand drawn graph. Once I did this
I drew a line of best representation because some of the points were off a little
bit due to error.
By looking at my graph I can tell that these numbers are directly
proportional to each other. In this lab it was a good way to learn about error
which is involved in such things as measurements, and also provided me with
a good reminder on how to construct graphs.
There were many errors in this lab. First off, errors can be found in the elasticity of the string or measuring tape. Second, there are errors in the
measurements for everyone. Errors may be present when a person moves
their finger off of the marked spot on the measuring device.
Object Circumference Diameter
small flask 20.5 6.3
large flask 41.3 12.9
tray from a scale 40.1 9.5
roll of tape 6.4 1.2
roll of paper towels 44.5 11.8
spraycan 25.1 7.7
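The error calculation described earlier can be written out for one object from the table above (the small flask):

    # Worked example of the ratio, error, and percentage error for the small flask.
    from math import pi

    circumference = 20.5
    diameter = 6.3

    experimental_ratio = circumference / diameter        # about 3.25
    error = experimental_ratio - pi                       # deviation from the accepted ratio
    percent_error = 100 * error / pi

    print(round(experimental_ratio, 2), round(error, 2), round(percent_error, 1))
    # -> 3.25 0.11 3.6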
f:\12000 essays\sciences (985)\Chemistry\Diet and Cancer What is the Link .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Diet and Cancer... What is the Link?
Today we know that too much of a certain type of foods can have
harmful effects on our health and well-being and we are learning that
diseases such as cancer are caused in part by our dietary choices.
In the 1950's scientists discovered a relationship between diet and coronary heart disease, the nation's number one killer. In the last 15 years a link between cancer and diet has been discovered by scientists.
The National Academy of Sciences (NAS), an organization of the nation's foremost scientists, found evidence so persuasive that in their landmark 1982 report Diet, Nutrition and Cancer they urged Americans to begin changing their diets to reduce their risk of developing cancer. The results of the study were supported by later research done by the NAS, the Surgeon General, the Departments of Agriculture and of Health and Human Services, and the National Institutes of Health.
Based mainly on the study by NAS done in 1982, the American
Institute for Cancer Research (AICR) devised a guideline with four parts to
help lower people's risk of developing cancer. The guidelines have been
updated since then to reflect recent research on the link.
The AICR guidelines are:
1. Reduce the intake of total dietary fat to a level of no more than 30%
of total calories and, in particular, reduce the intake of saturated fat to less
than 10% of total calories.
2. Increase the consumption of fruits, vegetables and whole grains.
3. Consume salt-cured, salt-pickled and smoked foods only in
moderation.
4. Drink alcoholic beverages only in moderation, if at all.*
Most cancers start when the body is exposed to a carcinogen, a
cancer-causing substance that is found everywhere in our environment for
example in sunlight. When the body is exposed to this substance it can
usually destroy the carcinogen without malignant effects. If any of the
substance eludes the body's defense system it can alter a cell's genes to make
it become a cancerous cell.
Cancer doesn't suddenly appear; it develops through gradual stages, of which the initial stages can be reversed. The foods we eat can either increase the rate at which these stages advance or help fight and prevent the cancer from spreading. Salt-cured and salt-pickled foods don't contain carcinogens; however, they do contain another ingredient which is changed into carcinogens while being digested. Smoked foods are a little different: they have the carcinogen already in them.
Fruits, vegetables, and whole grains should be eaten to help prevent
cancer or fight against cancer advancing through the body. Foods high in fat
which include marbled meats, baked goods such as cookies and pastries and
high-fat dairy products must be avoided. They help a cancer cell grow,
multiply and spread.
Following these guidelines will not guarantee that one will not get cancer; however, it will lower one's chances. Cancer is still somewhat of an
unknown disease but we do know that the foods one eats can have powerful
effects on the development of cancer. This is good news because it provides
people an opportunity to stop or prevent cancer.
f:\12000 essays\sciences (985)\Chemistry\discovery of the electron.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Discovery Of The Electron
The electron was discovered in 1897 by J.J. Thomson in the
form of cathode rays, and was the first elementary particle to be
identified. The electron is the lightest known particle which
possesses an electric charge. Its rest mass is m_e = 9.1 x 10^-28 g, about 1/1836 of the mass of the proton or
neutron.
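A one-line check that the quoted rest mass and the 1/1836 ratio reproduce the familiar proton mass of about 1.67 x 10^-24 g:

    # Consistency check of the figures quoted above.
    print(9.1e-28 * 1836)   # -> about 1.67e-24 g, the proton (or neutron) mass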
The charge of the electron is -e = -4.8 x 10^-10 esu. Electrons are emitted in beta decay and in many other decay processes; the electron itself is completely stable. Electrons contribute the bulk of ordinary matter: the volume of an atom is nearly all occupied by the cloud of electrons surrounding the nucleus, which itself occupies only about 10^-13 of the atom's volume. The chemical properties of ordinary matter are determined by the electron cloud.
The electron obeys Fermi-Dirac statistics, and for this reason is often called a fermion. One of the primary attributes of matter, impenetrability, results from the fact that the electron, being a fermion, obeys the Pauli exclusion principle.
The electron is the lightest of a family of elementary
particles, the leptons. The other known charged leptons are the
muon and the tau. These three particles differ only in mass;
they have the same spin, charge, strong interactions, and weak
interactions. In a weak interaction a charged lepton is either unchanged or changed into an uncharged lepton, that is, a neutrino. In the latter case, each charged lepton is seen to change only into the corresponding neutrino.
The electron has magnetic properties by virtue of (1) its orbital motion about the nucleus of its parent atom and (2) its rotation about its own axis. The magnetic properties are best described through the magnetic dipole moments associated with (1) and (2). The classical analog of the orbital magnetic dipole moment is the magnetic moment of a small current-carrying circuit. The electron spin magnetic dipole moment may be thought of as arising from the circulation of charge, that is, a current, about the electron axis; but a classical analog to this moment has much less meaning than that to the orbital magnetic dipole moment. The magnetic moments of the electrons in the atoms that make up a solid give rise to the bulk magnetism of the solid.
f:\12000 essays\sciences (985)\Chemistry\DNA What is it .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DNA
Deoxyribonucleic acid and ribonucleic acid are two chemical substances involved in transmitting genetic information from parent to offspring. It was known early into the 20th century that chromosomes, the genetic material of cells, contained DNA. In 1944, Oswald T. Avery, Colin M. MacLeod, and Maclyn McCarty concluded that DNA was the basic genetic component of chromosomes. Later, RNA would be proven to regulate protein synthesis. (Miller, 139)
DNA is the genetic material found in most viruses and in all cellular organisms. Some viruses do not have DNA, but contain RNA instead. Depending on the organism, most DNA is found within a single chromosome, as in bacteria, or in several chromosomes, as in most other living things. (Heath, 110) DNA can also be found outside of chromosomes. It can be found in cell organelles such as plasmids in bacteria, in chloroplasts in plants, and in mitochondria in plants and animals.
All DNA molecules contain a set of linked units called nucleotides. Each nucleotide is composed of three things. The first is a sugar called deoxyribose. Attached to one end of the sugar is a phosphate group, and at the other is one of several nitrogenous bases. DNA contains four nitrogenous bases. The first two, adenine and guanine, are double-ringed purine compounds. The others, cytosine and thymine, are single-ringed pyrimidine compounds. (Miller, 141) Four types of DNA nucleotides can be formed, depending on which nitrogenous base is involved.
The phosphate group of each nucleotide bonds with a carbon from the deoxyribose. This forms what is called a polynucleotide chain. James D. Watson and Francis Crick proved that most DNA consists of two polynucleotide chains that are twisted together into a coil, forming a double helix. Watson and Crick also discovered that in a double helix, the pairing between bases of the two chains is highly specific. Adenine is always linked to thymine by two hydrogen bonds, and guanine is always linked to cytosine by three hydrogen bonds. This is known as base pairing. (Miller, 143)
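As a small sketch of the base-pairing rule just described (A with T, G with C), a complementary strand can be generated for an arbitrary example sequence:

    # Watson-Crick base pairing applied to an example DNA sequence.
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def complement(strand):
        """Return the complementary strand under A-T and G-C pairing."""
        return "".join(PAIR[base] for base in strand)

    print(complement("ATGCCGTA"))   # -> TACGGCAT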
The DNA of an organism provides two main functions. The first function is to provide for protein synthesis, allowing growth and development of the organism. The second function is to give all of its descendants its own protein-synthesizing information by replicating itself and providing each offspring with a copy. The information within the bases of DNA is called the genetic code. This specifies the sequence of amino acids in a protein. (Grolier Encyclopedia, 1992) DNA does not act directly in the process of protein synthesis because it does not leave the nucleus, so a special ribonucleic acid is used as a messenger (mRNA). The mRNA carries the genetic information from the DNA in the nucleus out to the ribosomes in the cytoplasm during transcription. (Miller, 76)
This leads to the topic of replication. When DNA replicates, the two strands of the double helix separate from one another. As the strands separate, each nitrogenous base on each strand attracts its own complement, which, as mentioned earlier, attaches with hydrogen bonds. As the bases are bonded, an enzyme called DNA polymerase joins the phosphate of each new nucleotide to the deoxyribose of the adjacent nucleotide.
This forms a new polynucleotide chain. The new DNA strand stays attached to the old one through the hydrogen bonds, and together they form a new DNA double helix molecule. (Heath, 119) (Miller, 144-145)
As mentioned before, DNA molecules are involved in a process called protein synthesis. Without RNA, this process could not be completed. RNA is the genetic material of some viruses. RNA molecules are like DNA in that they have a long chain of macromolecules made up of nucleotides. Each RNA nucleotide is also made up of three basic parts. There is a sugar called ribose; at one end of the sugar is the phosphate group, and at the other end is one of several nitrogenous bases. There are four main nitrogenous bases found in RNA: the double-ringed purine compounds adenine and guanine, and the single-ringed pyrimidine compounds uracil and cytosine. (Miller, 146)
RNA synthesis is much like DNA replication. In RNA synthesis, the molecule being copied is one of the two strands of a DNA molecule, so the molecule being created is different from the molecule being copied. This is known as transcription. Transcription can be described as a process where information is transferred from DNA to RNA. All of this must happen so that messenger RNA can be created, since the actual DNA cannot leave the nucleus. (Grolier Encyclopedia, 1992)
For transcription to take place, the RNA polymerase enzyme is needed first to separate the two strands of the double helix and then to create an mRNA strand, the messenger. The newly formed mRNA will be a complementary copy of one of the original two strands. This is assured through base pairing. (Miller, 147)
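To illustrate the transcription step just described, the following Python sketch (a hypothetical illustration, not part of the source material) builds an mRNA strand by base pairing against a DNA template strand, with uracil replacing thymine.

    # Transcription: the mRNA is complementary to the DNA template strand,
    # with uracil (U) used in place of thymine (T).
    DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

    def transcribe(template_strand):
        """Return the mRNA transcribed from a DNA template strand."""
        return "".join(DNA_TO_MRNA[base] for base in template_strand.upper())

    print(transcribe("TACGGCAT"))   # prints AUGCCGUA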
When information is given from DNA to RNA, it comes coded. The origin of the code is directly related to the way the four nitrogenous bases are arranged in the DNA. It is important that DNA and RNA control protein synthesis. Proteins control both the cell's movement and its structure. Proteins also direct production of lipids, carbohydrates, and nucleotides. DNA and RNA do not actually produce these proteins, but tell the cell what to make. (Heath, 111-113)
For a cell to build a protein according to the DNA's instructions, an mRNA must first reach a ribosome. After this has occurred, translation can begin to take place. Chains of amino acids are constructed according to the information carried by the mRNA. The ribosomes are able to translate the mRNA's information into a specific protein. (Heath, 116) This process is also dependent on another type of RNA called transfer RNA (tRNA). The cytoplasm contains all the amino acids needed for protein construction. The tRNA must bring the correct amino acids to the mRNA so they can be aligned in the right order by the ribosomes. (Heath, 116) For protein synthesis to begin, the two parts of a ribosome must secure themselves to an mRNA molecule. (Miller, 151)
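The translation step can be sketched in the same spirit. The Python fragment below is only an illustration: it uses a tiny subset of the standard genetic code (four codons) rather than the full table, and the example mRNA is made up.

    # Translation: the ribosome reads the mRNA one codon (three bases) at a
    # time and, with the help of tRNA, adds the matching amino acid.
    CODON_TABLE = {
        "AUG": "Met",   # also the start codon
        "UUU": "Phe",
        "CCG": "Pro",
        "UAA": "STOP",
    }

    def translate(mrna):
        """Translate an mRNA sequence codon by codon until a stop codon."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return protein

    print(translate("AUGCCGUAA"))   # prints ['Met', 'Pro']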
Methods and Materials:
For the first part of the lab, colored paper clips were needed to construct two DNA strands. Each color paper clip represented one of the four nitrogenous bases. Black was used as adenine, white was thymine, blue was cytosine, and yellow represented guanine. A short sequence of the human gene that controls the body's growth hormone was then constructed using ten paper clips. The complementary strand of the gene was then made using ten more clips. The two model strands were laid side by side to show how the bases would bond with each other. The model molecule was then opened and more nucleotides were added to show what happens during replication.
For the second part of the lab, models of DNA, mRNA, tRNA, and amino acids were used to simulate transcription, translation, and protein synthesis. The model molecules were cut out with scissors and placed on the table. The DNA and mRNA molecules were put on the left side of the table, the others on the right. To simulate transcription, the mRNA molecule was slid down the DNA strand until the nucleotides matched. The mRNA molecule was then moved from the left side of the table to the right, showing its movement from the nucleus to the cytoplasm. The tRNA molecules were then each matched up with an amino acid. Once matched up, they were slid along the mRNA until their nucleotides matched.
Conclusions:
The most surprising discovery made was finding out that there are only four main bases needed in a DNA and RNA molecule. Also, each of these bases will only bond with one other base. It is important to realize how greatly DNA affects a cell's functions, in growth, movement, protein building, and many other duties. DNA is not nearly as complex in structure as I had thought either, containing only its three main parts: a sugar, a phosphate, and of course its base. From these studies it is easy to see how DNA and RNA greatly affect the life and functions of an organism.
Bibliography:
Emmel, Thomas C. Biology Today. Chicago: Holt, Rinehart and Winston, 1991.
Foresman, Scott. Biology. Oakland, New Jersey: Scott Foresman and Company, 1988.
Hole, John W., Jr. Essentials. Dubuque, Iowa: Wm. C. Brown Company Publishers, 1983.
Mader, Sylvia S. Inquiry Into Life. New York: Wm. C. Brown Company Publishers, 1988.
McLaren, Rotundo. Heath Biology. New York: Heath Publishing, 1987.
Miller, Kenneth R. Biology. New Jersey: Prentice Hall, 1993.
Welch, Claude A. Biological Science. Boston: Houghton Mifflin Company, 1968.
f:\12000 essays\sciences (985)\Chemistry\Do Cleaning Chemicals Clean as Well After they have been froz.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Problem:
The researcher is trying to determine whether or not cleaning materials will clean as well if they have been frozen solid and subsequently thawed out until they have returned to a liquid state of matter.
The researcher will use Dial Antibacterial Kitchen Cleaner, Clorox Bleach, and Parson's Ammonia, applied to simple bacon grease, to determine which chemical is least affected by the glaciation.
Hypothesis:
The researcher feels that the process of glaciation will degrade the ability of these three household cleaning chemicals to break down the most common kitchen cleaning problem - grease.
For example, freezing, thawing, and then freezing ice cream again breaks the substance down. The result is a separation of heavy and light components, which degrades the food. The researcher feels that the same end result may happen with the cleaning materials.
Experimentation
Test Concept:
In order to determine whether the glaciation process affected the cleaning chemicals, it is first important to establish their potency prior to freezing. Accordingly, two test sets were created by the researcher. The purpose of the test was to determine how well the chemicals could break down household grease before and after the substances were frozen. The first test set would focus on unfrozen chemicals, while the second was set up for previously frozen chemicals.
The Test:
To start the experiment the researcher fried four pieces of bacon until there was enough grease in the skillet to perform the test. He then put a quarter teaspoon of the grease onto two nine-by-thirteen casserole dishes. Each casserole dish was set up for three frozen and three unfrozen chemical cleaners. A measured amount of cleaner (both frozen and unfrozen) was added to each spot of grease. After approximately two minutes of breaking down the grease, the dishes were raised to a uniform height at one end and the broken-down grease was allowed to run. By measuring how far the grease ran, the researcher could then determine how much each cleaner had broken down the grease and therefore which cleaner was affected by the glaciation.
Resources
The resources for this experiment were acquired from the labels of the chemicals. Research was also done to try to find information about the chlorine in the Clorox Bleach, but this was unsuccessful. Research was also done to find out why the 409 degreaser performed so poorly.
Conclusion
The researcher has concluded that the previously frozen chemicals performed just as well if not better than the unfrozen chemicals. See charts one and two for details o
f:\12000 essays\sciences (985)\Chemistry\El petroleo.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PETROLEUM AND ITS DERIVATIVES
DERIVATIVES: PLASTICS, MEDICINES, PERFUMES, FABRICS, DETERGENTS
PETROLEUM
CHEMICAL COMPOSITION: A MIXTURE OF HYDROCARBONS (ALKANES, ALKENES, NAPHTHENES AND AROMATICS)
Petroleum is an oily liquid, lighter than water, dark in colour and strong-smelling, which is found in its natural state, sometimes forming large reservoirs in the upper strata of the earth's crust. It is a mixture of hydrocarbons (alkanes, alkenes, naphthenes and aromatics), is insoluble in water, burns easily and, when subjected to fractional distillation, yields a large number of volatile products.
It was known in antiquity; the Chaldeans used petroleum asphalt as mortar in their buildings and the Egyptians used it to embalm their dead; but it did not acquire commercial importance until the beginning of the 19th century, with the opening up of the great North American deposits.
It is thought to have formed from the decomposition of organic matter under particular conditions of pressure, temperature and so on. Its composition varies greatly according to its origin (paraffinic petroleums, naphthenic petroleums, petroleums rich in aromatic hydrocarbons).
Petroleum is generally extracted by drilling into the ground. Near the deposits, tanks are built to collect the liquid, which is then carried by pipes and pipelines to refineries or ports.
For its different uses it is necessary to subject crude petroleum to fractional distillation and subsequent rectification of the fractions obtained, which separates out the following products:
· Gases or very easily volatilized liquids, which are sometimes used as fuel in the same operation
· Light gasolines or light petroleum, which boil below 150°
· Burning oil, the portion that distils between 150 and 170°
· Heavy gas-oil fractions and solids, with boiling points above 350°, which remain in the still.
These last fractions can be converted into the first ones by the cracking process, which consists of subjecting a macromolecular substance to high temperature and pressure until its molecules are split into simpler ones. In this way gasolines are obtained from fractions with boiling points above 300°.
The petroleum industry has undergone great development, and for this reason the search for oil at sea has intensified, mainly near the Gulf of Mexico and in the North Sea.
It is considered the current source for obtaining plastics, synthetic rubber, fibres, synthetic detergents and numerous additives used in the oil industry, including tetraethyl lead and various antioxidants.
Petroleum may come to revolutionize industry. In the food industry, for example, various scientific studies are under way to obtain food from cultures of microorganisms grown on petroleum.
Today's enormous consumption of petroleum suggests that in the relatively near future the deposits known today, which are undoubtedly the great majority of those existing in the earth's crust, will be exhausted.
Map showing some oil fields.
OIL-PRODUCING COUNTRIES: U.S.S.R. · SAUDI ARABIA · U.S.A. · IRAQ · IRAN · KUWAIT · VENEZUELA · NIGERIA · P.R. CHINA · LIBYA
f:\12000 essays\sciences (985)\Chemistry\Ethical Procedures and Guidelines Defining Pschycological Res.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Psychological research is often a very controversial subject among experts. Many people feel that there are many moral standards that are often not followed. Others may believe that there is much harmful misinformation that can often be harmful to subject and others. Still others believe that psychology is a lot of theories without any reinforcing information. Whether any of these assumptions may be true or not, there have been guidelines created which serve to silence many critics of the science. These guidelines make research safe and structured, which will protect the subjects from unnecessary harm.
As psychology advances, there is a need for more rules and regulations to ensure subject comfort. Hence, there are many more rules now than even twenty years ago. These rules really encompass a few broad but very important ideas. One of these ideas is protecting the dignity of the subjects. Another important component of this code refers to consent. All of these will be explained in greater detail below. Another gray area in psychology lies in the deception of subjects. There are some basic rules guiding how deceptions can be carried out. There is a large section of the code that was made with regards to animal research. The last major section of the ASA ethical guidelines has to do with giving credit where credit is due, and with information sources. All of these regulations make research safer for the subjects and increase the effectiveness of psychological research.
In psychological research, protecting subjects' dignity is very important. Without willing subjects the research process would be brought to a halt. In order to protect the subjects' dignity, the lab experiments must be well prepared and ethically appropriate. Only subjects who are targeted should be affected, and if a large number of people are to be affected, psychologists should consult experts on that specific group. Psychologists are to be held directly responsible for the ethics that are utilized during the experiment. In addition, psychologists are bound by the normal governmental laws concerning research. In addition to these regulations concerning the law and standards, psychologists are required to inform subjects of the basic procedure that they will be agreeing to. This flows into the idea of informed consent.
Informed consent means basically that the subject must be informed of the basic procedure that they will be agreeing to. There should not be any variations from the agreed-upon plan. Whenever there is a doubt about whether or not informed consent is necessary, an institution or expert in the area of the subjects should be consulted. One complicating factor in this sector is deception in research. In order to conduct certain experiments, it is helpful to psychologists to deceive the participants with respect to exactly which experiment is being performed upon them. The rules concerning this are effective, but (necessarily) rather vague. First of all, psychologists are never supposed to use deception unless no other method for the experiment at hand is available. The deception cannot be in a manner that would affect the participants' decision to participate. And any deception that takes place should always be explained as soon as possible, after the experiment has reached its conclusion.
In order to preserve subjects' dignity, the information about the experiment that the subjects have participated in should be made available to the subjects as soon as possible. This includes the exact nature of the experiment, the results, and the conclusions of the experiment. This will probably have been already agreed upon by the experimenter and the subject, but just in case, the experimenters are required to honor all commitments made to the subject. This improves the credibility of the science as a whole.
When the subjects are not human, there are still rules governing the treatment of such subjects. These pertain mostly to protecting the (relative) comfort of these subjects during experimentation. Basically, when experimenting upon animals, basic care procedures must be followed. When anesthetic or euthanasia procedures are to be used, they must be carried out in a fashion that is both professional and as comfortable as possible for the subjects. Obviously, the procedures that can be carried out upon animals are more drastic than those on humans because there is no informed consent involved in the study of animals, and the procedures can be justified because the results are purportedly supposed to assist in the betterment of the human race.
The last area of ASA code lies in reporting information. The usual plagiarism rules are, as always, in effect. This is in addition to many precise scientific falsification rules. These state, first of all, that a scientist may not falsify or fabricate information. Also, if a psychologist discovers any significant errors in the study after the fact, steps to correct these errors must be taken immediately. The psychologist must also give credit where it is due, and never leave any relevant information out.
All of these regulations seem to be very logical, and it is well that they should. They have been developed over many years of the study of psychology. With respect to current times, these rules seem sufficient, but the book of code should never be closed. There will always be a new situation where a new addendum is required to protect a subject, or to assist in the research. As is the case with therapy, there will, without a doubt, be court cases that change the code of ethics. But the ASA codes seem to be as proficient as any that are practical in this age. Some of these regulations may inhibit the immediate results that can be gained, but without them, there would be a definite lack of willing participants. This would essentially bring psychological research upon humans to a halt.
f:\12000 essays\sciences (985)\Chemistry\evaluating an enthalpy change that can not be measured direct.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chemistry Experiment.
Dr. Watson.
Evaluating An Enthalpy Change That Cannot
Be Measured Directly.
Introduction.
We were told that sodium hydrogencarbonate decomposes on heating to give sodium
carbonate, water and carbon dioxide as shown in the equation below:-
2NaHCO3 (s) ----> Na2CO3 (s) + H2O (l) + CO2 (g)     (DeltaH1)
This enthalpy change was given as deltaH1, and we had to calculate it as part of the experiment.
This however cannot be measured directly, but can be found using the enthalpy
changes from two other reactions. These being that of sodium hydrogencarbonate and
hydrochloric acid and also sodium carbonate and hydrochloric acid.
We were given a list of instructions in how to carry out the experiment, which are
given later.
List of Apparatus Used.
1 x 500ml Beaker.
1 x Thermometer (-10 to 50°C).
1 x Polystyrene Cup.
1 x Weighing Balance.
1 x Weighing Bottle.
10 grams of Sodium Hydrogencarbonate.
10 grams of Sodium Carbonate.
A bottle of 2 molar HCl.
Diagram.
Method.
Three grams of sodium hydrogencarbonate was weighed out accurately using a
weighing bottle and a balance. Then thirty cubic centimetres of 2 molar HCl was
measured using a measuring cylinder. The acid was then placed into the polystyrene
cup and its temperature was taken and recorded using the thermometer. The pre-
weighed sodium hydrogencarbonate was then added to the solution, and the final
temperature was recorded.
The contents of the cup were then emptied out and the cup was washed out with
water and then thoroughly dried. This was done three times for the sodium
hydrogencarbonate so that I could identify and remove any anomalous results.
The experiment was then repeated in exactly the same manner except sodium
carbonate was used instead of sodium hydrogen carbonate.
The results were then tabulated, this table is shown below.
Results Table.
Results Table for Sodium Hydrogencarbonate.
Results Table for Sodium Carbonate.
Calculations.
From these results I had to calculate deltaH2 and deltaH3. DeltaH2 refers to the
enthalpy change when sodium hydrogencarbonate reacts with hydrochloric acid, and
deltaH3 is the enthalpy change when the sodium carbonate reacts with the acid.
Firstly however it is necessary to show the equations for the two reactions:-
DeltaH2:  2NaHCO3 (s) + 2HCl (aq) ----> 2NaCl (aq) + 2H2O (l) + 2CO2 (g)
DeltaH3:  Na2CO3 (s) + 2HCl (aq) ----> 2NaCl (aq) + H2O (l) + CO2 (g)
The enthalpy changes of the two reactions can be worked out using the formula
shown below :-
Energy exchanged between reactants and surroundings = specific heat capacity of the solution x mass of the solution x temperature change.
Therefore, fitting the DeltaH2 reaction into the formula:
Energy exchanged between reactants and surroundings = 4.18 x (84 x 2) x (-11.1)
This gives the enthalpy change for DeltaH2 to be -7794.9 joules per mole.
The same formula is used for DeltaH3:
Energy exchanged between reactants and surroundings = 4.18 x 106 x 21.8
This gives the enthalpy change for DeltaH3 to be 9659.1 joules per mole.
From these two results we are able to work out what DeltaH1 is likely to be even
though we have not done the experiment. This is done using the formula :-
DeltaH1 = DeltaH3 + DeltaH2 =>
DeltaH1 = 9659.1 + (-7794.9) =>
DeltaH1 = 1864.2 Joules per mole.
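The arithmetic above can be checked with a short Python script. This is only a sketch that reproduces the figures and sign conventions used in this report (specific heat capacity 4.18, masses of 84 x 2 and 106, temperature changes of -11.1 and 21.8); it is not a general thermochemistry calculation.

    # Reproducing the report's calculation of DeltaH1 from DeltaH2 and DeltaH3.
    SPECIFIC_HEAT = 4.18                         # value used in the report

    def energy_exchanged(mass, delta_t):
        """Energy exchanged between reactants and surroundings = c x m x deltaT."""
        return SPECIFIC_HEAT * mass * delta_t

    delta_h2 = energy_exchanged(84 * 2, -11.1)   # sodium hydrogencarbonate run
    delta_h3 = energy_exchanged(106, 21.8)       # sodium carbonate run
    delta_h1 = delta_h3 + delta_h2               # combination used in the report

    print(round(delta_h2, 1))   # -7794.9
    print(round(delta_h3, 1))   # 9659.1
    print(round(delta_h1, 1))   # 1864.3 (the report, rounding each step first, gets 1864.2)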
Conclusions.
The result obtained will not be very accurate due to the means by which the
experiment was done. The equipment used was not the most efficient for measuring
enthalpy changes; however, it does give a rough estimate to work from. One source
of error would have been heat lost through conduction from the reaction vessel.
Heat may also have been lost through the open top of the container; even though
there was a lid, it was not very secure, so some heat will have escaped.
In summary, the experiment was difficult to undertake because the enthalpy change
for DeltaH1 is hard to determine directly, since sodium hydrogencarbonate
decomposes on heating in air, causing great problems in measuring its enthalpy
change with its surroundings.
f:\12000 essays\sciences (985)\Chemistry\Evolution of Jet Engines.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Evolution of Jet Engines
The jet engine is a complex propulsion device which draws in air by means of an intake,
compresses it, heats it by burning fuel in an internal combustion chamber, and expels the
hot gas through a turbine, producing thrust sufficient to propel the aircraft in
the opposite direction to the exhaust (Morgan 67). When the jet engine was first thought of back in the 1920's the
world never thought it would become a reality, but by 1941 a successful jet flight had been
flown in England. Since then the types of engines have changed, but the basic principles have
remained the same.
In 1921 thoughts of a jet engine were based upon adaptations of piston engines and were
usually very heavy and complicated. These ideas were refined in the 1930's when turbine
engine design led to the patent of the turbojet engine by Sir Frank Whittle of Great Britain. It
was Whittle's design that led Great Britain into the jet age with its first successful flight. At
the same time, the Germans were designing their own jet engine and aircraft, which would be one
of the factors that kept Germany fighting in World War II; Germany flew the first jet prototype,
the Heinkel He 178. With further technological advances, jet aircraft entered a few operational
squadrons in the German, British, and American air forces towards the end of World War II,
shortly before the Allies won the war against the Axis powers (Smith 23-27).
A later development in the jet industry was the overcoming of the sound barrier and the
establishment of normal operations up to and beyond twice the speed of sound. Air force
bombers and transports were also able to reach and cruise at supersonic speeds (Silverstein 56-70). In
the late 1950's civil transcontinental jet services started with the Comet 4 and the Boeing 707. In
the mid 1960's all major jet manufacturing companies revised their engines with new
materials, such as aircraft aluminium, which made them lighter, and with turbine changes so they
could compress the air to a much higher pressure and produce much more thrust. The first
supersonic airliner is the turbojet-powered Concorde, which flies at over twice the speed of sound
and was brought into regular service in 1976 (Smith 27-30). The one company that dominates
the private jet industry is Bombardier, which makes the Learjet turbofans; these have an
approximate cruising range of 1880 nautical miles (Jennings 103).
In the future, turbojet engines will continue to develop further due to technological
advances such as graphite composite wings, thermoplastic structures, and Kevlar skins, which have
reduced the weight of modern planes and gliders. With these and other developments, jet engines
will be honed to produce greater thrust without increases in weight or size, which will involve
small refinements rather than major changes to the existing engine and engine compartment. In
the near future there will be a substantial reduction in the noise emitted by the jet engine, due to a
change in materials and a reduction of vibration in the housing. Right now the jet industry has
over one thousand jets operational at one time, which poses the threat of malfunction and crashes.
With the new computer analysis of problems and the new materials found in the internals of the
engines, there is less risk of malfunction than in the past.
Many factors have led to the popular takeover by the jet, replacing the traditional
propeller-driven planes. Some of the basic reasons are the speed, fuel economy, and endurance of
the jet engine over piston-driven engines. Together with the new refinements and the currently
changing jet industry, future transportation will become faster and safer for the flier.
f:\12000 essays\sciences (985)\Chemistry\Expansion on the Recent Discoveries Concerning Nitric Oxide.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Expansion on the Recent Discoveries Concerning Nitric Oxide
as presented by Dr. Jack R. Lancaster
Nitric Oxide, or NO, its chemical representation, was until recently not considered to be of any benefit to the life processes of animals, much less human beings. However, studies have proven that this simple compound had an abundance of uses in the body, ranging from the nervous system to the reproductive system. Its many uses are still being explored, and it is hoped that it can play an active role in the cures for certain types of cancers and tumors that form in the brain and other parts of the body.
Nitric oxide is not to be confused with nitrous oxide, the latter of which is commonly known as laughing gas. Nitric oxide has one more electron than the anesthetic. NO is not soluble in water. It is a clear gas. When NO is exposed to air, it mixes with oxygen, yielding nitrogen(IV) oxide (nitrogen dioxide), a brown gas which is soluble in water. These are just a few of the chemical properties of nitric oxide.
With the total life expectancy of nitric oxide being from six to ten seconds, it is not surprising that it was not discovered in the body until recently. The compound is quickly converted into nitrates and nitrites by oxygen and water. Yet even with its short life, it has found many functions within the body. Nitric oxide enables white blood cells to kill tumor cells and bacteria, and it allows neurotransmitters to dilate blood vessels. It also serves as a messenger for neurons, like a neurotransmitter. The compound is also accountable for penile erections. Further experiments may lead to its use in memory research and in the treatment of certain neurodegenerative disorders.
One of the most exciting discoveries of nitric oxide involves its function in the brain. It was first discovered that nitric oxide played a role in the nervous system in 1982. Small amounts of it prove useful in the opening of calcium ion channels (with glutamate, an excitatory neurotransmitter) sending a strong excitatory impulse. However, in larger amounts, its effects are quite harmful. The channels are forced to fire more rapidly, which can kill the cells. This is the cause of most strokes.
To find where nitric oxide occurs in the brain, scientists used a purification method on a tissue sample of the brain. One scientist discovered that the synthesis of nitric oxide required the presence of calcium, which often acts by binding to a ubiquitous cofactor called calmodulin. When a small amount of calmodulin is added to the enzyme preparations, there is an immediate enhancement in enzyme activity. Recognition of the association between nitric oxide, calcium and calmodulin led to further purification of the enzyme. When glutamate moves calcium into cells, the calcium ions bind to calmodulin and activate nitric oxide synthase, all of these activities happening within a few thousandths of a second. After this purification is made, antibodies can be made against the enzyme, and nitric oxide synthase can be traced in the rest of the brain and other parts of the body.
Nitric oxide synthase can be found only in small populations of neurons, mostly in the hypothalamus. The hypothalamus controls hormone secretion, including the release of the hormones vasopressin and oxytocin. In the adrenal gland, nitric oxide synthase is highly concentrated in a web of neurons that stimulate adrenal cells to release adrenaline. It is also found, to a smaller degree, in the intestine, the cerebral cortex, and the endothelial layer of blood vessels.
Although the location of nitric oxide synthase was found by this experimentation, it was not until later that the function of the nitric oxide was studied. Its tie to other closely related neurons did shed some light on this. In Huntington's disease up to ninety-five percent of neurons in an area called the caudate nucleus degenerate, but no diaphorase neurons are lost. In strokes, and in some brain regions involved in Alzheimer's disease, diaphorase neurons are similarly resistant. Neurotoxic destruction of neurons in culture can kill ninety percent of neurons, whereas diaphorase neurons remain completely unharmed.
Scientists studied this perplexing issue. Discerning the overlap between diaphorase neurons and cerebral neurons containing nitric oxide synthase was a good start toward their goal. First of all, it was clear that there was something about nitric oxide synthesis that makes neurons resist neurotoxic damage. Yet NO was the result of glutamate activity, which also led to neurotoxicity. The question raised here is, how could it go both ways?
One supported theory is that in the presence of high levels of glutamate, nitric oxide-producing neurons behave like macrophages, releasing lethal amounts of nitric oxide. It is then assumed that inhibitors of nitric oxide synthase prevent the neurotoxicity. The neurotoxicity of cerebral cortical neurons was studied to test this theory. NMDA was added to cultures of brain cells from rats. One day after being exposed to the NMDA for only five minutes, up to ninety percent of the neurons were dead. This mirrors the neurotoxicity that occurs in vascular strokes.
It was found through these experiments that nitroarginine, which is a very powerful and selective inhibitor of nitric oxide synthase, completely prevents the neurotoxicity caused by the NMDA. Removing the arginine from the mixture protects the cells. Also, hemoglobin, which binds with and inactivates nitric oxide, acts as an inhibitor of the harmful effects of neurotoxicity.
The findings of these experiments led to further tests involving direct exposure of lab rats to nitric oxide synthase inhibitors. Because NMDA antagonists can block the damage caused by the glutamate associated with strokes, it was asked whether inhibiting nitric oxide could moderate the destruction caused by a stroke. In an experiment performed by Bernard Scatton in Paris, lab rats were injected with small doses of nitroarginine immediately after a stroke was induced in the rats. The nitroarginine reduced stroke damage by seventy-three percent. This remarkable finding suggests that there is hope in the search for treatments for vascular strokes.
Nitric oxide may also be involved in memory and learning. Memory involves long-term increases or decreases in transmission across certain synapses after the repetitive stimulation of neurons. Researchers can then detect persistent increases or decreases in synaptic transmission. The role of nitric oxide synthase in these processes was therefore examined: the effects of nitric oxide synthase inhibitors were studied in the hippocampus, which is the area of the brain that controls memory. Due to its many influences, however, further study is needed to determine exactly what role nitric oxide plays in memory.
Scientists have high hopes for the further investigations of nitric oxide. More experiments lead to greater knowledge, and the effects of this knowledge are receiving a warm reception in this day and age of medicine. The knowledge gained by the study of nitric oxide is hoped to lead to cures and better fighting agents for cancers, tumors, strokes, memory loss, as well as other brain diseases, sensory deprivation, intestinal activity, and various other biological conditions that are affected by neurotransmission. It is amazing already the breakthroughs that have surfaced within the past six years concerning the study of nitric oxide, and its further study is excitedly under way.
f:\12000 essays\sciences (985)\Chemistry\Filtration Plant.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Frank J. Horgan Filtration Plant
Introduction
The Frank J. Horgan Filtration Plant is located southeast of Toronto on the shores of Lake Ontario (see map). Its purpose is to provide safe drinking water to our taps by filtering water gathered from Lake Ontario. The plant has a production capacity of 455 million litres per day to supply the residents of Toronto with drinking water, and its average production of drinking water is 355 million litres per day. It is also the newest filtration plant in Toronto.
History
The Frank J. Horgan Filtration Plant was built from 1974 to 1979 on property acquired from the city of Scarborough, and it opened on May 22, 1980. When it opened, it was not named the Frank J. Horgan Filtration Plant but the Easterly Filtration Plant, because the plant was on the eastern side of Toronto; the name seemed appropriate at the time. The name was changed to the Frank J. Horgan Filtration Plant in 1990 by the commissioner of works for Metro Toronto. The plant cost about 57 million dollars to construct. About nineteen major contractors worked on it, supervised by the engineering firm of James F. Macharen Limited. Although it is the newest plant, it has had its disasters. The intake valve exploded twice between 1980 and 1995 because of the extreme pressure and weight of the water; these incidents caused a shutdown of the plant until it could be repaired.
Production
The Frank J. Horgan Filtration Plant needs only one raw material to operate, which is water. The plant is right next to Lake Ontario, collecting water to purify. The water enters the plant by means of two 114 and four 182 million litres per day pumps, drawing from an intake some 18 metres below the lake surface and 2960 metres off the shore. Since the pressure of the water at that depth is so strong, there is little need for mechanical pumping; pressure and suction do much of the job. The water is then treated with chemicals, which are aluminium sulphate (alum), lime and chlorine. Alum is used to stick dirt particles together, to make large clumps of dirt called "floc". A lot of chlorine is added to the water to kill the bacteria; if you were to drink the water at this stage, the chlorine level would be dangerously high, but by the end of the filtration it drops to a safe level. This is where the alum does its work: coagulation is basically mixing the alum with the water, which is achieved by high-speed in-line mechanical blenders. Flocculation occurs right afterwards. Alum, which can be either poly-aluminium chloride or aluminium sulphate, is a very sticky substance which clings onto dirt particles. The treatment is then carried out in three stages:
1. Flocculation is achieved by axial flow turbines with varied inputs of energy, and the last two stages are done in two 900 mm diameter pipelines.
2. The next step in treating the water is filtration. The water passes through 8 dual-media filters. This is where some bacteria and the floc are removed. The filters consist of the following, in order: 0.305 m of graded gravel, 0.35 m of sand and 0.460 m of anthracite. This was found to be the most effective composition for the filter. Normally this would be done once, but if the water is really dirty it has to be filtered again to meet the government standard. By now most of the chlorine in the water has killed most of the bacteria, and the level of chlorine in the water is much lower.
3. Here is where they kill any remaining bacteria and add fluoride to the water. They add about 1.2 mg per litre of fluoride; much more than this could cause fluoride poisoning. Chlorine is also added to kill any remaining bacteria. This time, if you drink the water, it is safe. If there are high levels of bacteria, the water has to go through a process called "super-chlorination"; as the name suggests, they increase the chlorine dosage. After that, they reduce the chlorine content by adding sulphur dioxide. Ammonia is also added to the water.
The final stage is storage and distribution. By the time the water gets to your home there may not be any chlorine left, which would be a problem if there were bacteria in the pipes. The ammonia prevents the chlorine from dissipating too easily, so that it keeps killing any bacteria in the pipes. All the water you receive from your taps is a combination of water from all four filtration plants in Toronto.
Waste Disposal
Most of the waste produced in the filtration plant is in the dirty filters. It is too expensive and time-consuming to go down and replace the filters every time they get dirty. Since making clean water is a 24-hour, 7-days-a-week job, they had to think of a way to clean the filters quickly and effectively. To clean the filters they use a process called "backwash". In the backwash, treated water is forced up through the filter and out the other way, and the filter expands inside because of the pressure of the water. The waste goes to a separate place and is then sent to the Highland Creek water pollution control plant for treatment and disposal.
Employment
It is required that at least two people be in the plant at all times. On weekdays, for about 4-5 hours, around 35 people work there. On weekends and on holidays only two people are at the plant. The two required people are usually found in the control room, where staff work twelve-hour shifts. If one person is late for work, the person on duty is required to stay in the control room until relieved by the other person.
Marketing
We mostly pay for water through maintenance fees or through utility bills. On average in North America, water costs about $1.30 in American currency per 1000 gallons; that is essentially less than one cent per gallon. The United States and Canada produce 49 billion gallons of water each day, which works out to revenues of roughly 54.6 million U.S. dollars per day. The Frank J. Horgan filtration plant accounts for a part of this production, with revenues of about US$122,200 a day. About 10% of the water produced is lost or unaccounted for. Canada is amongst the biggest water wasters in the world. An average Canadian uses 340 litres of water every day; that is more than twice the consumption of Europeans. About 39% of the water distributed is used in homes, compared to 27% used in factories, 19% in commercial businesses and 5% used by the public. Therefore, most of the water distributed is used in our homes, and although water is a bargain we must remember that it is in limited supply.
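The daily revenue figure quoted for the plant can be checked with a short calculation. The Python sketch below uses only the numbers given in this section (355 million litres per day and about $1.30 US per 1000 gallons); the conversion factor of roughly 3.785 litres per US gallon is an assumption that does not appear in the text.

    # Rough check of the plant's quoted daily revenue.
    LITRES_PER_DAY = 355_000_000        # average production given above
    PRICE_PER_1000_GALLONS = 1.30       # US dollars, given above
    LITRES_PER_US_GALLON = 3.785        # assumed conversion factor

    gallons_per_day = LITRES_PER_DAY / LITRES_PER_US_GALLON
    revenue_per_day = gallons_per_day / 1000 * PRICE_PER_1000_GALLONS
    print(round(revenue_per_day))       # about 121929, close to the $122 200 quoted above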
f:\12000 essays\sciences (985)\Chemistry\Fission of Fusion.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fission or Fusion
I think that right now, fission is the only way that we can get more energy out of a nuclear reaction than we put in.
First, the energy per fission is very large. In practical units, the fission of 1 kg (2.2 lb) of uranium-235 releases 18.7 million kilowatt-hours as heat. Second, the fission process initiated by the absorption of one neutron in uranium-235 releases about 2.5 neutrons, on the average, from the split nuclei. The neutrons released in this manner quickly cause the fission of two more atoms, thereby releasing four or more additional neutrons and initiating a self-sustaining series of nuclear fissions, or a chain reaction, which results in continuous release of nuclear energy.
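As a rough illustration of the chain reaction described above, the Python sketch below follows the neutron population over a few fission generations. It assumes, purely for illustration, that every released neutron causes another fission; in a real reactor most neutrons are lost or absorbed. The figure of 2.5 neutrons per fission is the one given in the text.

    # Geometric growth of a neutron population in an idealized chain reaction.
    NEUTRONS_PER_FISSION = 2.5          # average figure quoted above

    neutrons = 1.0
    for generation in range(1, 11):
        neutrons *= NEUTRONS_PER_FISSION
        print(f"generation {generation}: about {neutrons:.0f} neutrons")
    # After 10 generations the single starting neutron has grown to about
    # 2.5**10, i.e. roughly 9500 neutrons.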
Naturally occurring uranium contains only 0.71 percent uranium-235; the remainder is the non-fissile isotope uranium-238. A mass of natural uranium by itself, no matter how large, cannot sustain a chain reaction because only the uranium-235 is easily fissionable. The probability that a fission neutron with an initial energy of about 1 MeV will induce fission is rather low, but can be increased by a factor of hundreds when the neutron is slowed down through a series of elastic collisions with light nuclei such as hydrogen, deuterium, or carbon. This fact is the basis for the design of practical energy-producing fission reactors.
In December 1942 at the University of Chicago, the Italian physicist Enrico Fermi succeeded in producing the first nuclear chain reaction. This was done with an arrangement of natural uranium lumps distributed within a large stack of pure graphite, a form of carbon. In Fermi's "pile," or nuclear reactor, the graphite moderator served to slow the neutrons.
Nuclear fusion was first achieved on earth in the early 1930s by bombarding a target containing deuterium, the mass-2 isotope of hydrogen, with high-energy deuterons in a cyclotron. To accelerate the deuteron beam a great deal of energy is required, most of which appeared as heat in the target. As a result, no net useful energy was produced. In the 1950s the first large-scale but uncontrolled release of fusion energy was demonstrated in the tests of thermonuclear weapons by the United States, the USSR, Great Britain, and France. This was such a brief and uncontrolled release that it could not be used for the production of electric power.
In the fission reactions I discussed earlier, the neutron, which has no electric charge, can easily approach and react with a fissionable nucleus ,for example, uranium-235. In the typical fusion reaction, however, the reacting nuclei both have a positive electric charge, and the natural repulsion between them, called Coulomb repulsion, must be overcome before they can join. This occurs when the temperature of the reacting gas is sufficiently high, 50 to 100 million ° C (90 to 180 million ° F). In a gas of the heavy hydrogen isotopes deuterium and tritium at such temperature, the fusion reaction occurs, releasing about 17.6 MeV per fusion event. The energy appears first as kinetic energy of the helium-4 nucleus and the neutron, but is soon transformed into heat in the gas and surrounding materials.
If the density of the gas is sufficient (and at these temperatures the density need be only about 10^-5 atm, almost a vacuum), the energetic helium-4 nucleus can transfer its energy to the surrounding hydrogen gas, thereby maintaining the high temperature and allowing subsequent fusion reactions, or a fusion chain reaction, to take place. Under these conditions, "nuclear ignition" is said to have occurred.
The basic problems in attaining useful nuclear fusion conditions are to heat the gas to these very high temperatures, and to confine a sufficient quantity of the reacting nuclei for a long enough time to permit the release of more energy than is needed to heat and confine the gas. A subsequent major problem is the capture of this energy and its conversion to electricity.
At temperatures of even 100,000° C (180,000° F), all the hydrogen atoms are fully ionized. The gas consists of an electrically neutral assemblage of positively charged nuclei and negatively charged free electrons. This state of matter is called a plasma.
A plasma hot enough for fusion cannot be contained by ordinary materials. The plasma would cool very rapidly, and the vessel walls would be destroyed by the temperatures present. However, since the plasma consists of charged nuclei and electrons, which move in tight spirals around strong magnetic field lines, the plasma can be contained in a properly shaped magnetic field region without reacting with material walls.
In any useful fusion device, the energy output must exceed the energy required to confine and heat the plasma. This condition can be met when the product of confinement time t and plasma density n exceeds about 10^14. The relationship t n >= 10^14 is called the Lawson criterion.
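The criterion can be expressed as a one-line check. In the Python sketch below, the threshold of 10^14 comes from the text, while the example densities and confinement times are made-up values used only to show how the comparison works.

    # Lawson criterion: density times confinement time must reach about 1e14.
    LAWSON_THRESHOLD = 1e14

    def meets_lawson_criterion(density_per_cm3, confinement_time_s):
        """Return True if the density-confinement product reaches the threshold."""
        return density_per_cm3 * confinement_time_s >= LAWSON_THRESHOLD

    # Hypothetical example values, for illustration only:
    print(meets_lawson_criterion(1e14, 0.5))   # False (product 5e13)
    print(meets_lawson_criterion(1e14, 2.0))   # True  (product 2e14)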
Numerous schemes for the magnetic confinement of plasma have been tried since 1950 in the United States, the former USSR, Great Britain, Japan, and elsewhere. Thermonuclear reactions have been observed, but the Lawson number rarely exceeded 10^12. One device, however, the tokamak, originally suggested in the USSR by Igor Tamm and Andrey Sakharov, began to give encouraging results in the early 1960s.
The confinement chamber of a tokamak has the shape of a "torus", with a minor diameter of about 1 m (about 3.3 ft) and a major diameter of about 3 m (about 9.8 ft). A toroidal magnetic field of about 50,000 gauss is established inside this chamber by large electromagnets. A longitudinal current of several million amperes is induced in the plasma by the transformer coils that link the torus. The resulting magnetic field lines, spirals in the torus, stably confine the plasma.
Based on the successful operation of small tokamaks at several laboratories, two large devices were built in the early 1980s, one at Princeton University in the United States and one in the USSR. In the tokamak, high plasma temperature naturally results from resistive heating by the very large toroidal current, and additional heating by neutral beam injection in the new large machines should result in ignition conditions.
Another possible route to fusion energy is that of inertial confinement. In this concept, the fuel, tritium or deuterium ,is contained within a tiny pellet that is then bombarded on several sides by a pulsed laser beam. This causes an implosion of the pellet, setting off a thermonuclear reaction that ignites the fuel. Several laboratories in the United States and elsewhere are currently pursuing this possibility. Progress in fusion research has been promising, but the development of practical systems for creating a stable fusion reaction that produces more power than it consumes will probably take decades to realize. The research is expensive, as well.
However, some progress has been made in the early 1990s. In 1991, for the first time ever, a significant amount of energy, about 1.7 million watts, was produced from controlled nuclear fusion at the Joint European Torus (JET) Laboratory in England. In December 1993, researchers at Princeton University used the Tokamak Fusion Test Reactor to produce a controlled fusion reaction that output 5.6 million watts of power. However, both the JET and the Tokamak Fusion Test Reactor consumed more energy than they produced during their operation.
If fusion energy does become practical, it offers many advantages: a virtually limitless source of fuel (deuterium from the ocean), no possibility of a reactor accident (since the amount of fuel in the system is very small), and waste products that are much less radioactive and simpler to handle than those from fission systems.
I conclude that even though fusion is much better, cleaner, and safer than fission, we do not have the knowledge of how to create and contain the energy released in a fusion reaction. So, until we do, fission is the only way we can use the atom to create power.
f:\12000 essays\sciences (985)\Chemistry\Flight Chemistry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Jonathan Cerreta
Chemistry
"Crash Course in Density"
As flight 143, a twin-engine 767, was passing over Red Lake on its way to Edmonton, Canada, the left front fuel pump warning light went on. There were a few possible causes, such as the fuel pump failing, a fuel line clogging, or an empty fuel tank. The former two were easily dealt with, since the plane could fly without one fuel pump. However, the last possibility was horrifying. After a few minutes, the second fuel pump in the left wing began to blare. It would be too much of a coincidence for two fuel pumps to independently fail, or two fuel lines to independently clog, so it was apparent that the left tank was out of fuel.
Quickly, the pilots decided that getting to Edmonton was out of the question. The nearest large airport was at Winnipeg, so they radioed ahead and changed their course. In a few minutes, all four of the fuel pumps had failed. The worst possible news: they were out of fuel. In a few more minutes the engines stopped running, and all of the high-tech instruments became useless.
They realized that they could not even make it to Winnipeg. Their only chance was an abandoned Air Force airstrip. Unfortunately, the airstrip had been converted to a race track, complete with race cars, fences, and spectators. The 767 crash landed, and, fortunately, no one was killed.
There were many contributing factors that made this plane run out of fuel. First of all, the computerized fuel gauge was not working, and a maintenance worker said, incorrectly, that the plane was still certified to fly. To measure the amount of fuel remaining, the crew used a drip stick method. They determined that there were 7 682 liters in the tank. However, in the past they had always measured fuel in pounds, while the 767 consumed fuel in kilograms. The drip sticks did not express the amount of fuel in pounds or kilograms, but in liters. It seems to be a simple matter of conversion to arrive at the answer: all they needed to know was how many kilograms were in a liter. Someone said 1.77, and they calculated the value they needed. However, the conversion factor of 1.77 was not kilograms per liter, but pounds per liter. The actual value is 0.803. They were off by more than 50%. This problem would have been avoided if they had kept track of the units during the conversion. That is why it is always important to keep track of the units.
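The conversion error can be reproduced in a few lines. The Python sketch below uses only the figures given above: 7 682 litres measured by drip stick, the incorrect factor of 1.77 (actually pounds per litre) and the correct factor of 0.803 kilograms per litre.

    # Reproducing the Flight 143 unit-conversion error.
    FUEL_ON_BOARD_LITRES = 7682

    WRONG_FACTOR = 1.77    # pounds per litre, mistakenly treated as kg per litre
    RIGHT_FACTOR = 0.803   # kilograms per litre

    assumed_kg = FUEL_ON_BOARD_LITRES * WRONG_FACTOR   # what the crew thought they had
    actual_kg = FUEL_ON_BOARD_LITRES * RIGHT_FACTOR    # what was really in the tanks

    print(round(assumed_kg))                 # about 13597 "kilograms"
    print(round(actual_kg))                  # about 6169 kilograms
    print(round(assumed_kg / actual_kg, 2))  # overestimate by a factor of about 2.2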
f:\12000 essays\sciences (985)\Chemistry\Fluoridation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In 1931 at the University of Arizona Agricultural Experiment Station M. C. Smith, E. M.
Lantz, and H. V. Smith discovered that when given drinking water supplied with fluorine,
rats would develop tooth defects. Further testing by H. T. Dean and E. Elove of the
United States Public Health Service confirmed this report, and stated that this condition is what is known
as mottled tooth. Mottled tooth is a condition in which white spots develop on the back
teeth. Gradually the white spots get darker and darker until the tooth is eroded
completely. This was believed to be caused by fluorine in drinking water (Behrman pg.
181).
A strong uproar was heard when this was released and people wanted all fluorine
out of their water. But later tests concluded that communities with high levels of fluorine
in their drinking water suffered less dental cavities. Further testing concluded that at least
1.0 parts per million of fluorine could help to prevent cavities, but more than 1.5 PPM
would cause mottled tooth, so basically a little fluorine would be okay but a lot of fluorine
would be bad (Behrman 182).
In 1938, with this information, Dr. Gerald Cox of the Mellon Institute began to
promote the addition of fluoride to public water systems, claiming that it would reduce
tooth decay; however, there were two major obstacles in his path: the American Medical
Association and the American Dental Association. Both associations wrote articles in
their journals about the dangers of fluoridation of water supplies. The American Dental
Association wrote the following in the October 1, 1944 issue: "We do know the use of
drinking water containing as little as 1.2 to 3.0 parts per million of fluorine will cause such
developmental disturbances in bones as osteosclerosis, spondylosis and osteoporosis, as
well as goiter, and we cannot afford to run the risk of producing such serious systemic
disturbances in applying what is at present a doubtful procedure intended to prevent
development of dental disfigurements among children." (Yiamouyiannis pg. 138)
Despite these warnings Dr. Cox continued to promote fluoridation of water
supplies and even convinced a Wisconsin dentist, J. J. Frisch to promote the addition of
fluoride to water supplies in his book, The Fight For Fluoridation. Frisch soon garnered
the support of Frank Bull. Frank Bull organized political campaigns in order to persuade
local officials to endorse fluoridation. This began to apply heavy pressure on the United
States Public Health Service and the American Dental Association. (Yiamouyiannis pg.
139)
In 1945, before any tests had proven that fluoride reduced cavities, it
was added to the drinking water supply of Grand Rapids, Michigan. This was done as a
test. It would be the experiment to see if fluoride would decrease the number of cavities.
The data would be collected periodically over the next five years, and in 1950 the data
showed that the number of cavities was decreasing, but in the town of Muskegon, which
did not have a fluoridated water supply, cavities decreased by the same margin. However
the information about Muskegon was covered up (Waldbott pg. 262).
A few days after the information about Grand Rapids was released the United
States Public Health Service called a press conference in which they said that:
"Communities desiring to fluoridate their communal water supplies should be strongly
encouraged to do so." (Waldbott pg. 263)
In June 1951, dental health representatives from around the U. S. met with dental
health officials to discuss the promotion and implementation of fluoride. It was at this
conference that the United States Public Health Service formally endorsed fluoridation. It
had finally succumbed to the pressure. Two years later, in 1953, the American Dental
Association also began to support fluoridation, when they released a pamphlet, sending it
to every dentistry office in the U. S. The pamphlet told the advantages of using fluoride,
encouraged acceptance and use of fluoride, and sought to overcome public resistance to
fluoride (Coffel).
From 1953 until 1977 the only debates going on about fluoridation were over how to fund
it. Most organizations supported fluoridation, and those that did not soon did, including,
the National Research Council, the American Water Works Association, the American
Medical Association, and the World Health Organization. All of these organizations
endorsed fluoridation (Waldbott pg. 277).
However in 1977, the fluoridation controversy was brought back up by John
Yiamouyiannis. A committee was commissioned to clear up the fluoride controversy once
and for all. But it did not; it only stirred it up even more. Yiamouyiannis led this committee.
Yiamouyiannis, in his statement to Congress referring to the results the committee
gathered, said that they "provide clear evidence that fluoride is a carcinogen". In his study
Yiamouyiannis learned that people living in the nation's ten largest fluoridated cities
suffered 15 percent more cancer than those living in the ten largest non-fluoridated cities.
Backing up this report was senior science advisor for the Environmental Protection
Agency, William L. Marcus. He stated that the committee report not only overlooked
liver cancer evidence, but also would have reported clear evidence of carcinogenicity, had
they not yielded to pressure from pro-fluoride groups to release a "sanitized" report
(Coffel).
In 1978 Dr. Wallace Armstrong, Dr. Robert Hoover, and Dr. Stephen Barrett
published a two-part report on fluoridation for "Consumer Reports". These two articles
were meant to discredit Yiamouyiannis' findings that fluoridation is linked to cancer. The
authors deliberately lied and slandered Yiamouyiannis, so that the general public would
feel safe; after all, by now the majority of water supplies in the country had been
fluoridated. This battle raged on for several years, with people trying to discredit
Yiamouyiannis, but he would not go away. The battle over whether to use fluoride or not is
still going on. It has been proven to be toxic and cause some serious health problems, but
it is still widely used in dentistry, and more importantly, is still contained in our drinking
water supplies (Yiamouyiannis pg. 144).
Although fluoride is still used and fluoridated water is still drunk, there are many
disadvantages that many people may not know about that could cause serious health
problems. The first major health threat is fluoride's link to cancer. The most recent study
was conducted with rats. 180 male rats were given fluoridated water. Of those
180, 80 were given fluoridated water with a fluoride content of 78 parts per million. Of
those 80 rats, three developed a very rare type of bone cancer called osteosarcoma. Such
a rare cancer should not be found at a rate of three out of 80. Admittedly, 78 parts per million
is 78 times what is in people's water today, but given enough water, a person could still
develop cancer.
Of course, more than 1 part per million can cause mottled teeth, also
known as dental fluorosis, a condition in which white spots appear on the teeth and
gradually become darker and darker until the tooth is completely eroded away and
destroyed (Coffel).
In the town of Kizilcaoern, Turkey, the water has 5.4 parts per million fluoride. In
this town all the people and animals age prematurely. Men who are 30 look 60; this is due
to the high fluoride content in the water. Their skin wrinkles excessively, they have severe
arthritic pain, and their bones shatter like glass after a fall. The fluoride in the water
breaks down the protein Collagen. Collagen makes up 30 percent of the body's protein
and serves as a major structural component in skin, ligaments, bones, tendons, muscles,
cartilage, and teeth. When the Collagen is broken down the skin and other parts of the
body weaken. As the skin weakens it wrinkles (Yiamouyiannis pg. 4).
There are many other problems attributed to the increased aging caused by fluoride, such as
severe arthritis. Other body organs also stop functioning properly because they get old
too fast; just as when a person ages naturally, the organs no longer function like they once
did.
Fluoride can also damage the immune system. Studies done by Dr. Sheila Gibson,
from the University of Glasgow, show that fluoride slows the migration rate of white
blood cells. White blood cells must travel through the walls of blood vessels to fight
disease, but fluoride slows them down. They don't work as fast as they
should, and this weakens the immune system. The following table shows the migration
rates of white blood cells treated with different concentrations of fluoride.
(Yiamouyiannis pg. 23)
Another one of the most damaging health hazards caused by fluoride is fluoride
poisoning. This does not consist of one symptom or condition, but many. It begins with
dental fluorosis. Then the bones begin to show signs of accelerated aging and develop what
is known as bony outgrowth, an unnatural enlargement of the bone caused when fluoride
redeposits calcium and other ions on the bones and teeth. Bony
outgrowth can cause joints to lock because the enlarged bone prevents the
tendons and ligaments from working properly (Yiamouyiannis pg. 40).
Fluoride can also cause chromosome damage. When a person's chromosomes
are damaged by fluoride, the children born to that person
will have serious defects. Other side effects of fluoride that are not as serious as
the ones mentioned above are black tarry stools, bloody vomit, faintness, nausea,
vomiting, shallow breathing, stomach cramps, tremors, unusual excitement, unusual
increase in saliva, watery eyes, weakness, constipation, loss of appetite, pain and aching of
bones, skin rash, sores in mouth and on lips, stiffness, weight loss, and white, brown, or
black discoloration of teeth. (Yiamouyiannis pg. 6)
Despite all of these disadvantages, fluoride has been shown to reduce tooth
decay by 25%. It does this by redepositing calcium and other ions onto the teeth, but this
comes with so many disadvantages that it is not really beneficial to one's health to use it. It
will benefit one's dental health, but will harm many other aspects of one's life. (Coffel)
Works Cited
1. Behrman, A. S., Water is Everybody's Business. Doubleday, New York, 1968.
2. Coffel, Steve, "The Great Fluoride Fight", Garbage, Vol. 4, Issue 3. Dovetail
Publishers, New York, 1992.
3. Waldbott, George L., Fluoride: The Great Dilemma. Coronado Press, Kansas, 1978.
4. Yiamouyiannis, John, Fluoride: The Aging Factor. Health Action Press, Delaware,
1986.
f:\12000 essays\sciences (985)\Chemistry\Following a dream toward freedom.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Following a dream toward freedom"
465 words
"Following a dream toward freedom"
Freedom has always come very easily for
me. I've always had it and I've never been
without it. But as I sit here thinking I remember
all the stories that were told to me, about the
struggles we were put through to get these
freedoms. Since I am a black woman my
general knowledge of history tells me that
the struggle for freedom was extremely
great. Blacks had to endure slavery and go
through wars to achieve their freedoms.
Women had to live in silence while the world
was run without their say. To overcome this
they created women's suffrage and women's
rights acts to finally allow them their
freedoms. It is an extremely triumphant feeling
to know the things they went through to give
me the luxuries I have today. But what if they
didn't? What if we were still having to fight
wars for our freedoms? I often wonder what
slavery would be like. Looking at today's
society, slavery is still the same nightmare it
was then. People in South Africa and Iran
wake to this same nightmare everyday. They
have no personal rights or freedoms at all.
Every day they live in fear for their lives. If it's
not being threatened by their own government,
it's being threatened by the lack of food they
receive. Imagining the things they go through
every day makes me wonder about my freedoms.
Why is it that I can go to my refrigerator
whenever I want and be able to get a nice,
clean drink of water, while in some
foreign country a 10-year-old kid, who has
the same thirst as I do, has to go to a lake
where the animals bathe to get a foul, disease-
infested drink? Knowing about these people's
sufferings really makes me realize how
important and how special the freedoms that
Americans possess are. I believe everyone takes
their freedoms for granted. We are by far
granted the greatest freedoms in the
world. But we are far from perfect and
unfortunately we have our own struggles for
freedom, ranging from racial freedoms to
religious freedoms. I believe that freedom is
the greatest thing in the world but
unfortunately there are a lot of sacrifices to
be made. Americans need to realize that
freedom is not a necessity. We don't have to
have it. Freedom is a gift. It is the most precious
gift in the world simply because it cannot be
bought in any store. It is given to you just for
being an American. My last thought on this is
that I think everyone has had and still has
there struggles with freedom. But I believe
that what doesn't kill us makes us stronger,
so the struggles we endure are worth it in the
end........Aren't they?
f:\12000 essays\sciences (985)\Chemistry\Fossil Fuels.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Our society has become dependent on fossil fuels for energy. That seems fine for now
considering the fact that everyone is generally happy in the present situation. Fossil fuels
are relatively inexpensive and seem to be doing the trick right now. Using fossil fuels,
however, raises such issues as global warming, rising costs of scarce resources, and shortages
of raw materials. None of these problems will draw full attention until the demand arises;
it's the old supply and demand scenario. Although my opinion may seem pessimistic, if
you look at past events they point to the supply and demand scenario.
During World War II rubber supplies were cut off to the western world and we
began to work on a compound that was a synthetic rubber. We succeeded in supplying
the demand and now that same synthetic compound is used today. My theory is that the
same thing will happen with such things as plastic, which is made from fossil fuels.
Someone will either come up with a synthetic plastic or come up with something to
substitute for plastic. The person who comes up with the solution will become an
instant millionaire and everyone will be happy. There is one drawback to this way of
solving problems: sure, it's great to wait until the demand arises, but we should still learn
from our mistakes. We should learn to plan ahead and see what the consequences could
possibly be.
We still have other demands to meet; there are three major demands of fossil
fuels, and they are heating, transportation, and industry. Although transportation is taken
care of, we may not like the thought of a solar car or an electric car, but there are
solutions out there. Frankly, the oil companies don't want to lose their monopoly in the
transportation industry and that brings us into the whole economy issue. If we run out of
fossil fuels what will happen to the economy? Will it suffer? These are just a few
questions that are asked every day, but for now we are just going to look at solutions for
demands on fossil fuels. This chart below illustrates the demands and the possible
solutions.
Energy Demand      Alternative Energy Sources and Practices
heating            - solar heating, heat pumps, geothermal energy,
                     biomass gas, and electricity from hydro and
                     nuclear plants
transportation     - alcohol/gasohol and hydrogen fuels, and
                     electric vehicles
                   - mass transit, bicycles, and walking
industry           - solar energy, nuclear energy, and
                     hydroelectricity
                   - improved efficiency and waste heat recovery
In conclusion I don't think the need for a substitute for fossil fuels will be fully
met until the demand arises. A substitute may be found before the need, but as history
demonstrates, the use of it will either be delayed or will not occur. As for the economy,
jobs will be lost, but jobs will also be created in new areas. The big oil companies will
lose big and so may some small countries like Kuwait. No one can really predict how the
economy will turn out but if the substitute or substitutes are less dangerous to the
environment and not as hard on natural resources, our economy will eventually get over
the loss of its precious oil.
Natural Resources Report
by
Sean Falconer
Chemistry 122 Mr. Hart
1997-02-07
f:\12000 essays\sciences (985)\Chemistry\Freezing point of Napthalene.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Freezing Point of Naphthalene
I. Purpose
To determine the freezing point of a known substance, naphthalene
II. Materials
ringstand gas source
test tube test tube clamps
thermometer naphthalene
Bunsen burner goggles
hose stopwatch
III. Procedure
1. Assemble the Bunsen burner, attaching one end of the hose to the burner and the
other to a gas source.
2. Assemble the ring stand so that a ring clamp is attached to the stand holding the
test tube that will be used in the experiment.
3. Fill the test tube to approximately 1/8 capacity with naphthalene crystals.
4. Place the thermometer in the crystals so that it is surrounded by the naphthalene
powder but not touching the sides or bottom of the test tube. Use a clamp to hold
the thermometer in place.
5. Ignite the Bunsen burner and, using direct heat, melt the naphthalene powder until
it completely turns to a liquid. When the temperature reaches approximately 90 °C,
stop heating.
6. Observe the change in temperature from 90 °C to 70 °C, recording the
temperature at regular intervals, preferably 15 seconds. This data will be used to
make a chart later.
7. Once the temperature has fallen to 70 °C, re-melt the now-frozen naphthalene so
that the thermometer can be removed. Properly dispose of the naphthalene liquid as
instructed by the teacher.
IV. Data
Time Elapsed      Temperature of Naphthalene          Time       Temperature
Initial (0:00)    100 °C                               7:00      78.5 °C
0:30              97.5 °C                              7:15      78.3 °C
1:00              93.0 °C                              7:30      78.3 °C
1:30              89.5 °C                              7:45      79.0 °C
2:00              86.1 °C                              8:00      79.0 °C
2:30              84.6 °C                              8:15      79.0 °C
2:45              82.3 °C                              8:30      79.0 °C
3:00              81.2 °C                              8:45      79.0 °C
3:15              81.0 °C                              9:00      79.0 °C
3:30              80.5 °C                              9:15      78.5 °C
3:45              80.2 °C                              9:30      78.1 °C
4:00              80.0 °C                              9:45      78.0 °C
4:15              79.9 °C                             10:00      78.0 °C
4:30              79.8 °C                             10:15      77.5 °C
4:45              79.4 °C                             10:30      77.0 °C
5:00              79.1 °C                             10:45      76.5 °C
5:15              79.1 °C                             11:00      76.0 °C
5:30              79.0 °C                             11:15      75.2 °C
5:45              78.9 °C                             11:30      73.8 °C
6:00              78.8 °C                              11:45      73.0 °C
6:25              78.8 °C                             12:00      72.1 °C
6:30              78.7 °C                             12:15      71.1 °C
6:45              78.6 °C                             12:30      70.3 °C
V. Graph
(See following pages)
VI. Calculations
Using 80.1 °C as the theoretical value for the freezing point of naphthalene, we can now
determine percent error.
Percent Error = ((Theoretical - Experimental) / Theoretical) x 100
Percent Error = ((80.1 °C - 79.0 °C) / 80.1 °C) x 100
Percent Error = 1.4%
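The same arithmetic can be checked with a few lines of Python. This is only a sketch of the calculation above, using the plateau value read from the cooling curve:

    # Percent error for the naphthalene freezing point, using the values above.
    theoretical = 80.1   # accepted freezing point of naphthalene, in degrees C
    experimental = 79.0  # plateau temperature read from the cooling-curve data, in degrees C

    percent_error = (theoretical - experimental) / theoretical * 100
    print(f"Percent error = {percent_error:.1f}%")  # prints 1.4%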
VII. Conclusions
In this lab, we heated the known substance naphthalene in a test tube to approximately
100 °C and observed its temperature while it cooled to approximately 70 °C. Over a time
period of 12 minutes and 30 seconds, we recorded the temperature at regular 15-second
intervals, and, with this data, constructed a chart showing the general cooling curve. Upon
inspection of the graph and our data chart, we found the experimental freezing point of
naphthalene to be around 79 °C. This results in 1.4% error when compared to the actual
value for the freezing point of naphthalene, 80.1 °C. Considering the impurities in the
consumer-grade naphthalene, the interference of outside air on the temperature of the test
tube and its contents, and the inaccuracy of reading tenths of a degree on a thermometer
graduated in whole numbers, the error we acquired in this lab was minimal and easily
explained.
f:\12000 essays\sciences (985)\Chemistry\front office.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Role of The Front Office
A security program is most effective when all employees participate in the hotel's security efforts. Front office staff play a particularly important role. Front desk agents, door attendants, bellpersons, and parking attendants have the opportunity to observe all persons entering or departing the premises. Suspicious activities or circumstances involving a guest or visitor should be reported to the hotel's security department or a designated staff member.
Several procedures front desk agents should use to protect guests and property have already been mentioned. For example, front desk agents should never give keys, room numbers, messages, or mail to anyone requesting them without first requiring appropriate identification. Similarly, the front desk agent should not announce an arriving guest's room number.
Guests may be further protected if the front office prohibits staff members from providing guest information to callers or visitors. Generally, front desk agents should not mention guest room numbers. People calling guests at the hotel should be directly connected to the appropriate guestroom without being informed of the room number. Conversely, someone asking for a specific room number over the telephone should never be connected until the caller identifies whom he or she is calling and the hotel employee verifies the identity of the person in the room requested. A person inquiring at the front desk about a guest may be asked to use the house phones so that they connect only to the hotel operator. The caller can then be properly screened to provide additional security.
Front office staff may also inform guests of personal precautions they may take. For example, front desk agents may suggest that guests hide and secure any valuables left in their cars. Bellpersons accompanying the guest to a room generally provide instructions on the operation of in-room equipment. The bellpersons may also review any decals or notices in the room relating to guest security. This should always include emergency evacuation paths and procedures. The front office may provide the guests with flyers containing safety tips, such as the example shown in exhibit 6.5.
f:\12000 essays\sciences (985)\Chemistry\Gallium.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1871
Dmitrii Ivanovich Mendeleev predicts the existence and properties of the element
after zinc in the periodic table. He gives it the name "eka-aluminium".
1875
Paul Emile Lecoq de Boisbaudran discovers gallium.
Its properties closely match those predicted by Mendeleev.
Gallium, atomic number 31, is very similar to aluminum in its chemical
properties. It does not dissolve in nitric acid because of the protective film of
gallium oxide that is formed over the surface by the action of the acid. Gallium
does, however, dissolve in other acids and in alkalis.
Gallium was discovered (1875) by Paul Emile Lecoq de Boisbaudran, who observed
its principal spectral lines while examining material separated from zinc blende.
Soon afterward he isolated the metal and studied its properties, which coincided with those
that Dmitrii Ivanovich Mendeleev had predicted a few years earlier for eka-aluminium, the
then undiscovered element lying between aluminum and indium in his periodic table.
Though widely distributed at the Earth's surface, gallium does not occur
free or concentrated in independent minerals, except for gallite. It is extracted as
a by-product from zinc blende, iron pyrites, bauxite, and germanite.
Silvery white and soft enough to be cut with a knife, gallium takes on a bluish
tinge because of superficial oxidation. Unusual for its low melting point
( about 30 degrees C, 86 degrees F ), gallium also expands upon solidification and
supercools readily, remaining a liquid at temperatures as low as 0 degrees C ( 32 degrees F ).
Gallium has the longest useful liquid range of any element. The liquid metal
clings to glass and similar surfaces. The crystal structure of gallium is orthorhombic.
Natural gallium consists of a mixture of two stable isotopes: gallium-69 ( 60.4 percent )
and gallium-71 (39.6 percent ).
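As a quick illustration of what those abundances imply, here is a short Python sketch; the individual isotopic masses are standard reference values and are not taken from this essay:

    # Weighted-average atomic mass of natural gallium from its two stable isotopes.
    abundance_69, mass_69 = 0.604, 68.9256   # gallium-69: 60.4 percent abundance
    abundance_71, mass_71 = 0.396, 70.9247   # gallium-71: 39.6 percent abundance

    average_mass = abundance_69 * mass_69 + abundance_71 * mass_71
    print(f"Average atomic mass of gallium ~ {average_mass:.2f} u")  # about 69.72 u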
Somewhat similar to aluminum chemically, gallium slowly oxidizes in moist air
until a protective film forms, and it becomes passive in cold nitric acid.
Gallium has been considered as a possible heat-exchange medium in nuclear reactors,
although it has a high neutron cross section. Radioactive gallium-72 shows some promise
in the study of bone cancer; a compound of this isotope is absorbed by the cancerous
portion of the bone.
The most common use of gallium is in a gallium scan. Gallium scans are often used
to diagnose and follow the progression of tumors or infections. Gallium scans can also be
used to evaluate the heart, lungs, or any other organ that may be involved with inflammatory
disease.
A gallium scan usually requires two visits to the Nuclear Medicine Department.
On the first day you receive an injection in a vein in your arm; you will then be scheduled
to return between 2 and 5 days later, depending on your diagnosis. Your initial scan can
take several hours, while you lie on a stretcher, or imaging table, and a camera is positioned
above you or below you, taking pictures as it moves slowly along the length of your body.
No special preparation is required before the scan, and the gallium is usually excreted
through the bowel.
f:\12000 essays\sciences (985)\Chemistry\Geothermal Energy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Matt Arnold
9/17/96
Physics 009
Professor Arns
GEOTHERMAL ENERGY
The human population is currently using up its fossil fuel supplies at staggering rates. Before long we will be forced to turn somewhere else for energy. There are many possibilities such as hydroelectric energy, nuclear energy, wind energy, solar energy and geothermal energy to name a few. Each one of these choices has its pros and cons. Hydroelectric power tends to upset the ecosystems in rivers and lakes. It affects the fish and wild life population. Nuclear energy is a very controversial subject. Although it produces high quantities of power with relative efficiency, it is very hard to dispose of the waste. While wind and solar power have no waste products, they require enormous amounts of land to produce any large amounts of energy. I believe that geothermal energy may be an alternative source of energy in the future. There are many things that we must take into consideration before geothermal energy can be a possibility for a human resource. I will be discussing some of these issues, questions, and problems.
In the beginning, when the solar system was young and the earth was still forming, things were very different. A great mass of elements swirled around a dense core in the middle. As time went on, the accumulation of elements with similar physical properties into hot bodies caused a slow formation of a crystalline barrier around the denser core. Hot bodies consisting of iron were attracted to the core with greater force because they were more dense. These hot bodies sank into and became part of the constantly growing core. Less dense elements were pushed towards the surface and began to form the crust. The early crust, or crystalline barrier, consisted of ultrabasic, basic, calc-alkaline, and granitic rock. The early crust was very thin because the core was extremely hot. It is estimated that the mantle was then
200 to 300 degrees Celsius warmer than it is today. As the core cooled through volcanism the crust became thicker and cooler.
The earth is made up of four basic layers: the inner solid core, the outer liquid core, the mantle, and the lithosphere and crust. The density of the layers gets greater the closer one gets to the center of the earth. The inner core is approximately 16% of the planet's volume. It is made up of iron and nickel compounds. Nobody knows for sure, but the outer core is thought to consist of sulfur, iron, phosphorus, carbon and nitrogen, and silicon. The mantle is said to be made of metasilicate and perovskite. The continental crust consists of igneous and sedimentary rocks. The oceanic crust consists of the same with a substantial layer of sediments above the rock.
The crust covers the outer rigid layer of the earth called the lithosphere. The lithosphere is divided into seven main continental plates. These continental plates are constantly moving on a viscous base. The viscosity of this base is a function of the temperature. The study of shifting continental plates is called plate tectonics. Plate tectonics allows scientists to locate regions of geothermal heat emission. Shifting continental plates cause weak spots or gaps between plates where geothermal heat is more likely to seep through the crust. These gaps are called subduction zones. Heat emission from subduction zones can take many forms, such as volcanoes, geysers and hot springs. When gaps induced by lateral plate movement occur between plates, collisions occur between other plates. This results in partial plate destruction. This causes massive amounts of heat to be produced due to frictional forces and the rise of magma from the mantle through propagating lithosphere fractures and thermal plumes, sometimes resulting in volcanism. During plate movement, continental plates are constantly being consumed and produced, changing plate boundaries. When collisions between plates occur, the crust is pushed up, sometimes forming ranges of mountains. This is the way that most mid-oceanic ranges were formed. Continental plates sometimes move at rates of several centimeters per year. Currently the Atlantic ocean is growing and the Pacific ocean is shrinking due to continental plate movement.
In Rome, people first used geothermal resources to heat public bath houses that were used for bathing, or balneology. The mineral water was thought to be therapeutic. The minerals in the water have been used since the beginning of time. Throughout the years, geothermally heated water or steam has been used in many different systems, from heating houses and baths to being a source of boric acids and salts. Today geothermal fluids provide energy for electricity production and mechanical work. Boric acid is still extracted and sold. Other byproducts of geothermally heated liquid are carbon dioxide, potassium salts, and silica.
The first 250 kilowatt geothermal power plant began operation in 1913 in Italy. By 1923 the United States had drilled its first geothermal wells in California. In 1925 Japan built a 1 kilowatt experimental power plant. The first power plants constructed in Italy were destroyed in WWII, then rebuilt bigger and more efficient. Mexico built a 3.5 megawatt unit in 1959. In the United States an 11 megawatt system at the geysers in California was constructed in 1960. Japan then installed a 22 megawatt plant in 1966. Geothermal energy has been used for things other than energy production, such as geothermal space-heating systems, horticulture, aquaculture, animal husbandry, soil heating and the first industrial operation of paper mills in New Zealand. Large scale geothermal space-heating systems were constructed in Iceland in 1930.
The word "geothermal," refers to the thermal energy of the planetary interior and it is usually associated with the concept of systems in which there is a large reservoir of heat to comprise energy sources. Geothermal systems are classified and defined depending on their geological, hydrogelogical and heat transfer characteristics. Most geothermal heat is trapped or stored in rocks. A liquid or gas is usually required to transfer the heat from the rocks. Heat is transferred in three different ways, convection, conduction, and radiation. Conduction is the transfer of energy from one substance to another, through a body that may be solid. Convection is the transfer of energy from one substance to another through a working moving medium, such as water. The medium usually transfers the energy in an upward direction. Radiation is the transfer of energy out of a substance through the excitement of gas molecules surrounding a substance. Radiation is dependent upon two things the object emitting the heat and the surrounding's ability to absorb heat. Convective geothermal systems are characterized by the natural circulation of a working fluid or water. The heated water tends to rise and the cool to sink continually circulating water throughout the ground. The majority of the heat transfer is done through convection and conduction, radiation hardly ever effects heat flow. When geothermal heated water collects into a reservoir one form of a geothermal resource is created. One can approximate the amount of thermal energy present in a geothermal resource by comparing the average heat content of the surface rocks with the enthalpy of saturated steam. Enthalpy is energy in the form of heat released during a specific reaction or the energy contained in a system with certain volume under certain pressure. It is generally accepted that below a depth of ten meters, the temperature of the ground increases one degree Celsius for every thirty or forty meters. At a depth of ten meters annual temperature changes no longer affect the temperature or the earth.
The most common geothermal resources used for the production of human consumed energy are hydrothermal. Hydrothermal systems are characterized by high permeability by liquids. There are two basic types of hydrothermal systems, vapor and liquid dominated systems.
In a liquid based system, pumps must be placed very deep in the well where only the liquid phase is present. By keeping the liquid under pressure it is possible to keep the liquid at a much higher temperature than the liquid's normal boiling point. If the liquid is not kept under pressure, it will flash. Flashing is the process of vaporization. It requires 540 calories per gram of heat to vaporize water. The super heated pressurized water is pumped up a long shaft into the plant. When it reaches the plant, controlled amounts of the pressurized water is allowed to flash or vaporize. The rapidly expanding gas pushes or turns the turbine. A power plant may have numerous flash cycles and turbines. The more flash cycles the higher the efficiency of the power plant. Once the heated liquid has been used to the point where it has cooled to an unusable temperature it is reinjected into the ground in hopes that it will replenish the geothermal well.
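To put the quoted 540 calories per gram in perspective, here is a minimal Python sketch of the flash-energy arithmetic; the one-tonne example mass is arbitrary, and the calorie-to-joule factor is the standard conversion:

    # Heat needed to flash (vaporize) water that is already at its boiling point,
    # using the 540 cal/g latent heat quoted above.
    LATENT_HEAT_CAL_PER_G = 540.0
    CAL_TO_JOULE = 4.184

    def flash_energy_joules(mass_kg: float) -> float:
        """Energy in joules required to vaporize mass_kg of water at the boiling point."""
        return mass_kg * 1000.0 * LATENT_HEAT_CAL_PER_G * CAL_TO_JOULE

    print(f"{flash_energy_joules(1000):.3g} J per tonne of water")  # about 2.26e9 J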
Vapor systems work in much of the same way. The super heated gas flows through surface reboilers that remove all of the non-condensable gases from the mixture of gases. The gas is pumped into pressurization tanks where extreme pressure causes the gas to condense. The super heated liquid is then allowed to flash. The rapidly expanding gas turns the turbine. Specific examples and sites of electrical energy production will be discussed later.
Conductive geothermal systems consist of heat being transferred through rocks and eventually being transmitted to the surface. The amount of heat transferred in a conductive geothermal is considerably less than the heat transferred in a convective system. Conductive geothermal systems lack the water to efficiently transfer the heat, so water must be artificially injected around the hot rocks. The heated water is then pumped from the underground reservoir to the surface. This system is not as effective as others because the temperature that the heated water reaches is not very great.
Geopressured geothermal systems are similar to hydrothermal systems. The only difference is the pressure of the high temperature reservoir. Geopressured geothermal systems may be associated with geysers. Some geopressured geothermal systems reach pressures of fifty to one hundred megapascals (MPa) at depths of several thousand meters. These systems provide energy in the form of heat and water pressure making them more powerful and useful. Currently most electricity producing geopressured geothermal systems are only experimental. There are many factors in this type of system that are very hard to predict such as the reservoirs potential energy. It is very hard to predict the force at which the water will be projected from the well since the pressure of the high temperature is constantly changing. The salinity of the liquid projected is also very high. In some instances the liquid consists of twenty to two hundred grams of impurities per liter.
Today, with the depletion of many other natural resources, using geothermal resources is more important than ever. Hot springs are natural devices that bring geothermally heated water to the surface of the earth. This process is very efficient; little heat is lost during the transportation of the water to the surface. The heat is brought to the surface via water circulation in either the liquid or gaseous form. Geothermal hot springs are a good source of energy because it is probable that they will never be exhausted as long as water is not pumped from the spring faster than it naturally replenishes itself.
A simplified version of a vapor run geothermal electric plant might operate under the following conditions. Holes are drilled deep into the ground and fitted with pipes that resist corrosion. When the hole is first opened, steam escapes into the atmosphere. Once the pipes are inserted into the holes the steam expansion becomes adiabatic. An adiabatic system is a system in which there is little or no heat loss. Next the pipe is connected to the central power station. No condensation takes place because the steam is superheated. Many drill holes are connected to the central power station which results in mass quantities of superheated water vapor pushing the turbine. The more drill holes that are connected to the power station the greater the pressure of the gas flowing through the turbine. The greater the pressure of the gas the faster the turbine turns and the more electricity produced. In some power plants the water vapor itself is not used to turn the turbines but only to heat another purer substance. This method is less efficient but does not corrode the machinery. Most superheated gas from geothermal resources is not pure water but a mixture of gases. Some of these gases can be extremely corrosive so using purer non-corrosive materials has its advantages. Some common gases used are ethyl chloride, butane, propane, freon, ammonia. The efficiency of these generators is limited by the second law of thermodynamics.
The second law of thermodynamics states that a thermal engine will do work when heat entering the engine from a high temperature reservoir is at a different temperature than the exhaust reservoir. The thermal engine must take heat from the high temperature reservoir, convert some of that heat to work, and exhaust the remaining heat into a low temperature reservoir. The difference between the heat put into the engine and the heat deposited as waste energy is transformed by the engine into mechanical work. The maximum possible efficiency of a heat engine is called its Carnot efficiency. Carnot efficiency is never reached, and the actual efficiency is always lower than the Carnot efficiency. The greater the difference in temperature between the superheated gas and the low temperature exhaust reservoir, the higher the efficiency of the power plant. The average actual efficiency for a geothermal power plant ranges from the single digits to about twenty percent. The average actual efficiency for a fossil fuel burning electrical power plant is approximately thirty percent. While other methods of electricity production may have slightly better efficiency than a geothermal power plant, the less destructive environmental impacts of geothermal power plants offset the importance of a higher efficiency. Direct use of geothermal heat for heating purposes can result in actual efficiencies of up to ninety percent. Fossil fuel powered heat systems can generally only reach actual efficiencies of seventy to eighty percent.
As well as being used for electricity, geothermal energy is currently being used for space heating. Geothermally heated fluid used for space heating is widespread in Iceland, Japan, New Zealand, Hungary and the United States. In a geothermal space heating system, electrically powered pumps push heated fluid through pipes that circulate the fluid throughout the structure. Geothermally heated fluid is also being used to heat greenhouses, livestock barns, and fish farm ponds. Some industries use geothermal energy for distillation and dehydration. Although there are many pluses to using geothermal energy, there are also some problems. It was generally assumed that geothermal resources were infinite, or that they could never be completely depleted. In reality the exact opposite is true. As water or steam is pumped out of the well the pressure may decrease or the well may go dry. Although the pressure and fluid will eventually return, they may not do so fast enough to be useful. Drilling geothermal wells is very expensive. It is generally figured that a geothermal well should last 30 years in order to pay for itself. Another factor to take into consideration is the disposal of the waste water. Some geothermal fluid contains several toxic materials such as arsenic, salt, and dissolved silica particles. These materials can pollute drinking water and lakes. When the waste water is reinjected back into the earth the previously dissolved silica particles precipitate out of the liquid and can block up the pores in the reinjection well. The cool water can also create new passages through the rocks and create unstable ground above. There are three main problems that can plague a power plant when it is operated using geothermal energy: silting, scaling and corrosion. Scaling is caused by silting, or when suspended particles build up on the insides of the pipes. Scaling is directly related to the pH of the liquid. In some cases chemicals or other additives such as HCl have been added to the liquid to try to neutralize it. Silting is when the particles that were dissolved in the hot fluid precipitate out when the fluid cools. This generally occurs in the pipes and can cause considerable damage to the pipes if significant pressure builds. This problem can be solved by using simple filters that are periodically changed in the pipes. Corrosion occurs because of acidic substances incorporated in the geothermal fluid. Usually geothermal fluid contains some boric acid. Using pipes that are not affected by these liquids generally takes care of corrosion. Unfortunately most metals that are non-corrosive are very expensive. Most types of wildlife cannot live in or consume saline water. If the cooled fluid containing dissolved toxins and salt contaminates lakes or streams, the environmental effects can be disastrous. Air pollution from geothermal resources is also significant. The most common type of air pollution is the release of hydrogen sulfide gas into the air. At the geysers in California an estimated 50 tons per day of hydrogen sulfide is released into the atmosphere. Iron catalysts have been added to try to offset the effects of pollution but have failed because moisture and carbon dioxide reduce the efficiency of the catalysts so much that they are not effective. Noise pollution is another consideration that must be taken into account. When the steam and water escape from the system they make a relatively loud noise.
If the wells are located near any residential areas this can raise problems and discontent within the community. Some geothermal power plants have installed cylindrical towers where the water vapor and water is swirled around. The friction created by the movement of the gas or fluid decreases the overall kinetic energy of the gas or fluid, causing the internal energy to decrease. When the internal energy is decreased the noise of gas escaping is also decreased. Geothermal resources do produce pollution, but the pollution would be there even if we did not exploit the resource. Other energy producing systems used today produce and emit pollution that otherwise would not be introduced into the environment. I feel that the benefits of using geothermal resources as a source of energy for electricity and mechanical work production outweigh the downfalls.
The world has many different geothermal regions that are exploited for the production of electricity and other things. The United States is one of the leaders in manufacturing geothermal produced electricity. One of the most productive regions in the U.S. is the Pacific Region. Most geothermal regions contain mostly heated water. Geysers produce very large amounts of water vapor and other gases. Geysers have the potential to produce electricity relatively efficiently.
In 1979 The Geysers power plants had a rating of 600 megawatts of electricity (MWe). Today they are rated for over 2000 MWe. Most of the geysers are located on the side of a mountain near Big Sulfur Creek, in the California Coast Ranges west of Sacramento. William Bell Elliott was the first to see this natural wonder, in 1847, while surveying, exploring and looking for grizzly bears. The earth around The Geysers geothermal site consists of highly permeable fractured shales and basalts created during the Jurassic age. The ground above the wells consists of graywacke sandstone. This form of sandstone is very hard to penetrate. Scientists believe that the large geothermal reservoir was created when earthquakes caused fault and shear zones. Steam temperatures in the geothermal wells range from 260 to 290 °C. Pressures deep in the wells range from 450 psig to 480 psig (3.1 MPa to 3.3 MPa). Some wells are 3000 meters deep and produce almost 175 tonnes of steam per hour.
It is thought that the center of the magma, or the heat source, at The Geysers geothermal site lies under Mt. Hannah. Geologists are led to believe that there is a large mass of magma cooling under the geysers and power plants that is the source of all the heat. This assumption is supported by the fact that seismic waves caused by earthquakes slow down when they pass through the mountain. A fairly large fractured steam reservoir rests above the cooling molten rock.
In 1967, the Union Oil Company, in partnership with Magma Power Corporation and Thermal Power Company, began producing electricity from The Geysers geothermal region and selling it to the Pacific Gas and Electric Company. The turbines in the power plant were designed to operate under intake pressures of 80 psig to 100 psig. At first the plant operated at maximum efficiency, but as the years went by the geothermal resource was slowly depleted. The depleted heat source did not produce the constant pressure that was required for maximum efficiency, so the efficiency decreased. There are two methods of drilling wells: mud drilling and air drilling. Mud drilling tends to clog up the porous rock, but it is easier on the drilling machinery. Air drilling leaves the porous rock free for water and steam flow, but it is very hard on machinery due to abrasion and heating. Air drilling is therefore very expensive. Geothermal wells do not always maintain constant pressure. New wells must be drilled to continually maintain constant pressure on the turbine. The system built at The Geysers geothermal field delivers superheated steam. The steam produced by the wells is not pure water but consists of 1% non-condensable gases along with dust particles. If not cleaned off, the dust can accumulate on the inside of the turbine blade shrouds and cause turbine failure. This problem was virtually eliminated when heavy-duty blades and shrouds replaced the faulty ones. It was thought that by the time the steam made it to the turbine very little of it was still superheated, so special non-corrosive metal was not required in the construction of the upper level piping and the turbine. Normal carbon steel piping was used in the original construction. This proved not to be the case; after a while the pipes began to corrode. As steam condenses, non-condensable gases become more of a problem. They become more concentrated, more corrosive and can form sulfuric acid. This new problem was solved by replacing the carbon steel used in the original construction with austenitic stainless steel. Electrical connections and wires were also affected by concentrations of sulfuric acid. They were replaced with aluminum and stainless steel.
The steam generated from the wells and geysers has a constant enthalpy of 1200-1500 Btu per lb. The use of condensing steam turbines that exhausted waste water below atmospheric pressure increased the efficiency of the plant. There were no rivers or streams in the immediate area that were sufficiently cool to be used as a cooling mechanism, so cooling towers were constructed. Incorporating the cooling towers into the system allowed the waste water to be discharged at a cooler temperature of 18 °C, therefore increasing the possible efficiency of the system.
Carnot Efficiency of The Geysers Power Plant
Carnot Efficiency = 1 - (T exhaust / T steam), with both temperatures in kelvins
T exhaust = 18 °C = 291 K
T steam = 290 °C = 563 K
Carnot Efficiency = 1 - (291 K / 563 K)
Carnot Efficiency = .4831
or
48%
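The figure above can be reproduced with a few lines of Python; this is just a sketch of the standard Carnot formula applied to the temperatures quoted for the wells and the cooling towers:

    # Carnot (maximum theoretical) efficiency from the steam and exhaust temperatures above.
    t_steam_k = 290 + 273     # well steam temperature, converted from deg C to kelvins
    t_exhaust_k = 18 + 273    # cooling-tower exhaust temperature in kelvins

    carnot_efficiency = 1 - t_exhaust_k / t_steam_k
    print(f"Carnot efficiency ~ {carnot_efficiency:.2%}")  # roughly 48%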
This is a relatively efficient cycle. It certainly can compete with other modern day types of electricity production. Unfortunately, Carnot efficiencies can never be reached. A large amount of energy is lost in the condensers and turbines. I feel that while the efficiency of this geothermal power plant might not be overwhelmingly better than other modern day methods of electricity production, the lack of pollution makes up for the loss in efficiency. Even though The Geysers power plant is relatively efficient, it does not even come close to taking advantage of all the emitted heat. Only 2% of the emitted heat from the source is used to heat water for electricity production. This geothermal resource will not last forever, though.
Heat Content of the Entire Geysers Geothermal Site
-Heat is only recovered from the top 2 km of the earth at The Geysers site.
-The average temperature in this top 2 km of earth is 240 °C.
-The average air temperature at The Geysers site is 15 °C.
-The heat content is the product of the rock volume (the site area times the 2 km recovery depth), the specific heat of the permeable rock, and the temperature change.
Heat Content = Volume x Specific Heat x Change in Temperature
Change in Temperature = 240 °C - 18 °C = 222 °C
Q = (Volume)(Specific Heat)(222 °C)
Q ~ 7.8 x 10^19 joules of heat content in the entire Geysers geothermal region
Life of The Geysers Heat Source
-Power output of The Geysers plant = 2,000 MW
-Fraction of the total heat used in the production of steam = 2%
-Power taken from the geothermal resource = 2,000 MW / 2% = 100,000 MW
-Heat content of the entire Geysers geothermal region ~ 7.78 x 10^19 joules
-Seconds in one year = 3.1536 x 10^7
-1 watt = 1 joule/sec
100,000 MW = 1 x 10^11 J/sec x 3.1536 x 10^7 sec/year = 3.1536 x 10^18 J/year
7.78 x 10^19 J / 3.1536 x 10^18 J/year ~ 24.67 years
According to my calculations The Geysers geothermal resource will be depleted in 24.67 years at the current rate of usage. Of course this is not taking into account the rate at which the resource is renewed from heat coming from deeper in the earth. I am assuming that the rate of depletion is so much greater than the rate of renewal that it is not significant in the calculation.
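The same depletion estimate can be written as a few lines of Python; note that the 7.78 x 10^19 J heat content is back-calculated from the 24.67-year figure above rather than taken from an independent source, so treat it as an assumption:

    # Rough lifetime of The Geysers heat source at the stated rate of use.
    plant_output_w = 2_000e6      # 2,000 MW of electrical output
    fraction_used = 0.02          # only 2% of the extracted heat ends up producing power
    heat_content_j = 7.78e19      # total heat in the field (back-calculated assumption)

    heat_draw_w = plant_output_w / fraction_used        # 100,000 MW drawn from the rock
    joules_per_year = heat_draw_w * 365 * 24 * 3600     # about 3.15e18 J per year
    print(f"Lifetime ~ {heat_content_j / joules_per_year:.1f} years")  # about 24.7 years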
The power plant at The Geysers site is run on dry superheated gases. The power plant now has 11 generators and has a rating of over 2000 MWe. The process of electrical power generation used at The Geysers power plant is relatively simple when compared to other modern day power plants. The steam that evolves from the wells flows through pipes that lead to the turbine. The pressure exerted by the superheated steam turns the turbine which produces electricity. The steam then flows into the direct-contact condensers below the turbine. Cooling water from the cooling towers is constantly circulated through the condensers. The condensed steam and cooling water is then pumped back into the cooling towers. Because the evaporation rate from the towers is slower than the rate at which water is pumped into the towers, excess amounts of water accumulate in the cooling tower. This excess water is then pumped to reinjection wells where it flows down through the soil and porous rock and is reheated by the heat source. The cycle begins all over again. See the diagram below.
The costs of running this particular geothermal electrical plant are very competitive with the cost of other types of modern day plants. The operation costs for the plant at The Geysers are almost the same as the operation costs of an average fossil fuel powered plant and much less than the operating costs of a hydroelectric or nuclear plant. One of the greatest advantages of this and most geothermal systems is the relative lack of pollution. While most coal plants give off significant amounts of sulfur, somewhere around 93 tons per day for the average coal plant, geothermal plants produce no gas pollution other than the gases that would be naturally emitted from the geysers anyway. Coal plants are by far the worst polluters, but other types of plants are not far behind.
Average Cost of Geothermal Produced Energy per Kilowatt in the U.S.
Total electricity produced in the U.S. during 1985 = 652,000 MW
Percent of geothermal energy contributed to total U.S. production = 3%
3% x 652,000 MW = 19,560 MW
Method of geothermal energy production      Share      Capital Dollars per Kilowatt
Dry Steam Flash                             83%        $1000/kW
Binary                                      17%        $3600/kW
Dry Steam Flash = 83% x 19,560 MW x 1000 kW/MW x $1000/kW ~ $16.23 billion
Binary = 17% x 19,560 MW x 1000 kW/MW x $3600/kW ~ $11.97 billion
Total ~ $16.23 billion + $11.97 billion ~ $28.21 billion per 19,560 MW
$28.21 billion / 19,560 MW x 1 MW/1000 kW ~ $1,442 per kW
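The cost arithmetic above condenses to a short Python sketch, using only the capacity shares and per-kilowatt costs already quoted:

    # Weighted capital cost per kilowatt of U.S. geothermal capacity, using the figures above.
    geothermal_mw = 0.03 * 652_000              # 3% of 652,000 MW -> 19,560 MW
    geothermal_kw = geothermal_mw * 1000

    dry_steam_cost = 0.83 * geothermal_kw * 1000    # 83% of capacity at $1000/kW
    binary_cost = 0.17 * geothermal_kw * 3600       # 17% of capacity at $3600/kW

    total_cost = dry_steam_cost + binary_cost
    print(f"Total capital cost ~ ${total_cost / 1e9:.2f} billion")     # about $28.21 billion
    print(f"Average cost ~ ${total_cost / geothermal_kw:.0f} per kW")  # about $1442 per kW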
The future of geothermal energy looks very promising. There have been many technological breakthroughs that have resulted in increased efficiencies of modern day geothermal electrical plants. I feel that, with the current environmental situation the world now faces, a viable method of cleanup will include the use of geothermal power plants and resources. In a world that is suffocating from the chemicals and particulates that are created in the production of electricity and other commercial industries, we have no choice but to change our ways. The earth cannot support the current rates of pollution. If we do not reduce pollution, the effects that are beginning to be seen now will become irreversible. Using geothermal resources for other purposes, such as space heating, can only help reduce pollution emission. Within the next century the world will begin to feel the energy crunch. Supplies of other natural resources such as coal, oil and other petroleum products will begin to become scarce. The world today is completely electricity dependent. Without electricity, the world as we know it would cease to exist. In the next century we must learn to be less electricity dependent or find other sources of energy. If less env
f:\12000 essays\sciences (985)\Chemistry\How Albert Einsteins Knowledge Aided Civilization.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Albert Einstein
1879-1955
Einstein was undoubtedly the single greatest contributor to science in the 20th century. Few will argue with that point. His gifts to today's understanding of the universe, energy, and time, among other things, underlie many branches of modern science.
His contributions are not restricted only to the fields of science, but also to the individual person: from powerful heads of states to the average citizen. Albert Einstein helped Oppenheimer1 develop the fundamental science needed to break atoms, causing massive amounts of energy to be released. There are two common forms of this technology today, the Nuclear Power Plants, and the Atomic, or Nuclear Bomb.
During the WWII battles with Japan, the United States government instructed a group of scientists to devise a new weapon, one that could potentially cause large-scale destruction from a single bomb. Many notable scientists contributed to this project, but none with as much global respect as Einstein. With the help of his physics knowledge, the mission was accomplished: a weapon yielding the force of thousands of tons of dynamite was tested at a government test site in New Mexico.
Soon after the United States used this weapon on Japan twice, The Soviet Union developed their own nuclear weapon. The Arms Race was on. Suddenly both countries expended large amounts of resources on making these bombs useful in combat. Three hundred billion U.S. dollars2 were spent to ignite this project and produce only a small number of functional bombs. The Soviet Union was thought to have spent about equal amounts.
By the late 1950's what we now know as the Cold War erupted. Nuclear holocaust seemed inevitable. Tensions between the Communists and the States reached monumental highs. The whole United States suddenly went into a panic mode that would stay resident until the 1980's. Children on the first day of a new year of school were taught where the fallout shelter was. Instead of swimming pools, people would purchase subterranean bunkers to protect them from the radiation and chaos that was expected to follow an attack. Both sides of this war scrambled to better their strategic location of missiles. All too many times did one country push the other nearly to the brink of all-out nuclear war.
It seemed that Einstein had foreseen the use of this weapon and made it known in a statement he is commonly quoted as saying:
"I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones."
He was all too correct with this statement. He knew very well that when a war was fought with such weapons as nuclear bombs, there would be little left to fight over. There would be no winners in this war. The whole globe would be shattered by these two giants. And both sides knew it.
As soon as one power would attack the other, an all-out fight would ensue. The peak U.S. response time was only a little less than 10 seconds. Missiles would be launched, landing in a little over a minute in key locations first, like industrial centers, capitals, etc. The clean-up round would completely annihilate all land governed by the country in less than 4 minutes. Without the ability to launch full destruction from a button, these two powers would certainly have used land troops and air attacks. Civilians would die and cities would be crumbled in a non-nuclear battle, but this was a winnable war. One would walk away having brought the other to its knees.
Some treat the nuclear weapon like a curse, others the single most important tool in protecting our liberty.
I feel that my personal well-being has been a result of the nuclear bomb. From the end of WWII until today, there has been less bloodshed between world powers than in centuries, if not millennia. Although Einstein was hated by some groups in his time because of his contribution, I believe that he knew very well what the future held. I strongly feel he would have avoided helping the Manhattan Project if he had felt it would result in nuclear war.
"Before God we are all equally wise - and equally foolish"
References:
http://sci.hkbu.edu.hk/math/einstein.html
http://galileo.eng.wayne.edu/Alumni/vinod/ae.html
http://www.sofitec.lu/misc/einstein.htm
http://www.netfrance.com/Libe/manhat/ho_eins.html
http://magna.com.au/~prfbrown/albert_e.html
http://cuy.net/%7Eeinstein/einstein.html
http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Einstein.html
http://www.aip.org/history/esva/einstein.htm
http://www.pbs.org/wgbh/pages/nova/einstein/
http://www.yahoo.com/Reference/Encyclopedia
http://www2.elibrary.com/id/27/86/search.cgi
1 Robert Oppenheimer was the head of the Manhattan Project, the project given the instruction to harness the power of the atom in a destructive way. Einstein played a great role in laying the groundwork for this new venture. Oppenheimer himself, however, never received a Nobel Prize in physics.
2 Inflation adjusted to 1995.
f:\12000 essays\sciences (985)\Chemistry\How is helium produced.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"How is Helium produced?"
Production: Although Helium is one of the most common elements in the
universe it is a rare gas on earth. It exists in the atmosphere in such
small quantities (less than five parts per million) that recovering it
from the air is uneconomical. Helium is produced as a by-product of the
refining of natural gas, which is carried out on a commercial scale in the
USA and Poland. In these areas natural gas contains a relatively high
concentration of Helium which has accumulated as a result of radioactive
decay of heavy elements within the earth's crust. Helium is supplied to
distribution centres throughout the world in liquid form in large cryogenic
containers. The Helium is filled into liquid containers, gas cylinders and
cylinder packs as necessary.
History of Helium Production: Government involvement in helium conservation
dates to the Helium Act of 1925 which authorized the Bureau of Mines to build
and operate a large-scale helium extraction and purification plant. From 1929
until 1960 the federal government was the only domestic helium producer. In
1960, Congress amended the Helium Act to provide incentives to natural gas
producers for stripping natural gas of its helium, for purchase of the
separated helium by the government, and for its long-term storage. With
over 960 million cubic meters (34.6 billion cubic feet) of helium in
government storage and a large private helium recovery industry, questions
arise as to the need for either the federal helium extraction program or the
federally maintained helium stockpile.
In a move which would take the federal government out of the helium business,
Congress passed the Helium Privatization Act (H.R. 873) as part of the
Seven-Year Balanced Budget Reconciliation Act of 1995 (H.R. 2491). Although
the measure died when the President vetoed the Budget Act on December 6, 1995,
the Administration has made a goal the privatization of the federal helium
program. On April 30, 1996, the House suspended the rules and passed H.R.
3008, the Helium Privatization Act as agreed to in the House-Senate conference
on the Budget Act. Subsequently, the Senate Energy and Natural Resources
Committee amended the bill to provide for the National Academy of Sciences
to study how best to dispose of the helium reserve. On September 26, 1996,
with limited time remaining for the 104th Congress, the House again suspended
the rules and passed H.R. 4168, a new bill containing the Senate Committee
language. This would avoid the need for a conference if the Senate would
also pass the same bill. The Senate did so on September 28, 1996. This report
reviews the origin and development of the Federal Helium Program; analyzes
the choices that Congress faced in terminating the program; reviews the
issues that the National Academy of Sciences will study, and summarizes
H.R. 4168.
Federal interest in helium began with World War I when its military value as
an inert lifting gas was recognized by the Army and Navy. The Bureau of
Mines' involvement in the Helium Program dates back to passage of the Helium
Act of 1925 under which the Bureau was authorized to build and operate a
large-scale helium extraction and purification plant. This plant went into
operation in 1929 at Amarillo, Texas. Demand increased significantly during
World War II and four more plants were built, including the Exell, Texas
plant, which is now the Bureau's only operating plant. Private helium
operations followed passage of the Helium Act Amendments of 1960 (P.L. 86-777)
which authorized the Secretary of the Interior (authority delegated to the
Bureau of Mines) to enter into long-term contracts for the acquisition and
conservation of helium to be stored in the Cliffside Reservoir near Amarillo,
Texas. The Act directed the Secretary of the Interior to operate and maintain
helium production and purification plants and related storage, transmission,
and shipping facilities. The Act also authorized the Secretary to borrow from
the Treasury up to $47.5 million per year, at compound interest, to purchase
helium in lieu of direct appropriations. The 1960 Act required the Secretary
of the Interior to determine the net worth of assets of the Helium Program
acquired prior to 1960 ($40 million) and establish this as debt in the Helium
Fund to which subsequent borrowing would be added. The Act stipulated that the
Bureau of Mines set prices that would cover all of the program's costs,
including debt and interest, and provided a period of 25 years to pay back the
debt (with a 10-year extension to 1995). In addition, federal agencies and
contractors were required to buy helium from the Bureau of Mines.
As a result of the 1960 Act, four private natural gas producing companies
built five helium extraction facilities and entered into 22-year contracts
with the Bureau of Mines. Because demand for helium did not meet the forecast
of the late 1950s, the Bureau of Mines began to borrow from the Treasury as
authorized to pay for helium purchases. In 1973, the government had 970
million cubic meters (35 billion cubic feet) of helium in storage, which was
far in excess of projected government needs, and canceled the purchase
contracts. This led to several years of litigation during which most private
helium extraction plants remained idle.
Where is Helium Produced: World helium resources exclusive of the United
States are estimated at 18 billion cubic meters (650 billion cubic feet) of
which 9.2 billion cubic meters are in the former Soviet Union, mostly in
Russia. Other helium resources are located in Algeria, 2.1 billion cubic
meters; Canada, 2.1 billion cubic meters; China, 1.1 billion cubic meters;
Poland, 0.8 billion cubic meters; and the Netherlands, 0.7 billion cubic
meters.
The helium resources of the United States are estimated to be about 13
billion cubic meters (470 billion cubic feet). This includes 1.0 billion
cubic meters (34 billion cubic feet) in storage in the government stockpile,
6.8 billion cubic meters (250 billion cubic feet) in helium-rich natural gas
(0.3% helium or more), and 5.2 billion cubic meters (190 billion cubic feet)
in helium-lean natural gas (less than 0.3% helium). Other than the two major
helium-rich natural gas fields (Riley Ridge in southwestern Wyoming and
Hugoton extending from southwest Kansas through the Oklahoma and Texas
Panhandles), most of the helium-rich natural gas fields in the United States
will be exhausted by the year 2000. As these fields deplete, future
production will probably shift to extracting helium from helium-rich natural
gas with little fuel value and from helium-lean resources.
Uses of Helium: Liquid Helium is used for several things, including
chilling the powerful magnets used in magnetic resonance imaging and
spectroscopy, and for cryogenic research. Gaseous Helium is used for gas
chromatography, leak detection, scuba diving, medical therapy,
controlled/modified atmospheres, balloons (including the ones in the Macy's
Parade) and airships. It is also used in welding, and as a heat transfer
medium. Liquid Helium is an ideal source of cold for superconductivity and
for low temperature applications. In particular, liquid Helium enables the
development of the high-strength magnetic fields required in NMR (Nuclear
Magnetic Resonance) spectroscopy and MRI (Magnetic Resonance Imaging)
medical body scanners. Liquid Helium is also used extensively in low
temperature research.
f:\12000 essays\sciences (985)\Chemistry\Ideal gas vs a real gas.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An ideal gas is a theoretical gas which perfectly fits the equation PV = nRT. An ideal gas differs from a real gas in several ways. The particles of an ideal gas are treated as point particles: they occupy no volume of their own, whereas real gas particles occupy small but finite volumes. Also, since ideal gas particles exert no attractive forces on one another, their collisions are perfectly elastic; real gas particles exert small attractive forces. Under the same conditions the pressure of an ideal gas is greater than that of a real gas, because real particles are held back by their mutual attractions as they approach the container walls and therefore collide with less force. The differences between ideal and real gases show up most clearly when the pressure is high, the temperature is low, the gas particles are large, and the gas particles exert strong attractive forces. Monatomic gases come much closer to ideal behavior than other gases because their particles are so small. To account for the differences between ideal and real gases, van der Waals created an equation that relates the two.
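The difference can be made concrete with a small numerical sketch. The Python snippet below compares the pressure given by the ideal gas law with the pressure given by the van der Waals equation for one mole of gas in a one-litre container at 300 K; the constants a and b used here are approximate literature values for carbon dioxide, chosen only for illustration.

    # Ideal gas law versus the van der Waals equation (SI units).
    R = 8.314      # gas constant, J/(mol K)
    n = 1.0        # amount of gas, mol
    V = 1.0e-3     # volume, m^3 (one litre)
    T = 300.0      # temperature, K

    # Approximate van der Waals constants for CO2 (assumed for this example).
    a = 0.364      # Pa m^6 mol^-2, strength of the attractive forces
    b = 4.27e-5    # m^3 mol^-1, volume excluded by the molecules themselves

    p_ideal = n * R * T / V
    p_vdw   = n * R * T / (V - n * b) - a * n**2 / V**2

    print(f"ideal gas law : {p_ideal / 1e5:.1f} bar")   # about 24.9 bar
    print(f"van der Waals : {p_vdw / 1e5:.1f} bar")     # about 22.4 bar

In this example the attractive term lowers the predicted pressure by roughly ten percent relative to the ideal value, which is the behavior described above.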
f:\12000 essays\sciences (985)\Chemistry\Insulation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
The experimenter is testing denim, cotton T-shirt material, wool fabric,
thermal underwear, polyester fabric, and a Ziplock bag with no insulator. From
research the experimenter learned that wool is a fine, soft, wavy hair that forms
all or part of the protective coat of a sheep. Since ancient times it has been
harvested to provide clothing, and it holds an important place in the textile
trade because of its insulation. Woolen fabric is made by the woolen system,
which uses short or mixed long and short fibers and does no combing. It has a
rough appearance and is most suitable for blankets, overcoats, and tweeds.
Denim, which the experimenter is also testing, is the material used to make blue
jeans and is currently one of the world's most popular fabrics. It is fairly heavy
and is made with a blue cotton warp and a white cotton filling (Groliers, 1996).
The thermal underwear is Duofold, with an outer layer made of 65% cotton, 25%
wool, and 10% nylon, and an inner layer made of 100% cotton.
It is winter again and the weather is becoming colder. Each morning
many people wonder what to wear to stay as warm as possible, but they are not
sure which material will keep them warmest. The experiment was chosen to see
which clothing insulator retains the most heat. "Insulation is material that
protects against heat, cold, electricity, or sound" (Science Encyclopedia, 1984).
In this case the insulation will be protecting against a cold temperature.
The hypothesis is that if denim, cotton T-shirt material, wool fabric, polyester
fabric, thermal underwear, and a Ziplock bag without insulating material are
tested to see which one retains the most heat, then wool fabric will retain the
most heat, because it holds an important place in today's textile trade owing to
its good insulation and because it comes from the protective coat of sheep,
which need to stay warm and use it as their insulator.
Procedure
The first thing the experimenter does is line the inside of five gallon-sized Ziplock
bags with the insulation material so it is one centimeter thick all around. Leave
the sixth Ziplock bag empty because it will serve as the control group. Then
fasten the insulating materials to the inside of each gallon-sized Ziplock bag with
adhesive tape.
Next the experimenter boils ten pints of tap water and lets it cool until
(using the candy thermometer) the temperature drops to 49 degrees Celsius.
Then immediately fill each of the six canning jars with equal amounts of the
water. Immediately after that, drop a regular thermometer into each jar and cap
it tightly, as quickly as you can. Put the six jars into the six Ziplock bags and
seal them. Then put the six jars, inside their Ziplock bags, in the
refrigerator for two hours and take temperature readings every 15 minutes.
Repeat all these steps two more times. Then compare the readings, note how
they changed over time, graph the data, and draw a conclusion.
Results
The purpose of this experiment was to find the effect of different forms of
insulation on how much heat each type retains, in order to show the best
insulators for keeping the human body warm. On average, thermal underwear
retained the most heat at the end of the two hours. The mean temperatures at
the end of the two hours were: denim, 27 degrees Celsius; cotton T-shirt
material, 27; wool fabric, 28; thermal underwear, 28.67; polyester fabric, 27.33;
and no insulation, 19.67 degrees Celsius. The ranges at the end of two hours
were: denim, zero; cotton T-shirt material, two; wool fabric, zero; thermal
underwear, three; polyester fabric, one; and no insulation, one. The ranges were
small, so the results were consistent between trials. Basically the experiment
showed that most clothing insulations retained nearly the same amount of heat.
Thermal underwear retained the most heat, by an average of about one degree
over the other insulations. Another major result was that all insulations retained
much more heat than no insulation.
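Because each material was tested in three trials, the means and ranges quoted above are simple to compute. The short Python sketch below shows the calculation for a hypothetical set of three final readings chosen only to reproduce the reported mean (28.67 degrees Celsius) and range (3) for thermal underwear; the actual raw readings are not given in this report.

    # Mean and range of the final temperature readings for one insulator.
    readings_c = [27.0, 29.0, 30.0]   # hypothetical final temperatures from three trials, deg C

    mean_c  = sum(readings_c) / len(readings_c)
    range_c = max(readings_c) - min(readings_c)

    print(f"mean  = {mean_c:.2f} degrees Celsius")   # 28.67
    print(f"range = {range_c:.0f} degrees Celsius")  # 3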
Data Table
Conclusion
The purpose of this experiment was to find the effect of different forms of
insulation on how much heat each type retains, in order to show the best
insulators for keeping the human body warm. The hypothesis was that if denim,
cotton T-shirt material, wool fabric, polyester fabric, thermal underwear, and a
Ziplock bag without insulating material were tested to see which one retains the
most heat, then wool fabric would retain the most heat, because it holds an
important place in today's textile trade owing to its good insulation and because
it comes from the protective coat of sheep, which need to stay warm and use it
as their insulator. The hypothesis was not supported, because wool fabric
averaged two-thirds of a degree Celsius less than thermal underwear at the
end of the two hours of testing. Thermal underwear retained more heat because
it was designed to keep the wearer as warm as possible. Another major result
was that all the insulations were around the same temperature (27-28.67 degrees
Celsius) at the end of two hours of testing, while the bag without insulation was
only about 19 degrees Celsius. The experimenter thinks this is because all
clothing is made with insulation as a high priority, and thermal underwear has
insulation as its highest priority. A possible experimental error is that the
temperatures were always taken in the same order; a difference of a few
seconds could change the data by a degree. This could be improved by rotating
or randomizing the measuring order. Another area of study could be the effect
of the number of layers of insulation on how much heat is retained.
Bibliography
______. "Wool". Word Search. Groliers. 1995.
______. "Cotton". Netscape. 1995.
Bochinski, Julianne, Science Fair Projects, Wiley Science Editions, New York,
1991
f:\12000 essays\sciences (985)\Chemistry\Intermolecular Bonding.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IB Chemistry - Intermolecular Bonding Essay:
Write an essay on intermolecular bonding. Explain how each type of bond arises and the evidence for the existence of each. Comment on their strengths in relation to the types of atoms involved, to the covalent bond, and relative to each other. Use the concepts of different types and strengths of intermolecular bonds to explain the following:
There exist four types of bonding to consider: ionic, covalent, van der Waals and hydrogen bonding. In order to describe the existence of such bonding you must also understand the concepts of polarity, polar and non-polar molecules, and electronegativity.
Ionic bonds are created by the complete transfer of electrons from one atom to another. In this process of electron transfer, each atom becomes an ion that is isoelectronic with the nearest noble gas, and the substance is held together by electrostatic forces between the ions. The tendency for elements to form these ions corresponds to the octet rule: when atoms react, they tend to do so in such a way that they attain an outer shell containing eight electrons. The factors that affect the formation of ions are ionization energy, electron affinity, and lattice energy.
Figure 1
The transfer of electrons involved in the formation of (a) sodium chloride and (b) calcium fluoride. Each atom forms an ion with an outer shell containing eight electrons.
For many elements, compounds cannot be formed by the production of ions, since the energy released in the formation of the lattice of ions would be insufficient to overcome the energy required to form the ions in the first place. In order for the atoms to achieve a noble gas configuration they must use another method of bonding: the process of electron sharing. Figure 2 shows the example of two hydrogen atoms combining. As the atoms get closer together, each electron experiences an attraction towards the two nuclei and the electron density shifts so that the most probable place to find the two electrons is between the two nuclei. Effectively each atom now has a share of both electrons. The electron density between the two nuclei exerts an attractive force on each nucleus, keeping them held tightly together in a covalent bond.
Figure 2
A covalent bond forming between two hydrogen atoms.
It is also possible for two atoms to share more than one pair of electrons: sharing two pairs results in a double bond and sharing three pairs results in a triple bond.
Electronegativity is a measure of how strongly an atom in a molecule attracts electrons. Polarization is the term given to the unequal sharing of electrons in a covalent bond. Molecules that share electrons unequally are called polar molecules, and dipole molecules are ones in which the charge is separated; therefore all polar molecules have a dipole. Non-polar molecules are ones whose shapes are symmetrical, so the electrons are evenly distributed. Polar molecules have a permanent dipole; in other words, they have a permanent separation of charge. As a result, polar molecules are attracted to one another by forces called permanent dipole-permanent dipole interactions, in which the negative end of one molecule is attracted towards the positive end of another. These interactions decrease quite rapidly as the distance between molecules increases, and they are approximately 100 times weaker than covalent bonds.
There are also particularly strong dipole-dipole interactions called hydrogen bonds. Evidence for the existence of such intermolecular forces lies in the properties of the hydrides formed by elements in groups 4, 5, 6 and 7. While the hydrides of group 4 show a smooth trend in boiling point, the first hydrides of groups 5, 6 and 7 (NH3, H2O and HF) do not. This suggests that the intermolecular forces in these hydrides are much stronger than expected compared with the hydrides of the other elements in each group. This type of intermolecular bonding occurs between molecules that each contain a polar bond between hydrogen and a small, highly electronegative atom.
Figure 3
The variation in boiling points of the hydrides of groups IV, V, VI and VII.
The forces of attraction that exist between two non-polar molecules also arise from an uneven charge distribution. If we consider a neutral atom, at any particular moment the centres of positive and negative charge may not coincide, due to an instantaneous asymmetry in the electron distribution around the nucleus. So there must be an instantaneous dipole in the molecule. Any other atom next to an atom with an instantaneous dipole will experience an electric field due to the dipole, and so itself develop an induced dipole. These instantaneous dipole-induced dipole interactions between neighboring molecules enable non-polar molecules to come together. This is the basis for another class of intermolecular forces known as van der Waals forces. These are weak, short-ranged forces of attraction between molecules, and they are the weakest forces of attraction between atoms.
Covalent bond strengths are typically between 200 and 500 kJ mol-1. Hydrogen bonds are weak in comparison, ranging from 5 to 40 kJ mol-1. Van der Waals forces are weaker still, with a strength of about 2 kJ mol-1. Hydrogen bonds and van der Waals forces are not strong enough to influence the chemical behavior of most substances, although they may affect their physical properties.
a. The heat of solvation arises when an ionic substance is dissolved in a polar solvent. Intermolecular attractions form between the polar solvent molecules and the ions, and the bonds within the ionic lattice are broken as the ions are attracted to the solvent molecules. When the energy released in forming the new ion-solvent attractions is greater than the energy required to break apart the ionic lattice, the overall process is exothermic; this explains why heat is given out.
b. Sodium chloride is an ionic compound, and when mixed with tetrachloromethane it does not dissolve. Tetrachloromethane, which has a symmetrical tetrahedral shape, is a non-polar substance, so no significant attractions arise between its molecules and the ions. Since there are no such attractions, no forces are created which can pull the Na+ and Cl- ions away from their ionic lattice. In the case of NaCl and ethanol, the polar ethanol molecules form intermolecular attractions with the charged ions, pulling them away from the ionic lattice and therefore allowing NaCl to dissolve in ethanol.
c. Water is a polar molecule and oil is non-polar. Since no significant intermolecular bonding occurs between them, the two substances are immiscible.
d. Organic acids such as ethanoic acid, CH3-COOH, are partially polar molecules: one side of the molecule is non-polar while the other side is polar. Ethanoic acid has exposed hydrogen atoms that form hydrogen bonds with oxygen atoms from the COOH group of neighboring molecules, so a dimer is formed. When organic acids are heated, energy is needed to overcome both van der Waals forces and hydrogen bonds between molecules. This explains why organic acids have higher boiling and melting points than expected in comparison with similar compounds.
e. Through the process of condensation polymerization, amino acids form polypeptide chains, or proteins. Hydrogen bonds form to stabilize the structure of these compounds, and the more hydrogen bonds present in a polypeptide, the more stable it is. At 40 degrees Celsius, the molecules in a protein gain enough kinetic energy to vibrate rapidly and break the stabilizing hydrogen bonds. As the bonds break, the protein loses its shape and returns to its primary structure. This is the process by which the compound becomes denatured. A similar process occurs with DNA. DNA is composed of two polynucleotide chains held together by hydrogen bonds. Hydrogen bonds form between the complementary base pairs, and this occurs throughout the double helical structure. At 40 degrees Celsius, the structure vibrates so rapidly that the hydrogen bonds between the base pairs are broken. As these hydrogen bonds break, the DNA molecule loses its shape and is denatured.
f. The boiling point of water can be explained by the hydrogen bonds present. Oxygen has a very high electronegativity value, and when it is bonded to hydrogen a very polar molecule is formed. Hydrogen bonds occur throughout the liquid, so when water is heated, enough kinetic energy must be supplied to the molecules to overcome the hydrogen bonds before the water boils. This explains why water has such a high boiling point.
h. The increased boiling points and melting points of alkanes of increasing size are due to stronger intermolecular forces. The only significant intermolecular force in alkanes is the van der Waals interaction, which grows with molecular size and surface area. This explains why, as the size of alkanes increases, their boiling and melting points also increase.
i. Dimethylpropane molecules have a lower boiling point than pentane molecules. Branching reduces the area of contact between neighboring molecules and thus weakens the van der Waals interactions, so less energy is needed to separate dimethylpropane molecules in the liquid. Pentane, with its extended chain and larger surface area, has stronger van der Waals interactions and therefore a higher boiling point. Melting points follow a different pattern: the compact, highly symmetrical dimethylpropane molecule packs easily into a solid crystal structure, which gives it a higher melting point than pentane.
j. Iodine is a molecular solid at room temperature. Although the individual atoms are covalently bonded in pairs, weak van der Waals forces act between the molecules as induced dipoles are formed; these act throughout the structure and are strong enough to hold the molecules in place. At the same temperature, chlorine is a gas. Like iodine, its atoms are bonded covalently in pairs, but because Cl atoms are smaller and have fewer electrons, the van der Waals forces are even weaker than in iodine and not strong enough to hold the chlorine molecules in place. Therefore Cl2 molecules remain in the gaseous state at room temperature.
f:\12000 essays\sciences (985)\Chemistry\Involvement of K during suntracking.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Involvement of K+ in Leaf Movements During Suntracking
Introduction
Many plants orient their leaves in response to directional light signals. Heliotropic movements, or movements that are affected by the sun, are common among plants belonging to the families Malvaceae, Fabaceae, Nyctaginaceae, and Oxalidaceae. The leaves of many plants, including Crotalaria pallida, exhibit diaheliotropic movement. C. pallida is a woody shrub native to South Africa. Its trifoliate leaves are connected to the petiole by 3-4 mm long pulvinules (Schmalstig). In diaheliotropic movement, the plant's leaves are oriented perpendicular to the sun's rays, thereby maximizing the interception of photosynthetically active radiation (PAR). In some plants, but not all, this response occurs particularly during the morning and late afternoon, when the light is arriving at more of an angle and the water stress is not as severe (Donahue and Vogelmann). Under these conditions the lamina of the leaf is within 15° of the normal to the sun's rays. Many plants that exhibit diaheliotropic movements also show a paraheliotropic response. Paraheliotropism minimizes water loss by reducing the amount of light absorbed by the leaves; the leaves orient themselves parallel to the sun's rays. Plants that exhibit paraheliotropic behavior usually do so at midday, when the sun's rays are perpendicular to the ground. This reorientation takes place only in leaves of plants that are capable of nastic light-driven movements, such as the trifoliate leaf of Erythrina spp. (Herbert 1984). However, this phenomenon has been observed in other legume species that exhibit diaheliotropic leaf movement as well. Their movement is temporarily transformed from diaheliotropic to paraheliotropic. In this way, the interception of solar radiation is maximized during the morning and late afternoon, and minimized during midday. The leaves of Crotalaria pallida also exhibit nyctinastic, or sleep, movements, in which the leaves fold down at night. Solar tracking may also provide a competitive advantage during early growth, since there is little shading, and by intercepting more radiant heat in the early morning it raises leaf temperature nearer the optimum for photosynthesis.
Integral to understanding the heliotropic movements of a plant is determining how the leaf detects the angle at which the light is incident upon it, how this perception is transduced to the pulvinus, and finally, how this signal can effect a physiological response (Donahue and Vogelmann).
In the species Crotalaria pallida, blue light seems to be the wavelength that stimulates these leaf movements (Schmalstig). Blue light has also been implicated in the photonastic unfolding of leaves and in the diaheliotropic response in Macroptilium atropurpureum and Lupinus succulentus (Schwartz, Gilboa, and Koller 1987). However, the light receptor involved cannot be determined from the data. The site of light perception for Crotalaria pallida is the proximal portion of the lamina. No leaflet movement occurs when the lamina is shaded and only the pulvinule is exposed to light. However, in many other plant species, including Phaseolus vulgaris and Glycine max, the site of light perception is the pulvinule, although these plants are not true suntracking plants. The compound lamina of Lupinus succulentus does not respond to a directional light signal if its pulvini are shaded, but it does respond if only the pulvini are exposed. That the pulvinus is the site of light perception was the accepted theory for many years. However, experiments with L. palaestinus showed that the proximal 3-4 mm of the lamina needed to be exposed for a diaheliotropic response to occur. If the light is detected by photoreceptors in the laminae, this light signal must somehow be transmitted to the cells of the pulvinus. There are three possible ways this may be done. One is that the light is channeled to the pulvinus from the lamina. However, this is unlikely, since an experiment with oblique light on the lamina and vertical light on the pulvinus resulted in the lamina responding to the oblique light; otherwise, the light coming from the lamina would be drowned out by the light shining on the pulvinus. Another possibility is that some electrical signal is transmitted from the lamina to the pulvinus, as in Mimosa. It is also possible that some chemical is transported from the lamina to the pulvinus via the phloem. These chemicals can be defined as naturally occurring molecules that affect some physiological process of the plant. They may be active in concentrations as low as 10-5 to 10-7 M. What chemical, if any, is used by C. pallida to transmit the light signal from the lamina of the leaflet to its pulvinule is unknown. Periodic leaf movement factor 1 (PLMF 1) has been isolated from Acacia karroo, a plant with pinnate leaves that exhibits nyctinastic sleep movements, as well as from other species of Acacia, Oxalis, and Samanea. PLMF 1 has also been isolated from Mimosa pudica, as has the molecule M-LMF 5 (Schildknecht).
The movement of the leaflets is effected by the swelling and shrinking of cells on opposite sides of the pulvinus (Kim, et al.) In nyctinastic plants, cells that take up water when a leaf rises and lose water when the leaf lowers are called extensor cells. The opposite occurs in the flexor cells (Satter and Galston). When the extensor cells on one side of the pulvinus take up water and swell, the flexor cells on the other side release water and shrink. The opposite of this movement can also occur. However, the terms extensor and flexor are not rigidly defined. Rather, the regions are defined according to function, not position. Basically, the pulvini cells that are on the adaxial (facing the light) side of the pulvinus are the flexor cells, and the cells on the abaxial side are the extensor cells. Therefore, the terms can mean different cells in the same pulvinus at varying times of the day. By coordinating these swellings and shrinkings, the leaves are able to orient themselves perpendicular to the sunlight in diaheliotropic plants.
Leaf movements are the result of changes in turgor pressure in the pulvinus. The pulvinus is a small group of cells at the base of the lamina of each leaflet. The reversible axial expansion and contraction of the extensor and flexor cells take place by reversible changes in the volume of their motor cells. These result from massive fluxes of osmotically active solutes across the cell membrane. K+ is the ion that is usually implicated in this process, and is balanced by the co-transport of Cl- and other organic and inorganic anions.
While the mechanisms of diaheliotropic leaf movements have not been studied extensively, much data exists detailing nyctinastic movements. Several ions are believed to be involved in leaf movement. These include K+, H+, Cl-, malate, and other small organic anions. K+ is the most abundant ion in pulvini cells. Evidence suggests that electrogenic ion secretion is responsible for K+ uptake in nyctinastic plants. The transition from light to darkness activates the H+/ATPase in the flexor cells of the pulvinus. This leads to the release of bound K+ from the apoplast and movement of the K+ into the cells by way of an ion channel. This increase in K+ in the cell decreases the osmotic potential of the cells, and water then flows into the flexor cells, increasing their volume. In Samanea, K+ levels changed four-fold in flexor cells during the transition from light to darkness. In a similar experiment, during hour four of a photoperiod, the extensor apoplast of Samanea had 14 mM of K+ and the flexor apoplast had 23 mM. After the lights were turned off, inducing nyctinastic movements, the K+ level in the apoplast rose to 72 mM in the extensor region and declined to 10 mM in the flexor region. Therefore, it appears that swelling cells take up K+ from the apoplast and shrinking cells release K+ into the apoplast.
In the pulvinus of Samanea saman, depolarization of the plasma membrane opens K+ channels (Kim et al.). The driving force for the transport of K+ across the cell membranes is apparently derived from the activity of an electrogenic proton pump. This creates an electrochemical gradient that allows for K+ movement. From concentration measurements in pulvini, K+ seems to be the most important ion involved in the volume changes of these cells. How, then, is K+ allowed to be at higher concentrations inside a cell than outside it? Studies indicate that the K+ channels are not always open. In protoplasts of Samanea saman, K+ channels were closed when the membrane potential was below -40 mV and open when the membrane potential was depolarized to above -40 mV. A voltage-gated K+ channel that is opened upon depolarization has been observed in every patch clamp study of the plasma membranes of higher plants, including Samanea motor cells and Mimosa pulvinar cells.
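The idea of an electrochemical driving force for K+ can be made more concrete with a short calculation. The Python sketch below evaluates the Nernst equilibrium potential for K+ at the two apoplastic concentrations quoted above for Samanea (14 mM and 72 mM); the cytoplasmic concentration of 100 mM is a hypothetical value assumed purely for illustration, so the results show only the direction and rough size of the shift.

    import math

    # Nernst equilibrium potential for K+ (illustrative values only).
    R = 8.314       # gas constant, J/(mol K)
    F = 96485.0     # Faraday constant, C/mol
    T = 293.0       # temperature, K
    z = 1           # charge number of K+

    k_inside_mm = 100.0   # hypothetical cytoplasmic K+ concentration, mM (assumed)

    def nernst_mv(k_outside_mm):
        """Equilibrium potential in millivolts for K+ at the given apoplastic concentration."""
        return 1000.0 * R * T / (z * F) * math.log(k_outside_mm / k_inside_mm)

    print(f"E_K with 14 mM apoplastic K+: {nernst_mv(14.0):.0f} mV")   # about -50 mV
    print(f"E_K with 72 mM apoplastic K+: {nernst_mv(72.0):.0f} mV")   # about -8 mV

A shift of the equilibrium potential from about -50 mV toward -8 mV, combined with channels that open once the membrane is depolarized past about -40 mV, fits the qualitative picture of K+ movement described above.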
It is proposed that electrogenic H+ secretion results in a proton motive force, a gradient in pH and in membrane potential, that facilitates the uptake of K+, Cl-, sucrose, and other solutes. External sodium acetate promotes closure and inhibits opening in Albizzia. This effect could be caused by a decrease in transmembrane pH gradients. The promotion of opening and inhibition of closure of leaves by fusicoccin and auxin in Cassia, Mimosa, and Albizzia also implicate H+ in the solute uptake of motor cells, since both chemicals are H+/ATPase activators, stimulating H+ secretion from the plant cells into the apoplast. Vanadate, an H+/ATPase inhibitor, inhibits rhythmic leaflet closure in Albizzia. Although this conflicts with the movement effected by fusicoccin and auxin, it is believed that vanadate affects different cells, acting upon flexor rather than extensor cells. The model indicates that there are two possible types of H+ pumps. One is the electrogenic pump that creates the proton motive force mentioned above and opens the K+ channels. The other is an H+/K+ exchanger, in which K+ is pumped into the cell as H+ is pumped out of the cell in a type of antiport. The presence of this type of pump is only hypothetical, however, since at present there is no evidence to support it. Thus there are two possible ways for K+ to enter the pulvini cells. The buildup of the pH gradient may also promote Cl- entry into the cell via an H+/Cl- cotransporter as the H+ trickles back into the cell. Cl- ions may also be driven by the electrochemical gradient for Cl- via Cl- channels, as with K+. A large Cl- channel was observed in the membrane of Samanea flexor protoplasts. The channel closed at membrane potentials above 50 mV and opened at potentials as low as -100 mV.
Light-driven changes in membrane potential may be involved in the activation of these proton pumps. This may be mediated by effects on cytoplasmic Ca2+. Ca2+ chelators inhibit the nyctinastic folding as well as the photonastic unfolding responses in Cassia. Thus Ca2+ may act as a second messenger in a calmodulin-dependent reaction. The Ca2+ may be what turns on the electrogenic proton pumps, causing changes in membrane potential. However, there is no direct evidence to support this hypothesis, although chemicals that are known to change calcium levels have been shown to alter the leaf movement of Cassia fasciculata and other nyctinastic plants. One study involving Samanea postulates that Ca2+ channels are also present in the plasma membrane of pulvini cells, and that inositol trisphosphate, a second messenger in the signal transduction pathway in animals, stimulates the opening of these channels. This suggests that some light signal binds to a receptor on the outside of the cell and stimulates this transduction pathway. However, whether this hypothesis is true is unclear. It has also been proposed that an outwardly directed Ca2+ pump functions as a transport mechanism to restore homeostasis after Ca2+ uptake through channels.
The changes in Cl- levels in the apoplast are smaller than those for K+. The Cl- levels are 75% of those of K+ in Albizzia, 40-80% in Samanea, and 40% in Phaseolus. Therefore, other negatively charged ions must be used to compensate for the positive charges of the K+. Malate concentrations vary, being lower in shrunken cells than in swollen cells. It is believed that malate is synthesized when there is not enough Cl- present to counteract the charges of the K+.
An experiment with soybeans (Cronland) examined the role of K+ channels and H+/ATPase in the plasma membrane in paraheliotropic movement. This was done by treating the pulvini with the K+ channel blocker tetraethylammonium chloride (TEA), the H+/ATPase activator fusicoccin, and the H+/ATPase inhibitors vanadate and erythrosin-B. In all cases the leaf movements of the plant were inhibited, leading to the hypothesis that the directional light results in an influx of K+ into the flexor cells from the apoplast and an efflux of K+ from the extensor cells into the apoplast, and these movements are driven by H+/ATPase pumps. This combined reaction results in the elevation of the leaflet towards the light.
In this study, the diaheliotropic movements of C. pallida are examined. The purpose of this experiment is to determine which ions, if any, are used by pulvini cells of Crotalaria pallida Aiton to control the uptake of water, thereby affecting diaheliotropic movement. As mentioned before, most studies investigating the mechanisms of leaf movement have been performed on nyctinastic plants. These plants respond to light and dark changes, not to the direction or intensity of a light stimulus. Therefore, it is of interest to learn whether the same principles can be applied to diaheliotropic movement.
Different inhibitors at varying concentrations will be injected individually into the pulvinus of C. pallida, and the suntracking ability of the plant will then be measured. Tetraethylammonium (TEA), a K+ channel blocker, will be added to test whether K+ is involved in suntracking. Likewise, a Cl- channel blocker will be added to determine if Cl- is used. Vanadate, an H+/ATPase inhibitor, will determine if hydrogen ions are pumped across the plasma membrane, causing a hyperpolarization of the membrane. Fusicoccin, an H+/ATPase activator, will also be tested.
f:\12000 essays\sciences (985)\Chemistry\Iron.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IRON
Iron in its pure state is soft, malleable and ductile (that is, it can be stretched, drawn or hammered thin without breaking) (Webster's Dictionary, 419, 1988), with a hardness of 4-5. It is easily magnetized at room temperature, and this property disappears when it is heated above 790 degrees Celsius. Metallic iron occurs in a free state in only a few localities, notably Greenland (Encarta, 1996). One of the physical properties of iron as an ore is its color, which can be black, brown or even reddish. Hematite, the most important iron ore, commonly occurs as "kidney ore", so called because of its shape (Symes, 1988, 56). Other ores include goethite, magnetite, siderite, and bog iron (Encarta, 1996). Even though iron is tough and hard, it is still easy to work. Iron is an active metal and will combine with halogens, carbon, etc. It has an atomic weight of 55.847, its atomic number is 26, its specific gravity is 7.86, its melting point is 1535 degrees Celsius, and its boiling point is 3000 degrees Celsius. It burns in oxygen, forming ferrous oxide. When exposed to moist air, iron becomes corroded, forming a reddish-brown, flaky, hydrated ferric oxide, commonly known as rust (Encarta, 1996).
Iron deposits are formed in shallow seas. The iron comes out of the water and collects on the sea floor, creating an underwater deposit. This process occurs over billions of years. Through plate movement the whole sea floor is eventually moved up out of the water, and once out of the water the iron forms a land deposit. The biggest iron deposit in the United States is in the Great Lakes region; northern Minnesota is often called the Iron Range. There are two ways iron deposits are located. In the first method, special machines that detect the iron's magnetism are used to find a deposit. In the second method, a plane with special equipment flies over an area of land suspected of having ore deposits and sends down sound waves to determine whether that area contains an iron deposit. The waves come back up to the plane, and from the pattern of the returning waves one can tell whether there is an iron deposit.
In the early 1990s annual production of iron ore in the United States exceeded 56 million metric tons (Encarta, 1996). There are two ways in which iron is mined: open pit and shaft mining. Open pit mining, also called strip mining, is used 85% of the time, for shallow deposits. The way open pit mining works is that the topsoil is removed with a bulldozer and the land is terraced downward. Then the miners set off a large blast which scatters and loosens the ore, and truckloads of ore are carried to the surface. As the pit gets deeper and deeper the expense increases, and at some point it becomes more economical to switch to the second method of mining, called shaft mining. Shaft mining is usually used for deep and concentrated iron deposits. The way shaft mining works is that a large machine crushes the rock and conveyor belts are then used to transport it up to the surface.
To refine something is to make it fine or pure: to free it from impurities such as dross, alloys, and sediment. When iron is refined it is alloyed with other metals, which produces a very common metal known as steel. Steel is very widely used, for everything from constructing buildings to making single screws. One might think that because steel is made from refined iron the two would have similar properties, but this is false. It is the same as with gasoline and the crude oil it is refined from: one might expect them to be similar, but they are very different.
Mining is considered to be extremely detrimental to the environment. Open pit mining has had a major effect on the surrounding environment: large areas of land are literally dug up and large craters are left. The huge holes disrupt the environment and are not aesthetically pleasing. However, some pits, such as those on the Mesabi Range in Minnesota, have been turned into lakes, and this has created recreational areas for people as well as habitats for birds, fish and other wildlife. The noise of the large machines and the frequent blasts in mining must scare away much of the animal life. Fumes from dynamite explosions produce extremely poisonous gases, and in mining, pockets of hazardous materials may be released. Dust produced during mining can cause illness, especially lung disease; black lung disease is associated primarily with coal mining. In shaft mining, gas can accumulate and explosions can result. Shafts have also collapsed, killing workers. Refining of the ore causes dust and fumes to be released into the environment. In the smelting process to make steel, acidic clouds are formed from the burning of coal. In the U.S., scrubbers are required in smokestacks to prevent acid rain. Through the years, iron and steel have been used to build our country and provide employment. It is hard to imagine what life would be like if man had not learned how to mine and manufacture iron. Mining is now done in a manner that conforms more to the environment.
Once the iron ore is mined it is taken to a refinery and purified. Much of the iron from the northern Minnesota ranges is loaded on barges at Duluth and shipped via the Great Lakes to ports such as Erie, Pennsylvania, and then on to cities such as Pittsburgh and Bethlehem. Shipping the ore to the coal is easier than shipping the coal to the ore, since more coal than iron is used in steelmaking. Pure iron has very limited use; it is used to produce galvanized sheet metal and electromagnets. Most of the iron used commercially contains small amounts of carbon and other impurities; this includes wrought iron, cast iron and steel. Smelting was used to produce early forms of these. Today blast furnaces that use blasts of air heat the iron with ferroalloys to produce steel (Encarta, 1996). The steel is then shaped into screws, which are used to hold the face mask on the shell of the football helmet. Steel is used for this purpose due to its strength: the physical properties of iron are totally changed once it has become steel, making it well suited for screws because, when properly alloyed or coated, it resists rust far better.
f:\12000 essays\sciences (985)\Chemistry\Lab protocols.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Through trial and error my class and I have learned that screwing around and misbehaving in lab not only results in multiple-page papers, but can also be harmful, dangerous, and costly to our teacher and school.
There are many rules or "protocols" that should be followed in a lab environment. In this situation there are ten basic rules that must be followed at all times while participating in lab experiments. These exist for our own safety and should be followed for that reason.
The first rule is that everyone in the lab should wear eye protection. In a lab environment eye injury is very common, and eye protection greatly reduces this risk of injury.
The second is no horseplay. Horseplay can cause injury to yourself and others and can cause damage to the laboratory.
The third rule is that you should only interact with your partner and the teacher. This will prevent distraction from your set experiment.
The fourth rule is that you should not leave your experiment unattended whether you think it is dangerous or not. This rule is completely self explanatory for safety purposes.
The fifth rule is to be extremely careful with equipment. Not only for money purposes but also for your own personal safety.
The sixth rule is not to touch anything that the teacher or the lab handout has not specifically instructed you to touch, because you do not need to.
The seventh rule states that activities should only be done if they are specifically discussed in your lab. This is for safety purposes and for the liability of the school.
The eighth rule is that you are not to contaminate chemicals by using equipment in more than one substance without washing it thoroughly. Doing this can cause explosion, fire, bodily harm, poisonous gases, or possibly death. Also, do not return chemicals to the original container after they have been used.
The ninth rule is that you must read your lab handout thoroughly before experimenting in a lab environment. It is a good idea to ask any pertinent questions prior to taking part in your lab, and to follow directions exactly. This is to prevent harmful mistakes that may result in death, poisonous gases, bodily harm, explosion, or fire.
Last but not least, the tenth rule is that before taking leave of the laboratory environment set forth by the laboratory director and our prestigious school, you should thoroughly clean, to the best of your ability, the above-stated area. This is to prevent chemical burns and to keep toxic gases, highly flammable substances, and acids from remaining on countertops. It also helps to keep the lab environment pleasing to the eye.
I am truly sorry for any disturbances my classmates may have caused. I also hope I have redeemed myself by writing this paper and hereby promising that I will "try" to be less disruptive in the future.
f:\12000 essays\sciences (985)\Chemistry\lead and environment.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Some materials are so commonplace that we take them for granted. One of those
materials is a grayish metal that has been with us for thousands of years. That metal is lead,
still one of the world's most useful substances, and one that never ceases to find a role in
human society.
Lead has the atomic symbol of Pb (for plumbum, lead in Latin). The atomic number
for lead is 82 and the atomic mass is 207.19 AMU. It melts at about 327.502 °C and boils at
1740 °C. Lead is a heavy, ductile, soft, gray solid. It is soluble in nitric acid and insoluble in
water. It is found in North, Central and South America, Australia, Africa and Europe. In
modern times, lead has found a wide range of uses, and world demand for lead and its
products has steadily increased. Lead's usefulness stems from the metal's many desirable
properties: softness, high density, low melting point, ability to block radiation, resistance to
corrosion, readiness to form alloys and chemical compounds, and ease of recycling. Its
versatility, as well as its physical and chemical properties, accounted for its extensive use.
Lead can be rolled into sheets which can be made into rods and pipes. It can also be molded
into containers and mixed with other metallic elements.
Lead was used in ancient times for making coinage, art objects and water pipes. One
of the first known toxic substances, lead was used by the Romans for lining aqueducts and in
glazes on containers used for food and wine storage; and it is suspected to have resulted in
widespread lead poisoning. Members of the famous Franklin Expedition to the Northwest
Passage in the mid-1840s met a similar fate, being poisoned from lead in solder, widely used
at the time to seal tins used to store foods. Until recently, one of the most significant uses was
an anti-knock additive in gasoline. In the 1970s and 1980s, steps were taken to reduce the
use of leaded gas. By 1990, these actions had virtually eliminated the use of lead in gasoline.
Lead is also one of the best and earliest examples of recycling: about 55 percent of the lead
used in Canada comes from recycled material.
One particular category of toxic tort is injury caused by exposure to lead-based paint. The
hazards of lead-based paint have been known since the early 1900s, when the use of lead in the
manufacture of paint was banned in Australia. The lead mining and lead pigment industries in the
United States were able, however, to forestall a ban on the use of lead in the manufacture of paint
until 1978, when it (finally) became illegal in our nation. Lead poisoning occurs only when too much
lead accumulates in the body. Generally, lead poisoning occurs slowly, resulting from the gradual
accumulation of lead in the bone and tissue after repeated exposures. However, it is important to note
that young children absorb 50% of a lead ingestion, while adults absorb only 10%.
The greatest risk of injury from lead poisoning is to children under the age of seven, whose
developing bodies and brains are sensitive to even small amounts of lead, which can leave children
with subtle but irreversible injury that does not appear until many years after the exposure to lead. The
kinds of injuries lead causes in children include: learning disabilities, brain damage, loss of IQ points
and intellect, academic failure, neuropsychological deficits, attention deficit disorder, hyperactive
behavior, antisocial (criminal) behavior, neurological problems, encephalopathy (brain swelling),
major organ failure, coma, and death. These injuries can be life-threatening or can prevent a child from
realizing his or her scholastic, vocational, and financial potential, or from becoming a self-sufficient
adult.
To confirm lead poisoning, the best test is a venous blood lead level. If the blood lead level is
below 25 µg/dL, then a serum ferritin level and other iron studies can be used to determine if iron
deficiency anemia exists. With an elevated blood lead level of 50 µg/dL, the conclusion is that the boy
is lead-poisoned. In this case, the child should be referred for appropriate chelation therapy
immediately.
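The interpretation of blood lead levels described in this paragraph can be summarized in a few lines of code. The Python sketch below simply encodes the two thresholds quoted above (25 µg/dL and 50 µg/dL); it is an illustration of the text, not clinical guidance, and the handling of intermediate values is an assumption.

    def lead_screening_note(blood_lead_ug_per_dl):
        """Illustrates the thresholds discussed above; not medical advice."""
        if blood_lead_ug_per_dl >= 50:
            # Described above as lead poisoning requiring immediate referral.
            return "lead-poisoned: refer for chelation therapy immediately"
        if blood_lead_ug_per_dl < 25:
            # Below 25 ug/dL, iron studies are suggested to rule out anemia.
            return "check serum ferritin and other iron studies"
        # Intermediate values are not covered by the passage above (assumption).
        return "elevated: follow up with further testing"

    print(lead_screening_note(50))   # lead-poisoned: refer for chelation therapy immediately
    print(lead_screening_note(12))   # check serum ferritin and other iron studies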
What we can do to prevent lead poisoning:
1. Do not burn lead debris.
2. Place lead debris in a six mil plastic bag.
3. For storage (of less than 48 hours), place storage bags in a secure area, away from children and
animals.
4. Lead materials and debris must be transported in a covered vehicle to a lined municipal
landfill in accordance with state regulations.
f:\12000 essays\sciences (985)\Chemistry\Linus Carl Pauling.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Linus Carl Pauling
Linus Carl Pauling was born in 1901 and died in 1994. He was an American chemist and physicist, whose investigations into the structure of molecules led to discoveries of how chemicals bond.
Pauling was born in Portland, Oregon, on February 28, 1901, and educated at Oregon State College and the California Institute of Technology (Caltech). He began to apply his insights into quantum physics as professor of chemistry at Caltech, where from 1927 to 1964 he made many of his discoveries. By devising techniques such as X-ray and electron diffraction, he was able to calculate the interatomic distances and angles between chemical bonds.
During the 1930s, Pauling introduced concepts that helped reveal the bonding forces of molecules. The Nature of the Chemical Bond, the result of these investigations, has been a major influence on scientific thinking since it was published in 1939. Pauling also investigated the atomic structure of proteins, including hemoglobin, and discovered that cell deformity in sickle-cell anemia is caused by a genetic defect that influences the production of hemoglobin. He was awarded the 1954 Nobel Prize in chemistry for his work. In later years Pauling fought ardently against nuclear weapons testing, warning the public of the biological dangers of radioactive fallout, and presented a petition to the United Nations in 1958 signed by over 11,000 other scientists. In 1962 he was awarded the Nobel Peace Prize, becoming the second person, after Marie Curie, to win two Nobel Prizes.
Throughout his scientific career, Pauling followed his creative hunches, no matter how controversial they were. In 1970, for example, he advocated large doses of vitamin C to treat the common cold, a belief that few medical authorities have endorsed.
f:\12000 essays\sciences (985)\Chemistry\Magnetic Susceptabilty.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Abstract:
The change in weight induced by a magnetic field was recorded for three solutions of complexes. The change in weight of a calibrating solution of 29.97% (w/w) NiCl2 was recorded to calculate the apparatus constant as 5.7538. cv and cm for each solution were determined in order to calculate the number of unpaired electrons for each paramagnetic complex. Fe(NH4)2(SO4)2·6(H2O) had 4 unpaired electrons, KMnO4 had zero unpaired electrons, and K3[Fe(CN)6] had 1 unpaired electron. The apparent single unpaired electron in K3[Fe(CN)6], when atomic orbital considerations would predict five, arises from the strong ligand field produced by CN-.
Introduction:
Magnetic susceptibility is a phenomenon that arises when a magnetic moment is induced in an object by the presence of an external magnetic field. This induced magnetic moment translates to a change in the weight of the object when it is placed in an external magnetic field. The induced moment may have two orientations: parallel to the external magnetic field or opposed to it. The former is known as paramagnetism and the latter is known as diamagnetism. The physical effect of paramagnetism is an attraction toward the source of the magnetic field (an increase in weight when measured with a Gouy balance) and the physical effect of diamagnetism is a repulsion from the source of the magnetic field (a decrease in weight when measured with a Gouy balance).
The observed magnetic moment is derived from the change in weight. It arises from a combination of the orbital and spin moments of the electrons in the sample, with the spin component being the most important source. The spin moment can be pictured as an electron spinning around an axis and acting like a tiny magnet; this spinning "magnet" gives rise to the magnetic moment.
Paramagnetism results from the permanent magnetic moment of the atom. These permanent magnetic moments arise from the presence of unpaired electrons, which give an unequal number of electrons in the two possible spin states (+1/2, -1/2). In the absence of an external magnetic field, these spins tend to orient themselves randomly, as statistics dictates. When they are placed in an external magnetic field, the moments tend to align in directions parallel and antiparallel to the field. According to statistics, more electrons will occupy the lower energy state than the higher energy state, and in the presence of a magnetic field the lower energy state is the one in which the magnetic moments are aligned parallel to the external field. This imbalance favoring the parallel orientation results in attraction to the source of the external magnetic field.
Diamagnetism is a property of substances that contain no unpaired electrons and lack a permanent magnetic moment. The magnetic moment induced by one electron is canceled by the magnetic moment of an electron having the opposite spin state. The force of diamagnetism results from the effect of the external magnetic field on the orbital motion of the paired electrons; the susceptibility is correlated with the radii and precession of the electronic orbits. The mathematics describing this effect is beyond the scope of the experiment. It should be noted that paramagnetic substances also have a diamagnetic component, but it is much smaller than the paramagnetic component and can therefore be ignored.
Calculations: χm (the mass susceptibility) is found for a calibrating solution of NiCl2 using the equation
(1)
where p is the mass fraction (w/w) of NiCl2 in the solution and T is the absolute temperature.
χv (the volume susceptibility) is determined using the equation
(2)
where ρ is the density of the solution.
The apparatus constant μ0H²A/2 is evaluated using the equation
(3)
With the apparatus constant known and W (mass (kg) x 9.8 m s^-2) known, it is possible to determine χv for each solution using the equation
(4)
χM (the molar susceptibility) is calculated (in SI units) using the equation
(5)
With χM determined, the Curie constant C is calculated by the equation:
(6)
The small diamagnetic term can be neglected for paramagnetic compounds and the equation becomes:
(7)
The atomic moment µ can then be calculated using the equation:
(8)
The number of unpaired electrons can be found approximately by the equation:
(9)
where n is the number of unpaired electrons.
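Because the numbered equations themselves did not survive in this copy of the report, the following short Python sketch (not part of the original write-up) illustrates the back half of the calculation chain under standard assumptions: the SI Curie law with the diamagnetic term neglected (as in equation 7) and the spin-only moment of equation 9. The numerical input is the molar susceptibility of the iron(II) ammonium sulfate solution from Table Seven below.

    import math

    # Physical constants (SI)
    k_B  = 1.380649e-23      # Boltzmann constant, J/K
    N_A  = 6.02214076e23     # Avogadro's number, 1/mol
    mu_0 = 4e-7 * math.pi    # vacuum permeability, T m/A
    mu_B = 9.2740100783e-24  # Bohr magneton, J/T

    def unpaired_electrons(chi_M, T):
        """Molar susceptibility (SI, m^3/mol) -> Curie constant, moment, n."""
        C = chi_M * T                                # Curie law, diamagnetic term neglected
        mu = math.sqrt(3 * k_B * C / (mu_0 * N_A))   # atomic moment, J/T
        mu_bohr = mu / mu_B                          # moment in Bohr magnetons
        n = -1 + math.sqrt(1 + mu_bohr**2)           # from mu = sqrt(n(n+2)) Bohr magnetons
        return C, mu_bohr, n

    # Fe(NH4)2(SO4)2.6(H2O) at 293 K, chi_M taken from Table Seven
    C, mu_bohr, n = unpaired_electrons(1.087e-7, 293)
    print(C, mu_bohr, n)   # ~3.18e-5, ~4.50, ~3.61 -> rounds to 4 unpaired electrons

Run with the χM values in Table Seven, this chain reproduces the Curie constants, atomic moments, and unpaired-electron counts reported in Tables Eight through Ten.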
Experimental Method:
The method described in Experiments in Physical Chemistry was followed. The densities of all solutions were measured using a pycnometer.
A solution of NiCl2 was made with the following parameters (table one):
Table One: Parameters of NiCl2
Solution    Concentration (M)    Weight Fraction    Density (kg/m3)
NiCl2       2.308 ± 0.016        0.2997             (1.3552 ± 0.003) x 10^3
Three test solutions were prepared as follows (table two):
Table Two: Parameters of Solutions
Solution                  Concentration (M)    Density (kg/m3)
Fe(NH4)2(SO4)2·6(H2O)     0.705 ± 0.016        (1.1148 ± 0.003) x 10^3
KMnO4                     0.377 ± 0.016        (1.0201 ± 0.003) x 10^3
K3[Fe(CN)6]               0.498 ± 0.016        (1.0834 ± 0.003) x 10^3
Measurements in the presence and absence of a magnetic field were made using a Gouy balance as described in Experiments in Physical Chemistry and were made in triplicate.
Results:
All measurements were performed at 293K.
Table Three: Mass (field on - field off)
Solution                  Run One (g)         Run Two (g)         Run Three (g)       Average (g)
NiCl2                     0.09349 ± 0.0001    0.09381 ± 0.0001    0.10427 ± 0.0001    0.09719 ± 0.0001
Fe(NH4)2(SO4)2·6(H2O)     0.03548 ± 0.0001    0.03665 ± 0.0001    0.04785 ± 0.0001    0.03999 ± 0.0001
KMnO4                     -0.00406 ± 0.0001   -0.00404 ± 0.0001   -0.00399 ± 0.0001   -0.00403 ± 0.0001
K3[Fe(CN)6]               0.00252 ± 0.0001    0.00258 ± 0.0001    0.00386 ± 0.0001    0.00299 ± 0.0001
Table Four: Weight for Solutions
Solution                  Weight (N)
NiCl2                     (9.5246 ± 0.0098) x 10^-4
Fe(NH4)2(SO4)2·6(H2O)     (3.9190 ± 0.0098) x 10^-4
KMnO4                     (-3.948 ± 0.098) x 10^-5
K3[Fe(CN)6]               (2.930 ± 0.098) x 10^-5
The following parameters of NiCl2 were determined (table five) using equations 1 and 2:
Table Five: Parameters of NiCl2
χm    (1.22 ± 0.04) x 10^-7 m^3 kg^-1
χv    (1.66 ± 0.06) x 10^-4
The apparatus constant μ0H²A/2 was evaluated using equation 3 as 5.73 ± 0.02.
χv was calculated for each solution (table six) using equation 4.
Table Six
Solution                  Weight (N)                   χv
Fe(NH4)2(SO4)2·6(H2O)     (3.9190 ± 0.0098) x 10^-4    (6.831 ± 0.003) x 10^-5
KMnO4                     (-3.948 ± 0.098) x 10^-5     (-6.88 ± 0.02) x 10^-6
K3[Fe(CN)6]               (2.930 ± 0.098) x 10^-5      (5.10 ± 0.02) x 10^-6
χM is calculated (in SI units) using equation 5 (table seven):
Table Seven
Solution                  χv                         χM
Fe(NH4)2(SO4)2·6(H2O)     (6.831 ± 0.003) x 10^-5    (1.087 ± 0.007) x 10^-7
KMnO4                     (-6.88 ± 0.02) x 10^-6     (4.79 ± 0.03) x 10^-9
K3[Fe(CN)6]               (5.10 ± 0.02) x 10^-6      (2.69 ± 0.02) x 10^-8
With χM determined, the Curie constant C is calculated by equation 7 (table eight):
Table Eight
Solution                  C
Fe(NH4)2(SO4)2·6(H2O)     (3.18 ± 0.02) x 10^-5
KMnO4                     (1.406 ± 0.008) x 10^-6
K3[Fe(CN)6]               (7.90 ± 0.06) x 10^-6
The atomic moment was then calculated using equation 8 (table nine):
Table Nine
Solution                  µ (Bohr magnetons)
Fe(NH4)2(SO4)2·6(H2O)     4.5044
KMnO4                     0.9462
K3[Fe(CN)6]               2.2428
The number of unpaired electrons was found approximately using equation 9 (table ten):
Table Ten
Solution                  n         Number of unpaired electrons
Fe(NH4)2(SO4)2·6(H2O)     3.6141    4
KMnO4                     0.3767    0
K3[Fe(CN)6]               1.4557    1
Discussion:
The number of unpaired electrons determined experimentally agrees with atomic orbital calculations except for K3[Fe(CN)6] (table eleven):
Table Eleven
Solution                  Experimentally Determined    A.O. Calculations
Fe(NH4)2(SO4)2·6(H2O)     4                            4
KMnO4                     0                            0
K3[Fe(CN)6]               1                            5
The discrepancy for the K3[Fe(CN)6] complex is not due to experimental error but to the physical properties of transition metal complexes such as K3[Fe(CN)6]. These properties are described by ligand field theory.
The compound K3[Fe(CN)6] is characterized as a low-spin case. In a low-spin case the measured number of unpaired electrons is considerably less than that calculated theoretically. This is caused by the splitting of the five degenerate d-level electronic orbitals into two or more levels of different energies by the field produced by the ligands.
In the case of K3[Fe(CN)6], CN- exerts a strong ligand field. This strong splitting field results in a greater energy difference between the bonding and antibonding orbitals (see Picture One), making it more probable that all 5 electrons will occupy the lower-energy bonding orbitals.
Picture One: A diagram of the weak field and strong field effect on electron arrangement in Fe+3
The strong ligand field produced by CN- results in spin-moment cancellation of four out of the five unpaired electrons. This leads to the apparent single unpaired electron determined by the experiment.
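The electron bookkeeping behind Picture One can be made explicit with a small illustrative Python sketch (standard crystal-field counting, not something taken from the original report): for an octahedral d5 ion such as Fe3+, a weak ligand field leaves all five electrons unpaired, while a strong field such as that of CN- forces four of them to pair in the lower set of orbitals.

    def unpaired_d_electrons(n_d, low_spin):
        """Count unpaired electrons for a d^n ion in an octahedral field.
        High spin: all five d orbitals are filled singly before any pairing.
        Low spin: the three lower (t2g) orbitals are filled completely first."""
        if low_spin:
            t2g = min(n_d, 6)                         # up to 6 electrons in the t2g set
            eg = n_d - t2g                            # remainder goes to the eg set
            unpaired = (t2g if t2g <= 3 else 6 - t2g)
            unpaired += (eg if eg <= 2 else 4 - eg)
            return unpaired
        return n_d if n_d <= 5 else 10 - n_d          # high spin: Hund's rule over all five orbitals

    print(unpaired_d_electrons(5, low_spin=False))    # 5, the weak-field (atomic orbital) count
    print(unpaired_d_electrons(5, low_spin=True))     # 1, the strong-field count seen for K3[Fe(CN)6]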
The sources of error in this experiment are the solution concentrations, the densities, and the masses. All confidence limits were determined by propagation of error using partial derivatives, as in previous lab reports. Although we initially had trouble with the balance, these problems were resolved prior to taking measurements.
Although more accurate results are not needed, a possible way to increase accuracy would be to use a larger volume of solution. When we performed this experiment, we had to cut the volume used in previous years by 50% because the weight exceeded the capacity of the balance. Using more solution would decrease the relative significance of the error in mass.
References:
1. Shoemaker, Garland, and Nibler, Experiments in Physical Chemistry, Fifth Edition, McGraw-Hill, New York, 1989.
2. Mulay, L. N., Magnetic Susceptibility, Interscience Publishers, New York, 1963.
3. Adamson, Arthur W., A Textbook of Physical Chemistry, Third Edition, Academic Press College Division, Orlando, Florida, 1986.
4. Barrow, Gordon M., Physical Chemistry, Third Edition, McGraw-Hill, New York, 1973.
Experiment 33
Magnetic Susceptibility
Michael J. Horan II
f:\12000 essays\sciences (985)\Chemistry\marie curie a pioneering physicist.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aspirations come from hopes and dreams only a dedicated person can
conjure up. They can range from passing the third grade to making the
local high school football team. Marie Curie's aspirations, however,
were much greater.
Life in late 19th century Poland was rough. Being a female in
those days wasn't a walk in the park either. Marie Curie is recognized
in history by the name she took in her adopted country, France. Born in
Poland in 1867, she was christened Manya Sklodowska. In the year of her
birth, Poland was ruled by the neighboring Russia; no Pole could forget
it, or at least anyone involved in education, as both Manya's parents
were. Manya's mother was a headmistress of a girls' school. The Russians
insisted that Polish schools teach the Russian language and Russian
history. The Poles had to teach their children their own language and
history in secrecy.
Manya enjoyed learning but her childhood was always overshadowed
by depression. At the young age of six, her father lost his job and her
family became very poor. In the same year of 1873, her mother died of
tuberculosis. As if that wasn't enough tragedy for the family already,
two of her sisters died of typhus as well. Her oldest sister, Bronya,
had to leave school early to take care of the family. Despite all these
hardships and setbacks, Manya continued to work hard at school.
Although her sister Bronya had stopped going to school to act as
the family's housekeeper, she desperately wanted to go on studying to
become a doctor. This was almost impossible in Poland, however. In
Poland, women were not allowed to go to college. Many Poles took the
option to flee from Russian rule and live in France; this is exactly
what Bronya did. She had set her heart on going to Paris to study at
the famous Sorbonne University (The University of Paris). The only
problem now was that she had no money to get there.
Manya and Bronya agreed to help each other attain their
educations. Manya got a job as a governess and sent her earnings to
support Bronya in Paris. Then, when Bronya could afford it, she would
help Manya with her schooling and education in return. Manya went to live
in a village called Szczuki with a family called Zorawski. Aside from
teaching the two children of the family for seven hours a day, she
organized lessons for her own benefit as well. Manya spent her evenings,
late evenings, and even mornings devouring books on mathematics and science.
Bronya finished her studies and married a Polish doctor, Casimir
Dluski. They invited Manya to live with them in Paris while she went to
college. Manya didn't want to leave her country and most importantly, her
family. Her eagerness for the quest of knowledge overcame her fear of the
unknown, nonetheless. She travelled to Paris in an open railroad car on a
trip that lasted three days in the Polish winter. She arrived safely at
the city she had dreamed of since childhood, Paris. Manya Sklodowska
quickly became Marie.
While Marie improved her French, she stayed with Bronya and her
husband. They lived more than an hour away from the university. Marie
wanted to be nearer to her work, so she eventually ended up moving out of
her sister's home and into a single cold damp room, eating only enough to
keep her alive. Fortunate enough for a scholarship, Marie was able to go
on studying until she had completed two courses. In her final exam-
inations, she came in first in the subject of mathematics and second in
physics. By 1894, at the age of 27, Marie had acquired not one, but two
degrees from France's top university and also became a totally fluent
speaker of the French language.
Marie had always ruled love and marriage out of her life's
program. She was obsessed by her dreams, harassed by poverty, and
overdriven by intensive work. Nothing else counted; nothing else existed.
She did, however, meet a young man every day at the Sorbonne and at the
laboratory. Marie and her destiny actually met by coincidence. Marie
needed somewhere to conduct her experiments for research ordered by the
Society for the Encouragement of National Industry. The lab at the Sorbonne
was too crowded with students, in addition to not having the
right equipment. A friend of hers suggested a friend's laboratory. His
name was Pierre Curie. Marie soon completed her commitment to her adopted
country by marrying this Frenchman.
Marie and Pierre Curie got married in 1895. The two of them
combined probably made up the best team of scientists ever. Pierre had
made important discoveries about magnetism. Marie decided to follow this
up by looking at the magnetic properties of steel. In the same year of
their marriage, a German scientist by the name of Wilhelm Roentgen made an
accidental discovery. He found that certain substances produced rays of
energy that would pass through soft materials as opposed to hard
materials. Due to the fact that scientists often use the symbol "x" to
stand for anything unknown, he called his mysterious
discovery the "x-ray." The x-ray was more than an ammusing puzzle. By
directing x-rays and photographic film at a solid object that consisted
of both soft and hard substances a positive image can be made of the hard
substance. A prime example would be the human body. This discovery now
made it possible to look inside the human body without performing
surgery. Within the few days of the findings, x-rays were used to locate
a bullet in a man's leg. The world of medicine had acquired a major new
tool for examining the sick and injured.
The year after Roentgen's discovery, a French researcher and a
friend of the Curies', Antoine Henri Becquerel, found that a rare substance
called uranium gave off rays that seemed to be very much like the x-rays
that Wilhelm Roentgen had described.
In 1897, the year after Becquerel's discovery, Marie Curie gave birth
to her very first daughter, Irene. Despite being caught up in family
life, Marie was still determined to go on with her scientific work. She
decided to follow up Becquerel's discovery and do special research on the
study of uranium and the rays it produced.
Elements are the raw materials of our universe. Everything is
made up of these basic substances. Scientists are able to break things
down into their various elements, and tests can be made to discover their
array of properties.
In the small damp laboratory in the back of the Sorbonne's School of
Physics and Chemistry, Marie began a long, tedious and painstaking
series of experiments that tested every element known to man. She found
that only the two elements uranium and thorium gave off rays.
"Radioactivity" was the name Marie gave to this property. Marie soon
again made another important discovery about a mineral called alled pitch-
blende, a black substance, somewhat stiff like that of tar, which contains
tiny quantities of uranium but absent of thorium. Pitchblende gave off
eight times more rays than the uranium that it contained. It was,
utilizing Marie's new term, more radioactive. Marie figured out that
pitchblende must therefore contain another element,which was also radio-
active that no one had discovered as of yet. Pierre was so overwhelmed
with this discovery, he quit his own work to join in his wife's research
and find out more on this new element. The Curie team decided to call it
radium.
Marie realized that the new element was present in the pitchblende
only in minute quantities; therefore, to isolate any respectable amount
to test and measure, large portions of pitchblende were needed. To
separate the radium from the pitchblende, it would have to be heated,
which purifies the substance. While working with the pitchblende, another
new element was discovered which was not radium.
Marie named this element polonium, in honor of her native homeland Poland.
Marie's experiments were now being conducted in an abandoned
wooden shed, furnished with only old kitchen tables, a cast-iron stove
and a blackboard. One evening, in 1902, after four long years of
exhausting work, Marie decided to go back to their lab and check on the
experiments they had done earlier in the day. When Marie and Pierre got
to the laboratory, they saw a "faint blue glow" in the darkness; it was
the radium.
Radium proved to be one of the world's most important
discoveries, especially for its miraculous medical uses. Radium was
measured to be two million times more radioactive than uranium. The
smallest amount of radium was capable of giving off immense radiation.
Radium is extremely powerful and, unless used with care and in a
controlled environment, very dangerous. Unfortunately, this was not
known in the days of the Curies. While working with radioactive
materials, both Pierre and Marie suffered from many illnesses and pains.
They encountered aching arms and legs, sores, colds and blisters that
never seemed to go away. They often pinned these problems on their lack
of rest due to long hours in the laboratory. Only later did the two connect
their improvement in health with their absence from the radium. The
Curies' great discovery prompted scientists and doctors to work and
further develop its uses. It was found that radiation could be used to
destroy unhealthy growth in the human body, thus helping to stop cancer.
Besides being able to cure, radium can also kill. Handling and
controlling the radium is the first and foremost dilemma. The Curies
found this out the hard way...
The discovery of radium did, however, bring the Curies something
they were proud of. In 1903, Marie Curie was awarded the degree of Doctor
of Science. At the awards ceremony, Marie showed how grateful she was
by wearing a new dress. The Curies were showered with awards and
honors from then on. That same year, Pierre was invited to London to
give a lecture on radium. In November of that year, the Royal Society,
Britain's leading association of scientists, presented Pierre and his wife
with one of its highest awards, the Davy Medal. Not a month later, they
heard from the Academy of Sciences in Sweden that the Nobel Prize for
physics was to be awarded to the Curies along with Henri Becquerel. Marie
and Pierre felt too ill to make the journey to Sweden to accept the prize
in person, so Becquerel accepted the medals for them. The Nobel Prize
included a rather large sum of money... 70,000 gold francs. The Curies
used the money to finance their experiments, to release Pierre
from his teaching so that he could concentrate on research, and to repay the
kindness and support they had received from their friends and family over
the years. They also gave gifts to poor Polish students and made a few
improvements to their small apartment.
One newcomer that the Curies didn't mind was Eve, their second
daughter, born in December 1904. Her arrival didn't disrupt the Curies'
research and teaching, as the arrival of their first child Irene had threatened to.
The Curies' passion for science still lingered.
In the year of 1905, Pierre was elected a member of the French
Academy of Sciences and became a Professor of Physics at the Sorbonne.
Early in the following year, tragedy struck. Crossing the road in a
shower of rain, Pierre stepped out from behind a cab straight into
the path of a heavy horse-drawn wagon. The driver tried to stop the
wagon, but all was in vain. The weight of his load was too great for him
to stop, and the left back wheel crushed Pierre as he lay stunned in the
road. Pierre Curie died instantly.
Marie was shattered by the news of her husband's death but soon
recovered the determination to carry on with her work. The French govern-
ment proposed to recognize Pierre's work to the nation by granting Marie a
pension for herself and her children. She refused, saying, "I am young
enough to earn my living and that of my children..."
The Sorbonne agreed with her, for the Faculty of Science voted
unanimously that she should succeed Pierre as Professor. It was a unique
tribute, for she became not only the first woman professor at the Sorbonne but
the first at any French university.
Marie had felt it was her duty to succeed her husband. He had
always said he would have liked to see Marie teach a class at Sorbonne.
Marie at last showed her feelings on the matter by the way in which
she gave her first public lecture to a packed crowd.
In the year of 1910, four years after Pierre's death, Marie
published a long account of her discoveries of radioactivity. This led
to her being awarded a second Nobel Prize. Not for another fifty years
would anyone accomplish such a remarkable honor. This time, Marie went
to Stockholm in Sweden to accept her prize in person. 1911 should have
been a year of triumph, but it turned out to be an awful year of anguish.
The awarding of Marie's second Nobel Prize was controversial
because many said it was given to her out of pity for her late husband. That
same year, Marie failed by two votes to be elected to the Academy
of Sciences. Worse yet, some newspapers said that her close friendship
with the scientist Paul Langevin was wrong because he was a married man
with four children.
Marie received many spiteful letters and became distressed. A
spell in the nursing home and a trip to England helped her to recover.
Marie's real cure for her problems was definitely her work. The Sorbonne
at last decided to give her what she needed to do it properly - a special
institute for the study of radium, newly-built on a road renamed in honor
of her husband, "Rue Pierre Curie." Marie was thrilled with this new
project and gave it, as her own personal gift, the precious radium she
and Pierre had prepared with their own hands. This radium was precious
in every sense. It was vital for further scientific research. It was
essential for its use in medicine, and it was worth more than a million
gold francs.
The Radium Institute was finished on July 13, 1914. Less than
a week later, World War I broke out. Marie gave up all thought of
scientific work in her new institute and threw herself behind the cause
of her adopted country. Before dedicating herself to the war, Marie made
a special trip to Bordeaux, in western France and put the precious gram of
radium away in a bank vault.
Marie donated all her money toward the war effort, including her
own personal savings in gold to be melted down. She even offered her
medals, but the bank refused them. Marie quickly saw that there was one
service that she could do for France that no one else could - organize a
mass x-ray service for the treatment of wounded soldiers. During the
course of the war, Marie, along with volunteers, equipped 20 cars as
mobile x-ray units and set up more than 200 hospital rooms with x-ray
equipment. Over a million men were x-rayed, which saved tens of
thousands of lives and prevented an untold number of amputations. Between
1916 and 1918, Marie Curie trained 150 people, including 20 American
Expeditionary Force members, in the x-ray techniques of radiology. After the
war ended, Marie continued to train radiologists for another two years.
Marie disliked reporters and kept away from journalists. One
American reporter, Mrs. Marie Melaney was persistent. Marie finally gave
in to her and agreed to an interview. The two quickly became friends.
Mrs. Melaney understood how Marie had put aside her scientific work
during the war and knew that in the whole of France there was only one
gram of radium that Marie had presented to the newly-established
institute. Mrs. Melaney went back to the United States and asked the
country for a sum of $100,000 for another gram of radium for Marie's
research. Marie was widely known, and millions dutifully contributed. In
1921, Marie was invited to the United States to receive her radium. After
she stepped out into the public eye just once, the world fell in love with her.
She became a sort of ambassador for science, travelling to
other countries, educating as well as receiving honors. In 1925,
the Polish government erected another radium institute, this time in her
honor - The Marie Sklodowska/Curie Institute. The President of Poland
laid the first corner stone while Marie laid the second. The women of
the United States acknowledged her a second time and collected enough
money to produce yet another gram of radium to be presented to the
Polish Institute for its research and treatment program.
In May of 1934, Marie Curie was confined to her bed with the
flu. Too weak to fight the illness, she died in a sanitarium
in the French Alps. She was quietly buried on July 6, 1934 and laid to
rest next to her husband Pierre.
Marie Curie was a woman of the ages. She represented true
humanity in the pursuit of perfection. Marie found humanity's perfection
in chemistry and her work. Loving what she did and devoting herself to
the sciences is what made her happy, and in that devotion true perfection was
found.
f:\12000 essays\sciences (985)\Chemistry\Marie Curie.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MARIE CURIE
LIFE OF MARIE CURIE
Marie Curie (1867-1934) was a French physicist with many accomplishments in both physics and chemistry. Marie and her husband Pierre, who was also a French physicist, are both famous for their work in radioactivity.
Marie Curie, originally named Marja Sklodowska, was born in Warsaw, Poland on Nov. 7, 1867. Her first learning of physics came from her father, who taught it in high school. Marie's father must have taught his daughter well, because in 1891 she went to Paris (where she changed her original name) and enrolled in the Sorbonne. Two years later she passed the examination for her physics degree, ranking in first place. She met Pierre Curie in 1894 and married him the next year. Marie subsequently gave birth to two daughters, Irene (1897) and Eve (1904).
Pierre Curie (1859-1906) obtained his doctorate in the year of his marriage, but had already distinguished himself in the study of the properties of crystals. He discovered the phenomenon of piezoelectricity, whereby changes in the volume of certain crystals excite small electric potentials. He also discovered that the magnetic susceptibility of paramagnetic materials is inversely proportional to the absolute temperature, and that there exists a critical temperature above which the magnetic properties disappear; this is called the Curie temperature.
Marie Curie was interested in the recent discoveries of radiation, which were made by Wilhelm Roentgen with the discovery of X-rays in 1895, and by Henri Becquerel in 1896, when he discovered that uranium gives off invisible radiation similar to X-rays. Curie thus began studying uranium radiation and made it her doctoral thesis. With the aid of an electrometer built by Pierre, Marie measured the strength of the radiation emitted from uranium compounds and found it proportional to the uranium content, constant over a long period of time, and uninfluenced by external conditions. She detected a similar immutable radiation in the compounds of thorium. While checking these results, she made the discovery that uranium pitchblende and the mineral chalcolite emitted four times as much radiation as their uranium content could account for. She realized that unknown elements, even more radioactive than uranium, must be present. Then in 1898 she drew the revolutionary conclusion that pitchblende contains a small amount of an unknown radiating element.
Pierre Curie understood the importance of this supposition and joined his wife's work. In the next year, the Curies discovered two new radiating elements, which they named polonium (after Marie's native country) and radium. They then began the tedious and monumental task of isolating these elements so that their chemical properties could be determined. During the next four years, working in a leaky wooden shed, they processed a ton of pitchblende, laboriously isolating from it a fraction of a gram of radium.
In 1903, Marie Curie obtained her doctorate for a thesis on radioactive substances, and with her husband and Henri Becquerel she won the Nobel Prize for physics for their joint work on radioactivity. At last the Curies' financial strain was relieved, and the following year Pierre was made a professor at the Sorbonne, with Marie as his assistant. Everything was going well for the Curies, but then Pierre was run over by a horse-drawn cart and killed. Marie was deeply affected by his death and overcame this blow only by putting all her energy into the scientific work that they had begun together.
Marie took over her husband's post at the Sorbonne, making her the first female lecturer at the Sorbonne, and in 1908 she was appointed professor. In 1911 she received an unprecedented second Nobel Prize, this time in chemistry, for her work on radium and its compounds.
During World War I, Madame Curie dedicated herself entirely to the development of the use of X-rays in medicine. In 1918 she became head of the Paris Institute of Radium, where her daughter Irene Joliot-Curie worked with her husband Frederic Joliot. Her research for the rest of her life was dedicated to the chemistry of radioactive materials and their medical applications. She labored to establish international scholarships and lectured abroad. Marie Curie died on July 4, 1934 of leukemia, which was undoubtedly caused by prolonged exposure to radiation. A year later Irene and Frederic won the Nobel Prize in chemistry for the synthesis of new radioactive elements.
ELEMENTS MARIE DISCOVERED
Polonium is a rare metallic element which occurs naturally in the uranium ore pitchblende, but it is most commonly made artificially by bombarding bismuth (a brittle metal) with neutrons. It is used chiefly by scientists for nuclear research.
Radium is a highly radioactive metallic element. It occurs mostly in uranium and thorium ores. It was discovered by the Curies while processing pitchblende. Until the mid-1950s radium was used for treating cancer and as an ingredient in the fluorescent paint used for watch and instrument dials. Today safer and cheaper sources of radiation have replaced radium for most industrial and medical uses.
CONCLUSION
The work of the Curies, which by its nature dealt with changes in the atomic nucleus, led the way toward modern understanding of the atom as an entity that can be split to release enormous amounts of energy. With these discoveries we have been able to actually put to use the elements in our everyday life.
f:\12000 essays\sciences (985)\Chemistry\Max Planck.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Max Planck
Justin Thomas
Period 4
Chemistry
10/08/96
On April 23, 1858 Max Karl Ernst Ludwig Planck was born in Kiel, Germany. He was the sixth child of a law professor at the University of Kiel. At the age of nine his interest in physics and mathematics was developed by his teacher Hermann Muller. When he graduated at the age of seventeen he decided to choose physics over music for his career. Although he is known for physics, he was an exceptional pianist who had the gift of absolute pitch. His favorite composers were known to be Schubert and Brahms. Entering the University of Munich in 1874, he found little inspiration there, and he was equally unimpressed at the University of Berlin, where he studied between 1877 and 1878. He instead pursued independent study, primarily of Rudolf Clausius' writings on thermodynamics, which inspired him, and in July 1879 he received his doctoral degree at the age of twenty-one. He became a lecturer at the University of Munich. His father helped him be promoted to associate professor at Kiel by means of professional connections. At the age of thirty he was promoted to full professor at the University of Berlin.
After he decided to become a theoretical physicist he started a quest for absolute laws. His favorite absolute law was the law of the conservation of energy, the first law of thermodynamics, which states that energy can be transformed from one form into an equal amount of another form, meaning that ideally no energy is lost. The second law of thermodynamics led him to discover the quantum of action, or Planck's constant h. How he came upon his formula for quantum mechanics will be explained as follows. Planck saw that blackbody radiation acted in an absolute sense because a blackbody was defined by Kirchhoff as a substance that absorbs almost all radiant energy falling on it and perfectly emits all that it has absorbed, which is associated with the first law of thermodynamics. Through various experiments and theoretical failures, many scientists tried to find the spectral energy distribution, a curve showing the amount of radiation given off at different frequencies by a blackbody at a given temperature. Then, using Wien's law, which worked for high frequencies but failed at low ones, Planck saw a relationship between the mathematics of the entropy of the radiation in the high-frequency waves and in the low-frequency waves, and he guessed that if he combined the two in the simplest way he would get a formula relating the amount of radiation to a blackbody's frequency. Although Planck's formula was accepted as correct without argument, he was not satisfied, and he tried to reconcile his formula with the absolute laws which he loved so much. In order to make sense of his formula he had to kick the second law of thermodynamics out the door and accept it as a statistical law, as it was interpreted by Ludwig Boltzmann. He also had to accept that the blackbody could not absorb energy continuously but only in separate amounts of energy spread over time like pulses. Planck called these quanta of energy. To prove his formula even further he used it to find Planck's constant h, which turned out to be a very small number (six and fifty-five hundredths times ten to the negative twenty-seventh power, in erg-seconds). The fact that Planck's constant was not zero but a definite small number made the unseen physical world indescribable by classical methods, which sparked a revolution in physical theory.
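The formula Planck finally arrived at is the blackbody radiation law B(nu, T) = (2 h nu^3 / c^2) / (e^(h nu / k T) - 1). As a quick illustration (standard physics, not drawn from the essay itself), the short Python sketch below evaluates the curve for a sun-like temperature and locates its peak, which agrees with Wien's displacement law:

    import math

    h = 6.62607015e-34   # Planck's constant, J s
    c = 2.99792458e8     # speed of light, m/s
    k = 1.380649e-23     # Boltzmann constant, J/K

    def planck_radiance(nu, T):
        """Blackbody spectral radiance B(nu, T) in W m^-2 Hz^-1 sr^-1."""
        return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

    T = 5800.0                                   # roughly the temperature of the sun's surface
    freqs = [i * 1.0e13 for i in range(1, 200)]  # sample frequencies from 1e13 to about 2e15 Hz
    peak = max(freqs, key=lambda nu: planck_radiance(nu, T))
    print(peak)   # about 3.4e14 Hz, consistent with Wien's displacement law (~5.88e10 Hz/K * T)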
When Planck discovered the theory of the quantum he was forty-two, and he later won the Nobel Prize for physics in 1918. After his discovery he continued to contribute to physics. Planck was one of the first physicists to back up Einstein's theory of relativity. After he retired as a physicist, which was around the time Hitler rose to power, Planck focused in his writings on philosophical questions and questions of faith. During World War I he stayed in Germany to preserve as much as he could of German physics. The later part of Max Planck's life is filled with sadness. In 1909 his first wife died after twenty-two years of marriage. He later married another woman, with whom he had one child. His children from his first wife, Marie Merck, included twin daughters and two sons, all of whom died before he did. His first son Karl was killed in action in 1916, and one year later Margarete, one of his daughters, died in childbirth. Two years later, Emma, his other twin daughter, died the same death as her sister. As if the first war hadn't damaged him enough, World War II destroyed his house completely, and his younger son was killed by the Gestapo for involvement in the plot to assassinate Hitler in 1944. By the time the war ended Max Planck had lost the will to live, and he died of natural causes on October 4, 1947, when he was eighty-nine.
Planck's contributions to the modern world are vast. He led Albert Einstein toward the theory of relativity, which in turn led to the construction of the atom bomb. That is one huge example of the things Max Planck made possible by discovering quantum theory.
f:\12000 essays\sciences (985)\Chemistry\Mercury.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MERCURY
Mercury is a metallic element that is a liquid at room temperature; it is one of the transition elements. Mercury's atomic number is 80. It is superconductive when cooled to within a few degrees of absolute zero. Mercury was once known as liquid silver or quicksilver, which was studied by the alchemists. Mercury was first distinguished as an element by the French chemist Antoine Laurent Lavoisier in his experiment on the composition of air. At room temperature mercury is a shining, mobile liquid that has a silvery-white color and is slightly volatile. Mercury remains a liquid over a wide temperature range. Mercury becomes a solid when subjected to a pressure of 7,640 atmospheres (5.8 million torr). It dissolves in nitric or concentrated sulfuric acid but is resistant to alkalies. Mercury melts at -39°C, boils at about 357°C, and has a specific gravity of 13.5. The atomic weight of mercury is 200.59. Mercury occurs in its pure form or combined with silver in small amounts. It is mostly found in the form of the sulfide.
Mercury has many uses and is a very important element. A major use of mercury is in electrical equipment such as fluorescent lamps and mercury batteries. Mercury is used in thermometers because the change in volume for each degree of rise or fall in temperature is the same. Mercury was first used in thermometers in place of alcohol by Gabriel Daniel Fahrenheit in 1714. It was also used in vacuum pumps, barometers, and electric rectifiers and switches. Mercury is used in mercury-vapor lamps, which serve as a source of ultraviolet rays in homes and for sterilizing water. Mercury vapor is also used instead of steam in the boilers of some turbine engines. Mercury is sometimes used for amalgamation, a metallurgical process that utilizes mercury to dissolve silver or gold to form an amalgam. This process has been largely supplanted by the cyanide process, in which gold or silver is dissolved in solutions of sodium or potassium cyanide.
Mercury is a poisonous element. Among the many good things mercury does for people, there is a flip side: mercury is hazardous as a vapor and in the form of its water-soluble salts. Chronic mercury poisoning, which occurs when small amounts of the metal are repeatedly ingested over long periods of time, causes irreversible brain, liver, and kidney damage.
f:\12000 essays\sciences (985)\Chemistry\Millikan.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In 1909 Robert Andrews Millikan set up an apparatus to measure the charge of an electron to within an accuracy of 3%. In 1913 he came out with a value of the electrical charge that would serve the world of science for a generation.
Young Millikan had a childhood like most others: he had no idea what his profession would be. Once he recalled trying to jump from a rowboat to a dock, falling in the water, and almost drowning. Here he had his first encounter with physics - Newton's Third Law of Motion: "For every action there is an equal and opposite reaction." Even in high school physics courses Millikan was not so spirited, which may have had a little to do with his teacher's habit of spending the summers using a divining rod to find water. After Millikan graduated from Maquoketa High he was accepted into Oberlin College. Robert actually began his physics career when he taught an elementary course at the request of his Greek professor during his sophomore year. He then transferred to Columbia University, from which he graduated in 1893 as the only graduate student in physics. After this accomplishment Millikan travelled to Germany to study with such professors as Planck and others. With this period on his resume, Millikan was offered a position in the physics department at the University of Chicago, and he took it. After teaching for a period Millikan decided that physics could only be taught properly through the practice of experimentation and getting your hands into it, just as many other things are. Thus, he began writing better textbooks for the University of Chicago; "in fact he spent the morning of his wedding day reading proofs of his textbooks" (http://physics.uwstout.edu/sotw/millikan.html).
During his 12 hours of teaching each day Millikan spent half of his time doing research. In 1909 he constructed his first oil drop apparatus to determine the charge of an electron. Millikan discovered that the charge depended on the frequency of incident light. In the beginning of his experimentation Millikan used a drop of water. Using a water drop gave Millikan only forty-five seconds in which to measure the charge, due to the volatility of the water. Millikan then switched to using a drop of oil because of its low volatility, and as a result he was allowed four and one-half hours to measure the charge.
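The essay does not give the working equations of the oil-drop method, but the standard textbook analysis (Stokes' law for the drag on the drop, with buoyancy and the slip correction neglected) shows how a charge is extracted from the measured fall and rise velocities. The Python sketch below is only an illustration with made-up numbers, not Millikan's data or exact procedure:

    import math

    def oil_drop_charge(v_fall, v_rise, E, eta=1.8e-5, rho_oil=900.0, g=9.81):
        """Estimate the charge on an oil drop from its terminal velocities.

        v_fall: fall speed with the field off (m/s); v_rise: rise speed with the
        field on (m/s); E: electric field (V/m); eta: viscosity of air (Pa s);
        rho_oil: oil density (kg/m^3). Buoyancy and slip correction are ignored.
        """
        # Field off: weight balances Stokes drag, which fixes the drop radius
        a = math.sqrt(9.0 * eta * v_fall / (2.0 * rho_oil * g))
        # Field on: qE = weight + drag, so q = 6*pi*eta*a*(v_fall + v_rise)/E
        return 6.0 * math.pi * eta * a * (v_fall + v_rise) / E

    # Illustrative, made-up readings
    q = oil_drop_charge(v_fall=8.0e-5, v_rise=3.0e-5, E=1.0e5)
    print(q / 1.602e-19)   # about 2, i.e. this toy drop carries roughly two elementary charges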
In 1909 Millikan figured he was within 2% of being accurate. In 1910 Millikan announced an actual numerical value for this fundamental atomic constant, 4.891 x 10^-10 esu. [After Millikan announced this number he was elected Vice Chairman and Director of Research for the National Research Council in 1917.] Millikan realized there were inaccuracies when the photocurrent near the cutoff point was too low to measure. Noticing that the current was highest when the metal was fresh, Millikan fashioned his targets into thick cylinders and rigged up an electro-magnetically controlled knife to shave off the ends of the blocks.
Millikan went on to the Physics Laboratory at California Institute of Technology, where he obtained his Doctorate and stayed on doing research on Cosmic Rays until he retired in 1945. It was while he was at Cal Tech in 1923 that he won the Nobel Prize in Physics. Millikan was the first Cal Tech Doctorate to achieve a Nobel Prize.
f:\12000 essays\sciences (985)\Chemistry\MiniResearch.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Mini- Research
ELECTRON- In 1897, Sir J. J. Thomson, an English physicist, measured the deflection of cathode-ray particles in magnetic and electrical fields. As a result he found the ratio of the charge, e, to the mass, m, of the cathode-ray particles. He found e/m to be identical for these particles irrespective of the metal the electrodes were made of or the kind of gas in the tube. In 1909, R. A. Millikan, an American scientist, measured that charge. All electrons are found to be identical no matter their source or the method of liberating them from matter. From the values of e/m and e, the mass of an electron was calculated to be 0.00055 amu.
PROTON- Eugen Goldstein used a Crookes tube with holes in the cathode and observed that another kind of ray was emitted from the anode and passed through the holes. He discovered this in 1886. In 1889, Wilhelm Wien showed these rays to be positively charged. Their ratio of charge to mass was smaller than that of electrons, and it varied with different gases. This meant that either the charge varied, the mass varied, or both varied. Both vary. The charge is equal to that of an electron, but opposite in sign. The mass was smallest when hydrogen was used as the gas. From the values of e/m for these positive particles, m was calculated to be 1.0073 amu. This particle became known as the proton.
NEUTRON- In 1932, James Chadwick detected the third of the basic parts of an atom. He showed that uncharged particles, or neutrons, are emitted when atoms of certain elements are bombarded with high-velocity helium atoms that have had all their electrons removed, that is, with alpha particles. Neutrons were determined to have a mass of 1.0087 amu. They are unstable outside of an atom and slowly decay to form protons and electrons.
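The arithmetic behind those mass figures is simply the measured charge divided by the measured charge-to-mass ratio. The short Python sketch below uses modern values for e and e/m (which the summaries above do not quote) and reproduces the 0.00055 amu and 1.0073 amu results:

    # Particle mass from the charge e and the charge-to-mass ratio e/m (modern values)
    e          = 1.602176634e-19   # elementary charge, C
    e_over_m_e = 1.75882001e11     # electron charge-to-mass ratio, C/kg
    e_over_m_p = 9.5788332e7       # proton charge-to-mass ratio, C/kg
    amu        = 1.66053907e-27    # atomic mass unit, kg

    m_electron = e / e_over_m_e    # about 9.11e-31 kg
    m_proton   = e / e_over_m_p    # about 1.67e-27 kg

    print(m_electron / amu)        # ~0.00055 amu
    print(m_proton / amu)          # ~1.0073 amu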
f:\12000 essays\sciences (985)\Chemistry\Nitrate Contamination.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nitrate Contamination of Groundwater Poses a Serious Health Threat
Nitrate contamination of the world's underground water supply poses
a potentially serious health hazard to the human inhabitants of earth.
High nitrate levels found in well water have been proven to be the cause of
numerous health conditions across the globe. If we intend to provide for
the future survival of man, and life on planet earth, we must take action
now to assure the quality of one of our most precious resources, our
underground water supply.
Ground water can be defined as the water stored in the open spaces
within underground rocks and unconsolidated material (Monroe and Wicander
420). Ground water is one of the numerous parts that make up the
hydrologic cycle. The primary source of water in underground aquifers is
precipitation that infiltrates the ground and moves through the soil and
pore spaces of rocks (Monroe and Wicander 420). There are also other
sources that add water to the underground aquifer that include: water
infiltrating from lakes and streams, recharge ponds, and wastewater
treatment systems. As groundwater moves through the soil, sediment, and
rocks, many of its impurities are filtered out. Take note, however, that
some, not all, soils and rocks are good filters. Some are better than
others and in some cases, serious pollutants are not removed from the water
before it reaches the underground supply.
Now that we have a good working definition of what groundwater is, and
where it comes from, just how important is it? Groundwater makes up about
22% of the world's supply of fresh water. Right now, groundwater accounts
for 20% of all the water used annually in the United States. On a national
average, a little more than 65% of the groundwater in the United States
each year goes to irrigation, with industrial use second, and third is
domestic use (Monroe and Wicander 420). Some states are more dependent on
groundwater for drinking than others. Nebraska and the corn belt states
rely on underground water for 85% of their drinking needs, and in Florida
90% of all drinking water comes from underground aquifers (Funk and Wagnall
2). People on the average in the United States require more than 50
gallons of water each day for personal and household uses. These include
drinking, washing, preparing meals and removing waste. A bath in a bathtub
uses approximately 25 gallons of water, and a shower uses about 15 gallons
per minute of water flow while the shower runs. Just to sustain human
tissue requires about 2.5 quarts of water per day. Most people drink about
a quart of water per day, getting the rest of the water they need from food
content. Most of the foods we eat are composed mostly of water: for
example, eggs are about 74% water, watermelon 92%, and a piece of lean
meat about 70%. Most of the beverages we drink, like milk, coffee, tea and
soft drinks, are also mostly water. And the single largest
consumer of water in the United States, is agriculture. In dry areas,
farmers must irrigate their lands to grow crops. It is estimated that in
the United States, more than 100 billion gallons of fresh water are used
each day for the irrigation of croplands (Funk and Wagnall 2).
Since agriculture is the leading user of our groundwater, perhaps it
is fitting that it is also the biggest contributor of the contaminating
nitrates that work their way into our water supply each year. Agriculture and
livestock production account for 80% of all nitrogen added to the
environment (Terry et al. 1996). Industrial fertilizers make up 53%,
animal manure 27%, atmosphere 14%, and point source 6% (Puckett, 1994).
Just how do these nitrates get from the field into our water supply? There
are two primary reasons that nitrate contaminates reach our underground
water supply and make it unsafe. The number one reason is farmers' habit
of consistently over-fertilizing and applying too much nitrogen to the
soil. In 1995 America's agricultural producers added 36 billion pounds of
nitrogen into the environment, 23 billion pounds of supplemental industrial
nitrogen, and 13 billion pounds of extra nitrogen in the form of animal
manure. Twenty percent of this nitrogen was not used by the crops for which it
was intended. This accounts for about 7-8 billion pounds of excess nitrogen
remaining in the environment where much of it has eventually entered the
reservoirs, rivers, and groundwater that supply us with our drinking water
(NAS 1995). The number two reason these contaminants reach our groundwater
supply runs parallel with the first. Over-irrigation causes the leaching of
these nitrates past the plants' root zone, beyond where they can be taken in by
crops and used effectively. Not all soils are the same, and all have
different drainage characteristics. Soils with a higher amount of sand
and gravel are going to filter liquids down to the aquifer faster than
soils composed of finer, siltier, better-sorted particles. Today's farmers not
only need to know when it is time to irrigate, they also need to know how
much and for how long. When the two problems are added together,
over-fertilization, and over-irrigation, the potential for harmful nitrate
contamination runs terrifyingly high.
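A quick check of the nitrogen budget quoted above (the figures are the essay's own, taken at face value) confirms the 7-8 billion pound estimate of excess nitrogen:

    # 1995 nitrogen additions, in billions of pounds, as quoted in the text
    industrial_n = 23.0
    manure_n     = 13.0
    total_added  = industrial_n + manure_n   # 36 billion pounds
    unused_share = 0.20                      # roughly 20% not taken up by crops
    excess_n     = total_added * unused_share
    print(total_added, excess_n)             # 36.0 and 7.2 -> the "7-8 billion pounds" figure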
Just how harmful are nitrates in our drinking water? Nitrate levels
that exceed the federal standard of 10 parts per million can cause a
condition known as Methemoglobinemia, or Blue Baby Syndrome in infants.
Symptoms of Methemoglobinemia include anoxic appearance, shortness of
breath, nausea, vomiting, diarrhea, lethargy, and in more extreme cases,
loss of consciousness and even death. Approximately seven to ten percent
of Blue Baby Syndrome cases result in death of the infant (HAS 1977,
Johnson et al. 1987). When nitrate is ingested it is converted into
another chemical form, nitrite. Nitrite then reacts with hemoglobin, the
protein responsible for transporting oxygen in the body, converting it
to methemoglobin, a form that is incapable of carrying oxygen. As a
result, the victim suffers from oxygen deprivation, or more commonly
stated, the individual slowly suffocates (HAS 1977, Johnson et al. 1987).
Although Methemoglobinemia is the most immediate life-threatening effect
of nitrate exposure, there are a number of equally serious longer-term,
chronic impacts. In numerous studies, exposure to high levels of nitrate
in drinking water has been linked to a variety of effects ranging from
hypertrophy (enlargement of the thyroid) to 15 types of cancer, two kinds
of birth defects, and even hypertension (Mirvish 1991). Since 1976 there
have been at least 8 different epidemiology studies conducted in 11
different countries that show a definite relationship between increasing
rates of stomach cancer and increasing nitrate intake (Hartmann, 1983;
Mirvish 1983). The facts speak for themselves, increasing levels of
nitrates in our groundwater are slowly poisoning our society.
We have only discussed contamination of our groundwater supply by
nitrates through the misuse of resources involved in agriculture. Be aware
that there are hundreds of other substances and practices that add to the
further contamination of our groundwater every day. Time does not allow
for an in-depth analysis of all aquifer contaminates in this paper,
however, I would like to mention a few that are at the top of the list just
briefly. Storm water runoff: streets and parking lots contain many
pollutants, including oils, greases, heavy metals and coliform bacteria, that can
enter groundwater directly through sinkholes and drainage wells.
Pesticides and herbicides can end up in the water supply much the same way
as do nitrates. Septic tanks that are improperly or poorly maintained, can
contaminate groundwater. Underground storage tanks, hazardous waste sites,
landfills, abandoned wells, accidents and illegal dumping all threaten the
quality of our drinking water. We must be aware of the potential hazards
and take measures to ensure the safety of our drinking water supply for
generations to come.
What can we do to prevent unnecessary contamination of our
groundwater? Farmers will and must continue to use nitrogen fertilizer.
They do not, however, need to overuse it. By following a few simple
guidelines, such as accounting for all sources of nitrogen in the system,
refining estimates of crop nitrogen requirements, synchronizing application
of nitrogen with crop needs, using nitrogen soil tests, and practicing good
water management, farmers can not only help keep our aquifers safe from
contamination, but can probably enjoy the same yields as before and spend
less money on fertilizer, thus increasing their net profits (Halberg et
al. 1991, Iowa State University 1993). How about the rest of us? What can
we do to help keep drinking water safe? There are many hazardous substances
around the house that frequently need disposal. Please don't dump them on
the ground or pour them down the drain, and always use fertilizers and
chemicals in moderation. Take proper care and maintenance of your septic
system at all times. Finally, when in doubt, ask. Many areas have local
Amnesty Days. For information or to request an Amnesty Day, call your
local public works department.
Nitrate contamination poses a serious health threat to all of us.
Each of us uses a little more than 50 gallons of fresh water every day.
When all our fresh water is contaminated beyond use, our world will not be
a pleasant environment to live in. We must all act now to maintain a fresh
water system that will be capable of sustaining us, and many generations
into the future.
f:\12000 essays\sciences (985)\Chemistry\Nobelium.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
NOBELIUM
Nobelium has the symbol No and is a radioactive metallic element with an atomic number of 102. Nobelium is in the actinide series being labeled as one of the transuranium elements. The element is named after Alfred Bernhard Nobel, the Swedish inventor and philanthropist.
Nobelium is found only when produced artificially in a laboratory. Discovery of the element was first claimed in 1957 by scientific groups in the United States, Great Britain, and Sweden, but the first confirmed discovery of a nobelium isotope was by a team of scientists at the Lawrence Radiation Laboratory in Berkeley, California, in 1958. The isotope was created by bombarding curium isotopes with carbon ions.
Chemically, the properties of nobelium are unknown, but because it is an actinide, its properties should resemble those of the rare earth elements. Isotopes with mass numbers from 250 to 259 and 262 are known. The most stable isotope, nobelium-259, has a half-life of 58 minutes. The most common isotope, nobelium-255, has a half-life of a few minutes.
f:\12000 essays\sciences (985)\Chemistry\Noble Gases.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The noble gases are the elements at the far right of the periodic table. On the earth they are scarce, so we don't see much of them. They do not react readily with anything. In fact, until around the 1950s no one had found anything that would react with any of the gases. But then someone found that fluorine, one of the most reactive elements, could form compounds with xenon. Later it was found that fluorine could react with most of the other noble gases.
Helium is one of the scarcer noble gases on earth, but it makes up about 25% of the universe. Helium's presence was discovered by using spectral analysis to detect helium in the sun's spectrum. Helium is not found in large amounts on the earth because gravity cannot keep helium from escaping to space. Helium is found mostly in stars, where it is produced by the nuclear fusion of hydrogen. Most helium comes from natural gas wells in North America. It is used in balloons, and divers use it with oxygen to breathe more easily and to avoid getting sick or dizzy.
Neon is an element that is lighter than air. The element is found most commonly in the atmosphere of the earth. It is also found in the earth's crust. It was discovered in 1898 by Sir William Ramsay and Morris W. Travers. Its uses include electric signs, lamps, and lasers.
Argon is the most abundant and most used noble gas on earth. It was discovered by Lord Rayleigh and Sir William Ramsay in 1894. Argon makes up about 1% of the earth's atmosphere. It is found naturally in rock and in the air. It is used in electric light bulbs and fluorescent tubes. It is also used a great deal in industry.
Krypton, a very rare noble gas, was discovered by Sir William Ramsay and Morris W. Travers. Traces of it are found in natural gas, hot springs, and volcanoes, but most of it is in the atmosphere. It is used in incandescent lights and in high-speed photography.
Xenon was the first noble gas found to form compounds with another element. It is very heavy and extremely rare. It was discovered by Sir William Ramsay and Morris W. Travers. It is found in mineral springs and in the Martian atmosphere. It is used in stroboscopes and in many applications related to photography.
Radon is a very heavy radioactive gas. It was discovered by Friedrich E. Dorn in 1900. It is found in spring water, soil, and some rocks. It has been used as a source of radiation in medical procedures. Even though it has its uses, it is seen as a major health risk because it can seep into poorly ventilated houses and contribute to cancer of the lungs.
f:\12000 essays\sciences (985)\Chemistry\Nuclear Fusion.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For a fusion reaction to take place, the nuclei, which are positively
charged, must have enough kinetic energy to overcome their electrostatic
force of repulsion. This can occur either when one nucleus is
accelerated to high energies by an accelerating device, or when the
energies of both nuclei are raised by the application of very high
temperature. The latter method, referred to as thermonuclear
fusion, is the source of a lot of really cool energy.
Enough energy is produced in thermonuclear fusion to suck the paint off 1
city block of houses and give all of the residents permanent orange
Afros. The sun is an example of thermonuclear fusion in nature.
If I were an atom, I could only wish to be in a thermonuclear reaction.
Thermonuclear reactions occur when a proton is accelerated and collides
with another proton; the two protons fuse, forming a deuterium
nucleus and releasing a positron, a neutrino, and lots of energy. I have no idea
what a deuterium nucleus is, but it must be 10 times cooler than just a
regular nucleus. Such a reaction is not self-sustaining because the
released energy is not readily imparted to other nuclei. Thermonuclear
fusion of deuterium and tritium will produce a helium nucleus and an
energetic neutron that can help sustain further fusion. This is the
basic principle of the hydrogen bomb, which employs a brief, uncontrolled
thermonuclear fusion reaction. This was also how the car in the Back to
the Future movie worked. It had a much more sophisticated system of
producing a fusion reaction from things like old coffee grounds,
bananas, and old beer cans. Thermonuclear reactions depend on high
energies, and the possibility of a low-temperature nuclear fusion has
generally been discounted. Little does the scientific community know
about my experiments. I have produced cold fusion in my basement with
things like: stale bread, milk, peanut butter and flat Pepsi. I have
been able to produce a ten-megaton reaction with as little as a
saltine cracker and some grass clippings. But enough about my
discoveries. Early in 1989 two electrochemists startled the scientific
world by claiming to have achieved room-temperature fusion in a simple
laboratory. They had little proof to back up their discovery, and were
not credited with their so-called accomplishment. The two scientists
were Stanley Pons of the University of Utah and Martin Fleischmann of
the University of Southampton in England. They described their
experiment as involving an electrochemical cell in which palladium and
platinum electrodes were immersed in heavy water. These two
losers said that the cell produced more heat than could be accounted
for. Yeah right!! The week before I was talking to both men on the
phone and I told them about all of the cool things you could do with
platinum. I said "Now Martin, what you need to do is get your hands on
some platinum and some heavy Mexican drinking water. The amount of
chemicals in the Mexican drinking water is sure to cause a violent
reaction with the platinum electrodes and produce lots of energy. I have
been doing this sort of thing in my basement for years." When I told
him that NASA could power its shuttles with this sort of a
reaction, he nearly wet his pants. Now, as usual, I received no credit
for MY discovery, but that is okay... I have grown used to it. I taught
Einstein, Newton, and Ron Popeil (inventor of things like the
pasta-matic, hair-in-a-can, and the pocket fisherman) everything they
know. Besides, the two shmucks didn't even follow my instructions for
the experiment. However, until I reveal my secrets about cold fusion,
it will remain only a proposed theory. Nuclear fusion is also what
powers the sun and the rest of the stars in the universe. Stars carry out
fusion in a thermonuclear manner. Thermonuclear is a really cool word
which I am going to use several more times just because it is so cool.
In a thermonuclear reaction matter is forced to exist only in a
plasma state, consisting of electrons, positive ions, and very few
neutral atoms. Fusion reactions that occur within a plasma serve to
heat it further, because a portion of the reaction-product energy is
transferred to the bulk of the plasma through collisions. In the
deuterium-tritium reaction the positively charged helium nucleus carries
3.5 MeV. The neutron escapes the plasma with little interaction and,
in a reactor, could deposit its 14.1 MeV in a surrounding lithium
blanket (the sketch after this paragraph adds these numbers up). I have
no idea what that last sentence meant, but I am going to memorize it,
because I will sound super smart if I tell someone about the neutron's
activity in a plasma thermonuclear reaction. In the blanket the neutrons
would breed tritium and also heat an exchange medium which
could be used to produce steam to turn generator turbines. However, the
plasma also loses thermal energy through a variety of processes:
conduction, convection, and bremsstrahlung, which is electromagnetic
radiation about 1000 times stronger than the microwave in your kitchen.
Bremsstrahlung is the electromagnetic energy which is produced by the
deceleration of a charged particle. Energy also escapes from the reaction
through line radiation from electrons undergoing level transitions in
heavier impurities, and through losses of hot nuclei that capture an
electron and escape the confining field. Ignition occurs when the
energy deposited within the plasma by fusion reactions equals or exceeds
the energy being lost. In order to achieve ignition, plasma must be
combined and heated. Obviously, a plasma at millions of degrees is not
comparable with an ordinary confining wall, but the effect of this
incompatibility is not the destruction of the wall as might be expected.
I have found that if one uses Corning Ware in a microwave set on high,
a plasma can be made quite safely. It is important to note that
Tupperware is not well suited for thermonuclear reactions, and is best
reserved for storing weapons-grade plutonium. I find that the airtight
lids work simply splendidly in keeping all of that nasty glowing
radioactive dust to a minimum in my room. Although the temperature of
a thermonuclear plasma is very high and the power flowing through it may
be quite large, the stored energy is relatively small and would quickly
be radiated away by impurities if the plasma touched a wall and began to
vaporize it. A thermonuclear plasma is thus self-limiting, because any
significant contact with the vessel housing causes its extinction
within a few thousandths of a second. Therefore, the plasma must be
carefully confined and handled while the reaction is occurring (for further
information on plasma, refer to the second essay in my series entitled "Why
Plasma Is So Cool").
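To put the deuterium-tritium numbers from this essay (the 3.5 MeV helium nucleus and the 14.1 MeV neutron) in one place, here is a small Python sketch; it is only an illustration of the arithmetic, and the joule conversion factor is a standard constant rather than something taken from the essay.

# Rough energy budget of one deuterium-tritium (D-T) fusion reaction,
# using the 3.5 MeV (helium nucleus) and 14.1 MeV (neutron) figures above.

MEV_TO_JOULE = 1.602e-13   # 1 MeV expressed in joules (standard constant)
ALPHA_MEV = 3.5            # carried by the helium nucleus; heats the plasma
NEUTRON_MEV = 14.1         # escapes to the surrounding lithium blanket

total_mev = ALPHA_MEV + NEUTRON_MEV       # 17.6 MeV per reaction
alpha_share = ALPHA_MEV / total_mev       # roughly 20% stays in the plasma
neutron_share = NEUTRON_MEV / total_mev   # roughly 80% goes to the blanket

print(f"energy per D-T reaction: {total_mev} MeV = {total_mev * MEV_TO_JOULE:.2e} J")
print(f"plasma heating share: {alpha_share:.0%}, blanket share: {neutron_share:.0%}")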
Most of the research dealing with fusion since 1950 has used magnetic
fields to contain the charged particles that constitute a plasma. The
density required in magnetic-confinement fusion is much lower than
atmospheric density, so the plasma vessel is evacuated and then filled
with the hydrogen-isotope fuel at 0.0000000. What is the deal with all
of those zeros? I mean, it means the same as 0... It must be one of those
wacky science thingys. Sort of like why inflammable and flammable mean
the same thing. Who knows. Anyway, magnetic-field configurations fall
into two types: open and closed. Wow, that was really obvious. In an
open configuration, the charged particles, which are spiraling along
magnetic field lines maintained by a solenoid, are reflected at each end
of a cell by stronger magnetic fields. I have found in my research that
if one uses a 9-volt battery (preferably from an old smoke detector) the
reaction takes place much more efficiently. In this simplest type of
mirror machine, many particles that have most of their velocity parallel
to the solenoidal magnetic field are not reflected and can escape. This
is a real problem for me whenever I try to perform a thermonuclear
fusion reaction. I have yet to find a solution to the problem, but for
now stale Trident chewing gum works as an acceptable improvisation for the
problem. Present-day mirror machines retard this loss by using
additional cells to set up electrostatic potentials that help confine
the hot ions within the central solenoidal field. In a closed configuration,
the magnetic-field lines along which charged particles move are
continuous within the plasma. This closure has most commonly taken the
form of a torus, or doughnut shape, and the most common example is the
tokamak. In a tokamak the primary confining field is toroidal and is
produced by coils surrounding the vacuum vessel. Other coils cause
current to flow through the plasma by induction. This toroidally
flowing current produces a poloidal magnetic field that wraps itself
around the plasma. Is it just me, or are there a lot of really useless
big words? I mean, toroidally, what is this? My only thought is that it
is one of those many wacky science terms that people who you see on the
Discovery Channel would use. The poloidal magnetic field, at right
angles to the stronger toroidal field, acts together with it to yield
magnetic field lines that spiral around the torus. This spiraling
ensures that a particle spends equal amounts of time above and below
the toroidal midplane, thus canceling the effects of a vertical drift
that occurs because the magnetic field is stronger on the inside of the
torus than on the outside.
Another cool thing about thermonuclear plasma is that a certain type of
plasma, called tokamak plasma, can be heated to temperatures of 10-15
million K by the current flowing in the plasma. Imagine how quickly one
could broil chicken. In less than half a second, you could have a perfectly
golden-brown and tender chicken ready for dinner. At higher
temperatures the plasma resistance becomes too low for this method to be
effective, and heating is accomplished by injecting beams of very
energetic neutral particles into the plasma. These ionize, become
trapped, and transfer their energy to the bulk plasma through
collisions. Alternatively, radio frequency waves are launched into the
plasma at frequencies that resonate with various periodic particle
motions. The waves give energy to these resonant particles, which then
transfer it to the rest of the plasma through collisions. In some of my
most recent experiments I have been able to use radio frequency waves to
push electrons around the tokamak to maintain the plasma current. Such
noninductive current drive allows the tokamak pulse to outlast the time
limit imposed by the fact that, in a transformer-driven tokamak
reaction thingy, the plasma current lasts only as long as the current in
the primary coils is changing; when the coils reach their current limits,
confinement is lost, and the plasma terminates until the transformer can
be reset. Although the plasma in an inductively driven tokamak is pulsed,
the electricity produced would not be, because the thermal inertia of the
neutron-capturing blanket would sustain steam generation between
pulses. By allowing longer-pulse or steady-state plasma operation,
however, radio frequency current drive could lessen the thermal stresses
in the fusion reactor. However, so far cooking with plasma has not
been too practical for me. Another approach to fusion, pursued since
about 1974, is termed inertial confinement. During my many patrols
during the Viet..-NAM war, I further developed my theories and opinions
regarding inertial confinement fusion. When I arrived home with a
severely hyper-extended earlobe, I was in great pain and suffering, but
I still managed to explain my findings to the scientific community.
Essentially, my theory of inertial confinement fusion works similarly to
how the atomic bomb works. A small pellet of frozen deuterium and
tritium is compressed to very high temperatures and densities, which is
accomplished by bombarding the pellet from all sides simultaneously with
really intense laser beams. I nearly put my eye out with the thing. It is
certainly not a toy. Anyway, back to fusion. After you have nuked the
pellet thing with the super laser thingy, the pellet surface vaporizes
and, by mechanical reaction, imparts inwardly directed momentum to the
remaining pellet core. The inertia of the inwardly driven pellet material
must be sufficient to hold it together for the roughly 10 to the power of
-9 seconds required to get significant energy release. In
1988, after my defeat in the presidential election, I helped the
government perform underground tests in the Nevada desert. I had shown
the government how to do this type of experiment in 1986, but it took
them two years before they could get it right. I think that their chief
nuclear engineer's name was Forrest or something... Man, what an idiot. He
just could not get it right. Once again, people took credit for my
discovery. The minimum confinement condition necessary to achieve
energy gain in a deuterium-tritium plasma is that the product of the
density in ions per cubic centimeter and the energy confinement time in
seconds must exceed 6 x 10 to the 13th power. This condition was attained
for the first time in a hydrogen plasma at the Massachusetts Institute of
Technology in 1983. The temperature required to ignite a fusion reactor
is in the range of 100-250 million K, several times the temperature of
the center of the sun. What? How can you have a reaction several times
the sun's central temperature in an enclosed plasma environment? Is this
some kind of wacky scientist joke or something? Anyway, the science geeks
at M.I.T. supposedly did produce this kind of fusion.
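As a minimal sketch of the confinement condition quoted above (ion density times energy confinement time exceeding about 6 x 10 to the 13th, in ions per cubic centimeter times seconds), here is a short Python check; the sample density and confinement time are made-up illustrative numbers, not measurements from any real machine.

# Check of the density-confinement product (a Lawson-type condition)
# quoted above for a deuterium-tritium plasma.

THRESHOLD = 6e13   # required product, in (ions per cm^3) * seconds

def meets_condition(density_per_cm3, confinement_time_s):
    """True if density * confinement time reaches the quoted threshold."""
    return density_per_cm3 * confinement_time_s >= THRESHOLD

# Made-up example values, purely for illustration:
density = 2e14          # ions per cubic centimeter
confinement_time = 0.5  # seconds

product = density * confinement_time
print(f"n * tau = {product:.1e}; condition met: {meets_condition(density, confinement_time)}")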
The goal of fusion is, in effect, to produce and hold a small star. It
is daunting and tedious research which is considered to be among the most
advanced in the world. Creating a small dwarf star in a man-made
environment is thought to be one of the greatest scientific challenges. Even
though last weekend my little brother and I did create several dwarf
stars, we were forced to put them out because the neighbors kept
complaining about the light. The cop was a real jerk. I tried
explaining to him what I was doing, but he kept asking me to do stupid
things like stand on one leg and recite the alphabet backwards, and touch
my nose with my finger. Apparently the cop thought that I was getting
smart with him when I started to explain to him about the beauty of
fusion energy. Oh well, at least he didn't arrest me... again...
f:\12000 essays\sciences (985)\Chemistry\Nuclear Legacy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"There is 10 thousand tons of nuclear waste on Earth." Many scientist are in
search for new and efficient ways to dispose of these lethal by-products which can
destroy life itself. Radioactive products can be either beneficial or devastating. It all
depends on how we use them. In the field of medicine, some benefit from radiation
include, radiation therapy for cancer patients. Not all uses of radiation prove to be
beneficial. Many use the power of the atom for destructive purposes, introducing an
age of nuclear warfare. It doesn't matter if we use radiation for good or bad purposes,
they all contribute to the growing rate of "unwanted nuclear waste." The issue now is,
how do we dispose of these nuclear wastes?
Scientists have thought of several methods to dispose of the nuclear by-products.
They tried to chemically treat the waste and reuse it, but "that would cost a fortune."
They thought of launching the waste into outer space, but that too would cost a fortune.
They tried to dump barrels filled with nuclear waste into the ocean, but the barrels started
leaking. As you can see, there is a great need for a nuclear waste disposal site. These
sites may sound frightening, but they may be the only way for us to dispose of the
devastation we have created. In 1986, the decision for a nuclear waste
repository proved to be "the most frightening decision of the decade." Of the candidate sites,
three were chosen as the "most suitable" for the disposal of nuclear by-products.
These three sites were Hanford, Washington; Yucca Mountain, Nevada; and
Deaf Smith County, Texas.
Hanford, Washington is a sparsely populated U.S. site owned by the
Department of Energy, and a sparsely populated area is an ideal site for radioactive disposal.
Although Hanford is sparsely populated, geologists fear the possibility of
nuclear seepage into the Columbia River. The Columbia River is an important factor in
the U.S. production of wheat. "This makes it the worst of the sites," says the geologist. If
the Columbia River is contaminated with nuclear waste, it will lead to the contamination
of the land surrounding that large body of water, making the land unusable. Radioactive
contamination of the Columbia River would affect both America's economy and its
agricultural production.
Yucca Mountain, Nevada is a heavily guarded desert region of America. It is far
away from any lakes, rivers, or oceans, and its repository would be located above ground water
levels. These geological conditions make Yucca Mountain an almost perfect place for
nuclear waste disposal. The main concern is the possibility of earthquakes
occurring within this area. The geologist says that "if an
earthquake were to occur, it would only shake the nuclear materials, not enough to
make them leak." Yucca Mountain is, unfortunately, located 70 miles from Las Vegas,
Nevada, a widely known tourist attraction, which some consider makes Yucca Mountain an unsound
place for nuclear disposal.
Deaf Smith County, Texas is known as the "most productive county in Texas." The farmers
of Deaf Smith County rely on the Ogallala aquifer as a source of water for agricultural growth.
If a radioactive disposal site is created in this county, a large pipe extending through the
Ogallala aquifer will have to be built, threatening the rich and fertile farmland. The
construction of a disposal site will also affect the genetic purity of the seeds which
farmers waited so long to obtain. So much value would be lost if a disposal site were
created in Deaf Smith County that it would not be worth completing.
If I were a member of the Department of Energy and had to choose one of these
sites, I would have to choose Yucca Mountain, Nevada for its ideal geological
conditions. This area is largely uninhabited and does not pose a danger to the ground
water supply. If earthquakes occur, not much would happen, as the geologist stated.
Although Yucca Mountain is 70 miles from Las Vegas, I would try to have the city
evacuated and moved to a more safeguarded location, thus making Yucca Mountain the
"most reliable" nuclear waste disposal site of the three.
If I were a member of the Department of Energy and could not in good
conscience choose one of these three sites, I would propose a plan to launch nuclear
waste-filled lead capsules into an area of outer space with high levels of natural
radiation. Although it may cost a fortune, any price is worth paying to save the Earth. I believe
that by launching these capsules into space, our Earth will be left unaffected and free
from the possibility of leakage. (By creating disposal sites, the Earth remains at risk
of a possible radioactive leak.) If we launch these capsules to areas in space with high
natural radiation, a leak in a capsule will not be as disastrous as a leak occurring on
Earth. The radiation emitted from the capsule would simply blend into the
natural background radiation there and would not have an effect on our
planet.
If I were a member of one of these communities, I would take the Department
of Energy to court, because they have no right to take away any of the rights we are
entitled to as citizens of America. Second, I would petition the government
to halt the construction of these disposal sites, as they endanger the
lives of many Americans. Lastly, I would ask the Department of Energy to find another
solution to this "Nuclear Legacy."
I have learned that we must always take responsibility for our actions. In this
case, those who decided to create radioactive products lacked the responsibility to
dispose of them. The consequences resulting from our lack of responsibility are utterly
devastating. It is frightening how our new creations and discoveries can be so
destructive despite their benefits.
I was indeed inspired by this video. I will do all I can to help reduce
radioactive pollution at its source. Through the video, I saw how dangerous nuclear
waste can be to the environment, and how it affects our entire planet, not simply us as
individuals.
The debris left from the bombing of Hiroshima, Japan had a great impact on
me. I was heartbroken by the sight of the many people who were killed and those who
were left to die. It is thoroughly frustrating to see how one discovery, the discovery of
the atom, has changed the way we view the world today.
f:\12000 essays\sciences (985)\Chemistry\Nuclear Power Con .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nuclear Power -- Con.
Since the days of Franklin and his kite-flying experiments, electricity has been a topic of interest for many people and nations. Nuclear power has been a great advance in the field of electrical production in the last fifty years; with its clean, efficient and cheap production, it has gained a large share of the world's power supply. However, with the wealth of safer alternative sources of electricity, the dangers nuclear reactors pose to humans (e.g. cancer) and past disasters such as Chernobyl, there are well-founded reasons not to pursue this energy source. New sources such as fusion power, new studies concerning the health effects of nuclear by-products and scares from nuclear accidents like the one at Chernobyl are slowly rendering nuclear fission an obsolete energy source. This essay will argue that nuclear power is a dangerous technology and that, with the many other sources available and the dangers involved, the disadvantages of nuclear power far outweigh the benefits.
Alternative sources of energy are making their way into the highly competitive field of electricity production. With the wealth of sources such as solar, wind, hydro or geothermal, the dangers involved with fission could be avoided by adopting these newer, safer methods. A main source of energy that could lead the way for the near future is solar energy. It is clean, efficient, and already a large part of American and Canadian electricity production. "Solar energy already supplies about 6% of the nation's [U.S.A] energy ... the industry is still in an embryonic stage, and opportunity exists for increasing this contribution by ten times from current levels." (Maidique, 92) It is clear that solar power will become a large part of electricity production around the world. With future expansion and newer solar cells, production could be increased to about 60-70% of the U.S.A.'s needs.
Fusion will most surely be the newest type of energy leading us into the 21st century, producing energy that is cheaper, safer and easier to generate than any existing source. "Fusion fuel releases a million times more energy than does burning a comparable weight of coal or oil; one teaspoon of deuterium, obtained cheaply from H2O, contains the equivalent of 300 gallons of gasoline; a mere 1000 pounds of deuterium could fuel a 1000-megawatt power station for a year." (Dean, 84) Such spectacular figures sound unbelievable. If a thousand pounds of a substance could supply a 1000-megawatt power station for a year, such figures would cause plummeting electricity prices and make fission plants far too expensive.
However, price and efficiency are useless if safety is abandoned. All three concerns are addressed by fusion, which is why it is such a miracle. In fact, a meltdown in a fusion reactor is impossible, which cannot be said for fission. "Compared with fission reactors the absence of such fission products as radioactive iodine and cesium from the fusion cycle reduces the potential hazard by more than a thousand-fold." (Dean, 84) This is because in a fission reactor the fuel is in a solid form which must be cooled by water, and if water is unavailable then a meltdown may occur. In a fusion reactor the fuel is a hot gas rather than a solid. Because of this, even with a complete loss of cooling, the gas would cool as it hits the cold walls of the reactor chamber. With future resources, some proven like solar and others experimental such as fusion, there is a wealth of possible energy sources.
However, new sources of energy will not reduce the risk of horrific fission disasters such as those at Chernobyl or Three Mile Island. Past disasters such as Three Mile Island are well-founded reasons to reconsider nuclear technology. At the Chernobyl power station at 1:00 am on April 25, 1986, reactor number 4 was running smoothly. The engineers performed a standard test on the turbo generators (engines that turn to produce electricity). At 1:20 am the operator turned off the emergency cooling system. "The sharp temperature increase in the reactor core, the rupture of the cooling channels (releasing steam on to the red-hot graphite moderator, producing water gas) and the chemical reaction between overheated zirconium canning and water -- (releasing hydrogen) ignited by the fireworks of flying hot and glowing fragments produced by the steam explosion -- resulted in the explosion." (Trainer, 116) As the two huge steam explosions tore the core apart, the force of the blast lifted the thousand-ton cover lid above the core. Lethal radiation was released into the air. The explosion gave off more radiation than the two atomic bombs dropped on Hiroshima and Nagasaki combined. The accident is an awful reminder that an explosion may not be only a freak occurrence: it may be caused by other errors such as human blunder, low water supply or a computer glitch, and any malfunction may cause horrific problems.
The Ukraine poisoned western Russia and almost all of Europe with the nuclear explosion at Chernobyl. Within the first six days, radiation swept across Europe; radiation levels were elevated as far west as Paris. Many people developed cancer or radiation poisoning. "It was predicted that over 21,000 people of the region's infected would die within 50 years from direct cause of the explosion, however total death will range to around 100,000 within fifty years." (Megaw, 87) Not only is the original explosion deadly, but the ensuing radiation can leave an area useless, killing or poisoning many plants, animals and humans. With a half-life (the period of time during which half of the nuclei in a quantity of radioactive material undergo decay) of up to millions of years, the land will suffer for a period of equal time, as will humans.
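The half-life defined in parentheses above can be made concrete with a short Python sketch; the half-life value below is a placeholder on the order of plutonium-239's, chosen only to show the arithmetic, not a figure taken from the essay's sources.

# Radioactive decay by half-lives: after each half-life, half of the
# remaining radioactive nuclei have decayed.

def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of the original radioactive material still undecayed."""
    return 0.5 ** (elapsed_years / half_life_years)

half_life = 24000   # placeholder value in years, roughly plutonium-239's
for elapsed in (0, 24000, 48000, 96000):
    frac = remaining_fraction(elapsed, half_life)
    print(f"after {elapsed:>6} years, {frac:.4f} of the material remains")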
The accident at Three Mile Island was a shock to people who said an accident could never happen in the U.S.A. Three Mile Island is located on an island near Harrisburg, Pennsylvania on the Susquehanna River. At around 4 am on March 28, 1979, an accident involving reactor number 2 occurred at Three Mile Island. Although public health was not seriously threatened, an inquiry revealed such operator incompetence that it affected the whole of American policy on nuclear power. A water pressurizer, designed to keep water at 325 deg C, burst, causing insufficient cooling. Because of this, some radioactive material from the station had to be vented to relieve steam pressure. This was done successfully and the hazard was controlled. "Ground level gamma dose rated at the site boundary reached levels of 25 millirem per hour during the morning, as compared to the regular 0.2 millirem per hour measured regularly." (Zipko, 90) Since the accident the American Nuclear Council (ANC), as well as many other countries, have installed a second emergency pressurizer; it is feared, however, that in an extreme pressurizer burst the emergency system would not activate. The workers at Three Mile Island were lucky. Although a full core meltdown, as at Chernobyl, was avoided, a more serious water leak could have caused instability and may have called for much more venting than occurred. More venting may have caused lasting pollution of the surrounding area. The accident that occurred at Three Mile Island proves that a nuclear accident does not only occur in the "safety impaired" Russian reactors.
Nuclear power is on the whole a clean and efficient source of power; however, the aftereffects of already used materials are much more deadly than the process of fission itself. There are many issues raised by nuclear fission reactors, and probably the most important is their detrimental effect on humans, plants and animals. It has been known for many years that nuclear waste causes many sicknesses including cancer. This effect, however, does not come from the process of fission itself. Rather it is a slow release of poisonous radioactive waste into the environment over periods of up to 7 billion years. In small quantities, the body can absorb radioactive waste, but with the ever-growing share of world power production being fission based, it may be responsible for a dramatic rise in cancer since the dawn of the nuclear age. Some "scientists believe that nuclear industry by the year 2000 could increase the level of radiation by up to 3 percent, which would add about 7000 fatal cancers per year to a world population of 4 billion." (Kaku, 82) Even 7000 deaths out of 4 billion people seems a very high price to pay for what can currently be done safely by other means. Even coal, a very dirty fuel, does not contribute to cancer as much as radioactive waste produced by fission. In fact, a child living in close proximity to a fission reactor is fifty times as likely to develop a cancer such as leukemia as one living close to a coal-burning power plant. These numbers are striking; coal power produces many more immediate deaths due to illnesses such as chronic bronchitis or emphysema. These sicknesses do occur and should be looked at (in another essay), however they cannot be compared to a serious disease such as cancer, which may be passed genetically and for which there is no immediate cure. "She [a mother] probably is very unhappy to learn that her child living near a fission power plant is at a 0.5 percent chance of dying of cancer over a periodic exposure." (Taylor, 155) A 0.5 percent chance is equal to a 1 in 200 chance of developing the disease. The thought of a nuclear power reactor located near a large city such as Harrisburg, Pennsylvania (site of Three Mile Island), where the exposure reaches thousands of people, is unthinkable, yet it does occur due to company profit needs.
Radiation is not only spread through the air we breathe; it is also passed on through the plants we eat and the water we drink. In areas such as Chernobyl that have had even the mildest nuclear problems (obviously Chernobyl was not a mild problem), we see a region around Chernobyl, reaching as far as Kiev (400 km), with plants that are permanently inedible due to enormous radiation levels. Unfortunately the radiation is not just in the plants; it is in the soil, a layer of soil that will spread harmful radiation for the next 7 billion years. These plants should not be eaten, yet many poor families have no choice and may not be aware they are poisoning themselves. Neither the animals nor the people eating them know that they are being poisoned. It is more surprising that areas in the U.S.A. have been measured with abnormally high radiation; it must be mentioned that these areas are located in relatively close proximity to fission power plants. In addition, "wind and rain erosion wash nuclear waste into streams and rivers, poisoning the waters, killing the fish and eventually threatening humans throughout the water they drink." (Kronenwetter, 48) Nuclear waste passes directly from the power plant to the soil (which poisons plants), and run-off from the land goes into the water, affecting both the water we drink and the food we eat, not to mention the air that we breathe. These are scary facts that must no longer be overlooked in the name of profit. Nuclear power is a major pollutant and must be recognized as one.
In the 1990s we have many alternatives to nuclear power. Solar, wind, hydro and geothermal are all great sources that should be used to limit the use of nuclear power. Although nuclear power on the whole is clean and efficient, it has many unnecessary drawbacks, such as the waste it produces, which will continue to poison humans, plants and animals. With all the choices available to people, why not choose a clean or renewable source of energy, one without the dangers of radioactive waste and possible core meltdowns? New sources can already replace fission power today; it is unsafe, unwarranted and pointless to pursue something that can literally blow up in our face and kill us. In the future, the use of solar or wind power, and maybe someday fusion power, will cause nuclear fission power to become obsolete.
f:\12000 essays\sciences (985)\Chemistry\Our Solar System.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Solar cells today are mostly made of silicon, one of the most common elements on Earth.
The crystalline silicon solar cell was one of the first types to be developed and it is still the most
common type in use today. They do not pollute the atmosphere and they leave behind no harmful
waste products. Photovoltaic cells work effectively even in cloudy weather and unlike solar
heaters, are more efficient at low temperatures. They do their job silently and there are no
moving parts to wear out. It is no wonder that one marvels at how such a device
functions.
To understand how a solar cell works, it is necessary to go back to some basic atomic
concepts. In the simplest model of the atom, electrons orbit a central nucleus, composed of
protons and neutrons. Each electron carries one negative charge and each proton one positive
charge. Neutrons carry no charge. Every atom has the same number of electrons as there are
protons, so, on the whole, it is electrically neutral. The electrons have discrete kinetic energy
levels, which increase with the orbital radius. When atoms bond together to form a solid, the
electron energy levels merge into bands. In electrical conductors, these bands are continuous but
in insulators and semiconductors there is an "energy gap", in which no electron orbits can exist,
between the inner valence band and outer conduction band [Book 1]. Valence electrons help to
bind together the atoms in a solid by orbiting two adjacent nuclei, while conduction electrons,
being less closely bound to the nuclei, are free to move in response to an applied voltage or
electric field. The fewer conduction electrons there are, the higher the electrical resistivity of
the material.
In semiconductors, the materials from which solar cells are made, the energy gap Eg is
fairly small. Because of this, electrons in the valence band can easily be made to jump to the
conduction band by the injection of energy, either in the form of heat or light [Book 4]. This
explains why the high resistivity of semiconductors decreases as the temperature is raised or the
material illuminated. The excitation of valence electrons to the conduction band is best
accomplished when the semiconductor is in the crystalline state, i.e. when the atoms are
arranged in a precise geometrical formation or "lattice".
At room temperature and low illumination, pure or so-called "intrinsic" semiconductors
have a high resistivity. But the resistivity can be greatly reduced by "doping", i.e. introducing
a very small amount of impurity, of the order of one in a million atoms. There are 2 kinds of
dopant. Those which have more valence electrons than the semiconductor itself are called
"donors" and those which have fewer are termed "acceptors" [Book 2].
In a silicon crystal, each atom has 4 valence electrons, which are shared with a
neighbouring atom to form a stable tetrahedral structure. Phosphorus, which has 5 valence
electrons, is a donor and causes extra electrons to appear in the conduction band. Silicon so
doped is called "n-type" [Book 5]. On the other hand, boron, with a valence of 3, is an
acceptor, leaving so-called "holes" in the lattice, which act like positive charges and render the
silicon "p-type"[Book 5]. The drawings in Figure 1.2 are 2-dimensional representations of n-
and p-type silicon crystals, in which the atomic nuclei in the lattice are indicated by circles and
the bonding valence electrons are shown as lines between the atoms. Holes, like electrons, will
move under the influence of an applied voltage but, as the mechanism of their movement is
valence electron substitution from atom to atom, they are less mobile than the free conduction
electrons [Book 2].
In an n-on-p crystalline silicon solar cell, a shallow junction is formed by diffusing
phosphorus into a boron-doped base. At the junction, conduction electrons from donor atoms in
the n-region diffuse into the p-region and combine with holes in acceptor atoms, producing a
layer of negatively-charged impurity atoms. The opposite action also takes place, holes from
acceptor atoms in the p-region crossing into the n-region, combining with electrons and
producing positively-charged impurity atoms [Book 4]. The net result of these movements is the
disappearance of conduction electrons and holes from the vicinity of the junction and the
establishment there of a reverse electric field, which is positive on the n-side and negative on
the p-side. This reverse field plays a vital part in the functioning of the device. The area in
which it is set up is called the "depletion area" or "barrier layer"[Book 4].
When light falls on the front surface, photons with energy in excess of the energy gap
(1.1 eV in crystalline silicon) interact with valence electrons and lift them to the conduction
band. This movement leaves behind holes, so each photon is said to generate an "electron-hole
pair" [Book 2]. In the crystalline silicon, electron-hole generation takes place throughout the
thickness of the cell, in concentrations depending on the irradiance and the spectral composition
of the light. Photon energy is inversely proportional to wavelength. The highly energetic photons
in the ultra-violet and blue part of the spectrum are absorbed very near the surface, while the
less energetic longer wave photons in the red and infrared are absorbed deeper in the crystal and
further from the junction [Book 4]. Most are absorbed within a thickness of 100 µm.
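Because photon energy is inversely proportional to wavelength, a short Python sketch can show which wavelengths carry more than the 1.1 eV crystalline-silicon energy gap mentioned above; the Planck constant, speed of light, and electron-volt conversion are standard values, not figures from this essay.

# Photon energy E = h*c / wavelength, compared with the 1.1 eV energy gap
# of crystalline silicon.

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # one electron volt, in joules
E_GAP_EV = 1.1   # crystalline silicon energy gap

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, in electron volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

cutoff_nm = H * C / (E_GAP_EV * EV) * 1e9
print(f"longest usable wavelength: about {cutoff_nm:.0f} nm (near infrared)")

for wavelength in (400, 700, 1200):   # blue, red, infrared
    energy = photon_energy_ev(wavelength)
    print(f"{wavelength} nm photon: {energy:.2f} eV, creates electron-hole pair: {energy >= E_GAP_EV}")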
The electrons and holes diffuse through the crystal in an effort to produce an even
distribution. Some recombine after a lifetime of the order of one millisecond, neutralizing their
charges and giving up energy in the form of heat. Others reach the junction before their lifetime
has expired. There they are separated by the reverse field, the electrons being accelerated
towards the negative contact and the holes towards the positive [Book 5]. If the cell is connected
to a load, electrons will be pushed from the negative contact through the load to the positive
contact, where they will recombine with holes. This constitutes an electric current. In crystalline
silicon cells, the current generated by radiation of a particular spectral composition is directly
proportional to the irradiance [Book 2]. Some types of solar cell, however, do not exhibit this
linear relationship.
The silicon solar cell has many advantages: it is highly reliable, photovoltaic power
plants can be put up easily and quickly, and photovoltaic power plants are quite modular and can
respond to the sudden changes in solar input which occur when clouds pass by. However there are
still some major problems with them. They still cost too much for mass use and are relatively
inefficient, with conversion efficiencies of 20% to 30%. With time, both of these problems will
be solved through mass production and new technological advances in semiconductors.
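As a rough sense of what the 20% to 30% conversion-efficiency range above implies, here is a small Python sketch estimating electrical output from a given irradiance and cell area; the irradiance and area figures are assumptions made up for illustration, not data from this essay.

# Rough photovoltaic output estimate:
#   electrical power = conversion efficiency * solar irradiance * cell area

def panel_output_watts(efficiency, irradiance_w_per_m2, area_m2):
    return efficiency * irradiance_w_per_m2 * area_m2

irradiance = 1000.0   # W per square meter, roughly full midday sun (assumed)
area = 1.0            # square meter of cells (assumed)

for efficiency in (0.20, 0.30):   # the efficiency range quoted above
    watts = panel_output_watts(efficiency, irradiance, area)
    print(f"{efficiency:.0%} efficient cell, {area} m^2: about {watts:.0f} W")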
f:\12000 essays\sciences (985)\Chemistry\Oxygen 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oxygen and its compounds play a key role in many of the important processes of life and industry. Oxygen in the biosphere is essential in the processes of respiration and metabolism, the means by which animals derive the energy needed to sustain life. Furthermore, oxygen is the most abundant element at the surface of the Earth. In combined form it is found in ores, earths, rocks, and gemstones, as well as in all living organisms.
Oxygen is a gaseous chemical element in Group VIA of the periodic table. The chemical symbol for atomic oxygen is O, its atomic number is 8, and its atomic weight is 15.9994. Elemental oxygen is known principally in the gaseous form as the diatomic molecule, which makes up 20.95% of the volume of dry air. Diatomic oxygen is colorless, odorless, and tasteless.
Two 18th-century scientists share the credit for first isolating elemental oxygen: Joseph PRIESTLEY (1733-1804), an English clergyman who was employed as a literary companion to Lord Shelburne at the time of his most significant experimental work, and Carl Wilhelm SCHEELE (1742-86), a Swedish pharmacist and chemist. It is generally believed that Scheele was the first to isolate oxygen, but that Priestley, who independently achieved the isolation of oxygen somewhat later, was the first to publicly announce his findings.
The interpretation of the findings of Priestley and the resultant clarification of the nature of oxygen as an element was accomplished by the French scientist Antoine-Laurent LAVOISIER (1743-94). Lavoisier's experimental work, which extended and improved upon Priestley's experiments, was principally responsible for the understanding of COMBUSTION and the establishment of the law of conservation of matter.
Lavoisier gave oxygen its name, which is derived from two Greek words that mean "acid former." Lavoisier held the mistaken idea that oxides, when dissolved in water, would form only acids. It is true that some oxides when dissolved in water do form acids; for example, sulfur dioxide forms sulfurous acid. Some oxides, however, such as sodium oxide, dissolve in water to form bases, as in the reaction to form sodium hydroxide; therefore oxygen was actually inappropriately named.
.NATURAL OCCURRENCE
Oxygen is formed by a number of nuclear processes that are believed to occur in stellar interiors. The most abundant isotope of oxygen, with mass 16, is thought to be formed in hydrogen-burning stars by the capture of a proton by the isotopes of nitrogen and fluorine, with the subsequent emission of, respectively, a gamma ray and an alpha particle. In helium-burning stars the isotope of carbon with mass 12 is thought to capture an alpha particle to form the isotope with mass 16 with the emission of a gamma ray.
In the terrestrial environment oxygen accounts for about half of the mass of the Earth's crust, 89% of the mass of the oceans, and 23% of the mass (and 21% of the volume) of the atmosphere. Most of the Earth's rocks and soils are principally silicates. The silicates are an amazingly complex group of materials that typically consist of greater than 50 (atomic) percent oxygen in combination with silicon and one or more metallic elements.
Several important ores are principally oxides of the desired metals, such as the important iron-bearing minerals hematite, magnetite, and limonite and the most important aluminum-bearing mineral, BAUXITE (a mixture of hydrated aluminum oxides and iron oxide).
.PHYSICAL AND CHEMICAL PROPERTIES
Three naturally occurring isotopes of oxygen have been found: one with mass 16 (99.759% of all oxygen), one with mass 17 (0.037%), and one with mass 18 (0.204%). The rarer isotopes, principally the latter, find their major use in labeling experiments used by scientists to follow the steps of chemical reactions.
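As a check on these abundance figures, the atomic weight of oxygen quoted earlier (15.9994) can be recovered by weighting the isotope masses by their abundances; the precise isotope masses used in this Python sketch are standard values that do not appear in the text.

# Average atomic weight of oxygen from the isotope abundances listed above.
# Isotope masses (atomic mass units) are standard values, not from the text.

isotopes = [
    (15.9949, 0.99759),   # oxygen-16
    (16.9991, 0.00037),   # oxygen-17
    (17.9992, 0.00204),   # oxygen-18
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes)
print(f"computed atomic weight of oxygen: {atomic_weight:.4f}")   # about 15.999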
If oxygen at a pressure of one atmosphere is cooled, it will liquefy at 90.18 K (-182.97 deg C; -297.35 deg F), the normal boiling point of oxygen, and it will solidify at 54.39 K (-218.76 deg C; -361.77 deg F), the normal melting point of oxygen. The liquid and solid forms of oxygen have a pale blue color. Several different structures are known for solid oxygen: solid type III, from the lowest temperatures achievable to 23.66 K; type II, from 23.66 to 43.76 K; and type I, from 43.76 to 54.39 K. The critical temperature for oxygen, the temperature above which it is impossible to liquefy the gas no matter how much pressure is applied, is 154.3 K (-118.9 deg C; -181.9 deg F). The pressure of liquid and gaseous oxygen coexisting in equilibrium at the critical temperature is 49.7 atmospheres.
Oxygen gas exhibits a slight but important solubility in water. Molecular oxygen dissolved in water is required by aquatic organisms for their metabolic processes and is ultimately responsible for the oxidation and removal of organic wastes in water. The solubilities of gases depend on the temperature of the solution and the pressure of the gas over the solution. At 20 deg C (68 deg F) and an oxygen pressure of one atmosphere, the solubility of O(2) in water is about 45 grams of oxygen per cubic meter of water, or 45 ppm (parts per million).
Molecular diatomic oxygen is a fairly stable molecule requiring a dissociation energy (the energy required to dissociate one mole of molecular oxygen in its ground state into two moles of atomic oxygen in its ground state) of 493.6 kilojoules per mole. The molecule is dissociated by ultraviolet radiation of any wavelength shorter than 193 nm. Solar radiation striking stratospheric oxygen dissociates it into atomic oxygen for this reason. The atomic oxygen formed in this fashion is capable of reacting with oxygen to form OZONE.
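A short Python sketch can compare the energy carried by a mole of photons at the 193 nm wavelength mentioned above with the 493.6 kJ/mol dissociation energy; Avogadro's number, the Planck constant, and the speed of light are standard values, and the second wavelength is an arbitrary comparison point.

# Energy of one mole of photons, E = N_A * h * c / wavelength, compared
# with the O2 dissociation energy of 493.6 kJ per mole quoted above.

N_A = 6.022e23   # Avogadro's number
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
DISSOCIATION_KJ = 493.6

def photon_mole_energy_kj(wavelength_nm):
    """Energy of a mole of photons at the given wavelength, in kJ."""
    return N_A * H * C / (wavelength_nm * 1e-9) / 1000.0

for wavelength in (193, 300):
    energy = photon_mole_energy_kj(wavelength)
    print(f"{wavelength} nm: {energy:.0f} kJ/mol, exceeds dissociation energy: {energy >= DISSOCIATION_KJ}")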
Corrosion
Many direct, uncatalyzed reactions of oxygen do not occur rapidly at room temperature. This fact has a number of important consequences. One of these consequences has to do with the use of metals as structural materials. Metals that are used in construction, such as iron (principally as steel) and aluminum, form highly stable oxides. For example, the oxidation of aluminum has a significant tendency to occur. However, in spite of this tendency, the reaction occurs so slowly at room temperature that it can be said for most practical purposes not to occur at all, and for this reason aluminum is an appropriate and widely used structural material. The slowness of this reaction is due in part to the stability of the oxygen-oxygen bond and in part because of a very thin, protective layer of oxide that forms on the surface of the aluminum. The oxidation of iron is a complex process involving impurities in the iron, as well as water and carbon dioxide. This oxidative destruction, or rusting, of iron and steel--which are among our most important structural materials--is extremely costly to modern societies.
Biological Oxidation
Another important aspect of the rates of oxygen reactions concerns the rate of reaction with organic materials. Such oxidation reactions are, ultimately, the sources of energy for the higher plants and animals, are responsible for the cleansing of streams of biodegradable wastes, and are responsible for the natural decomposition of organic material. The rates of reactions in this category are selectively controlled by enzymes in the organisms that facilitate the reactions. Thus waste products and dead plants and animals decompose (are oxidized) principally through the agency of microorganisms, and energy-bearing foods are metabolized (oxidized) by means of biological processes.
Reactivity
There is a marked difference between the rates of reactions with oxygen at room temperature and the rates at elevated temperatures. Many substances that do not react rapidly with oxygen in air at temperatures below 100 deg C will do so at 1000 deg C with a strong evolution of heat (exothermically). For example, coal and petroleum can be stored indefinitely at the temperatures encountered under normal climatic conditions, but they readily oxidize, exothermically, at elevated temperatures.
The most common compounds of oxygen are those in which the element exhibits a valence of two. This fact is associated with the electronic structure of atomic oxygen; this atom requires two additional electrons to fill its outermost energy level. Examples of divalent oxides are numerous among well-known substances such as water; carbon dioxide; aluminum oxide; silicon dioxide; the silicates, calcium carbonate or limestone; and sulfur dioxide. Oxygen is also known to have other valences, such as in the PEROXIDES, of which hydrogen peroxide is an example.
The direct reaction of oxygen with another element frequently follows the pattern discussed above; that is, it does not occur rapidly or at all at room temperature but is strongly exothermic, and once oxidation is initiated the evolved heat raises the temperature of the reactants such that the reaction is self-sustaining. Examples of such reactions are with the elements magnesium, carbon, and hydrogen. Magnesium and carbon burn in air once the reaction is initiated, and a hydrogen-oxygen mixture can react explosively when the reaction is initiated by a flame or spark. The explosion of a hydrogen-oxygen mixture is an extremely fast reaction and occurs because of the formation of atomic oxygen in the exploding mixture.
Uses
Pure oxygen is used extensively in technological processes. It is used in the welding, cutting, and forming of metals, as in oxyacetylene welding, in which oxygen reacts with acetylene to form an extremely hot flame. Oxygen is added to the inlet air (3 to 5%) in modern blast furnaces to increase the temperature in the furnace; it is also used in the basic oxygen converter for steel production, in the manufacture of chemicals, and for rocket propulsion.
Oxygen is also used in the partial combustion of methane (natural gas) or coal (taken here to be carbon) to make mixtures of carbon monoxide and hydrogen called synthesis gas, which is in turn used for the manufacture of methanol. Processes in which combustible liquids are produced from coal will become increasingly important as petroleum resources become further depleted.
PRODUCTION
Oxygen is conveniently produced in the laboratory by heating mercuric oxide or potassium chlorate to moderately high temperatures. Production from mercuric oxide is the method that was employed by Joseph Priestley, and production from potassium chlorate is the method commonly used by students in today's laboratories. Oxygen is liberated when solid potassium chlorate is heated to 400 deg C or, when manganese dioxide is added as a CATALYST, to 200 deg C. The liberated oxygen can be collected by water displacement because of the low solubility of oxygen in water.
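A minimal Python sketch of the potassium chlorate route described above, showing how much oxygen a given mass of KClO3 liberates on complete decomposition (2 KClO3 -> 2 KCl + 3 O2); the 10-gram sample mass is an arbitrary illustration, and the molar masses are standard values.

# Oxygen liberated by heating potassium chlorate:
#   2 KClO3 -> 2 KCl + 3 O2   (manganese dioxide serves only as a catalyst)

MOLAR_MASS_KCLO3 = 122.55   # g/mol (K 39.10 + Cl 35.45 + 3 x O 16.00)
MOLAR_MASS_O2 = 32.00       # g/mol

def oxygen_from_chlorate(grams_kclo3):
    """Grams of O2 produced by complete decomposition of the given KClO3 mass."""
    moles_kclo3 = grams_kclo3 / MOLAR_MASS_KCLO3
    moles_o2 = moles_kclo3 * 3 / 2   # 3 mol O2 per 2 mol KClO3
    return moles_o2 * MOLAR_MASS_O2

print(f"10.0 g of KClO3 yields about {oxygen_from_chlorate(10.0):.2f} g of O2")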
Oxygen can also be produced in the laboratory by the electrolysis of water, a process that reverses the violent hydrogen-oxygen reaction discussed previously. When a current is passed through water the liquid is decomposed at the electrodes. This method is also used to produce oxygen on a commercial scale when a high-purity product is desired.
The more economical, and therefore preferred, method for the commercial production of oxygen is the liquefaction and distillation of air. The air is cooled until it liquefies, principally by being made to do work in a rotating expansion turbine, and the resulting liquid air is fractionated by a complex distillation process. The gaseous oxygen produced in this fashion is shipped in pressurized cylinders or, as is often the case when large amounts are involved, through pipelines to nearby industrial plants.
RELATIONSHIP TO LIFE SCIENCES
Most organisms depend on oxygen to sustain their biological processes. The great majority of living organisms fall into two categories. In the first category are the higher plants and the photosynthetic bacteria. These organisms utilize light energy through PHOTOSYNTHESIS to combine carbon dioxide and water (or, infrequently, other inorganic substances in place of water) into more complex materials characterized as CARBOHYDRATES while at the same time releasing oxygen into the atmosphere. In the second category are the higher animals, most microorganisms, and photosynthetic cells that live in the dark. All these second-category organisms use complex series of enzyme-catalyzed OXIDATION AND REDUCTION reactions using materials such as glucose as the fuel and oxygen as the terminal oxidizing agent (see METABOLISM). The end products of metabolism in these organisms are carbon dioxide and water, which are returned to the atmosphere.
The net result of these complementary functions is the oxygen cycle, in which the photosynthetic organisms, using solar energy, synthesize carbohydrates from water and carbon dioxide and give off oxygen as a by-product, while the aerobic organisms oxidize ingested organic materials, using up oxygen and giving off carbon dioxide and water through a complex series of metabolic processes. It has been estimated that 3.5 X (10 to the power of 11) tons of carbon dioxide are cycled annually via these processes.
Thus, in the vertebrates--and in humans in particular--oxygen is necessary to sustain metabolism and thus life. Air is inhaled and oxygen in the air is exchanged in the lungs between the atmosphere and the hemoglobin in the blood. The blood carries the oxygen, complexed with hemoglobin, to all parts of the body in which metabolic processes occur. It also carries carbon dioxide back to the lungs, where the carbon dioxide is exchanged with the atmosphere and exhaled.
If the oxygen concentration were to drop to about half its value in the atmosphere, humans could no longer survive. For this reason an important component of the life-support systems of divers and astronauts is a source of oxygen gas. Similarly, persons ill with respiratory diseases that interfere with normal respiration, such as pneumonia and emphysema, are often kept in OXYGEN TENTS; HYPERBARIC CHAMBERS, which administer high-pressure oxygen, may also be used to treat a variety of ailments.
IMPORTANT COMPOUNDS
In the realm of inorganic chemistry there is a very large number of oxygen-containing compounds. There are very few elements for which no oxides are known, and there are several metallic elements (such as titanium, vanadium, and praseodymium) for which a wide variety of solid oxides exist. The solid oxides of the metallic elements can generally be synthesized by the direct reaction of the elements at high temperatures. In many cases such reactions will result in the formation of a single oxide of the metal in its most oxidized form. Typical examples are the metallic oxides of sodium, calcium, lanthanum, titanium, vanadium, and tungsten.
In the cases of elements capable of forming reduced oxides, in particular the early transition metals, the reduced oxides can be formed by heating the highest oxide, formed as above, to very high temperatures (1,500 K or higher) either in an inert container or in the presence of the metallic element. The reduced oxides that result exhibit a variation in the extent and importance of direct metal-metal bonding in the compounds, and this variation gives rise to a variety of electrical and magnetic properties. The more metal-rich of these oxides are metallic conductors and tend to be nonstoichiometric; that is, they are observed to exist over a range of compositions all possessing the same underlying structure. A number of the titanium oxides exhibit more than one crystal structure (polymorphism). The most oxidized compound of titanium, titanium dioxide, is widely used in the RUTILE form as a white pigment in paints.
Ternary oxides, consisting of two metallic elements plus oxygen, are of great interest to solid-state scientists. For example, compounds such as the SPINELS and the PEROVSKITES are studied extensively because of their interesting magnetic and electrical properties. Examples of important ternary oxides are the magnetic FERRITES, whose magnetic properties can be tailored, making them useful in computer memory units. The ferrites are prepared by firing compacted mixtures of iron oxide and one or more metal oxides (such as those of nickel, copper, zinc, magnesium, and manganese).
Also of importance in inorganic chemistry are the oxides of the nonmetals. Most of the nonmetals are known to form a wide variety of compounds with oxygen. The nitrogen oxides are undesirable by-products of high-temperature combustion in air (as in an internal combustion engine) and can cause serious environmental pollution.
f:\12000 essays\sciences (985)\Chemistry\Oxygen.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oxygen, symbol O, colorless, odorless, tasteless, slightly magnetic gaseous element. On earth, oxygen is more abundant than any other element. Oxygen was discovered in 1774 by the British chemist Joseph Priestley and, independently, by the Swedish chemist Carl Wilhelm Scheele; it was shown to be an elemental gas by the French chemist Antoine Laurent Lavoisier in his classic experiments on combustion.
Large amounts of oxygen are used in high-temperature welding torches, in which a mixture of oxygen and another gas produces a flame of much higher temperature than is obtained by burning gases in air. Oxygen is administered to patients whose breathing is impaired and also to people in aircraft flying at high altitudes, where the poor oxygen concentration cannot support normal respiration. Oxygen-enriched air is used in open-hearth furnaces for steel manufacture.
Most of the oxygen produced in the United States is used to make a mixture of carbon monoxide and hydrogen called synthesis gas, used for the synthesis of methanol and ammonia. High-purity oxygen is used also in the metal-fabrication industries; in liquid form it is of great importance as a propellant for guided missiles and rockets2.
I have chosen the element "Oxygen" because without oxygen, human beings would not be able to live. Oxygen is probably the single most important element in the world as we know it. Without oxygen we could not breathe, have water, or eat plants.
Oxygen's electron configuration is 1s2 2s2 2p4; its electron dot symbol is the symbol O surrounded by six dots, one for each valence electron.
Gaseous oxygen can be condensed to a pale blue liquid that is strongly magnetic. Pale blue solid oxygen is produced by compressing the liquid. The atomic weight of oxygen is 15.9994. Oxygen composes 21 percent by volume or 23.15 percent by weight of the atmosphere; 85.8 percent by weight of the oceans; and, as a constituent of most rocks and minerals, 46.7 percent by weight of the solid crust of the earth. Oxygen comprises 60 percent of the human body. It is a constituent of all living tissues; almost all plants and animals, including all humans, require oxygen, in the free or combined state, to maintain life.3
Three structural forms of oxygen are known: ordinary oxygen, containing two atoms per molecule, formula O2; ozone, containing three atoms per molecule, formula O3; and a pale blue, nonmagnetic form, O4, containing four atoms per molecule, which readily breaks down into ordinary oxygen. Three stable isotopes of oxygen are known; oxygen-16 (atomic mass 16) is the most abundant. It comprises 99.76 percent of ordinary oxygen and was used in determination of atomic weights until the 1960s.
Oxygen is prepared in the laboratory from salts such as potassium chlorate, barium peroxide, and sodium peroxide. The most important industrial methods for the preparation of oxygen are the electrolysis of water and the fractional distillation of liquid air. In the latter method, air is liquefied and allowed to evaporate. The nitrogen in the liquid air is more volatile and boils off first, leaving the oxygen. Oxygen is stored and shipped in either liquid or gaseous form.
Oxygen is a component of many organic and inorganic compounds. It forms compounds called oxides with almost all the elements, including some of the noble gases. A chemical reaction in which an oxide forms is called oxidation. The rate of the reaction varies with different elements. Ordinary combustion, or burning, is a very rapid form of oxidation. In spontaneous combustion, the heat evolved by the oxidation reaction is sufficiently great to raise the temperature of the substance to the point that flames result. For example, phosphorus combines so vigorously with oxygen that the heat liberated in the reaction causes the phosphorus to melt and burn. Certain very finely divided powders present so much surface area to the air that they burst into flame by spontaneous combustion; they are called pyrophoric substances. Sulfur, hydrogen, sodium, and magnesium combine with oxygen less energetically and burn only after ignition. Some elements, such as copper and mercury, form oxides slowly, even when heated. Inactive metals, such as platinum, iridium, and gold, form oxides only through indirect methods. For discussion of oxides of elements see separate articles on each element.
A guy jumps off a ship in the middle of the ocean and swims and swims towards an island. Having second thoughts about leaving the world, he starts screaming at a passing ship. The added oxygen in his blood causes his face to turn dark purple. The captain of the ship sees the man, waves to him, and doesn't pick him up. I guess it was because he was "marooned."
f:\12000 essays\sciences (985)\Chemistry\ozone layer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In environmental science the greenhouse effect is a common term for the role water vapor, carbon dioxide, and ozone play in keeping the earth's surface warmer than it would otherwise be. The atmosphere is largely transparent to incoming solar radiation, which is mostly absorbed by the earth's surface. The earth, being much cooler than the sun, re-emits radiation most strongly at infrared wavelengths. Water vapor, carbon dioxide, and ozone then absorb much of this radiation and re-emit a large proportion back towards the earth. The atmosphere thus acts as a kind of blanket: without its presence the earth's average ground temperature of 15 degrees Celsius would fall to about -18 degrees Celsius. The term greenhouse effect implies that a comparable effect keeps the interior of a greenhouse warm. Actually, the main role of the glass in a greenhouse is to prevent convection currents from mixing cooler air outside with the warmer air inside.
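The often-quoted figure for an earth without its greenhouse blanket can be reproduced with a standard radiative-balance estimate. The short Python sketch below is only an illustration; the solar constant of 1361 W/m2 and the planetary albedo of 0.3 it uses are assumed textbook values, not numbers taken from this essay.

    # Radiative-balance estimate of Earth's temperature with no greenhouse effect.
    sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0         # solar constant, W m^-2 (assumed textbook value)
    albedo = 0.30      # planetary albedo (assumed textbook value)

    absorbed = S * (1 - albedo) / 4      # sunlight averaged over the whole sphere
    T_eff = (absorbed / sigma) ** 0.25   # effective radiating temperature, K
    print(f"{T_eff:.0f} K, i.e. about {T_eff - 273.15:.0f} degrees Celsius")
    # roughly 255 K (about -18 C), versus the observed average of about +15 C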
Although water vapor is the most important factor in the greenhouse effect, and is a major reason why humid regions experience less cooling at night than dry regions do, changes in both water vapor and carbon dioxide play an important role in climate change. For this reason many scientists have expressed concern over the global increase of carbon dioxide in recent decades, largely a result of the burning of fossil fuels. If the many other factors governing the earth's present climate remain more or less constant, the carbon dioxide increase should raise the average temperature at the earth's surface. Because warm air can hold more water vapor before reaching saturation than cooler air can, the amount of water vapor in the atmosphere would probably increase as the atmosphere got warmer, amplifying the warming further. Although this runaway outcome is considered unlikely, and many negative feedbacks could also occur, such as an increase in cloud cover or increased carbon dioxide absorption by the oceans, the results of even a limited rise in average surface temperature remain dramatic enough to justify concern.
In October 1983 the US Environmental Protection Agency released a report that projected the irreversible onset of the greenhouse effect by the 1990s. Shortly thereafter the National Academy of Sciences issued its own report, in which the matter of irreversibility remained more in question. Both reports, however, strongly indicated the need for measures to check the rise in carbon dioxide.
No matter what term you use, global warming or greenhouse effect, both play a major role in the earth's climate. Climate researchers are attempting to predict, based on ocean and air circulation, how great an increase there will be. If global warming continues, the polar ice caps will melt, much of the earth will be flooded, and many lives will be lost.
The ozone layer is located in the stratosphere, approximately 10 km to 50 km above the earth. The density of ozone gas at zero degrees Celsius and 1 atm is about 2.14 g/L. Ozone is a relatively unstable form of molecular oxygen containing three oxygen atoms (O3). Radiation from the sun continuously bombards the Earth's atmosphere, causing molecules to break apart into component elements that form into new chemical compounds. Ozone is produced when upper-atmosphere oxygen molecules (O2) are broken apart by ultraviolet light. Most of the freed oxygen atoms immediately bond with nearby oxygen molecules to form ozone (O + O2 = O3).
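A rough calculation shows why it takes ultraviolet rather than visible light to split O2 in the first step of this process. The Python sketch below assumes a textbook O2 bond dissociation energy of about 498 kJ/mol, a value not given in the essay.

    # Longest wavelength of light energetic enough to break an O=O bond.
    h = 6.626e-34    # Planck constant, J s
    c = 2.998e8      # speed of light, m/s
    NA = 6.022e23    # Avogadro's number, 1/mol

    bond_energy = 498e3 / NA            # J per molecule (assumed 498 kJ/mol)
    wavelength = h * c / bond_energy    # metres
    print(f"maximum wavelength: {wavelength * 1e9:.0f} nm")   # about 240 nm, in the UV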
The only method used to make ozone commercially is to pass gaseous oxygen or air through a high-voltage alternating-current electric discharge, called a silent electric discharge. Ozone near the earth's surface is generally regarded as a pollutant. It is created from industrial, transportation, and some natural sources. It is the most noxious component of smog. At high concentrations, O3 is known to reduce human lung capacity, as well as damage the cells of many plants, animals, and other organisms. For these reasons, ozone is treated as an air pollutant in most industrial countries. Furthermore, O3 in the upper troposphere is a powerful greenhouse gas and is believed to play a role in global warming.
On the other hand, ozone in the stratosphere is highly valued. It serves as a protective radiation shield that intercepts solar ultraviolet light harmful to living things. Ultraviolet light splits the relatively unstable O3 molecules into O2 and atomic O. Most of the time, the O atom created by ozone breakup recombines with one of the plentiful O2 molecules to re-form O3. This ozone-creation process is constantly at work producing more ozone. Scientists can't predict with certainty the consequences for life on the earth if the stratospheric ozone layer weakens. In general, biologists and health professionals recognize that life on earth evolved under the protection of an ozone layer thick enough to remove much of the UV-B solar radiation known to damage cellular organisms. Accordingly, various organisms--including humans--may have difficulty adjusting to the higher UV-B levels resulting from a thinner ozone layer.
Medical studies have quantified some of the expected effects of increased UV-B levels, based on real-life information from people exposed to greater than average UV-B levels--populations living at high altitudes and in the tropics, where the average ozone layer is thinner and the sunlight more direct. The most serious medical effects include increased incidence of cataracts and skin cancer, as well as evidence of weakened immune-system response. Ecological research indicates that some crop yields will decrease and disruptions in marine food chains may occur.
A weakened ozone layer may also cause climatological effects. The stratosphere warms with altitude because the splitting of stratospheric ozone is caused by ultraviolet photons, which carry much more energy than is required to break the O-O bond. This extra energy is converted to heat. Less stratospheric ozone means less local heating, but it also means that more UV light is transmitted to heat the lower atmosphere and the earth's surface.
Ozone can be destroyed by chemicals that react directly with it, or by those that react with the oxygen atom temporarily freed whenever an O3 molecule breaks apart. However, since ozone concentrations are higher than those of most reactive chemicals in the stratosphere, the only ozone destroyers of concern are those that can participate in a "catalytic cycle," that is, where one trace catalytic chemical can be responsible for destroying tens or even hundreds of thousands of ozone molecules.
In the last few years, various human activities have released ozone-destroying chemicals into the atmosphere. Of particular importance are the halogen atoms chlorine and bromine. Chemicals released into the atmosphere by industrial processes include chlorocarbon compounds (such as CCl4 and CH3CCl3), chlorofluorocarbon (CFC) compounds, and halon compounds.
Chlorocarbon compounds are used primarily as industrial solvents, degreasing compounds, and CFC precursors. The CFCs are used as working fluids in refrigeration and air-conditioning units and as aerosol propellants. The halons are used as fire suppressants. Once in the stratosphere, all these chlorine- and bromine-containing compounds are broken apart by solar ultraviolet radiation, releasing their Cl or Br atoms. These atoms start the process of ozone destruction. Each chlorine or bromine atom that starts the destruction cycle can destroy 100,000 ozone molecules. There are, on the other hand, natural sources of ozone-depleting chemicals, such as volcanic eruptions.
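The "one chlorine atom, roughly 100,000 ozone molecules" figure comes from the catalytic cycle Cl + O3 -> ClO + O2 followed by ClO + O -> Cl + O2, in which the chlorine atom is regenerated each time. The Python sketch below is a toy bookkeeping model, not taken from this essay; the per-cycle removal probability it uses is a purely illustrative assumption.

    import random

    # Toy model of the chlorine catalytic cycle:
    #   Cl + O3  -> ClO + O2
    #   ClO + O  -> Cl  + O2   (the Cl atom is regenerated each time)
    # Each pass through the cycle destroys one O3.  With a small per-cycle
    # chance p of the Cl atom being locked away in a reservoir species such
    # as HCl, the expected number of O3 destroyed per Cl atom is about 1/p.
    def ozone_destroyed_by_one_cl(p_removal=1e-5, rng=random.Random(0)):
        destroyed = 0
        while rng.random() > p_removal:
            destroyed += 1
        return destroyed

    runs = [ozone_destroyed_by_one_cl() for _ in range(20)]
    print(sum(runs) / len(runs))   # on the order of 100,000 O3 per chlorine atom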
A hole in the ozone layer has emerged because of all of this depletion. Starting in the spring of 1980, a massive ozone hole of roughly 8.2 million square miles, accounting for a loss of one quarter to one half of the ozone column, appeared over the continent of Antarctica. For the past 16 years the hole has grown larger. There was a theory, the Rowland/Molina theory, that predicted that the most severe O3 loss would occur relatively high in the stratosphere (above 30 km). In fact the largest depletion over Antarctica occurred in the middle range, between 13 and 21 km. Atmospheric chlorine and bromine levels are expected to peak around 1998.
There are currently steps being taken to protect the ozone layer. One such step is the 1987 Montreal Protocol on substances that harm the ozone layer. Thirty-seven nations signed the agreement, which committed the signing nations to cut down on the use of chlorofluorocarbons and to stop CFC emissions completely by the year 2000. In general people are waking up to this serious problem, as well they should. The more ozone that is destroyed, the more UV-A and UV-B rays reach the earth. It has been hypothesized that, if depletion continues at the current rate, by the year 2086 no living organism will be able to survive on earth unless it is underwater.
Ozone is debatably the most important thing known to man. The survival of the human race is really dependent upon the ozone layer. If we keep using these dangerous chemicals, such as the CFCs found in aerosol cans, we could ultimately destroy the ozone layer. If we destroy the ozone layer then we are really killing ourselves. If the ozone layer is destroyed then powerful ultraviolet rays will penetrate the earth's atmosphere and everyone would get skin cancer and eventually die. I hope organizations will continue to work to prevent the destruction of ozone because I would like to see man survive for a while.
f:\12000 essays\sciences (985)\Chemistry\Ozone.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ozone
Danielle Farrar
March 16, 1997
Triatomic oxygen, O3, is most commonly known as ozone. It has a resonance structure, and can be drawn in two different ways:
O=O-O   <-->   O-O=O
It is a bluish, explosive gas at room temperature, and has a boiling point of -119°C. It has a melting point of -193°C, and is a blue liquid. Its critical temperature and pressure are -12.1°C and 53.8 atm, respectively. It has a pleasant odor in concentrations of less than 2 ppm, and is irritating and injurious in higher concentrations. The density of ozone gas is 2.144 g/L, and the density of ozone as a liquid is 1.614 g/mL. It is extremely unstable, and solutions containing ozone explode upon warming. It is found in varying proportions on Earth, but the concentration is about 0.05 ppm at sea level.
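The gas density quoted above (2.144 g/L) is close to what the ideal gas law predicts for O3 at 0 degrees Celsius and 1 atm. The short Python check below assumes ideal-gas behaviour and a molar mass of 48.0 g/mol for O3; it is only a plausibility check, not a measurement.

    # Ideal-gas estimate of ozone's density at 0 degrees Celsius and 1 atm.
    R = 0.082057   # L atm / (mol K)
    P = 1.0        # atm
    T = 273.15     # K
    M = 48.0       # g/mol, molar mass of O3

    density = P * M / (R * T)    # from PV = nRT, density = PM / RT
    print(f"{density:.3f} g/L")  # about 2.14 g/L, close to the quoted 2.144 g/L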
Ozone absorbs harmful ultraviolet radiation in the upper atmosphere, and protects humans from skin cancer. But ozone is also the main ingredient of smog, and causes serious health effects and forest and crop damage in the lower atmosphere. Ozone is formed through the chemical reaction of volatile organic compounds and nitrogen dioxide in the atmosphere, in the presence of sunlight. This reaction is called a photochemical reaction, because sunlight is required. The product is known as smog. The notorious brownish color of smog is due to the nitrogen dioxide in the mixture. Increased temperatures stimulate the reaction, which is why ozone conditions are worse in the summer. Ozone is an oxidant, meaning it takes electrons away from other molecules, and it disrupts key structures in cells by starting chain reactions.
Ozone is a serious national problem. Half of the largest urban areas in the United States exceed the ozone standards. The worst regions in the US include California and the Texas Gulf coast, and the northeast and the Chicago-Milwaukee area during the summer. The ozone condition varies from year to year, as the temperature and weather fluctuate. This fluctuation also occurs throughout the day: as emissions from morning traffic build up, the levels rise. Ozone emissions come from many sources, such as automobiles, gas stations, power plants, dry cleaners, paint shops, chemical manufacturing plants, oil refineries, and other businesses that release volatile organic compounds.
The health effects of ozone are chest pain, coughing, wheezing, lung and nasal congestion, labored breathing, sore throat, nausea, rapid breathing, and eye and nose irritation. The symptoms occur when the levels of ozone are only slightly higher than the legal standard. Living in San Diego during my elementary school years, I personally felt the effects of ozone: the tightness of the chest, wheezing, and labored breathing on certain hot, humid days. Days would be labeled "smog days," and children wouldn't be able to play outside during recess, the air was so polluted. Heavy exercise can drive ozone deeper into the respiratory system and interfere with lung operation, and children growing up in smog-polluted areas have been found to have lost 10-15% of their lung capacity.
Ozone severely damages crops, forests, and man-made materials. The crops affected include soybeans, peanuts, corn, and wheat, and, more severely, tomatoes, beans, and snap beans. Cash losses of these crops are estimated at several billion dollars a year. Evidence points towards the fact that ozone is severely damaging forests in the eastern United States, and ozone is responsible for the reduced growth rate of commercial yellow pines in the southeast U.S. Organisms such as lichens, and ecosystem processes such as nutrient cycling, are also affected. Ozone can also damage materials, causing cracking of plastics and rubber, and decomposition and fading of fibers and dyes.
Ozone has been in the news a lot in the past decade or so, not only for its effects as smog in the lower atmosphere, but also for ozone depletion in the upper atmosphere. It seems rather ironic that something so abundant in one place that it becomes a problem should also be in short supply somewhere else. However, the focus of my research was primarily on smog and its effects in the lower atmosphere. The health problems presented, and the money lost on crops and forests, have made ozone quite a prevalent issue, mainly because it affects everyone, all over the planet. This invisible gas has been and will continue to be a source of intense interest for scientists in the coming years.
The sources of these pollutants, such as automobiles, power plants, and the other things I mentioned previously, have become subjects of controversy. Huge amounts of money have been put into research on decreasing the amount of ozone produced. For instance, Los Angeles installed a subway-like system in order to decrease traffic in the city, thereby cutting down smog. Power plants have shut down, and stricter regulations have been put in place in order to remedy the serious problem of pollution.
Bibliography
Harte, John, and Cheryl Holdre, Richard Schneider, and Christine Shirley. Toxics A to Z. pp 372-74. University of California Press: Los Angeles, 1991.
"Ozone Most Harmful to Trees" USA Today Magazine. June 1992. pp 9-10
Scott, Geoff. "The Two Faces of Ozone," Current Health. September 1992. pp 24-25.
f:\12000 essays\sciences (985)\Chemistry\Pheromones.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Pheromones
Do you often wonder what makes someone attracted to you or what makes you attracted to that other person? Sometimes you can look at the person and not even be attracted to their looks, but you feel compelled to talk to them or just contact them in any form. These urges could be induced by a compound group most commonly called "Pheromones."
Pheromones (pronounced FAIR-uh-mohn; from the Greek pherein, "to carry," and horman, "to stimulate") are chemicals released by organisms into the environment, where they serve as signals or messages to alter behavior in other organisms of the same species. Pheromones are a class of compounds that insects and animals produce to attract members of their own species. These compounds are secreted by the body in very small amounts but are nevertheless effective in producing instinctive behavior when detected by the nose. In insects and animals, most sexual and social behavior is controlled by pheromones.
Humans have used perfumes for thousands of years, but there is a basic difference between perfume and pheromones. Pheromones are produced by the body and usually do not smell at all pleasant, whereas perfumes are either synthesized or extracted from natural products and are employed because of their pleasant smell.
Scientific research suggests that there are human pheromones for both the male and the female. Females have a better developed sense of smell and testing indicates that they are more responsive to male pheromones than the reverse.
Research over the years has found that the male pheromones belong to a class of compounds called steroids, in particular derivatives of androstenone, which are secreted by perspiration glands on our bodies.
The compound androstenone, and a related product androstenol, are the most commonly used compounds for testing the effects of pheromones on humans. The tests usually involve getting subjects to select preferred objects from a group of objects, some of which have been sprayed with pheromones. For example, statistics are taken of the use of the chairs in a dental waiting room when one of the chairs has been sprayed with pheromones. It is reported that females favor the chair marked with a male pheromone, whereas males tend to avoid it. Usually the subjects report that they are not conscious of the smell of the pheromones; the preference shown appears to be a subtle unconscious action in most cases.
In the perfume industry, a pheromone product frequently contains androstenone dissolved in a solvent, along with a masculine cologne scent that is attractive to women. Upon application the solvent evaporates, leaving the residual perfume and the androstenone. The small amount of perfume is designed to attract women consciously, while the androstenone works in the background in the unconscious way of most pheromones. Next time you are strangely attracted to a person of the opposite sex, you might consider whether or not they are consciously using a pheromone.
f:\12000 essays\sciences (985)\Chemistry\phosphates 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chemistry: Water Pollution
Phosphates in Water Pollution
Phosphates may be created by substituting some or all of the
hydrogen of a phosphoric acid by metals. Depending on the number of
hydrogen atoms that are replaced, the resulting compound is described as
a primary, secondary or tertiary phosphate. Primary and secondary
phosphates contain hydrogen and are acid salts. Secondary and tertiary
phosphates, with the exception of those of sodium, potassium and
ammonium are insoluble in water. Tertiary sodium phosphate is valuable
as a detergent and water softener. The primary phosphates tend to be
more soluble.
Phosphates, which are an important component of metabolism in
both plants and animals, help in the first step of the oxidation of
glucose in the body. Primary calcium phosphate is an ingredient of
plant fertilizer.
Phosphates have attracted increasing attention recently. The focus
is on their environmentally harmful effects as ingredients of household
detergents. Wastewater from laundering agents contains phosphates,
which are said to be a water pollutant.
Most laundry detergents contain approximately 35% to 75% sodium
triphosphate (Na5P3O10), which serves two purposes: it provides an
alkaline solution (pH 9.0 to 10.5) necessary for effective cleansing,
and it ties up the calcium and magnesium ions found in natural waters
so that they cannot interfere with the cleansing role of the
detergent.
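To get a feel for how much phosphorus such a detergent can carry, the phosphorus mass fraction of sodium triphosphate (Na5P3O10) can be worked out in Python from standard atomic masses. The 50% triphosphate loading in the last line is an assumption chosen for illustration from the 35% to 75% range above.

    # Mass fraction of phosphorus in sodium triphosphate, Na5P3O10.
    atomic_mass = {"Na": 22.990, "P": 30.974, "O": 15.999}   # g/mol
    m_na5p3o10 = 5 * atomic_mass["Na"] + 3 * atomic_mass["P"] + 10 * atomic_mass["O"]
    p_fraction = 3 * atomic_mass["P"] / m_na5p3o10
    print(f"phosphorus content of Na5P3O10: {p_fraction * 100:.1f} percent")   # about 25%

    # For a detergent assumed to be 50% sodium triphosphate by mass:
    print(f"phosphorus per 100 g of detergent: {50 * p_fraction:.1f} g")       # about 12.6 g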
Eutrophication is the progressive over-fertilization of water,
in which festering masses of algal blooms choke rivers and lakes.
Phosphorus compounds act as a fertilizer for all plant life, whether
free-floating algae or more substantial rooted weeds, and are implicated
in eutrophication. Many countries control phosphate levels, and
Switzerland has banned the use of phosphates.
The marine environment is both more fragile and more resistant than
the terrestrial ecosystem. It is fragile because nutrients are
generally present in very low concentrations, they are permanently
consumed by living organisms, and pollutants diffuse rapidly.
Lakes and rivers are extremely complex ecosystems. Nutrients are
taken up by both algae and rooted weeds. The weeds act as a shelter for
fish larvae and zooplankton, both of which eat algae and are, in turn,
eaten by larger fish. Scientists have concluded that unpolluted lakes
can absorb surprisingly large amounts of phosphates without difficulty.
When a fertilizer, such as a phosphate, is added, more algae will grow,
and so, consequently, will the populations of zooplankton and fish.
Difficulties only arise when the lake is already impure. Zooplankton are
sensitive to their environment and many substances are toxic to them. If
any of these substances, including phosphates, are present, the
zooplankton population cannot increase. Adding phosphates to this
polluted system will cause algae growth. The floating masses cut off the
light supply. Weeds die and decompose, using up dissolved oxygen and
causing sulfurous smells and plagues. Deprived of shelter and food, the
fish larvae starve. The lake is well on the way to catastrophe.
Without wetlands there would be a minimal amount of fresh
drinking water, because wetlands filter the waters of our
lakes, rivers and streams, thereby reducing contamination of the water.
The plant growth in wetlands removes phosphates and other plant
nutrients washed in from the surrounding soil, consequently restricting
the growth of algae and aquatic weeds. This growth is a serious problem
in some of Canada's major waterways, where dead and decaying algae
deprive the deeper waters of their oxygen.
Researchers at Lancaster University have studied lakes whose
plant and animal life has been killed by acid rain. The excess acid in
the lakes can be neutralized easily by adding lime, but this makes the
waters rich in calcium. Life will gradually return to the lake but, as
these lakes should have low calcium levels, it will not be the same
kind of life that existed in the lakes before pollution. The answer,
they have concluded, is to add phosphates. These phosphates work by
shielding the water; the process depends upon nitrate ions in the lake.
Paradoxically, these ions are also produced by acid rain, which
contains oxides of nitrogen from combustion sources. These
fertilizers do not alter the pH level of the water. Instead, they
stimulate the growth of plants. The plants absorb the dissolved
nitrates, generating hydroxide ions, which in turn neutralize the
excess acid.
Removal of phosphates from detergent is not likely to slow algae
growth in receiving waters. It may actually prove disastrous. Its
replacement with borax would definitely be disastrous. Scientists are
unsure of borax's role in plant growth. It is not required by algae and
other microscopic plants, but it is essential to higher plants. However,
in excessive quantities, about 5 micrograms of boron per gram of water,
boron severely damages plant life. Highly alkaline substances such as
sodium hydroxide, which gel proteins, are also hazardous.
Another concern is the fact that each year thousands of children swallow
detergents, resulting in serious injuries or death.
In conclusion, the only way to overcome the disastrous effects
of phosphates is to find an alternative. However, an acceptable substitute
for phosphates has not yet been found. Washing only with synthetic
detergents would require so much detergent that the cost per wash would
increase significantly. Another alternative is the substitution of
synthetic nonionic detergents for the ionic detergents in use. Nonionic
detergents are not precipitated by calcium or magnesium ions. This
would reduce the risk of contaminating our lakes and rivers.
f:\12000 essays\sciences (985)\Chemistry\Phosphates.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Phosphates may be created by substituting some or all of the hydrogen of a phosphoric acid by metals. Depending on the number of hydrogen atoms that are replaced, the resulting compound is described as a primary, secondary or tertiary phosphate. Primary and secondary phosphates contain hydrogen and are acid salts. Secondary and tertiary phosphates, with the exception of those of sodium, potassium and
ammonium are insoluble in water. Tertiary sodium phosphate is valuable as a detergent and water softener. The primary phosphates tend to be more soluble.
Phosphates, which are an important component of metabolism in both plants and animals, help in the first step of the oxidation of glucose in the body. Primary calcium phosphate is an ingredient of plant fertilizer.
Phosphates have attracted increasing attention recently. The focus is on their environmentally harmful effects as ingredients of household detergents. Wastewater from laundering agents contains phosphates, which are said to be a water pollutant.
Most laundry detergents contain approximately 35% to 75% sodium triphosphate (Na5P3O10), which serves two purposes: it provides an alkaline solution (pH 9.0 to 10.5) necessary for effective cleansing, and it ties up the calcium and magnesium ions found in natural waters so that they cannot interfere with the cleansing role of the detergent.
Eutrophication is the progressive over-fertilization of water, in which festering masses of algal blooms choke rivers and lakes. Phosphorus compounds act as a fertilizer for all plant life, whether free-floating algae or more substantial rooted weeds, and are implicated in eutrophication. Many countries control phosphate levels, and Switzerland has banned the use of phosphates.
The marine environment is both more fragile and more resistant than the terrestrial ecosystem. It is fragile because nutrients are generally present in very low concentrations, they are permanently consumed by living organisms, and pollutants diffuse rapidly.
Lakes and rivers are extremely complex ecosystems. Nutrients are taken up by both algae and rooted weeds. The weeds act as a shelter for fish larvae and zooplankton, both of which eat algae and are, in turn, eaten by larger fish. Scientists have concluded that unpolluted lakes can absorb surprisingly large amounts of phosphates without difficulty. When a fertilizer, such as a phosphate, is added, more algae will grow,
and so, consequently, will the populations of zooplankton and fish. Difficulties only arise when the lake is already impure. Zooplankton are sensitive to their environment and many substances are toxic to them. If any of these substances, including phosphates, are present, the zooplankton population cannot increase. Adding phosphates to this polluted system will cause algae growth. The floating masses cut off the light supply. Weeds die and decompose, using up dissolved oxygen and causing sulfurous smells and plagues. Deprived of shelter and food, the fish larvae starve. The lake is well on the way to catastrophe.
Without wetlands there would be a minimal amount of fresh drinking water, because wetlands filter the waters of our lakes, rivers and streams, thereby reducing contamination of the water. The plant growth in wetlands removes phosphates and other plant nutrients washed in from the surrounding soil, consequently restricting
the growth of algae and aquatic weeds. This growth is a serious problem in some of Canada's major waterways, where dead and decaying algae deprive the deeper waters of their oxygen.
Researchers at Lancaster University have studied lakes whose plant and animal life has been killed by acid rain. The excess acid in the lakes can be neutralized easily by adding lime, but this makes the waters rich in calcium. Life will gradually return to the lake but, as these lakes should have low calcium levels, it will not be the same kind
of life that existed in the lakes before pollution. The answer, they have concluded, is to add phosphates.
These phosphates work by shielding the water; the process depends upon nitrate ions in the lake. Paradoxically, these ions are also produced by acid rain, which contains oxides of nitrogen from combustion sources. These fertilizers do not alter the pH level of the water. Instead, they stimulate the growth of plants. The plants absorb the dissolved nitrates, generating hydroxide ions, which in turn neutralize the excess acid.
Removal of phosphates from detergent is not likely to slow algae growth in receiving waters. It may actually prove disastrous. Its replacement with borax would definitely be disastrous. Scientists are unsure of borax's role in plant growth. It is not required by algae and other microscopic plants, but it is essential to higher plants. However, in
excessive quantities, about 5 micrograms of boron per gram of water, boron severely damages plant life. Highly alkaline substances such as sodium hydroxide, which gel proteins, are also hazardous. Another concern is the fact that each year thousands of children swallow detergents, resulting in serious injuries or death.
In conclusion, the only way to overcome the disastrous effects of phosphates is to find an alternative. However, an acceptable substitute for phosphates has not yet been found. Washing only with synthetic detergents would require so much detergent that the cost per wash would increase significantly. Another alternative is the substitution of
synthetic nonionic detergents for the ionic detergents in use. Nonionic detergents are not precipitated by calcium or magnesium ions. This would reduce the risk of contaminating our lakes and rivers.
f:\12000 essays\sciences (985)\Chemistry\Photochemical Smog.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Xxxxx Xxxxxx
Gifted Chemistry IB
Alternative Assessment
1997 March 19
Photochemical Smog
Historically, the term smog referred to a mixture of smoke and fog, hence the name smog. The industrial revolution has been the central cause for the increase in pollutants in the atmosphere over the last three centuries. Before 1950, the majority of this pollution was created from the burning of coal for energy generation, space heating, cooking, and transportation. Under the right conditions, the smoke and sulfur dioxide produced from the burning of coal can combine with fog to create industrial smog. In high concentrations, industrial smog can be extremely toxic to humans and other living organisms. London is world famous for its episodes of industrial smog. The most famous London smog event occurred in December, 1952 when five days of calm foggy weather created a toxic atmosphere that claimed about 4000 human lives. Today, the use of other fossil fuels, nuclear power, and hydroelectricity instead of coal has greatly reduced the occurrence of industrial smog. However, the burning of fossil fuels like gasoline can create another atmospheric pollution problem known as photochemical smog. Photochemical smog is a condition that develops when primary pollutants (oxides of nitrogen and volatile organic compounds created from fossil fuel combustion) interact under the influence of sunlight to produce a mixture of hundreds of different and hazardous chemicals known as secondary pollutants. Development of photochemical smog is typically associated with specific climatic conditions and centers of high population density. Cities like Los Angeles, New York, Sydney, and Vancouver frequently suffer episodes of photochemical smog.
One way in which the production of photochemical smog is initiated is through the photochemical reaction of nitrogen dioxide (NO2) to form ozone. There are many sources of photochemical smog, including vehicle engines (the number one cause of photochemical smog), industrial emissions, and area sources (the loss of vapors from small areas such as a local service station, surface coatings and thinners, and natural gas leakage).
Vehicle engines, which are extremely numerous in all parts of the world, do not completely burn the petroleum they use as fuel, and the high temperatures inside the engine also produce nitrogen dioxide, which is released through the vehicle exhaust along with a high concentration of unburned hydrocarbons. The absorption of solar radiation by the nitrogen dioxide results in the formation of ozone (O3). Ozone reacts with many different hydrocarbons to produce a brownish-yellow gaseous cloud which may contain numerous chemical compounds, the combination of which we call photochemical smog.
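The link between nitrogen dioxide, sunlight, nitric oxide, and ozone described above is often summarised by the photostationary-state relation, in which the ozone level is roughly j[NO2] / (k[NO]), where j is the NO2 photolysis rate and k the rate constant for the NO + O3 reaction. The Python sketch below is not taken from this essay; the midday rate values are typical literature figures assumed purely for illustration.

    # Photostationary-state estimate of ozone from the NO2/NO balance:
    #   NO2 + sunlight -> NO + O,   O + O2 -> O3,   NO + O3 -> NO2 + O2
    j_no2 = 8.0e-3     # NO2 photolysis rate, 1/s (assumed typical midday value)
    k_no_o3 = 4.3e-4   # NO + O3 rate constant, 1/(ppb s) (assumed typical value)

    def ozone_ppb(no2_ppb, no_ppb):
        """Steady-state ozone implied by the NO2 to NO ratio."""
        return j_no2 * no2_ppb / (k_no_o3 * no_ppb)

    print(f"{ozone_ppb(no2_ppb=40.0, no_ppb=10.0):.0f} ppb ozone")   # roughly 70-75 ppb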
Both types of smog can greatly reduce visibility. Even more importantly, they pose a serious threat to our health. They form as a result of extremely high concentrations of pollutants that are trapped near the surface by a temperature inversion. Many of the components which make up these smogs are not only respiratory irritants, but are also known carcinogens.
There are many conditions for the development of photochemical smog:
1. A source of nitrogen oxides and volatile organic compounds.
2. The time of day is a very important factor in the amount of photochemical smog present.
• Early morning traffic increases the emissions of both nitrogen oxides (NOx) and volatile organic compounds (VOCs) as people drive to work.
• Later in the morning, traffic dies down and the nitrogen oxides and volatile organic compounds begin to react forming nitrogen dioxide, increasing its concentration.
• As the sunlight becomes more intense later in the day, nitrogen dioxide is broken down and its by-products form increasing concentrations of ozone.
• At the same time, some of the nitrogen dioxide can react with the volatile organic compounds (VOCs) to produce toxic chemicals.
• As the sun goes down, the production of ozone is halted. The ozone that remains in the atmosphere is then consumed by several different reactions.
3. Several meteorological factors can influence the formation of photochemical smog. These conditions include:
• Precipitation can alleviate photochemical smog as the pollutants are washed out of the atmosphere with the rainfall.
• Winds can blow photochemical smog away replacing it with fresh air. However, problems may arise in distant areas that receive the pollution.
• Temperature inversions can enhance the severity of a photochemical smog episode. Normally, during the day the air near the surface is heated and as it warms it rises, carrying the pollutants with it to higher elevations. However, if a temperature inversion develops pollutants can be trapped near the Earth's surface. Temperature inversions cause the reduction of atmospheric mixing and therefore reduce the vertical dispersion of pollutants. Inversions can last from a few days to several weeks.
4. Topography is another important factor influencing how severe a smog event can become. Communities situated in valleys are more susceptible to photochemical smog because hills and mountains surrounding them tend to reduce the air flow, allowing for pollutant concentrations to rise. In addition, valleys are sensitive to photochemical smog because relatively strong temperature inversions can frequently develop in these areas.
Possible Solutions
A possible solution to the problem of photochemical smog is to enforce stricter emission laws all over the globe. Many countries have varying laws on the legal limits of NOx, carbon dioxide, and sulfur dioxide. For example, the United States has a lower legal limit for CO2 than Mexico, which is just south of the U.S. My point is that you can go from one country to another and notice the difference between the two levels of photochemical smog. If the world were to enforce the same legal smog levels, we wouldn't have to worry about concentrations of smog in some places more than others.
Another possible solution is to come up with a cleaner-burning fuel for automobiles. Some experimental cars already run on hydrogen, electricity, solar power, and even water. The problem is that these automobiles are not in mass production, leaving the world to rely on gasoline and diesel as the primary sources of power. If the world were to accept the hydrogen car or electric car more openly and develop them for mass production, we would have lower levels of the photochemical pollutants altogether.
Abstract 1
"Photochemical Smog and the Okanagan Valley"
Photochemical smog can be a significant pollution problem in the Okanagan Valley. The Okanagan meets all the requirements necessary for the production of photochemical smog, especially during the summer months. During this time period there is an abundance of sunlight, temperatures are very warm, and temperature inversions are common and can last for many days. The Okanagan Valley also has some very significant sources of nitrogen oxides and volatile organic compounds, including:
1. High emissions of nitrogen oxides and volatile organic compounds primarily from burning fossil fuels in various forms of transportation.
2. The release of large amounts of nitrogen oxides and volatile organic compounds into the atmosphere from forestry and agriculture. Forestry contributes to the creation of photochemical smog in two ways: the burning of slash from logging, and the burning of woodchip wastes in wood product processing plants. Agriculture produces these chemicals through the burning of prunings and other organic wastes.
The idea that the Okanagan is immune to the big-city problems of photochemical smog may simply be wishful thinking. In fact, recent monitoring of ground-level ozone has shown that the values between here and the Lower Mainland are quite comparable. In addition, research over a 4-year period (1985-1989) has shown that ozone levels can at times be higher over the Okanagan Valley than over the Lower Mainland of British Columbia by almost 49%.
Abstract 2
"The Photochemical Problem in Perth"
The Perth Photochemical Smog Study, a joint effort of Western Power Corporation and the Department of Environmental Protection (DEP), was undertaken to determine, for the first time, the extent to which photochemical smog had become a problem in Perth.
Measurements of photochemical smog in Perth's air began in 1989, at a single site in the suburb of Caversham, 15 kilometers north-east of the city center. Despite the common perception that Perth is a windy city and therefore not prone to air pollution, the first summer of measurements revealed that the city was sometimes subjected to smog levels which approached or exceeded the guidelines recommended by the National Health and Medical Research Council of Australia (NHMRC).
In 1991 the State Energy Commission of Western Australia (SECWA, now Western Power Corporation) sought to extend the capacity of the gas turbine power station it operated at Pinjar, some 40 kilometers north of the Perth central business district. In view of the Caversham data, the Environmental Protection Authority expressed concern that increasing the NOx emissions at Pinjar could contribute to Perth's emerging photochemical smog problem which, at that stage, was poorly defined.
A consequent condition on the development at Pinjar was that SECWA undertake a study of the formation and distribution of photochemical smog in Perth, a particular outcome of which would be to determine the effect of the Pinjar power station's emissions on smog in the region.
Given the DEP's concerns and responsibility in relation to urban air quality, the Perth Photochemical Smog Study (PPSS) was developed as a jointly operated and managed project, funded by SECWA and with DEP contributing facilities and scientific expertise.
The primary objective of the Perth Photochemical Smog Study was to measure, for the first time, the magnitude and distribution of photochemical smog concentrations experienced in the Perth region and to assess these against Australian and international standards, with consideration given to health and other environmental effects.
The study's monitoring and data analysis program was very successful in defining the distribution of Perth's smog. The Perth region experiences photochemical smog during the warmer months of each year. On average, during the three year period July 1992 to June 1995, there have been 10 days per year on which the peak hourly ozone concentration exceeded 80 parts per billion (ppb) somewhere over the Perth region.
Bibliography
1. Cope, M.C. and Ischtwan, J., 1995, "Perth Photochemical Smog Study, Airshed Modelling Component", EPA of Victoria, August 1995.
2. Minderly, Calvin 1995, "Photochemical Smog and the Okanagan Valley", Okanagan University Publishings, June 7-8, 1995.
3. Pidwirny, Michael, Gow, Tracy, et al. "Photochemical Smog", Microsoft Encarta 1996 Multimedia Encyclopedia. Microsoft Corporation, 1996.
4. Woodward, A.J., Calder, I., McMichael, A.J., Pisaniello, D., Scicchitano, R., Steer, K. and Guest, C.S., 1996, "Options for Revised Air Quality Goals for Ozone (Photochemical Oxidants)", Project Report to the British Commonwealth Department of Health, Housing and Community Services, August 1993.
f:\12000 essays\sciences (985)\Chemistry\Plutonium.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Plutonium
Plutonium is a radioactive metallic element. Although it is occasionally found in nature, almost all of our plutonium is produced artificially in a lab. The official chemical symbol for plutonium is Pu, coming from its first and third letters. Its atomic number is ninety-four. Plutonium is able to maintain its solid state until very high temperatures, melting at six hundred and forty degrees Celsius, and boiling at three thousand four hundred and sixty degrees. The density of plutonium, at twenty degrees centigrade, is 19.86 grams per cubic centimeter.
Plutonium was discovered, in the laboratory, by Glenn Theodore Seaborg and his associate Edwin M. McMillan. The two shared the Nobel Prize in 1951 for their discoveries of plutonium, americium (Am), curium (Cm), berkelium (Bk), and californium (Cf). In addition, Seaborg later contributed to the discovery of three more radioactive elements, einsteinium (Es), mendelevium (Md), and nobelium (No). Plutonium was Seaborg's first discovery. Its name came from Pluto, the planet after Neptune, for which neptunium was named. In 1940, at the University of California at Berkeley, he bombarded a sample of uranium with deuterons, the nuclei of deuterium atoms, transmuting it into plutonium. Shortly after, Seaborg was able to isolate plutonium-239, an isotope used in atomic bombs.
Plutonium is a highly dangerous and poisonous element because it rapidly gives off radiation in the form of alpha particles. Alpha particles, which are identical to the nucleus of a helium atom, consist of two protons and two neutrons tightly bound together. Although the particles can only travel about five centimeters in the air, they can cause great damage when they enter the body, causing cancer and other serious health problems. Beyond the danger of its radiation, plutonium will spontaneously explode when a certain amount, called a critical mass, is kept together. Soon after the discovery of plutonium, it was discovered that at least two oxidation states existed. It is now known to exist in oxidation states of +3, +4, +5, and +6.
Currently, there are fifteen known isotopes of plutonium, with mass numbers ranging between 232 and 246. The most important isotope is plutonium-239, or Pu-239. When struck by a neutron, this isotope undergoes a process called fission: the nucleus of the plutonium atom is split into two nearly equal parts, and energy is released. Although the energy released by one atom is not much, the splitting of the nucleus releases more neutrons, which strike more plutonium atoms. This process, called a chain reaction, produces enormous amounts of energy. This energy is often used to power nuclear reactors, or to provide the energy for nuclear weapons. Although Pu-239 is such an efficient source of energy, disposing of its waste has become a major problem. When uranium is converted to Pu-239, a waste with a half-life of around 24,100 years is produced.
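The 24,100-year half-life quoted above gives a feel for how slowly Pu-239 waste disappears. The Python sketch below simply applies the standard decay law, fraction remaining = 0.5^(t / half-life); it is an illustration and is not tied to any particular waste inventory.

    # Fraction of Pu-239 remaining after t years, using its roughly 24,100-year half-life.
    HALF_LIFE_PU239 = 24_100.0   # years

    def fraction_remaining(years):
        return 0.5 ** (years / HALF_LIFE_PU239)

    for t in (100, 1_000, 24_100, 100_000):
        print(f"after {t:>7} years: {fraction_remaining(t) * 100:6.2f} percent remains")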
Another large problem for scientists creating power with plutonium is actually getting the chain reaction to work. Often, only the first few atoms struck by the deuterons convert to plutonium. Unfortunately for the scientists, the whole problem is a matter of probabilities and chance. There are four factors that determine whether the reaction occurs. They are
1) escape, 2) non-fission capture by uranium, 3) non-fission capture by impurities, and 4) fission capture. The first three factors cause the uranium to lose neutrons; the last is what causes the reaction. If the loss of neutrons is less than the number produced by fission capture, the reaction occurs. Otherwise, plutonium is not made, and the chain reaction stops immediately.
Using the chain-reaction system, the first operating nuclear reactor of a reasonable amount of power was built in 1943. It was called the X-10 reactor. The core of the reactor was a twenty-four-foot cube of graphite blocks, with 1248 fuel channels, each 1.75 inches square. Each channel was fueled with four-inch-long uranium rods, jacketed in aluminum to protect against oxidation. The entire core was surrounded by a seven-foot-thick concrete shield, with openings at one end to replace the uranium rods. At a cost of $1,000,000 for the building and $2,000,000 each for the graphite and uranium, this plant produced about 190 MeV per fission.
In addition to its uses as fuel for a reactor or in a bomb, plutonium has some practical, everyday uses as well. For example, the original plutonium isotope, Pu-238, is used today to power pacemakers for people with deficient hearts. Also, the isotopes Pu-242 and Pu-244, the latter of which occurs naturally, are used in studying chemicals and metals.
The half-life of plutonium atoms was very important to Seaborg and his assistants back in 1940. In fact, all of his other radioactive discoveries were based on the finding of Pu-238. For example, Pu-241 decays with a half-life of about thirteen years, emitting negatively charged beta particles, or electrons. It then converts to Am-241, an isotope of americium, which emits alpha particles for 470 years before turning into Am-242, which converts to Cm-242, an isotope of curium, in only sixteen hours. The Cm-242 emits alpha particles for about 162 days before ending the decay of plutonium-241.
Chemical Equation for Producing Plutonium:
92U-238 + n ----> 92U-239 --(beta decay)--> 93Np-239 --(beta decay)--> 94Pu-239
f:\12000 essays\sciences (985)\Chemistry\Pyrite.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Disclaimer:
Any material contained here after is not
to be taken seriously on the grounds that
it was written when all of the mortal and/or normal
world should be and probably is asleep and the
author may and probably does have a distorted
interpretation and/or perception of reality and
of the facts contained, so the material following
this disclaimer does not necessarily represent the
author's views of reality, pyrite, family values, and
whatever other information he doesn't remember
he put in this report at the time of writing but still
did whether or not he believes it to be true and/or
interesting, which it probably isn't, whether you like it or not...
Pyrite, also known as Fool's Gold, is the most
common of the sulfide minerals. Pyrite is called
Fool's Gold because of its pale brass-yellow color
and glistening metallic luster, but it may be told from
gold by its cubic, dodecahedral, and octahedral
crystals and fine-grained masses. Some interesting facts
about pyrite are that it has a greenish-black streak, a
rating of 6.0 to 6.5 on the Mohs scale, and a specific
gravity of 5.00 to 5.02; it creates a weak electric
current when heated, and in some of its various forms
it has been used in early firearms and fire-starting
kits. Its biggest commercial use is in making sulfuric
acid, but it is also important in nature for forming ore
and mineral deposits. Any questions?
f:\12000 essays\sciences (985)\Chemistry\Quarks.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Quarks
Quarks: any of a group of subatomic particles believed to be among the basic components of matter.
Quarks are believed to be the fundamental constituents of matter, and have no apparent structure. They are the particles that make up protons and neutrons, which make up the nucleus of atoms. Also, particles that interact by means of the strong force, the force that holds parts of the nucleus together, are explained in terms of quarks, as are the other baryons (1985 Quarks).
Quarks have mass and exhibit spin, the type of intrinsic angular momentum corresponding to rotation around an axis, equal to half the basic quantum-mechanical unit of angular momentum, and they obey Pauli's exclusion principle. This principle states that no two particles having half-integral spin can exist in the same quantum state (1985 Quarks).
Quarks always occur in combination with other quarks; they never occur alone. Physicists have attempted to knock a single quark free from a group using a particle accelerator, but have failed. Mesons contain a quark and an antiquark, while baryons contain three quarks; the quarks are distinguished by flavours such as up, down, and strange. Each has a charge that is a fraction of that of an electron. Up and down quarks make up protons and neutrons, and can be observed in ordinary matter. Strange quarks can be observed in omega-minus and other short-lived subatomic particles, which play no part in ordinary matter (1985 Quarks).
The interpretation of quarks as physical entities poses two problems. First, sometimes two or three identical quarks have to be in the same quantum state, which, because they have half-integral spin, violates Pauli's exclusion principle. Second, quarks appear to be unable to be separated from the particles they make up. Although the force holding the quarks together is strong, it is improbable that it could withstand bombardment from high-energy particles in a particle accelerator (1985 Quarks).
Quantum chromodynamics (QCD) ascribes the colours red, green, and blue to quarks and minus-red, minus-green, and minus-blue to antiquarks. Combinations of quarks must contain equal mixtures of colours so that they cancel each other out. Colour involves the exchange of massless particles, gluons. Gluons transfer the forces which bind quarks together. Quarks
change colour as they emit and absorb gluons. The exchange of gluons is what maintains the right quark colour distribution. The force carried by gluons weakens when the quarks are close together; at a distance of about 10^-13 cm, about the diameter of a proton, quarks behave as if they were free. This is called asymptotic freedom (1985 Quarks).
When one draws the quarks apart the force gets stronger; this is in direct contrast with the electromagnetic force, which gets weaker with the square of the distance between the two bodies. Gluons can create other gluons when they move between quarks. If a quark moves away from a group of others because it has been hit by a speeding particle, gluons draw energy from the quark's motion in order to create more gluons. The larger the number of gluons exchanged, the stronger the binding force. Supplying additional energy to quarks results in conversion of the energy to new quarks and antiquarks, with which the first quark combines (1985 Quarks).
After the discovery of "bottom" and "charm" it was believed that all quarks occur in pairs. This led to the effort to find the "top" quark. In 1984 the laboratory of the European Council for Nuclear Research (CERN) in Geneva obtained experimental evidence of "top's" existence. The discovery of "top" completes the theory of nature's basic components, quarks (1985 Quarks).
Bibliography
(1985) Quarks, Encyclopaedia Britannica, Encyclopaedia Britannica Inc., USA.
f:\12000 essays\sciences (985)\Chemistry\rates of reaction.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BACKGROUND INFORMATION
What affects the rate of reaction?
1) The surface area of the magnesium.
2) The temperature of the reaction.
3) Concentration of the hydrochloric acid.
4) Presence of a catalyst.
In the experiment we use hydrochloric acid which reacts with the
magnesium to form magnesium chloride. The hydrogen ions give
hydrochloric acid its acidic properties, so that all solutions of
hydrogen chloride and water have a sour taste; corrode active metals,
forming metal chlorides and hydrogen; turn litmus red; neutralise
alkalis; and react with salts of weak acids, forming chlorides and the
weak acids.
Magnesium, symbol Mg, silvery white metallic element that is
relatively unreactive. In group 2 (or IIa) of the periodic table,
magnesium is one of the alkaline earth metals. The atomic number of
magnesium is 12.
Magnesium(s) + Hydrochloric acid(aq) = Magnesium chloride(aq) + Hydrogen(g)
Mg + 2HCl = MgCl2 + H2
In the reaction, when the magnesium is dropped into the acid, it
fizzes and then disappears, giving off hydrogen as it fizzes, and it
leaves behind a solution of magnesium chloride.
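As a worked example of the equation above, the volume of hydrogen expected from a small piece of magnesium follows from the 1:1 ratio of Mg to H2. The 0.05 g ribbon mass and the 24 L/mol molar gas volume at room temperature used in the Python sketch below are assumptions chosen only for illustration.

    # Hydrogen produced by Mg + 2HCl -> MgCl2 + H2 (one mole of H2 per mole of Mg).
    M_MG = 24.305         # g/mol, molar mass of magnesium
    MOLAR_VOLUME = 24.0   # L/mol at about room temperature (assumed)

    mass_mg = 0.05                                    # g of magnesium ribbon (assumed)
    moles_h2 = mass_mg / M_MG                         # 1:1 ratio of Mg to H2
    volume_h2_cm3 = moles_h2 * MOLAR_VOLUME * 1000.0  # convert litres to cm3
    print(f"about {volume_h2_cm3:.0f} cm3 of hydrogen")   # roughly 49 cm3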
The activation energy of a particle is increased with heat. The
particles which have to have the activation energy are those which are
moving; in the case of magnesium and hydrochloric acid, it is the
hydrochloric acid particles which have to have the activation energy,
because they are the ones that are moving and bombarding the magnesium
particles to produce magnesium chloride.
The rate at which all reactions happen are different. An example
of a fast reaction is an explosion, and an example of a slow reaction is
rusting.
In any reaction,
reactants ----> products.
We can measure reactions in two ways:
1) Continuous:- Start the experiment and watch it happen; you can use a
computer "logging" system to monitor it, e.g. watching a colour fade or
increase.
2) Discontinuous:- Do the experiments and take readings/ samples from
the experiment at different times, then analyse the readings/samples to
see how many reactants and products are used up/ produced.
Reaction rate = (amount of reactant used up) / (time taken)
If the amount used up is the same each time then the only thing
that changes is the time taken.
so, reaction rate ∝ 1 / (time taken)
and rate = K / (time taken)
Where K is the constant for the reaction.
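A minimal Python illustration of the idea that the rate is proportional to 1 / (time taken), using invented reaction times for the same fixed amount of magnesium; the numbers below are not measured data.

    # Comparing reaction rates when the same amount of reactant is used up each time.
    amount_used = 0.05   # g of magnesium dissolved in every run (assumed)
    times = {"20 C": 180.0, "30 C": 95.0, "40 C": 47.0}   # seconds (invented values)

    for temperature, t in times.items():
        rate = amount_used / t   # g per second
        print(f"{temperature}: rate = {rate:.5f} g/s, 1/t = {1 / t:.4f} per second")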
For particles to react:-
a) They have to collide with each other.
b) They need a certain amount of energy to break down the bonds of the
particles and form new ones. This energy is called the "Activation
Energy" or Ea.
When we increase the temperature we give the particles more
energy which:
1) Makes them move faster, which in turn makes them collide with each
other more often.
2) Increases the average amount of energy the particles have, so more
particles have the "activation energy".
Both of these changes make the rate of reaction go up, so we see a
decrease in the amount of time taken for the reaction and an increase
in 1 / (time taken), which reflects the rate of reaction.
Because temperature affects both the speed at which the particles move
and the number of particles with the activation energy, it has a greater
effect on the rate of reaction than other changes.
A change in concentration is a change in the number of particles
in a given volume.
If we increase the concentration:-
a) The particles are more crowded so they collide more often.
b) Although the average amount of energy possessed by a particle does
not change, there are more particles with each amount of energy, and so
more particles with the activation energy.
a) is a major effect which affects the rate, but b) is a minor effect
which affects the rate only very slightly.
In this experiment we are not concerned with whether the reaction
is exothermic or endothermic because we are concerned with the activation
energy needed to start and continue the reaction.
PREDICTIONS
I predict that as we increase the temperature the rate of
reaction will increase.
If we increase the temperature by 10 °C the rate of reaction will
double.
I predict that if we increase the concentration of the acid the
reaction rate will increase.
If the concentration of the acid doubles, the rate of the
reaction will also double.
LINKING PREDICTION TO THEORY
Reaction Rate and Temperature.
The collision theory describes how the rate of reaction increases as the
temperature increases. This theory states that as the temperature rises,
more energy is given to the particles so their speed increases, which
increases the number of collisions per unit of time. This increase in
collisions increases the rate of reaction.
The collision theory explains how the rate of reaction increases,
but it does not explain by how much or by how fast the rate increases.
The kinetic energy of a particle is proportional to its absolute (kelvin)
temperature:
(1/2)mv^2 ∝ T
But the mass of the particles remains constant, so we can eliminate that
part of the equation:
v^2 ∝ T
Therefore we can fit this into a formula:
v1^2 / v2^2 = T1 / T2
If we substitute the temperatures into the formula we can work out the
ratio of the average speeds:
v1^2 / v2^2 = 310 / 300
v1 = sqrt(310/300) v2 = sqrt(1.033) v2 = 1.016 v2
However, this is only 1.016 times greater than the speed at 300 K; in
other words the average speed has only increased by 1.6%.
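The speed-ratio working above can be checked with a short Python sketch (an illustration only, using the same 300 K and 310 K values):

import math

T1, T2 = 310.0, 300.0       # absolute temperatures in kelvin
ratio = math.sqrt(T1 / T2)  # v1/v2 = sqrt(T1/T2), since (1/2)mv^2 is proportional to T
print(f"v1/v2 = {ratio:.3f}, i.e. about a {100 * (ratio - 1):.1f}% increase in speed")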
The frequency of the collisions depends on the speed of the particles,
so this simple collision theory only accounts for a 1.6% increase in the
rate; but in practice the reaction rate roughly doubles for a 10 K rise,
so this simple theory cannot account for a 100% increase in the reaction
rate.
During a chemical reaction the particles have to collide with enough
energy first to break the existing bonds and then to form the new bonds
with rearranged electrons, so it is "safe" to assume that some of the
particles do not have enough energy to react when they collide.
The minimum amount of energy that is needed to break the bonds is called
the activation energy (Ea). If the activation energy is high, only a small
number of particles will have enough energy to react, so the reaction rate
will be very low; however, if the activation energy is very low, the number
of particles with that amount of energy will be high, so the reaction rate
will be higher. An example of a low Ea is found in explosives, which need
only a small input of energy to start their exceedingly exothermic
reactions.
In gases the energy of the particles is mainly kinetic, and for particles
of a given mass this energy is determined by their velocities.
This graph below shows how the energies of particles are
distributed.
This graph is basically a histogram showing the number of particles with
each amount of energy. The area underneath the curve is proportional to
the total number of particles, and the number of particles with energy
greater than Ea is proportional to the area underneath the curve beyond
Ea. The fraction of particles with energy greater than Ea is given by the
ratio:
(crosshatched area under the curve) / (total area under the curve)
Using probability theory and the kinetic theory of gases, equations were
derived for the distribution of kinetic energy amongst particles. From
these equations the fraction of particles with an energy greater than Ea
J mol^-1 is given by the expression e^(-Ea/RT), where R = the gas constant
(8.3 J K^-1 mol^-1) and T = absolute temperature.
This suggests that at a given temperature, T,
reaction rate ∝ e^(-Ea/RT)
If we use k, the rate constant, as a measure of the reaction rate we can
put this into the equation also:
k ∝ e^(-Ea/RT)
so k = A e^(-Ea/RT)
The last expression is called the Arrhenius equation because it was
developed by Svante Arrhenius in 1889. In this equation A is determined
by the total number of collisions per unit time and the orientation of
the molecules when they collide, whilst e^(-Ea/RT) is determined by the
fraction of molecules with sufficient energy to react.
Putting the probability theory and the kinetic theory together now gives
us a statement which accounts for the 100% increase in the rate of
reaction for a 10 K rise.
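As a hedged illustration of this point, the Python sketch below evaluates the Arrhenius factor e^(-Ea/RT) at 300 K and 310 K; the activation energy of 50,000 J/mol is an assumed "typical" value, not one measured in this experiment:

import math

R = 8.3          # gas constant, J K^-1 mol^-1 (value quoted above)
Ea = 50_000.0    # assumed activation energy in J/mol (illustrative only)
T1, T2 = 300.0, 310.0

k_ratio = math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))
print(f"k(310 K) / k(300 K) = {k_ratio:.2f}")   # roughly 1.9, so the rate roughly doubles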
Reaction Rate and Concentration.
The reaction rate increases when the concentration of the acid
increases because:
If you increase the concentration of the acid you are introducing more
particles into the reaction; this produces a faster reaction because
there will be more collisions between the particles, which is what
increases the reaction rate.
METHOD.
To get the amount of magnesium and the amount of hydrochloric
acid to use in the reaction, we have to use an excess of acid so that all
of the magnesium disappears.
Mg + 2HCl = MgCl2 + H2
1 mole   2 moles   1 mole   1 mole
So, we can say that one mole of magnesium reacts with 2 moles of
hydrochloric acid.
If we use 1 mole of magnesium and 2 moles of hydrochloric acid we
will get a huge amount of gas, too much for us to measure. We would get
24,000 cm3 of hydrogen produced where we only want 100 cm3 of hydrogen
produced. So, to work out the number of moles of magnesium we can use, we
use the formula:
Moles = volume of gas wanted / volume of 1 mole = 100 / 24,000 = 0.004 moles (to 1 s.f.)
To get the maximum mass of magnesium we can use:
Mass = moles x RAM = 0.004 x 24 = 0.096 g
So, to the nearest 0.01 of a gram, the maximum amount of magnesium we can
use is about 0.1 g.
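The same working can be set out as a small Python sketch (the numbers are taken from the calculation above):

target_volume = 100.0        # cm3 of hydrogen we want to collect
molar_gas_volume = 24_000.0  # cm3 occupied by 1 mole of gas at room temperature
ram_mg = 24.0                # relative atomic mass of magnesium

moles_h2 = target_volume / molar_gas_volume  # also the moles of Mg, since Mg : H2 is 1 : 1
mass_mg = moles_h2 * ram_mg
print(f"moles of H2 (and Mg) = {moles_h2:.4f}")   # about 0.004
print(f"mass of Mg needed    = {mass_mg:.3f} g")  # about 0.1 g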
Because the reaction reacts one mole of magnesium to two moles of
hydrochloric acid we have to make sure that even with the lowest
concentration of acid we still have an excess of acid.
The acid that we were using was 2 moles per dm3, which means that
there are 0.2 moles per 100 cm3 of acid.
For the reaction to work we need twice as many moles of acid as of
magnesium. The maximum number of moles of magnesium was 0.004 moles, so
the amount of acid needed was double that, which equals 0.008 moles. As
you can see from the table below, we have the acid in excess throughout
the experiment.
Amount of HCl (cm3) Amount of H2O (cm3) Moles of acid.
100 0 0.2
75 25 0.15
50 50 0.1
25 75 0.05
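As a quick check (a sketch only, using the 2 mol/dm3 stock acid described above), the following Python lines confirm that every dilution in the table still contains more than the 0.008 moles of HCl required:

stock_concentration = 2.0   # mol/dm3 hydrochloric acid
moles_needed = 0.008        # 2 moles of HCl for every 0.004 moles of Mg

for acid_cm3, water_cm3 in [(100, 0), (75, 25), (50, 50), (25, 75)]:
    moles_acid = stock_concentration * acid_cm3 / 1000.0  # convert cm3 to dm3
    print(f"{acid_cm3:3d} cm3 acid + {water_cm3:2d} cm3 water: "
          f"{moles_acid:.2f} mol HCl (excess: {moles_acid > moles_needed})")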
The reason why we used 0.1 g of magnesium was that it was easy to
measure; there was not too much or too little, so we had no problem with
too much gas.
Apparatus
This is the apparatus we used to measure the amount of H2 that was
produced in the reactions. We measured the amount of gas that was given
off every two seconds to get a good set of results. We used this
apparatus with the reaction, changing the concentration and then the
temperature. To measure the amount of gas given off accurately, we used
a pen and marked the gas syringe at the time intervals.
This is the apparatus we used to measure how long it took for the
magnesium to totally disappear. We used this apparatus in both of the
experiments, changing the temperature and the concentration of the acid
to water.
Temperature.
When we did the experiment changing temperature we used both sets of
apparatus. To get a fair test we had to keep the amount of magnesium and
the concentration of the acid the same. In the experiment we used 0.1 g
of magnesium and the acid concentration was 50 cm3 of acid to 50 cm3 of
water, because if we used 100 cm3 of acid the reaction would be too
fast. We still had an excess of acid, so one mole of magnesium could
react with two moles of HCl.
Concentration.
When we did the reaction changing the concentration, we reduced the
concentration until we had just enough acid for 1 mole of magnesium to
react with two moles of HCl. To get a fair test we had to keep the
amount of magnesium and the temperature the same. We used 0.1 g of
magnesium.
RESULTS
Temperature
From this graph you can see that if we do increase the
temperature the rate of reaction also increases, but it does not show
that if you increase the temperature the rate of reaction doubles.
This graph shows that there is an increase in the rate of reaction as
the temperature increases. It shows a curve, mainly because our results
were inaccurate in a number of ways: the concentration changes during
the experiment, because at high temperatures the acid around the
magnesium becomes diluted. If the experiment were accurate the plot
would still be a curve, but if you plotted 1/time instead the result
would be a straight line showing a clear relationship. Even though I
changed it to 1/time it still does not show a clear relationship,
because of the factors mentioned in the conclusion.
Concentration
This graph shows an increase in the amount of gas given off and the
speed at which it is given off. This graph also does not show the rate
increase; it just shows how the amount of gas increases with a change in
concentration.
This graph shows that if you increase the molar concentration of the
acid, the time the Mg takes to disappear becomes a lot shorter. This
does not show the rate at which this happens; the graph of rate vs.
concentration would show a straight line.
This shows a straight line, thus proving that there is a relationship
between the time it takes the magnesium to disappear and the
concentration of the acid. If we take the gradient of it, it would show
the rate at which the reaction was happening.
Because the rate against concentration graph is a straight line, we can
say that the reaction is first order with respect to the acid.
This graph shows a nearly straight line, which shows that there is a
relationship between the temperature and the rate of reaction, as the
gradient reflects the rate of reaction. If you look at this graph it
comes out to show that if you increase the temperature by 10 °C the
gradient of the line is doubled, i.e. the rate roughly doubles for each
10 °C rise.
This graph shows that if you increase the molar concentration of the
acid, you will increase the rate of reaction. From the gradient you can
see that if you double the molar concentration of the acid the rate of
reaction will double, because the gradient is a way of showing the rate
of reaction.
If you compare the quantitative observations to see which reaction is
faster, you can see that after 10 seconds:
Temperature (°C):                        2     10    20    30    40    50
Amount of H2 produced after 10 s (cm3):  7.5   16    25    54    57    83
You can see that the amount of H2 given off increases from one
temperature to the next, but between 30 and 40 °C there is not much of a
change. This could be because of human error; there should be a big
change in the amount given off between those temperatures.
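To make this comparison easier to see, the small Python sketch below turns the table above into rough rates (cm3 of H2 per second over the first 10 seconds) and shows the factor by which the rate changes between successive temperatures; the numbers are taken directly from the table:

temps = [2, 10, 20, 30, 40, 50]       # degrees C
volumes = [7.5, 16, 25, 54, 57, 83]   # cm3 of H2 collected after 10 s

rates = [v / 10.0 for v in volumes]   # average rate in cm3 per second
for i in range(1, len(rates)):
    factor = rates[i] / rates[i - 1]
    print(f"{temps[i-1]:2d} -> {temps[i]:2d} C: "
          f"{rates[i-1]:.2f} -> {rates[i]:.2f} cm3/s (x{factor:.2f})")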
Molar conc. (mol/dm3):                   0.5   1     1.5   2
Amount of H2 produced after 10 s (cm3):  6     25    60    90
This table shows a nice spread of results throughout the range of
concentration. It clearly shows that the reaction is at different stages
so is therefore producing different amounts of H2. This shows also that
the reaction is affected by the concentration of the acid.
CONCLUSIONS
I conclude that if you increase the temperature by 10 °C the rate of
reaction doubles; this follows from the kinetic theory and the
probability theory. Even though our results did not accurately prove
this, the theory that backs it up is sufficient. The kinetic theory
explains that if you provide the particles with a greater amount of
kinetic energy they will collide more often, so there will be a greater
number of collisions per unit time. The probability theory explains that
only a certain number of particles within the reaction have the Ea
needed to react, so if you increase the amount of kinetic energy there
will be more particles with that amount of Ea, which will also increase
the reaction rate.
If you double the concentration of the acid the reaction rate also
doubles. This is because there are more particles in the solution, which
increases the likelihood that they will hit the magnesium, so the
reaction rate increases. The graph gives us a good way of showing that
if you double the concentration the rate also doubles. If you increase
the number of particles in the solution it is more likely that they will
collide more often.
More H2 should be given off as we move up the range of temperatures,
because the reaction is going quicker and so more H2 is given off in the
same amount of time. More H2 is also given off as we move up the range
of concentrations; this shows that the reaction is at different stages
and so is producing different amounts of H2. Our results were not fully
accurate, and this could be for a number of reasons.
There are many reasons why our results did not prove this point
accurately.
· At high temperatures the acid around the magnesium starts to dilute
quickly, so if you do not swirl the reaction the magnesium is reacting
with acid at a lower concentration, which alters the results.
· Heating the acid might allow HCl to be given off, also making the acid
more dilute, which would affect the results.
· When the reaction takes place, bubbles of H2 are given off which might
stay around the magnesium; this reduces the surface area of the
magnesium so the acid cannot react properly with it, which affects the
results.
To get more accurate results, we could have heated the acid to a lower
temperature to stop a large amount of HCl being given off. The other
main thing that could have helped us to get more accurate results is
that we could have swirled the mixture throughout the reaction to stop
the dilution of the acid and the build-up of H2 bubbles around the
magnesium.
If I had time I could have done the reactions a few more times to
get a better set of results. This would have helped my graphs to show
better readings.
f:\12000 essays\sciences (985)\Chemistry\Robert Boyle.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Robert Boyle is considered both the founder of modern chemistry and the greatest English scientist to live during the first thirty years of the existence of the Royal Society. He was not only a chemist and a physicist as we know him to be, but also an avid theologian, a philanthropist, an essayist, and a beginner in medicine. Born in Lismore, Ireland to Richard Boyle, first earl of Cork, and Katherine Fenton, his second wife, Boyle was the youngest son in a family of fourteen. However he was not shortchanged of anything. After private tutoring at home for eight years, Robert Boyle was sent to Eton College where he studied for four years. At the age of twelve, Boyle traveled to the Continent, as it was referred to at the time. There he found a private tutor by the name of Marcombes in Geneva. While traveling between Italy, France, and England, Boyle was being tutored in the polite arts, philosophy, theology, mathematics, and science.
As the years went by, Boyle became more and more interested in medicine. His curiosity in this field led him to chemistry. At first Boyle was mainly interested in the facet of chemistry that dealt with the preparation of drugs, but soon he became genuinely interested in the subject and started to study it in great detail. His studies led him to Oxford where he joined such scientists as John Wilkins and John Wallis, and together in 1660, they founded the Royal Society of London for the Advancement of Science.
From this point onwards, Boyle seriously undertook the reformation of science. For centuries scientists had been explaining the unknown with the simple explanation that God made it that way. Though Boyle did not argue with this, he did believe that there was a scientific explanation for God's doings. Boyle's point of view can be seen by his dealings with the elements. At this time it was thought that an element was not only the simplest body to which something could be broken down, but also a necessary component of all bodies, meaning that if oil were an element, it would not be able to be broken down, and it would be found in everything. Boyle did not accept this theory, whether it referred to the earth, air, fire, and water of the Aristotelians, the salt, sulfur, and mercury of the Paracelsans, or the phlegm, oil, spirit, acid, and alkali of later chemists. He did not believe that these elements were truly fundamental in their nature. Boyle thought that the only things common in all bodies were corpuscles, atom-like structures that were created by God and that now occupy all void space. He began to perform experiments, concentrating on the color changes that took place in reactions. He started to devise a system of classification based on the properties of substances. By showing that acids turned the blue syrup of violets red, Boyle claimed that all acids react in the same manner with violet syrup and that those that did not were not acids. Similarly, he showed that all alkalies turned the syrup of violets green. Observing that the blue opalescence of the yellow solution of lignum nephriticum was destroyed when the solution was acidified and could be restored by the addition of alkali, Boyle used this experiment to test the strength of acids and alkalies. His system therefore consisted of three categories: acids, alkalies, and those substances that are neither acids nor alkalies. However he purposefully avoided any investigation of corpuscles. Boyle continued his work on acids and alkalies. He devised tests for the identification of copper by the blue of its solutions, for silver by its ability to form silver chloride, with its blackening over time, and for sulfur and many other mineral acids by their distinctive reactions.
Therefore, knowing that it was not actually Boyle who discovered his law, but Towneley and Power who did in 1662 and then Hooke who confirmed it soon thereafter, it can be said that this was Boyle's greatest achievement. His achievement being the conversion of scientific thought from one in which the spirits and the heavens were kept in mind at all times, to one based on experimentation and the use of deduction, not assumption. It cannot be stressed strongly enough what this did for science in general. Boyle's work sparked the beginning of a new era, one in which careful experimentation was the justification for a hypothesis, and thus he is accordingly bestowed with the honor of being the founder of modern chemistry.
Boyle also did extensive work with the air pump, proving such things as the impossibility for sound to be present in a vacuum, the necessity of air for fire and life, and the permanent elasticity of air. Also using the air pump, Boyle discovered that "fixed air" was present in all vegetables. Through other experimental methods, mainly the use of steel filings and strong mineral acid, he also found hydrogen. Yet his greatest achievement, apart from his influence on scientific thought, was his writings. Boyle wrote about the connections of God with the physical universe. He wrote numerous books on religious subjects, not all of which were related to science, but the most influential being so. At his death in December 1691, Boyle left a sum of money for the foundation of the Boyle lectures, a group of sermons that were intended for the refutation of atheism. Robert Boyle opened the way for future scientists, changing their methods of experimentation, thought, and outlook on chemistry as a whole, forever.
f:\12000 essays\sciences (985)\Chemistry\Role of Catalyists in Industry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
OXFORD AND CAMBRIDGE SCHOOLS EXAMINATION BOARD.
General Certificate Examination - Advanced Level
Chemistry (Salters') - Paper 3 mock.
ROBERT TAYLOR U6JW.
THE ROLE OF CATALYSTS IN CHEMICAL REACTIONS, THEIR IMPORTANCE IN INDUSTRY,
PROBLEMS AND NEW DEVELOPMENTS.
A catalyst is a substance that alters the rate of a reaction while
remaining unchanged at the end of the reaction. The process is called
catalysis. In this report I am going to explain the role of catalysts
in chemical reactions and their importance in industry.
I will also outline the problems associated with the use of some
catalysts and discuss, using appropriate examples, new developments in
this area which will help reduce damage to the environment.
The process of catalysis is essential to the modern day manufacturing
industry. Ninety per cent, over a trillion dollars' worth, of
manufactured items are produced with the help of catalysts every year.
It is therefore logical that scientists are constantly searching for
new improved catalysts which will improve efficiency or produce a
greater yield.
An acidic catalyst works because of its acidic nature. Acid catalysts are strong
acids and readily give up hydrogen ions, or protons: H+. Protons can be
released from hydrated ions, for example H3O+, but more commonly
they are released from ionisable hydroxyl groups (R-OH) where the O-H
bond is broken to produce R-O- and H+. When the reactant receives
protons from an acid it undergoes a conformational change (a change in
shape and configuration) and becomes a reactive intermediate. The
intermediate can then either become an isomer by returning a proton to
the catalyst, or it may undergo a further reaction and form a
completely new molecule.
Up until the mid - 1960's silica-alumina gels were used to catalyse the
cracking of hydrocarbons. This form of cracking is where the large
molecules in oil are converted into small, highly volatile molecules.
However, because the size of the pores of silica-alumina gels was so
variable (ranging from 0.1 nm to 50 nm), and their shape was equally
variable, they were hardly ideal catalysts. Due to the large size of
their cavities, large carbonaceous products were able to form in the
cavities, thus lowering the reactivity of the catalyst. Catalysis
with silica-alumina gels was also difficult to control precisely
because of their indefinite structure, and therefore uneven distribution
of protons.
By the mid-1960's it was obvious that silica-alumina gels were inefficient
as catalysts and they were replaced by zeolites. Zeolites are highly porous
crystals with minute channels ranging from 0.3nm to 0.8nm in diameter. Due to
their definite crystalline structure and the fact that their pores are too
small to contain carbonaceous build-up, zeolites do not share the problems of
silica-alumina gels.
Zeolites are able to exhibit shape-selective catalysis, i.e. their active sites
are specific to only a few product molecules (the ones that will fit into the
tiny pores).
An example of this is when the zeolite ZSM-5 is used to catalyse the synthesis
of 1,4-dimethylbenzene. When molecules of methylbenzene combine with methanol in
the ZSM-5 catalyst, only rod-shaped molecules of 1,4-dimethylbenzene are released
(these are the commercially desirable ones). The boomerang-shaped molecules are
unable to pass through the catalyst's pores and are therefore not released.
Until relatively recently, one of the large drawbacks with catalysts was the highly
toxic by-products which they became after use. This was because the catalysts were
often corrosive acids with a high toxicity level in liquid form. Examples include
hydrogen fluoride. Once these catalysts had been used this presented great problems
in terms of disposal, as these acids corrode disposal containers and are highly
dangerous to transport and handle.
These problems have been solved by a new type of catalyst. Solid acid catalysts, such
as silica-alumina gels and zeolites, hold their acidity internally and are therefore
much safer to work with and to dispose of.
More recently, pressure from environmentalists has led to a search for more
environmentally friendly forms of catalysis. There is now a need to replace both the
Friedel - Crafts process which involves the unwanted production of hydrated
aluminium chloride and the Oxidation process which forms by-products containing nitric
acid, chromate (VI) and manganate (VII). The leading contender for an environmentally
acceptable alternative to the Friedel - Crafts and Oxidation processes is the process
of using Supported reagents. These are materials where a reagent such as ZnCl2 or FeCl3
has been adsorbed onto an insoluble inorganic or organic solid (e.g. silica, alumina,
clay or charcoal). When a reagent has been well dispersed on the surface of the support
material, the effective surface area of the reagent can be increased by up to one
hundred times. This improves reagent activity and selectivity, along with the fact
that supported reagents are easier to handle, as they are invariably low-toxicity,
non-corrosive, free-flowing powders. Also, the reagents can be filtered from the
mixture after use and therefore be subsequently re-used. Supported reagents have good
thermal and mechanical stabilities, and their reactions are more often than not
carried out in non-polar solvents. This is because the reaction takes place on the
surface of the solid, so the solvent only acts as a form of heat transfer and a working fluid.
In summary I see Supported reagents as the best possible solution to the problems associated
with catalysis due to their easy use and their ability to be recovered and re-used.
They have a high level of activity and improved selectivity in reactions, which leads
to the best possible level of performance in commercial uses. This has already been
proven by the use of supported reagents in Friedel - Crafts reactions. These reactions
originally had two drawbacks: firstly, the aqueous effluent containing hydrolysed
aluminium chloride which is produced, and secondly the by-products, such as polymeric
tars and di- and polysubstituted compounds, which unless they can be successfully
removed make the product impure.
By using a supported reagent catalyst, in most cases the desired level of activity can be
achieved but the catalyst can be removed easily from the reaction mixture and re-used. I
personally therefore feel that the future of environmentally friendly catalysis lies with
supported reagent catalysts.
WORD COUNT = 998
NB: Although this essay is headed as being a mock exam it was never assessed by an examining
board, only by my chemistry teacher, so there is no chance of an examining board also having a
copy. Rob Taylor 25/11/1996. ~:o)
f:\12000 essays\sciences (985)\Chemistry\Safety Inspector.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Safety Inspector
Mr. Redos, I am an inspector for OSHA. I have noticed the following safety items missing from room F 203, the chemistry room: sprinklers, a drain, and a glass wall. You must have these items for the safety of the students and the faculty. I have also observed the following safety items present in the room in question: a fire blanket, a fire extinguisher, a fire shower, a first-aid kit, an apron, eye goggles, and an eye shower.
I am very concerned about the following situations. If a fire spread throughout the classroom, there are no sprinklers to extinguish it, only a fire extinguisher. Another situation: if someone were to use the fire shower, there would be no drain for the water to go to, leaving a very slippery floor that is unsafe. When the teacher is conducting experiments at the front of the room, there is no glass wall to protect the students in the case of an explosion.
One improvement that must be made is installing sprinklers. Another must is the glass wall; the last thing a school would want to deal with is an injured student. Not a necessary improvement, but a suggested one, is to put in a drain for the fire shower. I like your rules requiring everyone to wear goggles and the availability of a fire blanket. I am also very pleased with the number of exits from the room in the case of fire.
Overall you have the basic safety measures in place, but you still need to add a couple more precautions for when an emergency might take place.
f:\12000 essays\sciences (985)\Chemistry\Science Fair Project Melting Ice.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Science Fair Project
Melting Ice
Materials:
- 3 aluminum baking pans
- water to fill each pan
- 1/2 cup of salt
- 1/2 cup of rubbing alcohol
- 1/2 cup of sand
- watch
- large freezer
- thermometer
Procedure:
First, fill three aluminum baking pans with water and freeze them in a
large freezer for three hours. Use a thermometer to get the temperature
of the freezer, and make sure that you put all the pans in for the same
amount of time. Label the aluminum baking pans A, B, and C. Check on
them every half an hour to see if they are frozen yet. When they have
turned completely into a solid, take them out of the freezer and pour
1/2 cup of each substance into a separate baking pan. Make sure that
when you pour the substance in, you pour it evenly. Then label each pan
with the name of its substance. Time which pan melts first, second, and
last, and record how long it took each one to melt.
Research:
The temperature at which water forms a solid is 32 degrees Fahrenheit, or 0 degrees Celsius; this temperature is called the freezing point. The freezing point is decreased by about 0.55 degrees Celsius (1 degree Fahrenheit) for each increase of 80 atmospheres of pressure.
f:\12000 essays\sciences (985)\Chemistry\Science Fossil.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Fossilized Story of Mr. Allosaurus
As the mud starts to surround me I am no longer able to breathe. I catch my last gasp of air and feel a few last raindrops fall on my head. I say goodbye to my Earth and my land. My mouth and lungs fill with mud that travels throughout my system. I am blinded by the wet black soil that has poured down so hard that it has become deadly. I am frightened. Slowly, inch by inch, I sink farther and deeper into the mud. My life will come to an end soon, and I, the last remaining creature of my kind, will become extinct. I struggle and fight to survive, but the downcoming mud has too great a force. I feel the mud take the place of my heart, and I die. I feel dazed and confused. I always thought I would die of starvation, not from actually trying to catch my prey.
For thousands of years I have lived underground. I have become a petrified fossil. All the flesh and skin has either rotted away or been eaten by bugs and other things underground. All that remains of me are my bones. I became petrified because, when I was buried under the ground all those years, the groundwater dissolved my bones. They were then replaced, a molecule at a time, by the minerals in the water. This long process involved all these tingly sensations. I felt odd for the longest time, but now I'm a new me!
About 900 years ago I received company from someone up above. His name is Mr. Wolly Mammoth. Wolly died because of a volcanic eruption, and was trapped in the burning lava. He's my best buddy and I was so glad he decided to come join me. We always talk about what we think goes on above us. Sometimes the Earth rumbles in a strange vibration. Wolly and I call these vibrations Earth shakes.
100,000 years later, my friend Wolly has left me. He was dug up and carried away by these "humans." I guess this is what these creatures are called. I've heard echoes in the ground from younger fossils that the "humans" killed them and buried them. One night me and Wolly were talking about these humans coming and digging us up one day and putting us in their museums. It was our dream, and today Wolly's dream came true. I heard that the humans were coming back tomorrow to dig some more. I hope my dream will also be fulfilled.
It is my lucky day! In the words of Pinocchio, "dreams really do come true." I heard that one from the fossil of the whale that ate Pinocchio. I was put in The Museum Of Science And Industry in Illinois. And guess what, Wolly was there, too. We were hung from the ceiling by wires, so we won't fall, for everyone to see. I just love being a fossil!
f:\12000 essays\sciences (985)\Chemistry\Sequence of Chemical Reactions.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Laboratory #8 The Sequence of Chemical Reactions
Drew Selfridge
Dave Allen, Lab partner
Instructor Yang
February 11, 1997
INTRODUCTION
This experiment was to recover the greatest amount of copper after it is subjected to a sequence of reactions. The copper is originally in solid form, but the reactions will turn it into free Cu+2 ions floating in solution. The ions will then be regrouped to form solid copper once again. During this process, however, some of the Cu+2 ions may be lost. The copper will be subjected to changes in pH and heat. These steps were responsible for the breakdown and reconstruction of the copper. The percent of copper retrieved will reflect the skill with which the reactions were administered.
EXPERIMENTAL
On an analytical balance, measure the mass of the copper while in the vial. Remove approximately 0.35 g into a 250 mL beaker. Check the balance and record the mass of the remaining mixture in the vial. In the laboratory hood, dissolve the copper with ~ 3 mL of nitric acid. Allow the beaker to remain under the hood until the fumes cease. The remaining solution should be blue. Bring the beaker back to the lab station and add ~ 10 mL of distilled water. Stir the mixture, all the while adding ~ 8 mL of 6 M NaOH to the beaker. Check with litmus paper to ensure that it is slightly basic. Fill the beaker up to the 100 mL mark with distilled water. Heat the solution and allow it to boil for 5 minutes. Prepare a squirt bottle with hot water. Filter the solution and rinse the beaker with the hot water. Rinse the filter cake with hot distilled water. Transfer the filter paper into a clean beaker. Add ~ 10 mL of 3 M sulfuric acid to the beaker in order to dissolve the filter cake. Remove and rinse the filter paper. Now add ~ 0.35 g of zinc powder to the solution and stir until the solution becomes clear. Dissolve the excess zinc with more sulfuric acid. Decant the liquid with a stirring rod, retaining only the copper. Rinse the copper with distilled water and steam dry. Weigh the mass.
DATA/RESULTS
initial mass of copper (g) 0.319
final mass of copper (g) 0.305
% recovery = (final mass/initial mass) x 100 95.6
OBSERVATIONS
-between steps 1 through 4 the solution is blue.
-between steps 5 through 8 the solution is dark brown.
-between steps 9 through 12 the solution is blue-green.
-between steps 13 through 16 the Zinc turns red as the blue color slowly leaves the solution.
CALCULATIONS
% Recovery = (final mass / initial mass) x 100
% Recovery = (0.305 / 0.319) x 100
% Recovery = 95.6%
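The same calculation can be written as a short Python sketch using the masses recorded above:

initial_mass = 0.319   # g of copper weighed out
final_mass = 0.305     # g of copper recovered
percent_recovery = (final_mass / initial_mass) * 100
print(f"% recovery = {percent_recovery:.1f}%")   # 95.6%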
CONCLUSION
(a) The overall yield of the reaction was 95.6%. There may have been copper lost in transfer from beaker to beaker or stuck to the stirring rod while the copper was in the ionic state. The solid copper may have been lost in the filter paper or in the decanting of the liquid. The majority of the copper lost was probably lost when the copper was transferred from beaker to beaker or during the decanting of the liquid. The filter paper and stirring rod probably account for a small fraction of the copper lost.
(b) The class average for the experiment was 96.11%. Based on this average, our results were precise to within 0.5%. The hypothesis was that 100% of the copper could be recovered; therefore our results were accurate to within 4.39%.
(c) The hypothesis was supported by the experimental results because two groups recovered 100%.
(d) Our results were less than the class average. This is explained by possible loss of copper when transferring between different stages of the experiment.
(e) Burning of the copper during the drying stage would be a systematic error that would result in a class average greater than 100% yield of copper.
f:\12000 essays\sciences (985)\Chemistry\Shakespeare.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chemical Engineer
The chemical engineer is an invaluable link between scientific principles and manufacturing realities. The work involves the use of chemical, physical, and engineering principles.
The scientist in a laboratory does basic research to develop new compounds and processes. When the scientist discovers a product that may be useful, the chemical engineer takes over. They adapt the product for big scale manufacturing. They do this by designing a plant to produce the item on large scale. Thus the engineer is the link between the laboratory and commercial production.
The chemical engineer's earnings depend on several factors. Their educational background dictates much of what the engineer will earn. Also, experience and the location of the employer will make a very big difference. The starting salary for a chemical engineer with a Bachelor's Degree can range from $30,000 to over $35,000 per year. An engineer with a Master's Degree can earn anywhere from $35,000 to over $40,000. A chemical engineer with a doctorate can earn $45,000 to well over $60,000.
"To be successful in chemical engineering, one must be curious and persevering" (Finney IV 13). The person must be flexible in order to adapt to each phase encountered. They must also be ambitious. Honesty is another very important trait. They must be cooperative since they are a member of a team.
In order to get a job as a chemical engineer, a person should have at least a Bachelor's Degree. The degree should be in chemical engineering. The degree is acquired by four years of study. Subjects studied include engineering, drawing, chemistry, mathematics, English and speech, computing, economics, and social studies. The actual specialization in chemical engineering is usually in the third year of study.
There are many advantages that go along with this job. The career offers challenges in both science and industry. Also, the work allows for other companies to expand and hire more people. Thus, this creates new jobs. There are also disadvantages. First, there is a great responsibility placed onto the engineer. Also, there is a great deal of pressure involved with this kind of work.
The future for the chemical engineer looks very promising. As new drugs and vaccines develop, the chemical engineer will be needed. This is a new and exciting field to work in. Many people are becoming more and more interested in it. This increase in interest calls for an increase in jobs.
Someone interested in becoming a chemical engineer should concentrate on the sciences in high school. They should be "good" at chemistry and physics. Also, they should enjoy these classes. Mathematics classes are also important. A knowledge of the computer is extremely important.
Many colleges offer engineering programs. More specifically, most offer chemical engineering programs. MIT offers an excellent chemical engineering program. It is known world-wide for its engineering department. Carnegie Mellon also has a great program. Montana University is another college with a great engineering program.
The occupation of a chemical engineer is a very exciting one. It requires a lot of responsibility and hard work. But, if you enjoy being part of a team and working hard, this is the right job for you.
f:\12000 essays\sciences (985)\Chemistry\silicon.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Silicon is the raw material most often used in integrated circuit (IC)
fabrication. It is the second most abundant substance on the earth. It is
extracted from rocks and common beach sand and put through an exhaustive
purification process. In this form, silicon is the purest industrial
substance that man produces, with impurities comprising less than one part in
a billion. That is the equivalent of one tennis ball in a string of golf
balls stretching from the earth to the moon.
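The analogy can be checked roughly with the Python sketch below; the golf-ball diameter (4.3 cm) and the Earth-Moon distance (384,000 km) are assumed round figures, not values from the text:

earth_moon_m = 384_000_000.0   # assumed Earth-Moon distance in metres
golf_ball_m = 0.043            # assumed golf-ball diameter in metres

balls_in_string = earth_moon_m / golf_ball_m
print(f"balls in the string: about {balls_in_string:.1e}")  # roughly 9 billion
# One ball out of roughly nine billion is indeed on the order of one part in a billion.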
Semiconductors are usually materials which have energy-band gaps smaller
than 2eV. An important property of semiconductors is the ability to change
their resistivity over several orders of magnitude by doping. Semiconductors
have electrical resistivities between 10^-5 and 10^7 ohm-cm.
Semiconductors can be crystalline or amorphous. Elemental semiconductors
are simple-element semiconductor materials such as silicon or germanium.
Silicon is the most common semiconductor material used today. It is
used for diodes, transistors, integrated circuits, memories, infrared
detection and lenses, light-emitting diodes (LED), photosensors, strain
gages, solar cells, charge transfer devices, radiation detectors and a
variety of other devices. Silicon belongs to group IV of the periodic
table. It is a grey, brittle material with a diamond cubic structure.
Silicon is conventionally doped with phosphorus, arsenic, and antimony
donors and with boron, aluminum, and gallium acceptors. The energy gap of silicon is 1.1
eV. This value permits the operation of silicon semiconductors devices at
higher temperatures than germanium.
Now I will give you some brief history of the evolution of electronics
which will help you understand more about semiconductors and the silicon
chip. In the early 1900's before integrated circuits and silicon chips
were invented, computers and radios were made with vacuum tubes. The
vacuum tube was invented in 1906 by Dr. Lee De Forest. Throughout the first
half of the 20th century, vacuum tubes were used to conduct, modulate and
amplify electrical signals. They made possible a variety of new products
including the radio and the computer. However vacuum tubes had some
inherent problems. They were bulky, delicate and expensive, consumed a
great deal of power, took time to warm up, got very hot, and eventually
burned out. The first digital computer contained 18,000 vacuum tubes,
weighed 50 tons, and required 140 kilowatts of power.
By the 1930's, researchers at the Bell Telephone Laboratories were
looking for a replacement for the vacuum tube. They began studying the
electrical properties of semiconductors which are non-metallic substances,
such as silicon, that are neither conductors of electricity, like metal,
nor insulators like wood, but whose electrical properties lie between these
extremes. By 1947 the transistor was invented. The Bell Labs research
team sought a way of directly altering the electrical properties of
semiconductor material. They learned they could change and control these
properties by "doping" the semiconductor, or infusing it with selected
elements, heated to a gaseous phase. When the semiconductor was also
heated, atoms from the gases would seep into it and modify its pure,
crystal structure by displacing some atoms. Because these dopant atoms had
a different number of electrons than the semiconductor atoms, they formed
conductive paths. If the dopant atoms had more electrons than the
semiconductor atoms, the doped regions were called n-type to signify an
excess of negative charge. Fewer electrons, or an excess of positive
charge, created p-type regions. By allowing this doping to take place in
carefully delineated areas on the surface of the semiconductor, p-type
regions could be created within n-type regions, and vice-versa. The
transistor was much smaller than the vacuum tube, did not get very hot, and
did not require a heated filament that would eventually burn out.
Finally in 1958, integrated circuits were invented. By the mid
1950's, the first commercial transistors were being shipped. However
research continued. Scientists began to think that if one transistor
could be built within one solid piece of semiconductor material, why not
multiple transistors or even an entire circuit? Within a few years this
speculation became reality. These integrated
circuits(ICs) reduced the number of electrical interconnections required in
a piece of electronic equipment, thus increasing reliability and speed. In
contrast, the first digital electronic computer was built with 18,000 vacuum
tubes, weighed 50 tons, cost about $1 million, required 140 kilowatts of
power, and occupied an entire room. Today, a complete computer, fabricated
within a single piece of silicon the size of a child's fingernail, costs
only about $10.00.
Now I will describe how integrated circuits and the
silicon chip are formed. Before the IC is actually created, a large-scale
drawing, about 400 times larger than the actual size is created. It takes
approximately one year to create an integrated circuit. Then they have to
make a mask. Depending on the level of complexity, an IC will require from
5 to 18 different glass masks, or "work plates" to create the layers of
circuit patterns that must be transferred to the surface of a silicon
wafer. Mask-making begins with an electron-beam exposure system called
MEBES. MEBES translates the digitized data from the pattern generating
tape into physical form by shooting an intense beam of electrons at a
chemically coated glass plate. The result is a precise rendering, in its
exact size, of a single circuit layer, often less than one-quarter inch
square. Working with incredible precision, it can produce a line one-
sixtieth the width of a human hair.
After purification, molten silicon is doped, to give it a specific
electrical characteristic. Then it is grown as a crystal into a
cylindrical ingot. A diamond saw is used to slice the ingot into thin,
circular wafers which are then polished to a perfect mirror finish
mechanically and chemically. At this point IC fabrication is ready to
begin.
To begin the fabrication process, a silicon wafer (p-type, in this
case) is loaded into a 1200 C furnace through which pure oxygen flows. The
end result is an added layer of silicon dioxide (SiO2), "grown" on the
surface of the wafer. The oxidized wafer is then coated with photoresist,
a light-sensitive, honey-like emulsion. In this case we use a negative
resist that hardens when exposed to ultra-violet light. To transfer the
first layer of circuit patterns, the appropriate glass mask is placed
directly over the wafer. In a machine much like a very precise
photographic enlarger, an ultraviolet light is projected through the mask.
The dark pattern on the mask conceals the wafer beneath it, allowing the
photoresist to stay soft; but in all other areas, where light passes
through the clear glass, the photoresist hardens. The wafer is then washed
in a solvent that removes the soft photoresist, but leaves the hardened
photoresist on the wafer. Where the photoresist was removed, the oxide
layer is exposed. An etching bath removes this exposed oxide, as well as
the remaining photoresist. What remains is a stencil of the mask pattern,
in the form of minute channels of oxide and silicon. The wafer is placed
in a diffusion furnace which will be filled with gaseous compounds (all n-
type dopants), for a process known as impurity doping. In the hot furnace,
the dopant atoms enter the areas of exposed silicon, forming a pattern of
n-type material. An etching bath removes the remaining oxide, and a new
layer of silicon (n-) is deposited onto the wafer. The first layer of the
chip is now complete, and the masking process begins again: a new layer of
oxide is grown, the wafer is coated with photoresist, the second mask
pattern is exposed to the wafer, and the oxide is etched away to reveal new
diffusion areas. The process is repeated for every mask - as many as 18 -
needed to create a particular IC. Of critical importance here is the
precise alignment of each mask over the wafer surface. It is out of
alignment more than a fraction of a micrometer (one-millionth of a meter),
the entire wafer is useless. During the last diffusion a layer of oxide is
again grown over the water. Most of this oxide layer is left on the wafer
to serve as an electrical insulator, and only small openings are etched
through the oxide to expose circuit contact areas. To interconnect these
areas, a thin layer of metal (usually aluminum) is deposited over the
entire surface. The metal dips down into the circuit contact areas,
touching the silicon. Most of the surface metal is then etched away,
leaving an interconnection pattern between the circuit elements. The final
layer is "vapox", or vapour-deposited-oxide, a glass-like material that
protects the IC from contamination and damage. It, too, is etched away,
but only above the "bonding pads", the square aluminum areas to which wires
will later be attached.
f:\12000 essays\sciences (985)\Chemistry\Technetium.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nalin Balan
NUCL 200
Paper due 02/07/97
TECHNETIUM
Atomic Number: 43
Atomic Symbol: Tc
Atomic Weight: (97)
Electron Configuration: -18-13-2
History, Properties and Uses:
(Gr. technetos, artificial) Element 43 was predicted on the basis of the periodic
table, and was erroneously reported as having been discovered in 1925, at which time
it was named masurium. The element was actually discovered by Perrier and Segre in
Italy in 1937. It was found in a sample of molybdenum which was bombarded by deuterons
in the Berkeley cyclotron, and which E. Lawrence sent to these investigators.
Technetium was the first element to be produced artificially. Since its discovery,
searches for the element in terrestrial material have been made without success. If it
does exist, the concentration must be very small. Technetium has been found in the
spectrum of S-, M-, and N-type stars, and its presence in stellar matter is leading to
new theories of the production of heavy elements in the stars. Nineteen isotopes of
technetium, with atomic masses ranging from 90 to 108, are known. 97Tc has a half-life
of 2.6 x 10^6 years. 98Tc has a half-life of 4.2 x 10^6 years. The isomeric isotope
95mTc, with a half-life of 61 days, is useful for tracer work, as it produces energetic
gamma rays. Technetium metal has been produced in kilogram quantities. The metal was
first prepared by passing hydrogen gas at 1100 C over Tc2S7. It is now conveniently
prepared by the reduction of ammonium pertechnetate with hydrogen. Technetium is a
silvery-gray metal that tarnishes slowly in moist air. Until 1960, technetium was
available only in small amounts and the price was as high as $2800/g. It is now
commercially available to holders of O.R.N.L. permits at a price of $60/g. The
chemistry of technetium is said to be similar to that of rhenium. Technetium dissolves
in nitric acid, aqua regia, and conc. sulfuric acid, but is not soluble in hydrochloric
acid of any strength. The element is a remarkable corrosion inhibitor for steel; it is
reported to protect steel in aerated distilled water at temperatures up to 250 C. This
corrosion protection is limited to closed systems, since technetium is radioactive and
must be confined. 99Tc has a specific activity of 6.2 x 10^8 Bq/g. Activity of this
level must not be allowed to spread. 99Tc is a contamination hazard and should be
handled in a glove box. The metal is an excellent superconductor at 11 K and below.
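As a rough sketch of where a figure like 6.2 x 10^8 Bq/g comes from, the Python lines below estimate a specific activity from a half-life using A = ln(2) x N_A / (t_half x M); the 2.1 x 10^5-year half-life and 99 g/mol molar mass assumed here for 99Tc are illustrative values not given in the text above:

import math

N_A = 6.022e23                # Avogadro's number, atoms per mole
t_half_s = 2.1e5 * 3.156e7    # assumed 99Tc half-life, converted from years to seconds
molar_mass = 99.0             # assumed molar mass of 99Tc in g/mol

activity = math.log(2) * N_A / (t_half_s * molar_mass)  # decays per second per gram
print(f"specific activity ~ {activity:.1e} Bq/g")       # around 6e8 Bq/g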
Source: CRC Handbook of Chemistry and Physics, 1913-1995. David R. Lide, Editor in Chief.
Author: C.R. Hammond
f:\12000 essays\sciences (985)\Chemistry\Thallium.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THALLIUM
My element was Thallium. It is atomic number 81. It has 81 protons and electrons and 123 neutrons. Thallium has a mass of 204.3833 atomic mass units. Its symbol is Tl. It resides in Group IIIA of the periodic table; that is the aluminum family. Thallium has a bluish color after exposure to the air. It is a very soft and malleable metal. It has an electron configuration of 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 4f14 5d10 6s2 6p1. It has 6 electron shells. It melts at 576.7 K and boils at 1730 K. It is a solid at room temperature.
Thallium was discovered in 1861 by a British chemist and physicist. His name was Sir William Crookes. He discovered it spectroscopically in England and isolated it. In 1862, the French chemist Claude-Auguste Lamy isolated it again independently. Thallium comes from the Greek word "thallos", which meant "green twig" or "green shoot".
Thallium does not have many uses. It is used in photocells because of the electrical conductivity of thallium sulphide. Thallium was originally used to help treat ringworm and many other skin infections. It was then limited because of the narrow margin between the benefits and its health risks. Thallium bromide-iodide crystals are still used as infrared detectors. Thallium sulphate used to be widely used as a pesticide and an ant killer. It was odorless and tasteless and worked well, but it was found to be too toxic. Thallium salts, which burn with a bright green flame, are used in flares and rockets. Thallium is the 60th most abundant element in the Earth's crust. There are 3.6 parts of Thallium in every million parts of the Earth's crust. Thallium compounds are extremely toxic. The negative effects are cumulative and can be taken in through the skin. Poisoning from Thallium takes several days to affect you, and when it does, it hits the nervous system. Thallium should only be handled by trained professionals with the right equipment and safety precautions.
Thallium deposits are occasionally found in Sweden and the Former Yugoslav Republic of Macedonia. It is also extracted from the mud produced in lead chambers that are used in the manufacturing of sulfuric acid. Thallium is also used in Thallium high-Tc superconductors. Just recently, Thallium is beginning to be used to visualize the reduced flow of blood into the heart muscle. It is injected into the veins and then a camera records the thallium penetration and shows the areas of the reduced flow of blood.
Thallium is an element that most people have never heard of before, but they will in the future. If the heart exams prove to be beneficial, it should become more popular. Thallium is very dangerous, though it is not very common and therefore should not be a constant worry.
f:\12000 essays\sciences (985)\Chemistry\The Alkanes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ALKANES
The alkanes are the simplest form of organic compounds. They are made up of
only Carbon atoms and Hydrogen atoms. All of the bonds are single and the number
of hydrogen atoms to carbon atoms follows the formula CnH2n+2. Alkanes are all
non-polar molecules so they aren't soluble in water. Here are some more facts.
-Referred to as "Saturated"
-They have
-low densities
-low melting points
-low boiling points
-Refer to "Slide 29" sheet
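As a quick illustration of the general formula CnH2n+2 given above (a sketch only; the names are standard nomenclature), the following Python lines print the molecular formulas of the first eight alkanes:

names = ["methane", "ethane", "propane", "butane",
         "pentane", "hexane", "heptane", "octane"]
for n, name in enumerate(names, start=1):
    print(f"{name:>8}: C{n}H{2 * n + 2}")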
We couldn't find any information on who discovered them, or on the what,
where, or when. However, the first alkane discovered was probably methane
because, of course, this is the gas that cows belch.
The Journal of Toxicology reports that a 15-year-old boy was stricken with
hemiparesis "resulting from acute intoxication following inhalation of butane gas."
Hemiparesis is when half of a person's body is paralyzed. Through reactions, alkanes
can be transformed into chloroform, which has been shown to accumulate in the lungs of
swimmers after they swim for extended periods of time.
As mentioned above, alkanes can be converted into chloroform, which can be used for
anesthesia, as well as into dichloromethane (a paint stripper) and 1,2-dichloroethane
(a dry-cleaning fluid). Here is a sample reaction where a halogen replaces a hydrogen:
CH4(g) + Cl2(g) ----> CH3Cl(g) + HCl(g)
There are many uses for alkanes. For instance, propane is used in gas grills,
butane is used in cigarette lighters, and through various reactions scientists can
make paint stripper, anesthetic, or dry-cleaning fluid. The pentanes and hexanes are
also highly flammable and make really cool explosions. Heptane, octane, and nonane
make up gasoline. The "octane scale" on gas pumps uses a system which rates n-heptane
at 0 and isooctane at 100.
Currently propane gas is being studied for use as a fuel for more efficient cars.
Here is the reaction when propane is oxidized with limited oxygen: C3H8 + 2O2 ----> 4H2O + 3C
Technically, under those conditions only water and carbon are given off, but I'm
sure that there would be carbon dioxide or monoxide also.
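For comparison (standard chemistry, not taken from the sources of this report): the limited-oxygen reaction above can be set against complete combustion in ample oxygen,
C3H8 + 5O2 ----> 3CO2 + 4H2O
so carbon dioxide and water, rather than soot, are the expected products when enough air is present; carbon monoxide appears when the oxygen supply falls somewhere in between.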
In the anything-else category goes the cow-belching money: the Environmental
Protection Agency allocates $500,000 annually to do research on belching cows.
f:\12000 essays\sciences (985)\Chemistry\The Atom.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AP Physics
Period 2
The Atom
In the spring of 1897 J.J. Thomson demonstrated that the beam of glowing matter in
a cathode-ray tube was not made of light waves, as "the almost unanimous opinion of
German physicists" held. Rather, cathode rays were negatively charged particles boiling
off the negative cathode and attracted to the positive anode. These particles could be
deflected by an electric field and bent into curved paths by a magnetic field. They were
much lighter than hydrogen atoms and were identical "whatever the gas through which the
discharge passes" if gas was introduced into the tube. Since they were lighter than the
lightest known kind of matter and identical regardless of the kind of matter they were born
from, it followed that they must be some basic constituent part of matter, and if they were a
part, then there must be a whole. The real, physical electron implied a real, physical atom:
the particulate theory of matter was therefore justified for the first time convincingly by
physical experiment. They sang success at the annual Cavendish dinner.
Armed with the electron, and knowing from other experiments that what was left
when electrons were stripped away from an atom was a much more massive remainder that
was positively charged, Thomson went on in the next decade to develop a model of the
atom that came to be called the "plum pudding" model. The Thomson atom, "a number of
negatively electrified corpuscles enclosed in a sphere of uniform positive electrification"
like raisins in a pudding, was a hybrid: particulate electrons and diffuse remainder. It
served the useful purpose of demonstrating mathematically that electrons could be arranged
in stable configurations within an atom and that the mathematically stable arrangements
could account for the similarities and regularities among chemical elements that the
periodic table of the elements displays. It was becoming clear that the electrons were
responsible for chemical affinities between elements, that chemistry was ultimately
electrical.
Thomson just missed discovering X rays in 1894. He was not so unlucky in legend
as the Oxford physicist Frederick Smith, who found that photographic plates kept near a
cathode-ray tube were liable to be fogged and merely told his assistant to move them to
another place. Thomson noticed that glass tubing held "at a distance of some feet from the
discharge-tube" fluoresced just as the wall of the tube itself did when bombarded with
cathode rays, but he was too intent on studying the rays themselves to pursue the cause.
Rontgen isolated the effect by covering his cathode-ray tube with black paper. When a
nearby screen of fluorescent material still glowed, he realized that whatever was causing the
screen to glow was passing through the paper and the intervening air. If he held his
hand between the covered tube and the screen, his hand slightly reduced the glow on the
screen but in the dark shadow he could see his bones.
Rontgen's discovery intrigued other researchers besides J.J. Thomson and Ernest
Rutherford. The Frenchman Henri Becquerel was a third-generation physicist who, like
his father and grandfather before him, occupied the chair of physics at the Musee d'Histoire
Naturelle in Paris; like them also he was an expert on phosphorescence and fluorescence - in his
case, particularly of uranium. He heard a report of Rontgen's work at the weekly meeting of
the Academie des Sciences on January 20, 1896. He learned that the X rays emerged from
the fluorescing glass, which immediately suggested to him that he should test various
fluorescent materials to see if they also emitted X rays. He worked for ten days without
success, read an article on X rays on January 30 that encouraged him to keep working, and
decided to try a uranium salt, uranyl potassium sulfate.
His first experiment succeeded - he found that the uranium salt emitted radiation - but
misled him. He had sealed a photographic plate in black paper, sprinkled a layer of
uranium salt onto the paper and "exposed the whole thing to the sun for several hours."
When he developed the photographic plate "I saw the silhouette of the phosphorescent
substance in black on the negative." He mistakenly thought sunlight activated the effect,
much as a cathode ray releases Rontgen's X rays from the glass.
The story of Becquerel's subsequent serendipity is famous. When he tried to
repeat his experiment on February 26 and again on February 27, Paris was covered with clouds.
He put the uncovered photographic plate away in a dark drawer, with the uranium salt in
place. On March 1 he decided to go ahead and develop the plate, "expecting to find the
images very feeble. On the contrary, the silhouettes appeared with great intensity. I
thought at once that the action might be able to go on in the dark." Energetic, penetrating
radiation from inert matter unstimulated by rays or light: now Rutherford had his subject, as
Marie and Pierre Curie, looking for the pure element that radiated, had their backbreaking
work.
But no one understood what produced the lines. At best, mathematicians and
spectroscopists who liked to play with wavelength numbers were able to find beautiful
harmonic regularities among sets of spectral lines. Johann Balmer, a nineteenth-century
Swiss mathematical physicist, identified in 1885 one of the most basic harmonies, a
formula for calculating the wavelengths of the spectral lines of hydrogen; these are
collectively called the Balmer series.
It is not necessary to understand mathematics to appreciate the simplicity of the
formula Balmer derived that predicts a line's location on the spectral band to an accuracy of
within one part in a thousand, a formula that has only one arbitrary number:
lambda = 3646 x n^2/(n^2 - 4), with the wavelength in angstroms and n = 3, 4, 5, and so on. Using this formula, Balmer was able to predict the
wavelengths of lines to be expected for parts of the hydrogen spectrum not yet studied.
They were found where he said they would be.
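As a worked check (my addition, using only the formula quoted above): for n = 3 the formula gives lambda = 3646 x 9/5 = 6563 angstroms, the red hydrogen line now called H-alpha, and for n = 4 it gives lambda = 3646 x 16/12 = 4861 angstroms, the blue-green line.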
Bohr would have known these formulas and numbers from undergraduate physics,
especially since Christensen was an admirer of Rydberg and had thoroughly studied his
work. But spectroscopy was far from Bohr's field and he presumably had forgotten them.
He sought out his old friend and classmate, Hans Hansen, a physicist and student of
spectroscopy just returned from Gottingen. Hansen reviewed the regularity of line spectra
with him. Bohr looked up the numbers. "As soon as I saw Balmer's formula," he said
afterward, "the whole thing was immediately clear to me."
What was immediately clear was the relationship between his orbiting electrons
and the lines of spectral light. Bohr proposed that an electron bound to a nucleus normally
occupies a stable, basic orbit called a ground state. Add energy to the atom - heat it, for
example - and the electron responds by jumping to a higher orbit, one of the more energetic
stationary states farther away from the nucleus. Add more energy and the electron
continues jumping to higher orbits. Cease adding energy - leave the atom alone - and the
electrons jump back to their ground states. With each jump, each electron emits a photon of
characteristic energy. The jumps, and so the photon energies, are limited by Planck's
constant. Subtract the value of a lower-energy stationary state W2 from the value of a
higher-energy stationary state W1 and you get exactly the energy of the emitted light as hv. So here
was the physical mechanism of Planck's cavity radiation.
From this elegant simplification, W1-W2=hv, Bohr was able to derive the Balmer
series. The lines of the Balmer series turn out to be exactly the energies of the photons that
the hydrogen electron emits when it jumps down from orbit to orbit to its ground state.
Then, sensationally, with the simple formula, R=2pi^2me^4/h^3, Bohr produced
Rydberg's constant, calculating it to within 7 percent of its experimentally measured value.
"There is nothing in the world which impresses a physicist more," an American physicist
comments, "than a numerical agreement between experiment and theory, and I do not think
that there can ever have been a numerical agreement more impressive than this one, as I can
testify who remember its advent."
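Bohr's arithmetic can be repeated in a few lines of Python; this is my own sketch, not part of the essay, and the constant values (in CGS units) are assumptions supplied here rather than figures quoted from the text.

    import math

    # CGS values for the constants (assumed, not from the essay)
    m = 9.109e-28     # electron mass in grams
    e = 4.803e-10     # electron charge in esu
    h = 6.626e-27     # Planck's constant in erg-seconds
    c = 2.998e10      # speed of light in cm per second

    R = 2 * math.pi**2 * m * e**4 / h**3              # Bohr's formula; a frequency
    print(f"Rydberg frequency: {R:.3e} per second")   # about 3.29e15

    # Balmer series: electrons jumping down to the second orbit
    for n in (3, 4, 5):
        nu = R * (1 / 4 - 1 / n**2)           # frequency of the emitted photon
        wavelength = c / nu * 1e8             # converted to angstroms
        print(f"n = {n}: about {wavelength:.0f} angstroms")

The printed wavelengths (roughly 6560, 4860 and 4340 angstroms) are the Balmer lines, which is the numerical agreement the quoted physicist found so impressive.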
"On the constitution of atoms and molecules" was seminally important to physics.
Bexzides proposing a useful model for the atom, it demonstrated that events ensts that take
place on the atomic scale are quantized: that just as matter exits as atoms and particle s in
a state of essential graininess, so also does process. Process is discontinuous and the
"granule" of mechanistic physics was therefore imprecise; though a good approximation
that worked for large-scale events, it failed to account for atomic subtleties.
Bohr was happy to force this confrontation between the old physics and the new.
He felt that it would be fruitful for physics. Because original work is inherently rebellious,
his paper was not only an examination of the physical world but also a political document.
It proposed, in a sense, to begin a reform movement in physics: to limit claims and clear up
epistemological fallacies. Mechanistic physics had become authoritarian. It had
outreached itself to claim universal application, to claim that the universe and everything in
it is rigidly governed by mechanistic cause and effect. That was Haeckelism carried to a
cold extreme. It stifled Niels Bohr as a biological Haeckelism had stifled Christian Bohr,
and as a similar authoritarianism in philosophy and in bourgeois Christianity had stifled
Soren Kierkegaard.
Bibliography
Rhodes, Richard. The Making of the Atomic Bomb. New York: Simon and Schuster, 1986.
"Nuclear Weapon." The Encyclopaedia Britannica. Chicago: Encyclopaedia Britannica Inc., Vol. 8, 1991, pp. 820-821.
f:\12000 essays\sciences (985)\Chemistry\The Classification and Formation of Crystals.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
Purpose
The purpose of this experiment is to find out how crystals are formed and how they are classified. For a long time I've been interested in crystals, so I decided this experiment would be perfect for me!
Crystallography
The study of the growth, shape, and geometric characteristics of crystals is called crystallography. When the conditions are right, each chemical element and compound can crystallize in a definite and characteristic form.
Thirty-two classes of crystals are theoretically possible; almost all common minerals fall into one of about twelve classes, and some classes have never been observed. The thirty-two classes are grouped into six crystal systems, based on the length and position of the crystal axes. Crystal axes are imaginary lines passing through the center of the crystal. Minerals in each system share certain proportions and crystal forms and many important optical properties.
The six crystal systems are very important to mineralogists and geologists; specification of the system is necessary in the description of any crystal.
Isometric
This system comprises crystals with three axes, all perpendicular to one another and all of equal length.
Tetragonal
This system comprises crystals with three axes, all perpendicular to one another; but only two are equal in length.
Orthorhombic
This system comprises crystals with three mutually perpendicular axes, all of different lengths.
Monoclinic
This system comprises crystals with three axes, all unequal in length, two of which are not perpendicular to one another, but both of which are perpendicular to the third.
Triclinic
This system comprises crystals with three axes, all unequal in length and none perpendicular to one another.
Hexagonal
This system comprises crystals with four axes. Three of these axes are in a single plane, symmetrically spaced, and of equal length. The fourth axis is perpendicular to the other three. Some crystallographers split the hexagonal system in two, calling the seventh system trigonal or rhombohedral.
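The axis descriptions above can be turned into a rough classification rule. The following Python sketch is my own addition, a simplification based only on axis lengths and inter-axial angles; real crystallography classifies by symmetry, so this is illustrative rather than definitive.

    # Simplified classifier based on the axis descriptions above.
    # lengths: axis lengths; angles: inter-axial angles in degrees.
    def crystal_system(lengths, angles):
        if len(lengths) == 4:
            return "hexagonal"                 # four axes, three in one plane
        distinct = len(set(lengths))
        all_right = all(a == 90 for a in angles)
        if all_right and distinct == 1:
            return "isometric"                 # three equal axes, all perpendicular
        if all_right and distinct == 2:
            return "tetragonal"                # only two axes equal
        if all_right:
            return "orthorhombic"              # all different, all perpendicular
        if sum(1 for a in angles if a != 90) == 1:
            return "monoclinic"                # exactly one oblique angle
        return "triclinic"                     # nothing equal, nothing perpendicular

    print(crystal_system([1, 1, 1], [90, 90, 90]))   # isometric
    print(crystal_system([1, 2, 3], [90, 90, 110]))  # monoclinic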
Formation of Crystals
Crystals are formed when a liquid becomes solid or when a vapor or liquid solution becomes supersaturated. Some substances tend to form seed crystals (I grew my crystals from seed crystals). If a solution like this is cooled slowly, a few seeds grow into large crystals; but if it is cooled rapidly, numerous seeds form and grow only into tiny crystals. Table salt, purified at a factory by recrystallization, is composed of many cubic crystals, which are barely visible to the naked eye; rock salt, formed over a very long time, contains enormous crystals of the same cubic form.
f:\12000 essays\sciences (985)\Chemistry\The Comparative Abundance Of The Elements.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CHEMISTRY ASSIGNMENT
THE COMPARATIVE ABUNDANCE OF THE ELEMENTS
· There are 92 naturally occurring elements, but only 17 of them make up 99.5% of the earth's crust (including oceans and atmosphere).
· In living things (plants, animals, people) the six most abundant elements are carbon, hydrogen, oxygen, nitrogen, phosphorus and sulfur.
· The universe is dominated by the elements hydrogen (83%) and helium (16%).
1. The Crust
The outside of the earth is a thin crust which is approximately 20 to 40 km thick. The crust is a formation of dips and hollows which are filled with water to form the oceans and seas. On top of the earth's crust is an atmosphere, a thin layer of gases; 95% of these gases are within the first 20 km of the earth's surface. Of the 17 elements that make up 99.5%, the most abundant are Oxygen 49.2%, Silicon 25%, and Aluminum 7.5%. The next most abundant elements are Iron 4.7%, Calcium 3.4%, Sodium 2.6%, Potassium 2.4%, Magnesium 1.9%, Hydrogen 0.9%, Titanium 0.6%, Chlorine 0.2%, Phosphorus, Manganese and Carbon all 0.1%, Sulfur 0.05%, Barium 0.04%, Nitrogen 0.03%, and the rest of the elements on the periodic table take up about 0.5%.
Only the most abundant elements of the crust are graphed below, because the abundances of the other elements of the crust are too low to graph accurately on one graph.
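The original graph is not reproduced here, but a short Python sketch (my own addition, using the percentages listed above) would recreate a similar bar chart with the matplotlib library.

    import matplotlib.pyplot as plt

    # Most abundant elements of the crust, in percent, from the figures above.
    elements = ["Oxygen", "Silicon", "Aluminum", "Iron", "Calcium",
                "Sodium", "Potassium", "Magnesium"]
    percent = [49.2, 25.0, 7.5, 4.7, 3.4, 2.6, 2.4, 1.9]

    plt.bar(elements, percent)
    plt.ylabel("Percent of the Earth's crust")
    plt.title("Most abundant elements in the Earth's crust")
    plt.show()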
Almost all elements are found as compounds; however, Oxygen, Nitrogen, and to a lesser extent sulfur, gold, silver and platinum are the only elements which can be found in almost their raw state. The atmosphere contains oxygen and nitrogen, but it only contains a small portion of the earth's oxygen; this is because most of the world's oxygen is found in water, in oxides of metals, and as silicates. Common soils and clays are silicates.
2. Living Things
In living things (plants, animals, people) the six most abundant elements are carbon, hydrogen, oxygen, nitrogen, phosphorus and sulfur (known as CHONPS). Most compounds in living matter are radically complex; each molecule can contain hundreds or thousands of atoms. Carbohydrates and fats are compounds which contain carbon, hydrogen and oxygen only. Proteins are also compounds, and they contain nitrogen, sulfur and occasionally phosphorus. Living matter cannot live on these six elements alone; even though they make up 99% of the mass, it also needs some compounds of other elements such as calcium, potassium, sodium, magnesium, iron, zinc, fluorine and others. These elements are required as compounds so that living things can use them.
3. The Universe
The universe is dominated by the elements hydrogen 83%, and helium 16%. Other elements in the universe are oxygen 0.1%, carbon 0.03%, nitrogen 0.01%, while silicon, magnesium and neon are each about 0.003% of the elements in the universe. The abundance of hydrogen and helium in this cosmic distribution of the elements supports the idea that the other elements were formed by nuclear fusion in stars, for example the Sun. Hydrogen is the basic material from which the other elements are gradually built.
By Derrick Deacon
f:\12000 essays\sciences (985)\Chemistry\The Effect of Concentrations of Starch and Sugar Solutions on.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Effect of Concentrations of Starch and Sugar Solutions on Synthetic Semi-Permeable Membranes
By: Jamie Hardy
Question:
Is dialysis tubing selectively permeable?
Hypothesis:
If one has dialysis tubing, which is dipped in water, filled with Gatorade and starch and is left for 15 minutes, the sugar in the Gatorade will exit the dialysis tubing and pass into the water. So the dialysis tubing is semi-permeable.
Materials:
16 cm dialysis tubing
beaker
cylinder
test tubes
transfer pipettes
Gatorade
Starch solution 10 g/1000 ml water
Benedict's Solution
Iodine Solution
string
ring stand
water bath
boiling chips
goggles
Scientific Method:
1. Each group will make up a Gatorade solution as follows:
Group 1 1.0 M 3.4 g/10 ml
Group 2 0.5 M 1.7 g/10 ml
Group 3 0.5 M 1.7 g/10 ml
Group 4 0.2 M 0.7 g/10 ml
Group 5 0.2 M 0.7 g/10 ml
NB: Tare the balance.
Tare the balance with a test tube inside a beaker.
Add the amount to the tube.
Vortex, gradually turning the speed up.
Heat in a water bath.
Cool the tube.
2. Wet the dialysis tubing with the beaker of distilled water.
3. Twist one end of the tubing.
4. Fold the twisted end over on itself.
5. Tie a tight knot. Leave the extra string.
6. Insert the transfer pipette (cut off top) about one third of the way into the tubing.
7. Tie a knot securely around the transfer pipette. Leave the extra string.
8. Add two transfer pipettes (two squirts) of the Gatorade solution to the tubing.
9. Add two transfer pipettes (two squirts) of the starch solution to the tubing.
10. If you spilled any solutions while transferring, carefully rinse the tubing.
11. Fill the cylinder with distilled water to about 2.54 cm from the top.
12. Place the tubing into the cylinder of water.
13. Rest the apparatus against the ring stand.
14. Note the height of the water in the tube.
15. Record the time the tube was placed into the cylinder (it will stay for about 20 minutes)
Time tubing was placed into cylinder: 1:15.00 p.m.
Time tubing was taken out of cylinder: 1:31.12.87
16. Note any changes in the height of the water in the transfer pipette.
The height of the water in the transfer pipette did not change.
Observations and Data:
Tests:
                                             Positive        Negative
Test for starch and Iodine Solution          blue black      clear yellow
Test for Gatorade and Benedict's Solution    orange/green    clear blue
17. Using transfer pipettes and dry test tubes, test the solution that has been holding the tubing for Gatorade and starch.
Test each separately.
Run a control for each.
Test each of the original solutions with Benedict's and Iodine solutions.
18. Record Observations:
When starch is added to Iodine Solution it turns clear yellow, and when Gatorade and Benedict's Solution are mixed it turns orange. The Gatorade solution smells like burnt Gatorade and the starch solution smells like a very dull baby powder.
19. Test a known glucose solution with Benedict's and Iodine solutions.
20. Record observations:
The color of the glucose did not change when mixed with Benedict's and Iodine Solutions. The color did get greyer though.
Discussion:
In the observations section of the lab report, it explains that in the tests of starch added to Iodine Solution and Gatorade added to Benedict's Solution, the Gatorade solution was a thick orange color and the starch solution was a very clear yellow. The Gatorade solution smelled like something was burnt and placed in the solution, and the starch solution smelled like a very dull baby powder. When the Gatorade solution was placed over the flame, the solution bubbled to the top of the test tube and almost went over.
Conclusion:
My hypothesis, "If one has dialysis tubing, which is dipped in water, filled with Gatorade and starch and is left for 15 minutes, the sugar in the Gatorade will exit the dialysis tubing and pass into the water. So the dialysis tubing is semi-permeable.", was supported. The tubing is semi-permeable.
f:\12000 essays\sciences (985)\Chemistry\The Element Chlorine.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Element: Chlorine
General Information
We researched the chemical element known as chlorine. Chlorine has an atomic number of 17 and an atomic weight of 35.453. It has a valence number of 3. The element has 3 energy levels. Chlorine exists as a greenish-yellow gas at normal temperatures and pressures. Chlorine is second in reactivity only to fluorine among the halogen elements. Chlorine is a nonmetal. It is estimated that 0.045% of the earth's crust and 1.9% of sea water are chlorine. Chlorine combines with metals and nonmetals and organic materials to form hundreds of chlorine compounds. Chlorine is about 2.5 times as dense as air and moderately soluble in water, forming a pale yellowish green solution. Chlorine is so reactive that it never occurs free in nature.
Chemical Properties
Chlorine is in the halogen family, and like all the other halogen elements chlorine has a strong tendency to gain one electron and become a chloride ion. Chlorine readily reacts with metals to form chlorides, most of which are soluble in water. Chlorine also reacts directly with many nonmetals such as sulfur, phosphorus, and other halogens. Chlorine can support combustion; if a candle were thrown into a vessel of chlorine, it would continue to burn, releasing dense, black clouds of smoke. The chlorine combines with the hydrogen of the paraffin, forming hydrogen chloride, and uncombined carbon is left in the form of soot. Soot is the black residue from fuel. Chlorine displaces iodine and bromine from their salts. Dry chlorine is somewhat inert, or unreactive, but moist chlorine unites directly with most of the elements.
History
Chlorine was discovered in 1774 by Karl Scheele. Humphry Davy proved that chlorine was an element. Extensive production began 100 years later. During the 20th century, the amount of chlorine used was considered a measure of industrial growth. In 1975, chlorine production ranked seventh on the list of largest-volume chemicals produced in the United States. The importance of chlorine has changed as new uses have been added. In 1925, paper and pulp used over one-half of the chlorine made, and chemical products only 10%. By the 1960's, paper and pulp use accounted for only 15-17% and the chemical uses had increased to 75-80%. These uses have contributed to the growth of large cities, and new textiles, plastics, paints, and miscellaneous uses have raised man's standard of living. Many large companies are based primarily on the manufacture of chlorine and its compounds. In 1978, 17% of the United States production went into the production of vinyl chloride monomer. Other chlorinated organics consumed 48% of United States production.
Toxicity and Precautions
Chlorine was used in World War I as a poison gas. In fact most poisonous gases have chlorine in them. Chlorine is very corrosive to moist tissue and has a very irritating effect on the lungs and mucous membranes of the nose and throat. Inhalation of chlorine gas can cause edema of the lungs and respiratory stoppage. When hydrogen and chlorine gases are mixed together, the mixture is stable if kept in a cool, dark place. If heated or exposed to sunlight, the mixture explodes. Chlorine is easily liquefied and usually transported in its liquid state in pressurized drums. Great care must be taken, however, to prevent the containers from bursting and liberating large amounts of the gas. In the United States and most European countries, large quantities of chlorine may only be transported by train. The present trend is to limit the transport of chlorine as much as possible by producing and using the element in the same location.
Uses
Chlorine has many great uses. Chlorine is an excellent oxidizing agent. At first, chlorine was used as a bleaching agent in the paper, pulp, and textile industries; its use as a germicide for drinking water preparation, swimming pool purification, and hospital sanitation has made community living possible.
Chlorine is used in bleaching as said before. The bleaching action of chlorine in aqueous solution is due to the formation of hypochlorous acid, a powerful oxidizing agent. If a colored, oxidizable material is present, hypochlorous acid releases its oxygen to oxidize the material to a colorless compound. Liquid bleach is usually an aqueous solution of sodium hypochlorite, and dry powder bleaches contain chloride of lime. Since chlorine destroys silk and wool, commercial hypochlorite bleaches should never be used on these fibers.
Chlorine is also used as a disinfectant. The oxidizing ability of chloride of lime enables it to destroy bacteria; therefore large amounts are used to treat municipal water systems. This chemical is also used in swimming pools and for treating sewage.
Chlorine is also used in the form of rock salt. Sodium chloride, NaCl, is used directly as mined (rock salt), or as found on the surface, or as brine, also known as salt water. It can be dissolved, purified, and reprecipitated for use in foods or when chemical purity is required. Its main uses are in the production of soda ash and chlorine products. Other uses include refrigeration, dust and ice control, food processing, and food preservation. Calcium chloride, CaCl2, is usually obtained from salt water or as a by-product of chemical processing. Its main uses are road treatment, coal treatment, and concrete conditioning.
In addition to these products, for which chlorine is needed, various other chlorine compounds play an important part in chemistry and the chemical industry. The chlorides of most metals are easily soluble in water, which widens their applicability. Some other important compounds are the chlorates, the perchlorates, and the hypochlorites. Hydrochloric acid is one of the most frequently used acids.
Preparation
The most important method for preparation of chlorine is the electrolysis of a solution of common salt, sodium chloride. The chlorine gas is liberated at the positive anode or positively charged electrode, which is made of graphite since a metal anode would react with chlorine. At the iron cathode or negatively charged electrode, sodium ions are reduced to sodium metal, which reacts immediately with water to form sodium hydroxide.
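For reference (standard chemistry rather than a figure taken from this essay's sources), the net change in the brine cell can be summed up in one equation:
2NaCl + 2H2O ----> Cl2 + H2 + 2NaOH
with the chlorine appearing at the anode and the hydrogen and sodium hydroxide forming on the cathode side.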
Another method of preparing chlorine is by the electrolysis of molten salt. This process is used specifically to produce sodium, and the chlorine is a commercial by-product. When large quantities of waste hydrochloric acid are available, chlorine may be recovered by oxidation of the acid. This method has the advantage of converting great quantities of waste acid to useful substances.
No matter what process is used to prepare chlorine, the gas must be well dried. Dry chlorine is much less corrosive than moist chlorine gas. In the laboratory chlorine may be prepared by heating manganese oxide with hydrochloric acid.
Conclusion
In conclusion chlorine is a very wonderful element. Chlorine has hundreds of compounds. If we did not have these compounds we would not have clean water, we would have an insect problem, we could not make many important compounds that are used in medicine, and some of the battles in World War I might have been lost if it were not for chlorine. Our world would not be the same if not for chlorine.
f:\12000 essays\sciences (985)\Chemistry\The Fountainhead.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Fountainhead
Ayn Rand, Philosophical Fiction
In the novel The Fountainhead, Ayn Rand uses the main character, Howard Roark, to express her daringly original philosophy--Objectivism. Like Rousseau's "Natural Man" in The Social Contract, Ayn Rand presents Howard as a man as man should be--strong-willed, self-sufficient, self-confident, and self-motivated. A man who, in spite of cruelty from an unaccepting society, fights to work and live only as he chooses. Through the course of the story the reader sees how Roark completely disregards the norms and principles that define society. He does this to maintain the idea that true happiness cannot be achieved through the standards of others. Rather, happiness can only be attained by subsisting on one's own canon, never for a moment yielding the integrity of his/her ego. This idea, in short, is the basis of Objectivism.
In my opinion, Ayn Rand's philosophy is completely ridiculous. According to The Fountainhead, our entire society is based upon unchanging principles made up and maintained solely by powerful, influential old men (Ellsworth Toohey). Furthermore, Miss Rand dictates that true happiness can only be found by defying these principles. I would have to say that although Miss Rand's Objectivism works well within the realm of the book, I fail to see it in the "real world." In the "real world" these underlying principles are ever-changing. Brought about by constantly advancing ideas, technology, and influences, old conventions are replaced every day. I fail to see the social bureaucracy that Miss Rand seems to believe there is. Besides, even if it did exist, I don't see how intentionally going against it would make anyone happier.
Although I have to say that I did not agree with Ayn Rand's ideas, I did however find The Fountainhead an excellent read. The story-telling itself makes it a book that is hard to put down. I would definitely recommend it to anyone.
f:\12000 essays\sciences (985)\Chemistry\The Greenhouse Effect.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Greenhouse Effect
When one starts a car or burns wood, the last thought on their mind is the consequences of these actions. Unfortunately, the daily dangers to earth are not widely known. Due to the constant change of society, this planet must cope with various problems. One of the most important ecological structures is the ozone layer. The same shield that protects us from the sun's deadly radiation can also act as a blanket engulfing us in heat. This situation is known as the greenhouse effect. What is the greenhouse effect, what causes it, and what can be done to control it?
The problem of global warming has been around for some time now, though not until recently has it become a priority - so important that figures such as Vice President Al Gore have spoken out. Many are realizing that the greenhouse effect is not something to be put aside, but rather something to be worked on and studied. "The greenhouse effect displays that nature is not immune to our presence" (Kralijic, 1992). Ways must be found to lessen the threat of this growing crisis. If this effect were to continue and grow, the earth's population would be exposed to serious threats.
Carbon dioxide is essential for plants, which use it for photosynthesis, yet too much can lead to serious threats. The problem lies in the disruption of
the balance between how much carbon dioxide plants intake, and what our population produces. If this natural filtering process is unbalanced, the atmosphere will receive too much carbon dioxide and other greenhouse gases. Once these gases form in the atmosphere, they act as barriers trapping in heat and warming the earth.
This process is not new. In fact, without the greenhouse effect, the average surface temperature of the earth would be about 59 Fahrenheit degrees lower than it is today. "Long before civilization intervened , the thin blanket of gases that surround the earth was efficiently trapping a tiny portion of the sun's heat and keeping it near the surface to warm up the air just enough to prevent temperatures from plunging to frigid extremes every night- which, of course, is exactly what happens on the moon and on planets like Mars that have very thin atmospheres" (Gore, 1992).
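That figure can be checked with a rough energy-balance estimate; the short Python sketch below is my own addition, and the solar constant, albedo and observed average temperature used in it are assumed round values, not numbers quoted by this paper.

    # Rough estimate of the surface temperature with no greenhouse gases.
    solar_constant = 1361.0   # sunlight reaching Earth, watts per square meter
    albedo = 0.3              # fraction of sunlight reflected back to space
    sigma = 5.67e-8           # Stefan-Boltzmann constant

    absorbed = solar_constant * (1 - albedo) / 4.0     # averaged over the sphere
    t_bare = (absorbed / sigma) ** 0.25                # about 255 kelvin
    t_observed = 288.0                                 # observed average, kelvin

    warming_f = (t_observed - t_bare) * 9.0 / 5.0
    print(f"Natural greenhouse warming: about {warming_f:.0f} Fahrenheit degrees")

The result is roughly 60 Fahrenheit degrees, in line with the figure of about 59 quoted above.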
The greenhouse effect received its name because the atmosphere of the earth acts much like the glass roof on a greenhouse. Sunlight enters the greenhouse through the glass and heats up the plants. Then the warmth is trapped as the glass slows the withdrawal of heat. Similarly, the earth's atmosphere lets most of the sun's light enter and heat the surface. The earth
then sends this energy, called infrared radiation, back into the atmosphere (shown in the diagram below). This is when the actual effect takes place. Not all of the infrared radiation is sent freely into space. Certain gases in the atmosphere absorb it and send it back toward earth. Such gases are carbon dioxide, ozone, and water vapor. As stated earlier, this process is becoming more important. Due to the burning of fossil fuels such as coal, oil, and natural gas, carbon dioxide is increasing, thus increasing the magnitude of the greenhouse effect.
The greenhouse effect would have disastrous effects on this planet's population. "Climate changes will threaten agriculture and our food supply, probably eliminating the Great Plains or prairies of North America as a region in which crops may be grown, for example. Also, melting of parts of the Antarctic ice sheet will cause flooding of coastal cities such as London, New York, Beijing, Amsterdam, St. John's, Halifax, Vancouver, even Montreal and of entire countries, such as Bangladesh" (Johnson, 1990). The greenhouse effect is not limited to certain countries or states. The entire world will suffer if it is allowed to grow. "Some scientists think that from the late 1990's to the late 2000's the amount of carbon dioxide in the atmosphere
could double. If this doubling were to occur, it would intensify the greenhouse effect and result in an increase of 2.7 to 11 Fahrenheit degrees (1.5 to 6 Celsius degrees) in the earth's average temperature" (Gille, 1988). The results are real and quite intense. The outlook is not good.
Something must be done quickly to slow the growth of the greenhouse effect. It is not a hopeless situation. "Reverse your oxygen debt. The less fuel you burn and the more oxygen-producing plants you grow, the more you will add oxygen to the atmosphere and lower your output of greenhouse gases" (British Columbia Medical Association, 1990). Solar power also plays a part in this. Although just recently taking hold, solar power could greatly lessen the output of greenhouse gases, as could hydroelectric power. Besides different power sources, many companies are producing environmentally safe products and even air cleaners. It may be a nuisance, but it is possible to lessen the greenhouse effect.
The issue of the greenhouse effect is not seriously taken by many people. If it doesn't jump out in front of their faces or directly and immediately concern them, they pay no attention. If we do not take action,
the earth may become Venus's exact "twin-planet", covered in a thick cloud of sulfuric acid and sulfur. It is not a pretty outlook to say the least. Any little thing can help, and, if we all take action, we can slow the greenhouse effect down to the point where it won't be a serious threat for many, many years.
[Diagram: the greenhouse effect - Department of Meteorology, University of Maryland College Park]
References
Gille, J. (1988). Greenhouse effect. World Book Encyclopedia. Chicago, IL: Scott Fetzer company.
Gorder, C. (1991). Green Earth Resource Guide. Tempe, AZ: Blue Bird Publishing.
Gore, A. (1992). Earth in the Balance. Boston, MA: Houghton Mifflin Company.
Hammond, Scully, Mast, & Powell. (1991). Environmental Almanac. Boston, MA: Houghton Mifflin Company.
Johnson, G. (1990). Environmental Tips on how you can save this planet. Calgary, Alberta: Detselig Enterprises.
Kralijic, M. (1992). The Greenhouse Effect. New York, NY: The H.W. Wilson Company.
f:\12000 essays\sciences (985)\Chemistry\The History of Carbon.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I. Introduction
A. The History of Carbon
II. Occurrences in Nature
A. Diamond
B. Graphite
C. Coal and Charcoal
D. Amorphous Carbon
III. Carbon Compounds
A. Inorganic
B. Organic
IV. The Carbon Cycle
V. Conclusion
Carbon, an element discovered before history itself, is one of the
most abundant elements in the universe. It can be found in the sun, the
stars, comets, and the atmospheres of most planets. There are close to ten
million known carbon compounds, many thousands of which are vital to the
basis of life itself (WWW 1).
Carbon occurs in many forms in nature. One of its purest forms is
diamond. Diamond is the hardest substance known on earth. Although
diamonds found in nature are colorless and transparent, when combined with
other elements their color can range from pastels to black. Diamond is a
poor conductor of heat and electricity. Until 1955 the only sources of
diamond were found in deposits of volcanic origin. Since then scientists
have found ways to make diamond from graphite and other synthetic
materials. Diamonds of true gem quality are not made in this way (Beggott
3-4).
Graphite is another form of carbon. It occurs as a mineral in
nature, but it can be made artificially from amorphous carbon. One of the
main uses for graphite is for its lubricating qualities. Another is for
the "lead" in pencils. Graphite is used as a heat resistant material and
an electricity conductor. It is also used in nuclear reactors as a
moderator (Kinoshita 119-127).
Amorphous carbon is a deep black powder that occurs in nature as a
component of coal. It may be obtained artificially from almost any organic
substance by heating the substance to very high temperatures without air.
Using this method, coke is produced from coal, and charcoal is produced
from wood. Amorphous carbon is the most reactive form of carbon. Because
amorphous carbon burns easily in air, it is used as a combustion fuel. The
most important uses for amorphous carbon are as a filler for rubber and as
a black pigment in paint (WWW 2).
There are two kinds of carbon compounds. The first is inorganic.
Inorganic compounds are binary compounds of carbon with metals or metal
carbides. They have properties ranging from reactive and saltlike, as found
with metals such as sodium, magnesium, and aluminum, to unreactive and
metallic, as with titanium and niobium (Beggott 4).
Carbon compounds containing nonmetals are usually gases or liquids
with low boiling points. Carbon monoxide, a gas, is odorless, colorless,
and tasteless. It forms during the incomplete combustion of carbon
(Kinoshita 215-223). It is highly toxic to animals because it inhibits the
transport of oxygen in the blood by hemoglobin (WWW 2). Carbon dioxide is
a colorless, almost odorless gas that is formed by the combustion of
carbon. It is a product that results from respiration in most living
organisms and is used by plants as a source of carbon. Frozen carbon
dioxide, known as dry ice, is used as a refrigerant. Fluorocarbons, such
as Freon, are used as refrigerants (Kinoshita 225-226).
Organic compounds are those compounds that occur in nature. The
simplest organic compounds consist of only carbon and hydrogen, the
hydrocarbons. The state of matter for organic compounds depends on how
many carbons are contained in it. If a compound has up to four carbons it
is a gas, if it has up to 20 carbons it is a liquid, and if it has more
than 20 carbons it is a solid (Kinoshita 230-237).
The carbon cycle is the system of biological and chemical processes
that make carbon available to living things for use in tissue building and
energy release (Kinoshita 242). All living cells are composed of proteins
consisting of carbon, hydrogen, oxygen, and nitrogen in various
combinations, and each living organism puts these elements together
according to its own genetic code. To do this the organism must have these
available in special compounds built around carbon. These special
compounds are produced only by plants, by the process of photosynthesis.
Photosynthesis is a process in which chlorophyll traps and uses energy from
the sun in the form of light. Six molecules of carbon dioxide combine with
six molecules of water to form one molecule of glucose (sugar). The
glucose molecule consists of six atoms of carbon, twelve of hydrogen, and
six of oxygen. Six oxygen molecules, consisting of two oxygen atoms each,
are also produced and are discharged into the atmosphere unless the plant
needs energy to live. In that case, the oxygen combines with the glucose
immediately, releasing six molecules of carbon dioxide and six of water for
each molecule of glucose (Beggott 25-32). The carbon cycle is then
completed as the plant obtains the energy that was stored by the glucose.
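Written out as a balanced equation (standard chemistry, consistent with the molecule counts given above): 6CO2 + 6H2O ----> C6H12O6 + 6O2, that is, six carbon dioxide and six water molecules yield one glucose molecule and six oxygen molecules.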
The length of time required to complete the cycle varies. In plants
without an immediate need for energy, the chemical processes continue in a
variety of ways. By reducing the hydrogen and oxygen content of most of
the sugar molecules by one water molecule and combining them to form large
molecules, plants produce substances such as starch, inulin , and fats
and store them for future use. Regardless of whether the stored food is
used later by the plant or consumed by some other organism, the molecules
will ultimately be digested and oxidized, and carbon dioxide and water will
be discharged. Other molecules of sugar undergo a series of chemical
changes and are finally combined with nitrogen compounds to form protein
substances, which are then used to build tissues (WWW 2).
Although protein substances may pass from organism to organism,
eventually these too are oxidized and form carbon dioxide and water as
cells wear out and are broken down, or as the organisms die. In either
case, a new set of organisms, ranging from fungi to the large scavengers,
use the waste products or tissues for food, digesting and oxidizing the
substances for energy release (WWW 1).
At various times in the Earth's history, some plant and animal
tissues have been protected by erosion and sedimentation from the natural
agents of decomposition and converted into substances such as peat,
lignite, petroleum, and coal. The carbon cycle, temporarily interrupted in
this manner, is completed as fuels are burned, and carbon dioxide and water
are again added to the atmosphere for reuse by living things, and the solar
energy stored by photosynthesis ages ago is released (Kinoshita 273-275).
Almost everything around us today has some connection with carbon
or a carbon compound. Carbon is in every living organism. Without carbon
life would not exist as we know it.
Works Cited
1. Beggott, Jim. "Great Balls of Carbon." New Scientist, July 6, 1991.
2. Kinoshita, Kim. Carbon Compounds. New York: Random, 1987. 119-275.
3. WWW. "Carbon." http://www.usc.edu/chem/carbon.html, 1995.
4. WWW. "Carbon Compounds." http://www.harvard.edu/depts/chem/carbon.html, 1995.
f:\12000 essays\sciences (985)\Chemistry\The History Use and Effectiveness of Medicinal Drugs.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The History, Use, and Effectiveness of Medicinal Drugs
I. The History, Use, and Effectiveness of Medicinal Drugs
   A. Introduction
II. Aspirin
   A. Its Origin
   B. Dosages
   C. Relative Effectiveness
   D. Side Effects
   E. Alternate Treatment
III. Sulfa Drugs
   A. Its Origin
   B. Dosages
   C. Relative Effectiveness
   D. Side Effects
   E. Alternate Treatment
IV. Antibiotics
   A. Its Origin
   B. Dosages
   C. Relative Effectiveness
   D. Side Effects
   E. Alternate Treatment
V. Antihistamines
   A. Its Origin
   B. Dosage and Use
   C. Relative Effectiveness
   D. Side Effects
   E. Alternate Treatment
VI. History, Use and Effectiveness of Vitamins and Nutrient Supplements
VII. Future Prospects and Trends in Pharmacology
VIII. Recipe
Endnotes
Bibliography
THE HISTORY, USE, AND EFFECTIVENESS
OF MEDICINAL DRUGS
The science and art of treating, diagnosing, and preventing disease is known as
the field of Medicine. In ancient times Medicine was a vague field, mostly bound up
with magic and superstition; it was not like our modern medical system of scientific
analysis.
Early Amputation Tools
Shown here are the contents of a case of amputation instruments dating from about 1800.
Within medicine the most crucial component, besides the professional Doctors,
Nurses and Pharmacologists, is the drugs that make it possible for millions of
humans every day to overcome their ailments. Within the field of Medicine,
Pharmacology is the study and methodology behind the actions of drugs and
their reactions in the human body. Many early treatments didn't actually heal
the patient, but just gave him a slight euphoria that dulled the pain.
In today's culture, the medicines of our ancestors are now considered to be harmful and are
classified as illegal, such as marijuana and opium, which were key in the Chinese and
Native American medical systems. The origins of drugs vary from
common plants (Aspirin, Digitalis, Ergot, Opium, Quinine, Reserpine) to minerals (Boric
Acid, Epsom Salts, Iodine) to synthetic compounds. The difference between a drug being
helpful and being deadly is all in the dosage, which is determined by the amount of the drug
that is found in the blood; this process of measurement is known as Serum Monitoring. The risk-to-benefit
ratio of drug use is also extremely important: a drug could help one ailment but in
turn cause another, such as the drug niridazole, which helps schistosomiasis but is known to
cause cancer. Even the national government has some control over the regulation of drug
use. Proprietary drugs are sold over the counter and promote less addiction than Ethical
drugs, which can only be obtained legally by a written prescription from a registered doctor.
Jurisdiction over illegal drugs which produce a strong addiction is given to the Drug
Enforcement Administration of the U.S. Department of Justice. The most important point
about doctor-prescribed drugs is that the doctor must be aware of other drugs which the
patient is taking, because one drug alone may be helpful but, administered with another,
could cause adverse side effects preventing recovery for the patient.
ASPIRIN
1"Acetylsalicylic acid, commonly known as aspirin is one of the most widely used analgesics
in the world". Used by Ancient Greeks and Native Americans it was used to reduce fever and
pain and could also be used as an anti-inflammatory agent. It interferes with tissue
contractions of the prostaglandin's which are chemicals involved in the production of
inflammation and pain. It modifies the temperature-regulating portion of the brain,
dilating blood vessels in the skin and increases sweating which in turn cools the body
reducing fever.
Aspirin also prevents the production of thromboxane which plays a key role in coagulation
cascade, which slows blood clotting and is helpful in preventing heart attacks and strokes.
It is derived from the bark of the willow tree, and its activity is produced from chemicals
called salicylates.
2"Charles Gerhardt a French chemist first synthesized the acetyl derivative from the
salicylic acid in 1853 developing the first type of aspirin," but Felix Hoffman a German
chemist was the first to realize its medical value in 1893. Over a long period of constant
use aspirin can cause iron deficiency, gastric ulcers, kidney damage and if given to
children having chicken pox or influenza could cause the risk of contracting the fatal
brain disease known as Reye's syndrome.
Usage of Aspirin varies. Short term use of about 3"6-10 days" is recommended without
physician supervision but long term use requires periodic evaluations and dosage
restrictions.
Side effects also vary with the individual; they could include mild drowsiness, allergic
reactions, skin rash, hives, nasal discharge, stomach irritation, heart burn, nausea,
vomiting, constipation and, in extreme cases, erosion of the stomach lining, activation of a
peptic ulcer, bone marrow depression, hepatitis, and kidney damage. Overdosing on this drug
produces side effects such as stomach distress, nausea, vomiting, ringing in the ears,
sweating, stupor, deep and rapid breathing, twitching, and convulsions.
Aspirin was the first non-steroidal anti-inflammatory drug (NSAID) but by far not the
last. 4"One and a half million Americans suffer from heart attacks each year and about
200,000 suffer from heart related deaths". Aspirin helps millions of people each year
because it prevents premature blood clotting. No other NSAID can compare with the price or
efficacy of aspirin, but ibuprofen can come very close. Where aspirin might take up to 12
doses to relieve pain over a long period, a strong dosage of ibuprofen could help the
patient in one dose.
Even though ibuprofen works as well as aspirin, care must be taken with its usage; although
it does not irritate the stomach as much as aspirin, it accelerates damage to the
kidneys. Another difference between aspirin and ibuprofen is price. 5"A 30 day supply of
aspirin could cost $3.95 where a 30 day supply of ibuprofen could cost $22.95".
New studies suggest that aspirin also has an effect on migraine headaches, cataracts,
gallstones, diabetic eye problems, insomnia, weight loss for women, wheat intolerance,
leprosy and even hip replacement complications. Aspirin plays an important role in keeping
our bodies resistant against various illnesses and helping in times of injury.
SULFA DRUGS
Sulfonamides, the chemical name for sulfa drugs, were the first chemical compounds to
provide safe and effective treatment for most common bacterial infections. Before the use of
penicillin after the mid 1940's, sulfa drugs played a major role in antibacterial treatment,
which resulted in a sharp decrease in deaths due to such bacterial infections. In today's
modern medical system sulfa drugs are used to treat patients with urinary tract infections.
Sulfur
Sulfur in its natural form is a tasteless, odorless, light yellow solid, once forcefully fed
to children in the belief that it was good for their health. Sulfur compounds, found in
dairy products and eggs, are an essential dietary ingredient.
Instead of killing bacteria, sulfa drugs prevent them from multiplying, making it easier for
the body's natural defenses to overcome and destroy them. Bacteria require certain
chemicals known as 6"para-aminobenzoic acids" to multiply; sulfa drugs resemble the chemical
structure of the acids and can be absorbed by the bacteria. The sulfa drugs combine with
the outer shells of the bacteria, therefore not allowing the real acids to penetrate.
Not all bacteria are responsive to sulfonamides, and they have to be screened by the physician to
see if it is necessary to take more serious action. Sulfa drugs can be taken orally, which
is most common, or by an injection just beneath the skin. In former medical history they
were used to treat pneumonia, dysentery, blood poisoning, cellulitis, bubonic plague, and
conjunctivitis.
Since the recognition of penicillin as an effective bacterial assailant, and with some bacteria
becoming resistant to sulfonamides, physicians have been less likely to prescribe them since
the late 1940's. The combination of sulfamethoxazole and trimethoprim has given a new
usage for sulfa drugs; now they can be used for such ailments as middle ear infections,
shigellosis and recurring urinary tract infections.
7"Paul Gelmo in 1908 discovered the first sulfa drug" accidentally while looking for dies
to better color woolen clothing unaware its future lye in the medical profession. In 8"1953
a German pathologist named Gerhard Domagk" reported that this dye killed streptococcal
bacteria in mice leading to the first research in to the bacteria fighting drug.
Major problems included with the first line of the drug sulfanilamide were included in the
administration of the drug. It sometimes crystallized in the urine of the patient causing
kidney damage. Later development of water soluble sulfa drugs solved the problem of
crystallization in the urine and gave the hope of a longer life span to people living in
the 1930's.
ANTIBIOTICS
In the ancient language of Greek the term antibiotic meant 8"against life". Antibiotics are
chemical substances produced by one organism that in turn are destructive to another. This
process traditionally has been called antibiosis and is the opposite of symbiosis. An
antibiotic is a type of chemotherapeutic agent; it has a toxic effect on certain types of
disease-producing microorganisms without acting dangerously on the patient. Some
chemotherapeutic agents differ from antibiotics in that they are not secreted by
microorganisms, as are antibiotics, but rather are made synthetically in a chemical
laboratory. 9"Alternately examples are quinine, used against malaria; arsphenamine, used
against syphilis; the sulfa drugs, used against a wide variety of diseases, notably
pneumonia; and the quinolones, used against hospital-derived infections (zoonoses)". A few
antibiotics, among them penicillin and chloramphenicol, have now been produced
synthetically also. The first observation of what would now be called an antibiotic effect
was made in the 10"19th century by the French chemist Louis Pasteur", who discovered that
certain saprophytic bacteria can kill anthrax germs. Around the year 11"1900 the German
bacteriologist Rudolf von Emmerich isolated a substance called pyocyanase", which can kill
the germs of cholera and diphtheria in the test tube. It was not useful, however, in curing
disease. In the 12"1920s the British bacteriologist Sir Alexander Fleming, who later
discovered penicillin", found a substance called lysozyme in many of the secretions of the
body such as tears and sweat, and in certain other plant and animal substances. Lysozyme
has strong antimicrobial activity, but mainly against harmless bacteria.
(Sir Alexander Fleming)
Discovery of Penicillin
The research of Alexander Fleming in 1928 led to the discovery of penicillin, an important
antibiotic derived from the mold Penicillium notatum. Penicillin is effective against a wide
range of disease-causing bacteria. It acts by killing bacteria directly or by inhibiting
their growth.
Penicillin, the archetype of antibiotics, was discovered by accident in 13"1928 by Fleming",
who showed its effectiveness in laboratory cultures against many disease-producing
bacteria, such as those that cause gonorrhea and certain types of meningitis and bacteremia
(blood poisoning); however, he performed no experiments on animals or humans. Penicillin
was first used on humans by the British scientists 14"Sir Howard Florey and Ernst Chain
during the 1940-41 winter".
The first antibiotic to be used in the treatment of human diseases was tyrothricin (one of
the purified forms of which was called gramicidin), which was isolated from certain soil
bacteria by the American bacteriologist 15"René Dubos in 1939". This substance was too
toxic for general use, but it is employed in the external treatment of certain infections.
Other antibiotics produced by actinomycetes, filamentous and branching bacteria occurring
in soil, have proved more successful. One of these, streptomycin, discovered in 15"1944 by
the American microbiologist Selman Waksman and his associates", is effective against many
diseases, including several in which penicillin is useless, especially tuberculosis. Since
then, such antibiotics as chloramphenicol, the tetracyclines, erythromycin, neomycin,
nystatin, amphotericin, cephalosporins, and kanamycin have been developed and may be used
in the treatment of infections caused by some bacteria, fungi, viruses, rickettsia, and
other microorganisms. In clinical treatment of infections, the causative organism must be
identified and the antibiotics to which it is sensitive must be determined in order to
select an antibiotic with the greatest probability of killing the infecting organism.
Recently, strains of bacteria have arisen that are resistant to commonly used antibiotics;
for example, gonorrhea-causing bacteria that high doses of penicillin are not able to
destroy may transfer this resistance to other bacteria by exchange of genetic structures
called plasmids. Some bacteria have become simultaneously resistant to two or more
antibiotics by this mechanism. New antibiotics that circumvent this problem, such as the
quinolones, are being developed.
The cephalosporins, for instance, kill many of the same organisms that penicillin does, but
they also kill strains of those bacteria that have become resistant to penicillin. Often the
resistant organisms arise in hospitals, where antibiotics are used most often, especially to
prevent infections from surgery.
Another problem in hospitals is that many old and very ill patients develop infections from
organisms that are not pathogenic in healthy persons, such as the common intestinal
bacterium Escherichia coli. New antibiotics have been synthesized to combat these
organisms. Fungus infections have also become more common with the increasing use of
chemotherapeutic agents to fight cancer, and more effective antifungal drugs are being
sought. The search for new antibiotics continues in general, as researchers examine soil
molds for possible agents. Among those found in the 1980s, for example, are the monobactams [16], which may also prove useful against hospital infections. Antibiotics are found in other sources as well, such as the family of magainins, discovered in the late 1980s in frogs; although untested in humans as yet, they hold broad possibilities [17].
Antibiotics have also been used effectively to foster growth in animals. Concern has
arisen, however, that this widespread use of antibiotics in animal feed can foster the
emergence of antibiotic-resistant organisms that may then be transmitted to human beings.
(Pg 14)
ANTIHISTAMINES
Antihistamines are drugs that block the action of histamine. Histamine, also known as histamine phosphate, is an amine (beta-imidazolyl-ethylamine, ergamine, or ergotidime) that is
a normal constituent of almost all animal body cells. Histamine is also found in small
quantities in ergot and purified meat products and is produced synthetically for medicinal
purposes. In the body, it is synthesized in a type of leukocyte called a basophil or mast
cell. In response to certain stimuli these cells release histamine, which immediately
effects a dilation of the blood vessels. This dilation is accompanied by a lowering of
blood pressure and an increased permeability of the vessel walls, so that fluids escape
into the surrounding tissues. This reaction may result in a general depletion of vascular
fluids, causing a condition known as histamine poisoning or histamine shock. Allergic
reactions in which histamine is released, resulting in the swelling of body tissue, show
similarities to histamine poisoning; the two may be basically similar, and the two
conditions are treated similarly. The release of histamine might also be partly responsible
for difficult breathing during an asthma attack. In the 1930s the Italian pharmacologist Daniel Bovet (1907-1972), working at the Pasteur Institute in Paris [18], discovered that certain chemicals counteracted the effects of histamine in guinea pigs. The first antihistamines were too toxic for use on humans, but by 1942 they had been modified for use in the treatment of allergies [19]. More than 25 antihistamine drugs are now available.
(Pg 15)
Histamine also causes contraction of involuntary muscles, especially of the genital tract
and gastrointestinal canal, with an accompanying secretion by associated glands. Because
histamine stimulates the flow of gastric juices, it is used diagnostically in patients with
gastric disturbances. One drug effective in treating gastric ulcers acts by antagonizing
the action of histamine. The ability of the body to localize infections may be due to the
secretion of histamine and the subsequent increased local blood supply and increased
permeability of the blood vessels. Antihistamines are used primarily to control symptoms of
allergic conditions such as hay fever. They alleviate runny nose and sneezing and to a
lesser extent, minimize conjunctivitis and breathing difficulties. Antihistamines can also
alleviate itching and rash caused by food allergy. Chemically, antihistamines comprise
several classes and a person who does not obtain relief from one type may benefit from
another. Side effects of these drugs can include drowsiness, loss of concentration, and
dizziness. People taking antihistamines should not drink alcoholic beverages or perform
tasks requiring mental alertness, such as driving. A few antihistamines, such as
terfenadine and astemizole, are nonsedating. Although antihistamines are included in many
over-the-counter cold remedies, their usefulness in such preparations is questionable.
(Pg 16)
Antihistamines may relieve symptoms of allergy accompanying a cold, or they may have an
anticholinergic effect that dries cold secretions, but they do not have any influence on
viral infections, which are the cause of colds. Moreover, the drying effect may be undesirable, especially for persons with bronchial infection, glaucoma, or urinary tract difficulties. Although there are not many alternative drugs that have the same properties as antihistamines, some non-drug treatments are also effective against allergies. The use of High-Efficiency-Particulate-Arresting (HEPA) filters eliminates microscopic particles which cause allergies. The use of mattress covers decreases the reaction to dust mites in the mattress itself. These treatments are not equivalent to drug use but could decrease the amount of allergenic agents in the household air. Vitamin C also plays a role in the elimination of allergic reactions. Researchers at the University of California have found that patients who suffer from atopic dermatitis benefited from large dosages of vitamin C [20].
(Pg 17)
THE HISTORY, USE AND EFFECTIVENESS OF
VITAMINS AND NUTRIENT SUPPLEMENTS
A Vitamin is any organic compound required by the body in small amounts for metabolism, to
protect health, and for proper growth in children. Vitamins also assist in the formation of
hormones, blood cells, the chemicals of the nervous-system, and genetic material. The
various vitamins are not chemically related, and most differ in their physiological
actions. They generally act as catalysts, combining with proteins to create metabolically
active enzymes that in turn produce hundreds of important chemical reactions throughout the
body. Without vitamins, many of these reactions would slow down or stop. The intricate ways
in which vitamins act on the body, however, are still far from clear. The 13
well-identified vitamins are classified according to their ability to be absorbed in fat or
water. The fat-soluble vitamins A, D, E, and K are generally consumed along with
fat-containing foods, and because they can be stored in the body's fat, they do not have to
be consumed every day. The water-soluble vitamins, the eight B vitamins and vitamin C,
cannot be stored and must be consumed frequently, preferably every day. The body can manufacture only vitamin D; all others must be derived from the diet. Lack of them causes a wide range of metabolic and other dysfunctions. In the U.S., since 1940, the Food and Nutrition Board of the National Research Council has published recommended dietary allowances for vitamins, minerals, and other nutrients [21]. Expressed in milligrams or
international units for adults and children of normal health, these recommendations are
useful guidelines not only for professionals in nutrition
(Pg 18)
but also for the growing number of families and individuals who eat irregular meals and rely
on prepared foods, many of which are now required to carry nutritional labeling.
A well-balanced diet contains all the necessary vitamins, and most individuals who follow
such a diet can correct any previous vitamin deficiencies. However, persons who are on
special diets, who are suffering from intestinal disorders that prevent normal absorption
of nutrients, or who are pregnant or lactating may need particular vitamin supplements to
bolster their metabolism. Beyond such real needs, vitamin supplements are also often
believed to offer cures for many diseases, from colds to cancer; but in fact the body
quickly eliminates most of these preparations without absorbing them. In addition, the
fat-soluble vitamins can block the effect of other vitamins and even cause severe poisoning
when taken in excess. Vitamin A is a pale yellow primary alcohol derived from carotene. It
affects the formation and maintenance of skin, mucous membranes, bones, and teeth, vision,
and reproduction. An early deficiency symptom is night blindness which is the difficulty in
adapting to darkness. Other symptoms are excessive skin dryness, lack of mucous membrane
secretion, causing susceptibility to bacterial invasion, and dryness of the eyes due to a
malfunctioning of the tear glands, a major cause of blindness in children in developing
countries. The body obtains vitamin A in two ways. One is by manufacturing it from
carotene, a vitamin precursor found in such vegetables as carrots, broccoli, squash,
spinach, kale, and sweet potatoes. The other is by absorbing ready-made vitamin A from
plant-eating organisms. In animal form, vitamin A
(Pg 19)
is found in milk, butter, cheese, egg yolk, liver, and fish-liver oil. Although one-third of
American children are believed to consume less than the recommended allowance of vitamin A,
sufficient amounts can be obtained in a normally balanced diet rather than through
supplements. Excess vitamin A can interfere with growth, stop menstruation, damage red blood
corpuscles, and cause skin rashes, headaches, nausea, and jaundice. The B vitamins, known also as the vitamin B complex, are fragile, water-soluble substances, several of which are particularly important to carbohydrate metabolism.
Thiamine, or vitamin B1, a colorless, crystalline substance, acts as a catalyst in
carbohydrate metabolism, enabling pyruvic acid to be absorbed and carbohydrates to release
their energy. Thiamine also plays a role in the synthesis of nerve-regulating substances.
Deficiency in thiamine causes beriberi, which is characterized by muscular weakness,
swelling of the heart, and leg cramps and may, in severe cases, lead to heart failure and
death. Many foods contain thiamine, but few supply it in concentrated amounts. Foods
richest in thiamine are pork, organ meats such as liver, heart, and kidney, brewer's yeast,
lean meats, eggs, leafy green vegetables, whole or enriched cereals, wheat germ, berries,
nuts, and legumes. Milling of cereal removes those portions of the grain richest in
thiamine; consequently, white flour and polished white rice may be lacking in the vitamin.
Widespread enrichment of flour and cereal products has largely eliminated the risk of
thiamine deficiency, although it still occurs today in nutritionally deficient alcoholics.
Riboflavin, or vitamin B2, like thiamine, serves as a coenzyme, one that must combine with
a portion of another enzyme to be effective, in the metabolism of carbohydrates, fats, and,
especially, respiratory proteins. It
(Pg 20)
also serves in the maintenance of mucous membranes. Riboflavin deficiency may be complicated
by a deficiency of other B vitamins; its symptoms, which are not as definite as those of a
lack of thiamine, are skin lesions, especially around the nose and lips, and sensitivity to
light. The best sources of riboflavin are liver, milk, meat, dark green vegetables, whole
grain and enriched cereals, pasta, bread, and mushrooms.
Niacin, or vitamin B3, also works as a coenzyme in the release of energy from nutrients. A
deficiency of niacin causes pellagra, the first symptom of which is a sunburnlike eruption
that breaks out where the skin is exposed to sunlight. Later symptoms are a red and swollen
tongue, diarrhea, mental confusion, irritability, and, when the central nervous system is
affected, depression and mental disturbances. The best sources of niacin are liver,
poultry, meat, canned tuna and salmon, whole grain and enriched cereals, dried beans and
peas, and nuts. The body also makes niacin from the amino acid tryptophan. Megadoses of
niacin have been used experimentally in the treatment of schizophrenia, although no
experimental proof has been produced to show its efficacy. In large amounts it reduces
levels of cholesterol in the blood, and it has been used extensively in preventing and
treating arteriosclerosis. Large doses over long periods cause liver damage. Pyridoxine, or
vitamin B6, is necessary for the absorption and metabolism of amino acids. It also plays
roles in the use of fats in the body and in the formation of red blood cells. Pyridoxine
deficiency is characterized by skin disorders, cracks at the mouth corners, smooth tongue,
convulsions, dizziness, nausea, anemia, and kidney stones. The best sources of pyridoxine
are whole (but not enriched) grains, cereals, bread, liver, avocados, spinach, green beans,
and bananas.
(Pg 21)
Pyridoxine is needed in proportion to the amount of protein that is consumed.
Cobalamin, or vitamin B12, one of the most recently isolated vitamins, is necessary in
minute amounts for the formation of nucleoproteins, proteins, and red blood cells, and for
the functioning of the nervous system. Cobalamin deficiency is often due to the inability
of the stomach to produce intrinsic factor, a glycoprotein which aids in the absorption of this vitamin. Pernicious anemia results, with its characteristic symptoms of ineffective production of red blood cells, faulty myelin (nerve sheath) synthesis, and loss of epithelium, the membrane lining of the intestinal tract. Cobalamin is obtained only from animal sources
such as liver, kidneys, meat, fish, eggs, and milk. Vegetarians are advised to take vitamin
B12 supplements. Folic acid, or folacin, is a coenzyme needed for forming body protein and
hemoglobin; its deficiency in humans is rare. Folic acid is effective in the treatment of
certain anemias and sprue. Dietary sources are organ meats, leafy green vegetables,
legumes, nuts, whole grains, and brewer's yeast. Folic acid is lost in foods stored at room
temperature and during cooking. Unlike other water-soluble vitamins, folic acid is stored
in the liver and need not be consumed daily. Pantothenic acid, another B vitamin, plays a
still-undefined role in the metabolism of proteins, carbohydrates, and fats. It is abundant
in many foods and is manufactured by intestinal bacteria as well. Biotin, a B vitamin that
is also synthesized by intestinal bacteria and widespread in foods, plays a role in the
formation of fatty acids and the release of energy from carbohydrates. Its deficiency in
humans is unknown.
Ascorbic acid, or vitamin C, is a well-known vitamin important in the formation and maintenance of collagen, the
protein that supports many body structures and plays a major
(Pg 22)
role in the formation of bones and teeth. It also enhances the absorption of iron from foods
of vegetable origin. Scurvy is the classic manifestation of severe ascorbic acid deficiency.
Its symptoms are due to loss of the cementing action of collagen and include hemorrhages,
loosening of teeth, and cellular changes in the long bones of children. Assertions that
massive doses of ascorbic acid prevent colds and influenza have not been borne out by
carefully controlled experiments. In other experiments, however, ascorbic acid has been
shown to prevent the formation of nitrosamines which are compounds found to produce tumors
in laboratory animals and possibly also in humans. Although unused ascorbic acid is quickly
excreted in the urine, large and prolonged doses can result in the formation of bladder and
kidney stones, interference with the effects of blood-thinning drugs, destruction of B12,
and the loss of calcium from bones. Sources of vitamin C include citrus fruits, fresh
strawberries, cantaloupe, pineapple, and guava. Good vegetable sources are broccoli,
Brussels sprouts, tomatoes, spinach, kale, green peppers, cabbage, and turnips.
This vitamin is necessary for normal bone formation and for retention of calcium and
phosphorus in the body. It also protects the teeth and bones against the effects of low
calcium intake by making more effective use of calcium and phosphorus. Also called the
sunshine vitamin, vitamin D is obtained from egg yolk, liver, tuna, and vitamin-D fortified
milk. It is also manufactured in the body when sterols, which are commonly found in many
foods, migrate to the skin and become irradiated. Vitamin D deficiency, or rickets, occurs
f:\12000 essays\sciences (985)\Chemistry\The Life and Career of Lord Kelvin William Thomson.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LORD KELVIN (1824 - 1907)
William Thomson (later Lord Kelvin) was born June 26, 1824 in Belfast, Ireland, and was part of a large family; his mother died when he was six. His father taught Kelvin and his brothers mathematics
to a level beyond that of university courses of the time.
Kelvin was somewhat of a genius, and had his first papers published in 1840. These papers
contained an argument defending the work of Fourier (Fourier transforms), which at the time was being
heavily criticized by British scientists. He proved Fourier's theories to be right. In 1839 Kelvin wrote an essay which he called "An Essay on the Figure of the Earth." He used this essay as a source and inspiration for ideas all his life, and for it he won an award from the University of Glasgow in Scotland. Kelvin
remained at the University for the rest of his working life.
Kelvin first defined the absolute temperature scale in 1847, which was later named after him. In
1851 he published the paper, "On the Dynamical Theory of Heat", and in the same year was elected to the
Royal Society. This work contained his ideas and version of the second law of thermodynamics as well as
James Joule's idea of the mechanical equivalent of heat. This idea claimed that heat and motion were
combined, which now is taken as second nature. At the time, heat was thought to have been a fluid of
some kind.
Kelvin also maintained an interest in the age of the sun and calculated values for it. He assumed
that the sun produced its radiant energy from the gravitational potential of matter falling into the sun. In
collaboration with Hermann von Helmholtz, he calculated and published in 1853 a value of 50 million
years. He also had an interest in the age of the earth, and he calculated that the earth was a maximum of
400 million years old. These calculations were based on the rate of cooling of a globe of matter after first
solidification occurs ( such as the beginning of the earth). He also calculated that molecular motion stops
at -273 degrees Celsius. He called this temperature absolute zero.
Kelvin started work in 1854 on the project of laying transatlantic cables. His idea was that electrical current flow was similar to heat flow, and by applying his ideas on heat flow he helped solve the problem of transmitting electrical signals over long distances. In 1866, Kelvin succeeded in laying the
first successful transatlantic cable.
Kelvin invented the mirror galvanometer, which he patented in 1858 as a long distance telegraph receiver. Other inventions by Kelvin include the flexible wire conductor, or 'flex', a law for calculating how much a cable costs in respect of its electrical losses, and a gyro-compass, among a host of others.
In 1889 Kelvin retired from the university after having been professor there for 53 years. In the
year 1890 he became president of the Royal Society and held that position until 1895. He was created
Baron Kelvin of Largs in 1892 and in 1902 received the Order of Merit. After a long and successful
career, publishing many papers and being granted numerous patents, Kelvin died at his home on
December 17, 1907 in his estate close to Largs, Scotland. He is buried at Westminster Abbey, London.
Interesting Information
· William Thomson went to the University of Glasgow at age 10.
· He had his first papers published at the ages of 16 and 17.
· In 1839 at the age of 15 he wrote a major essay called "An Essay on the Figure of the Earth".
· At the age of 22 Thomson was elected to professor of physics at the University of Glasgow.
· He was Knighted in 1866 by Queen Victoria for his work.
· The transatlantic cable laying expeditions made Thomson an extremely wealthy man.
· He published 661 papers in his career.
· He patented 70 inventions in his lifetime.
· In 1892 he was created Baron Kelvin of Largs.
· In 1902 he received the Order of Merit.
f:\12000 essays\sciences (985)\Chemistry\The Orgins of Atomic Theory.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Origins of Atomic Theory
By Levi Pulkkinen
There is an eternal human compulsion to unlock the mysteries of our lives and our world. This search for knowledge has guided us to many beneficial new understandings. It has led us into this new age where information is its own reward, an age where enlightenment is an end, not simply a means to an end. Enlightenment has been the aim of many great people. It has inspired many scientists and artists to construct articles of infinite beauty and value.
At times this quest for understanding has been embraced by entire civilizations, and when an entire society commits to one noble cause only good can come from it. In Ancient Greece there was such a civilization, and even today we use their theories to initiate our scientific and artistic endeavors. All western thought can find its roots in the philosophy and science of the Greeks; even the way we see the world is influenced by the ideologies of Ancient Greece. The Greeks were the first to seek a greater understanding of the world, to know "why" we are, not just "what" we are.
The Greeks invented science and explored it in its truest form, philosophy. Through the years we have developed tools that we hope can prove or disprove various hypotheses, to further our understanding of any number of things. We divide science into categories and then sub-divide it even further, until we can hide the connections and pretend that they really are separate. The difference between psychology and physics is not as extreme as one would believe from reading their definitions. Though the means are different, the goal is the same for all science: to increase our understanding of our earthly domain, and to improve ourselves. The Greeks created this guiding principle more than two thousand years ago.
Greek atomic theory was not the work of a single person; in fact, it was a product of many great minds. There were many fundamental ideas that formed the basis for their theory on the makeup of the universe. One hundred and forty years before Socrates there was a lesser-known scholar named Thales, and he was the Father of Philosophy.
Thales was from a part of Greece called Miletus, and it was for his skill as an engineer, not as a philosopher, that he was recognized during his life. Before his time, the Greeks had no clear concept of matter, and did not use science to broaden their understanding of the universe. Because of the focus on the practical that was prevalent during that time, it was not until years later that Thales' scientific genius was recognized by the scholars of Greece.
Thales re-invented science, changing it into what we see today. Without Thales there would have been no Einstein or Bohr; there would have been no Apollo and no penicillin. But Thales' influence was not confined to the more technical sciences, such as chemistry. He was the first scholar to explore the idea of the human soul, that a body is more than a machine. He was the first to see that, for most people at least, life is more than a physical condition; it also involves spiritual fulfillment and growth. From this theory sprang social-scientific disciplines like psychology and anthropology.
Thales is most famous for his statement that "all things are water," water meaning "liquid" rather than "H2O". Through the years we have found the literal meaning to be untrue, but at the time its meaning was earth-shattering. Before Thales' statement it was believed that things were unchanging, and that which could not be immediately or adequately explained was supernatural. Thales felt that all things were in a state of constant flux, and that all things were uniform in their make-up but different in their order and number. This would be proven thousands of years later and become the basis for modern chemistry.
Roughly one hundred years passed before any of the great thinkers of Athens looked further into Thales' theories on matter. They focused on the philosophical aspects of the world, the hidden meaning of life and other timeless questions. Socrates and his cohorts formulated grand theories about the human soul and psyche; the search for knowledge of self consumed their thoughts and their writings. Their focus was on the building rather than the bricks.
Democritus was different. Born in the city of Abdera, he traveled to Athens when he was a young man hoping to speak with Anaxagoras, a well-known scientist. When he arrived in Athens he was unable to talk with Anaxagoras, who thought his time far too valuable to be wasted on a man with no reputation. His statement "I came to Athens, and no one knew me" has been an anthem for many unrecognized geniuses.
The years passed slowly as Democritus lived and worked in obscurity. He referred back to other scientists, hoping to glean a bit of inspiration from their work. As he read he became intrigued by a concept first envisioned by Empedocles, a philosopher from the island of Sicily who believed that all things are composed of smaller particles. Democritus took this idea and ran with it, developing the first atomic theory. Democritus' theory contained four basic ideas: matter is made up of indivisible particles of the smallest possible size; empty space exists between these particles in which they move; the atoms differ in size and shape but not content; and all change is the result of atoms bumping into other atoms.
Democritus came to the first conclusion because he saw that nothing could be divided past a certain point. An example would be a block of stone. It could be ground into a fine sand, and ground again even more finely, but eventually one would reach a point where it could no longer be broken down. He believed that this lowest form of a substance was the basic matter, and that matter in that state was atomos, or "indivisible".
With the passage of time we have built instruments that can see deep into the heart of matter, much farther than the naked eye, which was the Greeks' only tool. We have divided the atom into electrons, protons, and neutrons; we know of its power and promise. We may think that Democritus was wrong, for he believed that the atom was the beginning of the universe. In a sense he was also right, because today we believe that the atom's components are indivisible. We think that they are the beginning of the universe, but who is to say that in two thousand years people will not be writing about how short-sighted we were in our assumption that because we could not see any division in the proton there was not one. Would it make the work of Rutherford or Bohr any less important?
Democritus' second principle, that there is space between atoms, has been proven nearly universally correct. Electrons glide through empty space orbiting the nucleus, and between atoms there exists a gap where nothing exists at all.
It must be remembered that although Democritus was a scientist he was also a philosopher, and as a philosopher he always endeavored to find a new cosmic truth. From his observations of atoms he drew the following conclusion about the universe: "Nothing exists but atoms and the void."
The third atomic principle that Democritus developed has been proven correct through modern chemistry. We have found that atoms have the same ingredients, and that it is the order and number of these ingredients that gives an element its characteristics. Democritus did not know of, or even suspect the existence of, anything smaller than an atom; therefore he believed that it was the atom's shape that gives it its qualities.
He also believed that the human senses only picked up the interactions between atoms, and not the atoms themselves. One must remember that at this time the concept of energy was not yet developed, and would not be for quite some time. Democritus believed that there were "fire atoms" and other such things. Scientifically speaking, such thinking was a giant step forward from the days before Thales when it was believed that fire was magical. It is an interesting example of how genius and foolishness can fit together so well.
Democritus' final theory was a reflection of his background in philosophy. He was attempting to answer one of the prevailing questions of philosophy: Why does change exist? He believed that change was the result of an atomic version of bumper cars, with atoms slamming into each other and rebounding from each collision only to strike another atom. He felt the bouncing atoms were physical reflections of the changes in one's soul, or a model of life. Only through interaction was there change, and since interaction cannot be abolished, change cannot be completely stopped.
Thus, Democritus believed, the world is and will always be in a state of constant flux, with nothing remaining constant aside from the fact of constant change.
Two millennia later his ideas would be proven in many different ways. In this instance we see, as we have seen before, that Democritus' idea is not true in the literal sense, but if we look at it more abstractly we begin to see the genius of Democritus. In our search for a greater understanding of our world we look to find limits. When we attempt to reach absolute zero, the point when electrons stop moving and change stops, we find that it takes more energy to do less the closer we get to zero. Calculus shows us that there are lines that cannot be crossed, that one can spend an infinite amount of energy and still not be able to break through the barrier. This illustrates the Greeks' belief that change was unceasing.
In the end it is the simplicity of Democritus' theories that has let them stand the test of time. In actuality all science is simple. The aim is understanding, and that is simple. Science must be examined as a painting is examined; one must not spend so much time looking at the brush strokes that one fails to see the masterpiece. Democritus' and Thales' science was not technical, and their theories were full of holes. In many instances they were wrong, but when it really counted, they were correct. Atoms are divisible and all change is not caused by the rubbing of atoms, but the theories were sound.
We are on the verge of a great new era, with a future free of any kind of limits. We could explode into a brave new age, an age when knowledge is its own reward. The scientific successes we realize today are the result of many civilizations' hard work and commitment to knowledge. We have an undeniable debt to the scientists and philosophers of Ancient Greece, for their thoughts have shaped those of our greatest thinkers. We can only hope that in two thousand more years our achievements will be as influential in the course science takes.
Works Cited
Barnes, Jonathan. The Presocratic Philosophers. Routledge, New York, NY, 1989. Pages 217-222, 342-377, 594.
Cahn, Steven M. Classics of Western Philosophy. Hacket, Indianapolis, IN, 1977. Pages 115-129.
Brumbaugh, Robert S. The Philosophers of Greece. SUNY, Albany, NY, 1981. Pages 11-17, 78-92.
f:\12000 essays\sciences (985)\Chemistry\The Quicksilver.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Quicksilver
Chemistry I
October 25, 1996
One day an ancient alchemist noticed a strange silvery, liquid-like metal. He called several of his colleagues over to admire it. The chemical reaction that formed this "quicksilver," as the alchemists called it, was passed down through the years. Eventually the French chemist Antoine Laurent Lavoisier tested it, proclaimed it a metal, and named it mercury (Hg). Amid strong controversy from scientists around the world, Lavoisier was never given credit until after his death. It was during the late nineteenth and early twentieth centuries that a significant amount of work went into developing a good use for mercury: thermometers. People had been developing thermometers before then, but they were not as accurate as the ones produced around 1900.
In the later twentieth century people developed an increasing "need" for pure gold and silver. European and American scientists developed a new, advanced way to meet it: amalgams. Amalgams are alloys of mercury usually used to extract elements from their various ores. Once the common metal is extracted, the mercury is separated from it through distillation.
Without mercury our world would be much different. We would have different, if any, ways of determining temperature. Mercury is also used in cleaning modern-day swimming pools, in the form of mercury-vapor lamps for sterilization. Mercury can be used in both reconstructing and destroying life in waterways, depending upon the attention people give it. We would have no fast, economical ways of cleaning large pools; no fast, economical way of controlling river clean-ups. Life in our modern-day households would be much, much colder because we would have no way of having an auto-start heater; people would have to turn on their heaters manually. Yet we would also need to look at the positive side of no mercury. We would have little, if any at all, severe river life loss, and therefore little need for the time and effort we spend clearing our water of mercury contamination.
f:\12000 essays\sciences (985)\Chemistry\Through a Narrow Chink An Ethical Dilemma.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Through A Narrow Chink: An Ethical Dilemma
by Pablo Baez
Chemistry 104
Prof. Holme
In 1951 Carl Djerassi, with the Mexican pharmaceutical company Syntex, developed the first oral contraceptive by synthesizing and altering the natural hormone Progesterone into a superpotent, highly effective oral progestational hormone called "norethindrone".
Admittedly, the dynamics and importance of this find were astounding, since before this the only means of contraception was abortion, and even that was not legalized at the time.
The race to produce this synthetic agent was highly competitive, being sought after by many pharmaceuticals throughout the world, and for a small fledgling company in Mexico of all places to find it first only added to the excitement of the achievement.
Yet aside from all this excitement and competitive fervor something great and disturbing was being bypassed. Science, in my view had done something great without looking into the possibilities of where this would lead.
I believe Djerassi, similar to most scientists of his day, was so entranced by the excitement of synthesizing his product and achieving his goal that he did not stop to think of the ramifications of his accomplishment. The ethical dilemma was not explored before hand, and this to me is the great tragedy of most scientific discovery, since I firmly believe each scientist is responsible for that which he creates.
Djerassi does confront a few questions of ethics and morality after the fact.
On page 61, in chapter 6, he reflects on the argument about the use of poor Mexican and Puerto Rican women for preliminary experiments. Is this just another manifestation of exploitation of the poor?
Djerassi says absolutely not.
Yes, the poor are the initial guinea pigs for research, but this is no different from what dentists, barbers, and young surgeons do. All of these groups use the poor to hone their skills, not because of the poor women's ignorance but because middle class, suburbanite, white women are unlikely to volunteer their services for the sake of science.
My main problem with this is that he claims they will not "volunteer" their services. Of course not; they are aware of the possible detrimental effects of such experimentation. This is obviously because they are probably more highly educated than the poor Hispanic women. Poverty often means a lack of good schooling and education. Thus such a group's awareness of scientific studies will most likely be much lower. They probably knew nothing of scientific research at all, let alone how to read a consent form that leaves them without legal recourse.
Djerassi mentions this as well, the idea that he can not offer them consent forms because they can't read.
That seems preposterous to me!
If he can not inform his patients of the possible side effects then what chance do they have at justice if some carelessly administered drug causes them harm?
Coming back to his original argument, he claimed suburbanites were not likely to volunteer their services for the sake of scientific study, but I dare argue the poor women most likely did not volunteer but were asked. Did he ask the suburbanites? I highly doubt it was even proposed.
In chapter 9 Djerassi addresses another question he was often confronted with. "How do you feel about the social outcome of the work?". He answered this with a shrug of his shoulders and a simple, "I couldn't have changed things".
Again, I am disturbed by the flippant manner of his response. Yes, he acknowledged the impact the Pill had on the sexual revolution, but fails to see beyond what has already occurred, claiming powerlessness against the pace of science.
Let me say that he is most likely partially correct. There is very little to be done when science determines to do something and the race begins toward that goal. But to claim oneself unable to have made a difference, especially someone of his intelligence and influence, is remarkably sad.
I firmly believe that the direction of science, though difficult to stop or turn entirely, can be manipulated by those forefront scientists enough to at least seek discovery with a certain social awareness.
This claim of powerlessness is a cop out, clear and simple, and no euphemistic jargon or claim of ignorance will give the victims their normal lives back. This has been the case in nuclear, medical, and chemical research. Invariably someone suffers due to the insincerity of others.
Maybe I am being a bit harsh. Djerassi's Pill did give women a great power, the power to control childbirth, as well as a greater freedom toward sexuality that before this was monopolized by men. But medical ethics and moral responsibility must become wed with research in the minds of scientists for a real change in perspective to occur.
In 1994, my wife came home one day with tears in her eyes after having gone to the Gynecologist for a regular check up. She mumbled through shaky lips the words cervical cancer and something about a biopsy. I was mortified. Somehow, at the young age of 25, my wife had gotten the beginnings of cervical cancer and something had to be done fast. After a few tense days of waiting for the biopsy results we were told she should have cryogenic surgery for the removal of the tissue. It was removed and we were told not to worry.
Inquiring as to how such a young woman could have gotten cancer our doctor said it was a possible side effect of using the same Pill prescription for so long. We had never known this. If we had known we would never have used it!
Personally, that scare was enough to prove to me that scientific research and development must be extraordinarily careful as to what it finds as acceptable risk.
William Blake was quoted by Djerassi as saying in The Marriage of Heaven and Hell: "If the doors of perception were cleansed everything would appear to man as it is, infinite". This infinite view of all means that everything overlaps, interconnects into an almost constant dance between particles, people, and ideas. In other words nothing is independent and of itself. If this simple concept of everything being related were to be assimilated into all that we think and do, imagine the difference it would make. But our problem as Blake continues is: "... man has closed himself up, till he sees all things thro' narrow chinks of his cavern."
Djerassi admits that only late in his life did he begin to widen those chinks. He realized he had not seen all that was there, leading a sheltered life with a somewhat narrow scientific perspective. He sustained social and political attacks about the side effects of the Pill, survived through three marriages, and dealt with the suicide of his depressed daughter. Arguably, he had had a rewarding yet tough life.
But as my own experience with the side effects of the pill shows, his lack of respect for the relationships between science and the rest of the world has cost many dearly.
Yes, he has later in life admitted to his narrow sighted perspective of his younger years, but that still doesn't address the issue that today's scientists are still being trained in the same manner and with the same tunnel vision. Something must be done, and it falls to the senior scientists such as himself to rectify the problem!
f:\12000 essays\sciences (985)\Chemistry\Tin.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tin
Tin's discoverer is unknown, but one thing is known: tin was discovered and used by the ancients, and its discovery was accidental.
Tin has been around for many years. Proof is in the fact that tin is
mentioned in the Old Testament of the Bible.
Tin had a great effect on the world because of its low price, high electric
conductivity, and because tin protects against rust and weak acids in food if the
can is made out of tin instead of aluminum.
Some common compounds of tin are the organotins, combinations of carbon and tin. When tin is combined with carbon, more than 500 organotin compounds can be made. These compounds are used in everyday things such as toothpaste containers, wood, paper, textiles, farm sprays, and hospital disinfectants.
To get pure tin you must first find the ore cassiterite, or tinstone, a dioxide of tin. Before smelting and roasting, the cassiterite must be crushed into a powder to remove the arsenic and sulfur. When you smelt the tin you must heat it with carbon to remove the zinc, copper, bismuth, and iron.
Tin has been used for many things, but its use is dropping rapidly, although tin is still used a lot for plating such things as electrical contacts.
Tin is also used as a protective coating. This protective coating can be as
small as 15/1,000,000 of an inch. This protective coating protects against rust on
steel and other metals. A coating of tin also gives a great look to plain old
steel.
Tin cans for food prevent weak acids from damaging the inside of the can. Not many cans are made of tin anymore; since aluminum started to be used for cans, tin's use has dropped sharply.
Tin is also used to coat staples, pins, bronze bells, pewter pitchers, and many other things.
Another popular tin mixture is tin and lead. Tin and lead make solder for
electric work. Battery contacts in the Black and Decker snake lights are also tin
plated.
A compound tin salt is used to spray onto glass windows to produce
electrically conductive coating for panel lighting and frost free windshields
for cars.
One last use for tin is in the making of glass windows that are made by
floating molten glass on molten tin. This produces a flat piece of glass to be used
as a windows.
Industries basically only use tin for plating, either for electrical contacts or for protective coatings on their metals.
Tin is found in Malaya, Bolivia, Indonesia, Zaire, Thailand, and Nigeria, but almost no tin is found in America, although some tin has been found in Alaska and California.
Tin's basic information: Tin has atomic number fifty on the periodic table of the elements. Tin's atomic symbol is Sn. Tin's atomic weight is about 119 amu (118.69 amu). Tin's electron configuration is 2-8-18-18-4. Tin is in Group 14. Tin has a gray color unless heated, when it turns white. Tin is also a malleable and ductile element. Tin is a very soft metal. Tin melts at about 450 degrees F. Tin's density is about 7.3 grams per cubic centimeter. Tin has 50 protons and, in its most common isotopes, about 69 neutrons.
f:\12000 essays\sciences (985)\Chemistry\Xenon.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
XENON
Xenon is element number 54 on the periodic table of the elements. It has a mass of roughly 131 atomic mass units. There are 77 neutrons and 54 protons in the nucleus of the atom. The symbol for xenon is Xe and it belongs to the family of elements called the noble gases. It is called a noble gas because the valence shell of the atom is full. Xenon is one of the most stable elements on the table. The 54 electrons are arranged so that there are 2 in the 1st shell, 8 in the 2nd, 18 in the 3rd, 18 in the 4th and lastly 8 in the 5th shell. The melting and boiling points of xenon are extremely low. They are -111°C and -107°C respectively.
Xenon and most of the other noble gases were discovered by Sir William Ramsay and M.W. Travers of England in 1898. The two scientists discovered it by accident while experimenting with crude krypton, another noble gas. They were separating the elements in the crude krypton through a process called fractional distillation. Fractional distillation separates two elements that have different boiling points. Basically, when a sample is heated, the element with the lower boiling point leaves first, leaving the other element behind. Krypton was known to have a lower boiling point than xenon. So the scientists could predict that heating the mixture would drive off the krypton first, leaving the xenon behind. After the two scientists separated krypton and xenon, they identified xenon as a new element through the emission spectrum of the gas.
Xenon is used heavily in light bulbs. Many of the bulbs in camera flashes have xenon in them, because they can be used over 10,000 times without burning out, as well as producing a good balance of all colors. Xenon is also used for medical purposes. One anesthetic mixture is made up of 20% oxygen and 80% xenon. Xenon also can be injected or breathed into the body to give clearer M.R.I.s or X-rays. In addition to the uses above, xenon is also in movie projector lamps, advertising lights, and bubble chambers. Bubble chambers are devices used by physicists to detect nuclear radiation. The element is very chemically stable, not radioactive, and generally not harmful to man. Xenon is also nonflammable. It is only when it combines with other elements that xenon becomes hazardous; some xenon compounds are highly reactive. This element accounts for a very minimal amount of the earth's crust. Only about 3x10-9 percent of the earth contains xenon. The element is mostly found in the air, and is only collected through special air separation plants.
In "Science News Magazine," I found an article of interest to me concerning xenon. It stated that scientists have discovered a way to use xenon to make M.R.I.s of lungs come out clearer. After exciting rubidium atoms and adding it to xenon, all a patient has to do is breathe in the xenon to have clearer M.R.I.s. People who breathe in the xenon have results that are 10,00 to 100,000 clearer than other people who didn't breathe in the mixture. Scientists have predicted that this technique will be used much more frequently in the future whenever M.R.I.s are needed to be taken.
In conclusion, doing this report helped me learn a lot about what I think is a relatively unnoticed element. I have learned the uses of xenon as well as how it can be helpful in medical procedures. People should try to learn more about this element, as one day it may help them in life. The many uses of this element make it a very valuable addition to the periodic table of elements.
f:\12000 essays\sciences (985)\Computer\2000 Problem.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fiction, Fantasy, and Fact:
"The Mad Scramble for the Elusive Silver Bullet . . . and the Clock Ticks Away."
Wayne Anderson
November 7, 1996
The year 2000 is practically around the corner, promising a new era of greatness and
wonder . . . as long as you don't own a computer or work with one. The year 2000 is bringing a
Pandora's Box of gifts to the computer world, and the latch is slowly coming undone.
The year 2000 bug is not really a "bug" or "virus," but is more a computer industry
mistake. Many of the PC's, mainframes, and software out there are not designed or
programmed to compute a future year ending in double zeros. This is going to be a costly "fix"
for the industry to absorb. In fact, Mike Elgan, the editor of Windows Magazine, says ". . . the problem could cost businesses a total of $600 billion to remedy." (p. 1)
The fallacy that mainframes were the only machines to be affected was short lived as industry
realized that 60 to 80 million home and small business users doing math or accounting etc. on
Windows 3.1 or older software, are just as susceptible to this "bug." Can this be repaired in
time? For some, it is already too late. A system that is devised to cut an annual federal deficit to
0 by the year 2002 is already in "hot water." Data will become erroneous as the numbers "just
don't add up" anymore. Some PC owners can upgrade their computer's BIOS (or complete
operating system) and upgrade the OS (operating system) to Windows 95, this will set them up
for another 99 years. Older software however, may very well have to be replaced or at the very
least, upgraded.
The year 2000 has become a two-fold problem. One is the inability of the computer to
adapt to the MM/DD/YY issue, while the second problem is the reluctance to which we seem to
be willing to address the impact it will have. Most IS (information system) people are either
unconcerned or unprepared.
Let me give you a "short take" on the problem we all are facing. To save storage space
-and perhaps reduce the amount of keystrokes necessary in order to enter the year to date-most
IS groups have allocated two digits to represent the year. For example, "1996" is stored as "96"
in data files and "2000" will be stored as "00." These two-digit dates will be on millions of files
used as input for millions of applications. This two digit date affects data manipulation,
primarily subtractions and comparisons. (Jager, p. 1) For instance, I was born in 1957. If I ask
the computer to calculate how old I am today, it subtracts 57 from 96 and announces that I'm 39.
So far so good. In the year 2000 however, the computer will subtract 57 from 00 and say that I
am -57 years old. This error will affect any calculation that produces or uses time spans, such as
an interest calculation. Bankers beware!!!
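To make the arithmetic concrete, here is a minimal sketch of the two-digit-year error described above. It is written in Python purely for illustration (the affected systems were largely COBOL), and the function name is hypothetical:

    # Hypothetical sketch of two-digit year arithmetic (Python, illustration only).
    def age_from_two_digit_years(birth_yy, current_yy):
        # Legacy convention: years are stored as two digits, e.g. 1957 -> 57, 2000 -> 0.
        return current_yy - birth_yy

    print(age_from_two_digit_years(57, 96))  # in 1996: prints 39, as expected
    print(age_from_two_digit_years(57, 0))   # in 2000: prints -57, the error described above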
Bringing the problem closer to the home-front, let's examine how the CAPS system is
going to be affected. As CAPS is a multifaceted system, I will focus on one area in particular,
ISIS. ISIS (Integrated Student Information System) has the ability to admit students, register
them, bill them, and maintain an academic history of each student (grades, transcripts, transfer
information, etc.) inside of one system. This student information system has hundreds and
hundreds of references to dates within its OS. This is a COBOL system accessing an ADABAS database. ADABAS is the file and file access method used by ISIS to store and retrieve student records. (Shufelt, p.1) ADABAS has a set of rules for setting up keys to specify
which record to access and what type of action (read, write, delete) is to be performed. The
dates will have to have centuries appended to them in order to remain correct. Their (CAPS)
"fix" is to change the code in the Procedure Division (using 30 as the cutoff >30 century = "19"
<30 century = "20"). In other words, if the year in question is greater than 30 (>30) then it can
be assumed that you are referring to a year in the 20th century and a "19" will be moved to the
century field. If the year is less than 30 (<30) then it will move a "20" to the century field. If
absolutely necessary, ISIS will add a field and a superdescriptor index in order to keep record
retrieval in the order that the program code expects. The current compiler at CAPS will not
work beyond the year 2000 and will have to be replaced. The "temporary fix" (Kludge) just
discussed (<30 or >30) will allow ISIS to operate until the year 2030, when they hope to have
replaced the current system by then.
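The cutoff-of-30 windowing rule that CAPS describes can be written in a few lines. The sketch below is hypothetical Python rather than the actual COBOL Procedure Division code, and it treats a year of exactly 30 as 2030, a boundary the description above leaves ambiguous:

    # Hypothetical sketch of the "cutoff 30" windowing fix (Python, illustration only).
    def expand_two_digit_year(yy):
        # Years greater than 30 are assumed to be 19xx; 30 and below become 20xx.
        if yy > 30:
            return 1900 + yy
        return 2000 + yy

    print(expand_two_digit_year(96))  # 1996
    print(expand_two_digit_year(5))   # 2005
    print(expand_two_digit_year(29))  # 2029 -- the fix only buys time until about 2030

As the essay notes, this is a kludge: it keeps two-digit data usable but simply pushes the failure date out to 2030.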
For those of you with your own home computers, let's get up close and personal. This
problem will affect you as well! Up to 80% of all personal PCs will fail when the year 2000
arrives. More than 80,000,000 PCs will be shut down December 31, 1999 with no problems.
On January 1, 2000, some 80,000,000 PCs will go "belly up!" (Jager, p. 1) These computers
will think the Berlin Wall is still standing and that Nixon was just elected President! There is
however, a test that you can perform in order to see if you are one of the "lucky" minority that do not have a problem with the year 2000 affecting their PC.
First, set the date on your computer to December 31, 1999. Next, set the time to 23:58
hours (if you use a 24 hour clock (Zulu time)) or 11:58 p.m. for 12 hour clocks. Now, Power Off
the computer for at least 3 to 5 minutes. Note: ( It is appropriate at this time to utter whatever
mantras or religious chants you feel may be beneficial to your psyche ). Next, Power On the
computer, and check your time and date. If it reads January 1, 2000 and about a minute or two
past midnight, breathe a sigh of relief, your OS is free from the year 2000 "bug." If however,
your computer gives you wrong information, such as my own PC did (March 12, 1945 at 10:22
a.m.) welcome to the overwhelming majority of the population that has been found "infected."
All applications, from spreadsheets to e-mail, will be adversely affected. What can you
do? Maybe you can replace your computer with one that is Year 2000 compatible. Is the
problem in the RTC (Real Time Clock), the BIOS, the OS? Even if you fix the hardware
problem, is all the software you use going to make the "transition" safely or is it going to corrupt
as well?!
The answers to these questions and others like them are not answerable with a yes or a
no. For one thing, the "leading experts" in the computer world cannot agree that there is even a
problem, let alone discuss the magnitude of its impact on society and the business world. CNN correspondent Jed Duvall illustrates another possible "problem" scenario. Suppose
an individual on the East Coast, at 2 minutes after midnight in New York City on January 1,
2000 decides to mark the year and the century by calling a friend in California, where because of
the time zone difference, it is still 1999. With the current configurations in the phone company
computers, the New Yorker will be billed from 00 to 99, a phone call some 99 years long!!! (p. 1)
What if you deposit $100 into a savings account that pays 5% interest annually. The
following year you decide to close your account. The bank computer figures your $100 was
there for one year at 5% interest, so you get $105 back, simple enough. What happens though, if
you don't take your money out before the year 2000? The computer will re-do the calculation
exactly the same way. Your money was in the bank from '95 to '00. That's '00 minus '95, which
equals a negative 95 (-95). That's -95 years at 5% interest. That's a little bit more than $10,000,
and because of the minus sign, it's going to subtract that amount from your account. You now
owe the bank $9,900. Do I have your attention yet??!!
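A hypothetical sketch of the savings-account scenario, again in Python rather than any bank's actual software; simple interest is used to keep the example short, so the dollar amounts differ from the compound-interest figure above, but the sign flip is the same:

    # Hypothetical sketch of the interest error (Python, simple interest, illustration only).
    def interest_earned(principal, rate, opened_yy, closed_yy):
        years = closed_yy - opened_yy      # two-digit year arithmetic, as in legacy code
        return principal * rate * years

    # 1995 -> 1996: 100 * 0.05 * 1 = 5 dollars of interest; balance 105.
    print(100 + interest_earned(100, 0.05, 95, 96))
    # 1995 -> 2000: years = 0 - 95 = -95, so the "interest" is -475 dollars and the
    # balance goes negative -- the account appears to owe the bank money.
    print(100 + interest_earned(100, 0.05, 95, 0))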
There is no industry that is immune to this problem, it is a cross-platform problem. This
is a problem that will affect PCs, minis, and mainframes. There are no "quick fixes" or what
everyone refers to as the "Silver Bullet." The Silver Bullet is the terminology used to represent
the creation of an automatic fix for the Yk2 problem. There are two major problems with this
philosophy. First, there are too many variables from hardware to software of different types to
think that a "cure-all" can be found that will create an "across-the-board" type of fix. Secondly,
the mentality of the general population that there is such a "fix" or that one can be created rather
quickly and easily, is creating situations where people are putting off addressing the problem due
to reliance on the "cure-all." The " . . . sure someone will fix it." type attitude pervades the
industry and the population, making this problem more serious than it already is. (Jager, p. 1)
People actually think that there is a program that you can start running on Friday night . . .
everybody goes home, and Monday morning the problem has been fixed. Nobody has to do
anything else, the Yk2 problem poses no more threat, it has been solved. To quote Peter de
Jager,
"Such a tool, would be wonderful.
Such a tool, would be worth Billions of dollars.
Such a tool, is a naïve pipe dream.
Could someone come close? Not very . . .
Could something reduce this problem by 90%? I don't believe so.
Could it reduce the problem by 50%? Possibly . . . but I still don't believe so.
Could it reduce the workload by 30%? Quite likely."
(p. 2)
Tools are available, but they are only tools, not cures or quick fixes.
How will this affect society and the industry in 2000? How stable will software design
companies be as more and more competitors offer huge "incentives" for people to "jump ship"
and come work for them on their problems? Cash flow problems will put people out of
business. Computer programmers will make big bucks from now until 2000, as demand
increases for their expertise. What about liability issues that arise because company "A" reneged
on a deal because of a computer glitch? Sue! Sue! Sue! What about ATM lockups, credit card
failures, medical emergencies, or downed phone systems? This is a widespread scenario because
the Yk2 problem will affect all these elements and more.
The dimensions of this challenge are apparent. Given society's reliance on
computers, the failure of the systems to operate properly can mean anything from minor
inconveniences to major problems: Licenses and permits not issued, payroll and social service
checks not cut, personnel, medical and academic records malfunctioning, errors in banking and
finance, accounts not paid or received, inventory not maintained, weapon systems
malfunctioning (shudder!), constituent services not provided, and so on, and so on, and so on.
Still think you'll be unaffected? Highly unlikely. This problem will affect computations which
calculate age, sort by date, compare dates, or perform some other type of specialized task. The
Gartner Group has made the following approximations:
At $450 to $600 per affected computer program, it is estimated that a medium-size company will
spend from $3.6 to $4.2 million to make the software conversion. The cost per line of code is
estimated to be $.80 to $1. VIASOFT has seen program conversion costs rise to $572 to $1,204.
ANDERSEN CONSULTING estimates that it will take more than 12,000 working days to
correct its existing applications. YELLOW CORPORATION estimates it will spend
approximately 10,000 working days to make the change. Estimates for the correction of this
problem in the United States alone run upward of $50 to $75 billion.
(ITAA, p. 1)
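To put the per-line figure in perspective, a quick back-of-the-envelope check can be sketched in a few lines of Python, using only the Gartner cost-per-line range quoted above and the 30-million-line Social Security Administration portfolio cited later:

    # Rough cost check: Gartner's per-line estimate applied to a
    # 30-million-line portfolio (the SSA figure cited below).
    lines_of_code = 30_000_000
    low, high = 0.80 * lines_of_code, 1.00 * lines_of_code
    print(f"${low:,.0f} to ${high:,.0f}")   # $24,000,000 to $30,000,000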
Is it possible to eliminate the problem? Probably not, but we can make the transition
much smoother with cooperation and the right approach. Companies and government agencies
must understand the nature of the problem. Unfortunately, the kind of spending you find for new
software development is not being found in Yk2 research. Ignoring the obvious is not the way to
approach this problem. To assume that the problem will be corrected when the system is
replaced can be a costly misjudgment. Priorities change, development schedules slip, and
system components will be reused, causing the problem to be even more widespread.
Correcting the situation may not be difficult so much as time consuming. For
instance, the Social Security Administration estimates that it will spend 300 man-years finding
and correcting these date references in its information systems - systems representing a total of
30 million lines of code. (ITAA, p. 3) Common sense dictates that a comprehensive conversion
plan be developed to address the more immediate functions of an organization (such as issuing
invoices, paying benefits, collecting taxes, or other critical organizational functions), and continue
from there to address the less critical aspects of operation. Some of the automated tools may help in
the "repair" of the systems by providing:
* line by line impact analysis of all date references within a system, both in terms of data and
procedures;
* project cost estimating and modeling;
* identification and listing of affected locations;
* editing support to make the actual changes required;
* change management;
* and testing to verify and validate the changed system.
(ITAA, p. 3)
Clock simulators can run a system on a simulated clock date to exercise applications that
abend (end abnormally) or produce errors when the year 2000 arrives; date finders search across
applications for specific date criteria; and browsers can help users perform large-volume code
inspection. As good as all these "automated tools" are, there are NO "Silver Bullets" out there.
There are no quick fixes. It will take old-fashioned work-hours by personnel to make
this "rollover" smooth and efficient.
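As a rough illustration of what such a "date finder" does - this is a toy sketch in Python, not any of the commercial tools just mentioned, and the file name is made up - it simply flags lines that look like they hold two-digit dates so a person can review them:

    # Toy "date finder": print source lines that appear to contain
    # two-digit dates, for manual review.
    import re

    TWO_DIGIT_DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2}\b")

    def find_suspect_lines(path):
        with open(path, errors="ignore") as source:
            for number, line in enumerate(source, start=1):
                if TWO_DIGIT_DATE.search(line):
                    print(f"{path}:{number}: {line.rstrip()}")

    # find_suspect_lines("payroll.src")    # hypothetical file name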
Another area to look at is the implications for public health information. Public health
information and surveillance at all levels of local, state, federal, and international public health
are especially sensitive to and dependent upon dates for epidemiological (study of disease
occurrence, location, and duration) and health statistics reasons. The date of events, duration
between events, and other calculations such as age of people are core epidemiologic and health
statistic requirements. (Seligman, p. 1) Along with this, public health authorities are usually
dependent upon primary data providers - physician practices, laboratories, hospitals,
managed care organizations, out-patient centers, and the like - as the source for original data upon
which public health decisions are based. The CDC (Centers for Disease Control and Prevention),
for example, maintains over 100 public health surveillance systems, all of which are dependent
upon external sources of data. (Issa, p. 5) This means that it is not going to be
sufficient to make the internal systems year 2000 compliant in order to address all of the
ramifications of this issue. To illustrate this point, consider the following scenario: in April
2000, a hospital sends an electronic surveillance record to the local or state health department
reporting the death of an individual who was born in the year "00"; is this going to be a case of
infant mortality or a geriatric case?
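The ambiguity is plain arithmetic. A toy sketch (the record layout here is invented, not the CDC's actual format) shows the two readings of the same two-digit birth year:

    # Year "00" on a record reported in 2000: infant or centenarian?
    report_year = 2000
    birth_yy = 0
    for assumed_century in (1900, 2000):
        age = report_year - (assumed_century + birth_yy)
        print(assumed_century + birth_yy, "->", age)
    # 1900 -> 100 (geriatric case); 2000 -> 0 (infant mortality)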
Finally, let's look at one of the largest software manufacturing corporations and see what
the implications of the year 2000 will be for Microsoft products. Microsoft states that Windows
95 and Windows NT are capable of supporting dates up until the year 2099. They also make the
following statement, however:
"It is important to note that when short, assumed dates (mm/dd/yy) are entered, it is impossible
for the computer to tell the difference between a day in 1905 and 2005. Microsoft's products,
that assume the year from these short dates, will be updated in 1997 to make it easier to assume
a 2000-based year. As a result, Microsoft recommends that by the end of the century, all PC
software be upgraded to versions from 1997 or later."
(Microsoft, p. 1)
PRODUCT NAME                            DATE LIMIT        DATE FORMAT
Microsoft Access 95                     1999              assumed "yy" dates
Microsoft Access 95                     9999              long dates ("yyyy")
Microsoft Access (next version)         2029              assumed "yy" dates
Microsoft Excel 95                      2019              assumed "yy" dates
Microsoft Excel 95                      2078              long dates ("yyyy")
Microsoft Excel (next version)          2029              assumed "yy" dates
Microsoft Excel (next version)          9999              long dates ("yyyy")
Microsoft Project 95                    2049              32 bits
Microsoft SQL Server                    9999              "datetime"
MS-DOS(r) file system (FAT16)           2099              16 bits
Visual C++(r) (4.x) runtime library     2036              32 bits
Visual FoxPro                           9999              long dates ("yyyy")
Windows 3.x file system (FAT16)         2099              16 bits
Windows 95 file system (FAT16)          2099              16 bits
Windows 95 file system (FAT32)          2108              32 bits
Windows 95 runtime library (WIN32)      2099              16 bits
Windows for Workgroups (FAT16)          2099              16 bits
Windows NT file system (FAT16)          2099              16 bits
Windows NT file system (NTFS)           future centuries  64 bits
Windows NT runtime library (WIN32)      2099              16 bits
Microsoft further states that its development tools and database management systems provide
the flexibility for the user to represent dates in many different ways. Proper training of
developers to use date formats that accommodate the transition to the year 2000 is of the utmost
importance. For informational purposes, I have included a chart that represents the more
popular Microsoft products, their date limits, and date formats. (See the chart above.)
(Microsoft, p. 3)
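One common way a product resolves an assumed "yy" date is a pivot (or "windowing") rule: two-digit years below a cutoff are read as 20xx, the rest as 19xx. The 2029 limits in the chart are consistent with such a rule, but the cutoff of 30 in this sketch is an assumption for illustration, not a documented Microsoft value:

    # Hedged sketch of a two-digit-year window; the pivot of 30 is assumed.
    PIVOT = 30

    def expand_year(yy):
        return 2000 + yy if yy < PIVOT else 1900 + yy

    print(expand_year(5))    # 2005
    print(expand_year(98))   # 1998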
So . . . is everyone affected? Apparently not. The owners of St. John
Valley Communications, an Internet-access provider based in Fort Kent, are eagerly
awaiting the coming of 2000. Alan Susee and Dawn Martin had enough foresight to make
sure that when they purchased their equipment and related software, it would all be year
2000 compliant. It can be done, as evidenced by this industrious couple. The key
is to get informed and to stay informed. Effect the changes you can now, and look to remedy the
ones that you can't. The year 2000 will be a shocker and a thriller for many businesses, but St.
John Valley Communications seems to have it under control, holding party hats in
one hand and the mouse in the other.
As is clear from the information presented, Yk2 is a problem to be reckoned
with. The wide range of operating systems and software on the market lends credence to the idea that
a "silver bullet" fix is a pipe dream in the extreme. This is not, however, an insurmountable
problem. Efficient training and design are needed, as well as a multitude of man-hours, to effect
the "repairs" needed to quell the ramifications and repercussions that will inevitably occur
without intervention from within. The sit-back-and-wait-for-a-cure-all approach will not work,
and it is hard to imagine that people (IS people especially) with advanced knowledge to the contrary
would buy into this propaganda of slow technological death. To misquote an old adage, "The
time for action was 10 years ago." Whatever may happen, January 1, 2000 will be a very
interesting time for some, a relief for others . . . and a cyanide capsule for the "slackers." What
will you do now that you are better "informed"? Hopefully you will effect the necessary "repairs"
and pass the word to others who may be taking this a little too lightly. It may not be a matter
of life or death, but it sure as heck could mean your job and financial future.
WORKS CITED
Duvall, Jed. "The year 2000 does not compute." http://www.cnn.com/news
(3 November 1996).
Elgan, Mike. "Experts bemoan the denial of the '2000 bug.'" http://www.cnn.com/2000
(31 October 1996).
ITAA. "The Year 2000 Software Conversion: Issues and Observations."
http://www.itaa.org/yr2000-1.htm (7 November 1996).
Jager, Peter de. "DOOMSDAY." http://www.year2000.com/doom
(2 November 1996).
---. "Believe me it's real! Early Warning." http://www.year2000.com
(4 November 1996).
---. "Biting the Silver Bullet." http://www.year2000.com/bullet
(2 November 1996).
Microsoft. "Implications of the Year 2000 on Microsoft Products."
http://army.mil/army-yk2/articles/y2k.htm (9 November 1996).
Seligman, James & Issa, Nabil. "The Year 2000 Issue: Implications for Public
Health Information and Surveillance Systems."
http://www.cdc.gov/year2000.htm (9 November 1996).
Shufelt, Ursula. "Yk2." Ursula@maine.maine.edu. (7 November 1996).
f:\12000 essays\sciences (985)\Computer\A Brief History of Databases.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Brief History Of Data Bases
In the 1960's, the use of mainframe computers became widespread in many companies. To access vast amounts of stored information, these companies started to use programming languages like COBOL and FORTRAN. Data accessibility and data sharing soon became important because of the large amount of information required by different departments within certain companies. With this approach, each application owned its own data files. The problems associated with this type of file processing were uncontrolled redundancy, inconsistent data, inflexibility, poor enforcement of standards, and low programmer productivity.
In 1964, MIS (Management Information Systems) was introduced. This would prove to be very influential on future designs of computer systems and the methods they would use in manipulating data.
In 1966, Philip Kotler offered the first description of how managers could benefit from the powerful capabilities of the electronic computer as a management tool.
In 1969, Berson developed a marketing information system for marketing research. In 1970, the Montgomery urban model was developed, stressing the quantitative aspect of management by highlighting a data bank, a model bank, and a measurement statistics bank. All of these factors would be influential on future models of storing data in a pool.
According to Martine, writing in 1981, a database is a shared collection of interrelated data designed to meet the needs of multiple types of end users. The data are stored in one location so that they are independent of the programs that use them, keeping in mind data integrity with respect to the approaches to adding new data, modifying data, and retrieving existing data. A database is shared and perceived differently by multiple users. This led to the arrival of Database Management Systems.
These systems first appeared around the 1970's as solutions to problems associated with mainframe computers. Originally, pre-database programs accessed their own data files. Consequently, similar data had to be stored in other areas where that certain piece of information was relevant. Simple things like addresses were stored in customer information files, accounts receivable records, and so on. This created redundancy and inefficiency. Updating files, like storing files, was also a problem. When a customer's address changed, all the fields where that customer's address was stored had to be changed. If a field happened to be missed, then an inconsistency was created. When requests to develop new ways to manipulate and summarize data arose, it only added to the problem of having files attached to specific applications. New system design had to be done, including new programs and new data file storage methods. The close connection between data files and programs sent the costs for storage and maintenance soaring. This, combined with inflexibility in the kinds of data that could be extracted, gave rise to the need to design an effective and efficient system.
Here is where Database Management Systems helped restore order to a system of inefficiency. Instead of having separate files for each program, one single collection of information was kept: a database. Now many programs could, through a database manager, access one database with the confidence of knowing that they were accessing up-to-date and exclusive information.
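The contrast the paragraph above draws can be sketched in a few lines of Python (the customer data here is invented purely for illustration):

    # Pre-database style: every application keeps its own copy of the data.
    billing  = {"cust42": {"address": "12 Elm St"}}
    shipping = {"cust42": {"address": "12 Elm St"}}
    billing["cust42"]["address"] = "9 Oak Ave"    # shipping copy is now stale

    # Database style: one shared collection that every program reads.
    customers = {"cust42": {"address": "12 Elm St"}}
    customers["cust42"]["address"] = "9 Oak Ave"  # one update, no inconsistency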
Some early DBMSs included:
Condor 3
dBaseIII
Knowledgeman
Omnifile
Please
Power-Base
R-Base 4000
Condor 3, dBaseIII, and Omnifile will be examined more closely.
Condor 3
Condor 3 is a relational database management system that has evolved in the microcomputer environment since 1977. Condor provides multi-file, menu-driven relational capabilities and a flexible command language. By using a word processor (there is no built-in text editor), frequently used commands can be automated.
Condor 3 is an application development tool for multiple-file databases. Although it lacks some capabilities, like procedure repetition, it makes up for this with its ease of use and decent speed.
Condor 3 utilizes the advantages of menu-driven design. Its portability enables it to import and export data files in five different ASCII formats. Defining file structures is relatively straightforward: by typing the field names and their lengths, the main part of designing the structure is about complete. Condor uses six data types:
alphabetic
alphanumeric
numeric
decimal numeric
Julian date
dollar
Once the fields have been designed, data entry is as easy as pressing Enter and inputting the respective values in the appropriate fields, and like the newer databases, Condor too can use the Update, Delete, Insert, and Backspace commands. Accessing data is done by creating an index. The index can be used to perform sorts and arithmetic.
dBaseIII
dBaseIII is a relational DBMS which was partially built on dBaseII. Like Condor 3, dBaseIII is menu-driven and has its menus built in several levels. One of the problems discovered was that higher-level commands were not included in all menu levels. That is, dBaseIII's menus are limited to only basic commands, and anything above that is not supported.
Many of the basic capabilities are easy to use, but like Condor, dBaseIII has inconsistencies and inefficiency. The keys used to move and select items in specific menus are not always consistent throughout. If you mark an item to be selected from a list, once it's marked it cannot be unmarked. The only way to correct this is to start over and enter everything again. This is time consuming and obviously inefficient. Although the menus are helpful and guide you through the stages or levels, there is the option to turn off the menus and work at a slightly faster rate.
dBaseIII's commands are procedural (function oriented) and flexible. It utilizes many of the common functions, like:
select records
select fields
include expressions (such as calculations)
redirect output to the screen or to the printer
store results separately from the application
Included in dBaseIII is a limited editor which will let you create commands using the editor or a word processor. Unfortunately, it is still limited to certain commands; for example, it cannot create move or copy commands. It also has a screen design package which enables you to design how you want your screen to look. The minimum RAM requirement of 256k for this package really illustrates how old this application is. The most noticeable problem documented about dBaseIII is its inability to edit command lines. If, for example, an error was made entering the name and address of a customer, simply backing up and correcting the wrong character is impossible without deleting everything up to the correction and re-entering everything again.
dBaseIII is portable and straightforward to work with. It allows users to import and export files in two forms: fixed-length fields and delimited fields. It can also perform dBaseII conversions. Creating file structures is simple using the menus or the create command. It has field types that are still being used today by applications such as Microsoft Access, for example, numeric fields and memo fields, which let you enter sentences or pieces of information, like a customer's address, which might vary in length from record to record. Unlike Condor 3, dBaseIII is able to edit fields without having to start over. Inserting new fields or deleting old fields can be done quite easily.
Data manipulation and query is very accessible through a number of built-in functions. The list and display commands enable you to see the entire file, selected records, and selected files. The browse command allows you to scroll through all the fields inserting or editing records at the same time. Calculation functions like sum, average, count, and total allow you to perform arithmetic operations on data in a file. There are other functions available like date and time functions, rounding, and formatting.
Omnifile
Omnifile is a single-file database system. This database is form oriented meaning that it has a master form with alternate forms attached to it. Therefore, you can work with one file and all of its subsets at the same time. The idea of alternating forms provides for a greater level of security, for example, if a user needed to update an address field, they would not be able to access any fields which displayed confidential information. The field in need of updating would only display the necessary or relevant information.
Menus are once again present and used as a guide. The use of function keys allows the user to move about screens or forms quite easily. Menus are also used for transferring information, either for importing or for exporting. One inflexibility noted was that when copying files, the two files must have the exact same fields in the same order as the master file. This can be a problem if you want to copy identical fields from different files.
Forms design is simple but tedious. Although it may seem flexible to be able to paint the screen in any manner that you wish, it can be time consuming because no default screen is available. Like other database management systems, the usual syntax for defining fields applies: field name followed by the length of the field in braces. However, editing is a little more difficult. Changing the form can be done by inserting and deleting, one character at a time. Omnifile does not support moving fields around, nor inserting blank lines. This means that if a field were to be added at the beginning of the record, the entire record would have to be re-entered.
Records are added and viewed in the format that the user first designed them. Invalid entries are not handled very well. Entering an illegal value in a certain field results in a beep and no message; the user is left to try to decide what the error is. Omnifile does support the ability to insert new records while viewing existing records and to make global or local changes.
Querying can be performed by using an index or using a non-indexed search. If a search is made for a partial entry, like "Rob" instead of "Robinson", a message is displayed stating that an exact match was not found.
Overall
These are just a few of the database programs that helped start the whole database management system era. It is apparent that DBMSs today still use some of the fundamentals first implemented by these 'old' systems. Items like menus, forms, and portability are still key parts of current applications. Programs have come a long way since then, but they still have the same fundamental principles as their basis.
f:\12000 essays\sciences (985)\Computer\A Brief History of Library Automation 19301996.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An automated library is one where a computer system is used to
manage one or several of the library's key functions such as
acquisitions, serials control, cataloging, circulation and the public
access catalog. When exploring the history of library automation, it
is possible to return to past centuries when visionaries well before
the computer age created devices to assist with their book lending
systems. Even as far back as 1588, the invention of the French "Book
Wheel" allowed scholars to rotate between books by stepping on a pedal
that turned a book table. Another interesting example was the "Book
Indicator", developed by Albert Cotgreave in 1863. It housed miniature
books to represent books in the library's collection. The miniature
books were part of a design that made it possible to determine if a
book was in, out or overdue. These and many more examples of early
ingenuity in library systems exist, however, this paper will focus on
the more recent computer automation beginning in the early twentieth
century.
The Beginnings of Library Automation: 1930-1960
It could be said that library automation development began in the
1930's when punch card equipment was implemented for use in library
circulation and acquisitions. During the 30's and early 40's progress
on computer systems was slow, which is not surprising given the
Depression and World War II. In 1945, Vannevar Bush envisioned an
automated system that would store information, including books,
personal records and articles. Bush(1945) wrote about a hypothetical
"memex" system which he described as a mechanical library that would
allow a user to view stored information from several different access
points and look at several items simultaneously. His ideas are well
known as the basis for hypertext and hypermedia.
In order to have automation, there must first be a computer. The
development of the computer progressed substantially from 1946 to
1961, moving quickly through a succession of vacuum tubes, transistors
and finally silicon chips. From 1946 to 1947 two significant computers
were built. The ENIAC I (Electronic Numerical Integrator and
Calculator) computer was developed by John Mauchly and J. Presper
Eckert at the University of Pennsylvania. It contained over 18,000
vacuum tubes, weighed thirty tons and was housed in two stories of a
building. It was intended for use during World War II but was not
completed in time. Instead, it was used to assist the development of
the hydrogen bomb. Another computer, EDVAC, was designed to store two
programs at once and switch between the sets of instructions. A major
breakthrough occurred in 1947 when Bell Laboratories replaced vacuum
tubes with the invention of the transistor. The transistors decreased
the size of the computer, and at the same time increased the speed and
capacity. The UNIVAC I (Universal Automatic Computer) became the
first computer using transistors and was used at the U.S. Bureau of
the Census from 1951 until 1963.
Software development also was in progress during this time.
Operating systems and programming languages were developed for the
computers being built. Librarians needed text-based computer
languages, different from the first numerical languages invented for
the number crunching "monster computers", in order to be able to use
computers for their operations. The first appeared at MIT, in 1957,
with the development of COMIT, managing linguistic computations,
natural language and the ability to search for a particular string of
information. Librarians then moved beyond a vision or idea for the use
of computers; given the technology, they were able to make great
advances in the use of computers for library systems. This led to an
explosion of library automation in the 60's and 70's.
Library Automation Officially is Underway: 1960-1980
The advancement of technology led to increases in the use of
computers in libraries. In 1961, a significant invention by both
Robert Noyce (later a co-founder of Intel) and Jack Kilby of Texas
Instruments, working independently, was the integrated circuit. All
the components of an
electronic circuit were placed onto a single "chip" of silicon. This
invention of the integrated circuit and newly developed disk and tape
storage devices gave computers the speed, storage and ability needed
for on-line interactive processing and telecommunications.
The new potential for computer use guided one librarian to develop a
new indexing technique. H.P. Luhn, in 1961, used a computer to produce
the "keyword in context" or KWIC index for articles appearing in
Chemical Abstracts. Although keyword indexing was not new, it was
found to be very suitable for the computer as it was inexpensive and
it presented multiple access points. Through the use of Luhn's keyword
indexing, it was found that librarians had the ability to put
controlled language index terms on the computer.
By the mid-60's, computers were being used for the production of
machine readable catalog records by the Library of Congress. Between
1965 and 1968, LOC began the MARC I project, followed quickly by MARC
II. MARC was designed as a way of "tagging" bibliographic records using
3-digit numbers to identify fields. For example, a tag might indicate
"ISBN," while another tag indicates "publication date," and yet
another indicates "Library of Congress subject headings" and so on. In
1974, the MARC II format became the basis of a standard incorporated
by NISO (National Information Standards Organization). This was a
significant development because the standards created meant that a
bibliographic record could be read and transferred by the computer
between different library systems.
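The tagging idea can be pictured as a record keyed by those 3-digit numbers. The tags below are common MARC fields, but the record itself is an invented example, not taken from any catalog:

    # Illustrative MARC-style record: 3-digit tags identify the fields.
    record = {
        "020": "0140449132",                      # ISBN
        "245": "A sample title / by A. Author.",  # title statement
        "260": "London : Example Press, 1974.",   # publication information
        "650": "Library automation.",             # subject heading
    }
    print(record["245"])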
ARPANET, a network established by the Defense Advanced Research
Projects Agency in 1969 brought into existence the use of e-mail,
telnet and ftp. By 1980, a sub-net of ARPANET made MELVYL, the
University of California's on-line public access catalog, available on
a national level. ARPANET, would become the prototype for other
networks such as CSNET, BITNET, and EDUCOM. These networks have almost
disappeared with the evolution of ARPANET to NSFNET which has become
the present day Internet.
During the 1970's the inventions of the integrated computer chip
and storage devices caused the use of minicomputers and microcomputers
to grow substantially. The use of commercial systems for searching
reference databases (such as DIALOG) began. BALLOTS (Bibliographical
Automation of Large Library Operations) in the late 1970's was one of
the first and later became the foundation for RLIN (the Research
Libraries Information Network). BALLOTS was designed to integrate
closely with the technical processing functions of the library and
contained four main files: (1)MARC records from LOC; (2) an in-process
file containing information on items in the processing stage; (3) a
catalog data file containing an on-line record for each item; and (4)
a reference file. Further, it contained a wide search retrieval
capability with the ability to search on truncated words, keywords,
and LC subject headings, for example.
OCLC, the On-line Computer Library Center, began in 1967, chartered in
the state of Ohio. This significant project facilitated technical
processing in library systems when it started its first cooperative
cataloging venture in 1970. It went on-line in 1971. Since that time
it has grown considerably, providing research and utilities designed
to provide users with the ability to access bibliographic records and
scientific and literary information, which continues to the present.
Library Automation 1980-present
The 70's were the era of the dumb terminal, which was used to gain
access to mainframe on-line databases. The 80's gave birth to a new
revolution. The size of computers decreased, at the same time,
technology provided faster chips, additional RAM and greater storage
capacity. The use of microcomputers during the 1980's expanded
tremendously into the homes, schools, libraries and offices of many
Americans. The microcomputer of the 80's became a useful tool for
librarians, who put them to use for everything from word processing
to reference, circulation and serials.
On-line Public Access Catalogs began to be used extensively in the
1980's. Libraries started to set-up and purchase their own computer
systems as well as connect with other established library networks.
Many of these were not developed by the librarians themselves, but by
vendors who supplied libraries with systems for everything from
cataloging to circulation. One such on-line catalog system is the CARL
(Colorado Alliance of Research Libraries) system. Various other
software became available to librarians, such as spreadsheets and
databases for help in library administration and information
dissemination.
The introduction of CD-ROMs in the late 80's has changed the way
libraries operate. CD-ROMs became available containing databases,
software, and information previously only available through print,
making the information more accessible. Connections to "outside"
databases such as OCLC, DIALOG, and RLIN continued, however, in the
early 90's the databases that were previously available on-line became
available on CD-ROM, either in parts or in their entirety. Libraries
could then gain information through a variety of options.
The nineties are giving rise to yet another era in library
automation. The use of networks for e-mail, ftp, telnet, Internet, and
connections to on-line commercial systems has grown. It is now
possible for users to connect to the libraries from their home or
office. The World Wide Web, which had its official start date in
April of 1993, is becoming the fastest growing new provider of
information. It is also possible, to connect to international library
systems and information through the Internet and with ever improving
telecommunications. Expert systems and knowledge systems have become
available in the 90's as both software and hardware capabilities have
improved. The technology used for the processing of information has
grown considerably since the beginnings of the thirty ton computer.
With the development of more advanced silicon computer chips, enlarged
storage space and faster, increased capacity telecommunication lines,
the ability to quickly process, store, send and retrieve information
is causing the current information delivery services to flourish.
Bibliography
Bush, V. (1945). As we may think. Atlantic Monthly, 176(1), 101-8.
Duval, B.K. & Main, L. (1992). Automated Library Systems: A Librarian's
Guide and Teaching Manual. London: Meckler.
Nelson, N.M., (Ed.) (1990). Library Technology 1970-1990: Shaping the
Library of the Future. Research Contributions from the 1990 Computers
in Libraries Conference. London: Meckler.
Pitkin, G.M. (Ed.) (1991). The Evolution of Library Automation:
Management Issues and Future Perspectives. London: Meckler.
Title:
A Brief History of Library Automation: 1930-1996
f:\12000 essays\sciences (985)\Computer\A Brief Look at Robotics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
===================================================================
WIRED HANDS - A Brief Look at Robotics
NEWSCIENCE
-------------------------------------------------------------------
Two years ago, the Chrysler corporation completely gutted its Windsor, Ontario, car assembly plant and within six weeks had installed an entirely new factory inside the building. It was a marvel of engineering. When it came time to go to work, a whole new work force marched onto the assembly line. There on opening day was a crew of 150 industrial robots.
Industrial robots don't look anything like the androids from sci-fi books and movies. They don't act like the evil Daleks or a fusspot C-3PO. If anything, the industrial robots toiling on the Chrysler line resemble elegant swans or baby brontosauruses with their fat, squat bodies, long arched necks and small heads. An industrial robot is essentially a long manipulator arm that holds tools such as welding guns or motorized screwdrivers or grippers for picking up objects. The robots working at Chrysler and in numerous other modern factories are extremely adept at performing highly specialized tasks - one robot may spray paint car parts while another does spot welds while another pours radioactive chemicals. Robots are ideal workers: they never get bored and they work around the clock. What's even more important, they're flexible. By altering its programming you can instruct a robot to take on different tasks. This is largely what sets robots apart from other machines; try as you might, you can't make your washing machine do the dishes. Although some critics complain that robots are stealing much-needed jobs away from people, so far they've been given only the dreariest, dirtiest, most soul-destroying work.
The word robot is Slav in origin and is related to the words for work and worker. Robots first appeared in a play, Rossum's Universal Robots, written in 1920 by the Czech playwright Karel Capek. The play tells of an engineer who designs man-like machines that have no human weakness and become immensely popular. However, when the robots are used for war they rebel against their human masters.
Though industrial robots do dull, dehumanizing work, they are nevertheless a delight to watch as they crane their long necks, swivel their heads and poke about the area where they work. They satisfy "that vague longing to see the human body reflected in a machine, to see a living function translated into mechanical parts", as one writer has said.
Just as much fun are the numerous "personal" robots now on the market, the most popular of which is HERO, manufactured by Heathkit. Looking like a plastic step-stool on wheels, HERO can lift objects with its one clawed arm and utter computer-synthesized speech. There's Hubot, too, which comes with a television screen face, flashing lights and a computer keyboard that pulls out from its stomach. Hubot moves at a pace of 30 cm per second and can function as a burglar alarm and a wake-up service. Several years ago, the swank department store Neiman-Marcus sold a robot pet, named Wires. When you boil all the feathers out of the hype, HERO, Hubot, Wires et al. are really just super toys. You may dream of living like a slothful sultan surrounded by a coterie of metal maids, but any further automation in your home will instead include things like lights that switch on automatically when the natural light dims or carpets with permanent suction systems built into them.
One of the earliest attempts at a robot design was a machine, nicknamed Shakey by its inventor because it was so wobbly on its feet. Today, poor Shakey is a rusting pile of metal sitting in the corner of a California laboratory. Robot engineers have since realized that the greater challenge is not in putting together the nuts and bolts, but rather in devising the lists of instructions - the "software" - that tell robots what to do. Software has indeed become increasingly sophisticated year by year. The Canadian weather service now employs a program called METEO which translates weather reports from English to French. There are computer programs that diagnose medical ailments and locate valuable ore deposits. Still other computer programs play and win at chess, checkers and go. As a result, robots are undoubtedly getting "smarter".
The Diffracto company in Windsor is one of the world's leading designers and makers of machine vision. A robot outfitted with Diffracto "eyes" can find a part, distinguish it from another part and even examine it for flaws. Diffracto is now working on a tomato sorter which examines colour, looking for non-red - i.e. unripe - tomatoes as they roll past its TV camera eye. When an unripe tomato is spotted, a computer directs a robot arm to pick out the pale fruit. Another Diffracto system helps the space shuttle's Canadarm pick up satellites from space. This sensor looks for reflections on a satellite's gleaming surface and can determine the position and speed of the satellite as it whirls through the sky. It tells the astronaut when the satellite is in the right position to be snatched up by the space arm.
The biggest challenge in robotics today is making software that can help robots find their way around a complex and chaotic world. Seemingly sophisticated tasks such as robots do in the factories can often be relatively easy to program, while the ordinary, everyday things people do - walking, reading a letter, planning a trip to the grocery store - turn out to be incredibly difficult. The day has still to come when a computer program can do anything more than a highly specialized and very orderly task. The trouble with having a robot in the house, for example, is that life there is so unpredictable, as it is everywhere else outside the assembly line. In a house, chairs get moved around, there is invariably some clutter on the floor, kids and pets are always running around. Robots work efficiently on the assembly line where there is no variation, but they are not good at improvisation. Robots are disco, not jazz. The irony in having a robot housekeeper is that you would have to keep your house perfectly tidy with every item in the same place all the time so that your metal maid could get around.
Many of the computer scientists who are attempting to make robots brighter are said to be working in the field of Artificial Intelligence, or AI. These researchers face a huge dilemma because there is no real consensus as to what intelligence is. Many in AI hold the view that the human mind works according to a set of formal rules. They believe that the mind is a clockwork mechanism and that human judgement is simply calculation. Once these formal rules of thought can be discovered, they will simply be applied to machines. On the other hand, there are those critics of AI who contend that thought is intuition, insight, inspiration. Human consciousness is a stream in which ideas bubble up from the bottom or jump into the air like fish.
This debate over intelligence and mind is, of course, one that has gone on for thousands of years. Perhaps the outcome of the "robolution" will be to make us that much wiser.
f:\12000 essays\sciences (985)\Computer\A Computer Science Report format.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER SCIENCE REPORT
QUESTIONNAIRE
1. DO YOU HAVE A COMPUTERIZED SYSTEM?
IF NOT WHAT WOULD YOU LIKE COMPUTERIZED?
2. WHAT TYPE COMPUTER SYSTEM DO YOU HAVE?
3. ARE THERE ANY SETBACKS IN USING THIS SYSTEM?
4. IS THIS SYSTEM DOING ALL THAT IS REQUIRED TO BE
DONE?
5. WHAT ARE THE ADVANTAGES?
6. WHAT ARE THE DISADVANTAGES?
7. ARE THERE ANY IMPROVEMENTS YOU WOULD LIKE IN
YOUR COMPUTER SYSTEM?
8. IF SO WHAT IMPROVEMENTS WOULD YOU RECOMMEND?
ANSWERS FROM QUESTIONNAIRE
THE CIFTON DUPINY COMMUNITY COLLEGE
1. NO, WE DO NOT HAVE A COMPUTERIZED SYSTEM. WE WOULD LIKE
ALL ASPECTS OF THE SCHOOL'S RECORDS COMPUTERIZED.
2. WE DO MOST OF OUR WORK IN WORDPERFECT; WE HAVE NO
COMPUTER PROGRAMMES.
3. YES, THERE ARE MANY SETBACKS. THE PERSON USING THE COMPUTER
HAS TO FIGURE OUT EVERYTHING; THIS LEADS TO A VERY HEAVY
WORKLOAD AND LOSS OF TIME.
4. NO, THIS SYSTEM IS NOT DOING WHAT IS REQUIRED.
5. THE ONLY ADVANTAGE IS THAT WE CAN STORE OUR WORK ON THE
COMPUTER.
6. THE DISADVANTAGES OF OUR SYSTEM ARE THAT IT IS SLOW, TIME
CONSUMING, AND INEFFICIENT, AMONG OTHER THINGS.
7. YES, WE WOULD LIKE A LOT OF IMPROVEMENTS IN THE SYSTEM.
8. THE IMPROVEMENT I WOULD RECOMMEND IS A COMPUTER
PROGRAM TO REGISTER STUDENTS, TO CHECK CLASS SCHEDULES,
TO STORE STUDENT FILES, TO CHECK ON STUDENTS' MARKS, TO
ARRANGE TIMETABLES AND TEACHERS' SCHEDULES, TO TRACK CLASS
USAGE, TO DETERMINE THE PROMOTION OF STUDENTS, AND TO KEEP
THE RECORD OF THE SCHOOL'S FINANCES.
f:\12000 essays\sciences (985)\Computer\A Hacker.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A Hacker
A hacker is a person obsessed with computers.
At the heart of the obsession is a drive to master the
computer. The classic hacker was simply a
compulsive programmer. It is only recently that the
term hacker became associated with computerized
vandalism.
Great description of Hackers:
Bright young men of disheveled appearance, often with
sunken, glowing eyes, seen sitting at computer
consoles, their arms tense and waiting to fire their
fingers, which are already poised to strike at the
buttons and keys on which their attention seems as
riveted as a gambler's on the rolling dice. They work
until they nearly drop, twenty or thirty hours at a
time if possible. They sleep on cots near the
computer, but only a few hours - then back to the
console, or printouts. Their crumpled clothes, their
unwashed, unshaven faces, and uncombed hair testify
that they are oblivious to their bodies and to the
world in which they move. They exist, at least when
so engaged, only through and for the computers.
The majority of hackers are young men, often
teenagers, who have found within the computer world
something into which they can mold their desires.
Another definition
-a person totally engrossed in computer
programming and computer technology. In the
1980s, with the advent of personal computers
and dial-up computer networks, hacker
acquired a pejorative connotation, often
referring to someone who secretively invades
others' computers, inspecting or tampering
with the programs or data stored on them.
(More accurately, though, such a person would
be called a "cracker.") Hacker also means
someone who, beyond mere programming, likes
to take apart operating systems and programs
to see what makes them tick.
f:\12000 essays\sciences (985)\Computer\A Long Way From Univac.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Adv. Eng. 9
Computers
A Long Way From Univac
Can you imagine a world without computers? You most probably interact with some form of a computer every day of your life. Computers are the most important advancement our society has ever seen. They have an interesting history, many interesting inner components, they are used nearly everywhere, and continue to advance incredibly fast. Because the field of computers is so broad, this paper will focus mainly on personal computers.
Although computers have been evolving for quite some time, they really didn't gain popularity until the introduction of the personal computer. In 1977, Steve Jobs, co-founder of the Apple Computer Company, unveiled what is generally considered to be the first personal computer, the Apple II. This computer was introduced on April 16, 1977, at the First West Coast Computer Faire, in San Francisco. In 1981, the International Business Machines Company introduced the first IBM PC. Unlike Apple, IBM used a policy of open architecture for their computer. They bought all of their components from the lowest bidder, such as the 8086 and 8088 microprocessor chips, made by Intel, a Hillsboro, Oregon company. When the design of IBM's computer had been finalized, they shared most of the inner workings of the computer with everyone. IBM hoped that this would encourage companies to manufacture computers that were compatible
with theirs, and that in turn, would cause software companies to create operating systems, or OS, and other programs for the "IBM Compatible" line of computers. One of the computer manufacturers was a Texas company called Compaq. A company called Dell Computers was the first "factory direct" computer seller. A small Redmond, Washington company called Microsoft made a large amount of software for the "IBM Compatible" line of computers. This open architecture policy of IBM was not without its flaws, however. IBM lost some business to the "clones," who could offer more speed, more memory, or a smaller price tag. IBM had considered this an acceptable loss. One of the few components of the IBM PC that was kept from the clone manufacturers was the Basic Input Output System, or BIOS. This program, which was usually etched permanently on a chip, controlled the interactions between the internal hard and floppy drives, the external drives, printers, and monitors, etc. Clone manufacturers had to make their own versions of an input output system. Some manufacturers copied the IBM BIOS exactly, such as Eagle Computers and Corona Data Systems. This is one adverse effect that IBM had not thought of. However, all of IBM's copyright violation lawsuits against these companies ended in IBM's favor. IBM has continued to grow to this day; however, the clone manufacturers make far more personal computers than IBM, while IBM makes more business machines, and the Power PC microprocessor, used in Macintosh computers. IBM clones are now made by Packard Bell, Sony, Acer, Gateway 2000, and more. The clones have continued to use software and operating systems made by Microsoft, including: DOS (Disk
Operating System), Windows, Windows 95, and Windows NT. The clones also primarily use microprocessors manufactured by Intel, including the 8086, 8088, 80286, 80386, 80486, Pentium and Pentium Pro, which offer speeds over 200 megahertz, and will be even faster in the near future (Silver 7-28).
Apple took a somewhat different course during this period. Not willing to enter the IBM clone manufacturing market, Apple continued to make their own kind of computers. They made minor improvements on the Apple II line, but eventually decided they needed to make a new type of computer. They first introduced the Apple III in September of 1980. It was a dismal failure. The first buyers encountered numerous system errors and failures, because of a poor OS. Besides that, it was poorly manufactured, with improperly fitting circuitry, loose wires and screws, etc. The later-released Apple III+ did poorly because of its brother's poor debut. The next big release was the Lisa in January of 1983. It was the first personal computer with a mouse, and nice graphic capabilities. Experiments showed that it was 20 times as easy to use as the IBM PC, and it drew enormous praise from computer magazines. It had flaws too, however. It strained the power of the aging Motorola 68000 microprocessor, so it lost in speed tests to the IBM PC. It also came with a $10,000 price tag, over twice as much as most IBM clones. The Lisa failed, not as catastrophically as the Apple III, but failed, nevertheless. Apple had but one more ace up their sleeve, and they released it in January of 1984. They called it the Macintosh, and it was very
popular. Apple still uses the Macintosh series of computers to this day. In 1995, Apple finally allowed other companies to use their OS, and manufacture clones. Some clone manufacturers include: Power Computing, Umax, Radius, and Motorola. Unlike IBM, Apple still sells more computers than its clones, but Power Computing is steadily gaining in sales. Macintoshes and Mac clones use System 6, System 7, System 7.1, System 7.5, and System 7.6, all made by Apple. Macintoshes and their clones use microprocessors manufactured by Motorola, including the 68000, 68881, 68020, 68030, 68040, and the Power PC 601, 603, and 604, made by Motorola and IBM, with speeds up to 225 megahertz, and a 603e, available in January of 1997, operating at 300 megahertz (Hassig 45-68).
Computers have many interesting components, including: motherboards, microprocessors, FPUs (Floating Point Units), hard disk drives, floppy disk drives (5.25" and 3.5"), CD ROM drives (Compact Disc Read Only Memory), cartridge drives, ROM chips (Read Only Memory), RAM (Random Access Memory), VRAM (Video Random Access Memory), NuBus or PCI expansion cards (Peripheral Component Interconnect), monitors, keyboards, mice, speakers, microphones, printers, network systems, and modems. The motherboard is what the microprocessor, FPU, ROM, RAM, VRAM and all the circuitry are attached to. The microprocessor, also called a CPU (Central Processing Unit), and the FPU are what everything goes through, and they tell the computer what to do with data. Most CPUs operate from 2.5 megahertz (MHz, millions of cycles per second) to 300
MHz. The hard disk holds large amounts of data for a long time. Most hard disks can hold from 1 megabyte (MB) to 10 gigabytes (GB). *NOTE: (1 GB is 1,024 MB, 1 MB is 1,024 kilobytes (K), 1 K is 1,024 bytes, 1 byte is 8 bits, and a bit is an on/off code (binary code uses 0 for off and 1 for on), therefore a 10 GB hard disk can have 85,899,345,920 bits!). Floppy disks are for putting small amounts of data on, and being able to take them with you. The old 5.25" disks held a few K of data, while the new 3.5" type holds 800 K or 1.4 MB. CD ROMs are relatively new. They have very fine lines on their surface read by a laser, and can usually hold 650 MB of data (which is unchangeable). CD ROM drives range in disc reading time from 1X (real time, 150 K/sec) to 15X (2.2 MB/sec). Cartridges store large amounts of data and are removable, like floppy disks. They can store up to 1 GB, and come in all shapes and sizes, each type with a different drive. ROM is unchangeable data soldered on the motherboard. RAM is memory the computer uses for immediate access, such as open applications. Everything in RAM is lost when the computer is shut down. VRAM is used to display a higher resolution or greater color depth on the monitor. 512 K or 1 MB is the standard amount on most computers, and 8 MB is the most available. The resolution ranges from 400 by 300 pixels to 1,920 by 1,440 pixels. The color depth ranges from 1 bit (black and white) to 36 bit (68,719,476,736 colors). NuBus and PCI expansion cards add special features to computers, such as receiving TV transmissions. Monitors display images given to them by VRAM. They range in size from 9 to 21 inches diagonally. Keyboards input data into the computer. Mice have a
track ball that moves around inside, causing a cursor to move across the screen. Speakers amplify the sound output of a computer. Microphones allow sounds to be recorded on a computer. Printers allow computers to put data on paper. Network systems allow data to be easily transmitted from one computer to another. Modems allow data to be transmitted through telephone wires. They have variable speeds from 300 bits per second (bps) to 57,600 bps (Rizzo 5-21).
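Using the binary units defined in the note above, the bit count for a 10 GB disk works out as follows (a quick arithmetic check in Python, nothing more):

    # 10 GB in bits: 1 GB = 1,024 MB, 1 MB = 1,024 K, 1 K = 1,024 bytes,
    # 1 byte = 8 bits.
    bits = 10 * 1024 * 1024 * 1024 * 8
    print(f"{bits:,}")   # 85,899,345,920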
Today, computers are utilized in just about every field imaginable. A caution for the future of computers is that they could go berserk or, if they had a working artificial intelligence, they could make mankind completely obsolete. Computers have evolved, and will continue to evolve, faster than any technology to date. Therefore, it is impossible to fathom where computers will be in a thousand, or even a hundred years. One thing, however, is certain: computers are the most important advancement our society has ever seen.
BIBLIOGRAPHY
Rizzo, John and K. Daniel Clarke. How Macs Work. New York: Ziff-Davis Press, 1996.
Hassig, Lee, Margery A. duMond, Esther Ferrington, et al. The Personal Computer. Richmond: Time Life, 1989.
Silver, Gerald A. and Myrna L. Silver. Computers and Information Processing. New York: HarperCollins Publishers, 1993.
f:\12000 essays\sciences (985)\Computer\A Multifacited Interface.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
--------------------------------------------------------
Microsoft Windows 95 README for Microsoft Windows
August 1995
--------------------------------------------------------
(c) Copyright Microsoft Corporation, 1995
------------------------
HOW TO USE THIS DOCUMENT
------------------------
To view Readme.txt on screen in Notepad, maximize the Notepad window.
To print Readme.txt, open it in Notepad or another word processor,
then use the Print command on the File menu.
--------
CONTENTS
--------
IF YOU HAVEN'T INSTALLED WINDOWS 95
LIST OF WINDOWS 95 README FILES
HOW TO READ README FILES
UNINSTALLING WINDOWS 95
--------
IF YOU HAVEN'T INSTALLED WINDOWS 95
===================================
Additional setup information is available in Setup.txt. You can view
Setup.txt using Notepad with Windows 3.1. You can find the file on
Windows 95 installation disk 1. If you purchased Windows 95 on a CD-ROM,
you can find Setup.txt in the \Win95 directory.
LIST OF WINDOWS 95 README FILES
===============================
In addition to Readme.txt, Windows 95 provides the following readme
files:
Config.txt Contains syntax information for commands you use
with your Config.sys file.
Display.txt Provides information about how to configure
and correct problems for available drivers
and how to obtain additional display drivers.
Exchange.txt Provides information to help you set up and
run Microsoft Exchange.
Extra.txt Provides information about where to find
additional Windows 95 files, such as updates
and drivers, in addition to files available
only in the CD-ROM version of Windows 95.
Faq.txt Answers frequently asked questions about
Windows 95.
General.txt Provides information about startup problems,
the programs that come with Windows 95, disk
tools, disks and CDs, drivers, removable media,
Microsoft FAX, and pen services.
This file also contains last-minute information
received too late to include in the other readme
files. For example, if you have a question about
a printer, it would be helpful to look in
General.txt as well as in Printers.txt.
Hardware.txt Provides information about known problems and
workarounds for hardware. You may also need
to refer to Printers.txt or Mouse.txt for
specific problems.
Internet.txt Provides information to help you connect to
the Internet if you haven't done so already.
Also provides information about where to
download Microsoft's new Web browser,
Internet Explorer.
Mouse.txt Provides information about known problems
and workarounds specifically for mouse and
keyboard problems.
Msdosdrv.txt Contains syntax information for MS-DOS
device drivers. For additional help on MS-DOS
commands, see Config.txt. You can also use
command-line help at the command prompt by
typing /? following the command name.
Msn.txt Provides information to help you connect to
The Microsoft Network.
Network.txt Provides information about installing and
running network servers.
Printers.txt Provides information about known problems
and workarounds for printers.
Programs.txt Provides information and workarounds for
running some specific Windows-based and
MS-DOS-based programs with Windows 95.
Support.txt Provides Information about how to get
additional support for Windows 95.
Tips.txt Contains an assortment of tips and tricks
for using Windows 95, most of which are not
documented in online Help or the printed book.
HOW TO READ README FILES
========================
When you install Windows 95, all the readme files are copied to the
\Windows directory.
To open a readme file after you install Windows 95:
1. Click the Start menu.
2. Click Run.
3. Type the name of the readme file.
Even if you haven't installed Windows 95 yet, you can still open a
readme file.
To open a readme file before you install Windows 95:
If you purchased Windows 95 on floppy disks:
--------------------------------------------
1. Insert Disk 1 into drive A (or whatever drive you prefer).
2. At the MS-DOS command prompt, type the following:
a:extract.exe /a /l c:\windows win95_02.cab filename.txt
For example, if you want to open General.txt, you would type:
a:extract.exe /a /l c:\windows win95_02.cab general.txt
3. Change to the \Windows directory.
4. At the command prompt, type the following:
edit filename.txt
If you purchased Windows 95 on a CD-ROM:
----------------------------------------
1. Insert the CD into your CD-ROM drive (drive x in this example).
2. Change to the \Win95 directory on your CD-ROM drive.
3. At the MS-DOS command prompt, type the following:
extract.exe /a /l c:\windows win95_02.cab filename.txt
For example, if you want to open General.txt, you would type:
extract.exe /a /l c:\windows win95_02.cab general.txt
4. Change to the Windows directory on your C drive.
5. At the command prompt, type the following:
edit filename.txt
UNINSTALLING WINDOWS 95
=======================
During Setup, you have the option of saving your system files so
that you can uninstall Windows 95 later. If you want to be able to
uninstall Windows 95 later, choose Yes. Setup will save your system
files in a hidden, compressed file. If you don't need to be able to
uninstall Windows 95 later, choose No.
You will not see this Setup option if:
- You are upgrading over an earlier version of Windows 95.
- You are installing to a new directory.
- You are running a version of MS-DOS earlier than 5.0.
NOTE: The uninstall files must be saved on a local hard drive. You
can't save them to a network drive or a floppy disk. If you have
multiple local drives, you will be able to select the one you want
to save the uninstall information on.
To uninstall Windows 95 and completely restore your computer to its
previous versions of MS-DOS and Windows 3.x, carry out the following
procedure:
1. Click the Start button, point to Settings, and then click
Control Panel.
2. Double-click the Add/Remove Programs icon.
3. On the Install/Uninstall tab, click Windows 95, and then click
Remove.
Or, if you are having problems starting Windows 95, use your startup
disk to start your computer, and then run UNINSTAL from the startup
disk.
NOTE: The uninstall program needs to shut down Windows 95. If there is
a problem with this on your computer, restart your computer and press
F8 when you see the message "Starting Windows 95." Then choose Command
Prompt Only, and run UNINSTAL from the command prompt.
If Windows 95 is running and you want to remove the uninstall files to
free up 6 to 9 MB of disk space, carry out the following procedure:
1. Click the Start button, point to Settings, and then click
Control Panel.
2. Double-click the Add/Remove Programs icon.
3. On the Install/Uninstall tab, click Old Windows 3.x/MS-DOS System
Files, and then click Remove.
You will no longer be able to uninstall Windows 95.
f:\12000 essays\sciences (985)\Computer\A short history of computers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Whether you know it or not, you depend on computers for almost everything you do in modern life. From the second you get up in the morning to the second you go to sleep, computers are tied into what you do and use in some way. They are tied into your life in the most obvious and obscure ways. Take, for example, waking up in the morning, usually to a digital alarm clock. You start your car; it uses computers the second you turn the key (General Motors is the largest buyer of computer components in the world). You pick up the phone; it uses computers. No matter how hard you try to get away from them, you can't. It is inevitable.
Many people think of the computer as a new invention, but in reality it is very old, about 2000 years old.1 The first computer was the abacus. This invention was constructed of wood, two wires, and beads. It was a wooden rack with the two wires strung across it horizontally and the beads strung along the wires. It was used for ordinary arithmetic. These types of computers are considered analog computers. Another analog computer was the circular slide rule. This was invented in 1621 by William Oughtred, an English mathematician. The slide rule was a mechanical device made of two rules, one sliding inside the other, and marked with many number scales. It could do such calculations as division, multiplication, roots, and logarithms.
Soon after came some more advanced computers. In 1642 came Blaise Pascal's computer, the Pascaline. It is considered to be the first automatic calculator. It consisted of gears and interlocking cogs, and numbers were entered with dials. It was originally made for his father, a tax collector.2 He went on to build 50 more of these Pascalines, but clerks would not use them,3 fearing that they would lose their jobs.4
Soon after, there were many similar inventions. There was the Leibniz wheel, invented by Gottfried Leibniz. It got its name from the way it was designed, with a cylinder with stepped teeth.5 It performed the same functions as the other computers of its time.
Computers such as the Leibniz wheel and the Pascaline were not used widely until the invention made by Thomas of Colmar (a.k.a. Charles Xavier Thomas).6 It was the first successful mechanical calculator that could do all the normal arithmetic functions. This type of calculator was improved by many other inventors, so that by 1890 it could do a number of other things. The improvements were that it could collect partial results, store information (a memory function), and output information to a printer. These improvements were made mainly for commercial use, and also required manual installation.
Around 1812 in Cambridge, England, new advancements in computers were made by Charles Babbage. His idea was that long calculations could be done in a series of steps that were repeated many times.7 Ten years later, in 1822, he had a working model, and in 1823 fabrication of his invention began. He called his invention the Difference Engine.
In 1833 he stopped working on his Difference Engine because he had another idea: to build an Analytical Engine. This would have been the first digital computer that was fully program controlled. His invention was to perform all the general-purpose functions of modern computers. This computer was to use punch cards for storage, be powered by steam, and be operated by one person.8 It was never finished, for many reasons. Among them were the lack of precision mechanics and the fact that it would solve problems that did not need to be solved at that time.9 After Babbage's computer, people lost interest in this type of invention.10 Eventually, later inventions would create a demand for the calculating capability that computers like Babbage's would be capable of providing.
In 1890 a new era of business computing evolved. This was a development in punch card use that made a step towards automated computing, first used in 1890 by Herman Hollerith. Because of this, human error was reduced dramatically.11 Punch cards could hold 80 characters per card, and the machines could process about 50-220 cards a minute. This was a means of easily accessible memory of unlimited size.12 In 1896 Hollerith founded his company, the Tabulating Machine Company; later, in 1924, after several mergers and take-overs, International Business Machines (IBM) was formed.
An invention during this time, in 1906, would influence the way that computers were built in the future: the first vacuum tube. Later, a paper written by Alan Turing described a hypothetical digital computer.13
In 1939 came the first true digital computer. It was called the ABC, and was designed by Dr. John Atanasoff.
In 1942 John P. Eckert, John W. Mauchly, and associates decided to build a high-speed computer. The computer they were to build would become known as the ENIAC (Electronic Numerical Integrator And Computer). The reason for building it was the demand for high computing capacity at the beginning of World War Two.
The ENIAC, once built, took up 1,800 square feet of floor space.14 It consisted of 18,000 vacuum tubes and consumed 180,000 watts of power.15 The ENIAC was rated to be 1000 times faster than any previous computer. It was accepted as the first successful high-speed computer, and was used from 1946 to 1955.16
Around the same time another computer was built that became more popular. It was more popular because it not only had the ability to do calculations but could also do some of the decision-making work of the human brain. When it was finished in 1950 it became the fastest computer in the world.17 It was built by the National Bureau of Standards on the campus of UCLA and was named the National Bureau of Standards Western Automatic Computer, or SWAC. It could be said that the SWAC set the standards for computers from then up to the present.18 This was because it had all the same primary units: a storage device, an internal clock, an input/output device, and an arithmetic logic unit consisting of a control and arithmetic unit.
These computers were considered first generation computers (1942 - 1958).
In 1948 John Bardeen, Walter Brattain, and William Shockley of Bell Labs filed for the first patent on the transistor.19 This invention would lay the foundation for second generation computers (1958 - 1964).
Computers of the second generation were smaller (about the size of a piano) and much quicker because of the new inventions of the time. Computers used the much smaller transistor instead of the bulky vacuum tubes. Another invention which influenced second generation computers and every generation after it was the discovery of magnetic core memory. Now magnetic tapes and disks were used to store programs instead of them being stored inside the computer. This way the computer could be used for many operations without being totally reprogrammed or rewired for another task. All you had to do was pop in another disk.
The third generation (1964 - 1970) was when computers were commercialized more than ever before. This was because they were getting smaller and more dependable.20 Also, the cost went down and power requirements were lower.21 This was largely because of the invention of the silicon semiconductor. These computers were used mainly in medical settings and libraries for keeping track of records and for various other purposes. These third-generation computers were the first microcomputers.
The generation of computers we are in now is the fourth generation; it started in 1970. The fourth generation really started with an idea by Ted Hoff, an employee of Intel, that all the processing units of a computer could be placed on one single chip. This idea of his was not bought by many people at first.22 I believe that without this idea, upgradeable computers would never have been designed. Today, nearly everything has a microprocessor built into it.23
The microcomputer was changed forever in 1976 when Steve Jobs and Steve Wozniak sold a Volkswagen and a calculator for $1,300 to build the first Apple.24 They did the work in their garage. They founded their company, and by 1983 it had successfully made the Fortune 500 list.25
Two years before Apple was founded, IBM had announced the release of the IBM PC. Over the next 18 months the IBM PC would become an industry standard.26
From 1980 on there was a large demand for microcomputers such as the IBM PC and the Apple, not only in industry but in the homes of many people. Many other computers appeared during the 80's. Some were the Commodore, Tandy and Atari, and game systems such as the Nintendo, among many others. There was also a large demand for computer games for the home PC. Because of these many demands, many companies were getting very competitive. They were pushing for the faster, better computer. By the late 80's, because of this demand, microprocessors could handle 32 bits of data at a time, pushing over 4 million instructions processed a second.27
It seems as if over time computers have evolved into totally different machines, but if you put it into perspective they are also much alike. On the other hand, with almost every business and many families today demanding better and newer computers, it seems that if you buy a new computer today, industry has made it obsolete before you even get it home. This is probably because the better you make a computer and the quicker it can do calculations, the quicker it can help you design a new computer that is even faster. It is a domino effect that started 2000 years ago and will probably never end. Who knows what's in store for the future, or, you could say, the fifth generation of computers.
1. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 1.
2. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net.
3. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm. Internet. 1995-96. Andy_Hale@ncsu.edu. p. 1.
4. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm. Internet. 1995-96. Andy_Hale@ncsu.edu. p. 1.
5. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm. Internet. 1995-96. Andy_Hale@ncsu.edu. p. 1.
6. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 1.
7. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 2.
8. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 3.
9. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 3.
10. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 3.
11. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 3.
12. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm. Internet. 1995-96. Andy_Hale@ncsu.edu. p. 2.
13. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 4.
14. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 4.
15. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 4.
16. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 5.
17. Rutland, David. Why Computers Are Computers. New York: Waren Publishers, 1996. p. 2.
18. Rutland, David. Why Computers Are Computers. New York: Waren Publishers, 1996. p. 2.
19. Polsson, Ken. Chronology of Events in the History of Micro Computers. http://www.islandnet.com/kpolsson/comphist.htm. Internet. 1995-96. Ken.polsson@bbc.org. p. 3.
20. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 6.
21. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 6.
22. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 6.
23. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 6.
24. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 6.
25. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 6.
26. Meyers, Jeremy. A Short History of the Computer. http://www.lightning.net/~softlord/comp.html. Internet. softlord@lightning.net. p. 6.
27. Hale, Andy. History of Computers. http://www2.ncsu.edu/eos/service/bae/www/courses/bae221/jeff/comphist.htm. Internet. 1995-96. Andy_Hale@ncsu.edu. p. 8.
f:\12000 essays\sciences (985)\Computer\AD and DA Convertors and Display Devices.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
National Diploma In Engineering
Data Communications
Electronics B NIII
Assignment no. 2
A/D and D/A Convertors and Display Devices
Weighting 20%
Name: Malcolm Brown
Class: NDD2
Tutor: Ken Hughs
Contents Page
Task
A/D and D/A Convertors
Analogue and Digital Signals
Analogue / Digital Conversions
Analogue to Digital Convertors
Digital To Analogue Converter
Glossary Of Terms
Visual Display Devices
Seven-Segment Displays
Dot Matrix Displays
Bibliography
Task
A/D and D/A Convertors
Explain two methods of converting analog signals to digital signals and compare them. Explain one method of digital to analog conversion. Choose two A/D convertor devices from the catalogue and list their characteristics, performance, cost, applications etc.
Display Devices
Describe how LED and LCD display devices operate - ie explain the principle behind their operation. Describe the features of the 7-segment, star-burst and dot matrix displays. Choose some devices from the catalogues and describe them.
You are required to produce a written report on your work. The report should be in standard report format and comprise a front page with title, contents page, summary, introduction, main body of the report describing the task and how you met the requirements of the task, circuit diagrams etc., and conclusions. Appendices may be placed in the report if necessary.
The report should be word processed and presented in a plastic folder. Your name, class and subject should be clearly visible.
A/D and D/A Convertors
Analogue and Digital Signals
Analogue Signals - Signals whose amplitude and/or frequency vary continuously eg. sound. Fig 1.1 illustrates an analogue signal:-
Fig 1.1 Illustration of an analogue signal
Digital Signals - Signals which are not continuous in nature but consist of discrete pulses of voltage or current, known as bits, which represent the information to be processed. Digital voltages can vary only in discrete steps. Normally only two levels are used (0 and 1). Fig 1.2 illustrates a digital signal.
Fig 1.2 Illustration of a digital signal
Analogue / Digital Conversions
In today's electronic systems it is often the case that the overall system is neither entirely analogue nor entirely digital in nature. Thus a digital system may be controlled by input signals which are the amplified analogue outputs, perhaps of some measuring transducer (thermistor, LDR). Similarly a digital system output may be required to control the measured analogue system via analogue control values. Interfacing is therefore required between the analogue and digital subsystems and it is necessary to be able to convert an analogue signal into a digital equivalent signal and vice versa. A/D and D/A convertors are therefore used.
An analogue signal cannot be represented exactly by a digital signal and must be sampled at sufficiently short intervals for all relevant information to be retained. Sampling theory states that at least two samples must be obtained per period of the highest frequency component. If the highest frequency component is fs then the period T of the sampling signal is given by:-
T < 1 / (2 * fs)
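As a rough numerical illustration (added here, not part of the original report), the sampling limit can be checked in a few lines of Python; the 3.4 kHz signal bandwidth used below is only an assumed example value.

# Illustrative sketch: minimum sampling rate and maximum sampling period
# for a signal whose highest frequency component is fs, following T < 1/(2*fs).
def min_sample_rate(fs_hz):
    # at least two samples per period of the highest frequency component
    return 2.0 * fs_hz

def max_sample_period(fs_hz):
    return 1.0 / min_sample_rate(fs_hz)

fs = 3400.0   # assumed example: highest component of a speech-band signal, in Hz
print("sample faster than", min_sample_rate(fs), "Hz")
print("sampling period shorter than", max_sample_period(fs) * 1e6, "microseconds")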
Fig. 2 Sample and Hold
Fig.2 shows a basic sample and hold circuit. The capacitor C is used as a store or memory to hold the value of the sample. It is connected to the analogue signal input via the resistor R. The time constant CR is chosen to be sufficiently short so that the capacitor voltage can follow the required analogue signal variations. At the instant that the sample is to be taken switch S is changed into the hold position and the sample voltage is available to the succeeding analogue to digital convertor.
The main disadvantage with this simple circuit lies in the voltage drift which occurs in the capacitor during the hold period. This is mainly due to the load placed upon the capacitor by the following circuitry and can be minimized by using a larger capacitor or by the use of a high impedance buffer amplifier.
Analogue to Digital Convertors
The two A/D convertors described below are known as the Ramp and Successive Approximation types.
Ramp A/D Convertor-
[Diagram labels: analogue input sample Va; comparator output 0 if Va > Vc, 1 if Va < Vc; control logic; count up if input = 0, count down if input = 1; VRef; clock; n-bit counter; n-bit D/A convertor; n-bit parallel digital output.]
Fig 3.1 Block Diagram of Ramp A/D Convertor
Fig 3.1 shows the block diagram for a Staircase Ramp analogue to digital convertor. This diagram consists of a clock pulse generator which sends clock pulses into the n-bit counter. The counter produces a parallel digital output which is converted into its analogue equivalent by the D/A convertor. The output of the D/A convertor is compared with the analogue input sample by the comparator. The output of the comparator is then fed into the control logic which in turn controls the counter.
The circuit operates as follows: the counter is emptied by resetting all bits to zero before a conversion is started. When the new analogue sample is present the control logic starts the count, ie clock pulses are fed into the counter. The counter digital output thus increases step by step at the clock frequency. The output from the digital to analogue convertor is a linear ramp made up of equal incremental steps. The count continues until the generated staircase ramp exceeds the value of the analogue sample voltage, when the comparator output goes to logic 1 and stops the count. The counter output is at this time the digital equivalent of the analogue voltage.
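The counting behaviour described above can be sketched in Python; this is an idealised model added for illustration, not the circuit itself, and the 8-bit word length and 5 V reference are assumed values.

# Idealised staircase-ramp conversion: count up, one step per clock pulse,
# until the D/A convertor output exceeds the analogue sample voltage.
def ramp_adc(v_sample, n_bits=8, v_ref=5.0):
    step = v_ref / (2 ** n_bits)      # analogue value of one counter step (1 LSB)
    count = 0
    while count < (2 ** n_bits) - 1 and count * step < v_sample:
        count += 1                    # one clock pulse
    return count                      # digital equivalent of the sample

print(ramp_adc(3.2))                  # a 3.2 V sample gives a count of about 164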
Successive Approximation A/D Convertor
[Diagram labels: shift register; n-bit digital output; D/A convertor; comparator.]
Fig 3.2 Block Diagram for a Successive Approximation A/D Convertor
Fig 3.2 shows the block diagram for a successive approximation A/D convertor. The diagram consists of a shift register to store the digital output, connected to a D/A convertor whose output is compared with the analogue input sample by use of a comparator. The output of the comparator is then fed into the shift register.
The circuit operates by repeatedly comparing the analogue signal voltage with a number of approximate voltages which are generated at the D/A convertor.
Initially the shift register is cleared and the D/A convertor output is therefore zero. The first clock pulse applies the MSB of the register to the D/A convertor. The output of the D/A convertor is then one-half of its full scale voltage range (FSR). If the analogue voltage is greater than FSR/2 the MSB is retained (stored by a latch); if it is less than FSR/2 the MSB is lost. The next clock pulse applies the next lower MSB to the D/A convertor, producing a D/A convertor output of FSR/4. If the MSB has been retained, the total D/A convertor output voltage is now 3FSR/4. If the MSB has been lost, the output of the D/A convertor is now FSR/4. In either case the analogue and D/A convertor voltages are again compared. If the analogue voltage is the larger of the two, the second MSB is retained (latched); if not, it is lost.
A succession of similar trials is carried out, and after each one the shift register output bit is either retained by a latch or lost. Once n+1 clock pulses have been supplied to the register the conversion is complete and the register output gives the digital word that represents the analogue input sample voltage.
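The same conversion can be sketched as a successive approximation in Python; again this is only an added illustrative model, with an assumed 8-bit word length and 5 V full-scale range.

# Idealised successive approximation: trial each bit from the MSB downwards,
# retaining it only if the resulting D/A convertor output does not exceed
# the analogue sample voltage.
def sar_adc(v_sample, n_bits=8, v_fsr=5.0):
    result = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = result | (1 << bit)                # apply the next lower MSB
        v_trial = trial * v_fsr / (2 ** n_bits)    # trial D/A convertor output
        if v_trial <= v_sample:
            result = trial                         # bit retained (latched)
        # otherwise the bit is lost
    return result

print(sar_adc(3.2))   # needs only n trials, compared with up to 2**n counts for the ramp type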
The characteristics of two A/D convertors are shown in Appendices 1 and 2.
Digital To Analogue Converter
A typical 4-bit D/A converter is shown in fig 4.1. The circuit uses precision resistors that are weighted in binary progression, ie 1, 2, 4, 8. Vref is an accurate reference voltage. The circuit has 4 inputs (d0, d1, d2, d3) and 1 output, Vout. When a bit is high it produces enough base current to saturate its transistor; this acts as a closed switch. When a bit is low the transistor is cut off (open switch). By saturating and cutting off the transistors (closing and opening the switches), 16 different output currents from 0 to 1.875 Vref/R can be produced. If, for example, Vref = 5V and R = 5 kΩ, then the total output current varies from 0 to 1.875 mA as shown in table 1.
Fig 4.1 D/A converter using switching transistors
D3 D2 D1 D0 Output current (mA) Fraction of maximum
0 0 0 0 0 0
0 0 0 1 0.125 1/15
0 0 1 0 0.25 2/15
0 0 1 1 0.375 3/15
0 1 0 0 0.5 4/15
0 1 0 1 0.625 5/15
0 1 1 0 0.75 6/15
0 1 1 1 0.875 7/15
1 0 0 0 1 8/15
1 0 0 1 1.125 9/15
1 0 1 0 1.25 10/15
1 0 1 1 1.375 11/15
1 1 0 0 1.5 12/15
1 1 0 1 1.625 13/15
1 1 1 0 1.75 14/15
1 1 1 1 1.875 15/15
Table 1 Output Current
By sending out a nibble to D3 - D0 in ascending levels, ie. 0000, 0001, 0010 etc., the output current of the D/A converter is as shown in fig 4.2. The output moves one step higher until reaching the maximum current. Then the cycle repeats. If all resistors are exact and all transistors are matched, all steps are identical in size. (A short calculation reproducing Table 1 is given after fig 4.2.)
Fig 4.2 Output current of D/A convertor
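The figures in Table 1 can be reproduced with a short added Python sketch; the values Vref = 5 V and R = 5 kΩ are the ones assumed in the text above.

# Output current of the 4-bit binary-weighted D/A converter of fig 4.1,
# assuming Vref = 5 V and R = 5 kOhm as in the text.
V_REF = 5.0       # volts
R = 5000.0        # ohms

def dac_current_ma(d3, d2, d1, d0):
    value = 8 * d3 + 4 * d2 + 2 * d1 + d0         # binary-weighted sum, 0..15
    return value * V_REF / (8 * R) * 1000.0       # output current in mA

for code in range(16):
    d3, d2, d1, d0 = (code >> 3) & 1, (code >> 2) & 1, (code >> 1) & 1, code & 1
    print(d3, d2, d1, d0, round(dac_current_ma(d3, d2, d1, d0), 3), "mA =", code, "/ 15 of maximum")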
Glossary Of Terms
Resolution - One way to measure the quality of a D/A converter is by its resolution. The resolution is the ratio of the LSB increment to the maximum output. Resolution can be calculated by the formula:-
Resolution = 1 / (2^n - 1), where n = number of bits
Percentage resolution = resolution * 100%
The greater the number of bits, the better the resolution. Table 2 is a summary of the resolution for converters with 4 to 18 bits; a short calculation reproducing it follows the table.
Bit Resolution Percent
4 1 part in 15 6.67
6 1 part in 63 1.59
8 1 part in 255 0.392
10 1 part in 1,023 0.0978
12 1 part in 4,095 0.0244
14 1 part in 16,383 0.0061
16 1 part in 65,535 0.00153
18 1 part in 262,143 0.000381
Table 2 Resolution table
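As an added check, the resolution formula above reproduces Table 2 directly; the loop below is only an illustrative sketch.

# Resolution = 1 / (2**n - 1); percentage resolution = resolution * 100.
def resolution(n_bits):
    return 1.0 / (2 ** n_bits - 1)

for n in (4, 6, 8, 10, 12, 14, 16, 18):
    print(n, "bits: 1 part in", 2 ** n - 1, "=", round(resolution(n) * 100, 4), "percent")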
Accuracy - The conformance of a measured value with its true value; the maximum error of a device such as a data converter from the true value.
Absolute Accuracy - The worst case input to output error of a data converter referred to the NBS (National Bureau of Standards) standard volt.
Relative Accuracy - The worst case input to output error of a data converter as a percent of full scale referred to the converter reference. The error consists of offset, gain and linearity components.
Conversion Rate - The number of repetitive A/D or D/A conversions per second for a full scale change to specified resolution and linearity.
Visual Display Devices
Visual displays are often employed in electronic equipment to indicate the numerical value of some quantity eg. digital watches, electronic calculators and digital voltmeters. A variety of display devices are available but the most common are the Light Emitting Diode (LED) and the Liquid Crystal Display (LCD).
Light Emitting Diode (LED) - The majority of Light Emitting Diodes are either gallium phosphide (GaP) or gallium-arsenide-phosphide (GaAsP) devices. An LED radiates energy in the visible part of the electromagnetic spectrum when the forward bias voltage applied across the diode exceeds the voltage that turns it ON. This voltage depends upon the type of LED and the light it emits. Table 3 displays information on different LED types and fig 5.1 shows the electronic symbol for a LED.
Colour Material Wavelength (peak radiation) nm Forward voltage at 10mA current (V)
Red GaAsP 650 1.6
Green GaP 565 2.1
Yellow GaAsP 590 2.0
Orange GaAsP 625 1.8
Blue SiC 480 3.0
Table 3 LED Types
Blue LEDs are a fairly recent development and these devices use silicon carbide (SiC)
Fig 5.1 LED Symbol
The current flowing in a LED must not be allowed to exceed a safe figure, generally 20-60 mA, and if necessary a resistor of suitable value must be connected in series with the diode to limit the current.
Often a LED is connected between one of the outputs of a TTL device and either earth or +5V, depending upon when the LED is required to glow visibly. If, for example, a LED is expected to glow when the output to which it is connected is low, the device should be connected as in fig 5.2. Suppose the low voltage to be 0.4V and the sink current to be 16mA. Then, if the LED voltage drop is 1.6V, the value of the series resistor will be
(5 - 1.6 - 0.4) / (16 * 10^-3) = 188 Ω
When the output of the device is high (@ 4V), no current flows and the LED remains dark. When the LED is to glow to indicate the high output condition, the circuit shown in fig.5.3 must be used.
R1 = (5 - 1.6) / (16 * 10^-3) = 213 Ω
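The two series-resistor calculations above can be wrapped in one small added Python helper; the 5 V supply, 1.6 V LED drop, 0.4 V low-level output and 16 mA current are the values assumed in the text.

# Series resistor for a LED driven from a TTL output.
def led_series_resistor(v_supply, v_led, v_out, i_led):
    return (v_supply - v_led - v_out) / i_led       # resistance in ohms

print(led_series_resistor(5.0, 1.6, 0.4, 16e-3))    # about 188 ohms (fig 5.2, output-low case)
print(led_series_resistor(5.0, 1.6, 0.0, 16e-3))    # about 213 ohms (fig 5.3, output-high case)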
When a LED is reverse biased it acts very much like a Zener diode with a low breakdown voltage (@ 4 V).
Light Emitting Diodes are commonly used because they are cheap, reliable, easy to interface and are readily available from a number of sources. Their main disadvantage is that their luminous efficiency is low, typically 1.5 lumens/watt.
Fig 5.2
Fig 5.3
The characteristics of a LED display are given in Appendix 3
Liquid Crystal Displays (LCD) -
A solid crystal is a material in which the molecules are arranged in a rigid lattice structure. If the temperature of the material is increased above its melting point, the liquid that is formed will tend to retain much of the orderly molecular structure. The material is then said to be in its liquid crystalline phase. There are two classes of liquid crystal, known respectively as nematic and smectic, but only the former is used for display devices.
A nematic liquid crystal does not radiate light but instead it interferes with the passage of light whenever it is under the influence of an applied electric field. There are two ways in which the optical properties of the crystal can be influenced by an electric field: dynamic scattering and twisted nematic. The former was commonly employed in the past but now its application is mainly restricted to large-sized displays. The commonly met liquid crystal displays, eg. those in digital watches and hand calculators, are all of the twisted nematic type.
[Diagram labels: incident light; transmitted light; no transmitted light; applied voltage V.]
Fig 6 (A) A liquid crystal cell; (B) and (C) operation of a liquid crystal cell
The construction of a liquid crystal cell is shown in fig 6 (A). A layer of a liquid crystal is placed in between two glass plates that have transparent metal film electrodes deposited on to their interior faces. A reflective surface, or mirror, is situated on the outer side of the lower glass plate (it may be deposited on its surface). The conductive material is generally either tin oxide or a tin oxide/indium oxide mixture and it will transmit light with about 90% efficiency. The incident light upon the upper glass plate is polarized in such a way that, if there is zero electric field between the plates, the light is able to pass right through and arrive at the reflective surface. Here it is reflected back and the reflected light travels through the cell and emerges from the upper plate (fig 6 (B)). If a voltage is applied across the plates (fig 6 (C)) the polarization of the light entering the cell is altered and it is no longer able to propagate as far as the reflective surface. Therefore no light returns from the upper surface of the cell and the display appears to be dark. Because the LCD does not emit light, it dissipates little power.
Liquid Crystal Displays, unlike LEDs, are not available as single units and are generally manufactured in the form of a 7-segment display. The metal oxide film electrode on the surface of the upper glass plate is formed into the shape of the required 7 segments, each of which is taken to a separate contact, and the lower glass plate has a common electrode or backplate deposited on it. The idea is shown by fig 7. With this arrangement a voltage can be applied between the backplate and any one, or more, of the seven segments to make that, or those, particular segment(s) appear dark and thereby display the required number.
Nematic liquid crystal displays possess a number of advantages which have led to their widespread use in battery operated equipment. First, their power consumption is very small, about 1 µW per segment (much less than the LED); secondly, their visibility is not affected by bright incident light (such as sunlight); and third, they are compatible with low-power NMOS/CMOS circuitry.
Fig 7 LCD 7-segment Display
The characteristics of an LCD display are given in Appendix 4
Seven Segment Displays
Seven-segment displays are generally used as numerical indicators and consist of a number of LEDs arranged in seven segments as shown in Fig 8 (A). Any number between 0 and 9 can be indicated by lighting the appropriate segments, as shown in Fig 8 (B). A typical 7-segment display is manufactured in a 14-pin DIL package, with the cathode of each LED brought out to a separate terminal and a common anode.
Fig.8 (A)
Fig 8 (B)
Clearly, the 7-segment display needs a 7-bit input signal and so a decoder is required to convert the digital signal to be displayed into the corresponding 7-segment signal. Decoder/driver circuits can be made using SSI devices but more usually a ROM or a custom-built IC would be used. Fig 10 (A) shows one arrangement, in which the BCD output of a decade counter is converted to a 7-segment signal by a decoder (a sketch of this decode is given after fig 10 (B)).
When a count in excess of 9 is required, a second counter must be used and be connected in the manner shown by fig 10 (B). The tens counter is connected to the output of the final flip-flop of the units counter in the same way as the flip-flops inside the counters are connected.
Fig 10 (A) [diagram: decade counter, BCD to 7-segment decoder, 7-segment display]
Fig 10 (B) [diagram: cascaded decade counters, each with its own decoder and 7-segment display]
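As an added illustration of what the BCD to 7-segment decoder in fig 10 (A) has to do, the lookup below maps each digit to the standard segments a to g that must be lit; the table itself is a sketch, not taken from the report.

# BCD to 7-segment decode: segments a..g that must be driven for each digit.
SEGMENTS = {
    0: "abcdef", 1: "bc",     2: "abdeg",  3: "abcdg",   4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",    8: "abcdefg", 9: "abcdfg",
}

def decode_bcd(digit):
    return SEGMENTS[digit]      # segments to illuminate on the display

for d in range(10):
    print(d, "->", decode_bcd(d))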
Dot Matrix Displays
A dot matrix display allows each alphanumeric character to be indicated by illuminating a number of dots in a 5 * 7 dot matrix. To allow for lower case letters and for spaces between adjacent rows and columns, each character font is allocated a 6 * 12 space. Fig 11.1 shows a 6 * 12 dot matrix. Every location in the dot matrix has a LED connected, as shown by Fig 11.2 for the top two rows of the matrix only. All the cathodes of the LEDs in one row, and all the anodes in one column, are connected together. By addressing the appropriate locations in the matrix and making the LEDs at those points glow visibly, any number or character in the set can be illuminated. Some examples are given in Fig.???
The circuitry required to drive a dot matrix display is too complex to be implemented using SSI devices. One 3-chip LSI dot matrix display controller, the Rockwell 10939, 10942 and 10943, is a general-purpose controller which is able to interface with other kinds of dot matrix as well as the LED type. The controller can drive up to 46 dots and up to 20 characters selected out of the full 96-character ASCII code.
Fig 11.1
Fig 11.2
Bibliography
Microelectronic Systems: A Practical Approach - W. Ditch
Basic Electrical and Electronic Engineering - E.C. Bell and R.W. Bolton
Electrical and Electronic Principles for Technicians - D.C. Green
Data Conversion Components - Datel
RS Data Library - RS Components
f:\12000 essays\sciences (985)\Computer\An Ergonomic Evaluation of Kinesis Ergonomic Computer Keybo.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. Introduction
In this information-technology age, everyday tasks are more and more related to computers. That ranges from basic jobs, such as providing food recipes for housewives, to complicated ones, such as analyzing laboratory experimental data for scientists and engineers. This popularity of computers means that the time one has to spend with a computer is a lot greater than in the past.
Until now, the computers and computer peripherals on the market have been made according to the same designs as the ones invented decades ago, when computers were used only in large-scale scientific projects or big corporations. That means that, for most people, the ergonomic value of these products obviously was not taken into account when they were designed. Fortunately, more companies are now trying to change the way people work with computers by marketing a number of ergonomic products, most notably keyboards, mice and monitors. Ergonomic keyboards, mice and monitors are being released all the time. The reason the focus is on these products is that they are the parts of the computer one interfaces with the most while working.
The question of whether these ergonomic keyboards, mice, monitors and other products really work attracts a lot of regular computer users. Thus, studies dedicated to it have been carried out. This report is based on one such study of an ergonomic keyboard from a manufacturer called Kinesis.
This study looks not only at the effect of the keyboard on the users' bodies, by means of electromyographic activity, but also at the learning rate of users changing to this new style of keyboard. This is very useful, since a slow learning rate would lead to a decrease in work effectiveness.
Introduced in 1868 by Christopher Sholes, the keyboard is still the primary data entry mode for most computer users. With the current increase in computer, and hence keyboard, usage, problems known as operator stress problems have developed among keyboard users. These are a kind of cumulative trauma disorder, mainly caused by working excessively or repetitively with the same thing, in this case the keyboard, in the same position for a long period of time. This kind of disorder is considered to be the most expensive and severe one occurring in the office environment.
This has led to a number of alternative designs being introduced in the market with the main intention of reducing the muscular stress required for typing. The reason these designs have not yet replaced the old one is the users' familiarity with the old design. This means a certain amount of retraining time is required to familiarize users with a new keyboard design, and thus the design requiring less time is likely to be the choice.
This study's main objectives are to measure and analyze the initial learning rate and electromyographic activity, explained later, while using an alternative keyboard design, the Kinesis Ergonomic Computer Keyboard (Figure 1). These data are then compared with the standard computer keyboard, the old design, to see whether it is worth the time and money spent on the new product.
The electromyographic signals used to examine the muscle activities in this study are signals generated by muscles. These signals can sometimes be used to control artificial body limbs especially ones requiring sensitive or complicated degree of control such as rotary or grasping motion. Systems that use such signals are called myoelectric systems.
The Kinesis keyboard utilizes the same QWERTY layout as the standard design so that users do not have to relearn typing all over again. The key ergonomic features of this keyboard are:
· The distance between centers of the halves of the Kinesis keyboard is approximately 27 cm, reducing the angle of adduction of the wrists to near zero for most adults.
· The keypads slope downward from inside to outside edge, and are concave to better fit the natural shape of the operator's hands. The keys form straight columns and slightly curved rows.
· The keyboard features a built-in forearm-wrist support extending approximately 14 cm from the home row to the edge.
· The keyboard features separate thumb-operated keypads to redistribute the workload from the little fingers to the thumbs. These keypads consist of the enter, space, backspace, delete and combination (ctrl and alt) keys.
· Detachable numeric/cursor pad.
· Integral palm supports.
· Shorter reach for function keys.
Figure 1. The Kinesis Ergonomic Computer Keyboard.
2. Details
2.1 Materials and methods
There were 6 female professional typists participants of age 29 to 52 and typing experience of 10 to 32 years involved in this experiment. Typing speed in words per minute, typing accuracy in percentage of characters typed correctl
f:\12000 essays\sciences (985)\Computer\An essay on computer communications.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Communications. I could barely spell the word, much less comprehend its meaning. Yet when
Mrs. Rubin made the announcement about the new club she was starting at the junior high school, it triggered something in
my mind.
Two weeks later, during the last month of my eighth grade year, I figured it out. I was rummaging through the basement, and
I ran across the little blue box that my dad had brought home from work a year earlier. Could this be a modem?
I asked Mrs. Rubin about it the next day at school, and when she verified my expectations, I became the first member of
Teleport 2000, the only organization in the city dedicated to introducing students to the information highway.
This was when 2400-baud was considered state-of-the-art, and telecommunications was still distant from everyday life. But
as I incessantly logged onto Cleveland Freenet that summer, sending e-mail and posting usenet news messages until my
fingers bled, I began to notice the little things. Electronic mail addresses started popping up on business cards. Those
otherwise-incomprehensible computer magazines that my dad brought home from work ran monthly stories on
communications-program this, and Internet-system that. Cleveland Freenet's Freeport software began appearing on systems
all over the world, in places as far away as Finland and Germany - with free telnet access!
I didn't live life as a normal twelve-year-old kid that summer. I sat in front of the monitor twenty-four hours a day, eating my meals from a plate set next to the
keyboard, stopping only to sleep. When I went back to school in the fall, I was elected the first president of Teleport 2000, partially because I was the only student
in the school with a freenet account, but mostly because my enthusiasm for this new, exciting world was contagious.
Today, as the business world is becoming more aware of the advantages of telecommunications, and the younger generation is becoming more aware of the
opportunities, it is successfully being integrated into all aspects of our society. Companies are organizing Local Area Networks and tapping into information
resources through internal networking and file sharing, and children of all ages are entertained by the GUI-based commercial systems and amazed by the worldwide
system of gopher and search services. As a result, a million more people join the 'net every month, according to a 1994 article by Vic Sussman in U.S. News
& World Report.
They say that the worldwide community used to double its knowledge every century. Right now, that rate has been reduced to seven years, and is constantly
decreasing. I've learned more since I started traveling the information highway than I could have possibly imagined. Through File Transfer Protocol sites, I can
download anything from virus-detection utilities to song lyrics and guitar tabs. I receive press releases, proclamations and international news from the White House
via a mailing list. I even e-mailed President Clinton recently and received a response the next day. And it was just a few months ago that I hung up my
2400-baud modem for a replacement six times as fast.
The essence of this international system of systems was neatly summed up by David S. Jackson and Suneel Ratan in a recent Time article: "The magic of the Net is
that it thrusts people together in a strange new world, one in which they get to rub virtual shoulders with characters they might otherwise never meet."
To me, this electronic "Cyberspace" was like kindergarten all over again. It was not only an introduction to a whole new world of exciting opportunities, but it
helped me take a step further into maturity. Communicating with others on this alternate plane of reality was so different, yet so similar, to the world I had already
experienced. The Internet is a place where the only way you can view people is by how they choose to display themselves. Because you can't see other users,
you can't make any prejudgments based upon race, sex, or physical handicap. As stated by John R. Levine and Carol Baroudi in The Internet for Dummies,
"Who you are on the Internet depends solely on how you present yourself through your keyboard."
The reason for this is simple. The people who created this form of communication weren't interested in that. They didn't care about political or ethnic boundaries;
they only cared about the abstract. As a result, the parallel world they conceived contained a true form of equality. "One computer is no better than any other, and
no person is better than any other," wrote Levine and Baroudi, and the only way this right can be taken away from you is if you choose to remove it yourself. My
realization of this concept taught me a lot about the faults of the real world, and why so many people feel the need to defect to Cyberspace so frequently.
I believe in the future - not the extreme 1984; 2001: A Space Odyssey future, but the inevitable progression from today into tomorrow. The people of tomorrow
will not be puzzled by the word "Internet" or the mechanics behind networking - these will be basic survival skills in society. The future will see an
electronically-linked global community, in which everyone is a citizen. The constant thickening of the worldwide web of networks excites me, because it
proves that the world is not as big as one may think. You really can reach out to anyone you want in a matter of milliseconds.
The other day, I was helping a ten-year-old girl find an e-mail "key-pal" from Australia. I think I see a lot of me, the curious eighth-grader, in her. Perhaps I see a lot
of the future, too.
f:\12000 essays\sciences (985)\Computer\Anatomia.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
General notions of ANATOMY
1. Bones
Bone, or osseous tissue, is a rigid form of connective tissue that makes up most of the skeleton. The adult skeletal system, or skeleton (Greek: dry), consists of more than 200 bones that constitute the supporting framework of the body.
Some cartilages are also included in the skeletal system (e.g. the costal cartilages that join the anterior ends of the ribs to the sternum). The connections between the components of the skeleton are called joints (articulations); most of them permit movement.
The skeletal system consists of two main parts: (1) the axial skeleton, composed of the skull, vertebral column, sternum and ribs, and (2) the appendicular skeleton, formed by the pectoral and pelvic girdles and the bones of the limbs.
The study of bones is called osteology. Although the bones studied in the laboratory are lifeless and dry, owing to the removal of their proteins, bones are living organs in the body that change considerably with age.
Like other organs, bones have blood vessels, lymphatic vessels and nerves, and they can be affected by disease.
Osteomyelitis is an inflammation of the bone marrow and adjacent bone. When broken or fractured, bone heals. Bones that are not used, e.g. in a paralysed limb, undergo atrophy (that is, they become thinner and weaker). Bone may be resorbed, as occurs after the loss or extraction of teeth. Bones also undergo hypertrophy (that is, they become thicker and stronger) when they have greater weight to support.
The bones of different people exhibit anatomical variations. They vary according to age, sex, physical characteristics, health, diet, race, and different genetic and endocrine conditions.
Anatomical variations are useful in the identification of skeletal remains, an aspect of forensic medicine (the relation and application of medical facts to legal problems).
Living bones are mouldable tissues that contain organic and inorganic components. They consist essentially of intercellular material impregnated with mineral substances, mainly hydrated calcium phosphate, i.e. Ca3(PO4)2.
The collagen fibres in the intercellular material give bones elasticity and strength, while the salt crystals, in the form of tubes and rods, give them hardness and some rigidity. When a bone is decalcified in the laboratory by immersion in dilute acid for a few days, its salts are removed but the organic material remains. The bone keeps its shape but becomes so flexible that it can be tied in a knot. A calcined bone also keeps its shape, but its fibrous tissue is destroyed. As a result it becomes brittle, inelastic and crumbles easily.
The relative amount of organic to inorganic substance in bones varies with age. The organic substance is greater in childhood; for this reason, children's bones bend somewhat.
In some metabolic disorders, such as rickets and osteomalacia, there is inadequate calcification of the bone matrix. Since calcium gives bones their hardness, the non-calcified areas bow somewhat, particularly in weight-bearing bones. This leads to progressive deformities, such as knock-knee (genu valgum).
Although the diagnosis of rickets is suggested by clinical widening at the sites of the epiphyseal cartilage plates, the diagnosis is confirmed by the typical radiographic changes that occur at the growing ends of the long bones and ribs in these patients.
Fractures are more common in children than in adults, owing to the combination of their more slender bones and carefree activities. Fortunately, many of these fractures are hairline or greenstick fractures, which are not serious. In a greenstick fracture, the bone breaks like a willow branch.
Fractures of the epiphyseal cartilage plate are serious because they may result in premature fusion of the diaphysis and epiphysis, with subsequent shortening of the bone; e.g., premature fusion of a radial epiphysis causes progressive radial deviation of the hand as the ulna continues to grow. The existence of unfused epiphyses in young people can be very useful in their treatment; e.g., placing staples across the epiphyseal cartilage plate at the knee stops growth in the lower limb. It is the bones of the normal leg that are stapled, to allow the bones of the short leg to catch up.
Fortunately, fractures heal more rapidly in children than in adults. A femoral fracture occurring at birth is united in three weeks, whereas union takes up to 20 weeks in people 20 years of age or older.
In old age, both the organic and inorganic components of bone diminish, producing a condition called osteoporosis. There is a reduction in the amount of bone (atrophy of skeletal tissue) and, as a result, the bones of elderly people lose elasticity and fracture easily. For example, elderly people may trip over a small bump while walking, feel or hear the neck of their femur (thigh bone) break, and fall to the ground. Fractures of the femoral neck are especially common in elderly women, because osteoporosis is more severe in them than in elderly men.
TYPES OF BONE
There are two main types of bone, spongy and compact, but there is no sharp boundary between the two, since the differences between them depend on the relative amount of solid substance and on the number and size of the spaces in each.
All bones have an outer arrangement of compact substance around a central mass of spongy substance, except where the latter is replaced by a medullary cavity or an air space, e.g. the paranasal sinuses.
The spongy substance consists of fine, irregular trabeculae of compact substance that branch and unite with one another to form intercommunicating spaces, which are filled with bone marrow. The trabeculae of the spongy substance are arranged along lines of pressure and tension. In adults there are two types of bone marrow, red and yellow.
Red bone marrow is active in blood formation (haematopoiesis), whereas yellow bone marrow is largely inert and fatty. In most long bones there is a medullary cavity in the body, or diaphysis, which contains yellow bone marrow in adult life. In yellow bone marrow, most of the haematopoietic tissue has been replaced by fat.
The compact substance appears solid except for microscopic spaces. Its crystalline structure gives it hardness and rigidity and makes it opaque to X-rays.
Classification of bones. Bones may be classified regionally as axial (skull, vertebrae, ribs and sternum) or appendicular (bones of the upper and lower limbs and the bones associated with them).
Bones are also classified according to their shape.
1. Long bones are tubular in shape and have a body (diaphysis) and two ends, which are concave or convex. Long bones are greater in length than in width, although some long bones are small (e.g. in the fingers).
The ends of long bones articulate with other bones; thus they are expanded, smooth and covered with hyaline cartilage. Generally, the diaphysis of a long bone is hollow and typically has three borders separating its three surfaces.
2. Short bones are cuboid in shape and are found only in the foot and the wrist, e.g. the carpal bones. They have six surfaces, of which four or fewer are articular and two or more serve for the attachment of tendons and ligaments and for the entry of blood vessels.
3. Flat bones consist of two plates of compact bone with spongy bone and marrow between them, e.g. the bones of the calvaria (cranial vault), the sternum and the scapula (except for the thin part of that bone). The marrow space between the outer and inner tables of the flat bones of the skull is known as the diploë (Greek: double).
Most flat bones help to form the walls of cavities (e.g. the cranial cavity); for this reason, most of them are slightly curved rather than flat. Early in life a flat bone consists of a thin layer of compact substance, but marrow (e.g. diploë) appears within it during later childhood, resulting in compact layers on each side of the marrow cavity.
4. Irregular bones have various shapes (e.g. the bones of the face and the vertebrae). The bodies of the vertebrae have some characteristics of long bones.
5. Pneumatic bones contain air cells or sinuses, e.g. the mastoid air cells in the mastoid part of the temporal bone and the paranasal sinuses. Outgrowths of the mucous membrane of the middle ear and of the nasal cavity invade the marrow cavity, producing, respectively, the air cells and the sinuses.
6. Sesamoid bones are rounded or oval bony nodules that develop in certain tendons (e.g. the patella in the tendon of the quadriceps femoris, and the pisiform in the tendon of the flexor carpi ulnaris).
These bones were named sesamoid because of their resemblance to sesame seeds. They are commonly found where tendons cross the ends of long bones in the limbs. They protect the tendon from excessive wear and change the angle of the tendon as it passes to its insertion. This results in a greater mechanical advantage at the joint. The articular surface of a sesamoid bone is covered with articular cartilage, while the rest is embedded in the tendon.
7. Accessory bones develop when an additional ossification centre appears and gives rise to a bone, or when one of the usual centres fails to fuse with the main bone. The separate part of the bone gives the impression of a supernumerary bone.
Accessory bones are common in the foot, and it is important to be aware of them so that they are not mistaken for bone chips or fractures on radiographs.
8. Heterotopic bones are those that do not belong to the main skeleton but may develop in certain soft tissues and organs as a result of disease. This type of bone can form in scars, and the chronic inflammation characteristic of tuberculosis can produce bone tissue in the lung.
BONE MARKINGS
The surface of bones is not smooth and polished, nor even in contour, except in the areas covered by cartilage and where tendons, blood vessels, and nerves pass in grooves (e.g., the intertubercular groove at the head of the humerus and the groove for the radial nerve on its shaft).
Bones display a variety of elevations, depressions, and openings. Markings are found on dried bones wherever tendons, ligaments, and fascia were attached; the attachment of the fleshy fibers of a muscle produces no marking on a bone.
Bone markings begin to become prominent during puberty (12 to 16 years) and grow steadily more pronounced during adulthood. The markings are given names to help distinguish them.
Elevations. The various types of elevation on bones are listed below in order of prominence. Examine each type on a skeleton.
A linear or slightly raised elevation is referred to as a line (e.g., the superior nuchal line of the occipital bone and the medial supracondylar line). Very prominent lines are called crests (e.g., the iliac crest and the pubic crest).
A rounded elevation is called (1) a tubercle (a small raised eminence); (2) a protuberance (a swelling or bump, e.g., the external occipital protuberance); (3) a trochanter (a large blunt elevation, e.g., the greater trochanter of the femur); (4) a tuberosity or tuber (a large elevation); or (5) a malleolus (an elevation resembling the head of a hammer).
A pointed elevation or projecting part is called a spine (e.g., the anterior superior iliac spine) or a process (e.g., the spinous process of a vertebra). Facets (Fr. little faces) are small, smooth, flat areas or surfaces of a bone, especially where it articulates with another bone. Articular facets are covered with hyaline cartilage (e.g., the facets of a vertebra).
A rounded articular area of a bone is called a head (e.g., the head of the humerus) or a condyle (e.g., the lateral condyle of the femur). An epicondyle is a prominent process just above a condyle.
Depressions. Small hollows in bones are described as fossae, whereas long, narrow depressions are referred to as grooves. An indentation in the margin of a bone is called a notch, e.g., the acetabular notch.
Foramina and Canals. When a notch is closed off by a ligament or by bone so as to form a perforation or hole, it is called a foramen (e.g., the foramen magnum). A foramen that has length is called a canal (e.g., the facial canal). A canal has an opening at each end. A meatus (a passage) is a canal that enters a structure but does not pass through it, e.g., the external acoustic meatus or auditory canal.
DEVELOPMENT OF BONES
Bones develop from condensations of mesenchyme (embryonic connective tissue). The mesenchymal model of a bone that forms during the embryonic period may undergo direct ossification, called intramembranous ossification (membranous bone formation), or it may be replaced by a cartilage model; the latter becomes ossified by intracartilaginous ossification (endochondral bone formation).
In short, bone replaces either membrane or cartilage. The process of ossification is similar in both cases, and the final histological structure of the bone is identical.
Intramembranous ossification occurs rapidly and takes place in bones that are urgently needed for protection (the flat bones of the calvaria, or cranial vault). Intracartilaginous ossification, which occurs in most bones of the skeleton, is a much slower process.
Development of Long Bones. The first indication of ossification in the cartilaginous model of a long bone is visible near the center of the future shaft and is called the primary ossification center. Primary centers appear at different times in different developing bones, but most ossification centers appear between the 7th and 12th weeks of prenatal life. Virtually all primary centers are present at birth, by which time ossification from the primary center has almost reached the ends of the cartilage model of the long bone.
The part of a bone formed from a primary center is called the diaphysis.
Around birth, additional ossification centers may appear in the cartilaginous ends of a long bone. These are referred to as epiphyses, or secondary ossification centers.
Most secondary ossification centers appear after birth. The parts of a bone formed from secondary centers are called epiphyses. The epiphyses, or secondary ossification centers, of the bones at the knee are the first to appear and may be present at birth.
The cartilaginous epiphyses undergo the same changes that occur in the diaphysis. As a result, the body of the bone becomes capped at each end by bone, the epiphyses, which develop from the secondary ossification centers.
The part of the diaphysis nearest the epiphysis is referred to as the metaphysis.
The diaphysis grows in length by proliferation of cartilage in the metaphysis. To allow growth in length to continue until the adult length of a bone is reached, the bone formed from the primary ossification center in the diaphysis does not fuse with that formed from the secondary centers in the epiphyses until the bone reaches its adult size. During the growth of a bone, a plate of cartilage known as the growth plate, or epiphysial cartilage plate, is interposed between the diaphysis and the epiphysis. For brevity, it is often called the epiphysial plate.
The diaphysis consists of a hollow tube of compact bone surrounding the marrow cavity, whereas the epiphyses and metaphyses consist of spongy bone covered by a thin layer of compact bone. The compact bone on the articular surfaces of the epiphyses is soon covered by hyaline cartilage called articular cartilage.
During the first two postnatal years, secondary ossification centers appear in the epiphyses that are exposed to pressure (e.g., at the knee and hip). Such centers, usually referred to as pressure epiphyses, are located at the ends of long bones, where they are subjected to pressure from the opposing bones at the joint they form.
Some secondary ossification centers ossify parts of a bone associated with the attachment of muscles and strong tendons. These centers are usually called traction epiphyses (e.g., the tubercles of the humerus). Such epiphyses are subjected to traction rather than to pressure.
The epiphysial cartilage plates are eventually replaced by bone developing on each of their sides, diaphysial and epiphysial. When this occurs, growth of the bone ceases and the diaphysis fuses with the epiphyses by bony union, or synostosis.
The bone formed at the site of the epiphysial cartilage plate is particularly dense and remains recognizable on radiographs of children and adolescents. Knowing this detail prevents confusion with fracture lines.
In general, the epiphysis of a long bone whose ossification center appeared last is the first to fuse with the diaphysis. When an epiphysis forms from more than one center (e.g., the proximal end of the humerus), the centers fuse with one another before the epiphysis unites with the diaphysis.
The changes in developing bones are clinically important. Physicians and dentists, especially radiologists, pediatricians, orthodontists, and orthopedic surgeons, must be knowledgeable about bone growth.
The time of appearance of the various epiphyses varies with chronological age. Because good reference tables are available, there is no point in memorizing the dates of appearance and disappearance of the ossification centers of all the bones.
A radiologist determines a person's bone age by studying the ossification centers. Two criteria are used: (1) the appearance of calcified material in the diaphysis and/or the epiphyses, the time of which is specified for each epiphysis and diaphysis of each bone and for each sex; and (2) the disappearance of the dark line representing the epiphysial cartilage plate, which indicates that the epiphysis has fused with the diaphysis and occurs at specific times for each epiphysis.
Fusion of the epiphyses with the diaphysis occurs 1 to 2 years earlier in females than in males. Determination of bone age is often used to establish the approximate age of human skeletal remains in medicolegal cases.
Some diseases accelerate and others slow the times of ossification in comparison with the individual's chronological age. The growing skeleton is sensitive to relatively mild and transient illnesses and to periods of malnutrition.
Proliferation of cartilage in the metaphysis slows during starvation and illness, but degeneration of the cartilage cells in the columns continues, producing a dense line of provisional calcification that later becomes bone with thicker trabeculae; these are known as growth arrest lines.
Without a basic knowledge of bone growth and of the appearance of bones on radiographs at various ages, one could mistake an epiphysial cartilage plate for a fracture or interpret the separation of an epiphysis as normal. If you know the patient's age and the location of the epiphyses, these errors can be avoided, especially if you note that the margins of the diaphysis and epiphysis are smoothly curved in the region of the epiphysial cartilage. A fracture leaves a sharp and usually irregular edge of bone. An injury that would cause a fracture in an adult may instead displace an epiphysis in a young person.
Development of Short Bones. The development of short bones is similar to that of the primary center of long bones, and only one short bone, the calcaneus, develops a secondary ossification center.
Blood supply of bones. Bones are richly supplied with blood vessels, which enter them from the periosteum, the fibrous connective tissue membrane that covers them.
Periosteal arteries enter the shaft at numerous points and are responsible for its nourishment; consequently, a bone whose periosteum has been removed will die.
Near the center of the shaft of a long bone, a nutrient artery passes obliquely through the compact bone and reaches the spongy bone and the marrow.
Some pressure epiphyses are largely covered by hyaline articular cartilage and receive their blood supply from the region of the epiphysial cartilage plate. Such epiphyses (e.g., the head of the femur) are almost completely covered by articular cartilage and receive their blood supply from vessels that enter just outside the margin of the articular cartilage.
Loss of the blood supply to an epiphysis or to other parts of a bone results in death of bone tissue, a condition called avascular necrosis (ischemic or aseptic necrosis) of bone. After every fracture, small adjacent areas of bone undergo avascular necrosis. In some fractures, a large fragment of bone may become necrotic if its blood supply has been interrupted. A group of epiphysial disorders in children results from avascular necrosis of unknown etiology; these are referred to as the osteochondroses and usually involve a pressure epiphysis at the end of a long bone.
Nerve supply of bones. The periosteum is rich in sensory nerves, called periosteal nerves, which explains why pain from a bone injury is usually severe. The nerves that accompany the arteries inside bones are probably vasomotor (that is, they cause constriction or dilation of the nutrient vessels).
ARCHITECTURE OF BONES
The structure of a bone varies according to its function. In long bones designed for rigidity and for the attachment of muscles and ligaments, the amount of compact bone is greatest near the middle of the shaft, where the bone is liable to bend. Architecturally, the compact substance of the shaft provides the strength required for weight bearing. In addition, as described previously, long bones have elevations (lines, crests, tubercles, and tuberosities) that serve as buttresses in the areas where powerful muscles attach.
Living bones have some elasticity (flexibility) and great rigidity (hardness). The elasticity derives from their organic substance (fibrous tissue), and the rigidity from their plates and tubes of inorganic calcium phosphate. The salts, which account for about 60% of the weight of a bone, are deposited in the matrix of collagen fibers.
Bones are like hardwood in resisting tension and like concrete in resisting compression.
Within the outer shell of compact bone, particularly at the ends of long bones, there is spongy bone, which looks rather like wire mesh. The spongy bone is not arranged haphazardly; it is composed of tubes and plates arranged like struts along the lines of pressure and tension.
The architecture of the bony trabeculae is peculiar to each person, a fact of value in the identification of skeletal remains and an important part of forensic medicine.
Functions of bones. The principal functions of bones are to provide:
1. Protection, by forming the rigid walls of cavities (e.g., the cranial cavity) that contain vital structures (e.g., the brain).
2. Support (e.g., the rigid framework for the body).
3. A mechanical basis for movement, by providing attachments for muscles and serving as levers for the muscles that produce the movements permitted at the joints.
4. Formation of blood cells. The red bone marrow at the ends of the long bones, in the sternum, ribs, and vertebrae, and in the diploë of the flat bones of the skull is the site of development of red blood cells, some lymphocytes, granulocytes, and blood platelets.
5. Storage of salts. The calcium, phosphorus, and magnesium salts in bones provide a mineral reserve for the body.
2. Joints
The articular system consists of joints, or articulations, where two or more bones come into relation with one another at their region of contact. The study of joints is called arthrology.
Joints are classified according to the type of material that holds them together (e.g., fibrous, cartilaginous, and synovial joints).
FIBROUS JOINTS
The bones involved in these joints are united by fibrous tissue. The amount of movement permitted at the joint depends on the length of the fibers uniting the bones.
Sutures. The bones are separate but are held together by a thin layer of fibrous tissue. The union is extremely firm, and there is little or no movement between the bones.
Sutures occur only in the skull; for this reason they are sometimes called joints "of the cranial type". The edges of the bones may overlap (squamous suture) or interlock (serrate suture).
In the skull of a newborn, the growing bones of the calvaria are not in complete contact with one another. Where contact is lacking, the sutures are wide areas of fibrous tissue known as fontanelles, or fonticuli. The terms fontanelle and fonticulus mean "little springs or fountains". They probably received this name because in earlier times openings were made at these points in the skulls of infants whose fontanelles bulged as a result of increased intracranial pressure. In such cases, the cerebrospinal fluid (CSF) and blood that spurted out probably resembled a fountain of water.
The most prominent fonticulus is the anterior one, which lay people call the "soft spot". The separation of the bones at the sutures and fonticuli of the newborn's skull allows them to overlap during birth, easing the passage of the head through the birth canal. The anterior fonticulus is usually no longer present after 18 to 24 months of age (that is, it is no wider than the sutures of the skull). Union of the bones at the pterion, located at the site of the anterolateral fonticulus, has occurred by six years of age in about 50% of children.
Fusion of the bones across the suture lines (synostosis) begins on the internal surface of the calvaria, or cranial vault, early in the second decade and progresses throughout life. Almost all the sutures of the skull are obliterated in very old people.
Syndesmosis. In this type of fibrous joint, the two bones are united by a sheet of fibrous tissue. The tissue may be a ligament or an interosseous fibrous membrane; e.g., the interosseous borders of the radius and ulna are united by the interosseous membrane of the forearm.
In syndesmoses, slight to considerable movement is possible. The degree of movement depends on the distance between the bones and on the flexibility of the fibrous tissue. The interosseous membrane between the radius and ulna in the forearm is broad and flexible enough to permit considerable movement, as occurs during pronation and supination of the forearm.
CARTILAGINOUS JOINTS
The bones involved in these joints are united by cartilage.
Primary Cartilaginous Joints (Synchondroses). The bones are joined by hyaline cartilage, which permits slight bending early in life.
Synchondroses are usually temporary, existing, for example, during the period of endochondral development of a long bone. As described previously, an epiphysial cartilage plate separates the ends (epiphyses) and the body (diaphysis) of a long bone.
A cartilaginous joint of the synchondrosis type permits growth in the length of the bone. When full growth is reached, the cartilage is converted into bone and the epiphysis fuses with the diaphysis; that is, the synchondrosis is converted into a synostosis.
Other synchondroses are permanent, e.g., where the costal cartilage of the first rib joins the manubrium of the sternum.
Secondary Cartilaginous Joints (Symphyses). The articular surfaces of the bones at these joints are covered with hyaline cartilage, and these cartilaginous surfaces are united by fibrous tissue and/or fibrocartilage.
Symphyses are strong, slightly movable joints. The anterior intervertebral joints, with their intervertebral discs, are classified as symphyses. They are designed for strength and shock absorption. The bodies of the vertebrae are connected by longitudinal ligaments and by the anuli fibrosi of the intervertebral discs. Cumulatively, these fibrocartilaginous discs give the vertebral column considerable flexibility.
Other examples of symphyses are the pubic symphysis between the bodies of the pubic bones and the manubriosternal joint between the manubrium and the body of the sternum.
During pregnancy, the pubic symphysis and the other joints of the pelvis undergo changes that permit freer movement. The ligaments associated with these joints are believed to be "softened" by the hormone relaxin. The changes produced in the joints allow the pelvic cavity to enlarge, which facilitates childbirth.
SYNOVIAL JOINTS
Synovial joints, the most common and functionally most important type, normally provide free movement between the bones they unite.
The four typical characteristics of a synovial joint are that it has (1) an articular cavity, (2) articular cartilage, (3) a synovial membrane
f:\12000 essays\sciences (985)\Computer\Aol is it for me .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You have probably heard of the Internet but weren't really sure it was for you. You thought about it, but after all, it costs so much, and things like pornography and improper language are everywhere, right? Wrong! Perhaps I can convince you that America Online will be worth your time and money.
One of the main reasons that people don't go online is that they think it costs too much. America Online, or AOL, doesn't really cost all that much. When you sign on you get from 10 to 50 hours free, depending on the software that you download. Once you run out of free hours you may choose to stay online for a monthly fee. This monthly fee can be either $9.95 or $19.95, depending on how many hours you plan on using. If you are concerned that your children will visit web pages you would prefer they don't, you can put on parental guards that don't allow them to visit those pages. If you aren't familiar with web pages, they are basically pages, somewhat like ads, that you look at containing information about a company, person, or product. You can also sign your child on as a child or teen, which keeps them out of restricted areas. Perhaps your main concern is people finding out things that you don't want them to know. They only know as much as you tell them. If someone asks for your password, credit card number, or any other personal information, you don't have to give it to them. When you first sign on, AOL staff will ask for things like your name, age, address, phone number, and your credit card or checking account number. These details remain confidential and are used only for billing purposes.
If anyone asks for personal information, you can easily report them to AOL. When someone is reported, they are either warned or kicked off the service. You can also report people who swear or use any kind of offensive words. Many of the chat rooms are watched by "online hosts", people who belong to AOL. These "guards" make sure nothing bad happens in chat rooms. You can be sure that there are AOL staff in the romance rooms especially, because that is where the most foul and vulgar language takes place. If you are too young to be in a room, they will tell you to leave and go to a room where people your age belong.
The world "online" also offers thousands of Reference sources like Groliers Multimedia Encyclopedia and over 100 magazines. These Magazines alone are of great value to anyone who enjoys reading magazines. These References will tell you almost anything, but if you wanted to know about something that was not in these sources, then you can leave the New York Public Library's librarian a message. This person will respond within the week to your question.
Finally, with your America Online subscription you get unlimited e-mail. What is e-mail, you ask? Well, e-mail stands for electronic mail. It is a way to send letters to anyone in the world who is hooked up to the Internet or another online service. This mail is received almost instantly, within a few seconds. This way you could send letters to a pen pal in Egypt: instead of waiting a month or more, he will receive them the same day.
Having America Online opens you up to a whole new world of information and people. America Online provides an inexpensive yet secure place for work, education, and recreation. A family has so much to gain and little to lose by signing on today.
f:\12000 essays\sciences (985)\Computer\Application Software.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
John Hassler
Professor C. Mason
Computer Information systems 204
September 13, 1996
Application Software
Computer systems contain both hardware and software. Hardware is any tangible item in a computer system, like the system unit, keyboard, or printer. Software, or a computer program, is the set of instructions that directs the computer to perform a task. Software falls into one of two categories: system software and application software. System software controls the operation of the computer hardware, whereas application software enables a user to perform tasks. Three major types of application software on the market today for personal computers are word processors, electronic spreadsheets, and database management systems (Little and Benson 10-42).
A word processing program allows a user to efficiently and economically create professional looking documents such as memoranda, letters, reports, and resumes. With a word processor, one can easily revise a document. To improve the accuracy of one's writing, word processors can check the spelling and the grammar in a document. They also provide a thesaurus to enable a user to add variety and precision to his or her writing. Many word processing programs also provide desktop publishing features to create brochures, advertisements, and newsletters.
An electronic spreadsheet enables a user to organize data in a fashion similar to a paper spreadsheet. The difference is the user does not have to perform calculations manually; electronic spreadsheets can be instructed to perform any computation desired. The contents of an electronic spreadsheet can be easily modified by the user. Once the data is modified, all calculations in the spreadsheet are recomputed automatically. Many electronic spreadsheet packages also enable a user to graph the data in his or her spreadsheet (Wakefield 98-110).
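As a rough illustration of the automatic recalculation described above, the following Python sketch stores a formula for each result cell and recomputes every dependent value whenever a data cell changes. The cell names and formulas are invented for this example; no particular spreadsheet product works exactly this way.

    # Minimal sketch of spreadsheet-style recalculation (hypothetical cells).
    cells = {"A1": 100.0, "A2": 250.0, "A3": 75.0}          # raw data cells
    formulas = {
        "B1": lambda c: c["A1"] + c["A2"] + c["A3"],         # like SUM(A1:A3)
        "B2": lambda c: (c["A1"] + c["A2"] + c["A3"]) / 3,   # like AVERAGE(A1:A3)
    }

    def recalculate():
        """Recompute every formula cell from the current data cells."""
        return {name: rule(cells) for name, rule in formulas.items()}

    print(recalculate())   # results computed from the original data
    cells["A2"] = 300.0    # the user edits one value...
    print(recalculate())   # ...and every dependent result is recomputed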
A database management system (DBMS) is a software program that allows a user to efficiently store a large amount of data in a centralized location. Data is one of the most valuable resources of any organization. For this reason, users desire that data be organized and readily accessible in a variety of formats. With a DBMS, a user can easily store data, retrieve data, modify data, analyze data, and create a variety of reports from the data (Aldrin 25-37).
Many organizations today have all three of these types of application software packages installed on their personal computers. Word processors, electronic spreadsheets, and database management systems make users' tasks more efficient. When users are more efficient, the company as a whole operates more economically and efficiently.
Works Cited
Aldrin, James F. "A Discussion of Database Management Systems." Database Monthly May 1995: 25-37.
Little, Karen A., and Jeffrey W. Benson. Word Processors. Boston: Boyd Publishing Company, 1995.
Wakefield, Sheila A. "What Can an Electronic Spreadsheet Do for You?" PC Analyzer Apr. 1995: 98-110.
f:\12000 essays\sciences (985)\Computer\Applications of shit in the computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Argumentative Essay
In the summer of 1996, Gwen Jacobs enjoyed a topless stroll during which she was seen by a local O.P.P. officer, apprehended, and subsequently charged with indecent exposure. Gwen Jacobs pleaded not guilty in court and won the right to go topless in Ontario. This incident brought up an excellent question: should women be allowed to go topless on public beaches and in other public areas? The answer is strictly no: women should not be allowed to go topless anywhere outside of their own homes.
One of the many reasons why I believe that women should not be allowed to go topless is the safety of women. Men and boys have, in recent years, been using short, tight skirts and shirts as an excuse for rape or date rape. Men have said that the girl was wearing a tight shirt and short skirt and that it was obvious that she was easy and wanted the attention. This statement leads me to my next point.
The average human being, upon first contact with a stranger, bases his initial impression of that person solely on the person's appearance. This is only natural, as the only thing that we know about this stranger is what we see of them the first time we meet. We are all aware of the labels "Preppy", "Jockish", "Skater", "Slutty", etc. This final label, "Slutty", is interpreted by 90 percent of North Americans as a tight skirt and tight tank top, which happens to be the usual ensemble of a prostitute. This first impression of a girl in nothing but a skirt and a bare chest will no doubt become the new version of a "slut", a girl that wants it.
My second point concerns the kind of questions a mother will be asked by her son when he sees a half-nude woman walking down the street. The first question that this child will ask is, "Why do these women have no shirt on and you do?" Your reply will be, "Well... ahhh... go talk to your father." This dilemma will no doubt come about as these and other questions about the sexual nature of the body are put forth by young children, questions that you as a parent do not feel should be answered truthfully for such a young child.
My third point begins thousands of years ago, when man first walked the earth. When man first walked, he hunted and his wife (unclothed) cleaned the game and took care of the young. As the centuries have progressed, women have stepped forth into a new era of equal rights. We've seen the first women doctors, astronauts, business owners and many other firsts in numerous professions. Women have made giant leaps when it comes to respect from men in their professional fields. This respect, which women have been fighting for over the past century, is on the verge of collapse. Women seem to be taking this new law allowing them to go topless to an extreme: walking their dogs, walking on the beach and strolling through public places with no tops on. This display of nudity, in the average person's eyes, whether they admit to it or not, will cause men to look down again on women. If, for example, the first American woman astronaut (Sally Ride) were to start going topless in public places, it would be plastered on the front page of every newspaper. This in turn would lead to her fellow colleagues looking down on her. This would be a giant step backwards with respect to equal rights for women.
Following the changes to this law allowing women to go topless, our cities will slowly begin to diverge into places that encourage nudity and places that do not. Our economy will begin to collapse, as store owners appalled by this nudity will be forced to close their stores and move if this nudity surrounds them. This also applies to stores that want to have workers who go topless; they will be forced to relocate to places of nudity. As this happens, our cities will slowly become two-sided and our economy's stability will collapse beneath our feet. An excellent example of this situation is taking place in Quebec. A law in Quebec states that a woman may work in nothing less than lingerie. So a Quebec barber shop run by a well-endowed woman decided to charge an extra ten dollars per haircut, and she'd remove her shirt so customers could watch her cut their hair in just a bra. She also charged an extra fifteen to remove her bottoms so she had only her underwear on. This new business skyrocketed, and there are currently 15 of these hairdressers in Quebec. The neighborhoods surrounding these barbershops are appalled by what is going on, and many people have relocated their families away from this nudity.
In conclusion, to the question of whether women should be allowed to go topless in public places, it has been clearly shown that women should not be allowed to go topless anywhere outside of their own homes.
f:\12000 essays\sciences (985)\Computer\Artificial Inteligence.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ABSTRACT
Current neural network technology is the most progressive of the artificial intelligence
systems today. Applications of neural networks have made the transition from laboratory
curiosities to large, successful commercial applications. To enhance the security of automated
financial transactions, current technologies in both speech recognition and handwriting
recognition are likely ready for mass integration into financial institutions.
RESEARCH PROJECT
TABLE OF CONTENTS
Introduction
Purpose
Source of Information
Authorization
Overview
The First Steps
Computer-Synthesized Senses
Visual Recognition
Current Research
Computer-Aided Voice Recognition
Current Applications
Optical Character Recognition
Conclusion
Recommendations
Bibliography
INTRODUCTION
· Purpose
The purpose of this study is to determine additional areas where artificial intelligence
technology may be applied for positive identifications of individuals during financial
transactions, such as automated banking transactions, telephone transactions, and home
banking activities. This study focuses on academic research in neural network technology.
This study was funded by the Banking Commission in its effort to deter fraud.
Overview
Recently, the thrust of studies into practical applications for artificial intelligence
has focused on exploiting the expectations of both expert systems and neural network
computers. In the artificial intelligence community, the proponents of expert systems
have approached the challenge of simulating intelligence differently than their counterpart
proponents of neural networks. Expert systems contain the coded knowledge of a human expert
in a field; this knowledge takes the form of "if-then" rules. The problem with this approach
is that people don't always know why they do what they do. And even when they can express this
knowledge, it is not easily translated into usable computer code. Also, expert systems are
usually bound by a rigid set of inflexible rules which do not change with experience gained
by trial and error. In contrast, neural networks are designed around the structure of a
biological model of the brain. Neural networks are composed of simple components called
"neurons" each having simple tasks, and simultaneously communicating with each other by
complex interconnections. As Herb Brody states, "Neural networks do not require an explicit
set of rules. The network - rather like a child - makes up its own rules that match the
data it receives to the result it's told is correct" (42). Impossible to achieve in expert
systems, this ability to learn by example is the characteristic of neural networks that makes
them best suited to simulate human behavior. Computer scientists have exploited this system
characteristic to achieve breakthroughs in computer vision, speech recognition, and optical
character recognition. Figure 1 illustrates the knowledge structures of neural networks
as compared to expert systems and standard computer programs. Neural networks restructure
their knowledge base at each step in the learning process.
This paper focuses on neural network technologies which have the potential to increase security
for financial transactions. Much of the technology is currently in the research phase and has
yet to produce a commercially available product, such as visual recognition applications.
Other applications are a multimillion dollar industry and the products are well known, like
Sprint Telephone's voice-activated telephone calling system. In the Sprint system the neural
network positively recognizes the caller's voice, thereby authorizing activation of his
calling account.
The First Steps
The study of the brain was once limited to the study of living tissue. Any attempts at an
electronic simulation were brushed aside by the neurobiologist community as abstract conceptions
that bore little relationship to reality. This was partially due to the over-excitement in
the 1950's and 1960's for networks that could recognize some patterns, but were limited in
their learning abilities because of hardware limitations. In the 1990's computer simulations
of brain functions are gaining respect as the simulations increase their abilities to predict
the behavior of the nervous system. This respect is illustrated by the fact that many
neurobiologists are increasingly moving toward neural network type simulations. One such
neurobiologist, Sejnowski, introduced a three-layer net which has made some excellent predictions
about how biological systems behave. Figure 2 illustrates this network consisting of three
layers, in which a middle layer of units connects the input and output layers. When the network
is given an input, it sends signals through the middle layer which checks for correct output.
An algorithm used in the middle layer reduces errors by strengthening or weakening connections
in the network. This process, in which the network learns to adapt to changing conditions,
is called back-propagation. The value of Sejnowski's network is illustrated by an experiment
by Richard Andersen at the Massachusetts Institute of Technology. Andersen's team spent years
researching the neurons monkeys use to locate an object in space (Dreyfus and Dreyfus 42-61).
Andersen decided to use a neural network to replicate the findings from this research. His team
"trained" the neural network to locate objects by retina and eye position, then observed
the middle layer to see how it responded to the input. The result was nearly identical to what
they found in their experiments with monkeys.
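The three-layer, error-reducing arrangement described above can be sketched in a few lines of Python. This is only an illustrative toy, not Sejnowski's actual network: the layer sizes, learning rate, and XOR-style training data are assumptions made for the example. It shows the back-propagation idea of strengthening or weakening connections until the output errors shrink.

    import numpy as np

    # Toy three-layer network trained by back-propagation (illustrative only).
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)                # desired outputs

    W1 = rng.normal(size=(2, 4))   # input layer  -> middle (hidden) layer
    W2 = rng.normal(size=(4, 1))   # middle layer -> output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        hidden = sigmoid(X @ W1)               # forward pass through the middle layer
        output = sigmoid(hidden @ W2)
        error = y - output                     # how wrong the output layer is
        d_out = error * output * (1 - output)  # gradient at the output units
        d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
        W2 += 0.5 * hidden.T @ d_out           # strengthen or weaken connections
        W1 += 0.5 * X.T @ d_hid

    print(np.round(output, 2))   # approaches the desired outputs as errors shrink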
Computer-Synthesized Senses
· Visual Recognition
The ability of a computer to distinguish one customer from another is not yet a reality. But recent breakthroughs in neural network visual technology are bringing us closer to the time when computers will positively identify a person.
· Current Research
Studying the retina of the eye is the focus of research by two professors at the California
Institute of Technology, Misha A. Mahowald and Carver Mead. Their objective is to electronically
mimic the function of the retina of the human eye. Previous research in this field consisted
of processing the absolute value of the illumination at each point on an object, and required
a very powerful computer (Thompson 249-250). The analysis required that measurements be taken over
a massive number of sample locations on the object, and so it required the computing power of a
massive digital computer to analyze the data.
The professors believe that to replicate the function of the human retina they can use a neural
network modeled with a similar biological structure of the eye, rather than simply using massive
computer power. Their chip utilizes an analog computer which is less powerful than the previous
digital computers. They compensated for the reduced computing power by employing a far more
sophisticated neural network to interpret the signals from the electronic eye. They modeled the
network in their silicon chip based on the top three layers of the retina which are the best
understood portions of the eye (250). These are the photoreceptors, horizontal cells, and bipolar cells.
The electronic photoreceptors, which make up the first layer, are like the rod and cone cells in the eye.
Their job is to accept incoming light and transform it into electrical signals. In the second
layer, horizontal cells use a neural network technique by interconnecting the horizontal cells
and the bipolar cells of the third layer. The connected cells then evaluate the estimated
reliability of the other cells and give a weighted average of the potentials of the cells
around them. Nearby cells are given the most weight and distant cells less weight (251).
This technique is very important to this process because of the dynamic nature of image
processing. If the image is accepted without testing its probable accuracy, the likelihood
of image distortion would increase as the image changed.
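A crude software analogue of that distance-weighted averaging, assuming a one-dimensional row of photoreceptor readings and an invented inverse-distance weighting, might look like the Python sketch below. The real chip does this in analog hardware, not code, so this is only a loose picture of the idea.

    # Each cell's smoothed potential is a weighted average of its neighbors,
    # with nearby cells weighted most heavily and distant cells least (sketch only).
    readings = [0.20, 0.22, 0.95, 0.24, 0.21, 0.19]   # hypothetical photoreceptor outputs

    def smoothed(cells, index, reach=2):
        total, weight_sum = 0.0, 0.0
        for j in range(max(0, index - reach), min(len(cells), index + reach + 1)):
            weight = 1.0 / (1 + abs(index - j))       # closer neighbors count more
            total += weight * cells[j]
            weight_sum += weight
        return total / weight_sum

    print([round(smoothed(readings, i), 3) for i in range(len(readings))])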
The silicon chip that the two professors developed contains about 2,500 pixels (photoreceptors
and their associated image-processing circuitry). The chip has circuitry that allows a professor
to focus on each pixel individually or to observe the whole scene on a monitor. The professors
stated in their paper, "The behavior of the adaptive retina is remarkably similar to that of
biological systems" (qtd in Thompon 251).
The retina was first tested by changing the light intensity of just one single pixel while the
intensity of the surrounding cells was kept at a constant level. The design of the neural network
caused the response of the surrounding pixels to react in the same manner as in biological retinas.
They state that, "In digital systems, data and computational operations must be converted into
binary code, a process that requires about 10,000 digital voltage changes per operation.
Analog devices carry out the same operation in one step and so decrease the power consumption
of silicon circuits by a factor of about 10,000" (qtd. in Thompson 251).
Besides validating their neural network, the accuracy of this silicon chip demonstrates the usefulness
of analog computing, despite the assumption that only digital computing can provide the accuracy
necessary for the processing of information.
As close as these systems come to imitating their biological counterparts, they still have a long
way to go. For a computer to identify more complex shapes, e.g., a person's face, the professors
estimate the requirement would be at least 100 times more pixels as well as additional circuits
that mimic the movement-sensitive and edge-enhancing functions of the eye. They feel it is possible
to achieve this number of pixels in the near future. When it does arrive, the new technology will
likely be capable of recognizing human faces.
Visual recognition would have an undeniable effect on reducing crime in automated financial transactions.
Future technology breakthroughs will bring visual recognition closer to the recognition of individuals,
thereby enhancing the security of automated financial transactions.
· Computer-Aided Voice Recognition
Voice recognition is another area that has been the subject of neural network research.
Researchers have long been interested in developing an accurate computer-based system capable
of understanding human speech as well as accurately identifying one speaker from another.
· Current Research
Ben Yuhas, a computer engineer at Johns Hopkins University, has developed a promising system for
understanding speech and identifying voices that utilizes the power of neural networks. Previous attempts
at this task have yielded systems that are capable of recognizing up to 10,000 words, but only when each
word is spoken slowly in an otherwise silent setting. This type of system is easily confused by
background noise (Moyne 100).
Ben Yuhas' theory is based on the notion that understanding human speech is aided, to some small degree,
by reading lips while trying to listen. The emphasis on lip reading is thought to increase as the
surrounding noise levels increase. This theory has been applied to speech recognition by adding a
system that allows the computer to view the speaker's lips through a video analysis system while
hearing the speech.
The computer, through the neural network, can learn from its mistakes through a training session. Looking
at silent video stills of people saying each individual vowel, the network developed a series of
images of the different mouth, lip, teeth, and tongue positions. It then compared the video images
with the possible sound frequencies and guessed which combination was best.
Yuhas then combined the video recognition with the speech recognition systems and input a video frame
along with speech that had background noise. The system then estimated the possible sound frequencies
from the video and combined the estimates with the actual sound signals. After about 500 trial runs the
system was as proficient as a human looking at the same video sequences.
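In spirit, that combination step is a weighted merge of two estimates of the same quantity. The Python sketch below is a loose illustration with made-up numbers and weights, not Yuhas' actual algorithm: as the background noise rises, the estimate derived from the video frame is trusted more than the acoustic one.

    # Fuse a noisy acoustic estimate with a video-derived estimate (illustration only).
    def fuse(audio_estimate, video_estimate, noise_level):
        """noise_level in [0, 1]: 0 = quiet room, 1 = very noisy background."""
        video_weight = noise_level            # lean on lip reading as noise rises
        audio_weight = 1.0 - noise_level
        return audio_weight * audio_estimate + video_weight * video_estimate

    # Hypothetical formant-frequency estimates (Hz) for one vowel frame:
    print(fuse(audio_estimate=740.0, video_estimate=700.0, noise_level=0.2))  # mostly audio
    print(fuse(audio_estimate=740.0, video_estimate=700.0, noise_level=0.8))  # mostly video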
This combination of speech recognition and video imaging substantially increases the security factor by
not only recognizing a large vocabulary, but also by identifying the individual customer using the system.
· Current Applications
Laboratory advances like Ben Yuhas' have already created a steadily increasing market in speech recognition.
Speech recognition products are expected to break the billion-dollar sales mark this year for the first time.
Only three years ago, speech recognition products sold less than $200 million (Shaffer 238).
Systems currently on the market include voice-activated dialing for cellular phones, made secure by their
recognition and authorization of a single approved caller. International telephone companies such as Sprint
are using similar voice recognition systems. Integrated Speech Solution in Massachusetts is investigating
speech applications which can take orders for mutual funds prospectuses and account activities (239).
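The voice-authorization idea behind those products can be pictured as comparing a caller's voice features against a stored template and accepting the caller only if the two are close enough. The sketch below is an invented, greatly simplified stand-in: real systems use far richer acoustic features than four numbers and a simple Euclidean distance.

    import math

    # Hypothetical enrolled voiceprint: a few acoustic features for the approved caller.
    enrolled = [12.1, 7.4, 3.3, 9.0]

    def authorize(sample, template=enrolled, threshold=1.5):
        """Accept the caller only if the new sample is close to the enrolled template."""
        distance = math.sqrt(sum((s - t) ** 2 for s, t in zip(sample, template)))
        return distance <= threshold

    print(authorize([12.0, 7.5, 3.1, 9.2]))   # True  - sounds like the approved caller
    print(authorize([15.8, 4.1, 6.7, 2.9]))   # False - reject and deny dialing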
· Optical Character Recognition
Another potential area for transaction security is in the identification of handwriting by optical
character recognition systems (OCR). In conventional OCR systems the program matches each letter in a
scanned document with a pre-arranged template stored in memory. Most OCR systems are designed specifically
for reading forms which are produced for that purpose. Other systems can achieve good results with
machine printed text in almost all font styles. However, none of the systems is capable of recognizing
handwritten characters. This is because every person writes differently.
Nestor, a company based in Providence, Rhode Island has developed handwriting recognition products based
on developments in neural network computers. Their system, NestorReader, recognizes handwritten characters
by extracting data sets, or feature vectors, from each character. The system processes the input
representations using a collection of three by three pixel edge templates (Pennisi 23). The system then
lays a grid over the pixel array and pieces it together to form a letter. Then the network discovers
which letter the feature vector most closely matches. The system can learn through trial and error,
and it has an accuracy of about 80 percent. Eventually this system will be able to evaluate all symbols
with equal accuracy.
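The "extract a feature vector, then find the closest known letter" step can be sketched as follows. The tiny feature vectors and reference letters here are invented for illustration; NestorReader's real features come from three-by-three edge templates over a pixel grid and are far larger.

    # Classify a handwritten character by nearest feature-vector match (sketch only).
    known = {
        "A": [0.9, 0.1, 0.8, 0.2],    # hypothetical stored feature vectors
        "B": [0.2, 0.9, 0.3, 0.9],
        "C": [0.1, 0.2, 0.9, 0.1],
    }

    def classify(features):
        def distance(ref):
            return sum((f - r) ** 2 for f, r in zip(features, ref))
        return min(known, key=lambda letter: distance(known[letter]))

    print(classify([0.85, 0.15, 0.75, 0.25]))   # -> 'A'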
It is possible to implement new neural-network based OCR systems into standard large optical systems.
Those older systems, used for automated processing of forms and documents, are limited to reading typed
block letters. When added to these systems, neural networks improve accuracy of reading not only typed
letters but also handwritten characters. Along with automated form processing, neural networks will
analyze signatures for possible forgeries.
Conclusion
Neural networks are still considered emerging technology and have a long way to go toward achieving their
goals. This is certainly true for financial transaction security. But with the current capabilities,
neural networks can certainly assist humans in complex tasks where large amounts of data need to be analyzed.
For visual recognition of individual customers, neural networks are still in the simple pattern matching
stages and will need more development before commercially acceptable products are available. Speech
recognition, on the other hand, is already a huge industry with customers ranging from individual computer
users to international telephone companies. For security, voice recognition could be an added link to the
chain of pre-established systems. For example, automated account inquiry, by telephone, is a popular method
for customers to determine the status of existing accounts. With voice identification of customers, an
option could be added for a customer to request account transactions and payments to other institutions.
For credit card fraud detection, banks have relied on computers to identify suspicious transactions.
In fraud detection, these programs look for sudden changes in spending patterns such as large cash withdrawals
or erratic spending. The drawback to this approach is that there are more accounts flagged for possible
fraud than there are investigators. The number of flags could be dramatically reduced with optical character
recognition to help focus investigative efforts.
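The rule-based flagging being described can be pictured like this. The thresholds and transactions below are invented for the example; a neural network would replace these hand-written rules with patterns learned from past fraud cases.

    # Flag transactions that break simple spending-pattern rules (illustrative thresholds).
    def flag_suspicious(transactions, usual_monthly_spend):
        flags = []
        for t in transactions:
            if t["type"] == "cash_withdrawal" and t["amount"] > 1000:
                flags.append((t, "large cash withdrawal"))
            elif t["amount"] > 3 * usual_monthly_spend:
                flags.append((t, "spending far above the usual pattern"))
        return flags

    history = [
        {"type": "purchase", "amount": 80.0},
        {"type": "cash_withdrawal", "amount": 2500.0},
        {"type": "purchase", "amount": 5200.0},
    ]
    for transaction, reason in flag_suspicious(history, usual_monthly_spend=1200.0):
        print(reason, "->", transaction)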
It is expected that the upcoming neural network chips and add-on boards from Intel will add blinding speed
to the current network software. These systems will even further reduce losses due to fraud by enabling
more data to be processed more quickly and with greater accuracy.
Recommendations
Breakthroughs in neural network technology have already created many new applications in financial transaction
security. Currently, neural network applications focus on processing data such as loan applications, and
flagging possible loan risks. As computer hardware speed increases and as neural networks get smarter,
"real-time" neural network applications should become a reality. "Real-time" processing means the network
processes the transactions as they occur.
In the meantime,
1. Watch for advances in visual recognition hardware / neural networks. When available, commercially produced
visual recognition systems will greatly enhance the security of automated financial transactions.
2. Computer aided voice recognition is already a reality. This technology should be implemented in automated
telephone account inquiries. The feasibility of adding phone transactions should also be considered.
Cooperation among financial institutions could result in secure transfers of funds between banks when
ordered by the customers over the telephone.
3. Handwriting recognition by OCR systems should be combined with existing check processing systems.
These systems can reject checks that are possible forgeries. Investigators could follow up on the
OCR rejection by making appropriate inquiries with the check writer.
BIBLIOGRAPHY
Winston, Patrick. Artificial Intelligence. Menlo Park: Addison-Wesley Publishing, 1988.
Welstead, Stephen. Neural Network and Fuzzy Logic in C/C++. New York: Welstead, 1994.
Brody, Herb. "Computers That Learn by Doing." Technology Review August 1990: 42-49.
Thompson, William. "Overturning the Category Bucket." BYTE January 1991: 249-50+.
Hinton, Geoffrey. "How Neural Networks Learn from Experience." Scientific American September 1992: 145-151.
Dreyfus, Hubert., and Stuart E. Dreyfus. "Why Computers May Never Think Like People." Technology Review January 1986: 42-61.
Shaffer, Richard. "Computers with Ears." FORBES September 1994: 238-239.
f:\12000 essays\sciences (985)\Computer\Artificial Intellegence.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Artificial Intelligence
Identification And Description Of The Issue
Over the years people have wanted robots to become more intelligent. In the past 50 years since computers have been around, the computer world has grown like you wouldn't believe. Robots have now been given jobs that 15 years ago were not considered to be a robot's job. Robots are now part of the huge American government agency, the FBI. They are used to disarm bombs and remove dangerous products from a site without putting human life in danger.
You probably don't think that when you are in a carwash a robotic machine is cleaning your car. The truth is that it is. The robot uses sensors to tell the main computer what temperature the water should be and what style of wash the car is getting, e.g., a Supreme or Normal wash.
Computer robots are being made that learn from their mistakes. Computers are now creating their own programs. In the past there used to be some problems; now they are pretty much foolproof.
The television and film business has to keep up with the demands of the critics sitting back at home; they try to think of new ideas and ways in which to entertain audiences. They have found that robotics interests people, and with that they have made many movies about robotics (e.g., Terminator, Star Wars, Jurassic Park).
Movie characters like the Terminator would walk, talk and do actions by themselves, mimicking a human through the use of Artificial Intelligence.
Movie and television robots don't have Artificial Intelligence (AI) but are made to look like they do. This gives us, the viewers, a sense of what robotics with AI could be like.
Understanding Of The IT Background Of The Issue
Artificial Intelligence means "behavior performed by a machine that would require some degree of intelligence if carried out by a human".
The carwash machine has some intelligence, which enables it to tell the precise temperature of the water it is spraying onto your car. If the water is too hot it could damage the paintwork or even loosen the rubber seals on the car. The definition above shows that AI is present in everyday life, surrounding humans wherever they go.
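That kind of built-in "intelligence" is essentially a sensor feeding a simple control rule. The Python sketch below is an invented illustration (the target temperatures, wash styles, and tolerance are made up), showing how a reading from a temperature sensor could steer the water heater toward a target that depends on the selected wash.

    # Keep wash water near a target temperature chosen by wash style (illustration only).
    TARGETS_C = {"Normal": 40.0, "Supreme": 45.0}   # hypothetical target temperatures

    def adjust_heater(measured_c, style):
        """Return a heater command based on the sensed water temperature."""
        target = TARGETS_C[style]
        if measured_c > target + 2:        # too hot: risk to paintwork and rubber seals
            return "heater off"
        if measured_c < target - 2:
            return "heater on"
        return "hold"

    for reading in (35.0, 44.5, 49.0):
        print(reading, "->", adjust_heater(reading, "Supreme"))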
Alan Turing invented a way in which to test AI. This test is called the Turing Test. A person asks questions of both a computer and a human without seeing either of them. Those conducting the test then have to decide whether it was the human or the computer that gave each set of answers; if they cannot reliably tell, the computer is said to show intelligent behaviour.
Analysis Of The Impact Of The Issue
With the increasing number of robots with AI in the workplace and in everyday life, human jobs are becoming insecure, both now and in the future. If we look at the major car factories of 70 years ago, cars were all handcrafted and machinery was used very little. Today we see companies like TOYOTA producing massive numbers of cars with robots as the workers. This shows that human workmanship is needed less and less.
This is bad for the workers because they will then have no jobs and will be on the unemployment benefit or trying to find a new job.
The advantage of robots is that they don't need a coffee break or time off work. The company owns the machinery and therefore has control over the robot.
Solutions To Problems Arising From The Issue
Some problems arising from the issue would include job loss, due to robots taking the place of humans in the workplace. This could be resolved by educating the workers to do other necessary jobs in the production line. Many of the workers will still keep the jobs that machines can't do.
If robots became too intelligent, this could be a huge disaster for humankind. We might end up being second best to robots. They would have the power to do anything and could eliminate humans from the planet, especially if they are able to programme themselves without human help. I think the chance of this happening is slim, but it is a possibility.
f:\12000 essays\sciences (985)\Computer\Asts ADVANTAGE 9312 Communicator.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AST's Advantage 9312
(Communicator)
Advantage 9312
THE COMMUNICATOR
The Communicator is AST's newest addition to the Advantage line of personal computers. The Advantage 9312 comes with a 28.8 kbps DSVD modem (Digital Simultaneous Voice and Data), a digital camera, and a variety of software programs that let you interact with friends, family, or people on the other side of the planet. This is where the 9312 picked up its nickname, the "Communicator".
The modem and its phone capabilities are what truly set this computer apart from the rest of the personal computers. The 28.8 kbps DSVD modem lets you talk on the phone at the same time you're using the modem features. It also works with the Intel analog video camera, which plugs into a video-capture card in the PC and can transmit pictures at up to 12 frames per second. And it also comes with Intel's Video Phone software, which lets you use that camera to see both yourself and the person at the other end of the line on the screen (this is assuming your conversation partner has the same type of hook-up).
Another advantage of the Communicator is AST's LifeLine, which is a standard component of their technical support. This simultaneous telephone and data support, which makes use of Radish Communications' TalkShop software and the Advantage's DSVD modem, allows technicians to take information directly from your computer as soon as you authorize them to do so. This means no more reading line after line of cumbersome configuration files; instead, the technician can download the files directly from your computer, make the appropriate changes, and return them to your system in just a few seconds.
The Advantage 9312 uses a 166-MHz Pentium processor, which makes it fast and reliable. The 166-MHz processor has a 64-bit data bus and is capable of dynamic branch prediction, data integrity, error detection, multiprocessing support, and performance monitoring. The 166 also has 4GB of physical address space, and its clock speeds range from 60 MHz to 120 MHz. The storage system comes with a 1.44MB, 3.5" floppy drive and a 2.5GB hard drive, which should give the user enough space, but if not, an additional hard drive can be added to the unit. Twenty-four megabytes of EDO RAM allow large programs to be brought up with ease and speed.
There is 256KB of external cache.
The multimedia package has an 8x-speed IDE CD-ROM that's backed up by a 16-bit Sound Blaster card, a 3D sound wavetable, and amplified stereo speakers that are controlled by remote control, along with MPEG video playback and a microphone.
Graphics are supported with 1MB of graphics memory and a 64-bit local bus SVGA graphics card capable of resolutions up to 1280 x 1064 x 16.
Included in the package are one infrared remote control and receiver, a video capture and TV tuner card, and one analog video camera. With all this, the Communicator is sure to be around for a while.
The 9312 has two 32-bit PCI-compatible I/O slots and five 16-bit ISA-compatible I/O slots. The interface has two serial ports, one parallel port, one PS/2-compatible mouse port, one analog VGA connector, and one keyboard port.
A full-duplex speakerphone utilizes the 28.8 kbps DSVD data/fax/voice capability to set this modem apart from the rest, which is a big plus if you're using the Internet a lot and you don't have a dedicated phone line. The DSVD makes it possible to do both at the same time: talk on the telephone and maneuver around on the Internet.
The accessories include a high-resolution, two-button mouse, a Windows 95 keyboard, and thirty-one different software titles, which range from early learning for kids to Lotus and Quicken to Prodigy.
This system is topped off with AST's LifeLine voice and data technical support and a free one-year, on-site warranty.
Technical Specifications
Processor:
166MHz Intel Pentium processor
Cache:
256KB external cache
Memory:
24MB EDO RAM
Storage:
2.5GB hard drive
One 1.44MB, 3.5" floppy drive
Multimedia:
8x-speed IDE CD-ROM
16-bit Sound Blaster card
3D sound with Hardware Wavetable
MPEG playback
Amplified stereo speakers
Microphone
Graphics:
1MB graphics memory
64-bit local bus SVGA graphics
Resolutions up to 1280 x 1064 x 16
Modem:
28.8 kbps DSVD data/ fax/ voice modem
Full duplex speaker phone
I/O:
Two 32-bit PCI compatible slots
Five 16-bit ISA compatible slots
Interfaces:
Two serial ports
One parallel port
One PS/2 compatible mouse port
One analog VGA connector
One keyboard port
Accessories:
High-resolution, two-button mouse
Windows 95 keyboard
f:\12000 essays\sciences (985)\Computer\Battle of the Bytes Windows95 vs Macs.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Battle of the Bytes
Macintosh vs. Windows 95
It used to be that the choice between a Mac and a PC was pretty clear. If
you wanted the more expensive machine that was easier to use and had better
graphics and sound, you bought a Macintosh; if you wanted the cheaper price,
it was the PC. Now it is a much different show. The release of Windows 95 and
the dynamics of the hardware market have changed the equation.
Apple, on the other hand, made great price reductions on many of
its computers last October. You can now buy a reasonably equipped Power
Macintosh at about the same price as a PC with about the same features. This
makes the competition much harder.
Windows 3.x was a great improvement over the earlier versions of
Windows, and of course over DOS, but it still didn't compete with the ease
of use of a Mac. The Windows 95 interface is much better than Windows 3.x. It
borrows some from the Macintosh interface and has improved on it.
Some improvements are the ability to work with folder icons that represent
directories and subdirectories in DOS. Windows 95, unlike the Mac, logically
groups data and resources. A Taskbar menu lets you call up and switch between
any software application at any time. This feature is better than the Mac's
because its use is more obvious. It clearly shows what is running and allows
you to switch programs with a single click of the mouse. Control panels have
been added so you can configure your hardware. There is easy access to
frequently used files. You can give files very long names in Windows 95
instead of the short and strange names that leave you wondering; on Windows
3.x, for example, I could not name a folder "This is stuff for school"
because names had to be a lot shorter. The Help system helps you implement
its suggestions. A multilevel Undo command for all file operations safeguards
your work, something the Macintosh does not have.
Something that Windows 95 has, similar to the Macintosh Alias function, is
shortcut icons. A shortcut calls up a program very easily, instead of making
you search through your hard drive. The Windows 95 shortcuts go beyond the
Mac's: they can refer to data inside documents as well as to files and
folders, and can also call up information on a local area network server or
Internet site. Windows 95's plug and play system allows the operating system
to read what's on your machine and automatically configure new hardware that
you install; however, this only works if the added hardware is designed to
support it, and it will for a majority of hardware.
All these things are major improvements, but hardware and CONFIG.SYS
settings left over from earlier programs can conflict with the new system,
causing your system to crash. This is something all users of Windows 95 will
dread.
Even though Microsoft has made many wonderful changes to Windows,
Apple is working on developing a new operating system, called Copland. It may
beat many of the Windows 95 improvements. Apple is still deciding what new
things to add before the system starts shipping later in the year. Some new
things may be a customizable user interface and features such as drawers,
built-in indexing and automatically updated search templates to help users
manage their hard drives much more efficiently. The biggest improvement is to
be able to network systems from multiple vendors running multiple operating
systems. Like Windows 95, Copland will also have a single in-box for fax,
e-mail, and other communications. The disadvantage of Copland is that it can
only be used on Power Macintoshes.
I would personally go for a PC with Windows 95. I choose it because of
the many programs that can be used on PCs. Whenever I walk into a computer
store, such as Electronics Boutique, half of the store is taken up by programs
that can be used on an IBM-compatible PC. There is only one little shelf for
things that run on Macs. It seems that far more people use PCs; I have met
very few people with a Macintosh. I can bring many things from my computer to
theirs and the other way around without worrying, "What if I need to find
this for a Mac?"
Schools should use Windows 95 PCs because of the many more
educational programs available for PCs. Since the release of Windows 95,
many companies now make programs for the PC. It may be a long time, if ever,
before they decide to make them for a Mac. Plus, since many people have IBM
PCs at home, students can bring their work to and from school. If everyone
had the same kind of computer on a network, students could go into the
computers at schools all over the world to use programs there.
So now that the quality of the computers is about equal, it is very hard
to make your decision. For those who are not computer literate, the best
thing to do is to go for the Mac because of how easy one is to use. This
means you get less choice of programs in a store, and if you go online, many
people will be using something different from you, so you have no idea what
they are talking about. If you know how a computer is basically used, a
Windows 95 PC will be no problem. It doesn't take that long to learn. You
will have a bigger choice of programs and may be able to do more things with
other people who have a computer. It comes down to this choice. Much of the
choosing will fall to schools, which use many Macintosh computers and provide
much of Apple's revenue. It is only recently that companies that made
software for PCs got interested in making programs for educational purposes.
So if you are deciding on a computer, I leave you to decide this: Windows
95 or Macintosh, the choice is yours.
I feel that this is the best journal entry I have ever written. It informs
the reader a great deal about the subject, and it helps you make a decision
that is very important if you decide to buy a computer for work or home use.
It is very helpful because it can educate people who are not computer
literate in a world that is being taken over by computers. Things such as the
Internet are used by many people, and it would certainly help to know what
kind of computer to buy so yours would be compatible with someone else's.
This entry shows that I am someone who is around computers a lot and has an
interest in them.
f:\12000 essays\sciences (985)\Computer\Beyaunt force.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Buoyant Force
The purpose of this lab is to calculate the buoyant forces on objects submerged in water.
The first step in the lab was to measure the mass of a metal cylinder, which was found to be 100 g, and then to calculate its weight, which was 0.98 newtons. The next step was to measure the apparent mass of the cylinder when it is completely submerged in a bath of water, which was found to be 88.5 grams, and to convert it to an apparent weight using the formula Wa = ma*g. Knowing these two numbers, the buoyant force that the water places on the object can be calculated using the formula Fb = W - Wa: with W = 0.98 N and Wa = 0.8673 N, Fb = 0.1127 N.
Part 2 of this lab consisted of weighing an empty cup, which was 44 grams, and then filling a second cup up to the point where, if any more water were added, it would spill out of a little opening in the cup and could be caught in the first cup. This is done so that the spilled water can be weighed and compared to the calculated weight the displaced water should have. After the cup was filled, the cylinder was put into it, allowing the water to spill out and be caught in the first cup. The spilled water was then weighed: 8.3 g, which converted to kilograms is 0.0083 kg. The weight of this displaced water was 0.081423 N.
The percentage error between the buoyant force from part 1 and the weight of the displaced water was calculated using percent error = (Fb - Wdisp)/Fb x 100; using 0.114 for Fb and 0.0813 for Wdisp, this gives a 28.7% error.
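A short worked example may make the arithmetic easier to follow. The Python sketch below simply reproduces the lab's calculations from the measured values above (the variable names and the choice of g = 9.8 m/s^2 are mine; the report rounds slightly differently, which is why its 28.7% differs a little from the value printed here).

g = 9.8                      # acceleration due to gravity, m/s^2

# Part 1: buoyant force from true weight minus apparent weight
m = 0.100                    # mass of the metal cylinder, kg
m_apparent = 0.0885          # apparent mass when submerged, kg
W = m * g                    # true weight, about 0.98 N
Wa = m_apparent * g          # apparent weight, about 0.867 N
Fb = W - Wa                  # buoyant force, about 0.113 N

# Part 2: weight of the displaced water (Archimedes' principle check)
m_displaced = 0.0083         # mass of displaced water, kg
W_disp = m_displaced * g     # weight of displaced water, about 0.081 N

# Percentage error between the two estimates of the buoyant force
percent_error = abs(Fb - W_disp) / Fb * 100
print(f"Fb = {Fb:.4f} N, Wdisp = {W_disp:.4f} N, error = {percent_error:.1f}%")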
After completing this lab, it has become more apparent how to calculate buoyant forces and how that information can be used.
Buoyant Forces
f:\12000 essays\sciences (985)\Computer\Bill Gates biography.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
William H. Gates
Chairman and Chief Executive Officer
Microsoft Corporation
William (Bill) H. Gates is chairman and chief executive officer of
Microsoft Corporation, the leading provider, worldwide, of software
for the personal computer. Microsoft had revenues of $8.6 billion for
the fiscal year ending June 1996, and employs more than 20,000
people in 48 countries.
Born on October 28, 1955, Gates and his two sisters grew up in
Seattle. Their father, William H. Gates II, is a Seattle attorney. Their
late mother, Mary Gates, was a schoolteacher, University of Washington regent and
chairwoman of United Way International.
Gates attended public elementary school and the private Lakeside School. There, he began his
career in personal computer software, programming computers at age 13.
In 1973, Gates entered Harvard University as a freshman, where he lived down the hall from
Steve Ballmer, now Microsoft's executive vice president for sales and support. While at
Harvard, Gates developed the programming language BASIC for the first microcomputer -- the
MITS Altair.
In his junior year, Gates dropped out of Harvard to devote his energies to
Microsoft, a company he had begun in 1975 with Paul Allen. Guided by a
belief that the personal computer would be a valuable tool on every office
desktop and in every home, they began developing software for personal
computers.
Gates' foresight and vision regarding personal computing have been central
to the success of Microsoft and the software industry. Gates is actively involved in key
management and strategic decisions at Microsoft, and plays an important role in the technical
development of new products. A significant portion of his time is devoted to meeting with
customers and staying in contact with Microsoft employees around the world through e-mail.
Under Gates' leadership, Microsoft's mission is to continually advance and
improve software technology and to make it easier, more cost-effective and
more enjoyable for people to use computers. The company is committed to
a long-term view, reflected in its investment of more than $2 billion on
research and development in the current fiscal year.
As of December 12, 1996, Gates' Microsoft stock holdings totaled
282,217,980 shares.
In 1995, Gates wrote The Road Ahead, his vision of where information technology will take
society. Co-authored by Nathan Myhrvold, Microsoft's chief technology officer, and Peter
Rinearson, The Road Ahead held the No. 1 spot on the New York Times' bestseller list for
seven weeks. Published in the U.S. by Viking, the book was on the NYT list for a total of 18
weeks. Published in more than 20 countries, the book sold more than 400,000 copies in China
alone. In 1996, while redeploying Microsoft around the Internet, Gates thoroughly revised The
Road Ahead to reflect his view that interactive networks are a major milestone in human
history. The paperback second edition has also become a bestseller. Gates is donating his
proceeds from the book to a non-profit fund that supports teachers worldwide who are
incorporating computers into their classrooms.
In addition to his passion for computers, Gates is interested in
biotechnology. He sits on the board of the Icos Corporation and is a
shareholder in Darwin Molecular, a subsidiary of British-based
Chiroscience. He also founded Corbis Corporation, which is developing
one of the largest resources of visual information in the world: a
comprehensive digital archive of art and photography from public and
private collections around the globe. Gates also has invested with
cellular telephone pioneer Craig McCaw in Teledesic, a company that is working on an
ambitious plan to launch hundreds of low-orbit satellites around the globe to provide worldwide
two-way broadband telecommunications service.
In the decade since Microsoft went public, Gates has donated more than $270 million to
charities, including $200 million to the William H. Gates Foundation. The focus of Gates' giving
is in three areas: education, population issues and access to technology.
Gates was married on Jan. 1, 1994 to Melinda French Gates. They have one child, Jennifer
Katharine Gates, born in 1996.
Gates is an avid reader and enjoys playing golf and bridge.
f:\12000 essays\sciences (985)\Computer\Bootlog in Standard Unix.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[boot]
LoadStart = system.drv
LoadSuccess = system.drv
LoadStart = keyboard.drv
LoadSuccess = keyboard.drv
LoadStart = mscmouse.drv
LoadSuccess = mscmouse.drv
LoadStart = vga.drv
LoadSuccess = vga.drv
LoadStart = mmsound.drv
LoadSuccess = mmsound.drv
LoadStart = comm.drv
LoadSuccess = comm.drv
LoadStart = vgasys.fon
LoadSuccess = vgasys.fon
LoadStart = vgaoem.fon
LoadSuccess = vgaoem.fon
LoadStart = GDI.EXE
LoadStart = FONTS.FON
LoadSuccess = FONTS.FON
LoadStart = vgafix.fon
LoadSuccess = vgafix.fon
LoadStart = OEMFONTS.FON
LoadSuccess = OEMFONTS.FON
LoadSuccess = GDI.EXE
LoadStart = USER.EXE
INIT=Keyboard
INITDONE=Keyboard
INIT=Mouse
STATUS=Mouse driver installed
INITDONE=Mouse
INIT=Display
LoadStart = DISPLAY.drv
LoadSuccess = DISPLAY.drv
INITDONE=Display
INIT=Display Resources
INITDONE=Display Resources
INIT=Fonts
INITDONE=Fonts
INIT=Lang Driver
INITDONE=Lang Driver
LoadSuccess = USER.EXE
LoadStart = setup.exe
LoadStart = LZEXPAND.DLL
LoadSuccess = LZEXPAND.DLL
LoadStart = VER.DLL
LoadSuccess = VER.DLL
LoadSuccess = setup.exe
INIT=Final USER
INITDONE=Final USER
INIT=Installable Drivers
INITDONE=Installable Drivers
f:\12000 essays\sciences (985)\Computer\Bugged.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Bugged
In our high tech world, what was once a complicated
electronic task is no longer such a big deal. I'm talking
about "bugging". No, I don't mean annoying people; I mean
planting electronic listening devices for the purpose of
eavesdropping. Bugging an office is a relatively simple
process if one follows a few basic steps.
First, a person needs to select the bug. There are
many different types of bugs ranging from the infinity bug
with which you can listen in on a telephone conversation from
over 200 miles away to an electronic laser beam which can
pick up the vibrations of a person's voice off a window pane.
The infinity bug sells for $1,000 on the black market and the
laser for $895. Both, however, are illegal.
Second, one needs to know where to plant the bug. A bug
can be hidden in a telephone handset, in the back of a desk
drawer, etc. The important thing to remember is to place the
bug in a spot near where people are likely to talk. The bug
may be useless if it is planted too far away from where
conversations take place.
Last, one needs to know how to plant the bug. One of the
most common ways is to wire a 9-volt battery to the phone's
own microphone and attach it to a spare set of wires that
the phone lines normally contain. This connection enables
the phone to be live on the hook, sending continuous room
sounds to the eavesdropper.
It used to be that hidden microphones and concealed tape
recorders were strictly for cops and spies. Today such
gadgets have filtered down to the jealous spouse, the nosy
neighbor, the high-level executive, and the local politician.
f:\12000 essays\sciences (985)\Computer\Business 5000.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTEL Knows Best?
A Major Marketing Mistake
Problem Statement
When Thomas Nicely, a mathematician at Lynchburg College in Virginia, first went public with the fact that Intel's new Pentium chip was defective, Intel admitted that it had sold millions of defective chips and had known about the defective chips for over four
months. Intel said its reasoning for not going public was that most people would never encounter any problems with the chip. Intel said that a spreadsheet user doing random calculations would only have a problem every 27,000 years, and therefore it saw no reason to replace all of the defective chips. However, if a user possessed a defective chip and could convince Intel that his or her calculations were particularly vulnerable to the flaw, then Intel would supply that person with a new chip. This attitude of 'father knows best' fostered by Intel created an uproar among users and owners of the defective chips. Six weeks after Mr. Nicely went public, IBM, a major purchaser of Pentium chips, stopped all shipments of computers containing the defective Pentium chips. Intel's stock dropped 5% following this bold move by IBM. IBM's main contention was that it puts its customers first, and Intel was failing to do this.
Intel's handling of this defective chip situation gives rise to many questions. During the course of this paper I will address several of them. The first is how a company with such a stellar reputation for consumer satisfaction fell into the trap of believing that the customer does not know best. Secondly, what made this chip defect more of a public issue than other defective products manufactured and sold to the public in the past? Finally, how did Intel recover from such a mistake? How much did it cost them, and what lessons can other companies learn from Intel's marketing blunder so that they do not make the same mistake?
Major Findings
Intel is spearheaded by a chief executive named Andrew Grove. Grove is a "tightly wound engineering Ph.D. who has molded the company in his image. Both the secret of his success and the source of his current dilemma is an anxious management philosophy built around the motto 'Only the paranoid survive'." However, even with this type of philosophy, the resulting dominance he has achieved in the computer arena cannot be overlooked. Intel practically dominates the computer market with $11.5 billion in sales. Intel has over 70% of the $11 billion microprocessor market, while its Pentium and 486 chips basically control the IBM-compatible PC market. All of these factors have resulted in an enviable 56% profit margin that only Intel can seem to achieve. So what did Intel do to achieve this sort of profit margin?
In mid-1994 Intel launched a $150m marketing campaign aimed at getting consumers to recognize the Pentium name and the "Intel Inside" logo. In order to achieve this goal of brand recognition Intel advertised its own name in conjunction with the "Intel Inside" logo and stated 'with Intel Inside, you know you have got. . . unparalleled quality'. This provided immediate name recognition for the company and led the consumers to associate Intel with high quality computers. Then Intel went the extra mile in the marketing world and spent another $80m to promote its new Pentium chips. The basis for this extra $80m was to "speed the market's acceptance of the new chip". The marketing campaign was a success. Intel had managed to achieve brand recognition. "Once the products were branded, companies found that they could generate even higher sales by advertising the benefits of their products. This advertising led consumers to regard brands as having very human personality traits, with one proving fundamental to brand longevity -- trustworthiness." Consumers readily identified a quality, up to date computer as one with a Pentium chip and the 'Intel Inside' logo stamped on the front. This "push" marketing strategy of Intel totally dominated the market, thus forcing the Pentium chip to the forefront of the computer market, all at the expense of the cheaper 486. This "push strategy" of Intel made it plainly clear to its purchasers that Intel was looking out for number one first and its purchasers such as Compaq and IBM second. Making the Pentium chip the mainstay of the computer industry was the goal of Intel, but a goal that would later come back to haunt them for a brief period of time.
Throughout the history of the computer industry many manufacturers have sold defective products. According to Forbes journalist Andrew Kessler, "Every piece of hardware and software ever shipped had a bug in it. You better get used to it." Whether or not 'every' piece ever shipped has had a bug is debatable, but there have been numerous examples of valid software bugs. For example, Quicken 3.0 had a bug that resulted in incorrectly capitalizing the second letter of a name. Intuit, however, handled the situation by selling an upgraded version (Quicken 4.0) which fixed the problem and left the consumer feeling as though he or she had gotten an upgraded version of the existing program. In essence, Intuit had not labeled the upgrade as a debugging program; it had therefore fixed the problem and satisfied the customer all at the same time. While Intuit's customers were feeling as though they had a better product by buying the upgrade, Intuit was padding its pocketbooks through all of the upgrade sales. Other examples of companies standing behind their products are in the news week after week. Just a few years ago Saturn, the GM subsidiary, sent thousands of cars to the junkyards for scrap metal due to corroded engines, a result of contaminated engine coolant. Johnson & Johnson, the maker of Tylenol, recalled every bottle of medicine carrying the Tylenol name and offered a 100% money-back guarantee to anyone who had purchased a bottle that might be contaminated. The precedent was already set, so why would a company with the reputation of Intel fail to immediately replace all of the defective chips it had sold? Furthermore, why did Intel not come forth immediately when it first discovered that its chips had a problem?
Intel's engineers said that the defective chips would affect only one-tenth of 1% of all users, and those users would be doing floating-point operations. (Floating-point operations utilize a matrix of precomputed values, similar to those found in the back of your 1040 tax booklet. If the values in the table are correct then you will come up with a correct answer. This was not the case with the Pentium. A table containing 1066 entries had five incorrect entries, resulting in certain calculations made by the Pentium chips being inaccurate as high as the fourth significant digit.) Considering the low number of people that the chip would supposedly affect and the high cost ($475m) associated with replacing the chips, Intel decided on a case-by-case replacement policy "for those limited users doing critical calculations". Intel's VP-corporate marketing director, Dennis Carter, stated, "We're satisfied that it's addressing the real problem. From a customer relations standpoint, this is clearly new territory for us. A recall would be disruptive for PC users and not the right thing to do for the consumer". This policy infuriated the millions of Pentium purchasers who had bought a PC with a Pentium chip. Word spread like wildfire throughout the consumer world that Intel had sold a defective product and was now refusing to replace it. This selective replacement policy is a "classic example of a product driven company that feels its technical expertise is more important than buyers' feelings". Intel was faced with a decision. Should they take the attitude that the brand is most important and take all necessary action to preserve it, or weigh the monetary cost of doing the right thing and replacing all of the defective chips, and ask whether it would be worth it? Initially they decided that the monetary cost of replacing all defective chips would not be cost efficient due to the sheer numbers involved. Intel had sold an estimated 4.5 million Pentium chips worldwide, and approximately 1.9 million in the U.S. alone. Intel later reversed its selective replacement policy (the Intel-knows-best attitude) and came out with a 100% replacement policy. What was the reasoning behind this change of attitude at Intel?
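For illustration (this example is not from the paper), the flaw could be seen in a single division. The figures below are the widely publicized Pentium FDIV test values; the snippet only sketches the check, since any correctly functioning processor will return roughly zero.

# Widely cited Pentium FDIV test (a sketch, not Intel's own diagnostic).
# On a flawed Pentium, 4195835 / 3145727 reportedly came back as roughly
# 1.333739 instead of the correct 1.333820..., so the expression below
# returned 256 instead of approximately 0.
x = 4195835.0
y = 3145727.0
print(x - (x / y) * y)   # ~0 on correct hardware; 256 on flawed chips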
As a result of the selective replacement policy, IBM announced it would stop all shipments of PCs containing the flawed chips. This, combined with the public outcry at having spent thousands of dollars for PCs that did not work as advertised and the reluctance of corporate users of PCs to purchase new computers, resulted in Intel changing its public policy concerning the defective chips. Intel's new policy was to offer a 100% replacement policy to anyone who desired a new chip. This policy entailed either sending replacement chips to those users who wanted to replace the chip themselves, or providing free professional replacement of the chip for those who did not feel comfortable doing it themselves. Intel's new policy was in line with public expectations, but it had been delayed for several precious weeks. So one might ask, "What did this delayed change in attitude cost Intel in terms of dollars and repeat customers?"
The resulting costs to Intel were enormous in some respects, but almost negligible in others. Intel's fourth-quarter earnings were charged $475m for the costs of replacing and writing off the flawed chips. This was 15% more than analysts had predicted. Fourth-quarter profits dropped 37% to $372m. This was a sharp drop in profits, but $372m is still a number to be reckoned with in the fast paced industry of computers. So did this drop in profits mean that Intel was losing its edge? I tend to think not, since Intel reported that the sale of Pentiums had doubled between the third and fourth quarters, thus lifting revenues in 1994 to $11.5 billion, a 31% increase. Apparently consumers rallied around the new replacement policy and continued to purchase the Pentium equipped computers at a very fast rate, despite the initial reaction of Intel towards replacing the defective chips. This renewed faith was not regained overnight, but nevertheless it happened, therefore Intel is unlikely to lose its commanding lead in the industry. So what type of assurance was it that led to this renewed faith in Intel?
Following Intel's announcement of its 100% replacement policy for the defective chips it recalculated its replacement policy on all future defective products. Intel realized that its "fatal flaw was adopting a 'father knows b
f:\12000 essays\sciences (985)\Computer\business in computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I understand that some students who have already graduated from college are
having a bit of trouble getting their new businesses started. I know of a tool that will
be extremely helpful and is already available to them: the Internet. Up until a few years
ago, when a student graduated they were basically thrown out into the real world with just
their education and their wits. Most of the time this wasn't good enough because after
three or four years of college, the prospective entrepreneur either forgot too much of what
they were supposed to learn, or they just didn't have the finances. Then, by the time they
saved sufficient money, they had again forgotten too much. I believe I have found the
answer. On the Internet your students will be able to find literally thousands of links to
help them with their future enterprises. In almost every city all across North America, no
matter where these students move to, they are able to link up and find everything they
need. They can find links like "Creative Ideas", a place they can go and retrieve ideas,
innovations, inventions, patents and licensing. Once they come up with their own products,
they can find free expert advice on how to market their products. There are easily
accessible links to experts, analysts, consultants and business leaders to guide their way
to starting up their own business, careers and lives. These experts can help push the
beginners in the right direction in every field of business, including every way to
generate start up revenue from better management of personal finances to diving into the
stock market. When the beginner has sufficient funds to actually open their own company,
they can't just expect the customers to come to them, they have to go out and attract them.
This is where the Internet becomes most useful, in advertising. On the Internet, in every
major consumer area in the world, there are dozens of ways to advertise. The easiest and
cheapest way, is to join groups such as "Entrepreneur Weekly". These groups offer weekly
newsletters sent all over the world to major and minor businesses informing them about new
companies on the market. It includes everything about your business from what you
make/sell and where to find you, to what you're worth. These groups also advertise to the
general public. The major portion of the advertising is done over the Internet, but this
is good because that is their target market. By now, hopefully their business is doing
well, sales are up and money is flowing in. How do they keep track of all their funds
without paying for an expensive accountant? Back to the Internet. They can find lots of
expert advice on where they should reinvest their money. Including how many and how
qualified of staff to hire, what technical equipment to buy and even what insurance to
purchase. This is where a lot of companies get into trouble, during expansion. Too many
entrepreneurs try to leap right into the highly competitive mid-size company world. On the
Internet, experts give their secrets on how to let a company's natural growth force its
way in. This way they are more financially stable for the rough road ahead. The Internet
isn't always going to give you the answers you are looking for, but it will always lead you
in the right direction. That is why I hope you will accept my proposal and make the
students of today aware of this invaluable business tool.
f:\12000 essays\sciences (985)\Computer\Can Computers Think The case for and against artificial int.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Can computers think?
The case for and against artificial intelligence
Artificial intelligence has been the subject of many bad '80's
movies and countless science fiction novels. But what happens when we
seriously consider the question of computers that think? Is it possible for
computers to have complex thoughts, and even emotions, like Homo sapiens? This
paper will seek to answer that question and also look at what attempts are being
made to make artificial intelligence (hereafter called AI) a reality.
Before we can investigate whether or not computers can think, it is
necessary to establish what exactly thinking is. Examining the three main
theories is sort of like examining three religions. None offers enough support so
as to effectively eliminate the possibility of the others being true. The three main
theories are: 1. Thought doesn't exist; enough said. 2. Thought does exist, but
is contained wholly in the brain. In other words, the actual material of the brain is
capable of what we identify as thought. 3. Thought is the result of some sort of
mystical phenomena involving the soul and a whole slew of other unprovable
ideas. Since neither reader nor writer is a scientist, for all intents and purposes,
we will say only that thought is what we (as Homo sapiens) experience.
So what are we to consider intelligence? The most compelling
argument is that intelligence is the ability to adapt to an environment. Desktop
computers can, say, go to a specific WWW address. But, if the address were
changed, it wouldn't know how to go about finding the new one (or even that it
should). So intelligence is the ability to perform a task taking into consideration
the circumstances of completing the task.
So now that we have all of that out of that way, can computers think? The
issue is contested as hotly among scientists as the advantages of Superman over
Batman are among pre-pubescent boys. On the one hand are the scientists who say,
as philosopher John Searle does, that "Programs are all syntax and no semantics."
(Discover, 106) Put another way, a computer cannot actually achieve thought
because it "merely follows rules that tell it how to shift symbols without ever
understanding the meaning of those symbols." (Discover, 106) On the other side
of the debate are the advocates of pandemonium, explained by Robert Wright in
Time thus: "[O]ur brain subconsciously generates competing theories about the
world, and only the 'winning' theory becomes part of consciousness. Is that a
nearby fly or a distant airplane on the edge of your vision? Is that a baby crying
or a cat meowing? By the time we become aware of such images and sounds,
these debates have usually been resolved via a winner-take-all struggle. The
winning theory-the one that best matches the data-has wrested control of our
neurons and thus our perceptual field." (54) So, since our thought is based on
previous experience, computers can eventually learn to think.
The event which brought this debate into public scrutiny was Garry
Kasparov, reigning chess champion of the world, competing in a six-game chess
match against Deep Blue, an IBM supercomputer with 32 microprocessors.
Kasparov eventually won (4-2), but it raised the legitimate question, if a computer
can beat the chess champion of the world at his own game (a game thought of as
the ultimate thinking man's game), is there any question of AI's legitimacy?
Indeed, even Kasparov said he "could feel-I could smell- a new kind of
intelligence across the table." (Time, 55) But, eventually everyone, including
Kasparov, realized that what amounts to nothing more than brute force, while
impressive, is not thought. Deep Blue could consider 200 million moves a
second. But it lacked the intuition good human players have. Fred Guterl,
writing in Discover, explains. "Studies have shown that in a typical position, a
strong human player considers on average only two moves. In other words, the
player is choosing between two candidate moves that he intuitively recognizes,
based on prior experience, as contributing to the goals of the position."
Seeking to go beyond the brute force of Deep Blue in separate
projects are M.I.T. professor Rodney Brooks and computer scientist Douglas
Lenat. The desire to conquer AI is where the similarities between the two end.
Brooks is working on an AI being nicknamed Cog. Cog has
cameras for eyes, eight 32-bit microprocessors for a brain and soon will have a
skin-like membrane. Brooks is allowing Cog to learn about the world like a baby
would. "It sits there waving its arm, reaching for things." (Time, 57) Brooks's
hope is that by programming and reprogramming itself, Cog will make the leap to
thinking. This expectation is based on what Julian Dibbell, writing in Time,
describes as the "bottom-up school. Inspired more by biological structures than
by logical ones, the bottom-uppers don't bother trying to write down the rules of
thought. Instead, they try to conjure thought up by building lots of small, simple
programs and encouraging them to interact." (57)
Lenat is critical of this type of AI approach. He accuses Brooks of
wandering aimlessly trying to recreate evolution. Lenat has created CYC, an AI
program which uses the top-down theory, which states that "if you can write down
the logical structures through which we comprehend the world, you're halfway to
re-creating intelligence." (Time, 57) Lenat is feeding CYC common sense
statements (i.e. "Bread is food.") with the hopes that it will make that leap to
making its own logical deductions. Indeed, CYC can already pick a picture of a
father watching his daughter learn to walk when prompted for pictures of happy
people. Brooks has his own criticisms for Lenat. "Without sensory input, the
program's knowledge can never really amount to more than an abstract network
of symbols."
So, what's the answer? The evidence points to the position that AI is
possible. What is our brain but a complicated network of neurons? And what is
thought but response to stimuli? How to go about achieving AI is another
question entirely. All avenues should be explored. Someone is bound to hit on it.
Thank you.
f:\12000 essays\sciences (985)\Computer\Censorship on the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Five years after the first World Wide Web site was launched at the end of 1991, the Internet has become very popular in the United States. Although President Clinton signed the 1996 Telecommunication ActI on Thursday, Feb. 8, 1996, the censorship issue on the net still remains unresolved. In fact, censorship in cyberspace is unconscionable and impossible. Trying to censor the Internet is problematic because the net is an international issue, there is no standard for judging materials, and censorship is an abridgment of the democratic spirit.
Firstly, censorship on the Internet is an international issue. The Internet has been built up by the U.S. military since the 1960s, but no one actually owns it. Thus, the Internet is a global network, and it crosses over different cultures. It is impossible to censor everything that seems to be offensive. For example, on June 4, 1996, Vietnam announced new regulations that forbid "data that can affect national security, social order, and safety or information that is not appropriate to the culture, morality, and traditional customs of the Vietnamese people." It is also impossible to ban all things that are prohibited in a country. For instance, some countries, such as Germany, have considered taking measures against the U.S. and other companies or individuals that have created or distributed offensive material on the Internet. If the United States government really wanted to censor the net, there is only one solution: shut down all network links to other countries. But of course that would mean no Internet access for the whole country, and that would disgust the whole nation.
Secondly, everyone has their own personal judgment and values. The decisions of some people cannot represent the whole population of those using the net. Many people argue that pornography on the net should be censored because there are kids online. However, we can see there are many kinds of pornographic magazines on display at newsstands. It is because we have regulations to limit who can read certain published materials. Likewise, some people already use special software to enforce age limits in cyberspace. Why do people still argue about that? It is all about personal points of view. Justice Douglas said, "To many the Song of Solomon is obscene. I do not think we, the judges, were ever given the constitutional power to make definitions of obscenity."II. In cyberspace, it is hard to set up a pool of judges to censor what could be displayed on the net.
Thirdly, censorship works against the democratic spirit; it opposes the right of free speech and is a breach of the First Amendment. Do you remember Salman Rushdie and his book The Satanic Verses? The Iranian government announced a death threat against Rushdie and his publishers because his book speaks against Islam. No one wants that to happen again. If you are one of the Internet users, you should have seen a blue ribbon logo. The blue ribbon symbolizes support for the essential human right of free speech. Let's think about what would happen if we lost the right of free speech. How could we stay online? Who would give courage to the web's designers to put their opinions on the net? On the same day the 1996 Telecommunication Act was signed into law, a bill called House Bill 1630 was introduced by Georgia House of Representatives member Don Parsons. It is so repellent that this law even limits the right of choosing email addressesIII. "Freedom of speech on the Internet deserves the same protection as freedom of the press, freedom of speech, or freedom of assembly," said Bill GatesIV.
In addition, information in cyberspace can change from second to second. If you put something on the web, everyone on the net can access it instantly. It is totally different from all traditional media. Everything on the Internet is just a combination of zeros and onesV. It is very difficult to track what has been published on the information superhighway.
After President Clinton signed the 1996 Telecommunication Act, lots of net users reacted in outrage. Although the federal courts in Philadelphia and New York have overturned that Act, the government has appealed the ruling and the case has been referred to the U.S. Supreme Court. Since censorship is an international issue, since people have different judgments, and since censorship works against the democratic spirit, censorship on the Internet is totally unacceptable. In Justice Potter Stewart's words, "Censorship reflects a society's lack of confidence in itself. It is a hallmark of an authoritarian regime. Long ago those who wrote our First Amendment charted a different course. They believed a society can be truly strong only when it is truly free."VI If we allow those few in society to censor whatever they find offensive, we have forfeited our right of freedom and have lost our power as a democratic nation.
I.) On Thursday, Feb. 1, 1996, Congress approved legislation to dramatically restrict the First Amendment rights of Internet users. President Clinton signed it into law on Thursday, Feb. 8, 1996.
II.) Miller v. California, 413 U.S. 15, 46 (1973), Justice Douglas, dissenting opinion.
III.) The bill makes it illegal for email users to have addresses that do not include their own names.
IV.) Bill Gates, Microsoft Magazine Volume 3 Issue 4 Page 54, TPD Publishing Inc., 1996
V.) The way in which computers read data.
VI.) Ginzburg v. United States, 383 U.S. 463, 498 (1966)
f:\12000 essays\sciences (985)\Computer\Clifford Stoll.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By Clifford Stoll
"The Cuckoo's Egg" is a story of persistence, love for one's work and is just plain funny! The story starts out with Clifford Stoll being "recycled" to a computer analyst/webmaster.
Cliff, as he is affectionately called, is a long-haired ex-hippie who works at Lawrence Berkeley Lab. He originally was an astronomer, but when his grant ran out, he became a mainframe master. He was glad that instead of throwing him out to the unemployment office, the Lab recycled its people, and downstairs he went, to the computer lab.
A few days after he becomes the master of the mainframe, his colleague, Wayne Graves, asks him to figure out a 75-cent glitch that is in the accounting system. It turns out that a computer guru, "Seventek," seems to be in town. None of his closest friends know that. The Lab becomes suspicious that it might be a hacker. To fill you in on who Seventek is, he is a computer guru who created a number of programs for the Berkeley UNIX system. At the time, he was in England, far from computers and civilization. The crew does not want to believe that it could be Seventek, so they start to look at what the impostor is doing. Cliff hooks up a few computers to the lines that come in from Tymnet. Tymnet is a series of fiber-optic cables that run from one major city to another. So if you were in LA and wanted to hook up to a computer in the Big Apple, you could call long distance, have a lot of interference from other callers and a slow connection, or you could sign up with Tymnet, dial locally, hop on the optic cable and cruise on a T-3 line. The lab had only five Tymnet lines, so Cliff could easily monitor every one with five computers, teletypes, and five printers. That was the difficult part: where to get all that equipment. At graduate school they taught Cliff to improvise. It was a Friday, and not many people come to work on Saturday. Since it was easier to make up an excuse than to beg for anything, he "borrowed" everything he needed. He then programmed his computer to beep twice when someone logged on from the Tymnet lines. The thing is, since he was sleeping under his desk, he would gouge his head on the desk drawer. Also, many people like to check their e-mail very late at night, so as not to get interference. Because of that his terminal beeped a lot! The next day, he was woken up by the cable operator. Cliff said that he must have smelled like a dying goat. Anyway, the hacker only logged on once during the night, but left an 80-foot souvenir behind. Cliff estimated two to three hours of roaming through the three-million-dollar pieces of silicon that he calls a computer. During that time the hacker planted a "cuckoo's egg."
The cuckoo is a bird that leaves its eggs in other birds' nests. If it were not for the other species' ignorance, the cuckoo would die out. The same goes for the mainframe. There is a housecleaning program that runs every five minutes on Berkeley UNIX. It is called atrun. The hacker put his own version of atrun into the computer through a hole in the Gnu-Emacs program. It is a program that lets the person who is sending e-mail put a file anywhere they wish. That is how the hacker became a "superuser." A superuser has all the privileges of a system operator, but from a different computer. Cliff called the FBI, the CIA, and all the other three-lettered agencies that had spooks in trench coats and dark glasses (and some of them had these nifty earpieces too!). Everyone except the FBI lifted a finger. The FBI listened, but they stated that if the lab hadn't lost millions of dollars in equipment or classified data, they didn't want to know about it. The hodgepodge of information passing between the CIA, the NSA, and Cliff began to worry his lover, Martha. A little background on her: she and Clifford have known each other since they were kids, and have been lovers since they turned adults. They didn't feel like getting married because they thought that was a thing you do when you're very bland. They wanted freedom. If they ever wanted to leave, they would just pack their bags, pay their share of the utilities and hightail it out of there. Well, back to the plot. She too was an ex-hippie and she hated anything that had to do with government. The spook calls were killing their relationship.
When Cliff wanted to trace a phone call to the hacker, the police said, "That just isn't our bailiwick." It seemed that everyone wanted information and wanted Cliff to stay open with his monitoring system, but nobody seemed interested in paying for the things that were happening.
When Cliff found the hacker in a supposedly secure system, he called the system administrator. The hacker was using the computer in their system to dial anywhere he wished, and they picked up the tab. The guy was NOT happy. He asked if he should close up shop on the hacker and change all the passwords. Cliff answered no; he wanted to track the guy (or gal). First Cliff strategically masterminded a contrivance. He would ask for the secure system's phone records, which would show him (theoretically) where the hacker was calling to. Then that night, Cliff became the hacker. He used his computer to log in to his account at Berkeley and then Telnet to the hacked system, try the passwords and see what he could see. Boy, was he ever surprised! He could call anywhere, for free!! He had access to other computers on the network also, and sensitive ones at that.
The next day, Cliff called the sys administrator and told him about his little excursion. The guy answered, "Sorry Cliff, we have to close up shop. This went right up the line, and well, the modems are going down for a long time." This irritated Clifford. He was so close! Anyway, his life went back to semi-normal. (Was it ever?!) Then unexpectedly his beeper beeped. To fill you in, he had gotten himself a beeper for those unexpected pleasures. He was in the middle of making scrambled eggs for Martha, who was still asleep. He wrote her a note saying "The case is afoot!!", leaving the eggs still in the pan.
The hacker didn't come through the now-secure system, but through another line, over Tymnet. He called Tymnet and got them to do a search. They traced him over the "puddle" (the Atlantic) to the German Datex network. They couldn't trace any further because the Germans' network is all older switches, not like the computerized switches of the good ol' US of A! There would have to be a technician there, tracing the wire along the wall, into the ground, and maybe onto a telephone pole. Not only that, the Germans wouldn't do anything without a search warrant.
Every minor discovery was told about six times to the different three-letter agencies that were on the case. Meanwhile, since this was no longer a domestic case, and was remotely interesting to the FBI, they took the case, out of pure boredom.
The CIA affectionately called the FBI the "F entry." Now that the guys at the F entry were in, there was work to be done. They got a warrant, but the guy who was supposed to deliver it never did. This was beginning to get serious. Every time Cliff tried to get some info on what was going on across the puddle, the agencies clammed up.
When the warrant finally came, the Germans let the technicians be there until midnight German time. As soon as the fiend on the other side raised his periscope, they would nail him.
The problem was, to trace him, well, he needed to be on the line for about two hours! The kicker is that he was on mostly for two- to three-minute intervals. That is when Operation Showerhead came into effect!! Martha came up with this plan while in the shower with Cliff... First, make up some cheesy files that sound remotely interesting. Then place them in a spot that only he and the hacker could read. Recall that the hacker was after military files. They took files that were already there and changed all the Mr.'s to General, all the Ms.'s to Corporal, and all the Professors to Sergeant Major. All that day they made up those files. Then they pondered what the title should be, STING or SDINET. They chose SDINET because STING looked too obvious. Then they created a bogus secretary, under the address of a real one. Cliff put enough files in the directory so that it would take the hacker at least three hours to dump the whole thing onto his computer.
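As an aside, the kind of dressing-up described above is easy to picture in a few lines of code. The Python sketch below is purely hypothetical (the book does not give Stoll's actual commands); it just shows the sort of title substitution the essay describes.

import re

# Hypothetical sketch of the SDINET dressing-up step: swap civilian titles
# in an existing file's text for military ranks.
replacements = [(r"\bMr\.", "General"), (r"\bMs\.", "Corporal"), (r"\bProfessor\b", "Sergeant Major")]

def militarize(text):
    for pattern, rank in replacements:
        text = re.sub(pattern, rank, text)
    return text

print(militarize("Mr. Smith asked Professor Jones to brief Ms. Doe."))
# -> "General Smith asked Sergeant Major Jones to brief Corporal Doe."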
In one of the files it said that if you wanted more info, you should write to this address. Well, one day Cliff was actually doing some work, for a change, when the real secretary called to say that a letter had come for the bogus secretary. Cliff ran up the stairs; the elevator was too slow. They opened it and she read it aloud to Cliff, who was in utter amazement. Then he called the F entry. They told him not to touch the document and to send it to them in a special envelope. He did.
Cliff was at home one day when all of a sudden his beeper beeped. Since he had programmed it to beep in Morse code, he knew where the hacker was coming from before he physically saw him on the screen. Martha groaned while Clifford jumped on his old ten-speed and rode to work. When he got there, the hacker had just started to download the SDINET files from the UNIX machine. He called Tymnet and started the ball rolling. That day the hacker was on for more than two hours, enough for the trace to be completed. Though he knew that the FBI knew the number, they wouldn't tell him who the predator was.
For the next few days, Clifford expected to get a call from the Germans saying, "You can close up your system, we have him at the police station now." That didn't happen. He got word, though, that there had been a search of the hacker's home, and they recovered printouts, computer back-up tapes, disks, and diskettes. That was enough evidence to lock him up for a few years. Then one day, they caught him in the act. That was enough; he was in the slammer awaiting trial.
Clifford's adventure was over; he had caught his hacker and was engaged to Martha. They decided to get married after all. He returned to being an astronomer, and not a computer wizard. Though many people thought of him as a wizard, he himself thought that what he did was a discovery he stumbled on. From a 75-cent accounting mishap to Tymnet to Virginia to Germany. What a trace!
At the end of the story, poor Cliff was sobbing because he had grown up! To him that was a disaster, but with the wedding coming up and his life officially beginning, he soon forgot it. Now he lives in Cambridge with his wife, Martha, and three cats that he pretends to dislike.
f:\12000 essays\sciences (985)\Computer\CMIP vs SNMP Network Management Protocols.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CMIP vs. SNMP : Network Management
Imagine yourself as a network administrator, responsible for a 2000 user network.
This network reaches from California to New York, and some branches over seas. In
this situation, anything can, and usually does, go wrong, and it would be your job as a
system administrator to resolve the problem when it arises as quickly as possible. The
last thing you would want is for your boss to call you up, asking why you haven't done
anything to fix the 2 major systems that have been down for several hours. How do
you explain to him that you didn't even know about it? Would you even want to tell
him that? So now, picture yourself in the same situation, only this time, you were
using a network monitoring program. Sitting in front of a large screen displaying a
map of the world, leaning back gently in your chair. A gentle warning tone sounds, and
looking at your display, you see that California is now glowing a soft red in color, in
place of the green glow just moments before. You select the state of California, and it
zooms in for a closer look. You see a network diagram overview of all the computers
your company has within California. Two systems are flashing, with an X on top of
them indicating that they are experiencing problems. Tagging the two systems, you
press enter, and with a flash, the screen displays all the statistics of the two systems,
including anything they might have in common causing the problem. Seeing that both
systems are linked to the same card of a network switch, you pick up the phone and
give that branch office a call, notifying them not only that they have a problem, but
how to fix it as well.
Early in the days of computers, a central computer (called a mainframe) was
connected to a bunch of dumb terminals using a standard copper wire. Not much
thought was put into how this was done because there was only one way to do it: they
were either connected, or they weren't. Figure 1 shows a diagram of these early
systems. If something went wrong with this type of system, it was fairly easy to
troubleshoot; the blame almost always fell on the mainframe system.
Shortly after the introduction of Personal Computers (PC), came Local Area
Networks (LANs), forever changing the way in which we look at networked systems.
LANs originally consisted of just PCs connected into groups of computers, but soon
after, there came a need to connect those individual LANs together, forming what is
known as a Wide Area Network, or WAN. The result was a complex connection of
computers joined together using various types of interfaces and protocols. Figure 2
shows a modern day WAN. Last year, a survey of Fortune 500 companies showed that
15% of their total computer budget, 1.6 Million dollars, was spent on network
management (Rose, 115). Because of this, much attention has focused on two families
of network management protocols: The Simple Network Management Protocol
(SNMP), which comes from a de facto standards based background of TCP/IP
communication, and the Common Management Information Protocol (CMIP), which
derives from a de jure standards-based background associated with the Open Systems
Interconnection (OSI) (Fisher, 183).
In this report I will cover the advantages and disadvantages of both the Common
Management Information Protocol (CMIP) and the Simple Network Management Protocol
(SNMP), as well as discuss a new protocol for the future. I will also give some
reasons supporting why I believe that SNMP is a protocol that all network
administrators should use.
SNMP is a protocol that enables a management station to configure, monitor, and
receive trap (alarm) messages from network devices (Feit, 12). It is formally specified
in a series of related Request for Comment (RFC) documents, listed here.
RFC 1089 - SNMP over Ethernet
RFC 1140 - IAB Official Protocol Standards
RFC 1147 - Tools for Monitoring and Debugging TCP/IP
Internets and Interconnected Devices
[superseded by RFC 1470]
RFC 1155 - Structure and Identification of Management
Information for TCP/IP based internets.
RFC 1156 - Management Information Base for Network
Management of TCP/IP based internets
RFC 1157 - A Simple Network Management Protocol
RFC 1158 - Management Information Base for Network
Management of TCP/IP based internets: MIB-II
RFC 1161 - SNMP over OSI
RFC 1212 - Concise MIB Definitions
RFC 1213 - Management Information Base for Network Management
of TCP/IP-based internets: MIB-II
RFC 1215 - A Convention for Defining Traps for use with the SNMP
RFC 1298 - SNMP over IPX (SNMP, Part 1 of 2, I.1.)
The first protocol developed was the Simple Network Management Protocol
(SNMP). It was commonly considered to be a quickly designed "band-aid" solution to
internetwork management difficulties while other, larger and better protocols were
being designed (Miller, 46). However, no better choice became available, and SNMP
soon became the network management protocol of choice.
It works very simply (as the name suggests): it exchanges network packets through
messages known as protocol data units (PDUs). A PDU contains variables that
have both titles and values. There are five types of PDUs which SNMP uses to
monitor a network: two deal with reading terminal data (the get and get-next requests),
one with setting terminal data (the set request), one carries the agent's reply (the get
response), and the last, called the trap, is used for monitoring network events, such as
terminal start-ups or shut-downs.
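As a rough illustration of this structure, here is a minimal Python sketch of my own; it is not part of any SNMP library, and the names (Pdu, VarBind) and the example OID are purely illustrative. It models the five SNMPv1 PDU types and the variable bindings they carry.

from dataclasses import dataclass, field
from enum import Enum

class PduType(Enum):
    GET_REQUEST = 0       # read one or more variables
    GET_NEXT_REQUEST = 1  # read the "next" variable, used to walk tables
    GET_RESPONSE = 2      # agent's reply to a request
    SET_REQUEST = 3       # write (configure) a variable
    TRAP = 4              # unsolicited alarm, e.g. a device restart

@dataclass
class VarBind:
    oid: str       # variable "title" (object identifier)
    value: object  # variable value

@dataclass
class Pdu:
    pdu_type: PduType
    var_binds: list = field(default_factory=list)

# A trap announcing that an interface went down (the OID is only illustrative).
trap = Pdu(PduType.TRAP, [VarBind("1.3.6.1.2.1.2.2.1.8.3", "down")])
print(trap.pdu_type.name, trap.var_binds[0].oid)

A real implementation would encode such a structure with ASN.1/BER before putting it on the wire; the sketch only shows the shape of the message.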
By far the largest advantage of SNMP over CMIP is that its design is simple, so it is
as easy to use on a small network as on a large one, with easy setup and little
stress on system resources. The simple design also makes it easy for users to
program the system variables they would like to monitor. Another major advantage of
SNMP is that it is in wide use today around the world. Because of its development
during a time when no other protocol of this type existed, it became very popular, and
is a built-in protocol supported by most major vendors of networking hardware, such as
hubs, bridges, and routers, as well as by major operating systems. It has even been put
to use inside the Coca-Cola machines at Stanford University, in Palo Alto, California
(Borsook, 48). Because of SNMP's smaller size, it has even been implemented in
such devices as toasters, compact disc players, and battery-operated barking dogs. At
the 1990 Interop show, John Romkey, vice president of engineering for Epilogue,
demonstrated that through an SNMP program running on a PC, you could control a
standard toaster over a network (Miller, 57).
SNMP is by no means a perfect network manager, but because of its simple
design, these flaws can be fixed. The first problem realized by most companies is that
there are some rather large security problems with SNMP. Any decent hacker
can easily access SNMP information, giving them detailed information about the network
and also the ability to potentially shut down systems on the network. The latest version
of SNMP, called SNMPv2, has added security measures that were left out of
SNMP to combat the three largest problems plaguing it: privacy of data (to prevent
intruders from gaining access to information carried along the network), authentication
(to prevent intruders from sending false data across the network), and access control
(which restricts access to particular variables to certain users, thus removing the
possibility of a user accidentally crashing the network) (Stallings, 213).
The largest problem with SNMP, ironically enough, is the same thing that made it
great: its simple design. Because it is so simple, the information it deals with is
neither detailed nor well organized enough to deal with the growing networks of the
1990s. This is mainly due to the quick creation of SNMP; it was never designed to be
the network management protocol of the 1990s. Like the previous flaw, this one too
has been corrected in the new version, SNMPv2. This new version allows for more
detailed specification of variables, including the use of the table data structure for
easier data retrieval. Also added are two new PDUs that are used to manipulate the
tabled objects. In fact, so many new features have been added that the formal
specifications have expanded from 36 pages (with v1) to 416 pages with
SNMPv2 (Stallings, 153). Some people might say that SNMPv2 has lost its simplicity,
but the truth is that the changes were necessary and could not have been avoided.
A management station relies on the agent at a device to retrieve or update the
information at the device. The information is viewed as a logical database, called a
Management Information Base, or MIB. MIB modules describe MIB variables for a
large variety of device types, computer hardware, and software components. The
original MIB for managing a TCP/IP internet (now called MIB-I) was defined in RFC
1066 in August of 1988. It was updated in RFC 1156 in May of 1990. The MIB-II
version, published in RFC 1213 in May of 1991, contained some improvements, and
has proved that it can do a good job of meeting basic TCP/IP management needs.
MIB-II added many useful variables missing from MIB-I (Feit, 85). MIB definitions are
common variables used not only by SNMP but by CMIP as well.
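As a rough sketch (not tied to any real SNMP toolkit), an agent-side view of a MIB can be pictured as a mapping from object identifiers to values. The OIDs below are the well-known MIB-II ones as best I recall them, and the values are invented.

# A toy "agent-side" view of a few MIB-II variables (RFC 1213).
mib = {
    "1.3.6.1.2.1.1.1.0": "Acme Router, version 1.0",  # sysDescr
    "1.3.6.1.2.1.1.3.0": 123456,                      # sysUpTime (hundredths of a second)
    "1.3.6.1.2.1.2.1.0": 4,                           # ifNumber (number of interfaces)
}

def get(oid):
    """What an agent does for a GetRequest: look the variable up and reply."""
    return mib.get(oid, "noSuchName")

print(get("1.3.6.1.2.1.1.1.0"))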
In the late 1980s a project began, funded by governments and large corporations, and
the Common Management Information Protocol (CMIP) was born. Many thought that,
because of its nearly infinite development budget, it would quickly come into
widespread use and overthrow SNMP from its throne. Unfortunately, problems with
its implementation have delayed its use, and it is now only available in limited form
from the developers themselves (SNMP, Part 2 of 2, III.40.).
CMIP was designed to be better than SNMP in every way: by repairing all of its flaws
and expanding on what was good about it, it would become a bigger and more detailed
network manager. Its design is similar to SNMP, in that PDUs are used as the means
of monitoring the network. CMIP, however, contains 11 types of PDUs (compared to
SNMP's 5). In CMIP, the variables are seen as very complex and sophisticated data
structures with three kinds of properties. These include:
1) Variable attributes: the variable's characteristics (its data
type, whether it is writable).
2) Variable behaviors: the actions of that variable that can be triggered.
3) Notifications: the variable generates an event report whenever a specified
event occurs (e.g., a terminal shutdown would cause a variable notification
event) (Comer, 82).
By comparison, SNMP only employs the variable properties from items one and three above.
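The three kinds of properties can be sketched in code. CMIP itself defines managed objects in formal templates rather than Python, so the class below is purely my own illustration of a variable that has attributes, a behavior that can be triggered, and a notification that fires when something happens.

class ManagedVariable:
    """Toy model of a CMIP-style managed variable (illustrative only)."""

    def __init__(self, name, value, writable, on_event=None):
        # 1) Attributes: characteristics of the variable.
        self.name = name
        self.value = value
        self.writable = writable
        self.on_event = on_event  # 3) Notification callback

    # 2) Behavior: an action the variable itself can trigger.
    def set(self, new_value):
        if not self.writable:
            raise PermissionError(f"{self.name} is read-only")
        self.value = new_value
        if self.on_event:
            self.on_event(f"{self.name} changed to {new_value}")

status = ManagedVariable("terminalStatus", "up", writable=True,
                         on_event=lambda msg: print("NOTIFY:", msg))
status.set("down")   # prints: NOTIFY: terminalStatus changed to down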
The biggest feature of the CMIP protocol is that its variables not only relay information
to and from the terminal (as in SNMP), but they can also be used to perform tasks that
would be impossible under SNMP. For instance, if a terminal on a network cannot
reach the fileserver a predetermined number of times, then CMIP can notify the
appropriate personnel of the event. With SNMP, however, a user would have to
specifically tell it to keep track of unsuccessful attempts to reach the server, and then
what to do when that count reaches a limit. CMIP therefore results in a more
efficient management system, and less work is required from the user to stay updated
on the status of the network. CMIP also contains the security measures left out of
SNMP. Because of the large development budget, when it becomes available, CMIP
will be widely used by the government and the corporations that funded it.
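To make the contrast concrete, here is a minimal sketch of what the SNMP-side workaround might look like on the management station: a polling loop that counts consecutive failures and raises an alert at a threshold. The helper names are hypothetical and no real SNMP library is used; this is only the bookkeeping that CMIP would perform for you.

import random

def reach_fileserver():
    """Stand-in for an SNMP poll of the fileserver; randomly fails for the demo."""
    return random.random() > 0.3

def monitor(threshold=3, polls=20):
    failures = 0
    for _ in range(polls):
        if reach_fileserver():
            failures = 0                      # reset the count on success
        else:
            failures += 1
            if failures >= threshold:
                print("ALERT: fileserver unreachable", failures, "times in a row")
                failures = 0                  # notify, then start counting again

monitor()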
Having read all of this, you might wonder why, if CMIP is so wonderful, it is not being
used already (after all, it has been in development for nearly ten years). The answer is
that CMIP has what is possibly its only major disadvantage, and in my opinion it is
enough to render it useless: CMIP requires about ten times the system resources
that are needed for SNMP. In other words, very few systems in the world would be able
to handle a full implementation of CMIP without undergoing massive network
modifications. This disadvantage has no inexpensive fix. For that reason, many
believe CMIP is doomed to fail. The other flaw in CMIP is that it is very difficult to
program. Its complex nature requires so many different variables that only a few
skilled programmers are able to use it to its full potential.
Considering the above information, one can see that both management systems have
their advantages and disadvantages. However, the deciding factor between the two
lies with their implementation: for now, it is almost impossible to find a system with
the necessary resources to support the CMIP model, even though it is superior to
SNMP (v1 and v2) in both design and operation. Many people believe that the
growing power of modern systems will soon fit well with the CMIP model and might
result in its widespread use, but I believe that by the time that day comes, SNMP could
very well have adapted itself to offer what CMIP currently offers, and more. As
we've seen with other products, once a technology achieves critical mass and a
substantial installed base, it's quite difficult to convince users to rip it out and start
fresh with a new and unproven technology (Borsook, 48). It is therefore recommended that
SNMP be used in situations where minimal security is needed, and SNMPv2 be used
where security is a high priority.
Works Cited
Borsook, Paulina. "SNMP tools evolving to meet critical LAN needs." Infoworld
June 1, 1992: 48-49.
Comer, Douglas E. Internetworking with TCP/IP. New York: Prentice-Hall, Inc., 1991.
Dryden, Patrick. "Another view for SNMP." Computerworld December 11, 1995: 12.
Feit, Dr. Sidnie. SNMP. New York: McGraw-Hill Inc., 1995.
Fisher, Sharon. "Dueling Protocols." Byte March 1991: 183-190.
Horwitt, Elisabeth. "SNMP holds steady as network standard." Computerworld
June 1, 1992: 53-54.
Leon, Mark. "Advent creates Java tools for SNMP apps." Infoworld
March 25, 1996: 8.
Rose, Marshall T. The Simple Book. New Jersey: Prentice Hall, 1994.
Miller, Mark A., P.E. Managing Internetworks with SNMP. New York: M&T Books, 1993.
Moore, Steve. "Committee takes another look at SNMP." Computerworld
January 16, 1995: 58.
Moore, Steve. "Users weigh benefits of DMI, SNMP." Computerworld
July, 31 1995: 60.
The SNMP Workshop & Panther Digital Corporation. SNMP FAQ Part 1 of 2.
Danbury, CT: http://www.www.cis.ohio-state.edu/hypertext/faq/usenet/snmp-
faq/part1/faq.html, pantherdig@delphi.com.
The SNMP Workshop & Panther Digital Corporation. SNMP FAQ Part 2 of 2.
Danbury, CT: http://www.www.cis.ohio-state.edu/hypertext/faq/usenet/snmp-
faq/part2/faq.html, pantherdig@delphi.com.
Stallings, William. SNMP, SNMPv2, and CMIP. Don Mills, Addison-Wesley, 1993.
Vallillee, Tyler, web page author. http://www.undergrad.math.uwaterloo.ca/~tkvallil/snmp.html
VanderSluis, Kurt. "SNMP: Not so simple." MacUser October 1992: 237-240.
f:\12000 essays\sciences (985)\Computer\Cognitive Artifacts and Windows 95.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The article on Cognitive Artifacts by Donald A. Norman deals with the theories and principles of artifacts as they relate to the user during the execution and completion of tasks. The principles and theories that Norman speaks about may be applied to any graphical user interface; however, I have chosen to relate the article to the interface known as Windows 95.
Within Windows 95, Microsoft has included a little tool called the wizard that guides us through the steps involved in setting up certain applications. This wizard is a very helpful tool for the inexperienced computer user, in that it acts like a to-do list. The wizard takes a complex task and breaks it into discrete pieces by asking questions and responding based on the answers. Using Norman's theories on the system view and the personal view of artifacts, we see that from the system view the wizard is an enhancement. For example, if we wanted to set up Internet Explorer, you click on the icon, answer the wizard's questions, and the computer performs the work, making sure everything is set up properly without the errors that could occur if you configured the task yourself. The wizard performs all the functions on its little to-do list without the user having to worry about whether he or she remembered to include all the commands. On the side of personal views, the user may see the wizard as a new task to learn, but in general it is simpler than configuring the application yourself and making an error that could cause disaster to your system. The wizard also keeps the user from having to deal with the internal representation of the application, such as typing in command lines in the system editor.
Within Windows 95 most of the representation is internal, so we need a way to transform it into surface representation that is accessible to the user. According to Norman's article there are "three essential ingredients in representational systems. These being the world which is to be represented, the set of symbols representing the world, and an interpreter." This is done in Windows by icons on the desktop and on the start menu. The world we are trying to represent to the user is the application, which can be represented by a symbol, which is the icon. These icons on the desktop and on the start menu are the surface representations the user sees when he goes to access the application, not all the files used to create it or used in conjunction with the application's operation. With the icons a user can retrieve applications and their files with the click of a button. The icons lead the user directly into the application without showing all the commands the computer goes through to open it. The icons make the user more efficient in accomplishing tasks because they cut down on the time spent trying to find an item, since the user can relate what he or she wants to do to the symbol on the icon.
Another example of an artifact within Windows 95 that exhibits Norman's theories is the recycle bin. This requires the user to have a direct engagement with the Windows explorer and to know the right item to delete. As a user decides that he no longer desires a certain program and chooses to delete the item, he is executing a command that will change the state of the system. By selecting the item to delete, the user has started an activity flow which involves the gulf of evaluation and the gulf of execution. Either of these gulfs could be perceived differently by the user than by the system, so Windows 95 prompts the user with a dialog box asking if the user is sure he or she wants to remove this item from the system, and it prompts again when emptying the recycle bin. What the user intends to do and what the system plans to do might not be the same, so by prompting the user for action we are double checking that this is what the user has in mind. However, when Windows prompts us with the confirmation message, we are breaking the scheduled activity flow. The main problem with halting the activity flow is that it breaks the user's attention; however, when deleting an item you could have selected the wrong item by mistake, and without the break in activity flow the outcome could be dangerous. Norman calls these breaks "forcing functions which prevent critical or dangerous actions from happening without conscious attention."
The artifacts discussed above, drawn from the Windows 95 graphical user interface, fit closely with the theories and principles that Norman suggests in his article. Norman has stressed three aspects that a cognitive artifact should follow, and I feel Windows has dealt with them. Windows 95 itself has been made adaptable to the user, whether he or she is an experienced user or not, by creating artifacts like icons and menu bars that are all related to one another. This makes it easier for the user to adapt to the environment and continue computing happily.
f:\12000 essays\sciences (985)\Computer\Communication over the net.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Communication over the internet, and the effects it will have on our economy.
Thesis: Communication over the internet is growing at a rapid rate; this growth may break the monopoly held by the major telecommunication giants.
In this day and age we as a global community are growing at a very fast rate. Communication is a vital tool which aids us in breaking the distance barrier. Over the past decades there has been a monopoly in the telecommunications business, but now, with the power of the internet and super fast data transfer rates, people can communicate across the globe and pay only local rates.
· In essence the local phone companies almost promote this.
- When you log on to the internet, chances are that you are logging on through a local internet provider. You will use your computer modem to dial up and create a data link with your net provider. Where does the net provider get his super fast net connection from? He gets it from the local phone company.
· How logging on to the internet is almost like logging right onto the local telephone company.
- It all boils down to the local phone company approving the use of the internet for any purpose.
· How phone companies are going to bring themselves down.
- I feel that because of this, phone companies will be the cause of their own downfall.
· Methods of communication over the net
- There are many ways of communicating over the net: Internet Relay Chat (text only).
- Video/Audio: there are many net applications which allow the user to simply plug in a mini video camera (which can be purchased for $150 and up), speakers and a microphone, establish a net connection, and be able to see and hear the other person.
- There are also applications such as the internet phone, which enable the user to talk with other people; this works almost like a conventional telephone.
· New technologies and what to expect in the near future.
There have been many new breakthroughs in communications recently; we are unfolding new ideas and new, faster ways of communicating. Fiber optic technology is probably the next major wave. Fiber optic communication over the internet will make it a lot easier to communicate.
· Why there is no jurisdiction over this means of communication.
A major principle of law and order is control over a certain area and population. Laws that apply to one state or province don't necessarily apply to another: in Amsterdam you can order hashish with your coffee, while if you did the same in Singapore you could be executed. The internet does not reside anywhere, nor is it a physical thing. The internet has no boundaries, and there is no way in which we can control it. No one person is liable for what happens on it, and there is no board of control, so nobody has any jurisdiction over what happens on the internet. This should be a major concern to large telecommunication companies.
· Advantages/Disadvantages with the technology available to the normal person
- There is a downside, however, to communication on the internet. For example, when talking on the internet phone you cannot talk both ways at once: one person says something, the other waits until he/she is done, and then the other person can respond. On the other hand, if you have a habit of cutting people off, this would be a good solution for you.
· How corporations other than the telecommunication companies will boom
These new technologies will dramatically lower the cost of communication, not to mention the advantages of online service. For example, it is quite easy for a technician to log on to your machine and fix any problems which may occur.
· How the government gains/loses from this new technology.
The government will gain money from the people as a whole, because people will reduce their costs, enabling themselves to purchase desired goods which are taxable.
· The aftershocks of the effects on the economy i.e. decrease unemployment.
People might argue that there will be major job cuts due to the new technologies, but what do the telephone companies plan to do anyway in the next five to ten years? They are all looking at technology as well to reduce their costs, such as the cost of manual labour. The new technology will also create jobs for graduating university students: there will be a large demand for programming skills, computer-oriented network managers, system operators, etc.
Technology is a tool with which we will improve our quality of life; it will aid us in making life easier so that we can enjoy it to the fullest. Communication over the internet will help a lot, from sending faxes, to chatting with someone from Australia, to video conferencing with the boss. Communication over the internet will affect our economy.
f:\12000 essays\sciences (985)\Computer\Communications Decency Act.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Communications Decency Act
(The Fight For Freedom of Speech on the Internet)
The Communications Decency Act is a bill which insults our rights as American
citizens. It is a bill which SHOULD NOT pass. I'll share with you how Internet users are
reacting to this bill, and why they say it is unconstitutional.
Some individuals disagree with one part of the bill. According to
http://thomas.loc.gov/cgi-bin/ query/z?c104:s.652.enr:, which has the Communications
Decency Act on-line for public viewing: "Whoever uses an Internet service to send to a
person or persons under 18 years of age......any comment, request, suggestion, proposal,
image,........or anything offensive as measured by contemporary community standards,
sexual or excretory activities or organs.....shall be fined $250,000 if the person(s) is/are
under 18....... imprisoned not more than two years.......or both."
The wording of that section seems sensible. However, if this one little paragraph is
approved, many sites such as the: Venus de Milo site located at:
http://www.paris.org/Musees/Louvre/Treasures/gifs/venusdemilo.gif; the Sistine Chapel
at: http://www.oir.ucf.edu/wm/paint/auth/michelangelo/michelangelo.creation and
Michelangelo's David @ http://fileroom.aaup.uic.edu/FileRoom/images/image201.gif
could not be accessed and used by anybody under the age of 18. These works of art and
many other museum pictures would not be available. The bill says these sites show
indecent pictures.
The next part of the CDA has everybody in a big legal fit. We, concerned Internet
users, took the writers of this bill to court, and we won.
This part of the bill states: "Whoever....makes, creates, or solicits...........any
comment, request, suggestion, proposal, image, or other communication
which is obscene, lewd, lascivious, filthy, or indecent.......with intent to annoy, abuse,
threaten, or harass another person......by means of an Internet page..........shall be fined
$250,000 under title 18......imprisoned not more than two years....or both......"
The writer of that paragraph of the bill forgot something. It violates the
constitution. The First Amendment states: "Congress shall make no law....prohibiting or
abridging the freedom of speech......the right of the people peaceably to assemble.....and to
petition the Government.............."
This bill does exactly that. It says we cannot express our feelings cleanly. I
understand that what may be of interest to me, may be offensive to others. Many people
put up warning signs on their websites stating, "This site may contain offensive material.
If you are easily offended you may not want to come here." If the writers of this bill
would have listed that as a requirement there would have been no trouble.
Here is the way I look at it. I think that some things should be censored on the
Internet. Child pornography, for instance, is already illegal, so it follows that it should also
be illegal on the Internet. Besides, psychologically, it damages the children involved.
Something else that should be banned from the Internet is "hacker" programs
meant to harm other Internet users. Some examples of such programs are AOHell, which
can give you access to America On-line for free or be used to harass other users with an
E-mail bomb (America On-line just adopted a policy that gives it the right, where users
allow it, to scan their mail for such harmful things). Another thing that could be banned
is text files which describe how to carry out illegal actions, such as making bombs. The
most famous is the "Anarchist Cookbook," which walks Internet users through some of
the activities mentioned above.
I also believe that the use of log-ins, passwords, and rating systems on Internet
pages is a good idea, and is not a violation of our civil rights. They simply allow
the user to choose what they want to see. Some of these systems are already in use today,
along with programs that watch for obscene and profane keywords, and links to
pornographic sites.
What have Internet users learned from the courts? After all was said and done, we
have learned that passing unconstitutional laws like the CDA is not the exception but the
rule these days in Washington, DC.
Next, the people responsible for giving us the CDA are respectable Republicans
and Democrats, not liberals and conservatives. If someone had asked an Internet
user who is opposed to the CDA to vote for Clinton or Dole this past fall, they would have said,
"Wouldn't that have been like being given a choice between cancer and heart disease?" In
other words, disrespect for the President and Congress seem appropriate.
Third, the White House recognizes that it is cheaper to pass this bill and simply say: this
is the law, live with it. Doing so would prove to me that this country is run by politicians
who do not care about the people, their rights, or the law. This bill, if passed, would only
prove to me that all the government cares about is itself and its money. A great
president by the name of Abraham Lincoln once said, "This country was made for the
people, and run by the people..." America can now only hope for another man like
Lincoln to step up and lead this country, bringing it back to what it used to be.
Also, it is time to focus on the things we need to have in this country, like building
a new society. After World War II and Vietnam, I believe it is the computer generation's
destiny to rebuild our families and give communities the ability to evolve, solve problems,
generate and distribute wealth, promote peace, and provide personal security.
Finally, freedom is struggle, by definition. Freedom on the Internet is not a gift.
It's the space we ourselves own, in the face of the government and the media, who have
seemingly tried to take that space away from us.
The CDA will also take away sites such as the Library of Congress Card
Catalog, which some say contains "indecent" language. We will not be able to view such
literature as Mark Twain's The Adventures of Huckleberry Finn and Nathaniel
Hawthorne's The Scarlet Letter, because the CDA says those "classics" contain offensive
material. The act would also shut down the sites that tell teens about safe sex and
sexually transmitted diseases. Most on-line newspapers, such as USA TODAY, would
have to be blacked out whenever the monitor's screen shows articles about sex.
"Ignorance is caused by stupidity!" That has become a familiar "battle" cry of
Internet users. The goverment knows hardly nothing about the pride Internet users take in
having their own "world." That is the stupidity part of it. The ingnorance is the politicians
refusing to listen to us. They do not want to understand.
Some ways you can help fight this terrible bill would be to march through
Washington, DC on July 30, 1997. Many people have turned their web page
backgrounds black to show they are protesting. Some display blue ribbons to show an
Internet user's displeasure with the CDA.
Another way to show you care is to e-mail high political officers. I have e-mailed
the current president (9:23 PM, 11-5-96) Bill Clinton and the vice-president Al Gore. I
have also mailed Bob Dole and Jack Kemp.
On the more local level I have e-mailed Senators: Rick Santorum and Arlen
Specter and Representatives: Jon Fox, Paul Kanjorski, Paul McHale, John Murtha, Robert
Walker, and Curt Weldon. I have mailed: Gov. Tom Ridge, Lt. Gov. Mark Schweiker
and Senators Roy Afflerbach, Gibson Armstrong, Clarence Bell, David Brightbill, J. Doyle
Corman, Daniel Delp, Vincent Fumo, Jim Gerlach, Stewart Greenleaf, Melissa Hart, F.
Joseph Loeper, Roger Madigan, Robert Mellow, Harold Mowery Jr., John Peterson,
James Rhoades, Robert Robbins, Allyson Schwartz, Joseph Uliana, Noah Wenger, Rep.
Lisa Boscola, Rep. Italo Cappabianca and Rep. Lawrence Curry have been contacted by
myself as well. I have e-mailed Happy Fernandez, a Philadelphia City Councilwoman.
The message I sent them is a smaller version of this one:
"To whom it may concern,
I am writing to you about the Communications Decency Act. I believe the act is
unconstitutional. Amendment I states: "Congress shall make no law......abridging the
freedom of speech...." This alone should prohibit this act. The Communications Decency
Act will force many educational Internet sites to close. I, as a student, use the Information
Super Highway for exactly that, information. It is very helpful to have updated facts and
so forth. With the Communications Decency Act, such sites as the Library of Congress
Electronic Card Catalog would be kept away from me because of "indecent" titles. I use
the word indecent in quotation marks because I feel it is being used improperly. Some
other sites, will be closed because of nudity. Such sites as Michelangelo's David, because
of the "nudity." There again I use quotations. Sites informing teenagers such as myself of
the dangers of Unprotected Sex and AIDS, as well as other STD's will not be allowed to
be shown.
I know I may be taking this the wrong way, so I would appreciate a response telling
me why this act should pass. I hope you consider what I, and many others, have been
saying.
Thank you for your time,
Ryne Crabb "
Another huge part of this world-wide protest was the Electronic March on
Washington, DC. People, of all ages, who care about the unconstitutionality of the CDA,
went to the White House and made signs, etc. while marching around the White House's
property. Also, everybody was asked to e-mail the president in protest. President Clinton
got over 10,000 e-mail messages on that day. I think it opened a lot of eyes.
Black Thursday was another big issue. Over 82% of the Internet's websites had a
"blackout." "Yahoo!" the famous search engine also blackened all of their pages in
protest. It was beautiful how many heads were turned. Major businesses such as AT&T
and ESPN also did their part in this battle by making comments about it to less informed
Internet users.
Although there are other things happening in cyberspace, this issue remains a
major problem. Chances are, however, when this piece of legal mess is settled, happily or
not, another will come up. I can almost see what is next on the list. Some countries are
taxing the Internet. Trust me, we do not even WANT to get into that, yet.
I hope this opened your eyes as to the importance of this fight. We need to show
the government this country still is made for the people, and run by the people. That is
written in the constitution. We do not want to change the document our forefathers wrote
expressing their wishes for our future generations. That document protects our freedoms.
It is important that the constitution remains intact so that it can preserve all of our
freedoms including use of the Internet as we see fit.
f:\12000 essays\sciences (985)\Computer\Compaq Corporation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
The intention of this project is to demonstrate the function of production planning in a non-artificial environment. Through this simulation we are able to forecast, with a degree of certainty, the monthly requirements for end products, subassemblies, parts and raw materials. We are supplied with information on which we are to base our decisions. The manufacturer depicted in this simulation was actually a General Electric facility that produced black and white television sets in Syracuse, New York. Unfortunately this plant is no longer operational; it was closed down and the equipment was shipped off to China. One can only wonder whether, if the plant manager had taken Professor Moily's class in production management, the plant might still be running.
Modern production management, or operations management (OM), systems first came to prominence in the early half of the twentieth century. Frederick W. Taylor is considered the father of operations management and is credited with the development of the following principles.
a. Scientific laws govern how much a worker can produce in a day.
b. It is the function of management to discover and use these laws in operation of productive systems.
c. It is the function of the worker to carry out management's wishes without question.
Many of today's methods of operations management have elements of the above principles. For example, part of a Material Requirements Planning (MRP) system is learning how many workers to hire, fire, or lay idle. This is because we realize that a worker can only produce so many widgets a day, can work only so many hours a day, and only so many days a year.
I disagree with principle "c", that the worker should blindly carry out the wishes of management. Successful operations are based upon a two-way flow of thought and suggestions between management and labor. This two-way flow of ideas is incorporated into another modern system of operations management, the Just-In-Time system. Eastman Kodak gives monetary rewards to employees who devise an improvement to a current process or suggest an entirely new process of manufacturing. Often a small suggestion can yield a big reward when applied to a mass-produced item.
Body
In this project we are presented with the following information: bounds for pricing decisions, market share determination, the product explosion matrix, sales history (units per month at average price), unit value, setup man-hours, running man-hours, initial workforce, value of inventory, and on-hand units. We also know that we have eight end products, four subassemblies, eight parts, and four raw materials. The eight end products are made up entirely of the subassemblies, parts, and raw materials. From this information I was able to determine how many units of each final product to make, how many units of parts to produce in a month, how many units of raw material to order every month, and how to price the final products.
The first step that I took in this project was to develop product structures for each product (please refer to the Appendices for product structures on all eight products, plus new product nine). This information was presented in the product explosion matrix. For example, I determined that product one used one subassembly nine and one part thirteen. Part thirteen consisted of raw material twenty-one. Subassembly nine consists of part thirteen (which includes raw material twenty-one), raw material twenty-one and raw material twenty-four. From this product explosion matrix I realized that an end product does not just happen; it consists of many subassemblies, parts and raw materials.
We also determined the minimal direct cost of each of the eight products. The minimal direct cost is the cost of the raw material plus the cost of the labor needed to assemble it into an end product. For product one we have a total of three units of raw material "twenty-one," which cost ten dollars apiece, and one unit of raw material "twenty-four," which costs twenty dollars, giving a total of fifty dollars for the parts. Next we calculate the labor that goes into transforming these parts into a viable end product: a total of six running man-hours per unit at an hourly labor rate of $8.50, which gives us fifty-one dollars. This gives a minimal total cost of $101 to produce product one. This number is useful in determining how much a unit actually costs to manufacture and what we must minimally sell the product for to make a profit. We can then analyze whether a product costs too much to make, or whether the sum of the parts is worth more than the price of the end product. Product eight had the lowest minimal direct cost ($89.50) and product four had the highest.
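As a quick check of the arithmetic above, here is a small sketch of the calculation. The figures are the ones quoted for product one, and the helper function is my own illustration of the formula (material cost plus running labor), not part of the MANMAN simulation.

def minimal_direct_cost(materials, run_hours, labour_rate=8.50):
    """materials: list of (units, unit_cost) pairs; returns material + labor cost."""
    material_cost = sum(units * cost for units, cost in materials)
    return material_cost + run_hours * labour_rate

# Product one: three units of raw material 21 at $10, one unit of raw material 24
# at $20, and six running man-hours at $8.50 per hour.
print(minimal_direct_cost([(3, 10.0), (1, 20.0)], run_hours=6))   # 101.0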
From a purely economic standpoint, it would be beneficial to use as much of raw material twenty-three ($5 per unit) and as little of raw material twenty-two ($30 per unit) as possible. This does not consider that raw material twenty-two may actually be more valuable than raw material twenty-three. Perhaps raw material twenty-two is gold or silver, while raw material twenty-three is sand or glass.
I also converted all the information in the sales history to units per month (figure four of the MANMAN packet). The purpose of this step was so that I could sort and add the sales numbers to chronicle the past twenty-four months. Clearly product one was the best-selling apparatus, and products three, four and five were sales laggards.
Entering the information into spreadsheet form was also necessary to present the eight products in graphical form. Of the graph types that were at my disposal (line, bar, circle), I chose the line graph to clearly illustrate the upward and downward trend of each of the eight products. A circle graph is good for percentage comparisons or comparisons of market share. Bar graphs can illustrate a snapshot in time but can distort trend data.
At this point our class gathered into groups to discuss which products to discontinue. Obviously product one was not going to be one of the discontinued products, since it was our volume leader. Based on the sales figures for the past twenty-four months, my group decided to eliminate products three, four and five. Products three, four and five also had the highest minimum direct costs. Since these products were expensive to manufacture and were our lowest-selling products, a group decision was made to discontinue them.
The discontinued products were then rolled over into a new product, now referred to as product nine. Unfortunately, we were unable to decide from the information given whether any of the discontinued products was a high-margin, low-volume product (i.e., a 50" big-screen color Trinitron tube with oak cabinet and stereo sound).
Moving right into our next step, we began to analyze our bar charts to make our starting forecast. We viewed the sales of each product to see if they fall under one of the following situations:
Base
(Base + Trend)
(Base + Trend) * Seasonality
When a product is base, its sales alter little each sales period or change erratically with external market signals. An example of a product that would fall under the base model is sand bags. Sand bags sell at the same level month after month: if a retailer sells a hundred bags in March, they will sell a hundred bags in October. But in a flood plain after a torrential downpour, the sales of sandbags increase exponentially, because many people purchase sandbags to hold back the rising flood waters. Another example of a product that would follow the base model is insulin. There is a limited number of people with insulin-dependent diabetes. The people with insulin-dependent diabetes unfortunately die off, but are replaced by other people who fall ill to the disease. There is very little movement up or down in the sale of insulin.
The base plus trend model illustrates that a product has a trend of upward or downward growth in sales. Products at the beginning or ending of their respective product cycles will display this type of performance. Sales of a new product such as the Microsoft Windows 95(tm) operating system will fall into this category. The sales of May are expected to be larger than April's, the sales of April will be larger than March's, and so on. While the sales may decline (or increase) during a particular time frame, a trend of upward or downward growth will be apparent.
Lastly, the base plus trend times seasonality model attempts to forecast the swings in demand that are caused by seasonal changes that can be expected to repeat themselves during a single or consecutive time period. For example, florists experience predictable increases in demand each year, occurring at similar (or exact) times during the year: Mother's Day and St. Valentine's Day. Florists must forecast demand for roses and other flowers so they can meet this predictable demand. If I were to construct a ten-year historical graph for a neighborhood florist, there would be a clear increase in demand every February and May, in every one of those years. A caveat to the previous example is that in most lines of business, forecasting is never this easy. If it were, there would not be a production management class or an operations management science!
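A compact way to compare the three models is to write the forecast out as a small function. The sketch below is only a generic illustration; the base, trend and seasonal numbers are invented and are not taken from the MANMAN data.

def forecast(period, base, trend=0.0, seasonality=None):
    """Forecast demand for a given period under the three models discussed above.

    base only:                    demand = base
    base + trend:                 demand = base + trend * period
    (base + trend) * seasonality: multiply by the seasonal index for that period
    """
    demand = base + trend * period
    if seasonality:
        demand *= seasonality[period % len(seasonality)]
    return demand

# Illustrative numbers only: base of 1200 units, a +40 units/month trend,
# and a 12-month seasonal profile that peaks late in the year.
season = [0.9, 0.9, 1.0, 1.0, 1.0, 0.9, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3]
for month in range(3):
    print(month, round(forecast(month, base=1200, trend=40, seasonality=season)))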
Some other methods used to forecast demand are: the Delphi method, historical analogy, simple moving averages, Box-Jenkins models, exponential smoothing and regression analysis. Forecasting falls into four basic types: qualitative, time series analysis, causal relationships and simulation. All of the preceding have pluses, minuses and degrees of accuracy; much often depends on the precision of previous data. Also, as is often stated in financial prospectuses, "past performance does not guarantee future results".
For product one I used base plus trend. The sales started off at 1246 units and gradually increased to 2146 at the end of twenty-four months. There was a slight dip in sales between month nineteen and month twenty-three. This drop could have come from internal or external variables.
Product two was a little more tricky. The swings were erratic and showed no detectable trend. I might have been able to use (Base + Trend) * Seasonality had there not been a decrease in sales from month eight and an increase in sales in month sixteen. For this product I had to employ the base, or simple, method.
While I find it hard to comprehend how television sales can be seasonal, products three, five and six fall under the (Base + Trend) * Seasonality model. I was able to replicate the wave in demand with my forecast. Perhaps consumers are buying portable televisions to use at the beach while on vacation, or people are replacing their old televisions to watch the Super Bowl championship game or the World Series. Or maybe even to watch the Syracuse Orangemen in the NCAA college basketball championship!
Conceivably, I was reading too much into product six when I decided on the base plus trend model. The way I saw it, none of the upward or downward swings were that substantial when compared with the entire data set, and sales from month one (521 units) decreased by almost fifty percent to 242 units.
I felt the same way about product eight that I felt about product two: this product demonstrated erratic swings with no particular trend. I forecasted demand using the base, or simple, method for this product.
From this point I was able to forecast demand. For the safety stock decision I always tried to err on the side of caution. On average I used a twenty-five percent safety stock level. However, when calculating the MRP or labor plans I tried
to have the minimal amount of surplus. This often meant that I only had the safety stock on hand from period to period.
Conclusion
From this project and from the class lectures I have gained an understanding of how much planning goes into even the simplest of manufactured goods. Production managers must employ at least one type of forecasting method in order to avoid the everyday stock-outs, late deliveries and labor problems that arise. Forecasts are vital to every business organization and for every significant management decision.
Afterthought
I feel that I could have further reduced costs by reducing the number of parts and subassemblies and by outsourcing some of the production. Another situation that I felt was unrealistic was that there was only one source for each part, and when that part was unavailable, there was a stock-out. Perhaps in future projects there can be an allowance for this.
f:\12000 essays\sciences (985)\Computer\Comparing Motorola and Intel Math Coprocessors.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Floating Point Coprocessors
The designer of any microprocessor would like to extend its
instruction set almost infinitely but is limited by the quantity of silicon
available (not to mention the problems of testability and complexity).
Consequently, a real microprocessor represents a compromise between what
is desirable and what is acceptable to the majority of the chip's users. For
example, the 68020 microprocessor is not optimized for applications that
require a large volume of scientific (i.e. floating point) calculations. One
method to significantly enhance the performance of such a microprocessor is
to add a coprocessor. Increasing the power of a microprocessor in this way does not
mean simply adding a few more instructions to the instruction set; it involves
adding an auxiliary processor that works in parallel with the MPU (Micro
Processing Unit). A system involving concurrently operating processors can
be very complex, since there need to be dedicated communication paths
between the processors, as well as software to divide the tasks among them.
A practical multiprocessing system should be as simple as possible and
require a minimum overhead in terms of both hardware and software. There
are various techniques of arranging a coprocessor alongside a
microprocessor. One technique is to provide the coprocessor with an
instruction interpreter and program counter. Each instruction fetched from
memory is examined by both the MPU and the coprocessor. If it is a MPU
instruction, the MPU executes it; otherwise the coprocessor executes it. It
can be seen that this solution is feasible, but by no means simple, as it would
be difficult to keep the MPU and coprocessor in step. Another technique is
to equip the microprocessor with a special bus to communicate with the
external coprocessor. Whenever the microprocessor encounters an operation
that requires the intervention of the coprocessor, the special bus provides a
dedicated high-speed communication between the MPU and the coprocessor.
Once again, this solution is not simple. There are more methods of
connecting two (or more) concurrently operating processors, which will be
covered in more detail during the specific discussions of the Intel and
Motorola floating point coprocessors.
Motorola Floating Point Coprocessor (FPC) 68882
The designers of the 68000-family coprocessors decided to implement
coprocessors that could work with existing and future generations of
microprocessors with minimal hardware and software overhead. The actual
approach taken by the Motorola engineers was to tightly couple the
coprocessor to the host microprocessor and to treat the coprocessor as a
memory-mapped peripheral lying inside the CPU address space. In effect,
the MPU fetches instructions from memory, and, if an instruction is a
coprocessor instruction, the MPU passes it to the coprocessor by means of
the MPU's asynchronous data transfer bus. By adopting this approach, the
coprocessor does not have to fetch or interpret instructions itself. Thus if the
coprocessor requires data from memory, the MPU must fetch it. There are
advantages and disadvantages to this design. Most notably, the coprocessor
does not have to deal with, for example, bus errors, as all fetching is
performed by the host MPU. On the other hand, the FPC cannot act as a bus
master (making it a non-DMA device), so memory accesses by the FPC are
slower than if it were directly connected to the address and data buses.
In order for the coprocessor to work as a memory mapped device, the
designers of the 68000 series of MPUs had to set aside certain bit patterns
to represent opcodes for the FPC. In the case of the 68000 family, the FPC is
accessed through the opcode 1111(2). This number is the same as 'F' in
hexadecimal notation, so this bit pattern is often referred to as the F-line.
Interface
The 68882 FPC employs an entirely conventional asynchronous bus
interface like all 68000 class devices, and absolutely no new signals
whatsoever are required to connect the unit to an MC 68020 MPU. The
68882 can be configured to run under a variety of different circumstances,
including various sized data buses and clock speeds. What follows is a
diagram of connections necessary to connect the 68882 to a 68020 or 68030
MPU using a 32-bit data path.
As mentioned previously, all instructions for the FPC are of the F-line
format, that is, they begin with the bit pattern 1111(2). A generic coprocessor
instruction has the following format: the first four bits must be 1111. This
identifies the instruction as being for the coprocessor. The next three bits
identify the coprocessor type, followed by three bits representing the
instruction type. The meaning of the remaining bits varies depending on the
specific instruction.
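The split into fields can be illustrated with a few lines of bit masking. The sketch below is only my own illustration of the 1111 / coprocessor-id / instruction-type layout described above; the example word, and the assumption that the FPC commonly answers to coprocessor id 1, are for demonstration rather than taken from a data sheet.

def decode_f_line(word):
    """Split a 16-bit F-line word into the fields described above (illustrative)."""
    line    = (word >> 12) & 0xF   # must be 0b1111 for a coprocessor instruction
    cp_id   = (word >> 9) & 0x7    # which coprocessor (the FPC is commonly id 1)
    op_type = (word >> 6) & 0x7    # instruction type
    rest    = word & 0x3F          # meaning depends on the specific instruction
    if line != 0b1111:
        raise ValueError("not an F-line instruction")
    return cp_id, op_type, rest

print(decode_f_line(0b1111_001_000_000000))   # hypothetical word: coprocessor 1, type 0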
Coprocessor Operation
When the MPU detects an F-line instruction, it writes the instruction
into the coprocessor's memory-mapped command register in CPU space.
Having sent a command to the coprocessor, the host processor reads the
reply from the coprocessor's response register. The response could, for
example, instruct the processor to fetch data from memory. Once the host
processor has complied with the demands from the coprocessor, it is free to
continue with instruction processing, that is, both the processor and
coprocessor act concurrently. This is why system speed can be dramatically
improved upon installation of a coprocessor.
MC 68882 Specifics
The MC 68882 floating point coprocessor is basically a very simple
device, though its data manual is nearly as thick as that of the MC 68000.
This complexity is due to the IEEE floating point arithmetic standards rather
than the nature of the FPC. The 68882 contains eight 80-bit floating point
data registers, FP0 to FP7, one 32-bit control register, FPCR, and one 32-bit
status register, FPSR. Because the FPC is memory mapped in CPU space,
these registers are directly accessible to the programmer within the register
space of the host MPU. In addition to the standard byte, word and longword
operations, the FPC supports four new operand sizes: single precision real
(.S), double precision real (.D), extended precision real (.X) and packed
decimal string (.P). All on-chip calculations take place in extended precision
format and all floating point registers hold extended precision values. The
single real and double real formats are used to input and output operands.
All three real floating point formats comply with the corresponding IEEE
floating point number standards. The FPC has built in functions to convert
between the various data formats added by the unit, for example a register
move with specified operand type (.P, .B, etc).
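The practical difference between the single, double and extended formats is easiest to see in bytes. The short Python sketch below is my own illustration, unrelated to the 68882 itself: it packs the same value as an IEEE single and double precision real, and extended precision is only noted in a comment because the struct module has no portable code for an 80-bit format.

import struct

value = 3.141592653589793

single = struct.pack(">f", value)   # 32-bit IEEE single precision (.S)
double = struct.pack(">d", value)   # 64-bit IEEE double precision (.D)

print("single:", single.hex(), "->", struct.unpack(">f", single)[0])
print("double:", double.hex(), "->", struct.unpack(">d", double)[0])
# The 68882's .X (extended) format is 80 bits wide and is what the FP0-FP7
# registers hold internally.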
The 68882 FPC has a significant instruction set designed to satisfy
many number-crunching situations. All instructions native to the FPC start
with the bit pattern 1111(2) to show that the instruction deals with floating
point numbers. Some instructions supported by the FPC include FCOSH,
FETOX, FLOG2, FTENTOX, FADD, FMUL and FSQRT. There are many
more instructions available, but this excerpt demonstrates the versatility of
the 68882 unit.
One of the registers within the FPC is the status register. It is very
similar in function to the status register in a CPU; it is updated to show the
outcome of the most recently executed instruction. Flags within the status
register of the FPC include divide by zero, infinity, zero, overflow,
underflow and not a number. Some of the conditions signaled by the status
register of the FPC (for example divide by zero) require an exception routine
to be executed, so that the user is informed of the situation. These exceptions
are stored and executed within the host MPU, which means that the FPC can
be used to control loops and tests within user programs - further extending
the functionality of the coprocessor.
Intel Math Coprocessor 80387 DX
In many respects, the Intel 80387 math coprocessor (MCP) is very
similar to the MC 68882. Both designs were influenced by such factors as
cost, usability and performance. There are, however, subtle differences in
the designs of the two units.
Firstly, I shall discuss the similarities between the designs followed by
differences. Like the 68882, the 80387 requires no additional hardware to be
connected to a 80386. It is a non-DMA device, having no direct access to the
address bus of the motherboard. All memory and I/O is handled by the CPU,
which upon detection of a MCP instruction passes it along to the MCP. If
additional memory reads are necessary to load operands or data, the MCP
instructs the CPU to perform these actions. This design, although reducing
MCP performance when compared to a direct connection to the address bus,
significantly decreases complexity of the MCP as no separate address
decoding or error handling logic is necessary. The connection between the
CPU and the MCP is via a synchronous bus, while the internal
operation of the MCP can run asynchronously (at a higher clock speed).
Moreover, the three functional units of the MCP can work in parallel to
increase system performance. The CPU can be transferring commands and
data to the MCP bus control logic while the MCP floating unit is executing
the current instruction. Similar to the 68882, the 80387 has a bit pattern
(11011(2)) reserved to identify instructions intended for it. Also, the registers
of the MCP are memory mapped into CPU address space, making the
internal registers of the MCP available to programmers.
Internally, the 80387 contains three distinct units: the bus
control logic (BCL), the data interface and control unit and the actual
floating point unit. The data interface and control unit directs the data to the
instruction decoder. The instruction decoder decodes the ESC instructions
sent to it by the CPU and generates controls that direct the data flow in the
instruction buffer. It also triggers the microinstruction sequencer that
controls execution of each instruction. If the ESC instruction is FINIT,
FCLEX, FSTSW, FSTSW AX, or FSTCW, the control unit executes it
independently of the FPU and the sequencer. The data interface and control
unit is the unit that generates the BUSY#, PEREQ and ERROR# signals
that synchronize Intel 387 DX MCP activities with the Intel 80386 DX CPU.
It also supports the FPU in all operations that it cannot perform alone (e.g.
exceptions handling, transcendental operations, etc.).
The FPU executes all instructions that involve the register stack,
including arithmetic, logical, transcendental, constant, and data transfer
instructions. The data path in the FPU is 84 bits wide (68 significant bits, 15
exponent bits, and a sign bit) which allows internal operand transfers to be
performed at very high speeds.
Interface
The MCP is connected to the MPU via a synchronous connection,
while the numeric core can operate at a different clock speed, making it
asynchronous. The following diagram will clarify this.
The following diagram shows the specific connections necessary
between the 80386 MPU and the 80387 MCP.
A typical coprocessor instruction must begin with the bit pattern
11011(2) to identify the instruction for the coprocessor. The bus control logic
of the MCP (BCL) communicates solely with the CPU using I/O bus cycles.
The BCL appears to the CPU as a special peripheral device. It is special in
one important respect: the CPU uses reserved I/O addresses to communicate
with the BCL. The BCL does not communicate directly with memory. The
CPU performs all memory access, transferring input operands from memory
to the MCP and transferring outputs from the MCP to memory.
Coprocessor Operation
When the CPU detects the arrival of a coprocessor instruction, it
writes the instruction into the coprocessor's memory-mapped command
register in CPU space. Having sent a command to the coprocessor, the host
processor reads the reply from the coprocessor's signals. The response
could, for example, instruct the processor to fetch data from memory. Once
the host processor has complied with the demands from the coprocessor, it is
free to continue with instruction processing, that is, both the processor and
coprocessor act concurrently. This is why system speed can be dramatically
improved upon installation of a coprocessor.
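The sequence can be pictured with a small simulation. The C sketch below is only a behavioural model of the handshake described above (a command written to a memory-mapped register, a "fetch operand" response, the host then free to continue); the opcode value, status codes and structure layout are invented for illustration and are not the real 80386/80387 bus protocol:
/* Behavioural model of a host CPU driving a coprocessor through a command register. */
#include <stdio.h>
#include <math.h>
enum cp_response { CP_IDLE, CP_NEED_OPERAND, CP_BUSY, CP_DONE };
struct coprocessor {
    int command;              /* simulated memory-mapped command register */
    double operand;           /* operand the host CPU fetches from memory */
    double result;
    enum cp_response status;  /* reply the host reads back */
};
static void host_issue(struct coprocessor *cp, int opcode) {
    cp->command = opcode;          /* host writes the instruction into the command register */
    cp->status  = CP_NEED_OPERAND; /* coprocessor asks the host to fetch data from memory */
}
int main(void) {
    struct coprocessor cp = {0};
    double memory_operand = 2.0;       /* value the CPU fetches on the MCP's behalf */
    host_issue(&cp, 0xDB);             /* hypothetical ESC-style opcode */
    if (cp.status == CP_NEED_OPERAND) {
        cp.operand = memory_operand;   /* the CPU performs all memory access for the MCP */
        cp.status  = CP_BUSY;          /* host may now continue with its own instructions */
        cp.result  = sqrt(cp.operand); /* coprocessor does the numeric work concurrently */
        cp.status  = CP_DONE;
    }
    printf("result = %f\n", cp.result);
    return 0;
}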
80387 Specifics
Just like the MC 68882 floating point coprocessor, the Intel 80387 is
basically a very simple device. Like any reasonable math coprocessor, it
conforms to the IEEE standards of floating point number representations.
The 80387 contains eight 82-bit floating point data registers (including a 2-
bit tag field), R0 to R7, one 16-bit control register, one 16-bit status register
and a tag word (that contains the tag fields for the eight data registers). The
MCP also indirectly uses the 48-bit instruction and data pointer registers of
the 80386 host processor, even though these are external to the unit. Because
the FPC is memory mapped in CPU space, these registers are directly
accessible to the programmer within the register space of the host MPU. In
addition to the standard word, short and long (16, 32 and 64-bit) integer
operations, the MCP supports four new operand sizes: single precision real,
double precision real, extended precision real and packed binary coded
decimal strings. All on-chip calculations take place in extended precision
format and all floating point registers hold extended precision values. The
single real and double real formats are used to input and output operands.
All three real floating point formats comply with the corresponding IEEE
floating point number standards. The MCP has built-in functions to convert
between the various data formats added by the unit.
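The three real formats map naturally onto C types. The sketch below assumes, as is common with x86 compilers but not guaranteed, that float, double and long double correspond to the single, double and extended precision formats:
/* Sketch: sizes of the three real formats as seen from C (assumed x87 mapping). */
#include <stdio.h>
int main(void) {
    printf("single precision:   %zu bytes\n", sizeof(float));       /* typically 4 */
    printf("double precision:   %zu bytes\n", sizeof(double));      /* typically 8 */
    printf("extended precision: %zu bytes\n", sizeof(long double)); /* 80-bit format,
                                                  often padded to 12 or 16 bytes */
    return 0;
}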
The 80387 has a significant instruction set designed to satisfy many
number-crunching situations. All instructions native to the MCP start with
the bit pattern 11011(2) to show that the instruction should be directed to the
coprocessor. Some (of the over 70) instructions supported by the MCP are
FCOMP, FDIV, FSQRT, FSINCOS, FINIT. There are many more
instructions available, but this excerpt demonstrates the versatility of the
80387 unit, which is very similar to that of the 68882 unit.
One of the registers within the MCP is the status register. Just like for
the 68882, the status register shows the outcome of the most recently
executed instruction. Flags within the status register of the FPC include
divide by zero, infinity, zero, overflow, underflow and invalid operation.
Some of the conditions signaled by the status register of the FPC (for
example divide by zero) require an exception routine to be executed by the
host MPU, so that the user is informed of the situation. These exceptions are
stored and executed within the host MPU, which means that the MCP can
again be used to control loops and tests within user programs - further
extending the functionality of the coprocessor.
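Testing such flags amounts to masking bits in the status word. The bit positions in the sketch below follow the usual x87 layout but should be read as illustrative assumptions, not as a definitive register map:
/* Sketch: checking exception flags in a 16-bit status word (bit positions assumed). */
#include <stdio.h>
#include <stdint.h>
#define FLAG_INVALID     (1u << 0)
#define FLAG_ZERO_DIVIDE (1u << 2)
#define FLAG_OVERFLOW    (1u << 3)
#define FLAG_UNDERFLOW   (1u << 4)
static void report(uint16_t status_word) {
    if (status_word & FLAG_ZERO_DIVIDE)
        printf("divide-by-zero flagged; host exception routine would run\n");
    if (status_word & FLAG_OVERFLOW)
        printf("overflow flagged\n");
}
int main(void) {
    uint16_t status = FLAG_ZERO_DIVIDE;   /* pretend the last instruction divided by zero */
    report(status);
    return 0;
}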
The Intel 80387 DX MCP register set can be accessed either as a
stack, with instructions operating on the top one or two stack elements, or as
a fixed register set, with instructions operating on explicitly designated
registers. The TOP field in the status word identifies the current top-of-stack
register. A ``push'' operation decrements TOP by one and loads a value into
the new TOP register. A ``pop'' operation stores the value from the current
top register and then increments TOP by one. Like the 80386 DX
microprocessor stacks in memory, the MCP register stack grows ``down''
toward lower-addressed registers. Instructions may address the data registers
either implicitly or explicitly. The explicit register addressing is also relative
to TOP.
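The behaviour of TOP is easy to model. The C sketch below imitates the stack discipline just described (push decrements TOP, pop increments it, and explicit ST(i) addressing is relative to TOP); it is a behavioural model only, not the 80387's implementation:
/* Sketch: an eight-register stack addressed through a TOP field. */
#include <stdio.h>
#define NREGS 8
struct fpstack {
    double reg[NREGS];
    int top;                                /* index of the current top-of-stack register */
};
static void push(struct fpstack *s, double v) {
    s->top = (s->top + NREGS - 1) % NREGS;  /* push decrements TOP (wrapping modulo 8) */
    s->reg[s->top] = v;                     /* load the value into the new TOP register */
}
static double pop(struct fpstack *s) {
    double v = s->reg[s->top];              /* store the value from the current top register */
    s->top = (s->top + 1) % NREGS;          /* then increment TOP */
    return v;
}
static double st(const struct fpstack *s, int i) {
    return s->reg[(s->top + i) % NREGS];    /* explicit ST(i) addressing, relative to TOP */
}
int main(void) {
    struct fpstack s = { .top = 0 };
    push(&s, 3.0);
    push(&s, 4.0);
    printf("ST(0)=%g ST(1)=%g\n", st(&s, 0), st(&s, 1));   /* prints 4 and 3 */
    printf("popped %g\n", pop(&s));
    return 0;
}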
A notable feature of the 80387 is the addition of a tag field of 2 bits to
each of the eight floating point registers. The tag word marks the content of
each numeric data register, as Figure 2.1 shows. Each two-bit tag represents
one of the eight numeric registers. The principal function of the tag word is
to optimize the MCP's performance and stack handling by making it possible
to distinguish between empty and nonempty register locations. It also
enables exception handlers to check the contents of a stack location without
the need to perform complex decoding of the actual data.
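Such a tag word is simple to model: two bits per register packed into a 16-bit word. The encodings in the sketch below (valid, zero, special, empty) follow the conventional x87 values, but they are used here as an illustration rather than as a specification:
/* Sketch: a 16-bit tag word holding a 2-bit tag for each of eight registers. */
#include <stdio.h>
#include <stdint.h>
enum tag { TAG_VALID = 0, TAG_ZERO = 1, TAG_SPECIAL = 2, TAG_EMPTY = 3 };
static enum tag get_tag(uint16_t tagword, int reg) {
    return (enum tag)((tagword >> (2 * reg)) & 0x3);        /* two bits per register */
}
static uint16_t set_tag(uint16_t tagword, int reg, enum tag t) {
    tagword &= (uint16_t)~(0x3u << (2 * reg));              /* clear the old tag */
    return (uint16_t)(tagword | ((uint16_t)t << (2 * reg)));
}
int main(void) {
    uint16_t tw = 0xFFFF;                 /* all eight registers start out tagged empty */
    tw = set_tag(tw, 0, TAG_VALID);       /* register 0 now holds a valid number */
    printf("reg0 empty? %s\n", get_tag(tw, 0) == TAG_EMPTY ? "yes" : "no");
    printf("reg1 empty? %s\n", get_tag(tw, 1) == TAG_EMPTY ? "yes" : "no");
    return 0;
}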
Evaluation of the Two Coprocessors
I started this paper thinking that the Motorola math coprocessor had to
be better in design, implementation and features than its Intel counterpart.
Throughout my research I came to realize that my opinions were based on
nothing but myths. In many respects the two coprocessors are very similar to
each other, while in other respects the coprocessors differ radically in design
and implementation. I will sum up the points I consider most important.
1. Intel uses a synchronous bus between the CPU and the MCP, while
the internal floating point unit can run asynchronously to this bus.
This increases the complexity of the design, as synchronization logic
must exist between the two processors, but in return the floating
point unit can run at a higher clock speed than the CPU when a
dedicated clock generator is installed.
2. The (logical, not physical) addition of tag fields to the data
registers in the 80387 to signal certain conditions of the data
registers makes certain operations that support tags much faster, as
certain information does not need to be decoded as it is "cached" in
the tag fields.
3. The 80387 can use its registers either in stack mode or absolute
addressing mode. Though some operations require stack
addressing, this feature adds a little more flexibility to the MCP
(even though the stack operations might be a legacy from the 8087
or 80287).
In most other respects, the coprocessors are equals. They have the same
number of data registers, both expose their own instruction set and registers to
programmers in a transparent fashion, and both support the same IEEE
numeric representation standards. Both coprocessors probably have similar
processing power at equal clock speeds as well. Even though the Motorola
coprocessor seems to be superior by reputation, I have to admit that the 80387
gets my vote for more flexibility and thoughtful optimizations (tags).
f:\12000 essays\sciences (985)\Computer\Computer Assignment.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers in Education
The typical school has 1 computer per 20 students, a ratio that computer educators feel is still not high enough to affect classroom learning as much as books and classroom conversation.
Some critics see computer education as merely the latest in a series of unsuccessful attempts to revolutionise education through the use of audio- and visually-oriented non print media. For example, motion pictures, broadcast television, filmstrips, audio recorders, and videotapes were all initially heralded for their instructional potential, but each of these ultimately became minor classroom tools alongside conventional methods.
Communications Satellite
A communications satellite is an artificial satellite placed into orbit around the Earth to facilitate communications on Earth. Most long-distance radio communication across land is sent via microwave relay towers. In effect, a satellite serves as a tall microwave tower to permit direct transmission between stations, but it can interconnect any number of stations that are included within the antenna beams of the satellite rather than simply the two ends of the microwave link.
Computer Crime
Computer crime is defined as any crime involving a computer accomplished through the use or knowledge of computer technology. Computers are objects of crime when they or their contents are damaged, as when terrorists attack computer centres with explosives or gasoline, or when a "computer virus" (a program capable of altering or erasing computer memory) is introduced into a computer system.
Personal Computer
A personal computer is a computer that is based on a microprocessor, a small semiconductor chip that performs the operations of a CPU.
Personal computers are single-user machines, whereas larger computers generally have multiple users. Word processing, communicating with other computers over a phone line using a modem, databases, and leisure games are just some of the uses of a personal computer.
Computers for Leisure Games
As they proliferated, video games gained colour and complexity and adopted the basic theme that most of them still exhibit: the violent annihilation of an enemy by means of one's skill at moving a lever or pushing a button.
Many of the games played on home computers are more or less identical with those in video arcades. Increasingly, however, computer games are becoming more sophisticated, more difficult, and no longer dependent on elapsed time; a few computer games go on for many hours. Graphics have improved to the point where they almost resemble movies rather than the rough, jagged video screens of past games. Some of the newest arcade games generate their graphics from CD-ROM. Many include complicated sounds, and some even have music and real actors. Given an imaginative programmer, a sophisticated video game has the potential to offer an almost limitless array of exotic worlds and fantastic situations.
In the early 90s, parents and government were becoming increasingly aware of violence in video games, so warning labels were introduced on game boxes, much like ratings for movies.
f:\12000 essays\sciences (985)\Computer\Computer Building.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The computer we are trying to buy has a:
Cyrix 166+ Motherboard and CPU
32 megs of RAM
2.1 gig Hard Drive
USRobotics x2 56k modem (the best normal modem you can buy)
Logitech mouse, preferably the $80 Trackman Marble (best mouse you can buy)
Sound Blaster 64 AWE sound card (best on the market)
Creative Labs 12x Cd Drive
3.5" floppy drive
PCI 2 meg 64-bit graphics card
17" monitor (big one)
The first place I went to was InfoCastle Computers at www.infocastle.com (not local). They seem to have fair prices, but nothing to get excited about. Once you've seen some of the places out of Sacramento, you realize that buying local is good because:
A. Their prices are usually better
B. You don't have to pay the shipping
C. You can see what you're buying before you pay for it
Anyway, here are the InfoCastle prices:
RAM: 4x32 16 MB 60 ns $100.00 x 2 = $200.00
Hard drive: Western Digital 2.1 GB IDE Caviar $250.00
CPU: Cyrix P166+ (133 MHz clock) $169.90
Modem: 33.6 Apache Modem $109.95
Mouse: Logitech Mouse $23.95
Sound card: Sound Conductor $39.00
CD-ROM: SONY 8x $129.00
Floppy drive: NEC $29.00
PCI video card: 2 MB Trident $59.00 (not 64-bit though)
Keyboard: $11.00
As this place didn't have cases, motherboards, or monitors, we'll price them at $40, $150, and $300 respectively, because these are the average prices for these items.
The total price for the desired computer here was about $1530 plus shipping, which would end up at about $1600 total, but this machine wouldn't have as good a CD drive, sound card, modem, or monitor (15 incher) as the preferred computer would, and costs $100 more... This isn't the one for us.
The next place visited was the ComputerSmith's Parts Place. They seem to have much better prices than the last place, InfoCastle, and their web page is nicer. Their URL is www.websmiths.com/csmiths/ .
Computersmith's
Creative Labs 16 Sound Card $52
Western Digital 2.5gb Hard Drive $247 (bigger but cheaper!?)
18 x CD-ROM $124
Mini-Mid Tower Case 230 wt Power Supply $42
AMD 5K86 P-166 CPU $129 (it's not as good as the Cyrix, but it'll do)
Keytronic 104 Key $25
16mb 4x32 RAM $83 x 2 = $166
56K US Robotics Internal Modem $195
Microsoft Intellimouse $59 (no Logitech so Microsoft, oh yech)
17 inch SVGA Flat Screen Monitor $485 (nicer than the one on the preferred computer)
Pentium P-5 Intel Triton III 512K Motherboard $118
Matrox Mystique 2mb $109 (this is better than the one in the preferred computer!!!)
3.5 Floppy Drive 1.44 meg $25
Canon BJC-4200 InkJet Printer $259 (I put this in for fun, to get an idea of how much a quality printer costs)
The total cost for the computer with a lower grade mouse, CPU, and sound card but higher quality video card, CD drive, monitor, and hard drive would be $1776. With the CD drive and all this is a great price for this machine but not what we want. What we want is the
Preferred Computer
The best prices I found to build a Cyrix 6x86 166+ with a 2 meg video card, 32 megs of RAM, a Creative Labs 12x CD drive, a Sound Blaster AWE 64, a Logitech Trackman Marble (best mouse on the market), a 56k x2 USR modem, a 1.44 meg floppy drive, a 2.1 gig hard drive, and a 17" monitor is as follows:
Cyrix 6x86 166+ Motherboard and Chip w/1 meg S3 Trio 64V+ graphics card + medium tower case
$289
32 megs of RAM
$138
Creative Sound Blaster 64 and CD Drive
$269
Logitech Trackman Marble
$69
x2 Modem
$179
1.44 Floppy Drive
$27
2.1 Gig Hard Drive
$199
17" monitor
$329
Total: $1499 + tax
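As a quick check, the itemised prices above really do add up to the quoted total; the short C sketch below simply sums the figures from the list:
/* Sketch: summing the itemised prices listed above. */
#include <stdio.h>
int main(void) {
    int prices[] = { 289, 138, 269, 69, 179, 27, 199, 329 };  /* board/CPU/case+video, RAM,
                       sound+CD, mouse, modem, floppy, hard drive, monitor */
    int total = 0;
    for (unsigned i = 0; i < sizeof prices / sizeof prices[0]; i++)
        total += prices[i];
    printf("total = $%d\n", total);   /* prints: total = $1499 */
    return 0;
}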
This computer was put together by the good people at Ben and Son Computers. For all your computing needs, call (916)637-4515 and ask for Ben. You tell him what you want, how much money you have for it, and he'll make it happen speedy-like.
I got these prices out of the April issue of California Computer News, which you can get at any local grocery or computer store, and best of all: it's free!!! But come to Ben for the best prices.
Word from Ben:
Buying computer parts is just like buying anything else. You need to know where to look, what to look for, and how much a good price is. You DON'T have to settle for a terrible price just because you don't know what a good one is. I've done much research over 8 years and I know what to look for, and if you would like some advice on buying your machine, give me a call.
f:\12000 essays\sciences (985)\Computer\Computer Communications.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Bus Network
Bus Network, in computer science, a topology (configuration) for a local
area network in which all nodes are connected to a main communications line (bus). On a bus
network, each node monitors activity on the line. Messages are detected by all nodes but are
accepted only by the node(s) to which they are addressed. Because a bus network relies on a
common data "highway," a malfunctioning node simply ceases to communicate; it doesn't
disrupt operation as it might on a ring network, in which messages are passed from one node
to the next. To avoid collisions that occur when two or more nodes try to use the line at
the same time, bus networks commonly rely on collision detection or token passing to
regulate traffic.
Star Network
Star Network, in computer science, a local area network in
which each device (node) is connected to a central computer in a star-shaped configuration
(topology); commonly, a network consisting of a central computer (the hub) surrounded by
terminals. In a star network, messages pass directly from a node to the central computer,
which handles any further routing (as to another node) that might be necessary. A star
network is reliable in the sense that a node can fail without affecting any other node on
the network. Its weakness, however, is that failure of the central computer results in a
shutdown of the entire network. And because each node is individually wired to the hub,
cabling costs can be high.
Ring Network
Ring Network, in computer science, a local area
network in which devices (nodes) are connected in a closed loop, or ring. Messages in a ring
network pass in one direction, from node to node. As a message travels around the ring, each
node examines the destination address attached to the message. If the address is the same as
the address assigned to the node, the node accepts the message; otherwise, it regenerates
the signal and passes the message along to the next node in the circle. Such regeneration
allows a ring network to cover larger distances than star and bus networks. It can also be
designed to bypass any malfunctioning or failed node. Because of the closed loop, however,
new nodes can be difficult to add.
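The forwarding rule of a ring can be captured in a few lines of C. The sketch below sends a message one direction around a five-node ring, with each node either accepting it or regenerating and passing it on; the node count and addresses are arbitrary assumptions:
/* Sketch: a message travelling around a ring until it reaches its destination. */
#include <stdio.h>
#define NODES 5
int main(void) {
    int source = 0, destination = 3;
    int node = (source + 1) % NODES;      /* the message leaves the source node */
    while (node != destination) {
        printf("node %d: not addressed to me, regenerating and forwarding\n", node);
        node = (node + 1) % NODES;        /* one direction around the ring */
    }
    printf("node %d: address matches, message accepted\n", node);
    return 0;
}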
Asynchronous Transfer Mode
ATM is a new networking technology standard for high-speed, high-capacity voice, data,
text and video transmission that will soon transform the way businesses and all types of
organizations communicate. It will enable the management of information, integration of
systems and communications between individuals in ways that, to some extent, haven't even
been conceived yet. ATM can transmit more than 10 million cells per second, resulting in
higher capacity, faster delivery and greater reliability. ATM simplifies information
transfer and exchange by compartmentalizing information into uniform segments called cells.
These cells allow any type of information--from voice to video--to be transmitted over almost
any type of digitized communications medium (fiber optics, copper wire, cable). This
simplification can eliminate the need for redundant local and wide area networks
and eradicate the bottlenecks that plague current networking systems. Eventually, global
standardization will enable information to move from country to country, at least as fast as
it now moves from office to office, and in many cases faster.
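Cell-based transfer can be illustrated with a short sketch. The C program below chops an arbitrary message into fixed-size cells; the 48-byte payload used here is the classic ATM cell payload size, taken as an assumption since the figure is not given above, and the cell header is omitted:
/* Sketch: segmenting a message into uniform fixed-size cells. */
#include <stdio.h>
#include <string.h>
#define PAYLOAD 48   /* assumed payload bytes per cell */
int main(void) {
    const char *message = "any type of information, from voice to video, goes into cells";
    size_t len = strlen(message);
    size_t ncells = (len + PAYLOAD - 1) / PAYLOAD;     /* round up to whole cells */
    for (size_t i = 0; i < ncells; i++) {
        char cell[PAYLOAD + 1] = {0};
        size_t n = (len - i * PAYLOAD < PAYLOAD) ? len - i * PAYLOAD : PAYLOAD;
        memcpy(cell, message + i * PAYLOAD, n);        /* header bytes omitted in this sketch */
        printf("cell %zu (%zu bytes): %s\n", i, n, cell);
    }
    return 0;
}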
Fiber Distributed Data Interface
The Fiber Distributed Data Interface (FDDI) modules from Bay Networks are designed
for high-performance, high-availability connectivity in support of internetwork topologies
that include campus or building backbone networks for lower speed LANs, interconnection of
mainframes or minicomputers to peripherals, and LAN interconnection for workstations
requiring high-performance networking. FDDI is a 100-Mbps token-passing LAN that
uses highly reliable fiber-optic media and performs automatic fault recovery through dual
counter-rotating rings. A primary ring supports normal data transfer while a secondary ring
allows for automatic recovery. Bay Networks FDDI supports standards-based translation
bridging and multiprotocol routing. It is also fully compliant with ANSI, IEEE, and Internet
Engineering Task Force (IETF) FDDI specifications. The Bay Networks FDDI interface features a
high-performance second-generation Motorola FDDI chipset in a design that provides
cost-effective high-speed communication over an FDDI network. The FDDI chip set provides
expanded functionality such as transparent and translation bridging as well as many advanced
performance features. Bay Networks FDDI is available in three versions - multimode,
single-mode, and hybrid. All versions support a Class A dual attachment or dual-homing Class
B single attachment. Bay Networks FDDI provides the performance required for the most
demanding LAN backbone and high-speed interconnect applications. Forwarding performance over
FDDI exceeds 165,000 packets per second (pps) in the high-end BLN and BCN. An innovative
High-Speed Filters option filters packets at wire speed, enabling microprocessor resources to
remain dedicated to packet forwarding.
Data Compression In Graphics
MPEG
MPEG is a group of
people that meet under ISO (the International Standards Organization) to generate standards
for digital video (sequences of images in time) and audio compression. In particular, they
define a compressed bit stream, which implicitly defines a decompressor. However, the
compression algorithms are up to the individual manufacturers, and that is where proprietary
advantage is obtained within the scope of a publicly available international standard. MPEG
meets roughly four times a year for roughly a week each time. In between meetings, a great
deal of work is done by the members, so it doesn't all happen at the meetings. The work is
organized and planned at the meetings. So far (as of January 1996), MPEG has completed the
standard for its first phase, called MPEG I. This defines a bit stream for compressed video and
audio optimized to fit into a bandwidth (data rate) of 1.5 Mbits/s. This rate is special
because it is the data rate of (uncompressed) audio CD's and DAT's. The standard is in three
parts, video, audio, and systems, where the last part gives the integration of the audio and
video streams with the proper timestamping to allow synchronization of the two. They have
also gotten well into MPEG phase II, whose task is to define a bitstream for video and audio
coded at around 3 to 10 Mbits/s.
How MPEG I works
First off, it starts with a relatively low
resolution video sequence (possibly decimated from the original) of about 352 by 240 frames
by 30 frames/s, but original high (CD) quality audio. The images are in color, but
converted to YUV space, and the two chrominance channels (U and V) are decimated further to
176 by 120 pixels. It turns out that you can get away with a lot less resolution in those
channels and not notice it, at least in "natural" (not computer generated) images. The
basic scheme is to predict motion from frame to frame in the temporal direction, and then to
use DCT's (discrete cosine transforms) to organize the redundancy in the spatial directions.
The DCT's are done on 8x8 blocks, and the motion prediction is done in the luminance (Y)
channel on 16x16 blocks. In other words, given the 16x16 block in the current frame that
you are trying to code, you look for a close match to that block in a previous or future
frame (there are backward prediction modes where later frames are sent first to allow
interpolating between frames). The DCT coefficients (of either the actual data, or the
difference between this block and the close match) are "quantized", which means that you
divide them by some value to drop bits off the bottom end. Hopefully, many of the
coefficients will then end up being zero. The quantization can change for every
"macroblock" (a macroblock is 16x16 of Y and the corresponding 8x8's in both U and V). The
results of all of this, which include the DCT coefficients, the motion vectors, and the
quantization parameters (and other stuff), are Huffman coded using fixed tables. The DCT
coefficients have a special Huffman table that is "two-dimensional" in that one code
specifies a run-length of zeros and the non-zero value that ended the run. Also, the motion
vectors and the DC DCT components are DPCM (subtracted from the last one) coded.
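The quantisation and run-length steps can be illustrated with a toy example. The C sketch below quantises a short, made-up row of DCT coefficients and then emits (run, value) pairs; real MPEG additionally uses zig-zag ordering of the full 8x8 block and Huffman tables, which are omitted here:
/* Sketch: quantise DCT coefficients, then run-length code the zeros. */
#include <stdio.h>
int main(void) {
    int coeff[16] = { 312, -48, 21, 0, 0, 9, 0, 0, 0, -6, 0, 0, 0, 0, 0, 0 };  /* made up */
    int quant = 16;                  /* quantiser step: larger value, more zeros, fewer bits */
    int run = 0;
    for (int i = 0; i < 16; i++) {
        int q = coeff[i] / quant;    /* "divide them by some value to drop bits" */
        if (q == 0) {
            run++;                   /* extend the current run of zero coefficients */
        } else {
            printf("(run=%d, value=%d)\n", run, q);   /* one code per run plus nonzero value */
            run = 0;
        }
    }
    if (run > 0)
        printf("end of block after %d trailing zeros\n", run);
    return 0;
}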
f:\12000 essays\sciences (985)\Computer\Computer Crime 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A report discussing the proposition that computer crime has increased dramatically over the last 10 years.
Introduction
Computer crime is generally defined as any crime accomplished through special knowledge of computer technology. Increasing instances of white-collar crime involve computers as more businesses automate and the information held by the computers becomes an important asset. Computers can also become objects of crime when they or their contents are damaged, for example when vandals attack the computer itself, or when a "computer virus" (a program capable of altering or erasing computer memory) is introduced into a computer system.
As subjects of crime, computers represent the electronic environment in which frauds are programmed and executed; an example is the transfer of money balances in accounts to perpetrators' accounts for withdrawal. Computers are instruments of crime when they are used to plan or control such criminal acts. Examples of these types of crimes are complex embezzlements that might occur over long periods of time, or when a computer operator uses a computer to steal or alter valuable information from an employer.
Variety and Extent
Since the first cases were reported in 1958, computers have been used for most kinds of crime, including fraud, theft, embezzlement, burglary, sabotage, espionage, murder, and forgery. One study of 1,500 computer crimes established that most of them were committed by trusted computer users within businesses, i.e. persons with the requisite skills, knowledge, access, and resources. Much of known computer crime has consisted of entering false data into computers. This method of computer crime is simpler and safer than the complex process of writing a program to change data already in the computer.
Now that personal computers with the ability to communicate by telephone are prevalent in our society, increasing numbers of crimes have been perpetrated by computer hobbyists, known as "hackers," who display a high level of technical expertise. These "hackers" are able to manipulate various communications systems so that their interference with other computer systems is hidden and their real identity is difficult to trace. The crimes committed by most "hackers" consist mainly of simple but costly electronic
trespassing, copyrighted-information piracy, and vandalism. There is also evidence that organised professional criminals have been attacking and using computer systems as they find their old activities and environments being automated.
Another area of grave concern to both the operators and users of computer systems is the increasing prevalence of computer viruses. A computer virus is generally defined as any sort of destructive computer program, though the term is usually reserved for the most dangerous ones. The ethos of a computer virus is an intent to cause damage, "akin to vandalism on a small scale, or terrorism on a grand scale." There are many ways in which viruses can be spread. A virus can be introduced to networked computers thereby infecting every computer on the network or by sharing disks between computers. As more home users now have access to modems, bulletin board systems where users may download software have increasingly become the target of viruses. Viruses cause damage by either attacking another file or by simply filling up the computer's memory or by using up the computer's processor power. There are a number of different types of viruses, but one of the factors common to most of them is that they all copy themselves (or parts of themselves). Viruses are, in essence, self-replicating.
We will now consider a "pseudo-virus," called a worm. People in the computer industry do not agree on the distinctions between worms and viruses. Regardless, a worm is a program specifically designed to move through networks. A worm may have constructive purposes, such as to find machines with free resources that could be more efficiently used, but usually a worm is used to disable or slow down computers. More specifically, worms are defined as, "computer virus programs ... [which] propagate on a computer network without the aid of an unwitting human accomplice. These programs move of their own volition based upon stored knowledge of the network structure."
Another type of virus is the "Trojan Horse." These viruses hide inside another seemingly harmless program and once the Trojan Horse program is used on the computer system, the virus spreads. One of the most famous virus types of recent years is the Time Bomb, which is a delayed action virus of some type. This type of virus has gained notoriety as a result of the Michelangelo virus. This virus was designed to erase the hard drives of people using IBM compatible computers on the artist's birthday. Michelangelo was so prevalent that it was even distributed accidentally by some software publishers when the software developers' computers became infected.
SYSOPs must also worry about being liable to their users as a result of viruses which cause a disruption in service. Viruses can cause a disruption in
service or service can be suspended to prevent the spread of a virus. If the SYSOP has guaranteed to provide continuous service then any disruption in service could result in a breach of contract and litigation could ensue. However, contract provisions could provide for excuse or deferral of obligation in the event of disruption of service by a virus.
Legislation
The first federal computer crime law, entitled the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984, was passed in October of 1984.
The Act made it a felony to knowingly access a computer without authorisation, or in excess of authorisation, in order to obtain classified United States defence or foreign relations information with the intent or reason to believe that such information would be used to harm the United States or to advantage a foreign nation.
The act also attempted to protect financial data. Attempted access to obtain information from financial records of a financial institution or in a consumer file of a credit reporting agency was also outlawed. Access to use, destroy, modify or disclose information found in a computer system, (as well as to prevent authorised use of any computer used for government business) was also made illegal. The 1984 Act had several shortcomings, and was revised in The Computer Fraud and Abuse Act of 1986.
Three new crimes were added to the 1986 Act. These were a computer fraud offence, modelled after federal mail and wire fraud statutes; an offence for the alteration, damage or destruction of information contained in a "federal interest computer"; and an offence for trafficking in computer passwords under some circumstances.
Even the knowing and intentional possession of a sufficient amount of counterfeit or unauthorised "access devices" is illegal. This statute has been interpreted to cover computer passwords "which may be used to access computers to wrongfully obtain things of value, such as telephone and credit card services."
Remedies and Law Enforcement
Business crimes of all types are probably decreasing as a direct result of increasing automation. When a business activity is carried out with computer and communications systems, data are better protected against modification,
destruction, disclosure, misappropriation, misrepresentation, and contamination. Computers impose a discipline on information workers and facilitate use of almost perfect automated controls that were never possible when these had to be applied by the workers themselves under management edict. Computer hardware and software manufacturers are also designing computer systems and programs that are more resistant to tampering.
Recent U.S. legislation, including laws concerning privacy, credit card fraud and racketeering, provides criminal-justice agencies with tools to fight business crime. As of 1988, all but two states had specific computer-crime laws, and a federal computer-crime law (1986) deals with certain crimes involving computers in different states and in government activities.
Conclusion
There are no valid statistics about the extent of computer crime. Victims often resist reporting suspected cases, because they can lose more from embarrassment, lost reputation, litigation, and other consequential losses than from the acts themselves. Limited evidence indicates that the number of cases is rising each year because of the increasing number of computers in business applications where crime has traditionally occurred. The largest recorded crimes involving insurance, banking, product inventories, and securities have resulted in losses of tens of millions to billions of dollars and all these crimes were facilitated by computers.
Bibliography
Bequai, August, Techno Crimes (1986).
Mungo, Paul, and Clough, Bryan, Approaching Zero: The Extraordinary Underworld of Hackers, Phreakers, Virus Writers, and Keyboard Criminals (1993).
Norman, Adrian R. D., Computer Insecurity (1983).
Parker, Donn B., Fighting Computer Crime (1983).
Dodd S. Griffith, The Computer Fraud and Abuse Act of 1986: A Measured
Response to a Growing Problem, 43 Vand. L. Rev. 453, 455 (1990).
f:\12000 essays\sciences (985)\Computer\Computer Crime 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Crime
by: Manik Saini
Advances in telecommunications and in computer technology have brought us to the information revolution. The rapid advancement of telephone, cable, satellite and computer networks, combined with technological breakthroughs in computer processing speed and information storage, has led us to the latest revolution, and also to the newest style of crime, "computer crime". The following information will provide you with evidence that, beyond reasonable doubt, computer crime is on the increase in the following areas: hacking, hardware theft, software piracy and the information highway. This information is gathered from expert sources such as researchers, journalists, and others involved in the field.
Computer crimes are often in the news. When bank robbers are asked why they rob banks, the reply is, "Because that's where the money is." Today's criminals have learned where the money is. Instead of settling for a few thousand dollars in a bank robbery, those with enough computer knowledge can walk away from a computer crime with many millions. The National Computer Crimes Squad estimates that between 85 and 97 percent of computer crimes are not even detected. Fewer than 10 percent of all computer crimes are reported, mainly because organizations fear that their employees, clients, and stockholders will lose faith in them if they admit that their computers have been attacked. And few of the crimes that are reported are ever solved.
Hacking was once a term used to describe someone with a great deal of knowledge about computers. Since then the definition has changed considerably. In every neighborhood there are criminals, so you could say that hackers are the criminals of the computer world. There has been a great increase in the number of computer
break-ins since the Internet became popular.
How serious is hacking? In 1989, the Computer Emergency Response Team, an organization that monitors computer security issues in North America, said that it had handled 132 cases involving computer break-ins. In 1994 alone it handled some 2,341 cases, an increase of roughly 1,700 percent in just 5 years. An example is 31-year-old computer expert Kevin Mitnick, who was arrested by the FBI for stealing more than $1 million worth of data and about 20,000 credit card numbers through the Internet. In Vancouver, the RCMP arrested a teenager for breaking into a university computer network. There have been many cases of computer hacking; another took place here in Toronto, when Adam Shiffman was charged with nine counts of fraudulent use of computers and eleven counts of mischief to data, which together carry a maximum sentence of 10 years in jail.
After reading the above information, we see that hacking has been on the increase. With hundreds of cases every year dealing with hacking, this is surely a problem, and a problem that is increasing very quickly.
Ten years ago hardware theft was almost impossible because of the size and weight of the computer components. Also, computer components were expensive, so many companies would have security guards to protect them from theft. Today this is no longer the case; computer hardware theft is on the increase.
Since the invention of the microchip, computers have become much smaller and easier to steal, and now, with portable and laptop computers that fit in your briefcase, it's even easier. While illegal high-tech information hacking gets all the attention, it's computer hardware theft that has become the latest in corporate crime. Access to valuable equipment has skyrocketed and black-market demand for parts has increased. In factories, components are stolen from assembly lines for underground resale to distributors. In offices, entire systems are snatched from desktops by individuals seeking to install a home PC. In 1994, Santa Clara, Calif., recorded 51 burglaries. That number doubled in just the first six months of 1995. Gunmen robbed workers at an Irvine, Calif., computer parts company, stealing $12 million worth of computer chips. At a large advertising agency in London, thieves came in over a weekend and took 96 workstations, leaving the company to recover from an $800,000 loss. A Chicago manufacturer had computer parts stolen from the back of a delivery van as he was waiting to enter the loading dock. It took less than two minutes for the doors to open, but that was enough time for thieves to get away with thousands of computer components.
Hardware theft has surely become a problem in the last few years; with cases popping up each day, we see that hardware theft is on the increase.
As the network of computers gets bigger, so will the number of software thieves. Electronic software theft over the Internet and other online services costs US software companies about $2.2 billion a year. The Business Software Alliance surveyed a number of countries in 1994, resulting in piracy estimates for 77 countries totaling more than $15.2 billion in losses. Dollar loss estimates due to software piracy in the 54 countries surveyed last year show an increase of $2.1 billion, from $12.8 billion in 1993 to $14.9 billion in 1994. An additional 23 countries surveyed this year bring the 1994 worldwide total to $15.2 billion.
Such figures show that software piracy is on the increase.
Many say that the Internet is great, and that is true, but there is also a bad side of the Internet that is hardly ever noticed. Crime on the Internet is increasing dramatically. Many say that copyright law, privacy law, broadcasting law and laws against spreading hatred mean nothing there. There are many different kinds of crime on the Internet, such as child pornography, credit card fraud, software piracy, invasion of privacy and the spreading of hatred.
There have been many cases of child pornography on the Internet, mainly because people find it very easy to transfer images over the Internet without getting caught. Child pornography on the Internet has more than doubled since 1990; an example of this is Alan Norton of Calgary, who was charged with being part of an international porn ring.
Credit card fraud has caused many problems for people and for corporations that have credit information in their databases. With banks going on-line in the last few years, criminals have found ways of breaking into databases and stealing thousands of credit card numbers and information on their clients. In the past few years, thousands of clients have reported millions of transactions made on credit cards that they knew nothing about.
Invasion of privacy is a real problem with the Internet, and it is one of the things that turns many people away from it. With hacking sites now on the Internet, it is easy to download electronic mail (e-mail) readers that allow you to hack servers and read incoming mail belonging to others. Many sites now offer these e-mail readers, and since their appearance invasions of privacy have increased.
Spreading hatred has also become a problem on the Internet. This information can be easily accessed by going to any search engine for example http://www.webcrawler.com and searching for "KKK" and this will bring up thousands of sites that contain information on the "KKK". As we can see with the freedom on the Internet, people can easily incite hatred over the Internet.
From this information we can see that crime of all kinds is going on over the Internet.
The information above provides ample proof that computer crime is on the increase in many areas, including hacking, hardware theft, software piracy and the Internet. Hacking appears in the news every day, and big corporations are often the victims. Hardware theft has become more popular because of the value of computer components. Software piracy is a huge problem, with roughly $15 billion lost each year. Finally, the Internet is both good and bad, but with credit card fraud and child pornography going on, there is a lot more bad than good. Computer crime is on the increase, and something must be done to stop it.
f:\12000 essays\sciences (985)\Computer\Computer Crime 5.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Crime
Computer crimes need to be prevented and halted through increased computer network security measures as well as tougher laws and enforcement of those laws in cyberspace.
Computer crime is generally defined as any crime accomplished through special knowledge of computer technology. All that is required is a personal computer, a modem, and a phone line. Increasing instances of white-collar crime involve computers as more businesses automate and information becomes an important asset. Computers are objects of crime when they or their contents are damaged, as when terrorists attack computer centers with explosives or gasoline, or when a "computer virus"--a program capable of altering or erasing computer memory--is introduced into a computer system. As subjects of crime, computers represent the electronic environment in which frauds are programmed and executed; an example is the transfer of money balances in accounts to perpetrators' accounts for withdrawal. Computers are instruments of crime when used to plan or control such criminal acts as complex embezzlements that might occur over long periods of time, or when a computer operator uses a computer to steal valuable information from an employer.
Computers have been used for most kinds of crime, including fraud, theft, larceny, embezzlement, burglary, sabotage, espionage, murder, and forgery, since the first cases were reported in 1958. One study of 1,500 computer crimes established that most of them were committed by trusted computer users within businesses: persons with the requisite skills, knowledge, access, and resources. Much of known computer crime has consisted of entering false data into computers, which is simpler and safer than the complex process of writing a program to change data already in the computer. With the advent of personal computers to manipulate information and access computers by telephone, increasing numbers of crimes--mostly simple but costly electronic trespassing, copyrighted-information piracy, and vandalism--have been perpetrated by computer hobbyists, known as "hackers," who display a high level of technical expertise. For many years, the term hacker defined someone who was a wizard with computers and programming. It was an honor to be considered a hacker. But when a few hackers began to use their skills to break into private computer systems and steal money, or interfere with the system's operations, the word acquired its current negative meaning. Organized professional criminals have been attacking and using computer systems as they find their old activities and environments being automated.
There are few valid statistics about the extent and results of computer crime. Victims often resist reporting suspected cases, because they can lose more from embarrassment, lost reputation, litigation, and other consequential losses than from the acts themselves. Limited evidence indicates that the number of cases is rising each year, because of the increasing number of computers in business applications where crime has traditionally occurred. The largest recorded crimes involving insurance, banking, product inventories, and securities have resulted in losses of tens of millions to billions of dollars--all facilitated by computers. Conservative estimates have quoted $3 billion to $100 billion as yearly losses due to computer hackers. These losses are growing roughly in step with the number of computers connected to networks, which is increasing at an almost exponential rate. The seriousness of cybercrimes also increases as our dependency on computers becomes greater and greater.
Crimes in cyberspace are becoming more and more popular for several reasons. The first is that computers are becoming more and more accessible, and are thus just another tool in the criminal's arsenal. The other reason computer crimes are becoming more common is that they can be very profitable. The average computer crime nets a total of $650,000 (in 1991 American dollars), more than seventy-two times the take of the average bank robbery.
Today's techno-bandits generally fall into one of three groups, listed in order of the threat they pose:
1. Current or former computer operations employees.
2. Career criminals who use computers to ply their trade.
3. The hacker.
Outsiders who break into computer systems are sometimes more of a threat, but employees and ex-employees are usually in a better position to steal. Because we rely more and more on computers, we also depend on those who make and run them. The U.S. Bureau of Labor Statistics projects that the fastest growing employment opportunities are in the field of computers and data processing. Since money is a common motive for those who use their computing know-how to break the law, losses from computer theft are expected to grow as the number of computer employees rises.
The following are examples of how employees who work on computers can profit at their employer's expense:
In 1980, two enterprising ticket agents for TransWorld Airlines (TWA) discovered how to make their employer's computer work for them. The scam went like this: when a passenger used cash to pay for a one-way ticket, Vince Giovengo sent in the credit change form, which should have been discarded, and kept the receipt that should have been given to the customer for paying cash. Samuel Paladina, who helped board passengers, kept the part of the traveler's ticket that should have been returned to the customer. The two agents used computers to reassemble the ticket from the pieces they had, marked the ticket void, and kept the cash the traveler had paid. The swindle was finally discovered by another employee who questioned the large number of voided tickets, but only after approximately $93,000 had been taken.
The two TWA employees were tried in the United States and convicted of federal wire fraud. They each received six months in prison for their crime. The penalty should have been much, much higher in order to deter the use of computers in future crimes. This is true not just for the United States and Canada, but for every country in the world.
Another computer heist, one of the largest ever, involved several highly placed employees of the Volkswagen car company of West Germany. In 1987 the company discovered that these "loyal" workers had managed to steal $260 million by reprogramming the computers to disguise the company's foreign currency transactions. The workers who committed the crime received 10 years in Germany's prison system, not nearly as harsh a sentence as if they had stolen the money through non-computerized means. This sets an example that computer crimes are easy to execute and punished very lightly, and it will feed a downward spiral leading to more and more computer crimes.
For career criminals, computers represent a new medium for their illegal actions. Computers only enhance the speed and scale of the crimes. Professional criminals can now steal, or commit almost any other crime they want, simply by typing directions into a computer. Computers are quickly being added to the list of the tools of crime.
Hackers often act in groups. The actions of several groups of hackers, most notably the Masters of Deception (MOD) and the Legion of Doom, have been exposed in the media recently. These groups, and most malicious hackers, are involved in computer crime for the profit available to them.
Individual hackers are often male teenagers. They make up the majority of computer criminals, and they do pose a major threat to society's computer users.
Computer criminals have various reasons for doing what they do. The main reason computer crimes are committed on such a large scale is profit. As stated earlier, the average computer crime nets more than seventy-two times the take of the average bank robbery. Many skilled computer operators see this as an opportunity to make a quick profit. Cybercrimes are often committed by people who would not commit any other type of crime, which shows that the chances of getting caught and punished are perceived as very low, a perception that must be changed.
Some hackers feel that it is their social responsibility to keep cyberspace a free domain, without authorities. This is accomplished by sharing information and removing the concept of property in cyberspace. They therefore feel that it is proper to take information and share it. In their minds they have committed no crime, but in the victim's eyes they deserve to be punished.
Other hackers see it as a challenge to read others' files and find out how far they can penetrate into a system. It is pure enjoyment for these hackers to explore a new computer network; gaining access to strange, new networks becomes a challenge. They often argue that they are only curious and cause no harm by merely exploring, but that is not always the case.
Where did my homework files go? Who is making charges to my credit card? It sounds like someone is out for revenge. Computers have become a modern-day tool for seeking it. Here is an example: a computer system operator was fired from CompuServe (a major Internet provider), and by the next day his former manager's credit card numbers had been distributed to thousands of people via electronic bulletin boards, the manager's telephone account had been charged with thousands of long-distance phone calls, and his driver's license had been issued hundreds of unpaid tickets. This shows the awesome power of a knowledgeable hacker.
Also, these hackers try to maintain free services; some of these include Internet access and long distance telephone access.
Banks and brokerage houses are major targets when stealing money is the objective, because of their increased reliance on electronic funds transfer (EFT). Using EFT, financial institutions and federal and provincial governments pass billions of dollars of funds and assets back and forth over the phone lines every day. The money is moved from bank to bank or account to account by sending telephone messages between computers. In the old days, B.[efore] C.[omputers], transferring money usually involved armored cars and security guards. Today, computer operators simply type in the appropriate instructions and funds are zipped across telephone lines from bank A to bank B. These messages can be intercepted by hackers and used to obtain credit card numbers, ATM and personal identification numbers, as well as the actual money being transferred. With the ATM and credit card numbers, they have access to all the money in the corresponding accounts.
The act of changing data going into a computer or during the output from the computer is called "data diddling". One New Jersey bank suffered a $128,000 loss when the manager of computer operations made some changes in the account balances. He transferred the money to accounts of three of his friends.
"Salami slicing" is a form of data diddling that occurs when an employee steals small amounts of money from a large number of sources though the electronic changing of data (like slicing thin pieces from a roll of salami). For example, in a bank, the interest paid into accounts may routinely be rounded to the nearest cent. A dishonest computer programer may change the program so that all the fractions of the cents left over go into his account. This type of theft is hard to detect because the books balance. The sum of money can be extremely large, when taken from thousands of accounts over a period of time.
"Phone Phreaks" were the first hackers. They are criminals that break into the telephone system though many various mean to gain free access to the telephone network. Since telephone companies use large and powerful computers to route their calls, they are an oblivious target to hackers.
Stealing information in the form of software (computer programs) is also illegal. Making copies of commercial software for resale or to give to others is a crime, and this type of crime represents the fastest growing area of computer crime. In one case, a group of teenagers pretending to be a software firm sold $350,000 worth of stolen software to a Swiss electronics company.
While most hackers claim curiosity and a desire for profit as motives for cracking computer systems, a few "dark-side hackers" seem to intentionally harm others. For these individuals, computers are convenient tools of wickedness. One crazed hacker broke into the North American Air Defense computer system and the U.S. Army's MASNET computer network. While he was browsing the files, officials say, he had the ability to launch missiles at the USSR. This could have led to a nuclear war, and possibly the destruction of the world.
There are more than 1,200 of these bugs out there, and the infections they spread put the victim out of action until the healing process begins (if there is a healing process). This may sound like a description of the common cold or flu, except that this virus does not attack people. This bug is made by human hands, and it attacks computers. It is spread through shared software, almost as easily as a sneeze, and it can be every bit as debilitating as the flu. Around the world on March 6, 1992, computer users reported for work only to find that their computers didn't work. The machines had "crashed" because of Michelangelo, a computer virus set to destroy all infected computers on the Renaissance artist's 517th birthday. Approximately 10,000 computers were hit worldwide. The virus disabled the computers, causing millions of dollars' worth of down-time and lost data.
Computer crimes are becoming more and more dangerous. New laws and methods of enforcement need to be created; the evidence is above. An effort is being made by governments, but it is not enough. The problem is an international affair, and should be treated as such.
Current Canadian laws are among the most lenient in industrialized countries, and they were put in place much later than the computer crime laws of countries such as the United States and Japan. The Criminal Law Amendment Act, 1985 included a number of specific computer crime-related offences, so for the first time Canadian law enforcement agencies can lay charges relating to cybercrime. The following text is an excerpt from the Martin's Annual Canadian Criminal Code, 1995 edition:
326. (1) Every one commits theft who fraudulently, maliciously, or without colour of right,
(b) uses any telecommunication facility or obtains any telecommunication service.
(2) In this section and section 327, "telecommunication" means any transmission, emission or reception of signs, signals, writing, images or sounds or intelligence of any nature by wire, radio, visual, or any other electro-magnetic system.
342.1 (1) Every one who, fraudulently and without colour of right,
(a) obtains, directly or indirectly, any computer service,
(b) by means of an electro-magnetic, acoustic, mechanical or other device, intercepts or causes to be intercepted, directly or indirectly, any function of a computer system, or
(c) uses or causes to be used, directly or indirectly, a computer system with intent to commit an offence under paragraph (a) or (b) or an offence under section 430 in relation to data or a computer system
is guilty of an indictable offence and liable to imprisonment for a term not exceeding ten years, or is guilty of an offence punishable on summary conviction.
430. (1.1) Every one commits mischief who willfully
(a) destroys or alters data;
(b) renders data meaningless, useless or ineffective;
(c) obstructs, interrupts or interferes with any person in the lawful use of data; or
(d) obstructs, interrupts or interferes with the lawful use of data or denies access to data to any person who is entitled to access
thereto.
These Canadian laws are already outdated, and they are only eleven years old. They need to be amended to include stiffer penalties. At the time of their creation in 1985 they were deemed adequate, because computer crimes were not seen as a serious issue with far-reaching effects. In 1996, computer crime has become a damaging and dangerous part of life. It is now necessary to revamp these laws to include young offenders.
The young people committing some of these crimes have very detailed knowledge of computer systems and computer programming. If they can handle this type of knowledge and commit these crimes, they should be able to foresee the consequences of their actions. Most young hackers feel that they are bright, and should therefore be able to understand the effect of their actions on other people's computers and computer systems. The laws should treat these young offenders like adults, because they realize that what they are doing is wrong and should suffer the consequences.
Some of the computer crimes listed in the Criminal Code are only summary offences, and thus are not considered very serious. This sends hackers the message that the crimes are not serious, when they are. Since the hackers don't view the crimes as serious, they are likely to commit more of them. If the consequences of breaking any law referring to computer crime were made tougher, hackers would realize that what they are doing is wrong. They would also see other hackers being charged with offences under the Criminal Code and figure out that they may be next on the list to be punished for their actions.
Not only do these laws need to be made tougher, they need to be enforced consistently. The authorities from all countries must hold a conference to discuss the need for consistent enforcement of the laws referring to computer crime, because computer crimes are truly international. A hacker in Canada may break into a bank in Switzerland. Does the criminal get punished under the laws of Canada or the laws of Switzerland? This needs to be decided.
The authorities must mount special operations to stop computer crimes, just as they do for drug trafficking. Much time must be devoted to stopping these crimes before they lead to disaster. The problem is getting out of hand, and the public must actively cooperate with the authorities in order to bring it under control.
Security is one matter that we can take into our own hands. Until new laws are created and enforced, it is up to the general computer-using public to protect themselves. The use of passwords, secure access multiports and common sense can prevent computer crimes making victims of us all.
Passwords can add a remarkable amount of security to a computer system; cases where passwords have been cracked are rare. The password program must also include an access restriction feature that limits the number of guesses at a password to only a few tries, thus effectively eliminating 98% of intruders.
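A minimal sketch of that kind of access restriction, assuming a simple interactive login; the three-try limit and the stored secret are arbitrary choices made up for illustration.

    import getpass
    import hmac

    MAX_ATTEMPTS = 3        # arbitrary limit; a real system might also add delays
    STORED = "s3cret"       # illustrative only; real systems store a salted hash

    def login():
        for _ in range(MAX_ATTEMPTS):
            guess = getpass.getpass("Password: ")
            if hmac.compare_digest(guess, STORED):   # constant-time comparison
                print("Access granted.")
                return True
        print("Too many attempts -- account locked.")
        return False

    if __name__ == "__main__":
        login()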
Secure access multiports (SAMs) are the best protection against computer crime for a computer user. When joining a network, the user calls in and enters a password and access code, and is then disconnected. If the network recognizes the user as valid, it calls the user back and allows him or her to log on; if the user is invalid, the network does not attempt to reconnect (see figure 1). This prevents unwanted persons from logging on to a network.
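The call-back logic itself is simple; a rough sketch, assuming a small table of authorized users and stand-in hang_up() and dial_back() routines, might look like this:

    # Toy model of a secure access multiport: validate the caller, hang up,
    # then dial the number on file.  The user table and phone routines here
    # are invented stand-ins, not a real telephony interface.
    AUTHORIZED = {"jsmith": {"code": "4412", "callback": "555-0100"}}

    def hang_up():
        print("Disconnecting the incoming call...")

    def dial_back(number):
        print(f"Dialling registered number {number} and opening the session.")

    def handle_incoming_call(user, code):
        hang_up()                                  # always disconnect first
        entry = AUTHORIZED.get(user)
        if entry and entry["code"] == code:
            dial_back(entry["callback"])           # valid user: call them back
        # an invalid caller is simply never called back

    handle_incoming_call("jsmith", "4412")

Because the connection is always made outward to a number already on file, a stolen password alone is not enough to get a session.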
Common sense is often the best defense against computer crime: simple things like checking for and disinfecting viruses on a regular basis, not sharing your password, and not giving out your credit card number on online services such as the Internet. Employers can also restrict the access employees have to computers at the place of employment, which would prevent most computer crimes committed by employees.
If new laws and enforcement of those laws are not soon established, along with heightened security measures, the world will suffer a major catastrophe as a result of computer activity. The world is becoming increasingly dependent on computers, and the crimes committed will have greater and greater impact as that dependency rises. One computer crime has already brought the world close to disaster: the United States defense computer system was broken into, and the opportunity existed for the hacker to start an intercontinental nuclear war, which could have meant the death of the human race. Another event like this is likely to occur if laws, enforcement of those laws and the security of computers are not beefed up. The greatest creation of all time, the computer, should not lead to the destruction of the race that created it.
f:\12000 essays\sciences (985)\Computer\Computer Crime in the 90s.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Crime In The
1990's
We're being ushered into the digital frontier, a cyberland with incredible promise and untold dangers. Are we prepared? It's a battle between modern-day computer cops and digital hackers. Just think about what is controlled by computer systems: virtually everything.
By programming a telephone voice mail system to repeat the word yes over and over again, a hacker has beaten the system. The hacker of the 1990's is increasingly organized, very clear about what he is looking for, and very, very sophisticated in his methods of attack. As hackers have become more sophisticated and more destructive, governments, phone companies and businesses are struggling to defend themselves.
Phone Fraud
In North America the telecommunications industry estimates that long distance fraud costs five hundred million, perhaps up to a billion dollars, every year; the exact figures are hard to pin down, and some put the cost of phone fraud committed by computer hackers in North America alone at three, four, maybe even five billion dollars a year. Making an unwitting company pay for long distance calls is the most popular form of phone fraud today. The first step is to gain access to a private automated branch exchange, known as a "PABX" or "PBX"; one of these can be found in any company with twenty or more employees. A PABX is a computer that manages the phone system, including its voice mail. Once inside a PABX, a hacker looks for a phone whose voice mail has not yet been programmed, cracks its access code, and programs its voice mail account to accept charges for long distance calls. Until the authorities catch on, which may take a few days, hackers can use such voice mail accounts to make free and untraceable calls all over the world.
The hackers who commit this type of crime are becoming increasingly organized. Known as "call cell operators," they set up fly-by-night storefronts where people off the street can come in and make long distance calls at a large discount. For the call cell operators, of course, the calls cost nothing: by hacking into a PABX system they can put all the charges on the victimized company's tab. With a set of stolen voice mail access codes, known as "good numbers," hackers can crack into another phone whenever a company disables the one they have been using. In some cases call cell operators have run up hundreds of thousands of dollars in long distance charges, driving businesses straight into bankruptcy.
Hacking into a PABX is not as complicated as some people seem to think. The typical scenario is an individual with a "demon dialer" hooked up to a personal computer at home, which does not need to be a high-powered machine at all, simply connected through a modem to a telephone line. A demon dialer is a software program that automatically calls thousands of phone numbers, looking for and recording dialtone from lines connected to computers; it is a basic hacker tool that can be downloaded from the Internet, and such programs are extremely easy to use. The goal is to acquire dialtone, which lets the hacker move freely through the telephone network. The business is also generally getting more sinister: a criminal element is now involved, tied to drugs, money laundering and other crimes. These people are very careful to hide their calling patterns, so they will hire hackers to get codes for them, allowing them to dial from several different locations without being detected.
The world's telephone network is a vast maze with many places to hide, but once a hacker is located the phone company and police can track his every move. The way they keep track is by means of a device called a "DNR," or dial number recorder. This device monitors the dialing patterns of a suspected hacker: it lists all the numbers dialed from his location, the duration of each telephone call and the time of disconnection. The process of catching a hacker begins at the phone company's central office, where thousands of lines converge on a mainframe computer; technicians can locate the exact line that leads to a suspected hacker's phone at the touch of a button. With the DNR the "computer police" can retrieve the numbers called and establish why each call was made, and if it was made with illegal intent they will take action. The offender can be put in prison for up to five years and fined up to $7,500.
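In software terms a DNR is little more than a call log; a toy sketch in Python (the record fields and the sample entry below are invented for illustration, not taken from real equipment) could be as simple as:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CallRecord:
        number: str             # number dialled from the monitored line
        connected: datetime     # when the call was placed
        disconnected: datetime  # when the call ended

        @property
        def duration(self):
            return self.disconnected - self.connected

    log = []

    def record_call(number, connected, disconnected):
        log.append(CallRecord(number, connected, disconnected))

    # hypothetical entry for a suspect line
    record_call("011-44-171-555-0123",
                datetime(1996, 3, 1, 2, 15), datetime(1996, 3, 1, 3, 40))
    for call in log:
        print(call.number, call.connected, call.duration)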
The telephone network is a massive electronic network that depends on thousands of computer-run software programs, and all of this software can, in theory, be reprogrammed for criminal use. The telephone system is, in other words, a potentially vulnerable system: by cracking the right codes and inputting the correct passwords a hacker can sabotage a switching system for millions of phones, paralyzing a city with a few keystrokes.
Security experts say telephone terrorism poses a threat society hasn't even begun to fathom. People are hacking into systems all the time. Groups in the U.S.A. in 1993 shut down three of the four telephone switch stations on the east coast; if they had shut down the final switch station as well, the whole east coast would have been without phones. Things of this nature can happen and have happened in the past. In the old days you had mechanical switches doing crossbars and things of that nature; today all telephone switches are computerized, and they are everywhere. A computer switch is exactly what the name says: a switch being operated by a computer. The computer is connected to a modem, and so are you and all the hackers, and therefore you too can run the switches.
Our generation is the first to travel within cyberspace, a virtual world that exists within all the computers that form the global net. For most people today cyberspace is still a bewildering and alien place; how computers work and how they affect our lives is still a mystery to all but the experts, but expertise does not guarantee morality. Originally the word hacker meant a computer enthusiast, but now that the Internet has revealed its potential for destruction and profit, the hacker has become the outlaw of cyberspace. Not only do hackers commit crimes that cost millions of dollars, they also publicize their illegal techniques on the net, where innocent minds can find them and be seduced by the allure of power and money. This vast electronic neighborhood of bits and bytes has stretched the concepts of law and order. Like handbills stapled to telephone poles, the Internet appears to defy regulation, and the subtleties and nuances of this relatively new medium have turned "right and wrong" into a gray area. Most self-described hackers say they have been given a bad name and deserve more respect. For the most part, they say, hackers abide by the law, and when they do steal a password or break into a network they are motivated by a desire for knowledge, not by malicious intent. Teenagers are especially attracted by the idea of getting something for nothing.
When system managers try to explain to hackers that it is wrong to break into computer systems, there is little point, because hackers with the aid of a computer possess tremendous power; they cannot be controlled, and they have the ability to break into any computer system they feel like. But suppose one day a hacker decides to break into a system owned by a hospital, and this computer is in charge of programming the therapy for a patient there. If the hacker inputs the incorrect code, the therapy can be interfered with and the patient may be seriously hurt, even though this was not done deliberately. These are the kinds of circumstances that give hackers a bad reputation. Today anyone with a computer and a modem can enter millions of computer systems around the world. On the net they say bits have no boundaries: a hacker half way around the world can steal passwords and credit card numbers, break into computer systems and plant crippling viruses as easily as if he were just around the corner. The global network allows hackers to reach out and rob distant people with lightning speed.
If cyberspace is a type of community, a giant neighborhood made up of networked computer users around the world, then it seems natural that many elements of traditional society can be found taking shape as bits and bytes. With electronic commerce come electronic merchants, plugged-in educators provide networked education, and doctors meet with patients in offices on-line. It should come as no surprise that there are also cybercriminals committing cybercrimes.
As an unregulated hodgepodge of corporations, individuals, governments, educational institutions, and other organizations that have agreed in principle to use a standard set of communication protocols, the internet is wide open to exploitation. There are no sheriffs on the information highway waiting to zap potential offenders with a radar gun or search for weapons if someone looks suspicious. By almost all accounts, this lack of "law enforcement" leaves net users to regulate each other according to the reigning norms of the moment. Community standards in cyberspace appear to be vastly different from the standards found at the corner of Markham and Lawrence. Unfortunately, cyberspace is also a virtual tourist trap where faceless, nameless con artists can work the crowds.
Mimicking real life, crimes and criminals come in all varieties on the internet. The FBI's National Computer Crime Squad is dedicated to detecting and preventing all types of computer-related crimes. Some of the issues being carefully studied by everyone from net veterans to law enforcement agencies include:
Computer Network Break-Ins
Using software tools installed on a computer in a remote location, hackers can break into computer systems to steal data, plant viruses or trojan horses, or work mischief of a less serious sort by changing user names or passwords. Network intrusions have been made illegal by the U.S. federal government, but detection and enforcement are difficult.
Industrial Espionage
Corporations, like governments, love to spy on the enemy. Networked systems provide new opportunities for this, as hackers-for-hire retrieve information about product development and marketing strategies, rarely leaving behind any evidence of the theft. Not only is tracing the criminal labor-intensive, convictions are hard to obtain when laws are not written with electronic theft in mind.
Software Piracy
According to estimates by the U.S. Software Publishers Association, as much as $7.5 billion of American software may be illegally copied and distributed worldwide. These copies work as well as the originals, and sell for significantly less money. Piracy is relatively easy, and usually only the largest rings of distributors are likely to serve hard jail time when prisons are overcrowded with people convicted of more serious crimes.
Child Pornography
This is one crime that is clearly illegal, both on and off the internet. Crackdowns may catch some offenders, but there are still ways to acquire images of children in varying stages of dress and performing a variety of sexual acts. Legally speaking, people who provide access to child porn face the same charges whether the images are digital or on a piece of paper. Trials of network users arrested in a recent FBI bust may challenge the validity of those laws as they apply to online services.
Mail Bombings
Software can be written that will instruct a computer to do almost anything, and terrorism has hit the internet in the form of mail bombings. By instructing a computer to repeatedly send mail (e-mail) to a specified person's e-mail address, the cybercriminal can overwhelm the recipient's personal account and potentially shut down entire systems. This may not be illegal, but it is certainly disruptive.
Password Sniffers
Password sniffers are programs that monitor and record the name and password of network users as they log in, jeopardizing security at a site. Whoever installs the sniffer can then impersonate an authorized user and log in to access restricted documents. Laws are not yet adequate to prosecute a person for impersonating another person on-line, but laws designed to prevent unauthorized access to information may be effective in apprehending hackers using sniffer programs. The Wall Street Journal suggests in recent reports that hackers may have sniffed out passwords used by members of America On-line, a service with more than 3.5 million subscribers. If the reports are accurate, even the president of the service found his account security jeopardized.
Spoofing
Spoofing is the act of disguising one computer to electronically "look" like another computer in order to gain access to a system that would normally be restricted. Legally, this can be handled in the same manner as password sniffing, but the law will have to change if spoofing is going to be addressed with more than a quick-fix solution. Spoofing was used to access valuable documents stored on a computer belonging to security expert Tsutomu Shimomura.
Credit Card Fraud
The U.S. Secret Service believes that half a billion dollars may be lost annually by customers who have credit card and calling card numbers stolen from on-line databases. Security measures are improving, and traditional methods of law enforcement seem to be sufficient for prosecuting the thieves of such information. Bulletin boards and other on-line services are frequent targets for hackers who want to access large databases of credit card information. Such attacks usually result in the implementation of stronger security systems.
Since there is no single widely-used definition of computer-related crime, computer network users and law enforcement officials must distinguish between illegal or deliberate network abuse and behavior that is merely annoying. Legal systems everywhere are busily studying ways of dealing with crimes and criminals on the internet.
TABLE OF CONTENTS
PHONE FRAUD
NETWORK BREAK-INS
INDUSTRIAL ESPIONAGE
SOFTWARE PIRACY
CHILD PORNOGRAPHY
MAIL BOMBING
PASSWORD SNIFFING
SPOOFING
CREDIT CARD FRAUD
f:\12000 essays\sciences (985)\Computer\Computer Crime.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A young man sits illuminated only by the light of a computer screen. His
fingers dance across the keyboard. While it appears that he is only word processing or
playing a game, he may be committing a felony.
In the state of Connecticut, computer crime is defined as:
53a-251. Computer Crime
(a) Defined. A person commits computer crime when he violates any of the
provisions of this section.
(b) Unauthorized access to a computer system. (1) A person is guilty of the
computer crime of unauthorized access to a computer system when, knowing that he is not
authorized to do so, he accesses or causes to be accessed any computer system without
authorization...
(c) Theft of computer services. A person is guilty of the computer crime of
theft of computer services when he accesses or causes to be accessed or otherwise uses or
causes to be used a computer system with the intent to obtain unauthorized computer
services.
(d) Interruption of computer services. A person is guilty of the computer
crime of interruption of computer services when he, without authorization, intentionally or
recklessly disrupts or degrades or causes the disruption or degradation of computer services
or denies or causes the denial of computer services to an authorized user of a computer
system.
(e) Misuse of computer system information. A person is guilty of the computer
crime of misuse of computer system information when: (1) As a result of his accessing or
causing to be accessed a computer system, he intentionally makes or causes to be made an
unauthorized display, use, disclosure or copy, in any form, of data residing in,
communicated by or produced by a computer system.
Penalties for committing computer crime range from a class B misdemeanor to a
class B felony. The severity of the penalty is determined based on the monetary value of
the damages inflicted. (2)
The law has not always had much success stopping computer crime. In 1990 there
was a nationwide crackdown on illicit computer hackers, with arrests, criminal
charges, one dramatic show-trial, several guilty pleas, and huge confiscations of data and
equipment all over the USA.
The Hacker Crackdown of 1990 was larger, better organized, more deliberate, and
more resolute than any previous efforts. The U.S. Secret Service, private telephone
security, and state and local law enforcement groups across the country all joined
forces in a determined attempt to break the back of America's electronic underground. It was
a fascinating effort, with very mixed results.
In 1982, William Gibson coined the term "Cyberspace". Cyberspace is defined as
"the 'place' where a telephone conversation appears to occur. Not inside your actual
phone, the plastic device on your desk... The place between the phones. The indefinite
place out there." (1, p. 1)
The words "community" and "communication" share the same root. Wherever one
allows many people to communicate, one creates a community. "Cyberspace" is as much of a
community as any neighborhood or special interest group. People will fight more to defend
the communities that they have built than they would fight to protect themselves.
This two-sided fight truly began when the AT&T telephone network crashed on January 15,
1990.
The crash occurred due to a small bug in AT&T's own software. It began with a
single switching station in Manhattan, New York, but within ten minutes the domino
effect had brought down over half of AT&T's network. The rest was overloaded, trying to
compensate for the overflow.
This crash represented a major corporate embarrassment. Sixty thousand people
lost their telephone service completely. During the nine hours of effort that it took to
restore service, some seventy million telephone calls went uncompleted.
Because of the date of the crash, Martin Luther King Day (the most politically
touchy holiday), and the absence of a physical cause of the destruction, AT&T did not
find it difficult to rouse suspicion that the network had not crashed by itself- that it
had been crashed intentionally, by people the media has called hackers.
Hackers define themselves as people who explore technology. If that technology
takes them outside of the boundaries of the law, they will do very little about it.
True hackers follow a "hacker's ethic", and never damage systems or leave electronic
"footprints" where they have been.
Crackers are hackers who use their skills to damage other people's systems or
for personal gain. These people, mistakenly referred to as hackers by the media, have been
sensationalized in recent years.
Software pirates, or warez dealers, are people who traffic in pirated software
(software that is illegally copied and distributed). These people are usually looked
down on by the more technically sophisticated hackers and crackers.
Another group of law-breakers that merit mentioning are the phreakers.
Telephone phreaks are people that experiment with the telephone network. Their main
goal is usually to receive free telephone service, through the use of such devices as
homemade telephone boxes. They are often much more extroverted than their computer
equivalents. Phreaks have been known to create world-wide conference calls that run for
hours (on someone else's bill, of course). When someone has to drop out, they call up
another phreak to join in.
Hackers come from a wide variety of odd subcultures, with a variety of
languages, motives and values. The most sensationalized of these is the "cyberpunk"
group. The cyberpunk FAQ (Frequently Asked Questions list) states:
2. What is cyberpunk, the subculture?
Spurred on by cyberpunk literature, in the mid-1980's certain groups
of people started referring to themselves as cyberpunk, because they
correctly noticed the seeds of the fictional "techno-system" in
Western society today, and because they identified with the
marginalized characters in cyberpunk stories. Within the last few
years, the mass media has caught on to this, spontaneously dubbing
certain people and groups "cyberpunk". Specific subgroups which are
identified with cyberpunk are:
Hackers, Crackers, and Phreaks: "Hackers" are the "wizards" of the
computer community; people with a deep understanding of how their
computers work, and can do things with them that seem
"magical". "Crackers" are the real-world analogues of the "console
cowboys" of cyberpunk fiction; they break in to other people's
computer systems, without their permission, for illicit gain or simply
for the pleasure of exercising their skill. "Phreaks" are those who do
a similar thing with the telephone system, coming up with ways to
circumvent phone companies' calling charges and doing clever things
with the phone network. All three groups are using emerging computer
and telecommunications technology to satisfy their individualist
goals.
Cypherpunks: These people think a good way to bollix "The System" is
through cryptography and cryptosystems. They believe widespread use of
extremely hard-to-break coding schemes will create "regions of privacy"
that "The System" cannot invade. (3)
This simply serves to show that computer hackers are not only teenage boys with
social problems who sit at home with their computers; they can be anyone.
The crash of AT&T's network and their desire to blame it on people other than
themselves brought the political impetus for a new attack on the electronic underground.
This attack took the form of Operation Sundevil. "Operation Sundevil" was a crackdown on
those traditional scourges of the digital underground: credit card theft and telephone code
abuse.
The targets of these raids were computer bulletin board systems. Boards can be
powerful aids to organized fraud. Underground boards carry lively, extensive, detailed, and
often quite flagrant discussions of lawbreaking techniques and illegal activities.
Discussing crime in the abstract, or discussing the particulars of criminal cases, is not
illegal, but there are stern state and federal laws against conspiring in groups in order to
commit crimes. It was these laws that were used to seize 25 of the "worst" offenders,
chosen from a list of over 215 underground BBSs that the Secret Service had fingered for
"carding" traffic.
The Secret Service was not interested in arresting criminals. They sought to
seize computer equipment, not computer criminals. Only four people were arrested during the
course of Operation Sundevil; one man in Chicago, one man in New York, a nineteen-year-old
female phreak in Pennsylvania, and a minor in California.
This was a politically motivated attack designed to show the public that the
government was capable of stopping this fraud, and to show the denizens of the
electronic underground that the government could penetrate into the very heart of their
society and destroy routes of communication, as well as bring down the legendary BBS
operators. This is not an uncommon message for law-enforcement officials to send to
criminals. Only the territory was new.
Another message of Sundevil was to the employees of the Secret Service
themselves; proof that such a large-scale operation could be planned and accomplished
successfully.
The final purpose of Sundevil was as a message from the Secret Service to their
long-time rivals the Federal Bureau of Investigation. Congress had not clearly stated which
agency was responsible for computer crime. Later, they gave the Secret Service jurisdiction
over any computers belonging to the government or responsible for the transfer of money.
Although the Secret Service can't directly involve itself in anything outside of this
jurisdiction, it is often called on by local police for advice.
Hackers are unlike any other group of criminals, in that they are constantly in
contact with one another. There are two national conventions per year, and monthly
meetings within each state. This has forced people to pose the question of whether
hacking is really a crime at all.
After seeing such movies as "The Net" or "Hackers", people have begun to wonder
how vulnerable they individually are to technological crime. Cellular phone
conversations can be easily overheard with modified scanners, as can conversations on
cordless phones.
Any valuable media involving numbers is particularly vulnerable. A common
practice among hackers is "trashing". Not, as one might think, damaging public
property, but actually going through a public area and methodically searching the trash for
any useful information. Public areas that are especially vulnerable are ATM chambers and
areas where people possess credit card printouts or telephone bills.
This leads to another part of hacking that has very little to do with the
technical details of computers or telephone systems. It is referred to by those who
practice it as "social engineering". With the information found on someone's phone bill
(account or phonecard number), an enterprising phreak can call up and impersonate an
employee of the telephone company- obtaining useable codes without any knowledge of the
system whatsoever. Similar stunts are often performed with ATM cards and pin numbers.
The resulting codes are either kept or used by whomever obtained them, traded or
sold over Bulletin Board Systems or the Internet, or posted for anyone interested to
find.
With the increasing movement of money from the physical to the electronic,
stricter measures are being taken against electronic fraud, although this can backfire.
In several instances, banks have covered up intrusions to prevent their customers from
losing their trust in the security of the system. The truth has only come out long after
the danger was passed.
Electronic security is becoming a way of life for many people. As with the
first cellular telephone movements, this one has begun with the legitimately wealthy and the
criminals. The most common security package is PGP, or Pretty Good Privacy. PGP uses RSA
public-key encryption algorithms to provide military-level encryption to anyone who seeks to
download the package from the Internet.
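The public-key idea at the heart of PGP can be sketched with the modern Python
"cryptography" package; this is plain RSA with OAEP padding used as a stand-in
illustration, not PGP's actual message format. Anyone may encrypt with the
public key, but only the holder of the private key can decrypt.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # generate a key pair; the public half can be published freely
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"meet me at noon", oaep)   # anyone can do this
    plaintext = private_key.decrypt(ciphertext, oaep)           # only the key owner
    assert plaintext == b"meet me at noon"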
The availability of this free package on the Internet caused an uproar and
brought about a federal criminal investigation of the author, Phil Zimmermann. The United
States government classifies strong encryption such as RSA as a munition whose export is
illegal. The Zimmermann case has not yet been resolved.
The United States government has begun to take a large interest in the Internet
and private Bulletin Board Systems. They have recently passed the Communications
Decency Act, which made it illegal to transmit through the Internet or phone lines in
electronic form any "obscene or inappropriate" pictures or information. This Act
effectively restricted the information on the Internet to that appropriate in PG-13
movies.
As of June 12, 1996, the censorship section of the Communications Decency Act
was overturned by a three-judge panel of the federal court of appeals, who stated that it
violates Internet users' First Amendment rights, and that it is the responsibility of the
parents to censor their children's access to information, not the government's. The court
of appeals, in effect, granted the Internet the protections previously granted to
newspapers, one of the highest standards of freedom insured by our Constitution. The
Clinton administration has vowed to appeal this decision through the Supreme Court.
Technological crime is harder to prosecute than any other, because the police
are rarely as technologically advanced as the people they are attempting to catch. This
situation was illustrated by the recent capture of Kevin Mitnick. Mitnick had eluded police
for years. After he broke into security expert Tsutomu Shimomura's computer, Shimomura
took over the investigation and helped capture Mitnick in a matter of months.
It will be fascinating to see, as technology continues to transform society, the
way that technological criminals, usually highly intelligent and dangerous, will
transform the boundaries of crime. As interesting to see will be how the government
will fight on this new battle ground against the new types of crime, while preserving
the rights and freedom of the American people.
f:\12000 essays\sciences (985)\Computer\Computer Crime1.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I
Ever since I got my first computer, I have enjoyed working on them, and I have learned a tremendous amount about troubleshooting. With my recent computer I have come across computer crime, and I got interested in hacking, phreaking, and salami slicing. But before I go too far, I need to learn more about it, such as the consequences. One question in my mind is what crimes there are and what kinds of things can be done with them. I would also like to find out why people do these things, and to learn the laws against all computer crime.
II
Today's computer society has brought a new form of crime. There are "hackers" who break their way into computers to learn the system or get information. I found out in the book Computer Crime, written by Karen Judson, that "salami slicers" steal small amounts of money from many bank customers, adding up to a great deal of money. I also read about phone phreaks, better known as "phreakers," who steal long distance phone services and commit many other crimes against phone companies.
The book Computer Crime states that most people commit these crimes because they were curious and wanted to explore the system. All they want to do is explore systems, not destroy them; it is purely intellectual. I know one reason is that it can be very rewarding. Hackers are drawn to computers for the anonymity they allow; they feel powerful and can do anything. Hackers can be their own person outside the real world.
I found out Arizona was the first state to pass a law against computer crime, in 1979. In 1980 the U.S. copyright act was amended to include software. I also found out that in 1986 a computer fraud and abuse act was passed. This act was made to cover any crime or computer scheme that was missed by former laws. Violations of any of these laws carry a maximum of five years in prison and a $250,000 fine.
III
With my computer I could do lots of these things, but I choose not to, because I know that if you know computers you can do much more, curiosity-wise. If you know computers you are set for the future. I'm not saying I don't have fun with my computer; I like causing a little trouble every now and then. Well, I have pretty much covered the motives and intentions behind the most common computer crimes, and I have explained the laws and punishments for committing these crimes. I hope I cleared things up for you computer-illiterate people and gave you more understanding of the things that can be done. As you have read, you can see that computers can be, and are, more dangerous than guns.
f:\12000 essays\sciences (985)\Computer\Computer Crime4.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It's the weekend and you have nothing to do, so you decide to play around
on your computer. You turn it on and start up, and soon you are calling
people with your modem, connecting to another world, with people just like
you only a button press away. This is all fine, but what happens when you
start getting into other people's computer files? Then it becomes a crime.
But what is a computer crime, really? Obviously it involves the use of a
computer, but what are these crimes? Well, they are: Hacking, Phreaking, and
Software Piracy.
To begin I will start with hacking. What is hacking? Hacking is
basically using your computer to "hack" your way into another. Hackers use
programs called scanners, which randomly dial numbers; any that generate
tones or carriers are recorded. These numbers are then examined by hackers
and used again: when the hacker calls up the number and connects, he is
presented with a logon prompt, and this is where the hacking really begins.
The hacker tries to bypass the prompt any way he knows how and to gain
access to the system. Why do they do it? Let's go to a book and see: "Avid young
computer hackers in their preteens and teens are frequently involved in
computer crimes that take the form of trespassing, invasion of privacy, or
vandalism. Quite often they are merely out for a fun and games evening, and
they get entangled in the illegal use of their machines without realizing
the full import of what they are doing." I have a hard time believing that,
so let's see what a "hacker" has to say about what he does: "Just as they
were enthralled with their pursuit of information, so are we. The thrill of
the hack is not in breaking the law, it's in the pursuit and capture of
knowledge." As you can see, the "hacker" does not set out to destroy
things, although some do; it is the pursuit of knowledge. Of course this
is still against the law. But where did all of this start? Hacking started
at MIT, where people would learn about and explore computer systems all
around the world. In the view of professionals, hacking is like drugs or
any other addictive substance: it is an addiction for the mind, and once
started it is difficult to stop. This could be true, as hackers know that
what they are doing is wrong and know the odds are that they will be
caught. But as I mentioned, some hackers are just above-average criminals,
using their skills to break into banks and other places where they can get
money or destroy information. What a hacker does at a bank is take a few
cents, or even a few fractions of a cent, from many different accounts;
this may seem like nothing, but when it is all compiled it can be a lot. A
stick-up robber averages about $8,000 per "job," and he has to put his life
and personal freedom on the line to do it, while the computer hacker, in
the comfort of his own living room, averages $500,000 a "job." As for
people destroying information, this is done to take someone down; the
destruction of data could end a business, which for some is very
attractive. It can cost a company thousands of dollars to restore the
damage done.
Now that you have an understanding of what a "hacker" is, it is time to
move on to someone closely associated with the hacker: the phreak. But what
is that? For the answer we turn to what is known as the "Official"
Phreakers Manual: "Phreak [fr'eek] 1. The action of using
mischievous and mostly illegal ways in order to not pay for some sort of
telecommunications bill, order, transfer, or other service. It often
involves usage of highly illegal boxes and machines in order to defeat the
security that is set up to avoid this sort of happening. [fr'eaking] v. 2.
A person who uses the above methods of destruction and chaos in order to
make a better life for all. A true phreaker will not go against his fellows
or narc on people who have ragged on him or do anything termed to be
dishonourable to phreaks. [fr'eek] n. 3. A certain code or dialup useful in
the action of being a phreak. (Example: "I hacked a new metro phreak last
night.")" The latter 2 ideas of what a phreak is, is rather weird. A
Phreak like the hacker likes to explore and experiment, however his choice
of exploring is not other computer but the phone system as a whole. Phreaks
explore the phone system finding many different ways to do things, most
often make free calls. Why do they do this, " A hacker and phreaker will
have need to use telephone systems much more than an average individual,
therefore, methods which can be used to avoid toll charges are in order. ".
A phreak has two basic ways of making free calls, he can call up codes or
PBXs on his phone and then enter a code and make his call or he can use
Electronic Toll Fraud Devices. Codes are rather easy to get the phreak
will scan for them, but unlike a hacker will only save the tone(s) number
instead of the carrier(s). Then he will attempt to hack the code to use it,
these codes range from numbers 0 - 9 and can be any length, although most
are not more than 10. Electronic Toll Fraud Devices are known as Boxes in
the underground. Most are the size of a pack of smokes, or than can be
smaller or bigger. I will not go too deep. They are electronic devices
than do various things, such as make outgoing calls free, make incoming
calls free, simulate coins dropping in a phone, etc. People who "Phreak"
are caught alot these days thanks to the new technology.
Software Piracy is the most common computer crime; it is the illegal
copying of software. "People wouldn't think of shoplifting software from a
retail store, but don't think twice about going home and making several
illegal copies of the same software," and this is true, because I myself am
guilty of this. The major problem is not people going out and buying the
software then making copies for everyone; it's the Bulletin Boards that
cater to pirating software that really cause the problem. On any one
of these boards one can find upwards of 300 to 1,000+ pieces of pirated software,
open for anyone to take. This is a problem and nothing can really be done
about it. Few arrests are made in this area of computer crime.
I will now devote a brief section to the above-mentioned BBSs. Most
are legal and do nothing wrong. However, there are many more that do accept
pirated software, pornographic pictures, animations, and texts, as well as
acting as a trading area for phone codes, other BBSs, credit card numbers, etc. This
is where a majority of Hackers and Phreaks come, as well as those who
continue to pirate software, to meet and share stories. This is a
new world, where you can do anything. There are groups that get, crack, and
courier software all over the world; some of them are called INC:
International Network Of Crackers, THG: The Humble Guys, and TDT: The Dream
Team. As well, a number of other groups have followed suit, such as
Phalcon/SKISM (Smart Kids Into Sick Methods), NuKE, and YAM (Youngsters
Against McAfee); these are virus groups who write and courier their work
anywhere they can. They just send it somewhere where anyone can take it
and use it in any manner they wish, such as getting even with someone. All
of these activities are illegal, but nothing can be done; the people running
these boards know what they are doing. As it stands right now, the BBS
world is in two parts: Pirating and the Underground, which consists of
Hackers/Phreaks/Anarchists/Carders (Credit Card Fraud)/Virus programmers.
All have different boards and offer a variety of information on virtually
any subject.
Well, from all of this reading you just did, you should have a fairly
good idea of what computer crime is. I didn't mention it in the sections,
but the police and phone companies are arresting people and stopping a lot of this
every day. With the new technology today it is easier to catch these
criminals than it was before. With the exception of the BBSs, the police
have made some major blows, busting a few BBSs and arresting hackers and
phreaks, all of whom were very looked up to for knowledge in their areas
of specialty. If I had more time I could go into these arrests, but I must
finish by saying that these are real crimes and the sentences are getting
harsher; with a lot of the older people getting out, the newer people are
getting arrested and being made examples of. This will deter a lot of
would-be computer criminals.
f:\12000 essays\sciences (985)\Computer\Computer crime5.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the world of computers, computer fraud and computer
crime are very prevalent issues facing every computer user.
This ranges from system administrators to personal computer
users who do work in the office or at home. Computers
without any means of security are vulnerable to attacks from
viruses, worms, and illegal computer hackers. If the proper
steps are not taken, safe computing may become a thing of
the past. Many security measures are being implemented to
protect against illegalities.
Companies are becoming more aware and threatened by the
fact that their computers are prone to attack. Virus
scanners are becoming necessities on all machines.
Installing and monitoring these virus scanners takes many
man hours and a lot of money for site licenses. Many server
programs are coming equipped with a program called "netlog."
This is a program that monitors the computer use of the
employees in a company on the network. The program monitors
memory and file usage. A qualified system administrator
should be able to tell by the amounts of memory being used
and the file usage if something is going on that should not
be. If a virus is found, system administrators can pinpoint
the user who put the virus into the network and investigate
whether or not there was any malice intended.
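The essay describes "netlog" only in general terms, so the following is not that program but a minimal, hypothetical sketch of the same idea -- periodically recording memory use and flagging processes with unusually many open files -- written in Python and assuming the third-party psutil package is installed:

    import time
    import psutil  # third-party package, assumed to be installed

    def snapshot():
        """Log overall memory use and flag processes with many open files."""
        mem = psutil.virtual_memory()
        print(f"{time.ctime()}  memory in use: {mem.percent}%")
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                open_files = proc.open_files()
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue
            if len(open_files) > 100:        # arbitrary threshold for "unusual"
                print(f"  pid {proc.info['pid']} ({proc.info['name']}) "
                      f"has {len(open_files)} files open")

    while True:
        snapshot()
        time.sleep(60)                       # one snapshot per minute

A real monitoring tool would write to a log file and compare against a baseline rather than printing to the screen, but the basic loop is the same.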
One computer application that is becoming more widely
used and, therefore, more widely abused, is the use of
electronic mail or email. In the present day, illegal
hackers can read email going through a server fairly easily.
Email consists of not only personal transactions, but
business and financial transactions. There are not many
encryption procedures out for email yet. As Gates
describes, soon email encryption will become a regular
addition to email just as a hard disk drive has become a
regular addition to a computer (Gates p.97-98).
Encrypting email can be done with two prime numbers
used as keys. The public key will be listed on the Internet
or in an email message. The second key will be private,
which only the user will have. The sender will encrypt the
message with the public key, send it to the recipient, who
will then decipher it again with his or her private key.
This method is not foolproof, but it is not easy to unlock
either. The numbers being used will probably be over 60
digits in length (Gates p.98-99).
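The scheme Gates describes is public-key (RSA-style) encryption. A toy sketch of the mechanics, using deliberately tiny primes rather than the 60-plus-digit numbers mentioned above, looks like this in Python (3.8 or later, for the modular-inverse form of pow):

    # Toy RSA-style key pair with tiny primes -- for illustration only.
    p, q = 61, 53                  # two secret primes
    n = p * q                      # 3233, published as part of the public key
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent (2753), kept secret

    message = 1234                           # message encoded as a number < n
    ciphertext = pow(message, e, n)          # sender encrypts with the public key
    recovered = pow(ciphertext, d, n)        # recipient decrypts with the private key
    assert recovered == message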
The Internet also poses more problems to users. This
problem faces the home user more than the business user.
When a person logs onto the Internet, he or she may download
a file corrupted with a virus. When he or she executes that
program, the virus is released into the system. When a
person uses the World Wide Web(WWW), he or she is
downloading files into his or her Internet browser without
even knowing it. Whenever a web page is visited, an image
of that page is downloaded and stored in the cache of the
browser. This image is used for faster retrieval of that
specific web page. Instead of having to constantly download
a page, the browser automatically reverts to the cache to
open the image of that page. Most people do not know about
this, but this is an example of how to get a virus in a
machine without even knowing it.
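The caching behaviour just described can be sketched in a few lines; fetch_from_network and the URL are placeholders rather than a real browser's API:

    # Simplified sketch of a browser-style page cache.
    cache = {}

    def fetch_from_network(url):
        # Stand-in for a real download; an actual browser would also pull in
        # images, scripts, and other files referenced by the page.
        return f"<html>contents of {url}</html>"

    def load_page(url):
        if url in cache:                 # revisit: serve the stored copy
            return cache[url]
        page = fetch_from_network(url)   # first visit: download silently...
        cache[url] = page                # ...and keep a copy on the user's machine
        return page

    load_page("http://example.com/")     # downloaded and cached
    load_page("http://example.com/")     # served from the cache, no download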
Every time a person accesses the Internet, he or she is
not only accessing the host computer, but the many computers
that connect the host and the user. When a person transmits
credit card information, it goes over many computers before
it reaches its destination. An illegal hacker can set up
one of the connecting computers to copy the credit card
information as it passes through the computer. This is how
credit card fraud is committed with the help of the
Internet. What companies such as Maxis and Sierra are doing
is making secure sites. These sites have the capabilities
to receive credit card information securely. This means the
consumer can purchase goods by credit card over the Internet
without worrying that the credit card number will be seen by
unauthorized people.
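What makes such a site "secure" is that the connection itself is encrypted (SSL, and later TLS), so the card number is unreadable on the computers it passes through. A minimal check that a connection really is encrypted, using Python's standard ssl module and example.com as a stand-in host:

    import socket
    import ssl

    # Open an encrypted connection to a stand-in host and report what was negotiated.
    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            print("protocol:", tls_sock.version())   # e.g. TLSv1.3
            print("cipher:  ", tls_sock.cipher())    # the negotiated cipher suite
            # Anything written to tls_sock is encrypted before it crosses the
            # intermediate computers between the user and the site.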
System administrators have three major weapons against
computer crime. The first defense against computer crime is
system security. This is the many layers systems have
against attacks. When data comes into a system, it is
scanned for viruses and safety. Whenever it passes one of
these security layers, it is scanned again. The second
resistance against viruses and corruption is computer law.
This defines what is illegal in the computer world. In the
early 1980's, prosecutors had problems trying suspects in
computer crimes because there was no definition of illegal
activity. The third defense is the teaching of computer
ethics. This will hopefully deter people from becoming
illegal hackers in the first place (Bitter p. 433).
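The "many layers" idea can be pictured as a chain of independent checks that incoming data must pass, each one examining it again; the checks below are crude placeholders, not real scanners:

    # Hypothetical sketch of layered security checks on incoming data.
    def layer_size_limit(data):
        return len(data) < 10_000_000            # reject absurdly large payloads

    def layer_signature_scan(data):
        return b"X5O!P%@AP" not in data          # prefix of the EICAR test string

    def layer_origin_check(data):
        return True                              # placeholder for a source check

    SECURITY_LAYERS = [layer_size_limit, layer_signature_scan, layer_origin_check]

    def admit(data):
        """Admit data only if every layer passes it."""
        return all(layer(data) for layer in SECURITY_LAYERS)

    print(admit(b"ordinary document"))           # True
    print(admit(b"X5O!P%@AP test payload"))      # False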
There are other ways companies can protect against
computer fraud than in the computer and system itself. One
way to curtail computer fraud is in the interview process
and training procedures. If it is made clear to the new
employee that honesty is valued in the company, the employee
might think twice about committing a crime against the
company. Background checks and fingerprinting are also good
ways to protect against computer fraud.
Computer crime prevention has become a major issue in
the computer world. The lack of knowledge of these crimes
and how they are committed is a factor as to why computer
crime is so prevalent. What must be realized is that the
"weakest link in any system is the human" (Hafner and
Markoff p. 61). With the knowledge and application of the
preventative methods discussed, computer crime may actually
become an issue of the past.
Works Cited
Bitter, Gary G., ed. The Macmillan Encyclopedia of
Computers. New York : Macmillan Publishing Company,
1992.
Gates, William. The Road Ahead. New York : Penguin Books,
1995.
Hafner, Katie & John Markoff. Cyberpunk. New York : Simon
and Schuster, 1991.
Romney, Marshall. "Computer Fraud - What Can Be Done
About It?" CPA Journal Vol. 65 (May 1995): p. 30-33.
f:\12000 essays\sciences (985)\Computer\Computer crime6.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Crime:
The Crime of the Future
English II
6 April 1996
Computer Crimes
Explosive growth in the computer industry over the last decade has made new
technologies cheaper and simpler for the average person to own. As a result, computers
play an integral part in our daily lives. The areas in which computers affect life are
infinite, ranging from entertainment to finances. If anything were to happen to these
precious devices, the world would be chaotic.
There is a type of person that thrives on chaos, that is the malevolent hacker.
Some hackers act on revenge or just impersonal mischievousness. But whatever their
motives, their deeds can be destructive to a person's computer. An attack by a hacker not
only affects the victim, but others as well.
One case involving a notorious hacker named Kevin Mitnick did just that. Mitnick
is a very intelligent man. He is 31 and pending trial for computer fraud. When he was a
teenager, he used his knowledge of computers to break into the North American Defense
Command computer. Had he not been stopped, he could have caused some real national
defense problems for the United States (Sussman 66).
Other "small time" hackers affect people just as much by stealing or giving away
copyrighted software, which causes the prices of software to increase, thus increasing the
price the public must pay for the programs.
Companies reason that if they have a program that can be copied onto a disc then
they will lose a certain amount of their profit. People will copy it and give to friends or
pass it around on the Internet. To compensate, they will raise the price of disc programs.
CD Rom programs cost more to make but are about the same price as disc games.
Companies don't lose money on them because it is difficult to copy a CD Rom and
impossible to transmit over the Internet (Facts on File #28599 1).
One company in particular, American On-line, has been hit hard by hackers. The
feud started when a disgruntled ex-employee used his inside experience to help fellow
hackers disrupt services offered by AOL (Alan 37). His advice became popular and he
spawned a program called AOHell. This program, in turn, created many copycats. They
all portray their creators as gangsters, and one of the creator's names is "Da Chronic."
Many also feature short clips of rap music (Cook 36).
These programs make it easy for people with a little hacker knowledge to disrupt
AOL. These activities include gaining access to free accounts, gaining access to other
people's credit card numbers, and destroying chat rooms. The following is an excerpt
from a letter from the creator of AOHell to a user:
What is AOHell? AOHell is an AOL for Windows add-on, which allows you to do
many things. AOHell allows you to download for free, talk using other people's screen
names, steal passwords and credit card information, and much more. AOHell is basically
an anarchy program designed to help you, the user, and destroy AOL, the enemy:
No matter what AOL says to you, nor what even Steve Case* himself may say
about AOHell, don't be too quick to judge. America On-line may say anything to get you
to stop using AOHell. They may say it's a virus, they may say it'll cancel your account,
hell, they've even tried to suggest it may steal your password and send it to the author.
None of this is true however. Free AOL does not interest me, as I have many ways to
accomplish that. You should always keep that in mind when you hear such rumors. It's
AOL and their sick pedophiles I'm against, not you, the user. You are the ones who are
making it possible for me to achieve my goal, which is to make AOL a virtual Hell. Now
stop reading, and go destroy a Mac room with the bot or something. :) (Cook 36)
The quote above was in defence of AOHell which has received a lot of negative
feedback. The loopholes for hackers and freeloaders may be closing, however. America
On-line is reluctant to discuss specifics of its counterattack for fear of giving miscreants
warning. However, many software trading rooms are being shut down almost as soon as
they are formed. Others are often visited by 'narcs' posing as traders. New accounts
started with phony credit cards are being cut off more promptly, and other card-
verification schemes are in place.
AOL has now developed the ability to resurrect a screen name that had been
deleted by the hackers, and is rumored to have call-tracing technologies in the works
(Alan 37).
Hacking is not just a problem in America. All across the world hackers plague
anyone they can, and they're getting better at it. In Europe they're known as "Phreakers"
(technologically sophisticated young computer hackers). These self-proclaimed Phreakers
have made their presence felt all the way up the political ladder. They managed to steal
personal expense accounts of the European Commission President Jacques. They revealed
some embarrassing overspending (PC Weekly 12).
Was this stealing justified? Was it done to protect the public from wasting their
tax money? The European judicial system did not think so. The accused were sentenced
to six months in prison (PC Weekly 12).
This punishment might seem harsh, but not to Bill Clinton. He has appointed a
task force to try to enforce laws on the Internet. The new laws would try to strengthen
copyright laws by monitoring information being transferred and if a violation occurred, a
$5,000 fine would be implemented (Facts On File #28599 1).
Clinton thinks this will protect businesses as well as consumers by keeping
copyrighted material at a reasonable price. The only exception would be that libraries
would have the right to copy "for purposes of preservation" (Phelps 75).
Some people view hackers as the "Robin Hoods" of the Internet. They wrestle
with the heavyweight businesses to try to gain leverage for individuals. But in doing so
they make businesses increase prices to pay for security. It is an ongoing cycle.
Many anti-hacking groups think they are gaining some ground on hackers by
making more sophisticated software. But like a virus that becomes immune too quickly,
the hackers find another way. The loopholes of the hacker are infinite. Just as one cannot
leave their shadow behind on a sunny day, the hacker will be around as long as there is
something to hack.
Works Cited
Alan Robert, "AOL's Piracy Woes: Attack and Counterattack"
Macworld 16 June 1995: 37-38
"Computers: On-line Copyright Protection Proposed"
Facts on File World News Digest 14 September 1995 28599
"Data Busters"
PC Weekly 8 August 1995: 12-14
Phelps, Alan Abstract "On-line Slime"
PC Novice 1995 74-75 Pro Quest, DiscII
Sussman, Vic: "Hacker Nabbed"
U.S. News & World Report 27 February 1995: 66-67
Cook, William "Aol's battle with AOHell"
Internet Underground 22 April 1995: 36-37
f:\12000 essays\sciences (985)\Computer\Computer Crimes 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ABSTRACT
Computer crimes seem to be an increasing problem in today's society. The main aspect
concerning these offenses is information gained or lost. As our government tries to take
control of the information that travels through the digital world, and across networks such
as the InterNet, they also seem to be taking away certain rights and privileges that come
with these technological advancements. These services open a whole new doorway to
communications as we know it. They offer freedom of expression, and at the same time,
freedom of privacy in the highest possible form. Can the government reduce computer
crimes, and still allow people the right to freedom of expression and privacy?
INFORMATION CONTROL IN THE DIGITIZED WORLD
In the past decade, computer technology has expanded at an incredibly fast rate, and the
information stored on these computers has been increasing even faster. The amount of
money, military intelligence, and personal information stored on computers has increased
far beyond expectations. Governments, the military, and the economy could not operate
without the use of computers. Banks transfer trillions of dollars every day over
inter-linking networks, and more than one billion pieces of electronic mail are passed
through the world's networks daily. It is the age of the computer network, the largest of
which is known as the InterNet. A complex web of communications inter-linking millions of
computers together -- and this number is at least doubling every year. The computer was
originally designed as a scientific and mathematical tool, to aid in performing intense
and precise calculations. However, from the large, sixty square foot ENIAC (Electronic
Numerical Integrator and Calculator) of 1946, to the three square foot IBM PC of today,
their uses have mutated and expanded far beyond this boundary. Their almost infinite
capacity and lightning speed, which is increasing annually, and their low cost, which is
decreasing annually, has allowed computers to stabilize at a more personal level, yet
retain their position in mathematical and scientific research1 . They are now being used in
almost every aspect of life, as we know it, today. The greatest effect of computers on
life at this present time seems to be the InterNet. What we know now as the InterNet
began in 1969 as a network then named ArpaNet. ArpaNet, under the control of the Pentagon's
Defense Advanced Research Projects Agency, was first introduced as an answer to a problem
concerning the government question of how they would communicate during war. They needed a
network with no central authority, unlike those subsequent to this project. A main
computer controlling the network would definitely be an immediate target for enemies. The
first test node of ArpaNet was installed at UCLA in the Fall of 1969. By December of the
same year, three more nodes were added, and within two years, there was a total of fifteen
nodes within the system. However, by this time, something seemed to be changing concerning
the information traveling across the nodes. By 1971, government employees began to obtain
their own personal mail addresses, and the main traffic over the net shifted from
scientific information to personal mail and gossip. Mailing lists were used to send mass
quantities of mail to hundreds of people, and the first newsgroup was created for
discussing views and opinions in the science fiction world. The network's decentralized
structure made the addition of more machines, and the use of different types of machines
very simple. As computer technology increased, interest in ArpaNet seemed only to expand.
In 1977, a new method of transmission was put into effect, called TCP/IP. The transmission
control protocol (TCP) would convert messages into smaller packets of information at their
source, then reassemble them at their destination, while the InterNet protocol (IP) would
control the addressing of these packets to assure their transmission to their correct
destinations. This newer method of transmission was much more efficient than the previous
network control protocol (NCP), and became very popular. Corporations such as IBM and DEC
began to develop TCP/IP software for numerous different platforms, and the demand for such
software grew rapidly. This availability of software allowed more corporations and
businesses to join the network very easily, and by 1985, ArpaNet was only a tiny portion of
the newly created InterNet. Other smaller networks are also very widely used today, such as
FidoNet. These networks serve the same purpose as the InterNet, but are on a much smaller
scale, as they have less efficient means of transferring message packets. They are more
localized, in the sense that the information travels much more slowly when further
distances are involved. However, the ease of access to these networks and various
computers has allowed computer crimes to increase to a much higher scale. These computers
and networks store and transfer one thing -- information. The problem occurs when we want
to determine the value of such information. Information lacks physical properties, and
this intangible aspect of data creates problems when developing laws to protect it. The
structure of our current legal system has, to this point, been based on ascertainable
limits. Physical properties have always been at its main core2 . In the past, this
information, or data, has been 'converted' into tangible form to accommodate our system. A
prime example is the patent, which is written out on paper. Today, however, it is becoming
much more difficult to 'convert' this data into a physical form, as the quantity is
increasing so rapidly, and this quantity of information is being stored in a virtual,
digitized space3 . It is very important to realize and emphasize that computers and
networks store and transfer only information, and that most all of this information can be
altered, in some way, undetectably. For example, when a file is stored in the popular DOS
environment (and also in environments such as Windows, OS/2, and in similar ways, UNIX), it
is also stored with the date, time, size, and four attributes -- read-only, system, hidden,
and archive. One may consider checking the date at which the document, or information
stored on the computer, was saved to determine if it was modified. However, this is also
digital information, and easily changed to whatever date or time the operator prefers. One
may also consider the attributes stored with the file. If a file is flagged as
'read-only,' then perhaps it cannot be overwritten. This is surely the case -- however,
this attribute is easily turned off and on, as it is also information in a digitized sense,
and therefore very easily changed. This is the same case when a file is 'hidden'. It may
very well be hidden to the novice user, but it is easily seen to anyone who has even a
slight knowledge of the commands of the system. One may also consider moving this
information to a floppy disk in order to preserve its originality; but we are once again
giving it a physical aspect, which we earlier addressed as being a close to impossible task
when involved with the amount of information involved in this area today. Digital
information is infinitely mutable, and the information that protects this information is
infinitely mutable4 . In order to understand how to control this information, we must first
understand what information -- and its value, especially that of a digital nature -- is.
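The point that dates and attributes are themselves just changeable data can be seen with a few standard Python library calls (a modern stand-in for the DOS commands the essay has in mind); the file name is hypothetical, and on Windows the write-permission bit corresponds to the read-only attribute discussed above:

    import os
    import stat
    import time

    path = "report.txt"                               # hypothetical file
    with open(path, "w") as f:
        f.write("original data")

    # The stored modification time is just data: back-date it to 1 Jan 1990.
    fake_time = time.mktime((1990, 1, 1, 0, 0, 0, 0, 0, -1))
    os.utime(path, (fake_time, fake_time))

    # The "read-only" flag is equally easy to turn off and on again.
    os.chmod(path, stat.S_IREAD)                      # mark read-only
    os.chmod(path, stat.S_IREAD | stat.S_IWRITE)      # writable again

    print(time.ctime(os.path.getmtime(path)))         # shows the back-dated 1990 time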
One cannot specifically define information as a whole. In today's society, 'knowledge is
power' seems to be a common phrase, and a quite true one. It would be even more true to
say 'knowledge can be power.' It's how we use this knowledge that determines its power.
In the same sense, it is how we use and distribute this knowledge that determines its
value. Information can be used in so many ways that it is virtually impossible to value
it. What information is of value to one person may be completely worthless to another.
The availability of this knowledge also determines its worth. If information is as free
as air, it has virtually no worth5. Therefore, it is also a privacy issue. We can now
base the value of information on three things: its availability, its use, and its user.
In order to protect information in our current government, we must first value it.
Those three aspects of information can differ so much that this is close to
impossible to do. In addition to this, how do we determine who "owns" the information?
Information itself is not a physical thing which only one person has in their possession at
any time. If information is given away, it is still held by the giver, as well as the
taker. It is impossible to determine exactly who has this information. If someone steals
information, we cannot take it away from them -- it is intangible in almost every aspect.
We must also understand the way in which our government, and most governments, create laws
and attempt to desist illegal actions. As stated earlier, the American government, and
many other governments, are based on a physical center, which I exemplified with the case
of the US patent. When our government creates laws, the subjects of the laws are given a
definable, ascertainable limit. When someone commits grand theft auto, breaking and
entering, or murder, we understand what has occurred and have definite ways to prove what
has occurred, where and when it has occurred, how it has occurred, and, if applicable, what
has been harmed and what is its value. However, when we look at computer crimes, such as
unauthorized access, we cannot be as clear on these aspects, and we do not have definite
ways to prove the crime, or who committed it, nor do we have a way in which to define the
value of anything damaged, if it had even been damaged. It is hard to convict a person
when all they did was slow down a computer network for a few days, or look at a credit
profile on John Doe. Problems also occur because people, including those in the legal
profession as well as jurors, do not always understand technology. They do not always
understand how mutable digital information can be, and how easily accessible and
distributed it can be. When a jury does not understand, one cannot truly be declared
guilty "beyond a reasonable doubt". "Technically, I didn't commit a crime. All I did was
destroy data. I didn't steal anything.6 " How can this be argued? Crimes committed in
the computer world do not exactly adhere with current laws that address physical crimes.
We cannot adapt current laws to those involving information crimes, and trying to do that
will cause too many problems and confusions because of the variety, extent, and value of
information as a whole. However, this is exactly what the government is trying to do. It
must also be considered that this information is not strictly a US problem, nor is it more
geared towards the US. Although started by the United States government, the InterNet has
grown world wide, reaching over seventy countries. Since the InterNet has such a
decentralized structure, one cannot say that the US is "in charge" of the network. The
problem is, the US government does not see this themselves. The United States government
wants to censor the information traveling across the InterNet and other telecommunication
services, but this cannot be the case any longer because of this situation. We cannot
expect other countries to adhere to the laws of the United States, just as most Americans
would not expect to have to agree to laws set by other countries. Therefore, it could be
easily said that the government would be invading privacy if they were to attempt to censor
the information which travels these networks. Individual computers are, of course, an
individual's property, and it would, without a doubt, be an invasion of privacy if the
government wanted to, at any given time, search your hard drive without just cause. I feel
that the government wants too much power this time. It would seem that they want to have
access to and control of all digital information in America for their own benefit. The US
government created an encryption device called the Clipper chip, which was to ensure
digital privacy among its users. However, our government seems to only define privacy to
an extent. They had also planned to keep, in their possession, a duplicate of each chip.
So much for total privacy. The government seems to be on a quest for total control over
its citizens, and the citizens of the world. This may seem extreme at the present time,
but our current legal system does not allow for the undefinable limits that information
control presents, especially on a worldwide basis. If the government tries to gain too
much control, it could very well lead to its failure. Control -- the control we need -- is
not a legal problem at all. It is a social, moral, and technological problem7. What is
needed is a type of 'information ethics'. A set of morals and customs must be slowly
adopted, and not pounded into the digital world by the government. Virtual laws must be
formed by a virtual government. Information cannot be controlled by our government in its
current form. In order to control information, the government would have to induce a
drastic change. The First Amendment, in reality, is the foundation of the rights of the
citizens of this country. This amendment, in its most basic form, guarantees our right to
inform and be informed. The government cannot and will not be able to control digital
information as a whole, or govern the right to this information, without sacrificing the
keystone of our nation and of our rights as Americans.
1 We see about 50-70% more computing power per year, and hardware prices drop about 25-50% per year. Since 1978, raw computing power has increased by over 500 times. "80x86 Evolution," Byte, June 1994, pp. 19.
2 Curtis E.A. Karnow, Recombinant Culture: Crime In The Digital Network. (Speech, Defcon II, Las Vegas), 1994.
3 S. Zuboff, In the Age of the Smart Machine, New York; 1992. Michael Gemignani, Viruses And Criminal Law. Reprinted in Lance Hoffman, Rogue Programs: Viruses, Worms and Trojan Horses, New York, 1990.
4 Lauren Wiener, Digital Woes, 1993.
5 John Perry Barlow, "The Economy of Ideas", Wired, March 1994.
6 Martin Sprouse, "Sabotage in the American Workplace: Anecdotes of Dissatisfaction, Mischief, and Revenge", New York; 1992. (Bank of America employee who planted a logic bomb in the company computer system).
7 Curtis E.A. Karnow, Recombinant Culture: Crime In The Digital Network. (Speech, Defcon II, Las Vegas), 1994.
------------------
Works Cited
Cerf, Vinton, as told to Bernard Aboba. "How the Internet Came to Be." Addison-Wesley, 1993.
Communications Decency Act. Enacted by the U.S. Congress on February 1, 1996.
Computer Fraud and Abuse Statute. Section 1030: Fraud and related activity in connection
with computers.
Denning, Dorothy. "Concerning Hackers Who Break into Computer Systems". Speech presented
at the 13th National Computer Security Conference, Washington, DC, 1990.
Gates, Bill. The Road Ahead. New York: Penguin Books USA, inc, 1995.
The Gatsby. "A Hackers Guide to the Internet". Phrack. Issue 33, File 7; 15 September
1991.
Icove, David, Karl Seger, and William VonStorch. Fighting Computer Crime. USA: O'Reilly
Books, 1996.
Time Life Books. Revolution in Science. Virginia: Time Life Books, inc., 1987.
Wallich, Paul. "A Rouge's Routing." Scientific American. May 1995, pp. 31.
f:\12000 essays\sciences (985)\Computer\Computer Crimes Speech.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer crimes are on the rise; 1 in 10 Americans experience some form of malicious attack on their computer system. If you pay attention to the rest of this speech, you will understand how a hacker's mind works and how to defend yourself from them. In this speech I will tell you why and how people break into computers, what sorts of trouble they cause, and what kind of punishment lies ahead for them if they are caught.
Hackers and crackers break into computer systems for any of a wide variety of reasons. Many groups break into computers for financial gain, while still others do it as a means to pass time at work or at school. For most it's a thrill to figure out how to break into a computer. Most people never have any intention of causing harm. I believe that for the vast majority of people it's merely the thrill of the "hunt" that pushes them to such great lengths. Many employees that work in large corporations feel that they don't get paid as much as they should; therefore, if they have high security clearance, they are able to capitalize on that by selling the data they have access to on the black market. Whether it be Ford Motor Company's plans for the 1999 F-150 or spec sheets for the military's new bomber, it happens every day. To my left is a drawing that illustrates the method that most hackers use to take over your computer. Ever since the dial-up connection was invented, anyone with a modem has had the ability to wreck any one of thousands of computers.
One of the most talked about forms of computer crime is the computer virus. A computer virus is a small but highly destructive program written by an unscrupulous computer hacker. Back in 1984 a 17-year-old computer hacker single-handedly brought down four hundred thousand computers in a matter of hours. To my left is a graph depicting the number of computer crimes committed from 1988 until now. Some hackers create a program called a worm. A worm is a piece of malicious software and is part of the virus family. People write worms to transfer money from bank accounts into their own personal checking accounts. Another way that hackers cause trouble is by altering the telephone switching networks at MCI, AT&T, and Sprint. By doing this they are able to listen to any conversation they choose. Oftentimes they will listen in on the police and FBI communicating with each other. This allows them to move to a new location before they are found. Some hackers use their knowledge of the telephone system to turn their enemies' home telephones into virtual pay phones that ask for quarters whenever the phone is taken off the hook.
A person who commits a computer crime and is caught will very likely face a substantial punishment. Often these types of criminals are never caught unless they really screw up. The most wanted hacker, Kevin Mitnick, was tracked down and arrested after he broke into a computer that belonged to a Japanese security professional. After this man noticed that someone had gotten into his computer, he dedicated the rest of his life to tracking down this one man. Kevin was able to stay one step ahead of police for some time, but the fatal mistake that he made was leaving a voice-mail message on a computer bragging about the fact that he thought he was unstoppable. When he was arrested he faced a $250,000 fine, 900 hours of community service, and a 10-year jail sentence. Many schools and small businesses still don't have a clue about how to deal with computer crimes and the like whenever they happen to strike.
In conclusion, hopefully you now know a little more about computer crimes and the people who commit them. Although most computer crimes are never accounted for, the ones that are reported are almost always prosecuted to the fullest extent of the law.
f:\12000 essays\sciences (985)\Computer\computer crimes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THESIS: Laws must be passed to address the increase in the number and types of computer
crimes.
Over the last twenty years, a technological revolution has occurred as computers are now an
essential element of today's society. Large computers are used to track reservations for the airline
industry, process billions of dollars for banks, manufacture products for industry, and conduct
major transactions for businesses, and more and more people now have computers at home
and at the office.
People commit computer crimes because of society's declining ethical standards more than any
economic need. According to experts, gender is the only bias. The profile of today's
non-professional thieves crosses all races, age groups and economic strata. Computer criminals
tend to be relatively honest and in a position of trust: few would do anything to harm another
human, and most do not consider their crime to be truly dishonest. Most are males: women have
tended to be accomplices, though of late they are becoming more aggressive. Computer Criminals
tend to usually be "between the ages of 14-30, they are usually bright, eager, highly motivated,
adventuresome, and willing to accept technical challenges."(Shannon, 16:2)
"It is tempting to liken computer criminals to other criminals, ascribing characteristics somehow
different from
'normal' individuals, but that is not the case."(Sharp, 18:3) It is believed that the computer criminal
"often marches to the same drum as the potential victim but follows and unanticipated
path."(Blumenthal, 1:2) There is no actual profile of a computer criminal because they range from
young teens to elders, from black to white, from short to tall.
Definitions of computer crime have changed over the years as the users and misusers of computers
have expanded into new areas. "When computers were first introduced into businesses, computer
crime was defined simply as a form of white-collar crime committed inside a computer
system."(2600:Summer 92,p.13)
Some new terms have been added to the computer criminal vocabulary. "Trojan Horse is a hidden
code put into a computer program. Logic bombs are implanted so that the perpetrator doesn't
have to physically present himself or herself." (Phrack 12,p.43) Another form of a hidden code is
"salamis." It came from the big salami loaves sold in delis years ago. Often people would take
small portions of bites that were taken out of them and then they were secretly returned to the
shelves in the hopes that no one would notice them missing.(Phrack 12,p.44)
Congress has been reacting to the outbreak of computer crimes. "The U.S. House of Judiciary
Committee approved a bipartisan computer crime bill that was expanded to make it a federal
crime to hack into credit and other data bases protected by federal privacy statutes."(Markoff, B
13:1) This bill generally creates several categories of federal misdemeanors and felonies for
unauthorized access to computers to obtain money, goods or services, or classified information.
This also applies to computers used by the federal government or used in interstate or foreign
commerce, which would cover any system accessed by interstate telecommunication systems.
"Computer crime often requires more sophistications than people realize it."(Sullivan, 40:4) Many
U.S. businesses have ended up in bankruptcy court unaware that they have been victimized by
disgruntled employees. American businesses wish that the computer security nightmare would
vanish like a fairy tale. Information processing has grown into a gigantic industry. "It accounted for
$33 billion in services in 1983, and in 1988 it was accounted to be $88 billion." (Blumenthal, B
1:2)
All this information is vulnerable to greedy employees, nosy teenagers, and general carelessness,
yet no one knows whether the sea of computer crimes is "only as big as the Gulf of Mexico or as
huge as the North Atlantic." (Blumenthal,B 1:2) Vulnerability is likely to increase in the future. And
by the turn of the century, "nearly all of the software to run computers will be bought from vendors
rather than developed in houses, standardized software will make theft easier." (Carley, A 1:1)
A two-year Secret Service investigation code-named Operation Sun-Devil targeted companies all
over the United States and led to numerous seizures. Critics of Operation Sun-Devil claim that the
Secret Service and the FBI, which ran an almost identical operation, conducted unreasonable
searches and seizures, disrupted the lives and livelihoods of many people, and generally
conducted themselves in an unconstitutional manner. "My whole life changed because of that
operation. They charged me and I had to take them to court. I have to thank 2600 and Emmanuel
Goldstein for publishing my story. I owe a lot to fellow hackers and the
Electronic Frontier Foundation for coming up with the brunt of the legal fees so we could fight for
our rights." (Interview with Steve Jackson, fellow hacker, who was charged in Operation Sun
Devil) The case of Steve Jackson Games vs. Secret Service has yet to come to a verdict but
should very soon. The Secret Service seized all of Steve Jackson's computer materials, which he
made a living on. They charged that he made games that published information on how to commit
computer crimes. He was being charged with running an underground hack system. "I told them it
was only a game and that I was angry and that was the way that I tell a story. I never thought
Hacker [Steve Jackson's game] would cause such a problem. My biggest problem was that they
seized the BBS (Bulletin Board System) and because of that I had to make drastic cuts, so we laid
off eight people out of 18. If the Secret Service had just come with a subpoena we could have
showed or copied every file in the building for them."(Steve Jackson Interview)
Computer professionals are grappling not only with issues of free speech and civil liberties, but
also with how to educate the public and the media about the difference between on-line computer
experimenters and real criminals. They also point out that, while crimes committed over computer
networks are a new kind of crime, the networks are protected by the same laws and freedoms as any real-world domain.
"A 14-year old boy connects his home computer to a television line, and taps into the computer at
his neighborhood bank and regularly transfers money into his personal account."(2600:Spring
93,p.19) On paper and on screens a popular new mythology is growing quickly in which
computer criminals are the 'Butch Cassidys' of the electronic age. "These true tales of computer
capers are far from being futuristic fantasies."(2600:Spring 93:p.19) They are inspired by scores of
real life cases. Computer crimes are not just crimes against the computer; they also involve the
theft of money, information, software, benefits and welfare, and much more.
"With the average damage from a computer crime amounting to about $.5 million, sophisticated
computer crimes can rock the industry."(Phrack 25,p.6) Computer crimes can take on many
forms. Swindling or stealing of money is one of the most common computer crimes. An example of
this kind of crime is Wells Fargo Bank, which discovered an employee was using the bank's
computer to embezzle $21.3 million; it is the largest U.S. electronic bank fraud on record. (Phrack
23,p.46)
Credit Card scams are also a type of computer crime. This is one that frightens many people, and for
good reason. A fellow computer hacker who goes by the handle of Raven is someone who uses
his computer to access credit data bases. In a talk that I had with him he tried to explain what he
did and how he did it. He is a very intelligent person; he gained illegal access to a credit
data base and obtained the credit histories of local residents. He then allegedly used the residents'
names and credit information to apply for 24 Mastercards and Visa cards. He used the cards to
issue himself at least $40,000 in cash from a number of automatic teller machines. He was caught
once, but he was only withdrawing $200 and it was a minor larceny, and they couldn't prove that he
was the one who did the other ones, so he was put on probation. "I was 17 and I needed money
and the people in the underground taught me many things. I would not go back and not do what I
did but I would try not to get caught next time. I am the leader of HTH (High Tech Hoods) and
we are currently devising other ways to make money. If it weren't for my computer my life would
be nothing like it is today."(Interview w/Raven)
"Finally, one of the thefts involving the computer is the theft of computer time. Most of us don't
realize this as a crime, but the congress consider this as a crime."(Ball,V85) Every day people are
urged to use the computer, but sometimes the use becomes excessive or improper or both. For
example, at most colleges computer time is thought of as a free good; students and faculty often
computerize mailing lists for their churches or fraternity organizations, which might be written off as
good public relations. But use of the computers for private consulting projects without payment to
the university is clearly improper.
In business it is similar. Management often looks the other way when employees play
computer games or generate a Snoopy calendar. But if this becomes excessive, the employee is
stealing work time. And computers can only process so many tasks at once. Although
considered less severe than other computer crimes, such activities can represent a major business
loss.
"While most attention is currently being given to the criminal aspects of computer abuses, it is likely
that civil action will have an equally important effect on long term security problems."(Alexander,
V119) The issue of computer crimes draws attention to the civil or liability aspects in computing
environments. In the future there may tend to be more individual and class action suits.
CONCLUSION
Computer crimes are growing fast because the evolution of technology is fast, but the
evolution of law is slow. While a variety of states have passed legislation relating to computer
crime, the situation is a national problem that requires a national solution. Controls can be instituted
within industries to prevent such crimes. Protection measures such as hardware identification,
access control software, and disconnecting critical bank applications should be devised.
However, computers don't commit crimes; people do. The perpetrator's best advantage is
ignorance on the part of those protecting the system. Proper internal controls reduce the
opportunity for fraud.
BIBLIOGRAPHY
Alexander, Charles, "Crackdown on Computer Capers,"
Time, Feb. 8, 1982, V119.
Ball, Leslie D., "Computer Crime," Technology Review,
April 1982, V85.
Blumenthal,R. "Going Undercover in the Computer Underworld". New York Times, Jan. 26,
1993, B, 1:2.
Carley, W. "As Computers Flip, People Lose Grip in Saga of Sabatoge at Printing Firm". Wall
Street Journal, Aug. 27, 1992, A, 1:1.
Carley, W. "In-House Hackers: Rigging Computers for Fraud or Malice Is Often an Inside Job".
Wall Street Journal, Aug 27, 1992, A, 7:5.
Markoff, J. "Hackers Indicted on Spy Charges". New York Times, Dec. 8, 1992, B, 13:1.
Finn, Nancy and Peter, "Don't Rely on the Law to Stop Computer Crime," Computer World,
Dec. 19, 1984, V18.
Phrack Magazine issues 1-46. Compiled by Knight Lightning and Phiber Optik.
Shannon, L R. "THe Happy Hacker". New York Times, Mar. 21, 1993, 7, 16:2.
Sharp, B. "The Hacker Crackdown". New York Times, Dec. 20, 1992, 7, 18:3.
Sullivan, D. "U.S. Charges Young Hackers". New York Times, Nov. 15, 1992, 1, 40:4.
2600: The Hacker Quarterly. Issues Summer 92-Spring 93. Compiled by Emmanuel Goldstein.
f:\12000 essays\sciences (985)\Computer\Computer Criminals.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
omputers are used to track reservations for the airline industry, process billions of dollars for banks, manufacture products for industry, and conduct major transactions for businesses because more and more people now have computers at home and at the office.
People commit computer crimes because of society's declining ethical standards more than any economic need. According to experts, gender is the only bias. The profile of today's non-professional thieves crosses all races, age groups and economic strata. Computer criminals tend to be relatively honest and in a position of trust: few would do anything to harm another human, and most do not consider their crime to be truly dishonest. Most are males: women have tended to be accomplices, though of late they are becoming more aggressive. Computer Criminals tend to usually be "between the ages of 14-30, they are usually bright, eager, highly motivated, adventuresome, and willing to accept technical challenges."(Shannon, 16:2)
"It is tempting to liken computer criminals to other criminals, ascribing characteristics somehow different from
'normal' individuals, but that is not the case."(Sharp, 18:3) It is believed that the computer criminal "often marches to the same drum as the potential victim but follows and unanticipated path."(Blumenthal, 1:2) There is no actual profile of a computer criminal because they range from young teens to elders, from black to white, from short to tall.
Definitions of computer crime has changed over the years as the users and misusers of computers have expanded into new areas. "When computers were first introduced into businesses, computer crime was defined simply as a form of white-collar crime committed inside a computer system."(2600:Summer 92,p.13)
Some new terms have been added to the computer criminal vocabulary. "Trojan Horse is a hidden code put into a computer program. Logic bombs are implanted so that the perpetrator doesn't have to physically present himself or herself." (Phrack 12,p.43) Another form of a hidden code is "salamis." It came from the big salami loaves sold in delis years ago. Often people would take small portions of bites that were taken out of them and then they were secretly returned to the shelves in the hopes that no one would notice them missing.(Phrack 12,p.44)
Congress has been reacting to the outbreak of computer crimes. "The U.S. House of Judiciary Committee approved a bipartisan computer crime bill that was expanded to make it a federal crime to hack into credit and other data bases protected by federal privacy statutes."(Markoff, B 13:1) This bill is generally creating several categories of federal misdemeanor felonies for unauthorized access to computers to obtain money, goods or services or classified information. This also applies to computers used by the federal government or used in interstate of foreign commerce which would cover any system accessed by interstate telecommunication systems.
"Computer crime often requires more sophistications than people realize it."(Sullivan, 40:4) Many U.S. businesses have ended up in bankruptcy court unaware that they have been victimized by disgruntled employees. American businesses wishes that the computer security nightmare would vanish like a fairy tale. Information processing has grown into a gigantic industry. "It accounted for $33 billion in services in 1983, and in 1988 it was accounted to be $88 billion." (Blumenthal, B 1:2)
All this information is vulnerable to greedy employees, nosy-teenagers and general carelessness, yet no one knows whether the sea of computer crimes is "only as big as the Gulf of Mexico or as huge as the North Atlantic." (Blumenthal,B 1:2) Vulnerability is likely to increase in the future. And by the turn of the century, "nearly all of the software to run computers will be bought from vendors rather than developed in houses, standardized software will make theft easier." (Carley, A 1:1)
A two-year secret service investigation code-named Operation Sun-Devil, targeted companies all over the United States and led to numerous seizures. Critics of Operation Sun-Devil claim that the Secret Service and the FBI, which have almost a similar operation, have conducted unreasonable search and seizures, they disrupted the lives and livelihoods of many people, and generally conducted themselves in an unconstitutional manner. "My whole life changed because of that operation. They charged me and I had to take them to court. I have to thank 2600 and Emmanuel Goldstein for publishing my story. I owe a lot to the fellow hackers and fellow hackers and the Electronic Frontier Foundation for coming up with the blunt of the legal fees so we could fight for our rights." (Interview with Steve Jackson, fellow hacker, who was charged in operation Sun Devil) The case of Steve Jackson Games vs. Secret Service has yet to come to a verdict yet but should very soon. The secret service seized all of Steve Jackson's computer materials which he made a living on. They charged that he made games that published information on how to commit computer crimes. He was being charged with running a underground hack system. "I told them it was only a game and that I was angry and that was the way that I tell a story. I never thought Hacker [Steve Jackson's game] would cause such a problem. My biggest problem was that they seized the BBS (Bulletin Board System) and because of that I had to make drastic cuts, so we laid of eight people out of 18. If the Secret Service had just come with a subpoena we could have showed or copied every file in the building for them."(Steve Jackson Interview)
Computer professionals are grappling not only with issues of free speech and civil liberties, but also with how to educate the public and the media to the difference between on-line computer experimenters. They also point out that, while the computer networks and the results are a new kind of crime, they are protected by the same laws and freedom of any real world domain.
"A 14-year old boy connects his home computer to a television line, and taps into the computer at his neighborhood bank and regularly transfers money into his personnel account."(2600:Spring 93,p.19) On paper and on screens a popular new mythology is growing quickly in which computer criminals are the 'Butch Cassidys' of the electronic age. "These true tales of computer capers are far from being futuristic fantasies."(2600:Spring 93:p.19) They are inspired by scores of real life cases. Computer crimes are not just crimes against the computer, but it is also against the theft of money, information, software, benefits and welfare and many more.
"With the average damage from a computer crime amounting to about $.5 million, sophisticated computer crimes can rock the industry."(Phrack 25,p.6) Computer crimes can take on many forms. Swindling or stealing of money is one of the most common computer crime. An example of this kind of crime is the Well Fargo Bank that discovered an employee was using the banks computer to embezzle $21.3 million, it is the largest U.S. electronic bank fraud on record. (Phrack 23,p.46)
Credit Card scams are also a type of computer crime. This is one that fears many people and for good reasons. A fellow computer hacker that goes by the handle of Raven is someone who uses his computer to access credit data bases. In a talk that I had with him he tried to explain what he did and how he did it. He is a very intelligent person because he gained illegal access to a credit data base and obtained the credit history of local residents. He then allegedly uses the residents names and credit information to apply for 24 Mastercards and Visa cards. He used the cards to issue himself at least 40,000 in cash from a number of automatic teller machines. He was caught once but was only withdrawing $200 and in was a minor larceny and they couldn't prove that he was the one who did the other ones so he was put on probation. "I was 17 and I needed money and the people in the underground taught me many things. I would not go back and not do what I did but I would try not to get caught next time. I am the leader of HTH (High Tech Hoods) and we are currently devising other ways to make money. If it weren't for my computer my life would be nothing like it is today."(Interview w/Raven)
"Finally, one of the thefts involving the computer is the theft of computer time. Most of us don't realize this as a crime, but the congress consider this as a crime."(Ball,V85) Everyday people are urged to use the computer but sometimes the use becomes excessive or improper or both. For example, at most colleges computer time is thought of as free-good students and faculty often computerizes mailing lists for their churches or fraternity organizations which might be written off as good public relations. But, use of the computers for private consulting projects without payment of the university is clearly improper.
In business it is similar. Management often looks the other way when employees play computer games or generate a Snoopy calendar. But if this becomes excessive, the employee is stealing work time. And a computer can process only so many tasks at once. Although considered less severe than other computer crimes, such activities can represent a major business loss.
"While most attention is currently being given to the criminal aspects of computer abuses, it is likely that civil action will have an equally important effect on long term security problems."(Alexander, V119) The issue of computer crimes draw attention to the civil or liability aspects in computing environments. In the future there may tend to be more individual and class action suits.
CONCLUSION
Computer crime is growing fast because technology evolves quickly while the law evolves slowly. While a variety of states have passed legislation relating to computer crime, the situation is a national problem that requires a national solution. Controls can be instituted within industries to prevent such crimes. Protection measures such as hardware identification, access-control software and disconnecting critical bank applications should be devised. However, computers don't commit crimes; people do. The perpetrator's best advantage is ignorance on the part of those protecting the system. Proper internal controls reduce the opportunity for fraud.
BIBLIOGRAPHY
Alexander, Charles, "Crackdown on Computer Capers," Time, Feb. 8, 1982, V119.
Ball, Leslie D., "Computer Crime," Technology Review, April 1982, V85.
Blumenthal,R. "Going Undercover in the Computer Underworld". New York Times, Jan. 26, 1993, B, 1:2.
Carley, W. "As Computers Flip, People Lose Grip in Saga of Sabatoge at Printing Firm". Wall Street Journal, Aug. 27, 1992, A, 1:1.
Carley, W. "In-House Hackers: Rigging Computers for Fraud or Malice Is Often an Inside Job". Wall Street Journal, Aug 27, 1992, A, 7:5.
Markoff, J. "Hackers Indicted on Spy Charges". New York Times, Dec. 8, 1992, B, 13:1.
Finn, Nancy and Peter, "Don't Rely on the Law to Stop Computer Crime," Computer World, Dec. 19, 1984, V18.
Phrack Magazine issues 1-46. Compiled by Knight Lightning and Phiber Optik.
Shannon, L R. "The Happy Hacker". New York Times, Mar. 21, 1993, 7, 16:2.
Sharp, B. "The Hacker Crackdown". New York Times, Dec. 20, 1992, 7, 18:3.
Sullivan, D. "U.S. Charges Young Hackers". New York Times, Nov. 15, 1992, 1, 40:4.
2600: The Hacker Quarterly. Issues Summer 92-Spring 93. Compiled by Emmanuel Goldstein.
f:\12000 essays\sciences (985)\Computer\Computer Ergonomics in the Workplace.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Businesses strive for high production at low cost, which would result in the highest profit for a company. To many businesses, this is only a mirage, because the 'low cost' to the business usually results in a 'high cost' for the employees: lower-quality workplace items, lower salaries, fewer benefits, and so on. These costs create an unhappy workplace environment. Companies understand that the more efficient their workers are, the more productive their business will become. Although this will take a lot of money at first, the investment pays off in the end.
There are many different things in the workplace that add to stress and injuries. They range from lifting heavy boxes to typing too much on the keyboard. This paper will focus on the principles of ergonomics in the computer workstation. According to the Board of Certification for Professional Ergonomists (BCPE), ergonomics "is a body of knowledge about human abilities, human limitations and human characteristics that are relevant to design. Ergonomic design is the application of this body of knowledge to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable and effective human use."(BCPE, 1993) In the average computer workstation, employees are exposed to over a dozen hazards. Two factors can prevent the resulting injuries: forming good work habits and ergonomically designed computer workstations. We will discuss these preventive measures throughout the paper.
First, a few terms may need defining. Repetitive Strain Injuries (RSIs) result from the repeated physical movement of certain body parts, which damages tendons, nerves, muscles, and other soft body tissues. If these injuries are not taken care of immediately, permanent damage can be done. A few common results of RSIs that were not treated right away are conditions like Carpal Tunnel Syndrome, Tendinitis, Tenosynovitis, DeQuervain's Syndrome and Thoracic Outlet Syndrome. All of these can be prevented by good working habits and ergonomic engineering.i
Usually, ergonomically designing a computer workstation would cost about $1000. This expense could be eliminated by the formation of good work habits. This is essential for the safety of computer terminal employees. There exist a number of precautions that can be taken into consideration when dealing with a computer workstation. We shall discuss six of them.
First, the whole body must be relaxed. The correct posture is shown in Figure 1. Notice that the arms and thighs are parallel to the floor and the feet are flat on the floor. Also notice that the wrists are not bent in any way. The wrist is one of the most frequently damaged parts of the body when speaking of RSI.
Figure 1
The wrists should not be rested on anything when typing; doing so forces you to stretch your fingers to hit the keys. The wrists should also be straight: not bent up, down, or to the side. The correct position is portrayed in figure 2, the incorrect one in figure 3. Studies show that these steps are easier to perform when the keyboard is not tilted toward the user; when it is tilted, it is natural to rest your wrists on the table. Keeping the keyboard flat and low creates a more natural position.
Another practice to take into consideration is how hard you press on the keys. The user is not supposed to pound the keys; doing so may damage the tendons and nerves in the fingers. Instead, use a soft touch; not only will your fingers thank you for it, the keyboard will too!
Keeping in mind not to stretch your fingers when typing, use two hands to perform double-key operations. For example, to capitalize the first letter of a sentence you hold down the Shift key and press the letter.
Figure 2 Figure 3
This is a double key operation. Instead of stretching two fingers on one hand to do this operation, use both hands.
No matter what pace you are working at, take breaks every ten minutes or so in addition to your hourly breaks. These breaks need only be a few moments at a time. If breaks are not taken at this pace, you may be subjecting yourself to injuries of the back, neck, wrists and fingers. Also, when using the mouse, do not grip it tightly. Most mice used in offices today are not designed with human factors in mind. Some mice, like the Microsoft mouse, are designed to fit the contour of your hand. Although this may seem nice, it does not mean that one will be able to use it for hours on end without feeling any discomfort in the hand. Other mice, which will be mentioned later, are designed for comfortable use over extended periods of time.
Try to keep your arms and hands warm. Cold muscles are more prone to strain and injury than warm ones. Wearing a sweater or a long-sleeved shirt can help a great deal, especially when working in air-conditioned offices.
And finally, do not use the computer more than necessary. Your body can handle only so much strain on the neck, shoulders, wrists and fingers. Even with the greatest state-of-the-art ergonomically designed computer workstation, people who overdo it put themselves at risk.
Some people tend to use their break times at work playing video games. This is a good way to ease the mind of everyday pressure (to some extent). It is also a good example of using the computer 'more than necessary'. If a person uses the computer for video games, they should take a break every ten minutes or so, as mentioned above.ii
All of the strategies mentioned above are things that can be done to reduce injuries when using a computer for an extended period of time, and none of them involves ergonomically designed hardware. If employees form these habits, there will be less need to purchase ergonomic equipment for the office. But making new habits is not the easiest thing to do for most people. Next, we will take a look at how a computer workstation should be set up. The following data comes from an on-line quiz from the University of Virginia.
The first point about computer workstations concerns a seat that is too high. This strains the legs of the operator, causing them to "go to sleep" as the blood flow to the legs and feet is cut off.
The next fact presented to us is that the top of the Video Display Terminal (VDT) should be no higher than eye level. This is one of the most controversial topics because it deals with the neck and shoulders. Some people state that it should be below, rather than at, eye level because our natural tendency is to look down.
Thirdly, the best viewing distance from the VDT is about 24 inches from the screen. This deals with eye strain. Some people worry about radiation that may be emitted from the VDT. Radiation is not a big problem with newer monitors, and even old ones have a protective coating on the screen that allows very few particles through. Even those that do get through travel only inches before dissipating. Eye strain is the important factor here: if it continues to be a problem, periodically look at an object far away from you.
The next question deals with the tilt of the screen. If the monitor is at or below eye level, it is easier to read with a 10- to 20-degree backward tilt. Many VDTs have a tilt adjustment on the base; if not, a book can be propped under the monitor to tilt it back a bit.
Another question concerns the height of the keyboard from the floor. It should be at elbow height. As mentioned before, the forearms and thighs should be parallel to the floor, and this is possible only if the keyboard is at elbow height.
How should the lighting be in offices when using a computer? It should be a bit dimmer than normal office lighting. This is so because if the office lighting is brighter, there will be a lot of glare on the screen. It also has to do with eye strain.
Noise in the work area causes fatigue; it also causes the computer operator to lose concentration on their work. Beyond affecting concentration and causing fatigue, noise can obviously damage one's hearing.
Using this questionnaire, I conducted a survey among students at Canisius College in Buffalo, NY. The purpose of the survey was to test the student body's knowledge of VDTs and the related safety precautions. In order to accomplish this in a professional manner, a random sample of students was sought. Obtaining a truly random sample requires meeting certain criteria, too numerous to mention in this essay; needless to say, not all of them were met. The sample size of the survey was approximately 100 students. The results were not surprising. There was one problem with the questionnaire: many students did not know what VDT meant1.
According to the survey, 100% of the people were familiar with what ergonomics is, knew how to reduce tension, what movement in your peripheral vision does, and what you should do if you wear bifocal lenses. The bifocal question posed a problem because of the way its answer was worded: the correct answer is very specific and sticks out over the other possible answers. The rest of the questions were well worded and not too obvious.
Besides the first and last questions, a few others were answered correctly by everyone. These were questions eleven and twelve. The probable cause is that these questions were easy; their answers were more obvious than the others. If you compare them to the more difficult questions (seven and thirteen), the percentages correct differ. Questions seven and thirteen deal with very specific measurements that are all closely related; they are not 'common knowledge' questions. I assume that people were making educated guesses when encountering them, which could explain the large percentage of error in these parts of the survey.
Now that we have covered the good habits to form when working at computer workstations and looked at what a selected college student population knew about VDTs, we will take a look at ergonomic engineering and the reasons for its emergence.
There are a number of devices ranging from keyboards and mice to chairs and even foot stands. In this paper we will just review a few of these ergonomically designed items and why ergonomics is an issue to computer users.
First, we will discuss the purpose of ergonomically designed items. There are a number of reasons for the emergence of ergonomics. One is insurance: many companies carry disability and other types of insurance to cover injuries that occur while working, and this coverage would not be needed as much if workstations were ergonomically designed. It would save the company insurance hassle and money in the long run. Another reason is that injuries due to the overuse of computers are long lasting; these ailments do not just go away in time, and one cannot put a price on injuries like this. This is why ergonomics is so important.
Secondly, we will look at the item that affects the common computer user the most: the keyboard. With computers getting faster and faster every day, it is about time that people looked at the hazards they pose instead of only perfecting them. Keyboards pose the largest threat to the computer user, not only because the keyboard is the most used input device, but also because of its design. It is a flat, straight input device that can cause strain and injury to the user if not used properly. Ergonomic engineers recognized this hazard and designed a number of different alternatives. All of the ergonomically designed keyboards attempt to reduce injuries by studying the natural position of the fingers, hands and wrists, and keyboards and mice are designed using this knowledge. There is no single ideal position for the hand as of yet; hence, there exist different types of keyboards and mice. Figures 4 and 5 show different styles of keyboards and mice.
Figure 4 - http://www.earthlink.net/~dbialick/kinesis
Figure 5
Notice the unique structure of the keyboard; it does not even look like one. This may take time to get used to, but it will pay off in the end.
Not only is there hardware for the reduction of RSI, there is also software to help. Micronite softwareiii designed a program called ARMS (Against Repetitive Strain Injury) which reminds you when it is time to take a break. It also walks you through a series of videos which portray ways to massage different parts of your hand, neck, and shoulders.
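The core idea behind such reminder software is simple. The following is only a rough sketch, in C++, of a break-reminder loop; the ten-minute interval and the printed messages are assumptions for illustration, not Micronite's actual design.

// break_reminder.cpp -- minimal sketch of a rest-break reminder.
// The 10-minute work interval and messages are illustrative assumptions.
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    const auto workInterval = std::chrono::minutes(10);
    const auto breakLength  = std::chrono::seconds(30);

    for (int cycle = 1; ; ++cycle) {
        std::this_thread::sleep_for(workInterval);   // let the user work
        std::cout << "Break " << cycle
                  << ": rest your hands and look away from the screen.\n";
        std::this_thread::sleep_for(breakLength);    // short pause before resuming
        std::cout << "Back to work.\n";
    }
}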
With all of this hardware and software available for business and personal use, who would not be interested? Well many people think that it will not happen to them until it does. People should not wait that long. If you use a computer for more than four hours a day, you are prone to RSI. If your company does not have ergonomically engineered hardware, software or furniture, then do something about it. It's your health.
1 A copy of the survey is attached to the end of this paper. The correct answer is bolded.
i URL address : http://webreference.com/rsi.html#whatis
ii URL address : http://www.engr.unl.edu/ee/eeshop/rsi.html
iii URL address : http://www.micronite.com/
Glossary
CGI
"Common Gateway Interface". A standard protocol which allows HTML based forms to send field contents to a program on the Internet for processing. It also allows the receiving program to respond by sending an HTML response document.
Email
"Electronic Mail". An electronic document similar to a piece of mail in that it is sent from one person to another using addresses, and contains information. Email commonly contains information such as: sender name and computer address, list of recipient names and computer addresses, message subject, date and time composed, and message content. Sometimes, an Email message can have attached computer files such as pictures, programs, and data files.
Firewall
A program or device which serves as an intelligent and secure router of network data packets. These mechanisms are configured to restrict the flow of packets in different directions (i.e. to and from the Internet) based on the system addresses (a.k.a. IP addresses) of the connected computers.
FTP
"File Transfer Protocol". A program or feature popularly used over the Internet to transfer files between computers.
Hacker
A person who deliberately breaks into computer systems for entertainment, gain, or spite. The most sophisticated hackers spend all of their time breaking into computers. The risk that these people pose is that they often steal or damage software systems and information.
Home Page
A Web Page which is at the root of all Web Pages for a particular Web Site. A Home Page should portray the image that the company wants to project. Usually, these pages resemble marketing slicks, but with an interactive slant. This front page of a Web Site then provides hypertext links to the rest of the Web Site's content and possibly to Home Pages for other related Web Sites.
HTML
"HyperText Markup Language". A standardized programming language used to create hypertext documents. Used to create all Web Pages on the Internet. Also allows definition of data forms which communicate with CGI compatible programs on the Internet.
HTTP
"HyperText Transfer Protocol". A communications protocol used by Internet Web Service software to send Web Pages to Web Browser software over the Internet.
HyperText
A type of text document which contains embedded "hotspots" which point to other sections of text or other documents. Any piece of text or graphic can be defined as a hotspot which points elsewhere.
Internet
(a.k.a. "The Information Superhighway"). A world-wide interconnection between thousands of computer networks on many different platforms, with over 10 million end users (and growing). The telecommunications backbone of the Internet is based on a network of U.S. government owned, national T3 lines. A growing number of Internet Providers are adding their own backbones.
Internet Providers
A community of competing businesses which provide "on-ramps to the Internet". The largest of these companies connect directly into the Internet backbone, or provide their own national or international backbones. Examples of true Internet Providers: Netcom, UUNet, CERFNet, SprintNet, and Spry. Examples of partial Internet Providers & partial Information Service Providers: CompuServe, Prodigy, and America On-Line.
IRC
"Internet Relay Chat". A program or feature popularly used on the Internet by individuals to chat with others, by typing and watching text-based dialog. Many topic specific IRC channels have been created on the Internet by users. These channels form a sort of forum for conference room discussion.
Newsgroups
A collection of forums which gather Email from Internet users about a specific subject. The collected Email entries (known as news articles) can then be perused by all Internet users. Some are simply for recreational discussions, while others may allow people to form self-supporting user groups.
PGP
"Pretty Good Privacy" encryption. A protocol for using private and public key encryption to secure Email and other Internet transactions.
TCP/IP
"Transfer Control Protocol / Internet Protocol". The network communication protocol used by all Internet computers. Similar in function to NetBIOS, SNA, or Novell Netware's IPX/SPX.
Telnet
A program or feature popularly used on the Internet by individuals to log into, and take control of other computers on the Internet.
VRML
"Virtual Reality Markup Language" A new emerging language becoming supported by the World Wide Web, for programming virtual reality content on the Internet.
Web Browser
A type of program used by individuals which reads HTML files on the Internet and presents them to the user in a friendly and interactive way. Many such programs exist for many platforms. For UNIX, several GUI browsers are popular. For UNIX-based terminals or DOS-based PCs, Lynx provides a text interface for browsing Web Pages. All Web Browsers allow the user to interactively jump from place to place by selecting hotspots (highlighted text or graphics). Some browsers allow the user to print page contents.
Web Page or Web Document
A single viewable unit of Web information, often comprised of an HTML file with several referenced graphics files. Generally, each Web Page has hypertext links to other Web Pages.
Web Site
A collection of Web Pages built for or by a single company or individual. Usually provides one theme of content. A Web Site is not to be confused with a single physical location where a Web Server exists. It is a Cyber-Location.
Web Server
A combination of computer hardware, telecomm. lines, and HTTP server software.
World Wide Web, WWW, or The Web
An intricate and vast web of information, tied together by hypertext links between multimedia documents residing on thousands of Internet computers around the globe.
f:\12000 essays\sciences (985)\Computer\Computer Languages.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Languages
By Nicholas Soer
The differences between computer languages are a topic that many people are not familiar
with. I was one of those people before I started researching this topic. There are
many different computer languages, and each of them is similar to the others in some ways but
different in others, such as program syntax, the format of the language, and
the limitations of the language.
Most computer programmers start programming in languages such as Turbo Pascal
or one of the various types of BASIC. Turbo Pascal, BASIC, and Fortran are among the
oldest computer languages, and many of today's modern languages are descended from
one of these three, though greatly improved. Both Turbo Pascal and BASIC are
languages that are easy to understand, and their syntax is simple and straightforward.
In BASIC, when printing to the screen, you simply type the word 'print'; in Turbo Pascal you
would type 'writeln'. These are very simple commands for the computer to execute. To
accomplish the same thing in a language such as C or C++, you would have to type
more sophisticated lines of code that are more confusing than in the previous two.
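To make the contrast concrete, here is a minimal C++ sketch of the same one-line screen output. In BASIC the whole program could be PRINT "Hello", and in Turbo Pascal little more than writeln('Hello'); the extra include, main function and return value below are the "ceremony" referred to above (the file name is made up for illustration):

// hello.cpp -- the same "print one line" task in C++.
#include <iostream>

int main() {
    std::cout << "Hello" << std::endl;   // the actual output statement
    return 0;                            // every C++ program must define main()
}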
The format and layout of the various languages are very diverse in some cases and
somewhat similar in others. When programming in BASIC, the user has to
type line numbers before each new line of code; in an updated version of BASIC called
QBasic, line numbers are optional. Turbo Pascal does not use line numbers at all; it
has preset keywords that separate each part of the program. This is similar to QBasic,
but more sophisticated. Instead of using the gosub command of BASIC, the user
makes a procedure call.
Another widely used language is C. Its format and layout are similar to Turbo Pascal's,
but it is capable of doing more, and its syntax is more complex. C was later extended
into a new version, C++. The main addition from C to C++ is
the concept of classes and templates; many other small shortcomings were addressed in
this new version as well.
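As a small, invented illustration (the Stack class below is not taken from any source, just a sketch), a class bundles data together with the functions that operate on it, and a template lets the same class work for different types:

// stack_demo.cpp -- tiny illustration of a C++ class template.
#include <iostream>
#include <string>
#include <vector>

template <typename T>
class Stack {                 // a class bundles data and behaviour together
    std::vector<T> items;
public:
    void push(const T& value) { items.push_back(value); }
    T pop() {
        T top = items.back();
        items.pop_back();
        return top;
    }
    bool empty() const { return items.empty(); }
};

int main() {
    Stack<int> numbers;       // the same template works for integers...
    numbers.push(1);
    numbers.push(2);
    std::cout << numbers.pop() << "\n";   // prints 2

    Stack<std::string> words; // ...and for strings, with no extra code
    words.push("hello");
    std::cout << words.pop() << "\n";     // prints hello
}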
The various languages also have different limitations on the tasks they can
perform. Generally, the newer the language, the more you can do with it. Things that are being
accomplished today were thought to be impossible 20 years ago. Despite the
differences between the many languages I have mentioned, and the others that I have not,
these limits keep being pushed higher as technology improves.
One could write on and on about the minute differences
between the many languages. After researching these main languages, I found that
there are just as many similarities between languages as there are differences.
f:\12000 essays\sciences (985)\Computer\Computer Literacy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For over fifty years, beginning with the famous ENIAC, a revolution has been taking place
in the United States and the world. The personal computer has changed the way many
people think and live. With its amazing versatility, it has found its way into every area of
life, and knowing how to operate it is a requirement for today's world. Those who have
not taken the time to learn about computers often do not even know what to do once one
has been turned on, and this problem should be corrected. That is why all high schools
must make a computer literacy course a requirement for graduation.
Although a computer course would take up two or three periods of a high
school student's weekly schedule, it would be well worth it in the real world. With so many
careers today involving a knowledge of a computer's basic functions, computer literacy
plays a big part in job security. If a potential employee comes along demonstrating
outstanding computer skills, he or she may take a job that formerly belonged to another
employee if that employee doesn't even know how to check his e-mail. A good computer
class would teach the basics of computers: typing a document in a word processor,
running a specified program, and using a modem to check e-mail and access the Internet.
Personal computers now have a tremendous entertainment value due to their
versatility. Not only can a computer do all the things that are unique to computers, it can
be a television and a radio as well. Computers have also attracted millions of people with
games galore. Immersive, three-dimensional games such as Doom 2, Quake, and Duke
Nukem 3D can keep people glued to their computers for hours. With current technology,
two friends can connect from anywhere in the world via modem and play a blazing fast
two-player game against one another. With the recent emergence of the Internet, friends
that would normally have to pay 25 cents a minute to talk on the phone long distance can
play and talk as long as they want for free.
The most important reason for required computer classes, however, is the
enormous amount of information available on the Internet. The Internet is a 24-hour-a-day,
7-day-a-week information resource that cannot be beaten by any library in the world.
An experienced user can connect and find the information that he is looking for in as little
as ten minutes, without leaving the comfort of his own home. The Internet will only
continue to grow as time passes, and being able to navigate quickly and successfully is
becoming more and more important.
A computer course is an advantageous investment in a student's future with today's
technology. A personal computer is the most diverse machine in the world and being
familiar with its uses is a must to be successful. The amount of practical application that it
will have is astounding, and it will make all students more successful in today's changing
world.
f:\12000 essays\sciences (985)\Computer\Computer Nerds.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER NERDS
A computer nerd is a person who uses a computer simply for the sake of using one.
Steve Wozniak fell in love with computers and how they worked. He built the Apple I, the machine that formed the basis for the future of Apple Computer, Inc. Steve Wozniak also designed the Apple II, one of the first ready-made personal computers and one of the most popular ever made. It was a complete computer with keyboard and power supply. After he retired from Apple, Steve returned to the University of California at Berkeley and earned his bachelor's degree in Computer Science.
Steve Jobs was the co-founder of Apple Computer. At the age of 25 he was worth over 100 million dollars. He was fascinated by what computers could do and amazed that a computer could take your ideas and translate them into information. He and Wozniak created the printed circuit board for the Apple I computer.
Bill Gates started programming at the age of 13. While a student at Harvard University, he developed BASIC for the first microcomputer, the Altair. Gates believed that there would one day be a personal computer in every household. Gates and Paul Allen formed Microsoft in 1975, and today Gates remains a very important leader at Microsoft.
Paul Allen was also a co-founder of Microsoft. He bought a chip from a store, brought it back to Bill Gates, and then they called their friends. They loaded BASIC into the computer and it worked, printing out the memory size. Paul left Microsoft in 1983 after an illness.
f:\12000 essays\sciences (985)\Computer\Computer Pornography.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Congress shall make no law respecting an
establishment of religion, or prohibiting the free
exercise thereof or abridging the freedom of
speech, or of the press, or the right of the people
peaceably to assemble, and to petition the
Government for a redress of grievances.(Wallace:
3)
This is a statement from a document that a group of
individuals put together to ensure that their ideas and
beliefs would never be erased. That group was the
forefathers of the United States of America, and that
document is the United States Constitution. The phrase
was put into the Constitution because our forefathers
wanted to protect freedom of speech, something they
cherished and something that in earlier days had been
squashed by ruling governments. Today our freedom of
speech is in danger again.
The Government is now trying to censor what
ideas go onto something we know as the Information
Superhighway. The Internet is now supposed to be
regulated so that it will be "safe" for everyone to enter.
The Government passed a law known as the
Telecommunications Act of 1996. In the TA there is a
part called the Communications Decency Act or CDA.
This part of the bill arose because of the recent surge of
pornography on the Infobahn. The CDA criminalizes
indecent speech on the Internet(Wallace: 1). The CDA
describes indecent speech as anything "depicting or
describing sexual or excretory acts or organs in patently
offensive fashion under contemporary community
standards."
First, take the word "indecent". This word is used
because of its vague definition. Not only does it ban
sexually explicit materials, it bans sexually explicit
words as well. If this were applied to the real world,
some of the greatest novels would be taken off the
shelf. For example, there is the great lesbian novel The
Well of Loneliness by Radclyffe Hall. In that book there
is a line that states "And that night, they were not
divided." Clearly that would be a sexually explicit
phrase(Wallace: 2).
Now the words "depicting or describing". The word
"describing" translates into anything with pure explicit
text. That would include any book converted and placed
on the Internet with outspoken words or phrases. This
goes against the first amendment. Henry Miller's Tropic
of Cancer and James Joyce's Ulysses would not be able
to possibly be posted online(Wallace: 2).
"Sexual or excretory acts or functions": This
would relieve anything from sleazy bestsellers to 19th
century classics, such as Zola's LaTerre and Flouber's
Madam Bovary, to nonfiction books on disease, rape,
health, and sexual intercourse from our shelves. This
phrase is again unconstitutional(Wallace: 2).
Another phrase in the act is "patently offensive".
This is very subjective: it means that a jury
can decide what is offensive and what is not
(Wallace: 2). If there is a very conservative jury you get
a very conservative verdict, but by the same token, if
you get a very liberal jury you get a liberal verdict.
Would that be considered a fair trial?
And last "Contemporary community standards".
There is an easy example to understand under these
words. In 1994 two California sysops [system
operators] were found guilty of putting offensive
material on their BBS [Bulletin Board System]. Their
BBS was accessible by people all over the world as long
as whoever wanted the information called the California
number they had set up. One day someone from
Memphis, Tennessee called the number and found
something they considered disturbing. The two sysops
were convicted under the community standards of
Tennessee, not the ones in California(Wallace: 3).
There is no reason to treat the electronic and the
written word differently, especially given the ongoing
conversion of print to electronic form(Wallace: 3). More
and more often people are looking to the Internet to do
reports and research; it is one of the biggest resources
in the world today. If the TA bill stays in effect, many
of the books listed will not be downloadable. Mark
Mangan, co-author of the book Sex, states, "A law
burning books by Miller, Joyce, Burroughs, and Nabokov
might also protect children who might get a hold of
them, but would be completely unconstitutional under
the First Amendment (Wallace: 4)."
In 1994 a United States survey showed that
450,000 pornographic pictures and text files were
accessible on the Net around the world and that these
files were accessed more than 6 million times(Chidley:
58). This is one reason why the government passed the
CDA. The Government rationalizes the CDA because of
two reasons. One, the protection of children. Two, they
claim it is constitutional because the Internet is like a
telephone or TV and can be regulated.
The protection of children is not an issue the
Government should handle. Proponents of the CDA have
completely forgotten that a credit card number must be
given to an ISP [Internet Service Provider] to get
connected to the Net(Wallace: 4). Passwords may be
added security. Parents let their children "veg out" in
front of the TV all day so of course you would figure
that those same parents are going to let them surf the
net when they want to(Bruce: 3).
Donna Rice Hughes, formerly with Sen. Gary Hart
but now a born again Christian and President of Enough
is Enough!, an anti-pornography organization focused on
the Net, states, "Any child can access it ... and once they've
seen it, it can't be erased from their minds(Jerome:
51)."
First, modem communication on a phone line is
just static. A computer, modem, communications
software, and Internet access are needed, and these a child
cannot purchase. Second, there are many security features on
a computer that keep a child from accessing certain parts
of the home system(Lohr: 1). If the parent is responsible
enough, they should know more about the PC they
purchased than their child does. Third, this quote sums up
the biggest argument: " And it is not as if cybersurfers
are inundated with explicit images. Users have to go
looking for the images in the unorganized and complex
network, and even need special decoders" to translate
what is written into a file(Chidley: 58).
Jeffrey Shallit, associate professor at the University
of Waterloo in Ontario and Treasurer of Electronic
Frontier Canada, an organization devoted to maintaining
free speech in Cyberspace, says, "Every new medium of
expression will be used for sex. Every new medium of
expression will come under attack, usually because
of." the previous sentence(Chidley: 58). If the
regulation passes there will just be another way of
getting around it. One example is encryption. This is a
form of false information sent to another person via the
Net and translated on the other side. As Internet pioneer
John Gilmore once said, " The Net interprets censorship
as damage and routes around it(Barlow: 76)."
I decided to try "trading" myself and was startled
when I completed two online interviews with some
known traders. The two people I talked to went by the
nicknames GMoney and BigGuy.
First I needed to get on the chatlines. I downloaded
a program called mIRC [an Internet Relay Chat client]. This
program is free. It downloaded in a matter of minutes. It
was very easy to set up and before I knew it I was on an
IRC channel. If a child knew of this program it would
have been very easy for them to access the channel I
was on. The channel I was on was called !!!!SEXPIX!!!!.
The side bar noted: " All the pics you want from horses
to grandmas. " I decided this would be a good place to
start. Inside the channel there were 27 other people.
You can talk to each one individually or talk as a whole
if you like. It's like sitting in a circle in a room full of
strangers.
The first of the two interviews I did was with
GMoney. I first asked how often he traded pictures. He
said usually once or twice a day. He told me he tried to
do it fast so his mother wouldn't catch him. So I
immediately asked how old he was. He replied, "13/M I
guess I shouldn't be doing this but I just think these
things are cool. Once I started I can't stop now. People
are so f_cked up it's unreal." I then asked why he
traded and he responded, " I think its just to see what
screwed up things are really going on." I also asked if
he would try anything he saw in the pictures. He wrote,
"God no you see what goes on. I would never do any of
that weird sh_t. Now some of the things I see being
done to girls. I think I'll enjoy... I don't think that's
that bad though."
The other interview with BigGuy was not much
better. BigGuy was a 25 year old female. She said that
her husband was the one who usually did it and he ran a
web page with pornography on it. When asked what she
thought of the CDA she typed," It's ridiculous how
could anyone think that censorship could stop the
trading of pornography on the Internet." I later asked if
they somehow had a check to see whether minors could
access their web page. She responded, "No, I won't let
him. We have a theory. We ask for their email address.
They must have one. We then email them and tell them
the password to get into the board. We figure that the
children won't let us email them in case their parents
find the letter. It's not fool-proof but it stops some of
it. "
The CDA hits smaller ISPs harder than the larger
ones because of the different types of users on each
system(Emigh: 1). The bill has good points and bad
ones. Steve Dasbach, Libertarian Party Chair, states
that, " This bill is censorship. This bill threatens to
interrupt and curb the rapid evolution of electronic
information systems. This bill isn't needed. This bill
usurps the role of parents("CDA: LP calls new bill `high-
tech censorship'.": 1)."
Clifford Stoll, a renowned Internet scientist and
author of the 1989 bestseller The Cuckoo's Egg, when
asked "Are you concerned about the abundance of
pornography on the Net?" said:
Well, I can't get worked up over it. Some people
say, `Oh no, my kid just downloaded this image
that has explicit sex in it.' Yeah, sad to say, it's
true. Sad to say that just like every place in
society, there are reptiles who will exploit children.
Certainly, the child molester will find a way to use
the computer networks to find victims-just as
child molesters take advantage of cars and ordinary
roadways to get around. But the concerns with
cars and roadways go deeper than simply the fact
that child molesters use them(Chidley: 59).
The computer industry describes the CDA as
unconstitutionally vague and says it subjects computer
networks to more restrictive standards than any form of
written work such as books, magazines, and other
printed materials(Chidley: 59). When it comes down to
it, basic ethics are broken every day, whether it be
in business, on the Internet, or in your own home
(Lester: 1). There will always be someone who finds a
way around the rules. The CDA, as written, gives no
guidance but instead tries to ban Internet pornography
(Wallace: 1). As stated by Steve Dasbach, "The
Communications Decency Act is a case of 20th-century
politicians using 19th-century laws to control 21st-
century technology("CDA: LP calls new bill `high-tech
censorship'.": 1)."
Two easy cures for this unorganized, uncensored,
uncontrollable Internet are: first, promoting the use of
child-safe Internet Service Providers, and second, the
use of local screening software(Wallace: 5). The
Government should not be responsible for censorship; if
it were, it would have to censor the medium as a whole,
and that would be unconstitutional. Eliminate the
problem by choice, not by
force.
Works Cited
BigGuy. Online Personal Interview.
washington.dc.us.undernet.org/port=6667 (20 Jun. 1996).
Bruce, Marty. "Censorship on the Internet." Censorship on the
Internet. 1996.
(29 Jun. 1996).
"CDA: LP calls new bill `high-tech censorship'. " Libertarian Press
July 1995. (29 Jun. 1996).
Chidley, Joe. "Reality Check." MacLean's 22 May 1995: 59.
Chidley, Joe. "Red-Light District." MacLean's 22 May 1995: 58.
Emigh, Jacqueline. "Computers & Privacy - Telecom Act Hits ISPs
Hard 04/02/96." Computers & Privacy. 02 Apr. 1996.
(18 Jun. 1996).
GMoney. Online Personal Interview.
washington.dc.us.undernet.org/port=6667 (20 Jun. 1996).
Jerome, Richard and Linda Kramer. "Monkey Business No More."
People Weekly 19 Feb. 1996: 51+.
Lester, Meera. "What's Your Code of Ethics?"
_VJF_Library_Career_Resources: What's Your Code of
Ethics? 1996.
(29 Jun.
1996).
Lohr, Steve. "Censorship on the Internet: Pre-emptory Effort At
Self-Policing," New York Times 13 March 1996, sec. C: 3.
Wallace, Jonathan and Mark Mangan. "The Internet Censorship
FAQ." The Internet Censorship FAQ. 1996.
(29 Jun.
1996).
f:\12000 essays\sciences (985)\Computer\Computer Programming.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Programming
Choosing a career to research can be a little easier when you have some general
knowledge of a particular field of work. There are many different types of jobs one can
decide to undertake, one of which is among the most popular lines of work today: computer
programming. Although this line of work might seem a little tiresome, it can be enjoyable
for people with lots of patience and the will to do long and tedious work. Most
programmers in large corporations work in teams, with each person focusing on a specific
aspect of the total project(AOL). Programmers write the detailed instructions for a
computer to follow. A computer programmer carefully studies the program that best suits
the employer's needs. They may also work for a large computer corporation developing new
software and/or improving older versions of these programs. Programmers write specific
programs by breaking each task down into a logical series of steps for the computer to
follow. After long hours of writing a program, the programmer must thoroughly test and
revise it. Generally, programmers create software by following a basic step-by-step
development process:
(1) Define the scope of the program by outlining exactly what the program will do.
(2) Plan the sequence of computer operations, usually by developing a flowchart (a
diagram showing the order of computer actions and data flow). (3) Write the code--the
program instructions encoded in a particular programming language. (4) Test the program.
(5) Debug the program (eliminate problems in program logic and correct incorrect usage of
the programming language). (6) Submit the program for beta testing, in which users test
the program extensively under real-life conditions to see whether it performs
correctly(AOL)
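As a toy illustration of steps (3) through (5), the "code" might be a single small routine and the "test" a few checks of its output. The average() function and its test values below are invented for this sketch, not part of any cited source:

// average_demo.cpp -- toy example of writing (step 3) and testing (steps 4-5) a routine.
#include <cassert>
#include <iostream>
#include <vector>

// Step 3: write the code -- compute the average of a list of numbers.
double average(const std::vector<double>& values) {
    if (values.empty()) return 0.0;       // guard added while debugging (step 5)
    double sum = 0.0;
    for (double v : values) sum += v;
    return sum / values.size();
}

int main() {
    // Step 4: test the program with known inputs and expected outputs.
    assert(average({2.0, 4.0, 6.0}) == 4.0);
    assert(average({}) == 0.0);
    std::cout << "All tests passed.\n";
}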
Programmers are grouped into two types: Application programmers and systems programmers.
These programmers write the software that changes a basic machine into a personal tool
that not only is useful for increasing productivity but can also be fun and entertaining for the user.
Applications programmers write commercial programs to be used by businesses, in scientific
research centers, and in the home. Systems programmers write the complex programs that
control the inner-workings of the computer. Application programmers are focused primarily
on business, engineering, or science tasks, such as writing a program to direct the
guidance system of a missile to its target (Information Finder). A systems programmer
maintains the software that controls the operation of the entire computer system. They
make changes to the instructions that control the central processing unit, which, in turn,
controls the computer's hardware itself(FL View #475). They also help application
programmers determine the source of problems that may occur with their programs. Many
specialty areas exist within these two large groups, such as database and telecommunication
programmers. Computer programmers can attend almost any college or school because
employers' needs vary. Most programmers are college graduates who have taken special courses
in the programming field. Many employers also prefer experience in accounting, inventory
control and other business skills, and look for people who can think logically and
have patience when doing analytical work(Information Finder). The entry-level salary of
a computer programmer fresh out of college was in the area of $30,000 in 1989(Occ.
Outlook Handbook 115). Programmers with five to ten years of experience earn about
$40,000 or more annually, while the top professionals get nearly $60,000 per
year (S.I.R.S. CD-ROM). Employers are looking for ways to cut costs, and minimizing
on-the-job training is one way to do that. Many employers prefer to hire people with previous
experience in the field. To have the best chance of becoming a skilled computer programmer
and landing the job of their choice, candidates must learn many computer languages. (The Shuttle
program, for example, consists of a total of about half a million separate instructions
written by hundreds of programmers.) For this reason, scientific and industrial
software sometimes costs much more than do the computers on which the programs run.
Programmers work mostly at a desk in front of a computer all day. They usually work
between 40 to 50 hours a week and more if they have to meet crucial deadlines. Programmers
might arrive at work early or work late occasionally, depending on the circumstances at the
work place. The employment outlook of the computer programming field is very good and
growing fast through the year 2000(Occ. Outlook Handbook 115). Most of the job openings
for programmers will probably result from replacement needs. The need for computer
programmers will increase as business, government, schools, and scientific organizations
seek new applications for computer software and improvements already in use. The computer
programming field is not an easy line of work to be successful in, nor is it an easy one to
get into. The job makes many demands on a person: working late hours, writing complex
programs that don't always work properly the first time, and having the patience and the
time needed to be a successful computer programmer.
Works Cited
Florida View 1990: Careers Black & White. Florida Dept. of Education. 1990, occ. #475.
Florida View 1991: Careers Black & White. Florida Dept. of Education. 1990, occ. #362.
Information Finder by World Book. Chicago .World Book, Inc., 1992.
Occupation Outlook Handbook. 1990-91 edition; United States Department of Labor, 1991.
Social Issues Resources Series. SIRS Combined Text & Index, 1993 SIRS, Inc. Spring 1993.
America Online Database. America Online, Inc. 1995.
f:\12000 essays\sciences (985)\Computer\Computer Protection.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Not so long ago, the word "computer" was only beginning to
appear in everyday vocabulary, and some people did not even
know what a computer was. Today, most people not only know
what a computer is but understand how to use one. Computers
have therefore become more and more popular and important to
our society. We use computers everywhere, and they are very
useful and helpful in our lives. The speed and accuracy of
computers have made people feel confident in them and rely on
them, so a great deal of important information and data is
saved on computers: your diary, the financial records of an
oil company, or secret intelligence from a military
department. With so much valuable information held in
computer memory, people may ask a question: can we make sure
that the information in the computer is safe and that nobody
can steal it?
Physical hazards are one cause of destroyed data. Spill a
flood of coffee over a personal computer, for example, and
the hard disk could be ruined. Besides that, the human
caretakers of a computer system can cause as much harm as any
physical hazard: a cashier in a bank, for instance, can
transfer money from a customer's account into his own.
Nonetheless, the most dangerous thieves are not those who
work with computers every day, but the youthful amateurs who
experiment at night --- the hackers.
The term "hacker "may have originated at M.I.T. as students'
jargon for classmates who labored nights in the computer lab. In
the beginning, hackers are not so dangerous at all. They just
stole computer time from the university. However, in the early
1980s, hackers became a group of criminals who steal information
from other peoples' computer.
To stop the hackers and other criminals, people need to
set up a good security system to protect the data in the
computer. The most important thing is that we cannot allow
those hackers and criminals to enter our computers. That
means we need to design a lock to lock up all our data, or
use identification to verify the identity of anyone seeking
access to our computers.
The most common way to lock up data is a password
system. Passwords are a multi-user computer system's usual
first line of defense against hackers. We can use a
combination of alphabetic and numeric characters to form our
own password. The longer the password, the more possibilities
a hacker's password-guessing program must work through.
However, a very long password is difficult to remember, so
people tend to write it down, which immediately makes it a
security risk. Furthermore, a high-speed password-guessing
program can find a weak password easily. Therefore, a
password system alone is not enough to protect a computer's
data and memory.
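The arithmetic behind "the longer the password, the more
possibilities" is easy to sketch. The example below assumes a
62-character alphabet (letters and digits), which is an
illustrative assumption rather than any particular system's
rule; each extra character multiplies the number of candidate
passwords a guessing program must try.

// search_space.cpp -- sketch of how password length multiplies guessing work.
// Assumes a 62-symbol alphabet (a-z, A-Z, 0-9); real systems vary.
#include <iostream>

int main() {
    const double alphabet = 62.0;
    double combinations = 1.0;
    for (int length = 1; length <= 12; ++length) {
        combinations *= alphabet;   // each extra character multiplies the choices
        std::cout << "length " << length << ": about "
                  << combinations << " possible passwords\n";
    }
}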
Besides a password system, a computer company should
consider the physical security of its information centre. In
the past, people used locks and keys to limit access to
secure areas, but keys can be stolen or copied easily.
Card-keys were therefore designed to prevent this. Three
types of card-key are commonly used by banks, computer
centers and government departments. Each of these card-keys
can carry an identifying number or password that is encoded
in the card itself, and all are produced by techniques beyond
the reach of the average computer criminal. One of the three
is the watermark magnetic card, inspired by the watermarks on
paper currency. Its magnetic strip holds a 12-digit number
code that cannot be copied, and it can store about two
thousand bits. The other two cards, optical memory cards
(OMCs) and smart cards, can store thousands of times as much
data. Both are widely used in computer security systems.
However, password systems and card-keys alone are still
not enough to protect the memory of a computer. A computer
system also needs a restricting program to verify the
identity of its users. Generally, identity can be established
by something a person knows, such as a password, or something
a person has, such as a card-key. However, people often
forget their passwords or lose their keys, so a third method
must be used: something a person is --- a physical trait of
the human being.
We can use a new technology called biometric devices to
identify the person who wants to use a computer. Biometric
devices are instruments that perform mathematical analyses of
biological characteristics. For example, voices, fingerprints
and the geometry of the hand can be used for identification.
Nowadays, many computer centers, bank vaults, military
installations and other sensitive areas have considered using
biometric security systems, because their rate of mistaken
acceptance of outsiders and rejection of authorized insiders
is extremely low.
The individuality of the vocal signature is the basis of
one kind of biometric security system: voice verification.
The voice verifier described here is a developmental system
at American Telephone and Telegraph. All a person needs to do
is repeat a particular phrase several times. The computer
samples, digitizes and stores what was said, then builds up a
voice signature that makes allowances for an individual's
characteristic variations. The theory of voice verification
is very simple: it uses the characteristics of a voice, its
acoustic strengths. To isolate personal characteristics
within these fluctuations, the computer breaks the sound into
its component frequencies and analyzes how they are
distributed. If someone wants to steal information from your
computer, that person would need to have the same voice as
you, which is practically impossible.
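The step of "breaking the sound into its component
frequencies" is essentially Fourier analysis. The following
is only a minimal sketch that runs a naive discrete Fourier
transform over a made-up 440 Hz test tone; the sample rate,
tone and threshold are illustrative assumptions, and a real
verifier such as AT&T's is far more sophisticated.

// spectrum_sketch.cpp -- naive DFT illustrating "component frequencies".
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    const double pi         = std::acos(-1.0);
    const double sampleRate = 8000.0;   // samples per second (assumed)
    const int    n          = 256;      // samples analysed

    // Synthesize a short 440 Hz tone standing in for a spoken phrase.
    std::vector<double> signal(n);
    for (int t = 0; t < n; ++t)
        signal[t] = std::sin(2.0 * pi * 440.0 * t / sampleRate);

    // Naive DFT: measure how strongly each frequency bin is present.
    for (int k = 1; k < n / 2; ++k) {
        double re = 0.0, im = 0.0;
        for (int t = 0; t < n; ++t) {
            double angle = 2.0 * pi * k * t / n;
            re += signal[t] * std::cos(angle);
            im -= signal[t] * std::sin(angle);
        }
        double magnitude = std::sqrt(re * re + im * im);
        if (magnitude > n / 4.0)        // report only the strong components
            std::cout << "strong component near "
                      << k * sampleRate / n << " Hz\n";
    }
}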
Besides using voices for identification, we can use
fingerprints to verify a person's identity, because no two
fingerprints are exactly alike. In a fingerprint verification
system, the user places one finger on a glass plate; light
flashes inside the machine, reflects off the fingerprint and is
picked up by an optical scanner. The scanner transmits the
information to the computer for analysis. After that, security
experts can verify the identity of that person from that
information.
Finally, the last biometric security system is the geometry
of the hand. In this system, the computer uses a
sophisticated scanning device to record the measurements of
each person's hand. With an overhead light shining down on
the hand, a sensor underneath the plate scans the fingers
through the glass slots, recording light intensity from the
fingertips to the webbing where the fingers join the palm.
After passing the computer's check, the person can use the
computer and retrieve data from it.
Although many security systems have been invented, they
are useless if people keep thinking that stealing information
is not a serious crime. Therefore, people need to pay more
attention to computer crime and fight against those hackers,
rather than relying only on computer security systems to
protect the computer.
Why do we need to protect our computers?
It is a question few people would have thought to ask not so
long ago. Today, however, everyone knows how important and
useful a computer security system is.
Computers have become more and more important and
helpful. You can store a large amount of information or data
on a small memory chip in a personal computer. The hard disk
of a computer system is like a bank: it contains a lot of
costly material, such as your diary, the financial records of
a trading company or secret military information. Just as a
bank hires security guards, a computer security system can be
used to prevent the outflow of information, whether from the
national defense industry or from the personal diary on your
computer.
Nevertheless, there is a price that one might expect to
pay for the tools of security: equipment ranging from locks
on doors to computerized gate-keepers that stand watch
against hackers, and special software that prevents employees
from stealing data from the company's computer. The bill can
range from hundreds of dollars to many millions, depending on
the degree of assurance sought.
Although it costs a lot of money to create a computer security
system, it is worth it, because the data in a computer can easily
be erased or destroyed by many kinds of hazards. For example, a
power supply problem or a fire can destroy all the data in a
computer company. In 1987, at a computer centre inside the
Pentagon, the US military's sprawling headquarters near
Washington, DC, a 300-watt light bulb was left burning inside a
vault where computer tapes were stored. In time, the bulb
generated so much heat that the ceiling began to smoulder. When
the door was opened, air rushing into the room brought the fire to
life. Before the flames could be extinguished, they had spread to
consume three computer systems worth a total of $6.3 million.
Besides such accidental hazards, people themselves are a major
cause of data leaking from computers. Two kinds of people can get
inside a security system and steal the data it protects. One is
the trusted employee who is deliberately allowed into the computer
system, such as a programmer, operator or manager. The other is
the young amateur who experiments at night: the hacker.
Consider the trusted workers first. They are the group that can
most easily turn criminal, directly or indirectly. They may steal
the information in the system and sell it to someone else for a
large profit. On the other hand, they may be bribed by someone who
wants the data, because it can cost a criminal far less in time
and money to bribe a disloyal employee than to crack the security
system.
Besides disloyal workers, hackers are also very dangerous. The
term "hacker" originated at M.I.T. as student jargon for
classmates who laboured in the computer lab at night. In the
beginning, hackers were not particularly dangerous; at most they
stole hints for university tests. By the early 1980s, however,
hackers had become a group of criminals who steal information from
commercial companies and government departments.
What can we use to protect the computer?
We have discussed the reasons for using a computer security
system, but what kinds of tools can we use to protect the
computer? The most common one is a password system. Passwords are
usually a multi-user computer system's first line of defense
against intrusion. A password may be any combination of alphabetic
and numeric characters, up to a maximum length set by the
particular system; most systems can accommodate passwords of up to
40 characters. However, a long password is easily forgotten, so
people may write it down, which immediately creates a security
risk. Some people use their first name or another significant
word. With a dictionary of 2,000 common names, for instance, an
experienced hacker can crack such a password within ten minutes.
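A system can turn that weakness around by checking new passwords against the same kind of dictionary before accepting them. A minimal sketch (the word list and length rule here are illustrative assumptions):

    COMMON_NAMES = {"john", "mary", "david", "susan"}   # stand-in for a 2,000-name dictionary

    def is_weak_password(password):
        # Reject choices a dictionary attack would find within minutes:
        # too short, or a common name with nothing else added.
        return len(password) < 8 or password.lower() in COMMON_NAMES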
Besides the password system, card-keys are also commonly used.
Each kind of card-key employs an identifying number or password
encoded in the card itself, and all are produced by techniques
beyond the reach of the average computer criminal. Three types of
card are commonly used: the magnetic watermark card, the optical
memory card and the smart card.
However, both of these tools can easily be learned or taken by
other people: passwords are often forgotten by users, and
card-keys can be copied or stolen. Therefore, we need a higher
level of computer security. Biometric devices offer safer
protection for the computer; they can reduce the probability of
mistakenly accepting an outsider to an extremely low level.
Biometric devices are instruments that perform mathematical
analyses of biological characteristics. However, the time required
to pass the check should not be too long, and the system should
not inconvenience the user, as one requiring people to remove
their shoes and socks for footprint verification would.
The individuality of the vocal signature is the basis of one
kind of biometric security system. Although still in the
experimental stage, reliable computer systems for voice
verification would be useful for both on-site and remote user
identification. The voice verifier described here is a
developmental system at American Telephone and Telegraph.
Enrollment would require the user to repeat a particular phrase
several times. The computer would sample, digitize and store each
reading of the phrase and then, from the data, build a voice
signature that would make allowances for an individual's
characteristic variations.
Another biometric device measures the act of writing. It
consists of a biometric pen and a sensor pad. The pen converts a
signature into a set of three electrical signals through one
pressure sensor and two acceleration sensors. The pressure sensor
registers changes in the writer's downward pressure on the pen
point, while the two acceleration sensors measure the vertical and
horizontal movement of the pen.
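One way the three recorded signals might be compared with the enrolled signature is sketched below (the sampling format and the use of a simple average difference are illustrative assumptions, not the actual device's method):

    import numpy as np

    def signature_distance(stored, fresh):
        # stored and fresh are arrays of shape (3, n): one row each for pen
        # pressure, vertical acceleration and horizontal acceleration,
        # sampled at the same rate. A smaller distance means a closer match.
        stored = np.asarray(stored, dtype=float)
        fresh = np.asarray(fresh, dtype=float)
        n = min(stored.shape[1], fresh.shape[1])
        return float(np.mean(np.abs(stored[:, :n] - fresh[:, :n])))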
The third device scans the pattern inside the eye. It uses an
infrared beam that scans the retina in a circular path. A detector
in the eyepiece measures the intensity of the light reflected from
different points. Because blood vessels do not absorb and reflect
the same quantities of infrared as the surrounding tissue, the
eyepiece sensor records the vessels as an intricate dark pattern
against a lighter background. The device samples light intensity
at 320 points around the path of the scan, producing a digital
profile of the vessel pattern. Enrollment can take as little as 30
seconds and verification can be even faster, so a user can pass
the check quickly while intruders are rejected accurately.
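A sketch of how the 320-point profile might be compared at verification time (the threshold and the simple averaging are illustrative assumptions):

    def retina_match(enrolled_profile, scanned_profile, max_difference=0.08):
        # Each profile is 320 light-intensity readings taken around the
        # circular scan path; low readings mark blood vessels. The scan is
        # accepted if the average difference stays below the threshold.
        if len(enrolled_profile) != 320 or len(scanned_profile) != 320:
            return False
        diffs = [abs(a - b) for a, b in zip(enrolled_profile, scanned_profile)]
        return sum(diffs) / len(diffs) <= max_difference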
The last device we want to discuss maps the intricacies of a
fingerprint. In this verification system, the user places one
finger on a glass plate; light flashes inside the machine,
reflects off the fingerprint and is picked up by an optical
scanner. The scanner transmits the information to the computer for
analysis.
Although scientists have invented many kinds of computer
security systems, no combination of technologies promises
unbreakable security. Experts in the field agree that someone with
sufficient resources can crack almost any computer defense.
Therefore, the most important factor is people's conduct. If
everyone behaved well, there would be no need for complicated
security systems to protect the computer.
----------------
f:\12000 essays\sciences (985)\Computer\Computer Revolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Computer Revolution
If I were to write a history book of the years from 1981 to 1996, I would
put computers on the cover. Computers, you may ask? Yes, computers, because if there
were suddenly no computers in the world, there would be total chaos. People could not
communicate, commute, make business transactions, purchase things, or do most things in
their daily routine, because even power plants use computers to control the production of
electricity.
Computers have evolved extremely rapidly in the past fifteen years. Ten
years ago, all you could really do with a computer was make mathematical
calculations and type documents, and doing even that required typing a series of complex
codes that took a great deal of training to learn. Then the Apple computer company put a
far simpler programming language, BASIC, into the hands of its users, with commands
that were words that made sense in context. BASIC was a major development in the
computer industry, because it made computers accessible to the average American. This
helped greatly in proving that computers were no longer just toys and that they had a very
useful purpose. Even so, most people still felt the cost was too great for a glorified
typewriter.
Several years after the introduction of BASIC, Apple
introduced a new line of computers called the Macintosh. These Macintosh computers
were extremely easy to use and cost about the same as a computer that used BASIC.
Apple's business exploded with the Mac; Macintoshes were put in schools and millions of
homes, proving that the computer was an extremely useful tool after all. The Macintosh
made such an impact on the computer industry that IBM and Microsoft joined forces
around the MS-DOS system. The MS-DOS system was the basis of the Windows
program, which made Bill Gates the multi-billionaire that he is. With Windows and the
Apple system, the modem, which had been around for several years, could finally be used
to its full potential. Instead of linking one computer to another, millions of computers
could now be linked to massive mainframes run by on-line services such as America
Online or Prodigy. People finally had affordable access to the World Wide Web and
could communicate with people across the street or across the world. The Internet is used
by millions of people across the world each day for a vast variety of reasons: getting
help with homework, reading a magazine, getting business information such as stock
quotes, planning a trip and making reservations, sending and receiving e-mail, or even
listening to music and watching video clips.
Businesses would come to a grinding halt if their computers suddenly
stopped. Business people would not be able to communicate with one another, because they
could not use phones, pagers, cellular phones, fax machines, or e-mail. People could not
write documents because many offices do not even have a typewriter. Business is not the
only aspect of our lives affected by computers: we could not buy things at the store,
use most appliances in our homes, drive our cars, or even get electricity, because the
power companies use computers to control the flow of electricity.
In the future, computers will play an even more important role in our lives.
Computers will link the citizens of this country with the government, someday making it
possible for citizens to vote directly on each bill that comes up for a vote. This would
make America the first true direct democracy since the Greeks.
f:\12000 essays\sciences (985)\Computer\Computer Security 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chapter # 5
1 - Define encryption and explain how it is used to protect transmission of information.
Encryption is a method of scrambling data in some manner during transmission. In periods of war, the use of encryption becomes paramount so that messages are not intercepted by the opposing forces. There are a number of different ways to protect data during transmission, such as substitution (character-for-character replacement), in which one unit (usually a character) of cipher text (unintelligible text or signals produced through an encryption system) is substituted for a corresponding unit of plain text (the intelligible text or signals that can be read without decryption), according to the algorithm in use and the specific key.
The other method is transposition (rearrangement of characters), an encryption process in which units of the original plain text (usually individual characters) are simply moved around; they appear unchanged in the cipher text except for their relative location.
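The two techniques can be illustrated with toy examples (a simple alphabet shift and a fixed column order; real ciphers use far stronger keys and algorithms):

    def substitute(plain, shift=3):
        # Substitution: each letter is replaced by another according to a key.
        out = []
        for ch in plain:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    def transpose(plain, key=(2, 0, 3, 1)):
        # Transposition: the characters themselves are unchanged, but each
        # block of four is written out in the order given by the key.
        padded = plain + " " * (-len(plain) % len(key))
        blocks = [padded[i:i + len(key)] for i in range(0, len(padded), len(key))]
        return "".join("".join(block[k] for k in key) for block in blocks)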
Case Study (Bank of Shenandoah Valley)
While both encryption and authentication methods provide some measure of security, each implements security in a different way. Before any method is chosen, the two most important factors in a security implementation have to be determined: the level of security needed and the cost involved, so that the appropriate steps can be taken to ensure a safe and secure environment. The Bank of Shenandoah Valley is in a type of business in which a high level of security is required; therefore, I would suggest an encryption method with a complex algorithm. Although authentication is a secure method as well, it is not as strong as encryption with a complex algorithm, which has been used by the military in wartime, where high levels of security are a must. In wartime, the use of encryption becomes paramount so that messages are not intercepted by the opposing forces; this is a good example of how reliable an encrypted message can be when used within the appropriate guidelines.
Chapter # 6
4- Describe the three different database models - hierarchical, relational and network.
For data to be effectively transformed into useful information, it must be organized in a logical, meaningful way. Data is generally organized in a hierarchy that starts with the smallest unit (or piece of data) used by the computer and then progresses into the database, which holds all the information about the topic. The data is organized in a top-down or inverted-tree structure. At the top of every tree or hierarchy is the root segment or element, which corresponds to the main record type. The hierarchical model is best suited to situations in which the logical relationship between data can be properly presented with the one-parent, many-children (one-to-many) approach. In a hierarchical database, all relationships are one-to-one or one-to-many, but no group of data can be on the "many" side of more than one relationship.
A network database is a database in which all types of relationships are allowed. The network database is an extension of the hierarchical model, where the various levels of one-to-many relationships are replaced with owner-member relationships in which a member may have many owners. In a network database structure, more than one path can often be used to access data. Databases structured according to either the hierarchical model or the network model suffer from the same deficiency: once the relationships are established between the data elements, it is difficult to modify them or to create new relationships.
A relational database describes data using a standard tabular format in which all data elements are placed in two-dimensional tables that are the logical equivalent of files. In relational databases, data are accessed by content rather than by address (in contrast with hierarchical and network databases); relational databases locate data logically rather than physically. A relational database has no predetermined relationships between the data, such as one-to-many sets or one-to-one.
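The relational idea of relating tables by content rather than by stored pointers can be sketched briefly (the table names and values below are made up for illustration):

    customers = [
        {"customer_id": 1, "name": "Acme Trucking"},
        {"customer_id": 2, "name": "Valley Freight"},
    ]
    orders = [
        {"order_id": 10, "customer_id": 1, "amount": 500},
        {"order_id": 11, "customer_id": 1, "amount": 750},
    ]

    def orders_for(name):
        # A one-to-many relationship expressed by matching values, not by
        # fixed parent-child links as in the hierarchical model.
        ids = {c["customer_id"] for c in customers if c["name"] == name}
        return [o for o in orders if o["customer_id"] in ids]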
Case Study (D'Angelo Transportation, Inc.)
There are a number of factors which ought to be discussed:
· How much of the system should be computerized?
· Should we purchase software or build it based on what we are using in the current system (make-versus-buy analysis)?
· If we decide to build the new system, should we design an on-line or a batch system?
· Should we design the system for a mainframe computer, a minicomputer, microcomputers or some combination?
· What information technologies might be useful for this application?
The security issues consist of the level of security required and the cost involved in this conversion. A database system is vulnerable to criminal attack at many levels. Typically, it is the end user rather than the programmer who is often (but not always) guilty of simple misuse of applications. Thus, it is essential that the total system be secure. The two classifications of security violations are malicious and accidental.
One of the most emphasized and significant factors in any program development is the early involvement of the end users. This gives the programmer as well as the end user a clear view of the new system's functionality and helps them adapt to the new working environment more efficiently and effectively. Continuous training of the staff is essential to meeting the objectives of the organization, since it provides the skills and expertise necessary to deal with daily issues in the new system.
f:\12000 essays\sciences (985)\Computer\Computer Security 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As defined in Computer Security Basics by O'Reilly & Associates, Inc., biometrics is the use of a person's unique physiological, behavioral, and morphological characteristics to provide positive personal identification.
Biometric systems currently available examine fingerprints, handprints, and retina patterns. Systems that are close to biometrics but not classified as such are behavioral systems, such as voice, signature and keystroke systems; they test patterns of behavior rather than parts of the body.
It seems that in the world of biometrics, the more effective the device, the less willing people are to accept it. Retina pattern devices are the most reliable, but most people hate the idea of a laser shooting into their eye. Yet people don't mind something such as monitoring keystroke patterns, which is not nearly as effective.
Biometric verification is forecast to be a multibillion dollar market in this decade. There is no doubt that financial credit and debit cards are going to be the biggest part of the biometric market. There are also many significant niche markets which are growing rapidly.
For example, biometric identification cards are being used at a university in Georgia to allow students to get their meals, and in a Maryland day care center to ensure that the right person picks up the right child. In Los Angeles, they are using fingerprints to stop welfare fraud. And they're also being used by frequent business travellers for rapid transit through immigration and customs in Holland, and now at JFK and Newark airports in the United States. It could also be used simply to prevent one employee from "punching in" for someone else, or to prevent someone from opening up an account at a bank using a false name. Then there is also the security access market: access to computer databases, to premises and a
variety of other areas.
The Sentry program made by Fingerprint Technologies uses several devices at once. The system first prompts for a user name and password; then the user's fingerprint scan must match what is on record. It can also use a video camera to capture real-time photographs which can be incorporated into the database. Scanning and gaining entrance to the building takes from 6 to 10 seconds, depending on what other information the operator wishes the user to enter. The system also keeps three of the individual's finger patterns on record in case one of the fingers is injured.
Biometrics is still relatively new to most people, and good equipment will remain expensive to purchase until it becomes more popular and the technology improves. As people become more aware of how the systems work, they will become more accepting of the more secure systems and not shy away from them as much. The future of access control security is literally in the hands, eyes, voices, keystrokes, and signatures of everyone.
f:\12000 essays\sciences (985)\Computer\COMPUTER SECURITY ANALYSIS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
===================================
=INTRODUCTION TO DENIAL OF SERVICE=
===================================
Brian --------
Bri000001@aol.com
Last updated: Friday, March 28, 1997 10:19:23 AM
.0. FOREWORD
.A. INTRODUCTION
.A.1. WHAT IS A DENIAL OF SERVICE ATTACK?
.A.2. WHY WOULD SOMEONE CRASH A SYSTEM?
.A.2.1. INTRODUCTION
.A.2.2. SUB-CULTURAL STATUS
.A.2.3. TO GAIN ACCESS
.A.2.4. REVENGE
.A.2.5. POLITICAL REASONS
.A.2.6. ECONOMICAL REASONS
.A.2.7. NASTINESS
.A.3. ARE SOME OPERATING SYSTEMS MORE SECURE?
.B. SOME BASIC TARGETS FOR AN ATTACK
.B.1. SWAP SPACE
.B.2. BANDWIDTH
.B.3. KERNEL TABLES
.B.4. RAM
.B.5. DISKS
.B.6. CACHES
.B.7. INETD
.C. ATTACKING FROM THE OUTSIDE
.C.1. TAKING ADVANTAGE OF FINGER
.C.2. UDP AND SUNOS 4.1.3.
.C.3. FREEZING UP X-WINDOWS
.C.4. MALICIOUS USE OF UDP SERVICES
.C.5. ATTACKING WITH LYNX CLIENTS
.C.6. MALICIOUS USE OF telnet
.C.7. MALICIOUS USE OF telnet UNDER SOLARIS 2.4
.C.8. HOW TO DISABLE ACCOUNTS
.C.9. LINUX AND TCP TIME, DAYTIME
.C.10. HOW TO DISABLE SERVICES
.C.11. PARAGON OS BETA R1.4
.C.12. NOVELLS NETWARE FTP
.C.13. ICMP REDIRECT ATTACKS
.C.14. BROADCAST STORMS
.C.15. EMAIL BOMBING AND SPAMMING
.C.16. TIME AND KERBEROS
.C.17. THE DOT DOT BUG
.C.18. SUNOS KERNEL PANIC
.C.19. HOSTILE APPLETS
.C.20. VIRUS
.C.21. ANONYMOUS FTP ABUSE
.C.22. SYN FLOODING
.C.23. PING FLOODING
.C.24. CRASHING SYSTEMS WITH PING FROM WINDOWS 95 MACHINES
.C.25. MALICIOUS USE OF SUBNET MASK REPLY MESSAGE
.C.26. FLEXlm
.C.27. BOOTING WITH TRIVIAL FTP
.D. ATTACKING FROM THE INSIDE
.D.1. KERNEL PANIC UNDER SOLARIS 2.3
.D.2. CRASHING THE X-SERVER
.D.3. FILLING UP THE HARD DISK
.D.4. MALICIOUS USE OF eval
.D.5. MALICIOUS USE OF fork()
.D.6. CREATING FILES THAT IS HARD TO REMOVE
.D.7. DIRECTORY NAME LOOKUPCACHE
.D.8. CSH ATTACK
.D.9. CREATING FILES IN /tmp
.D.10. USING RESOLV_HOST_CONF
.D.11. SUN 4.X AND BACKGROUND JOBS
.D.12. CRASHING DG/UX WITH ULIMIT
.D.13. NETTUNE AND HP-UX
.D.14. SOLARIS 2.X AND NFS
.D.15. SYSTEM STABILITY COMPROMISE VIA MOUNT_UNION
.D.16. trap_mon CAUSES KERNEL PANIC UNDER SUNOS 4.1.X
.E. DUMPING CORE
.E.1. SHORT COMMENT
.E.2. MALICIOUS USE OF NETSCAPE
.E.3. CORE DUMPED UNDER WUFTPD
.E.4. ld UNDER SOLARIS/X86
.F. HOW DO I PROTECT A SYSTEM AGAINST DENIAL OF SERVICE ATTACKS?
.F.1. BASIC SECURITY PROTECTION
.F.1.1. INTRODUCTION
.F.1.2. PORT SCANNING
.F.1.3. CHECK THE OUTSIDE ATTACKS DESCRIBED IN THIS PAPER
.F.1.4. CHECK THE INSIDE ATTACKS DESCRIBED IN THIS PAPER
.F.1.5. EXTRA SECURITY SYSTEMS
.F.1.6. MONITORING SECURITY
.F.1.7. KEEPING UP TO DATE
.F.1.8. READ SOMETHING BETTER
.F.2. MONITORING PERFORMANCE
.F.2.1. INTRODUCTION
.F.2.2. COMMANDS AND SERVICES
.F.2.3. PROGRAMS
.F.2.4. ACCOUNTING
.G. SUGGESTED READING
.G.1. INFORMATION FOR DEEPER KNOWLEDGE
.G.2. KEEPING UP TO DATE INFORMATION
.G.3. BASIC INFORMATION
.H. COPYRIGHT
.I. DISCLAIMER
.0. FOREWORD
------------
In this paper I have tried to answer the following questions:
- What is a denial of service attack?
- Why would someone crash a system?
- How can someone crash a system?
- How do I protect a system against denial of service attacks?
I also have a section called SUGGESTED READING where you can find
pointers to good free material that can give you a deeper
understanding of a topic.
Note that I have very limited experience with Macintosh, OS/2 and
Windows, so most of the material is therefore written for Unix use.
You can always find the latest version at the following address:
http://www.student.tdb.uu.se/~t95hhu/secure/denial/DENIAL.TXT
Feel free to send comments, tips and so on to the following address:
t95hhu@student.tdb.uu.se
.A. INTRODUCTION
~~~~~~~~~~~~~~~~
.A.1. WHAT IS A DENIAL OF SERVICE ATTACK?
-----------------------------------------
Denial of service means knocking services offline without
permission, for example by crashing the whole system. This kind of
attack is easy to launch, and it is hard to protect a system
against it. The basic problem is that Unix assumes that users on
the system, or on other systems, will be well behaved.
.A.2. WHY WOULD SOMEONE CRASH A SYSTEM?
---------------------------------------
.A.2.1. INTRODUCTION
--------------------
Why would someone crash a system? I can think of several reasons,
each presented more precisely in its own section below, but in
short:
.1. Sub-cultural status.
.2. To gain access.
.3. Revenge.
.4. Political reasons.
.5. Economical reasons.
.6. Nastiness.
I think that numbers one and six are the most common today, but that
numbers four and five will become the more common ones in the future.
.A.2.2. SUB-CULTURAL STATUS
---------------------------
After all the information about SYN flooding was published, a bunch
of such attacks were launched around Sweden. Most of these attacks
were not part of an IP-spoofing attack; they were "only" denial of
service attacks. Why?
I think that hackers attack systems as a sub-cultural pseudo-career,
and that many denial of service attacks, SYN flooding in this
example, are performed for these reasons. I also think that many
hackers begin their career with denial of service attacks.
.A.2.3. TO GAIN ACCESS
----------------------
Sometimes a denial of service attack can be part of an attack to
gain access to a system. At the moment I can think of these reasons
and specific holes:
.1. Some older X-lock versions could be crashed with a
method from the denial of service family, leaving the system
open. Physical access was needed to use the workspace afterwards.
.2. SYN flooding could be part of an IP-spoofing attack.
.3. Some programs have holes during startup that can be
used to gain root, for example SSH (secure shell).
.4. During an attack it can be useful to crash other machines
in the network or to deny certain people the ability to access
the system.
.5. A system being booted can also sometimes be subverted,
especially rarp boots. If we know which port the machine listens
on during the boot (69 could be a good guess), we can send false
packets to it and almost totally control the boot.
.A.2.4. REVENGE
---------------
A denial of service attack can be part of an act of revenge against
a user or an administrator.
.A.2.5. POLITICAL REASONS
-------------------------
Sooner or later, organizations new and old will understand the
potential of destroying computer systems and will find tools to do it.
For example, imagine bank A loaning company B money to build a
factory that threatens the environment. Organization C therefore
crashes A's computer system, maybe with help from an employee. The
attack could cost A a great deal of money if the timing is right.
.A.2.6. ECONOMICAL REASONS
--------------------------
Imagine a small company A moving into a business totally dominated by
company B. A's and B's customers place their orders by computer and
depend heavily on the order being completed within a specific time
(A and B could be stock trading companies). If A or B can't execute
the order, the customers lose money and change company.
As part of a business strategy, A pays a computer expert a sum of
money to crash B's computer systems a number of times. A year later,
A is the dominant company.
.A.2.7. NASTINESS
-----------------
I know a person who found a workstation where the user had forgotten
to log out. He sat down and wrote a program that ran kill -9 -1 at a
random time at least 30 minutes after login, and placed a call to
the program in the profile file. That is nastiness.
.A.3. ARE SOME OPERATING SYSTEMS MORE SECURE?
---------------------------------------------
This is a hard question to answer, and I don't think there is much
to gain from comparing different Unix platforms. You can't say that
one Unix is more secure against denial of service; it is all up to
the administrator.
A comparison between Windows 95 and NT on one side and Unix on the
other could, however, be interesting.
Unix systems are much more complex and have hundreds of built-in
programs and services. This opens up many ways to crash the system
from the inside.
In a normal Windows NT or 95 network there are few ways to crash
the system, although there are methods that will always work.
That suggests that no big difference between Microsoft and Unix can
be seen regarding inside attacks. But there are a couple of
points left:
- Unix has many more tools and programs for discovering an
attack and monitoring the users. Watching what another user
is up to under Windows is very hard.
- The average Unix administrator probably also has much more
experience than the average Microsoft administrator.
These two points suggest that Unix is more secure against inside
denial of service attacks.
A comparison between Microsoft and Unix regarding outside attacks
is much more difficult. However, I would say that the average
Microsoft system on the Internet is more secure against outside
attacks, because it normally runs far fewer services.
.B. SOME BASIC TARGETS FOR AN ATTACK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.B.1. SWAP SPACE
----------------
Most systems have several hundred megabytes of swap space to
service client requests. The swap space is typically used
for forked child processes, which have a short lifetime, so in
normal use the swap space is almost never heavily loaded. A
denial of service could be based on a method that tries to fill
up the swap space.
.B.2. BANDWIDTH
---------------
If the load on the bandwidth is too high, the network becomes
useless. Most denial of service attacks influence the bandwidth
in some way.
.B.3. KERNEL TABLES
-------------------
It is trivial to overflow the kernel tables, which will cause
serious problems on the system. Systems with write-through
caches and small write buffers are especially sensitive.
Kernel memory allocation is also a sensitive target.
The kernel has a kernelmap limit: if the system reaches this
limit, it cannot allocate more kernel memory and must be rebooted.
The kernel memory is not only used for RAM, CPUs, screens and so
on; it is also used for ordinary processes. This means that any
system can be crashed, and with a mean (or in some sense good)
algorithm pretty fast.
For Solaris 2.X, the sar command measures and reports how much
kernel memory the system is using, but for SunOS 4.X there is no
such command, meaning that under SunOS 4.X you cannot even get a
warning. If you do use So
f:\12000 essays\sciences (985)\Computer\COMPUTER SECURITY.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The book Computer Security, written by Time-Life Books, explains what computer security is, how it works, and how it affects people's lives. Without computer security, people's private information can be stolen right off their computers.
Computer security is exactly what it sounds like: security on a computer to prevent unauthorized people from accessing it. Securing a computer is difficult, because a computer is like a mechanical human brain: anyone who knows how it works can make it work, and anyone who does not can do nothing at all with it. Computer security addresses this by making it possible to access a computer only with a password, or by physically locking it up.
Computer security works in several ways, most of them involving passwords or physical locks. The basic password method prompts a computer user to enter a password before they can access any programs or information contained in the computer. Another password method has the user carry a digital screen that fits in a pocket; this screen receives an encrypted message and displays numbers that change every few minutes, and those numbers form the password needed for the next few minutes to access the computer. This method is fairly new, and it is also better, because the basic password method is not foolproof: the passwords are stored in the computer, and if a computer-literate person accessed that information, they could get into the computer looking as if they were someone else, having obtained that person's password. In the future, as technology improves, a computer may be able to use your hand, inner eye, or voice print; these methods are not in common use yet, but they are right around the corner. The only other way to absolutely secure a computer is to lock it up, but with today's computer networks that cannot easily be done unless you are dealing with a single personal computer.
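The changing-number token can be sketched as both the pocket device and the host deriving a short code from a shared secret and the current time window (this illustrates the general idea only, not any particular vendor's scheme; the hash and interval are assumptions):

    import hashlib, time

    def token_code(shared_secret, interval_seconds=300):
        # Both sides compute the same six-digit code from the shared secret
        # and the current five-minute window, so a stolen code stops
        # working a few minutes later.
        window = int(time.time() // interval_seconds)
        digest = hashlib.sha256(f"{shared_secret}:{window}".encode()).hexdigest()
        return int(digest, 16) % 1_000_000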
Computer security has become such a big issue because of the huge losses businesses suffer when their computers are not fully secure from unwanted visitors. For example, a big business might spend millions of dollars on research and development only to have the results stolen by another company for its own profit. Businesses lose three hundred million to about five billion dollars yearly to these computer criminals, called hackers or computer information thieves.
Computer security has come a long way since the creation of computers, but we still cannot fully believe our computers are secure and safe from unwanted snoopers. Strong computer security is very hard to create at present, because anyone who knows what to do can get in. Right now, knowing what to do is easy; if getting into a computer required someone's hand, it would be a little more difficult.
f:\12000 essays\sciences (985)\Computer\Computer Secutity 4.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Viruses: Past, Present And Future
In our health-conscious society, viruses of any type are an enemy. Computer viruses are especially pernicious. They can and do strike any unprotected computer system, with results that range from merely annoying to the disastrous, time-consuming and expensive loss of software and data. And with corporations increasingly using computers for enterprise-wide, business-critical computing, the costs of virus-induced down-time are growing along with the threat from viruses themselves. Concern is justified - but unbridled paranoia is not. Just as proper diet, exercise and preventative health care can add years to your life, prudent and cost-effective anti-virus strategies can minimize your exposure to computer viruses.
· A history of computer viruses
· Who writes viruses - and how they can reach you
· The early warning symptoms of virus infection
· The real numbers behind the growth of viruses and their costs
· How viruses work - and how virus protection can stop them
What, Exactly, Is A Computer Virus?
A computer virus is a program designed to replicate and spread, generally with the victim being oblivious to its existence. Computer viruses spread by attaching themselves to other programs (e.g., word processors or spreadsheets application files) or to the boot sector of a disk. When an infected file is activated - or executed - or when the computer is started from an infected disk, the virus itself is also executed. Often, it lurks in computer memory, waiting to infect the next program that is activated, or the next disk that is accessed.
What makes viruses dangerous is their ability to perform an event. While some events are harmless (e.g. displaying a message on a certain date) and others annoying (e.g., slowing performance or altering the screen display), some viruses can be catastrophic by damaging files, destroying data and crashing systems.
How Do Infections Spread?
Viruses come from a variety of sources. Because a virus is software code, it can be transmitted along with any legitimate software that enters your environment:
· In a 1991 study of major U.S. and Canadian computer users by the market research firm Dataquest for the National Computer Security Association, most users blamed an infected diskette (87 percent). Forty-three percent of the diskettes responsible for introducing a virus into a corporate computing environment were brought from home.
· Nearly three-quarters (71 percent) of infections occurred in a networked environment, making rapid spread a serious risk. With networking, enterprise computing and inter-organizational communications on the increase, infection during telecommunicating and networking is growing.
· Seven percent said they had acquired their virus while downloading software from an electronic bulletin board service.
· Other sources of infected diskettes included demo disks, diagnostic disks used by service technicians and shrink-wrapped software disks - contributing six percent of reported infections.
What Damage Can Viruses Do To My System?
As mentioned earlier, some viruses are merely annoying, others are disastrous. At the very least, viruses expand file size and slow real-time interaction, hindering performance of your machine. Many virus writers seek only to infect systems, not to damage them - so their viruses do not inflict intentional harm. However, because viruses are often flawed, even benign viruses can inadvertently interact with other software or hardware and slow or stop the system. Other viruses are more dangerous. They can continually modify or destroy data, intercept input/output devices, overwrite files and reformat hard disks.
What Are The Symptoms Of Virus Infection?
Viruses remain free to proliferate only as long as they exist undetected. Accordingly, the most common viruses give off no symptoms of their infection. Anti-virus tools are necessary to identify these infections. However, many viruses are flawed and do provide some tip-offs to their infection. Here are some indications to watch for:
· Changes in the length of programs
· Changes in the file date or time stamp
· Longer program load times
· Slower system operation
· Reduced memory or disk space
· Bad sectors on your floppy
· Unusual error messages
· Unusual screen activity
· Failed program execution
· Failed system bootups when booting or accidentally booting from the A: drive.
· Unexpected writes to a drive.
The Virus Threat: Common - And Growing
How real is the threat from computer viruses? Every large corporation and organization has experienced a virus infection - most experience them monthly. According to data from IBM's High Integrity Computing Laboratory, corporations with 1,000 PCs or more now experience a virus attack every two to three months - and that frequency will likely double in a year.
The market research firm Dataquest concludes that virus infection is growing exponentially. It found nearly two thirds (63%) of survey respondents had experienced a virus incident (affecting 25 or fewer machines) at least once, with nine percent reporting a disaster affecting more than 25 PCs. The 1994 Computer Crime Survey by Creative Strategies Research International and BBS Systems of San Francisco found 76 percent of U.S. respondents had experienced infection in 1993 alone.
If you have only recently become conscious of the computer virus epidemic, you are not alone. Virus infections became a noticeable problem to computer users only around 1990 - but it has grown rapidly since then. According to a study by Certus International of 2,500 large U.S. sites with 400 or more PCs, the rate of infection grew by 600 percent from 1994 to 1995.
More Viruses Mean More Infections
Virus infections are a growing problem, in part, because there are more strains of viruses than ever before. In 1986, there were just four PC viruses. New viruses were a rarity, with a virus strain created once every three months. By 1989, a new virus appeared every week. By 1990, the rate rose to once every two days. Now, more than three viruses are created every day - for an average 110 new viruses created in a typical month. From those modest four viruses in 1986, today's computer users face thousands of virus strains.
Number Of Unique Viruses
Here is the frightening part: Most infections today are caused by viruses that are at least six years old. That is, the infections are caused by viruses created no later than 1990, when there were approximately 300 known viruses. Today, there are thousands of viruses. If that pattern of incubation holds, the explosion of new viruses over the past few years could result in another explosion in total infections over the next few years.
The History Of Viruses: How It All Began
Today, the existence of viruses and the need to protect against them are inevitable realities. But it wasn't always so. As recently as the middle 1980s, computer viruses didn't exist. The first viruses were created in university labs - to demonstrate the "potential" threat that such software code could pose. By 1987, viruses began showing up at several universities around the world. Three of the most common of today's viruses - Stoned, Cascade and Friday the 13th - first appeared that year.
Serious outbreaks of some of these viruses began to appear over the next two years. The Datacrime and Friday the 13th viruses became major media events, presaging the concern that would later surround the Michelangelo virus. Perhaps surprisingly, tiny Bulgaria became known as the world's Virus Factory in 1990 because of the high number of viruses created there. The NCSA found that Bulgaria, home of the notorious Dark Avenger, originated 76 viruses that year, making it the world's single largest virus contributor. Analysts attribute Bulgaria's prolific virus output to an abundance of trained but unemployed programmers; with nothing to do, these people tried their hands at virus production, with unfortunately successful results.
This growing activity convinced the computer industry that viruses were serious threats requiring defensive action. IBM created its High Integrity Computing Laboratory to lead Big Blue's anti-virus research effort. Symantec began offering Symantec Anti-Virus, one of the first commercially available virus defenses. These responses came none too soon. By 1991, the first polymorphic viruses - that can, like the AIDS virus in humans, change their shape to elude detection - began to spread and attack in significant numbers. That year too, the total number of viruses began to swell, topping 1,000 for the first time.
Virus creation proliferated, and continues to accelerate, because of the growing population of intelligent, computer-literate young people who appreciate the challenge - but not the ethics - of writing and releasing new viruses. Cultural factors also play a role. The U.S. - with its large and growing population of computer-literate young people - is the second largest source of infection. Elsewhere, Germany and Taiwan are the other major contributors of new viruses.
Another reason for the rapid rise of new viruses is that virus creation is getting easier. The same technology that makes it easier to create legitimate software - Windows-based development tools, for example - is, unfortunately, being applied to virus creation. The so-called Mutation Engine appeared in 1992, facilitating the development of polymorphic viruses. In 1992, the Virus Creation Laboratory, featuring on-line help and pull-down menus, brought virus creation within the reach of even non-sophisticated computer users.
More PCs And Networks Mean More Infections, Too
The growing number of PCs, PC-based networks and businesses relying on PCs are another set of reasons for rising infections: there are more potential victims. For example, in the decade since the invention and popularization of the PC, the installed base of active PCs grew to 54 million by 1990. But that number has already more than doubled (to 112 million PCs in 1993) and climbed to 154 million in 1994.
Not only are PCs becoming more common - they are taking over a rising share of corporate computing duties. A range of networking technologies - including Novell NetWare, Microsoft Windows NT and LAN Manager, LAN Server, OS/2 and Banyan VINES - are allowing companies to downsize from mainframe-based computer systems to PC-based LANs and, now, client-server systems. These systems are more cost-effective and they are being deployed more broadly within organizations for a growing range of mission-critical applications, from finance and sales data to inventory control, purchasing and manufacturing process control.
The current, rapid adoption of client-server computing by business gives viruses fertile new ground for infection. These server-based solutions are precisely the type of computers that are susceptible - if unprotected - to most computer viruses. And because data exchange is the very reason for using client-server solutions, a virus on one PC in the enterprise is far more likely to communicate with - and infect - more PCs and servers than would have been true a few years ago.
Moreover, client-server computing is putting PCs in the hands of many first-time or relatively inexperienced computer users, who are less likely to understand the virus problem. The increased use of portable PCs, remote link-ups to servers and inter-organization-and inter-network e-mail all add to the risk of infections, too. Once a virus infects a single networked computer, the average time required to infect another workstation is from 10 to 20 minutes - meaning a virus can paralyze an entire enterprise in a few hours.
What Is Ahead?
The industry's latest buzz-phrase is "data superhighway" and, although most people haven't thought about those superhighways in the context of virus infections, they should. Any technology that increases communication among computers also increases the likelihood of infection. And the data superhighway promises to expand on today's Internet links with high-bandwidth transmission of dense digital video, voice and data traffic at increasingly cost-effective rates. Corporations, universities, government agencies, non-profit organizations and consumers will be exchanging far more data than ever before. That makes virus protection more important, as well.
In addition to more opportunities for infection, there'll be more and more-damaging strains of virus to do the infecting. Regardless of the exact number of viruses that appear in the next few years, the Mutation Engine, Virus Creation Laboratory and other virus construction kits are sure to boost the virus population. Viruses that combine the worst features of several virus types - such as polymorphic boot sector viruses - are appearing and will become more common. Already, Windows-specific viruses have appeared. Virus writers, and their creations, are getting smarter. In response to the explosion in virus types and opportunities for transmission, virus protection will have to expand, too.
Computer anti-virus program manufacturers hit a speed bump that many also turned to profit: 32-bit applications. DOS and Windows 3.1 used a 16-bit architecture, while 32-bit platforms such as Windows NT, UNIX, and a variety of other server operating systems already had anti-virus programs. McAfee and Symantec, two giants in the anti-virus industry, prepared for the release of a new 32-bit home operating system. In August 1995, Microsoft released Windows 95 for retail sale and it stormed across the nation. A large number of virus problems surfaced in the months after the release, due both to the lack of a readily available 32-bit anti-virus product for the home user and to the fact that old 16-bit anti-virus programs could not detect 32-bit viruses. McAfee introduced Virus Scan 95 and Symantec released Norton Antivirus 95 shortly after the Windows 95 release. As data architectures continue to advance, anti-virus programs will have to be upgraded to handle the new program structures.
The Costs Of Virus Infection
Computer viruses have cost companies worldwide nearly two billion dollars since 1990, with those costs accelerating, according to an analysis of survey data from IBM's High Integrity Computing Laboratory and Dataquest. Global viral costs climbed another 1.9 billion dollars in 1994 alone, though the increase has come at a steadier rate as anti-virus programs have improved significantly.
The costs are so high because of the direct labor expense of cleanup for all infected hard disks and floppies in a typical incident. The indirect expense of lost productivity - an enormous sum - is higher, still. In a typical infection at a large corporate site, technical support personnel will have to inspect all 1,000 PCs. Since each PC user has an average 35 diskettes, about 35,000 diskettes will have to be scanned, too.
Recovery Time For A Virus Disaster (25 PCs)
On average, it took North American respondents to the 1991 Dataquest study four days to recover from a virus episode - and some MIS managers needed fully 30 days to recover. Even more ominously, their efforts were not wholly effective; a single infected floppy disk taken home during cleanup and later returned to the office can trigger a relapse. Some 25 percent of those experiencing a virus attack later suffered such a re-infection by the same virus within 30 days.
That cleanup is costing each of these corporations an average $177,000 in 1993 - and that sum will grow to more than $254,000 in 1994. If you're in an enterprise with 1,000 or more PCs, you can use these figures to estimate your own virus-fighting costs. Take the cost-per-PC ($177 in 1993, $254 in 1994) and multiply it by the number of PCs in your organization.
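In code form, the same back-of-the-envelope estimate looks like this (the per-PC figures are the survey numbers quoted above):

    def estimated_cleanup_cost(number_of_pcs, cost_per_pc=254):
        # 1994 survey estimate: roughly $254 of virus-fighting cost per PC.
        return number_of_pcs * cost_per_pc

    # estimated_cleanup_cost(1000) -> 254000 dollars, matching the figure above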
At a briefing before the U.S. Congress in 1993, NYNEX, one of North America's largest telecommunications companies, described its experience with virus infections
· Since late 1989, the company had nearly 50 reported virus incidents - and believes it experienced another 50 unreported incidents.
· The single user, single PC virus incident is the exception. More typical incidents involved 17 PCs and 50 disks at a time. In the case of a 3Com network, the visible signs of infection did not materialize until after 17 PCs were infected. The LAN was down for a week while the cleanup was conducted.
· Even the costs of dealing with a so-called benign virus are high. A relatively innocuous Jerusalem-B virus had infected 10 executable files on a single system. Because the computer was connected to a token ring network, all computers in that domain had to be scanned for the virus. Four LAN administrators spent two days plus overtime, one technician spent nine hours, a security specialist spent five hours, and most of the 200 PCs on the LAN had to endure 15-minute interruptions throughout a two-day period.
In the October 1993 issue of Virus Bulletin, Micki Krause, Program Manager for Information Security at Rockwell International, outlined the cost of a recent virus outbreak at her corporation:
• In late April 1993, the Hi virus was discovered at a large division of Rockwell located in the U.S. The division is heavily networked with nine file servers and 630 client PCs. The site is also connected to 64 other sites around the world (more than half of which are outside the U.S.). The virus had entered the division on program disks from a legitimate European business partner. One day after the disks arrived, the Hi virus was found by technicians on file servers, PCs and floppy disks. Despite eradication efforts, the virus continued to infect the network throughout the entire month of May.
• 160 hours were spent by internal PC and LAN support personnel to identify and contain the infections. At $45.00 per hour, their efforts cost Rockwell $7,200.
• Rockwell also hired an external consultant to assist Rockwell employees in the cleanup. 200 hours were spent by the consultant, resulting in a cost of $8,000.
• One file server was disconnected from the LAN to prevent the virus from further propagating across the network. The server, used by approximately 100 employees, was down for an entire day. Rockwell estimated the cost of the downtime at $9,000 (100 users @ $45/hr for 8 hours, with users accessing the server, on average, 25% of the normal workday).
• While some anti-virus software was in use, Rockwell purchased additional software for use on both the servers and the client PCs for an additional $19,800.
• Total Cost of the virus incident at Rockwell was $44,000.
Technical Overview
Computer Viruses And How They Work
Viruses are small software programs. At the very least, to be a virus, these programs must replicate themselves. They do this by exploiting computer code, already on the host system. The virus can infect, or become resident in almost any software component, including an application, operating system, system boot code or device driver. Viruses gain control over their host in various ways. Here is a closer look at the major virus types, how they function, and how you can fight them.
File Viruses
Most of the thousands of viruses known to exist are file viruses, including the Friday the 13th virus. They infect files by attaching themselves to a file, generally an executable file - the .EXE and .COM files that control applications and programs. The virus can insert its own code in any part of the file, provided it changes the host's code somewhere along the way, misdirecting proper program execution so that the virus code runs first, rather than the legitimate program. When the file is executed, the virus is executed first.
Most file viruses store themselves in memory. There, they can easily monitor access calls to infect other programs as they're executed. A simple file virus will overwrite and destroy a host file, immediately alerting the user to a problem because the software will not run. Because these viruses are immediately felt, they have less opportunity to spread. More pernicious file viruses cause more subtle or delayed damage - and spread considerably before being detected.
As users move to increasingly networked and client-server environments, file viruses are becoming more common. The challenge for users is to detect and clean this virus from memory, without having to reboot from a clean diskette. That task is complicated because file viruses can quickly infect a range of software components throughout a user's system. Also, the scan technique used to detect viruses can cause further infections; scans open files and file viruses can infect a file during that operation. File viruses such as the Hundred Years virus can infect data files too.
Boot Sector/partition table viruses
While there are only about 200 different boot sector viruses, they make up 75 percent of all virus infections. Boot sector viruses include Stoned, the most common virus of all time, and Michelangelo, perhaps the most notorious. These viruses are so prevalent because they are harder to detect, as they do not change a file's size or slow performance, and are fairly invisible until their trigger event occurs - such as the reformatting of a hard disk. They also spread rapidly. The boot sector virus infects floppy disks and hard disks by inserting itself into the boot sector of the disk, which contains code that's executed during the system boot process. Booting from an infected floppy allows the virus to jump to the computer's hard disk. The virus executes first and gains control of the system boot even before MS-DOS is loaded. Because the virus executes before the operating system is loaded, it is not MS-DOS-specific and can infect any PC operating system platform - MS-DOS, Windows, OS/2, PC-NFS, or Windows NT.
The virus goes into RAM, and infects every disk that is accessed until the computer is rebooted and the virus is removed from memory. Because these viruses are memory resident, they can be detected by running CHKDSK to view the amount of RAM and observe if the expected total has declined by a few kilobytes. Partition table viruses attack the hard disk partition table by moving it to a different sector and replacing the original partition table with its own infectious code. These viruses spread from the partition table to the boot sector of floppy disks as floppies are accessed.
Multi-Partite Viruses
These viruses combine the ugliest features of both file and boot sector/partition table viruses. They can infect any of these host software components. And while traditional boot sector viruses spread only from infected floppy boot disks, multi-partite viruses can spread with the ease of a file virus - but still insert an infection into a boot sector or partition table. This makes them particularly difficult to eradicate. Tequila is an example of a multi-partite virus.
Trojan Horses
Like its classical namesake, the Trojan Horse virus typically masquerades as something desirable - e.g., a legitimate software program. The Trojan Horse generally does not replicate (although researchers have discovered replicating Trojan Horses). It waits until its trigger event and then displays a message or destroys files or disks. Because it generally does not replicate, some researchers do not classify Trojan Horses as viruses - but that is of little comfort to the victims of these malicious strains of software.
File Overwriters
These viruses infect files by linking themselves to a program, keeping the original code intact and adding themselves to as many files as possible. Innocuous versions of file overwriters may not be intended to do anything more than replicate but, even then, they take up space and slow performance. And since file overwriters, like most other viruses, are often flawed, they can damage or destroy files inadvertently. The worst file overwriters remain hidden only until their trigger events. Then, they can deliberately destroy files and disks.
Polymorphic viruses
More and more of today's viruses are polymorphic in nature. The recently released Mutation Engine - which makes it easy for virus creators to transform simple viruses into polymorphic ones - ensures that polymorphic viruses will only proliferate over the next few years. Like the human AIDS virus that mutates frequently to escape detection by the body's defenses, the polymorphic computer virus likewise mutates to escape detection by anti-virus software that compares it to an inventory of known viruses. Code within the virus includes an encryption routine to help the virus hide from detection, plus a decryption routine to restore the virus to its original state when it executes. Polymorphic viruses can infect any type of host software; although polymorphic file viruses are most common, polymorphic boot sector viruses have already been discovered.
Some polymorphic viruses have a relatively limited number of variants or disguises, making them easier to identify. The Whale virus, for example, has 32 forms. Anti-virus tools can detect these viruses by comparing them to an inventory of virus descriptions that allows for wildcard variations - much as PC users can search for half-remembered files in a directory by typing the first few letters plus an asterisk symbol. Polymorphic viruses derived from tools such as the Mutation Engine are tougher to identify, because they can take any of four billion forms.
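As a rough illustration of that wildcard-style matching, here is a minimal Python sketch; the signature bytes below are invented for the example, and a real product would draw its patterns from a researcher-maintained catalogue.

    import re

    # A signature with one wildcard byte, expressed as a regular expression over
    # raw bytes; '.' stands for "any single byte" (the signature is invented).
    WILDCARD_SIGNATURE = re.compile(rb"\xB8\x02\x3D.\xCD\x21", re.DOTALL)

    def matches(data: bytes) -> bool:
        """True if the wildcard signature occurs anywhere in the data."""
        return WILDCARD_SIGNATURE.search(data) is not None

    print(matches(b"\x90\xB8\x02\x3D\x7F\xCD\x21\x90"))   # True: wildcard byte is 0x7F
    print(matches(b"\x90\x90\x90"))                       # False: no match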
Stealth Viruses
Stealth aircraft have special engineering that enables them to elude detection by normal radar. Stealth viruses have special engineering that enables them to elude detection by traditional anti-virus tools. The stealth virus adds itself to a file or boot sector but, when you examine the host software, it appears normal and unchanged. The stealth virus performs this trickery by lurking in memory when it's executed. There, it monitors and intercepts your system's MS-DOS calls. When the system seeks to open an infected file, the stealth virus races ahead, uninfects the file and allows MS-DOS to open it - all appears normal. When MS-DOS closes the file, the virus reverses these actions, reinfecting the file.
Boot sector stealth viruses insinuate themselves in the system's boot sector and relocate the legitimate boot sector code to another part of the disk. When the system is booted, they retrieve the legitimate code and pass it along to accomplish the boot. When you examine the boot sector, it appears normal - but you are not seeing the boot sector in its normal location. Stealth viruses take up space, slow system performance, and can inadvertently or deliberately destroy data and files. Some anti-virus scanners, using traditional anti-virus techniques, can actually spread the virus. That is because they open and close files to scan them - and those acts give the virus additional chances to propagate. These same scanners will also fail to detect stealth viruses, because the act of opening the file for the scan causes the virus to temporarily disinfect the file, making it appear normal.
Anti-Virus Tools And Techniques
Anti-virus software tools can use any of a growing arsenal of weapons to detect and fight viruses, including active signature-based scanning, resident monitoring, checksum comparisons and generic expert systems. Each of these tools has its specific strengths and weaknesses. An anti-virus strategy that uses only one or two of the following techniques can leave you vulnerable to viruses designed to elude specific defenses. An anti-virus strategy that uses all of these techniques provides a comprehensive shield and the best possible defense against infection.
Signature-Based Scanners
Scanners - which, when activated, examine every file on a specified drive - can use any of a variety of anti-virus techniques. The most common is signature-based analysis. Signatures are the fingerprints of computer viruses - distinct strands of code that are unique to a single virus, much as DNA strands would be unique to a biological virus. Viruses, therefore, can be identified by their signatures. Virus researchers and anti-virus product developers catalog known viruses and their signatures, and signature-based scanners use these catalogs to search for viruses on a user's system. The best scanners have an exhaustive inventory of all viruses now known to exist. The signature-based scanner examines all possible locations for infection - boot sectors, system memory, partition tables and files - looking for strings of code that match the virus signatures stored in its memory.
When the scanner identifies a signature match, it can identify the virus by name and indicate where on the hard disk or floppy disk the infection is located. Because the signature-based scanner offers a precise identification of known viruses, it can offer the best method for effective and complete removal. The scanner can also detect the virus before it has had a chance to run, reducing the chance that the infection will spread before detection. Against these benefits, the signature-based scanner has limitations. At best, it can only detect viruses for which it is programmed with a signature. It cannot detect so-called unknown viruses - those that have not been previously discovered, analyzed and recorded in the files of anti-virus software. Polymorphic viruses elude detection by altering the code string that the scanner is searching for; to identify these viruses, you need another technique.
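The basic idea can be reduced to a very small sketch. The Python below is an illustration, not any vendor's implementation: it walks a drive and reports files containing known signature strings. The two signatures are invented placeholders, and a real scanner would also examine memory, boot sectors and partition tables.

    import os

    SIGNATURES = {
        "Example.A": b"\xDE\xAD\xBE\xEF\x13\x37",   # hypothetical signature bytes
        "Example.B": b"\xCA\xFE\xBA\xBE\x00\x01",   # hypothetical signature bytes
    }

    def scan_file(path):
        """Return the names of any catalogued signatures found in one file."""
        with open(path, "rb") as f:
            data = f.read()
        return [name for name, sig in SIGNATURES.items() if sig in data]

    def scan_drive(root):
        """Walk every file under root and report matches, keyed by file path."""
        report = {}
        for dirpath, _dirs, filenames in os.walk(root):
            for filename in filenames:
                path = os.path.join(dirpath, filename)
                try:
                    hits = scan_file(path)
                except OSError:
                    continue                 # unreadable file: skip, keep scanning
                if hits:
                    report[path] = hits
        return report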
f:\12000 essays\sciences (985)\Computer\Computer Secutity and the law.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER SECURITY
AND THE
LAW
I. Introduction
You are a computer administrator for a large manufacturing company. In the middle of a production run, all the mainframes on a crucial network grind to a halt. Production is delayed, costing your company millions of dollars. Upon investigating, you find that a virus was released into the network through a specific account. When you confront the owner of the account, he claims he neither wrote nor released the virus, but he admits that he has distributed his password to "friends" who need ready access to his data files. Is he liable for the loss suffered by your company? In whole or in part? And if in part, for how much? These and related questions are the subject of computer law. The answers may vary depending on the state in which the crime was committed and the judge who presides at the trial. Computer security law is a new field, and the legal establishment has yet to reach broad agreement on many key issues.
Advances in computer security law have been impeded by the reluctance on the part of lawyers and judges to grapple with the technical side of computer security issues[1]. This problem could be mitigated by involving technical computer security professionals in the development of computer security law and public policy. This paper is meant to help bridge the gap between the technical and legal computer security communities.
II. THE TECHNOLOGICAL PERSPECTIVE
A. The Objectives of Computer Security
The principal objective of computer security is to protect and assure the confidentiality, integrity, and availability of automated information systems and the data they contain. Each of these terms has a precise meaning which is grounded in basic technical ideas about the flow of information in automated information systems.
B. Basic Concepts
There is a broad, top-level consensus regarding the meaning of most technical computer security concepts. This is partly because of government involvement in proposing, coordinating, and publishing the definitions of basic terms[2]. The meanings of the terms used in government directives and regulations are generally made to be consistent with past usage. This is not to say that there is no disagreement over the definitions in the technical community. Rather, the range of such disagreement is much narrower than in the legal community. For example there is presently no legal consensus on exactly what constitutes a computer[3].
The term used to establish the scope of computer security is "automated information system," often abbreviated "AIS." An AIS is an assembly of electronic equipment, hardware, software, and firmware configured to collect, create, communicate, disseminate, process, store and control data or information. This includes numerous items beyond the central processing unit and associated random access memory, such as input/output devices (keyboards, printers, etc.).
Every AIS is used by subjects to act on objects. A subject is any active entity that causes information to flow among passive entities called objects. For example, a subject could be a person typing commands which transfer information from a keyboard (an object) to memory (another object), or a process running on the central processing unit that is sending information from a file (an object) to a printer (another object).
Confidentiality is roughly equivalent to privacy. If a subject circumvents confidentiality measures designed to prevent its access to an object, the object is said to be "compromised." Confidentiality is the most advanced area of computer security because the U.S. Department of Defense has invested heavily for many years to find ways to maintain the confidentiality of classified data in AIS[4]. This investment has produced the Department of Defense Trusted Computer System Evaluation Criteria[5], alternatively called the Orange Book after the color of its cover. The Orange Book is perhaps the single most authoritative document about protecting the confidentiality of data in classified AIS.
Integrity measures are meant to protect data from unauthorized modification. The integrity of an object can be assessed by comparing its current state to its original or intended state. An object which has been modified by a subject without proper authorization is said to be "corrupted." Technology for ensuring integrity has lagged behind that for confidentiality[4]. This is because the integrity problem has until recently been addressed by restricting access to AIS to trustworthy subjects. Today, the integrity threat is no longer tractable exclusively through access control. The desire for wide connectivity through networks and the increased use of commercial off-the-shelf software has limited the degree to which most AISs can trust their subjects. Attention to integrity has been accelerating over the past few years, and it will likely become as important a priority as confidentiality in the future.
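As an illustration of comparing an object's current state to its intended state, the Python sketch below uses a SHA-256 checksum; the choice of hash and the idea of a recorded "known good" digest are assumptions made for the example, not anything prescribed by the sources cited here.

    import hashlib

    def checksum(path):
        """Return the SHA-256 digest of an object's (file's) current contents."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def is_corrupted(path, recorded_digest):
        """True if the object's current digest differs from the digest recorded
        when the object was last known to be in its intended state."""
        return checksum(path) != recorded_digest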
Availability means having an AIS and its associated objects accessible and functional when needed by its user community. Attacks against availability are called denial-of-service attacks. For example, a subject may release a virus which absorbs so much processor time that the AIS becomes overloaded. This is by far the least well developed of the three security properties, largely for technical reasons involving the formal verification of AIS designs[4]. Although such verification is not likely to become a practical reality for many years, techniques such as fault tolerance and software reliability are used to mitigate the effects of denial-of-service attacks.
C. Computer Security Requirements
The three security properties of confidentiality, integrity, and availability are achieved by labeling the subjects and objects in an AIS and regulating the flow of information between them according to a predetermined set of rules called a security policy. The security policy specifies which subject labels can access which object labels.
For example, suppose you went shopping and had to present your driver's license to pick up some badges assigned to you at the entrance, each listing a brand name. The policy at some stores is that you can only buy the brand names listed on your badges. At the check-out lane, the cashier compares the brand name of each object you want to buy with the names on your badges. If there's a match, she rings it up. But if you choose a brand name that doesn't appear on one of your badges, she puts it back on the shelf. You could be sneaky and alter a badge, or pretend to be your neighbor who has more badges than you, or find a clerk who will turn a blind eye. No doubt the store would employ a host of measures to prevent you from cheating. The same situation exists on secure computer systems. Security measures are employed to prevent illicit tampering with labels, positively identify subjects, and provide assurance that the security measures are doing the job correctly. A comprehensive list of minimal requirements to secure an AIS is presented in the Orange Book[5].
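A minimal Python sketch of the badge analogy (the subject names and labels below are invented): a subject may access an object only when the object's label appears on the subject's badge list.

    # Security policy: which subject labels may access which object labels.
    POLICY = {
        "alice": {"payroll", "inventory"},    # badges issued to each subject
        "bob":   {"inventory"},
    }

    def access_allowed(subject, object_label):
        """The cashier's check: ring it up only if the label is on a badge."""
        return object_label in POLICY.get(subject, set())

    print(access_allowed("alice", "payroll"))    # True
    print(access_allowed("bob", "payroll"))      # False: put it back on the shelf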
III The Legal Perspective
A. Sources Of Computer Law
The three branches of the government, legislative, executive, and judicial, produce quantities of computer law which are inversely proportional to the amount of coordination needed for its enactment. The legislative branch, consisting of the Congress and fifty state legislatures, produces the smallest amount of law, which is worded in the most general terms. For example, the Congress may pass a bill mandating that sensitive information in government computers be protected. The executive branch, consisting of the president and numerous agencies, issues regulations which implement the bills passed by legislators. Finally, the judicial branch serves as an avenue of appeal and decides the meaning of the laws and regulations in specific cases. After the decisions are issued, and in some cases appealed, they are taken as the word of the law in legally similar situations.
B. Current Views On Computer Crime
Currently there is no universal agreement in the legal community on what constitutes a computer crime. One reason is the rapidly changing state of computer technology. For example, in 1979 a U.S. Department of Justice publication[6] partitioned computer crime into three categories: 1) computer abuse, "the broad range of intentional acts involving a computer where one or more perpetrators made or could have made gain and one or more victims suffered or could have suffered a loss"; 2) computer crime, "illegal computer abuse that implies direct involvement of computers in committing a crime"; and 3) computer-related crimes, "any illegal act for which a knowledge of computer technology is essential for successful prosecution." These definitions have become blurred by the vast proliferation of computers and computer-related products over the last decade. For example, does altering an inventory bar code at a store constitute computer abuse? Should a person caught in such an act be prosecuted both under theft and computer abuse laws? Clearly, advances in computer technology should be mirrored by parallel changes in computer laws.
Another attempt to describe the essential features of computer crimes has been made by Wolk and Luddy[1]. They claim that the majority of crimes committed against or with the use of a computer can be classified as follows: 1) sabotage, which "involves an attack against the entire computer system, or against its sub components, and may be the product of foreign involvement or penetration by a competitor"; 2) theft of services, "using a computer at someone else's expense"; and 3) property crime, involving the "theft of property by and through the use of a computer." A good definition of computer crime should capture all acts which are criminal and involve computers, and only those acts. Assessing the completeness of such a definition seems problematic, but it may be tractable using technical computer security concepts.
IV. Conclusion
The development of effective computer security law and public policy cannot be accomplished without cooperation between the technical and legal communities. The inherently abstruse nature of computer technology and the importance of the social issues it generates demand the combined talents of both. At stake is not only a fair and just interpretation of the law as it pertains to computers, but more basic issues involving the protection of civil rights. Technological developments have challenged these rights in the past and have been met with laws and public policies which have regulated their use. For example, the use of the telegraph and telephone gave rise to privacy laws pertaining to wire communications. We need to meet advances in automated information technology with legislation that preserves civil liberties and establishes legal boundaries for protecting confidentiality, integrity, and assured service. Legal and computer professionals have a vital role in meeting this challenge together.
REFERENCES
[1] Stuart R. Wolk and William J. Luddy Jr., "Legal Aspects of Computer Use," Prentice Hall, 1986, p. 129.
[2] National Computer Security Center, "Glossary of Computer Security Terms," October 21, 1988.
[3] Thomas R. Mylott III, "Computer Law for the Computer Professional," Prentice Hall, 1984, p. 131.
[4] Morrie Gasser, "Building a Secure Computer System," Van Nostrand, 1988.
[5] Department of Defense, "Department of Defense Trusted Computer System Evaluation Criteria," December 1985.
[6] United States Department of Justice, "Computer Crime, Criminal Justice Resource Manual," 1979.
f:\12000 essays\sciences (985)\Computer\Computer Software Priacy and its Impact on the International .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Software Piracy and its Impact on the International Economy
The PC industry is over twenty years old. In those twenty years, evolving software
technology has brought us faster, more sophisticated, versatile and easy-to-use products.
Business software allows companies to save time, effort and money. Educational
computer programs teach basic skills and complicated subjects. Home software now
includes a wide variety of programs that enhance the user's productivity and creativity.
The industry is thriving and users stand to benefit along with the publishers. The SPA
(Software Publishers Association) reports that the problem of software theft has grown,
and threatens to prevent the development of new software products. Unauthorized
duplication of software is known as software piracy which is a "Federal offense that
affects everyone" ("Software Use..." Internet). The following research examines
software piracy in its various forms, its impact on the end user and the international
industry as a whole, and the progress that has been made in alleviating the problem.
Software piracy harms all software companies and, ultimately, the end user. "Piracy
results in higher prices for honest users, reduced levels of support and delays in funding
and development of new products, causing the overall breadth and quality of software to
suffer" ("What is..." Internet). Even the users of unlawful copies suffer from their own
illegal actions: they receive no documentation, no customer support and no information
about product updates ("Software Use..." Internet).
The White Paper says that while virtually every software publisher expresses
concern about protecting their software from unauthorized duplication, over time many have
simply accepted the so-called "fact" that such duplication is unavoidable. This has
created an atmosphere in which software piracy is commonly accepted as "just another
cost of doing business" ("With the Growth..." Internet).
In a brochure published by the SPA it is stated that a major problem arises from
the fact that most people do not even know they are breaking the law. "Because the
software industry is relatively new, and because copying software is so easy, many people
are either unaware of the laws governing software use or choose to ignore them" ("To
Copy or not to Copy" Internet).
Robert Perry states that much of the problem of software theft arises from the way
the software industry developed. In the past, when a software firm spent millions of
dollars to write a program for a mainframe computer, it knew it would sell a handful of
copies. It licensed each copy to protect its ownership rights and control the use of each
copy. That is easy to do with only a few copies of a program. It is impossible for a
software company to handle five million copies of their latest program (27).
Software piracy is defined as any violation of software license agreements. In
1964, the United States Copyright Office began to register software as a form of literary
expression. The Copyright Act, title 17 of the U.S. Code, was amended in 1980 to
explicitly include computer programs. Today, according to the Copyright Act, it is illegal
to make or distribute copyrighted material without authorization; the only exceptions are
the user's right to make a copy as an "essential step" in using the program (for example, by
copying the program into RAM or on the hard drive) and to make a single backup copy
for "archival purposes." No other copies may be made without specific authorization
from the copyright owner (title 17 section 117).
A SPA press release shows that in December 1990, the U.S. Congress approved
the Software Rental Amendments Act, which generally prohibits the rental, leasing or
lending of software without the express written permission of the copyright holder
("Retailers Agree..." Internet). "It doesn't matter whether the transaction is called 'rental,'
'buy-back,' 'try before you buy,' 'preview,' 'evaluation' or any similar term. If the
software dealer does not have written permission from the copyright holders to rent
software, it is illegal to do so," said Sandra Sellers, SPA vice president of intellectual
property education and enforcement ("SPA Sues..." Internet).
NERDC Information Services reports that the copyright holder may grant
additional rights at the time the personal computer software is acquired. For example,
many applications are sold in LAN (local area network) versions that allow a software
package to be placed on a LAN for access by multiple users. Additionally, permission is
given under special license agreement to make multiple copies for use throughout a large
organization. However, unless these rights are specifically granted, U.S. law prohibits a
user from making duplicate copies of software except to ensure one working copy and
one archival copy (NERDC Internet).
Without authorization from the copyright owner, title 18 of U.S. Code prohibits
duplicating software for profit, making multiple copies for use by different users within
an organization, downloading multiple copies from a network, or giving an unauthorized
copy to another individual. All are illegal and a federal crime. Penalties include fines up
to $250,000 and jail terms up to five years (Title 18, Sections 2320 and 2322).
Microsoft states that illegal copying of personal computer software is a crucial
dilemma both in the United States and overseas. Piracy is widely practiced and widely
tolerated; in some countries, legal protection for software is nonexistent; in others, laws
are unclear or not enforced with sufficient commitment. Significant piracy losses are
suffered in virtually every region of the world. In some cases, like Indonesia, the rate of
unauthorized copies is believed to be in excess of ninety-nine percent ("What is..."
Internet). Copyright laws vary widely from country to country, as do interpretations of the
laws and the degree to which they are enforced. The concept of protecting the intellectual
property incorporated in software is not universally recognized.
Asia is one of the most technologically advanced regions of the world. As the
software market continues to grow and flourish so does the black market of software
piracy ("The Impact..." Internet). The worst countries in this area are China and Russia.
Named "one copy countries" two years in a row (1995 and 1996) by the SPA, studies
show that ninety-five to ninety-eight percent, virtually every copy, of U.S. business
software is illegally pirated, which costs U.S. software companies an estimated five-
hundred million dollars a year ("SPA names..." Internet and "U.S., China..." D1 - 2). In
Russia the latest statistics from the SPA show that ninety-five percent of business
software is illegally copied, that cost the U.S. $117 million in 1994 ("SPA names..."
Internet).
Although Asia has extremely high piracy rates, SPA Executive Director Ken
Wasch comments "China, Russia, and Thailand (the three countries in Asia with the
highest piracy rates) deserve credit for enacting copyright laws that specifically protect
computer programs and other software..." Russia and China enacted copyright protection
statutes several years ago, and Thailand enacted its law late in 1994 ("SPA names..."
Internet).
Asian countries have also taken action against offenders of copyright laws. The
SPA reports that "on Wednesday, May 22, 1996, Hong Kong Customs officers arrested
two suspected software pirate vendors and seized 20 CD-ROMs, each containing
software with an estimated total retail value of US$20,000 along with the equipment
capable of reproducing the pirate CDs" ("Hong Kong..." Internet). A Software Publishers
Association press release shows more examples of Asia's fight against software piracy
when Singapore police raided vans carrying 5,800 CD-ROMs containing $700,000 U.S.
dollars worth of pirated software on March 25, 1996 ("SPA, Singapore..." Internet). The
Bloomberg forum reports that on August 7, 1995 China anti-piracy forces invaded stores
in the southwestern city of Chengdu and arrested 37 people. The Business Software
Alliance's vice president Stephanie Mitchell said that "while that was the largest number of
people so far arrested in a single raid on software retailers, China must dish out harder
punishments to discourage pirates after they're caught" ("China takes..." Internet).
As a result of China's lack of strictness, the SPA called upon the USTR (U.S. Trade
Representative) "...to take action against China under Section 306 of the Trade Act of
1974 for failing to improve enforcement of intellectual property rights in computer
software." Also Russia and Korea were placed on the Special 301 Priority Watch List by
the USTR so that the SPA is able to review their intellectual property laws and
enforcement ("China and Russia..." Internet). "The United States and China signed a
major accord in March of 1996 mandating tough enforcement against intellectual
property piracy in China..."(Parker np).
The BSA's European anti-piracy program comprises over 20 countries
throughout the region and was initiated in 1989 "...with the filing of the software
industry's first enforcement action for the illegal use of software in Italy." Piracy
continues to be a significant problem in spite of the enactment of stronger copyright laws
and successful prosecutions against software theft. "The average piracy rate of 25
European countries was estimated at 58 percent in 1994, with dollar losses exceeding $6
billion" ("The Impact..." Internet).
Microsoft's studies show that many European countries, including some which
offer computer software protection, have "unreasonably burdensome" administrative
rules. Poland and the United Kingdom have displayed difficulty in collecting evidence,
and Greece is blamed for "fragmentation of court process." Most European countries have
insufficient penalties and inadequate civil enforcement possibilities to discourage
piracy, especially Germany, Poland, Sweden and the UK. "Several countries, for
example, Belarus and Romania, have general copyright laws that protect literary
expression, but fail to clearly protect computer software" ("What is.." Internet). Ireland
is Europe's worst offender, with annual losses of more than forty-four million dollars
due to the fact that eighty-three percent of software is pirated ("Software Piracy:
Ireland..." Internet).
The BSA "called for legislative reform and stricter observance of laws" after
reviewing a study examining Europe's software piracy rates. The BSA argues that
"experience has shown that improved legal protection for software copyright, and better
policing by private companies and governments, can lead to a significant reduction in the
number of illegal copies being made" ("Software Piracy: Ireland..." Internet).
Latin America is the second fastest growing market for packaged software ("The
Impact..." Internet). SPA president Ken Wasch said, "The encouraging first quarter's sales
data (1995) confirms Brazil's status as a major market for U.S. software publishers. With
a rapidly growing and increasingly sophisticated economy, the potential for U.S. software
companies in Brazil is enormous" ("Latin America..." Internet). Growing along with the
increase of sales and production is the threat of software theft, "with the average piracy
rate in 16 countries estimated at seventy-eight percent in 1994" ("The Impact..." Internet).
The effect of international piracy organizations is a major problem that everyone
is aware of. Another element which is beginning to make its presence known is the small-
time software pirates that distribute software on BBSs (Bulletin Board Systems) or over
the Internet. As with most topics dealing with the extremely new Internet underground
and Internet crimes, it is very difficult to obtain information on these subjects. Because
these underground Internet crimes are important to fully understanding the concept of
software piracy, much of the following material is supplied by my own personal
observations and investigations.
Most small-time software piracy centers around bulletin board systems that
specialize in "warez" (common underground term for pirated software). On these
systems, pirates can contribute and share copies of commercial software. Having access
to these systems (usually obtained by contributing copyrighted programs via telephone
modem or money donations) allows the pirate to copy, or "download," copyrighted
software. All the participants benefit because individuals must "upload" (copy files from
their system to the BBS) copyrighted programs in order to download. This way new
programs are appearing continuously.
My observation reveals how pirates have found ways to become more efficient by
creating mutual participation "pirate groups" (as referred to by the computer
underground). These groups are composed of ten to seventy members contributing in
different ways. The members usually are anywhere from thirteen to thirty years of age.
Some pirate groups are international, with members operating from different regions of
the world. Their primary purpose is to obtain the latest software, remove any copy-
protection from it and then distribute it to the pirate community. The methods the pirates
use to obtain the software are known only by the members of the pirate groups themselves.
Some speculate that the members either "hack" (break into a computer via modem from
one's own system) into computers of software companies and steal the software or "pay
off" employees of software companies. The software they receive is almost always less
then one day old and is often referred to as a "zero day ware."
"The Internet is an incredible international electronic information system
providing millions with access to education, entertainment, and business resources, as
well as promoting new forms of personal communication, including e-mail and on-line
chatting" (Larson Internet). This also creates ideal piracy breeding grounds. Software
pirates utilize the services of the Internet to "trade" copyrighted "warez." In 1994 the
Washington Post reported about an individual who had set up a computer bulletin board
system connected to the Internet, that allowed over one million dollars worth of software
to be copied. People using the Internet computer network were able to retrieve
commercial software from this BBS for free. The sysop (system operator or person
operating the BBS) was charged with fraud and copyright infringement but never
convicted because of "murky" laws (Daly, D1).
IRC (Internet relay chat) is an Internet service that enables people all over the
world to communicate with each other by means of "switching" channels and typing
messages on the screen. IRC also allows individuals to "post" files in selected channels;
most of these files are copyrighted software available for trade. If someone sees a particular
program they want, all they have to do is "tag" the file for download and it is copied onto
their local hard drive.
With the exception of the real-time "chatting" capabilities of IRC, most of the
functions of USENET are the same. USENET is a message network available on the
Internet where users post public messages, on almost any topic imaginable, in hopes of
getting an answer. Like IRC, users can attach files to the messages, some of which are
copyrighted programs. Through my own analysis I have found that software pirates have
found USENET and IRC to be extremely efficient ways to provide and trade copyrighted
software, which is beginning to make BBS use obsolete.
On-line services such as America Online, Prodigy, and CompuServe combine the
ease of use of BBSs and the capabilities of the Internet. Most on-line services provide e-
mail, virtual chat rooms, file areas and even access to the Internet. Software pirate groups
are found utilizing these on-line services to trade copyrighted software and with over
1.25 million other users on-line, they can go about unnoticed. David Pogue, a writer for
MacWorld says that members of these pirate groups sign on by using fake credit card
numbers and phony personal information. While on-line, the pirates trade copyrighted
software or "warez" by e-mailing them to each other and using chat rooms to receive
new programs (Pogue 37).
Most anti-piracy organizations have taken little, if any, action against this new
wave of software piracy. The software industry loses millions if not billions of dollars
to small-time software pirates. On the pirates' side is the safety of private bulletin boards,
unclear laws, the vast size of on-line services and the fact that IRC and USENET are
completely lawless. There are no laws, no restrictions and no one to stop the software
pirates from committing their crimes. This permits pirates to go virtually undetected and
free from punishment. In an article on computer crime in Newsweek, a spokeswoman for
the on-line service Prodigy speaks about the Internet: "It's the Wild West. No one owns it.
It has no rules" (Meyer 36-38).
Microsoft says major software developers recognize that piracy is a problem.
They have begun taking steps to alleviate the problem. The software industry realizes that
the problem of software piracy cannot be solved by one company alone. Computer
companies have "made a commitment to address the problem together." Software
publishers are taking an active role in directly addressing software piracy by monitoring
markets, conducting investigations, and pursuing litigation on their own as well as
through the Business Software Alliance (BSA) and the Software Publishers Association
(SPA) ("What is..." Internet).
The White Paper lists "a number of potential solutions to software piracy that
software publishers have used over time." Package warning and license labeling makes
users aware of the consequences of illegal use of the software but is usually ignored by
the user. High profile "piracy busts" and legal action against organized counterfeiters by
anti-piracy organizations such as the SPA and BSA are "essentially sending a message to
pirates that there are real risks associated with illegally copying software." Site licensing
is a "popular" and "cost-effective" way of selling software to large organizations who
need more than one copy of the software. Forced registration and support contracts only
affect novice computer users because experts don't necessarily need technical support or
manuals ("With the Growth..." Internet).
Software piracy is a worldwide problem; one that is making an impact on the
international economy and currently costing the software publishing industry more than
fifteen billion dollars per year in lost revenues. With the growing interest in the
distribution of software over the Internet and on-line services, the potential for these
losses to increase is very real. Software publishers have used a number of alternative
methods to protect their intellectual property, but have generally achieved marginal
success in reducing losses to piracy.
Works Cited
"China and Russia Again Named 'One Copy Countries' by the SPA in special 301
Report." Software Publishers Association. Press Release. Washington D.C. 20 Feb
1996. URL: http://www.spa.org/gvmt/spa301.htm.
"China Takes Software Piracy Clampdown Inland." Bloomberg Forum. 1995. News and
Observer. URL: http://www.nando.net/new...fo/080785/info518_5.html.
Daly, Christopher B. "Judge Dismisses Fraud Charges Against Student in Software
Case." Washington Post. 30 Dec 1994: D1. NewsBank CD-ROM 1995.
"Hong Kong Software Pirates Arrested Due to SPA Investigation." Software Publishers
Association. Press Release. Washington D.C. 4 June 1996. URL:
http://www.spa.org/piracy/releases/hongk.htm.
Larson, Megan J. "Copyright in Cyberspace." ts. U of Oregon, 1995. URL:
http://gladstone.
uoregon.edu/%7Emega/Copy.html.
"Latin America Software Sales Reach $48.2 Million in First Quarter 1995." Software
Publishers Association. Press Release. Washington D.C. 13 Feb 1995. URL:
http://www.spa.org/research/95q1lati.htm.
Meyer, Michael. "Stop! Cyberthief!" Newsweek. 6 Feb 1995: 36-38. SIRS Researcher
CD-ROM, 1995. Art 103.
Parker, Jerry. "China Tackles Software Piracy at State Agencies." Reuters. 14 April
1995: np. NewsBank CD-ROM 1995.
Perry, Robert L. Computer Crime. New York: Franklin Watts, 1986.
"Retailers Agree Not to Rent Computer Software Without Permission From Publishers."
Software Publishers Association. Press Release. Washington D.C. 7 Feb, 1996.
URL: http://www.spa.org/piracy/releases/swrental.htm.
"Software Piracy - It's not Worth the Risk." NERDC Information Service. URL:
http://nervm.nerdc.ufl.edu/update/U9506O7A.html.
"Software Piracy: Ireland is Europe's Worst Offender." IBCE News. URL:
http:///www.iol.ie/ibc/news/IBEC/january/4.htm.
Software Publishers Association. Software Use and the Law. Washington D.C.: SPA
1995.
URL:http://www.spa.org/piracy/sftuse.htm.
Software Publishers Association. To Copy or Not to Copy. Washington D.C.: SPA 1996.
URL: http://www.spa.org/piracy/okay.htm.
"SPA Names Russia, China 'One Copy Countries.'" Software Publishers Association.
Press Release. Washington D.C. 13 Feb 1995. URL: http://www.spa.org/gvmt/
onecopy.html.
"SPA, Singapore Police, and AACT Raid Vans Carrying Pirated Software." Software
PublishersAssociation. Press Release. Washington D.C. 4 June 1996. URL:
http://www.spa.org/piracy/releases/singapor.htm.
"SPA Sues Six U.S. Software Rental Companies." Software Publishers Association. Press
Release. Washington D.C. 28 Feb 1996. URL: http//www.spa.org/piracy/releases/
rentsuit.htm.
"The Impact of Software Piracy on the International Market Place." URL:
http://198.105.234.4/
piracy/rgnifact.htm.
United States. U.S. Code: Copyright Acts. Title 17, Sec. 117.
United States. U.S. Code: Copyright Acts. Title 18, Sec 2320 and 2322.
"U.S., China Avert Trade War." Sun-Sentinel 18 June 1996: 1D - 2.
"With the Growth of Worldwide Software Piracy and the Emergence of On-Line
Software Distribution, Protecting Intellectual Property is now More Critical than
Ever." The White Pages. URL: http://www.hasd.com/hasd/misc/white.htm.
"What is Software Piracy?" Microsoft Anti-Piracy Home Page. 1995. URL:
http://198.105.232.4/piracy/intlrep.htm.
f:\12000 essays\sciences (985)\Computer\Computer System in the Context of Retail Business.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computing Studies Assignment
Computer System in the Context of Retail Business
* Today, retailing businesses have to have up-to-date technology in order to be successful. Accurate, efficient sending and receiving of information can affect the business, so it is very important to have the latest technology such as computers and networks. Retailing on a local and global scale can also affect how successful the business is. Locally, the efficient networking that retailing businesses have allows customers to purchase goods faster; for example, the new bar-code scanners in supermarkets reduce the time customers wait to purchase goods. Globally, in trading, a computer retailing store that wants to purchase some stock from overseas can arrange contracts over the Internet.
* Computer systems in retail trading on a local and global scale play an important role in today's society. Such computer systems include:
the supermarket POS system, which provides efficient and accurate calculation when customers purchase goods.
Absolut Software, which provides a host of state-of-the-art capabilities vital for
increasing sales and productivity. Absolut Software can easily reduce the number of operators and supporting hardware by 15 percent, and it provides a training mode for novices and a high-speed mode for the experienced.
Features:
* Complete mailing list management
* Promotion tracking
* Catalog and telemarketing
* Importing sub-system
* On-line order entry
* Inventory control (multi-site, serialized, lot number, decimal quantities, and style-color-size)
* Credit card billing
* Computer-driven in-store POS with "Suspend and Hold;" availability displayed
* Wildcard search
* User-definable extended search
* Unlimited text and binary (graphics) storage for key fields
* A complete financial (A/R, A/P, G/L) sub-system
* E.D.I.
and the international network system, which brings the business customer information, e-mail addresses, and contract forms from customers who are interested.
* A retail computer system can perform tasks such as:
Stock control keeps track of how much stock the business has on hand and the price of each item. For example, the SMART System is a totally integrated and interactive retail business system that includes the following modules: Order Entry, Inventory Management, Sales Analysis, Accounts Payable, Accounts Receivable, Monthly Lease, Financial Accounting, Payroll, and Customer Mailing. Modules may be purchased separately if desired. An optional module for contract and insurance calculation and form printing, known as EZYCALC, is also available and can be integrated into the system. The system was originally programmed for the retail furniture industry but works equally well for any big-ticket retail operation. All affected files are updated as each transaction is keyed, providing real-time information and reports, and the system will also handle multiple and remote store operations.
Personnel management keeps records of employees and staff, along with information about their salaries, holidays and absent days. The system is designed to check whether wages and staffing are running as they should.
Checkout. For example, the Easy Sale Scan is a bar-code scanning point-of-sale application which automatically organizes business functions from any central computer site. Bar-code scanning minimizes keyboard data entry errors and user frustration
while providing unlimited information selection, transmission, updating, and reporting.
Features:
* Automatically calculates quantity discounts, special customer discounts, sales tax amounts, credit limit verification, and change due.
* The Work-in-Progress feature automatically generates a current production schedule of orders to be processed by customer or product.
* The Inventory feature flags price fluctuations or dangerously low stock quantities on site.
* A built-in expert system displays the latest trends for re-order decision support.
* A universal API provides open connectivity to any database or hardware platform. Easy Sale Scan is currently portable to over 140 computer platforms.
Customised labels can help customers find out which area of the store sells which item, and show customers which new items have just come out.
* Hardware used by a retailing store:
A Point of Sale System is designed for retail and/or wholesale businesses that need to generate at-counter customer invoices and monitor product inventory levels. This easy-to-use system improves employee cash handling efficiency and accuracy by displaying all cash transaction data. It supports split tender payments, unit of measure pricing (including metric pricing) and "look up" features that allow the user to view product, price, and customer information directly on the terminal. The reports generated include margin, price, inventory valuation, minimum on hand, and daily summary. For multistore chains and franchises, a remote store processing add-on is available. The system interfaces with Accounts Receivable, General Ledger, and Purchase Order Management. Single and multi-user versions are available.
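To make the at-counter arithmetic such a system performs more concrete, here is a minimal sketch in C of one sale: it applies a quantity discount, adds sales tax, and works out the change due, along the lines of the checkout features described above. Every figure in it (the price, the discount rule, the tax rate) is invented for illustration and is not taken from any of the products mentioned.

    #include <stdio.h>

    /* A minimal point-of-sale calculation: quantity discount, sales tax,
       and change due. All figures are made up for illustration. */
    int main(void)
    {
        double unit_price = 4.99;   /* price per item */
        int    quantity   = 12;     /* items purchased */
        double tax_rate   = 0.07;   /* 7% sales tax */
        double tendered   = 70.00;  /* cash handed over */
        double subtotal, tax, total, change;

        subtotal = unit_price * quantity;
        if (quantity >= 10)         /* quantity discount: 10% off for 10 or more */
            subtotal *= 0.90;

        tax    = subtotal * tax_rate;
        total  = subtotal + tax;
        change = tendered - total;

        printf("Subtotal: %8.2f\n", subtotal);
        printf("Tax:      %8.2f\n", tax);
        printf("Total:    %8.2f\n", total);
        printf("Change:   %8.2f\n", change);
        return 0;
    }

A real POS package adds the look-ups, split tender payments and reporting described above, but the core of each transaction is arithmetic of this kind.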
The electronic funds transfer and payment processing system. The system performs credit card sales, refunds and voids. It also performs debit card sales and voids and has full transaction "stand-in" store-and-forward capabilities to maintain sales activity if transmission lines go down.
The Central Computer contains all necessary information in the business and operates as a server. It contains data such as database on items and employees.
* Retailing businesses have been affected by the introduction of computer systems through the demands of customers: efficient and accurate calculation when purchasing goods, convenient cash transfer (cash registers), and efficient stock keeping and record keeping.
* Developments in technology brought about by the needs of the retail trade:
Customised software - as large and small retailing stores grew and the demand for efficient systems increased, more developers appeared and began to produce more products.
Bar code readers - designed to make calculating and reading prices off an item more efficient.
Eftpos - the electronic funds transfer at point of sale system was designed for the convenience of customers who would rather not carry a large amount of money around when purchasing goods.
International bar code convention - bar codes are placed onto every product package. When a bar code reader scans the product, it automatically displays the product's information and calculates the price.
* Retailers have dealt with the issues of:
Privacy - not allowing people to access the store's database, and setting passwords to protect it.
The nature of work - many jobs that used to be done by people have been taken over by faster and more powerful computers, which enable jobs to be done more efficiently and more accurately.
Copyright - laws to stop companies from breaking into other companies' computer systems and stealing their development plans, secrets or trading prices.
Ethics - the issues that determine whether an action is right or wrong. Is some retailing information open to the public so that anyone can access it? Businesses have to consider that before they act.
Computer crime - computer crime has been increasing as companies join the Internet and local area networks. Companies may set passwords to protect their data.
* Ways that certain issues have been affected by the use of technology in retail establishments:
Power - power consumption is much higher than before because of the introduction of computer systems.
Control - retailers have less control because of the freedom of users, but a retailing business still controls which of its information can be made public and which cannot; this is related to freedom of information policies.
Equity - some jobs can be done by people who are mentally ill, and companies should provide opportunities for them.
The environment - the technology has had a great effect on the environment, through power-saving computers and lower-radiation monitors in retailing businesses, as well as air conditioning to control the temperature of the workplace.
* Similarities between computer-based systems in the context of libraries and in the context of the retail trade are that technology has been a great influence on both, and has become more and more advanced as time passes to meet the needs of people. Both libraries and the retail trade must consider factors such as privacy, the nature of work, copyright, ethics and computer crime, as well as power, control, equity and the environment.
* A difference between the context of libraries and the context of the retail trade is that libraries are concerned with the flow of information to people, helping them access the information they would like to have, whereas a retailer is concerned only with its own information and not with others'.
Anthony Wu
11CS2
f:\12000 essays\sciences (985)\Computer\Computer Systems Analyst.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Systems Analyst
I push the button, I hear a noise, the screen comes alive. My computer
loads up and starts to process. I see the start screen for Windows 95, and I type
in my password. Even though this takes time, I know that I will be able to do
whatever I want to do without any trouble, without any glitches, without any
questions. My computer is now easier to use and more user friendly because
computer systems analysts have worked out the problems that many computer
systems still have.
It appears to me that a career choice needs to contain a number of
different features. The first being: Will this area of interest mentally stimulate me
as well as challenge me? The second being: Is there a way of making a living
in these areas of interest? And finally: Do I enjoy the different activities within
this area of interest? From the first day that I started my first computer, I have
grasped the concepts quickly and with ease. But the computer as well as I will
never stop growing. I have introduced myself to all topics, from word processing to
surfing the web. After reviewing a number of resources, I have noticed a
relatively high demand for technologically integrated hardware and software
positions available with companies that wish to compete with the demand for
"networking". ("Computer Scientists" 95) This leads me to believe that future
employment prospects will be high and of high quality pay within the next eight
to ten years. The past, present, and future have and will see the computer.
Since I have seen the computer, I have enjoyed the challenges and countless
opportunities to gain in life from this machine. From school projects to games;
from the Internet to programming languages; I have and always will feel like that
little kid in the candy store.
Job Description
A Computer Systems Analyst decides how data are collected, prepared
for computers, processed, stored, and made available for users. ("Computer
Systems" COIN 1) The main achievement as a systems analyst is to improve
the efficiency or create a whole new computer system that proves to be more
efficient for a contracting company. When on an assignment, the analyst must
meet a deadline. While striving for a deadline, he must create and comprehend
many sources of information for the presentation. He must review the systems
capabilities, workflow, and scheduling limitations ("Systems Analyst" 44), to
determine if certain parts of the system must be modified for a new program.
First, a computer programmer writes a program that he thinks will be
beneficial for a certain system. He incorporates all of what he thinks is
necessary. But the hard part is when the programmer runs the program. 99% of
the time the program will not work, thus not creating a profit for the company.
Then the analyst looks at the program. It is now his job to get rid of all of the
glitches that are present. He must go over every strand of the program until the
program is perfect.
When the analyst is finished "chopping up" the program, he must then
follow a technical procedure of data collecting, much like that of a science lab.
The Dictionary of Occupational Titles says he must plan and prepare technical
reports, memoranda, and instructional manuals as documentation of program
development. (44)
When the presentation day is near, the analyst submits the proof. He
must organize and relate the data to a workflow chart and many diagrams. More
often than not, an idea is always too good to be true unless the proof is there. For
this new program that will go into the system, detailed operations must be laid
out for the presentation. Yet, when the system hits the market, the program must
be as simple as possible. A computer systems analyst must always look for the
most minute points whenever a program is being reviewed.
Education and Training
Many people think that this is the type of job where you must really like
the concept. This is true. Many people think that you need great prior
experience to ever make it somewhere. This is true. Many people think that you
need a Bachelor's degree to at least start out somewhere. This is not true.
Through research, it is a known fact that you don't really have to go to college to
ever make it. In this particular field, a college education would be helpful to
impress the employer, but for a basic analyst job, the only credential really needed
to go somewhere comes from the Quality Assurance Institute. This awards the designation
Certified Quality Analyst (CQA) to those who meet education and experience
requirements, pass an exam, and endorse a code of ethics. ("Computer
Scientists" 95) Linda Williams found a technical analyst at the Toledo Hospital,
who went to the Total Technical Institute near Cleveland and earned his CQA.
(11-13)
However, college is the best bet, and a bachelor's degree is the best credential to
have after achieving the CQA. Employers almost always seek college graduates
for analyst positions. Many, however, have some prior experience. Many
rookies are found in the small temporary agencies that need a little help. The
ones who have really made it have been in the business for at least 15 years.
When in a secure professional position, an analyst will always need
upgrading just as quickly as the systems themselves do. Continuous study is
necessary to keep the skills up to date. Continuing education is usually offered
by employers in the form of paid time in night classes. Hardware and software
vendors might also sponsor a seminar where analysts will go to gather ideas and
new products. Even colleges and universities will sponsor some of these types
of events. ("Computer Systems" America's 36)
Environment, Hours, and Earnings
Systems analysts work in offices in comfortable surroundings. They
usually work about 40 hours a week - the same as other professionals and office
workers. Occasionally, however, evening or weekend work may be necessary to
meet deadlines, according to America's 50 Fastest Growing Jobs. (36) Most of
the time, an analyst will live a quiet lifestyle, unlike that of a lawyer or doctor.
Yet he has freedoms that those occupations don't offer. The pay might
be lower, but the family time increases. Although this may sound pretty basic, it
is coming to the point where the common analyst will work from the everyday
setting. In bed, at home, in the car and at the diner might all be places where an
analyst might perform his work thanks to the technology available today. Even
technical support can be done from a remote location, owing largely to modems,
laptops, electronic mail and even the Internet. ("Computer Scientists" 94)
So as the hours per week are starting to vary because of where the work
can be done, so are the earnings. The industry is growing and according to the
Occupational Outlook Quarterly Chart, the industry will be the fastest growing
from now until 2005. This occupation will grow so rapidly in fact, that in 2005,
the number of systems analysts will have increased by 92%. To imagine that
this is the only job that will practically double by the year 2005 is to think that the
earnings would go up too. According to the same chart, the average weekly
earnings are $845. This is third only to the two obvious occupations of lawyers
and physicians. (48)
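As a rough check on those figures, $845 per week over 52 weeks works out to about $43,900 a year, which lines up with the median salary of roughly $44,000 cited below.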
In 1994, the median earnings for a full-time computer systems analyst were
about $44,000. The middle 50% earned between $34,100 and $55,000. The
highest tenth of all analysts earned $69,400; those with degrees generally
earn more. ("Computer Scientists" 95) It is also stated in America's 50 Fastest
Growing Jobs that systems analysts working in the Northeast had the highest
earnings and those working in the Midwest had the lowest earnings. (37)
Works Cited
"America's Fastest Growing Job Opportunities." Hispanic Times. 1996
"Computer Scientists and Systems Analysts." Occupational Outlook Handbook.
Indianapolis: JIST Works Inc. pp. 93-95.
"Computer Systems Analyst." COIN Educational Products. CD-ROM, 1995-96:
1-6
Farr, J. Michael. (1994). America's 50 Fastest Growing Jobs. Indianapolis: JIST
Works Inc.
Emch, Brian. Job Shadowing. Dana Corporation. 1996
Occupational Outlook Quarterly. Bureau of Labor Statistics. 1996.
"Systems Analyst." Dictionary of Occupational Titles. US Department of Labor.
1992: p.44
Williams, Linda. Careers Without College: Computers. Princeton: Peterson's
Guides. 1992.
f:\12000 essays\sciences (985)\Computer\Computer Technician.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Technician
I believe a Computer Technician is a good career for me because I have been around computers for many years now and enjoy them. I began to learn the basics of computers from my father when I was about 9 years old. Since then I have pretty much taught myself and taken off in the computer field. I now have 7 networked computers "linked together", help run an Internet provider and build web pages.
About a year ago my Uncle changed jobs and now he is a Computer Technician. I have been working with him and really enjoy it.
Five Tasks a Computer Technician May Perform
Generally there are five tasks a Computer Technician has to perform: conducting research, analyzing systems, monitoring software and hardware, fixing hardware and software, and designing computers.
Working Conditions
The working conditions of a Computer Technician vary. They depend on where and for whom you are working. Usually the average working environment is indoors, quiet and temperature controlled, and the technician usually works alone.
Working Schedule
The working hours vary as well. Computer Technicians are on call 24 hours a day, 7 days a week, because most companies' computers are running all the time and the companies cannot wait long for their computers to be fixed.
Salary
The average salary for a Computer Technician is approximately $65,500 per year.
To become a Computer Technician you need one or two years of technical training, which most technical and vocational schools offer, and you must have good math skills. There is no licensing, and no exams need to be passed, to become a Computer Technician.
Certain personal qualities are needed to become a Computer Technician, such as good eyesight, good hearing and the ability to work without supervision. Certain skills are needed as well, such as knowing how different computers function and work with one another.
Computer Technician employment opportunities exist now, as listed in the want ads, and are going to continue to grow in the future.
To become a Computer Technician you might want to pursue business courses, advanced math and computer courses during high school.
To prepare myself to become a Computer Technician I am going to have to take advantage of math classes to improve my math skills and take computer classes that are being offered in high school.
In my opinion a Computer Technician is the best choice for me because I have been around computers for so long, enjoy them, and like solving other people's and companies' computer problems.
f:\12000 essays\sciences (985)\Computer\Computer Technology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Programming
Programming a computer is almost as easy as using one and does not require you to be a math genius. People who are good at solving story problems make good programmers, and others say that artistic or musical talent is a sign of a potential programmer. Various computer languages are described, and tips on choosing the right language and learning how to use it are provided.
Learning how to program is actually easier than many people think. Learning to program takes about the same time as two semesters of a college course. The process of learning to program is uniquely reinforcing, because students receive immediate feedback on their screens. The programming languages Basic, Pascal, C, and Database are discussed; tips on learning the languages are offered; and a list of publishers' addresses is provided.
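As a small illustration of that immediate feedback, the short C program below is the kind of first exercise a beginner might type in: the result appears on the screen the moment it is compiled and run, and changing one number visibly changes the output.

    #include <stdio.h>

    /* A typical first exercise: print a small multiplication table.
       The instant, visible result is what makes learning to program
       feel so reinforcing. */
    int main(void)
    {
        int row, col;

        for (row = 1; row <= 5; row++) {
            for (col = 1; col <= 5; col++)
                printf("%4d", row * col);
            printf("\n");
        }
        return 0;
    }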
One approach to programming, rapid application development (RAD), has tremendous power, but it is not without its limits. The two basic advantages RAD tools promise over traditional programming are a shorter, more flexible development cycle and the fact that applications can be developed by a reasonably sophisticated end user. The main disadvantage is that RAD tools often still require code to be written, which means that most developers will probably have to learn the underlying programming language, except in the case of the simplest applications. The time gained from using a RAD tool can be immense, however: programmers using IBM's VisualAge report the ability to create up to 80 percent of an application visually, with the last 20 percent consisting of specialized functions. In other words, with such a tool most of the work is graphical point-and-click, and the code that remains really isn't much.
Anyone who is willing to invest a little time and effort can now write computer programs and customize commercial applications, thanks to new software tools. People can create their own application with such programming languages as Microsoft's Visual Basic for Windows (which is about $130) or Novell's AppWare, part of its PerfectOffice suite. These products enable users to do much of their programming through point-and-click choices without memorizing many complicated commands.
Programming can also be very difficult. At least one programming mistake is almost always made, and debugging it can be very hard. Just finding where the problem is can take a long time on its own, and then, if you fix that problem, another could appear. A programming error involving a cancer-therapy machine has led to loss of life, and the potential for disaster will increase as huge new software programs designed to control aircraft and the national air-traffic control system enter into use. There is currently no licensing or regulation of computer programmers, a situation that could change as internal and external pressures for safety mount.
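A sketch of how small such a mistake can be: the loop below is meant to add up five test scores, but an off-by-one error in the loop condition reads one element past the end of the array, so the total comes out wrong. Nothing in the source announces the bug, and finding it is exactly the slow, careful work described above. The array and its values are made up for the example.

    #include <stdio.h>

    int main(void)
    {
        int scores[5] = { 70, 85, 90, 65, 80 };
        int i, total = 0;

        /* BUG: the condition should be i < 5.  Using i <= 5 reads one
           element past the end of the array, so the total is garbage. */
        for (i = 0; i <= 5; i++)
            total += scores[i];

        printf("Total: %d\n", total);
        return 0;
    }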
Programming these days is also hard if you don't have the right hardware and software. Limited memory, a lack of programming standards, and hardware incompatibilities contributed to this problem by making computing confusingly complicated. Computing does not have to be complicated anymore, however. Although computer environments still differ in some respects, they look and feel similar enough to ease the difficulty of moving from one machine to another and from one application to another. Improved software is helping to resolve problems of hardware incompatibility. As users spend less time learning about computers, they can spend more time learning with them.
I would like to learn some of these programming languages. I am especially interested in learning Borland C++ or Visual C++. Visual Basic is all right, but I think learning a C language would be much more interesting and probably more profitable in the future.
Bibliography
1. Business Week April 3, 1995
2. Byte Magazine August 1995
3. Compute Magazine June 1995
4. Compute Magazine May 1996
5. Newsweek Magazine January 29, 1995
f:\12000 essays\sciences (985)\Computer\Computer Viruses 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Information About Viruses
A growing problem in using computers is computer viruses. Viruses are pesky little programs that some hacker with too much time on his hands wrote. Viruses have been known to create annoyances ranging from displaying messages on the screen, such as "Your PC is now stoned," to completely obliterating everything that is contained on the computer's hard disk.
Viruses are transferred from computer to computer in a number of ways. The most common way that users receive viruses at home is by downloading programs off the Internet or off of local Bulletin Board Systems, or BBSs. These viruses are then transferred to a floppy disk when it is written to by the infected computer. Computers can also be infected when a disk that is infected with a virus is used to boot the computer. Once a computer is infected, everyone who writes a floppy disk on it gets their floppy disk contaminated, and risks carrying the virus to another computer, which may not have good virus protection. On IBM-compatible PCs, these viruses will only infect executable programs, such as .EXE and .COM files. On a Macintosh, any file can be contaminated. A disk can also be infected even without any files on it. These viruses are called BOOT SECTOR viruses. They reside on the part of the floppy disk or hard disk that stores the information needed for the disk to be used, and they are loaded into memory each time the computer is booted from such a disk.
DON'T DESPAIR!
Despite all of what has just been said, viruses are controllable. There is
software called virus protection software. A couple of programs that have been proven to work are F-PROT and McAfee's Virus Scan. These programs scan the computer's memory and the files contained on the hard disk each time the program is executed. These programs also reside in the computer's memory. These programs will help reduce the spread of viruses.
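As a rough idea of how a scanner recognises a known virus (this is only a toy sketch, not how F-PROT or McAfee's Virus Scan are actually implemented), the C program below reads a file and searches it for a short byte pattern standing in for a virus signature. The signature bytes are invented purely for illustration.

    #include <stdio.h>
    #include <string.h>

    /* Toy signature scanner: report whether a file contains SIGNATURE.
       The signature bytes are made up for illustration only. */
    static const unsigned char SIGNATURE[] = { 0xDE, 0xAD, 0xBE, 0xEF };

    int main(int argc, char *argv[])
    {
        unsigned char buf[4096];
        size_t n, i;
        int found = 0;
        FILE *fp;

        if (argc != 2) {
            fprintf(stderr, "usage: %s file-to-scan\n", argv[0]);
            return 1;
        }
        fp = fopen(argv[1], "rb");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        /* Read the file in chunks and look for the signature.  (A real
           scanner would also handle signatures straddling chunk borders.) */
        while (!found && (n = fread(buf, 1, sizeof buf, fp)) >= sizeof SIGNATURE) {
            for (i = 0; i + sizeof SIGNATURE <= n; i++) {
                if (memcmp(buf + i, SIGNATURE, sizeof SIGNATURE) == 0) {
                    found = 1;
                    break;
                }
            }
        }
        fclose(fp);

        printf("%s: %s\n", argv[1], found ? "signature found" : "clean");
        return 0;
    }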
f:\12000 essays\sciences (985)\Computer\Computer Viruses 4.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-----------
Viruses
-----------
A virus is a program that copies itself without the knowledge of the computer user. Typically, a virus spreads from one computer to another by adding itself to an existing piece of executable code so that it is executed when its host code is run. If a virus is found, you shouldn't panic or be in a hurry, and you should work systematically. Don't rush!
A virus may be classified by its method of concealment (hiding). Some are called stealth viruses because of the way that they hide themselves, and some are called polymorphic because of the way they change themselves to avoid detection by scanners.
The most common classification relates to the sort of executable code which the virus attaches itself to. These are:
- Partition Viruses
- Boot Viruses
- File Viruses
- Overwriting Viruses
As well as replicating, a virus may carry a Damage routine.
There is also a set of programs that are related to viruses by virtue of their intentions, appearances, or users' likely reactions. For example:
- Droppers
- Failed viruses
- Packagers
- Trojans
- Jokes
- Test files
THE DAMAGE ROUTINE
Damage is defined as something that you would prefer not to have happened. It is measured by the amount of time it takes to reverse the damage.
Trivial damage happens when all you have to do is get rid of the virus. There may be some audio or visual effect; often there is no effect at all.
Minor damage occurs when you have to replace some or all of your executable files from clean backups, or by re-installing. Remember to run FindVirus again afterwards.
Moderate damage is done when a virus trashes the hard disk, scrambles the FAT, or low-level formats the drive. This is recoverable from your last backup. If you take backups every day you lose, on average, half a day's work.
Major damage is done by a virus that gradually corrupts data files, so that you are unaware of what is happening. When you discover the problem, these corrupted files are also backed up, and you might have to restore a very old backup to get valid data.
Severe damage is done by a virus that gradually corrupts data files, but you cannot see the corruption (there is no simple way of knowing whether the data is good or bad). And, of course, your backups have the same problem.
Unlimited damage is done by a virus that gives a third party access to your network, by stealing the supervisor password. The damage is then done by the third party, who has control of the network.
CLASSIFICATION OF VIRUSES
Stealth Viruses
If a stealth virus is in memory, any program attempting to read the file (or sector) containing the virus is fooled into believing that the virus is not there, as it is hiding. The virus in memory filters out its own bytes, and only shows the original bytes to the program.
There are three ways to deal with this:
1. Cold Boot from a clean DOS floppy, and make sure that nothing on the hard disk is executed. Run any anti-virus software from floppy disk. Unfortunately, although this method is foolproof, relatively few people are willing to do it.
2. Search for known viruses in memory. All the virus scanners do this when the programs are run.
3. Use advanced programming techniques to probe the confusion that the virus causes. A process known as the "Anti-Stealth Methodology" in some scanners can be used for this.
Polymorphic Viruses
A polymorphic virus is one that is encrypted, and the decryptor/loader for the rest of the virus is very variable. With a polymorphic virus, two instances of the virus have no sequence of bytes in common. This makes it more difficult for scanners to detect them.
Many scanners use the "Fuzzy Logic" technique and a "Generic Decryption Engine" to detect these viruses.
The Partition and Partition Viruses
The partition sector is the first sector on a hard disk. It contains information about the disk such as the number of sectors in each partition, where the DOS partition starts, plus a small program. The partition sector is also called the "Master Boot Record" (MBR).
When a PC starts up, it reads the partition sector and executes the code it finds there. Viruses that use the partition sector modify this code.
Since the partition sector is not part of the normal data storage part of a disk, utilities such as DEBUG will not allow access to it. However, it is possible to use Inspect Disk to examine the partition sector. A floppy disk does not have a partition sector.
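To make the layout of the partition sector a little more concrete, the C sketch below examines a saved disk image rather than the live drive (reading the physical sector needs a special tool, as noted above): it loads the first 512 bytes and checks for the 0x55AA signature that marks a valid Master Boot Record. The file name "disk.img" is only a placeholder for an image you have captured yourself.

    #include <stdio.h>

    /* Read the first 512-byte sector of a disk image and check the
       0x55AA boot signature found at the end of a valid MBR. */
    int main(void)
    {
        unsigned char sector[512];
        FILE *fp = fopen("disk.img", "rb");   /* placeholder image file */

        if (fp == NULL || fread(sector, 1, 512, fp) != 512) {
            fprintf(stderr, "could not read the first sector\n");
            return 1;
        }
        fclose(fp);

        if (sector[510] == 0x55 && sector[511] == 0xAA)
            printf("Boot signature present: looks like a valid partition sector.\n");
        else
            printf("No 0x55AA signature: not a standard partition sector.\n");
        return 0;
    }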
How to Remove a Partition Sector (MBR) Virus
1. Cold Boot from a clean DOS diskette.
2. Run the DOS scanner.
3. Select the drive to clean and "Repair" it.
4. Follow the instructions.
The Boot Sector and Boot Sector Viruses
The boot sector is the first sector on a floppy disk. On a hard disk it is the first sector of a partition. It contains information about the disk or partition, such as the number of sectors, plus a small program.
When the PC starts up, it attempts to read the boot sector of a disk in drive A:. If this fails because there is no disk, it reads the boot sector of drive C:. A boot sector virus replaces this sector with its own code and moves the original elsewhere on the disk.
Even a non-bootable floppy disk has executable code in its boot sector. This displays the "not bootable" message when the computer attempts to boot from the disk. Therefore, a non-bootable floppy can still contain a virus and infect a PC if it is inserted in drive A: when the PC starts up.
File Viruses
File viruses append or insert themselves into executable files, typically .COM and .EXE programs.
A direct-action file virus infects another executable file on disk when its 'host' executable file is run.
An indirect-action (or TSR - Terminate and Stay Resident) file virus installs itself into memory when its 'host' is executed, and infects other files when they are subsequently accessed.
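One common defence against file viruses of either kind is an integrity check: record the size and a checksum of each executable while it is known to be clean, and raise the alarm if either ever changes. The C sketch below computes such a checksum for one file; the simple additive checksum is chosen only for illustration and is not the algorithm any particular anti-virus product uses.

    #include <stdio.h>

    /* Compute a byte count and a simple additive checksum for one file.
       Comparing these against values recorded when the file was known
       to be clean reveals that a file virus has modified the program. */
    int main(int argc, char *argv[])
    {
        unsigned long checksum = 0, length = 0;
        FILE *fp;
        int c;

        if (argc != 2) {
            fprintf(stderr, "usage: %s program.exe\n", argv[0]);
            return 1;
        }
        fp = fopen(argv[1], "rb");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        while ((c = fgetc(fp)) != EOF) {
            checksum = (checksum + (unsigned char)c) & 0xFFFFFFFFUL;
            length++;
        }
        fclose(fp);

        printf("%s: %lu bytes, checksum %08lX\n", argv[1], length, checksum);
        return 0;
    }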
Overwriting Viruses
Overwriting viruses overwrite all or part of the original program. As a result, the original program doesn't run. Overwriting viruses are not, therefore, a real problem - they are extremely obvious, and so cannot spread effectively.
APPEARANCES AND INTENTIONS OF VIRUSES
Droppers
Droppers are programs that have been written to perform some apparently useful job but, while doing so, write a virus out to the disk. In some cases, all that they do is install the virus (or viruses).
A typical example is a utility that formats a floppy disk, complete with Stoned virus installed on the boot sector.
Failed Viruses
Sometimes a file is found that contains a 'failed virus'. This is the result of either a corrupted 'real' virus or simply bad programming on the part of an aspiring virus writer. The virus does not work - it hangs when run, or fails to infect.
Many viruses have severe bugs that prevent them from achieving their design goals - some will not reproduce successfully or will fail to perform their intended final actions (such as corrupting the hard disk). In general, many virus authors are very poor programmers.
Packagers
Packagers are programs that in some way wrap something around the original program. This could be as an anti-virus precaution, or for file compression. Packagers can mask the existence of a virus inside.
Trojans and Jokes
A Trojan is a program that deliberately does unpleasant things, as well as (or instead of) its declared function. They are not capable of spreading themselves and rely on users copying them.
A Joke is a harmless program that does amusing things, perhaps unexpectedly. We include the detection of a few jokes in the Toolkit, where people have found particular jokes that give concern or offence.
Test files
Test files are used to test and demonstrate anti-virus software, in the context of viruses. They are not viruses - simply small files that are recognised by the software and cause it to simulate what would happen if it had found a virus. This allows users to see what happens when it is triggered, without needing a live virus.
METHODS OF REMOVING VIRUSES
How to Remove a Boot Virus from a Hard Disk
1. Cold Boot from a clean DOS diskette.
2. Run the scanner.
3. Select the drive to clean and "Repair" it.
An alternative method is as follows:
1. Cold Boot from a clean DOS diskette.
2. Type:
SYS C: at the DOS prompt. (if C drive is infected)
The clean DOS diskette should be the same version of DOS that is on the hard disk.
How to Remove a Boot Virus from a Floppy
1. Cold Boot from a clean DOS diskette.
2. Run the scanner.
3. Make sure to "Replace the Boot Sector" of the floppy drive.
If you find a new virus...
If you have some symptoms that you think are a virus, then:
1. Format a floppy disk in the infected computer.
2. Copy any infected files to that floppy.
3. Copy your FORMAT and CHKDSK programs too.
As you can see in this essay, viruses are appalling, and since a virus spreads from one computer to another, the problem only gets worse - just like a contagious human virus, which causes more harm as more people are infected and more need to be treated. The same concept applies to a computer virus infecting computer after computer. Also, various techniques have been explained for removing and dealing with computer viruses of different types, afflicting different components of a computer. So, the next time you suspect that your computer has been damaged by a virus, read through this essay and apply the remedies described.
f:\12000 essays\sciences (985)\Computer\Computer Viruses and their Effects on your PC.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Viruses and their Effects on your PC
Table of Contents
What is a Virus?
HOW A VIRUS INFECTS YOUR SYSTEM
HOW DOES A VIRUS SPREAD?
BIGGEST MYTH: "I BUY ALL OF MY PROGRAMS ON CD ROM FROM THE STORE". STORE BOUGHT SOFTWARE NEVER CONTAINS VIRUSES
INFECTION (DAMAGES)
PROTECT YOUR COMPUTER, NOW!!
A virus is an independent program that reproduces itself. It can attach itself to other programs and make copies of itself (i.e., companion viruses). It can damage or corrupt data, or lower the performance of your system by using resources like memory or disk space. A virus can be annoying, or it can cost you lots of cold hard cash. A virus is just another name for a class of programs. They do anything that another program can. The only distinguishing characteristic is the program's ability to reproduce and infect other programs. Is a computer virus similar to a human virus? Below is a chart that shows the similarities.
Comparing Biological Viruses & Human Viruses
Human Virus Effects Computer Virus Effects
Attack specific body cells' Attack specific programs (*.com,*.exe)
Modify the genetic information of a cell other
than previous one. Manipulate the program:
It performs tasks.
New viruses grow in the infected cell itself. The infected program produces virus programs.
An infected program may not exhibit symptoms for a while. The infected program can work without error for a long time.
Not all cells with which the virus contact are infected. Program can be made immune against certain viruses.
Viruses can mutate and thus cannot clearly be diagnosed. Virus program can modify themselves & possibly escape detection this way.
Infected cells aren't infected more than once by the same cell. Programs are infected only once by most viruses.
There are many ways a virus can infect your system. One way, if the virus is a file-infecting virus, is when you run a file infected with that virus. This particular kind of virus can only infect if YOU run the program! This virus targets COM and EXE files, but has also been found in other executable files. Some viruses are memory resident and will infect every file run after the infected one. Others are "direct action" infectors that immediately infect other files on your hard drive and then leave. Viruses can also be polymorphic. Polymorphism is where the virus changes itself with every infection so that it is harder to find. Also, virus writers have come up with a virus called a multipartite virus. This virus can infect boot sectors and the master boot record as well as files, which enables it to attack more targets, spread further and thus do more damage.
A computer virus can be spread in many different ways. The first way is by a person knowingly installing a virus onto a computer; now the computer is infected. The second way is inserting your disk into an infected computer. The infected computer will duplicate the virus onto your disk, and now your disk is a virus carrier: any computer that comes in contact with this disk will become infected. For example, I once caught a virus from Cochise College simply by copying two non-infected disks on a computer that was infected. What if a friend borrows an infected disk? Your friend's computer will most likely become infected the instant that he or she uses your disk in a computer. The third way is the Internet. A lot of programs on the Internet contain live viruses. In fact, there seem to be countless ways to become infected. Every time you download a program from somewhere or borrow a disk from a friend, you are taking a risk of getting infected.
Computer software bought in stores has been known to carry viruses. "How? CD-ROMs are non-recordable." A virus may be installed onto a disc at the time of manufacturing. In September of 1996, the September edition of the Microsoft SPCD had a file infected with a virus called "Wazzu". Watch out for SIA\MKTOOLS\CASE\ED3905A.DOC. Microsoft aided the spread of Wazzu by distributing a Wazzu-infected document on the Swiss ORBIT conference CD, and by keeping an identical copy of the infected document on its Swiss Website for at least five days after being notified of the problem. Microsoft records note that over 2 million of the infected CDs were sold. The CDs were replaced in a recall from Microsoft; however, this had already aided the spread of the Wazzu virus.
The major damages can vary, but here are the most common:
A. Fill up your P.C. with Garbage:
As a virus reproduces, it takes up space. This space cannot be used by the operator.
As more copies of the virus are made, the memory space is lessened.
B. Mess Up Files:
Computer files have a fixed method of being stored. With this being the case, it is very easy for a computer virus to affect the system so some parts of the accessed files cannot be located.
C. Mess Up FAT:
FAT (File Allocation Table) is the method used to contain the information required about the location of files stored on a disk. Any alteration to this information can cause endless trouble.
D. Mess Up The Boot Sector:
The boot sector is the special information found on a disk. Changing the boot sector could result in the inability of the computer to run.
E. Erase The Whole Hard Drive/ Diskette:
A virus can simply format a disk. This will cause you to lose all of the data stored on the formatted disk.
F. Reset The Computer:
A virus can reset your computer. Normally, the operator or user has to press a few keys to do this. The virus can do it by sending codes to the operating system.
G. Slowing Things Down:
The object of this virus is to slow down the running of a program. This can cause a computer with a 100-megahertz processor to act like a computer with 16 megahertz. That is why a 486 or 586 computer can slow down and run as if it were a 286 - "turtle speed," as I would call it.
H. Redefine Keys:
The computer has been programmed to recognize certain codes with the press of certain keys. For Example: When you press the letter T, your computer puts a T on your display. A virus can change the command. Imagine if every time you pressed the T, your computer would format your hard drive.
I. Lock The Keyboard:
A virus can redefine all the keys to an empty key. Then the user cannot use the keyboard to input any data.
People often tell me I am paranoid about viruses. Some forms of paranoia are healthy. When it comes to securing your system from viruses, trust no one, not even your mother - when you exchange disks with her, that is. Thank God for the invention of Anti-Virus Software. Anti-Virus Software is a program that can protect your PC from a virus, and it can also remove a virus once it is detected. However, there are thousands of viruses in existence, and finding a consistent virus scanning program can be rough. I have read many articles on popular virus scanning programs and have found the top two to be:
#1.) McAfee Virus Scan
#2.) Norton Anti-Virus
Both of these programs can prevent a virus from entering your computer. If one sneaks past, then you will have a choice to delete the file, clean the virus or move it elsewhere. I would highly suggest that you check out these programs and test them.
Conclusion:
Remember, one virus can shred many years of work on your computer. Protect yourself and always, use an Anti-Virus Program.
f:\12000 essays\sciences (985)\Computer\Computer Viruses.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER VIRUSES
(anonymous)
<1>WHAT IS A COMPUTER VIRUS:
The term usually used to define a computer virus is:
' A computer virus is often malicious software which
replicates itself'
[ Podell 1987 for similar definition ]
- COMPUTER VIRUSES ARE BASICALLY PROGRAMS, LIKE A
SPREADSHEET OR A WORD PROCESSOR.
- PROGRAMS WHICH CAN INSERT EXECUTABLE COPIES OF THEMSELVES INTO
OTHER PROGRAMS.
- PROGRAMS THAT MANIPULATE PROGRAMS, MODIFY OTHER
PROGRAMS AND REPRODUCE THEMSELVES IN THE PROCESS.
Comparing Biological viruses & Computer viruses
*************************************************************
* Attack specific * Attack specific *
* body cells * programs (*.COM *.EXE) *
*************************************************************
* Modify the genetic information * Manipulate the program: *
* of a cell other than previous 1* It performs tasks *
*************************************************************
* New viruses grow in the * The infected program produces *
* infected cell itself * virus programs *
*************************************************************
* Infected cells aren't infected * Program are infected only once*
* more than once by the same cell* by most programs*
*************************************************************
* An infected organism may not * The infected program can work *
* exhibit symptoms for a while * without error for a long time *
*************************************************************
* Not all cells with which the * Program can be made immune *
* virus contact are infected * against certain viruses *
*************************************************************
* Viruses can mutate and thus * Virus program can modify *
* cannot be clearly told apart * themselves & possibly escape *
* * detection this way *
*************************************************************
However, " computer virus " is just another name for a class
of programs. They can do anything that another program can.
The only distinguishing characteristic is the program's
ability to reproduce and infect other programs.
<2>WHAT KIND OF PROGRAM ARE CHARACTERIZED AS A VIRUS PROGRAM:
- PROGRAM WHICH HAS CAPABILITY TO EXECUTE THE MODIFICATION
ON A NUMBER OF PROGRAMS.
- CAPABILITY TO RECOGNIZE A MODIFICATION PERFORMED ON A
PROGRAM (THE ABILITY TO PREVENT FURTHER MODIFICATION OF
THE SAME PROGRAM UPON SUCH RECOGNITION).
- MODIFIED SOFTWARE ASSUME ATTRIBUTES 1 TO 4.
<3>HOW DOES A VIRUS SPREAD:
A computer virus can only be put into your system either by
yourself or someone else. One way in which a virus can be
put into your computer is via a Trojan Horse.
-A TROJAN HORSE IS USUALLY FOUND ON DISKS CONTAINING
PIRATED COPIES OF SOFTWARE. IT IS SIMPLY A DAMAGING
PROGRAM DISGUISED AS AN INNOCENT ONE. MANY VIRUSES
MAY BE HIDDEN IN IT, BUT TROJAN HORSES THEMSELVES DO
NOT HAVE THE ABILITY TO REPLICATE.
Viruses also can be spread through a Wide Area network
(WAN) or a Local Area Network (LAN) by telephone line.
For example, downloading a file from a local BBS.
BBS (bulletin board system) - an electronic mailbox that users
can access to send or receive messages.
However, there seems to be a countless number of ways to
become infected. Every time you download a program from
somewhere or borrow a disk from a friend, you are taking
a risk of getting infected.
<4>DAMAGES AND SIGNS OF INFECTION:
a.> Fill Up your P.C. with Garbage:
As a virus reproduces, it takes up space. This space cannot
be used by the operator. As more copies of the virus are
made, the memory space is lessened.
b.> Mess Up Files:
Computer files have a fixed method of being stored. With
this being the case, it is very easy for a computer virus to
affect the system so some parts of the accessed files cannot
be located.
c.> Mess Up FAT:
FAT(the File Allocation Table) is the method used to contain
the information required about the location of files stored
on a disk. Any alteration to this information can cause
endless trouble.
d.> Mess Up The Boot Sector:
The boot sector is the special information found on a disk.
Changing the boot sector could result in the inability of the
computer to run.
e.> Format a Disk/ Diskette:
A virus can simply format a disk as the operator would with
the format or initialise command.
f.> Reset The Computer:
To reset the computer, the operator or the user only has to
press a few keys. The virus can do this by sending the codes
to the operating system.
g.> Slowing Things Down:
As the name implies, the object of the virus is to slow down
the running line of the program.
h.> Redefine Keys:
The computer has been programmed to recognize that certain
codes/signals symbolize a certain keystroke. The virus
could change the definition of these keystrokes.
i.> Lock The Keyboard: redefining all keys into an empty key.
<5>WHAT TO DO AFTER VIRUS ATTACKS:
When signs of a virus attack have been recognized,
the virus has already reproduced itself several times.
Thus, to get rid of the virus, the user has to hack down
and destroy each one of these copies. The easier way is to:
1. Have the original write protected back-up copy of your
operating system on a diskette.
2. Power down the machine.
3. Boot up the system from the original system diskette.
4. Format the hard disk.
5. Restore all back-ups and all executable program.
*If it's not effective, power down and seek for professional help*
<6> TYPE OF VIRUSES:
a.> OVER-WRITING VIRUSES
b.> NON-OVERWRITNG VIRUSES
c.> MEMORY RESENDENT VIRUSES
<7>PRACTICE SAFE HEX:
Viruses are a day-to-day reality. Different activities
lead to different exposures. To protect oneself from a
virus, several things can be done:
1. Avoid them in the first place.
2. Discovering and getting rid of them.
3. Repairing the damage.
The simple things that can cut down on the exposure rate are to:
avoid pirated software, and check programs that have been
downloaded from a BBS before running them. Make sure that you
have sufficient backups.
<8> ANTIVIRUS PRODUCTS COMPANY:
The pace at which new antiviral products have been pouring
onto the market has accelerated rapidly since the major
infection of 1988. Indeed, by early 1989, there were over 60
proprietary products making varied claims for effectiveness in
preventing or detecting virus attacks.
For: IBM PCs & Compatibles
DISK DEFENDER PC SAFE McAFEE SCAN
DIRECTOR TECHNOLOGIES THE VOICE CONNECTION McAFEE ASSOCIATES
906 University Place 17835 Skypark Circle 4423 Cheeney Street
Evanston, IL 60201 Irvine, CA 92714 Santa Clara, CA 95054
TEL: (408) 727-4559 TEL: (714) 261-2366 TEL: (408) 988-3832
Price: $ 240.00 U.S. Price: $ 45.00 U.S. Price: $ 80.00 U.S.
Class : HARD.2 Class : SOFT.1 Class: SOFT.3
For: Macintosh Plus, SE, & II(Apple)
VIREX
HJC SOFTWARE
P.O. BOX 51816
Durham, NC 27717
TEL: (919) 490-1277
Price: $ 99.95 U.S.
Class : SOFT.3
*Class 1(infection prevention class)*
Most Class 1 products are unable to distinguish between an
acceptable or unacceptable access to an executable program.
For example, a simple DOS COPY command might cause the warning
to appear on screen.
*Class 2(infection detection class)*
All Class 2 products are able to distinguish all DOS
commands. In addition to Class 1's prevention function, they are able
to protect all COM and EXE files from infection.
*Class 3(Top class)*
Class 3 products are capable of both prevention and detection
functions, and they are capable of removing the infecting viruses.
<1>. COMPUTER VIRUSES a high-tech disease
WRITTEN BY: RALF BURGER
PUBLISH BY: ABACUS, U.S.A
<2>. DATA THEFT
WRITTEN BY: HUGO CORNWALL
PUBLISH BY: PONTING-GREEN, LONDON
<3>. COMPUTER VIRUSES,WORMS,DATA DIDDLERS,KILLER
-PROGRAMS, AND OTHER THREATS TO YOUR SYSTEM
WRITTEN BY: JOHN McAFEE & COLIN HAYNES
PUBLISH BY: ST.MARTIN'S PRESS, U.S.A
*************************************************
* COMPUTER VIRUSES CRISIS * THE SECRET WORLD *
* WRITTEN BY: PHILP E FITES * OF COMPUTER *
****************************** WRITTEN BY: *
* COMPUTE'S COMPUTER VIRUSES * ALLAN LNNDELL *
* WRITTEN BY: RALPH ROBERTS * *
f:\12000 essays\sciences (985)\Computer\computer virusses.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER VIRUSES
Kevan
<1>WHAT IS A COMPUTER VIRUS:
The term usually used to define a computer virus is:
' A computer virus is often malicious software which
replicates itself'
[ Podell 1987 for similar definition ]
- COMPUTER VIRUSES ARE BASICALLY PROGRAMS, LIKE A
SPREADSHEET OR A WORD PROCESSOR.
- PROGRAMS WHICH CAN INSERT EXECUTABLE COPIES OF ITSELF INTO
OTHER PROGRAMS.
- PROGRAMS THAT MANIPULATES PROGRAMS, MODIFIES OTHER
PROGRAMS AND REPRODUCE ITSELF IN THE PROCESS.
Comparing Biological viruses & Computer viruses
*************************************************************
* Attack specific * Attack specific *
* body cells * programs (*.COM *.EXE) *
*************************************************************
* Modify the genetic information * Manipulate the program: *
* of a cell other than previous 1* It performs tasks *
*************************************************************
* New viruses grow in the * The infected program produces *
* infected cell itself * virus programs *
*************************************************************
* Infected cells aren't infected * Program are infected only once*
* more than once by the same cell* by most programs*
*************************************************************
* An infected organism may not * The infected program can work *
* exhibit symptoms for a while * without error for a long time *
*************************************************************
* Not all cells with which the * Program can be made immune *
* virus contact are infected * against certain viruses *
*************************************************************
* Viruses can mutate and thus * Virus program can modify *
* cannot be clearly told apart * themselves & possibly escape *
* * detection this way *
*************************************************************
However, " computer virus " is just another name for a class
of programs. They can do anything that another program can.
The only distinguishing characteristic is the program has
ability to reproduce and infect other programs.
<2>WHAT KIND OF PROGRAM ARE CHARACTERIZED AS A VIRUS PROGRAM:
- PROGRAM WHICH HAS CAPABILITY TO EXECUTE THE MODIFICATION
ON A NUMBER OF PROGRAMS.
- CAPABILITY TO RECOGNIZE A MODIFICATION PERFORMED ON A
PROGRAM.(THE ABILITY TO PREVENT FURTHER MODIFICATION OF
THE SAME PROGRAM UPON SUCH RECONDITION.)
- MODIFIED SOFTWARE ASSUME ATTRIBUTES 1 TO 4.
<3>HOW DOES A VIRUS SPREAD:
A computer virus can only be put into your system either by
yourself or someone else. One way in which a virus can be
put into your computer is via a Trojan Horse.
-TROJAN HORSE IS USUALLY CONTAMINATED IN DISKS WHICH ARE
PARTICULARY PIRATED COPIES OF SOFTWARE. IT IS SIMPLY A
DAMAGING PROGRAM DISGUISED AS AN INNOCENT ONE. MANY
VIRUSES MAYBE HIDDEN IN IT, BUT T.H. THEMSELVES DO NOT
HAVE THE ABILITY TO REPLICATE.
Viruses also can be spread through a Wide Area network
(WAN) or a Local Area Network (LAN) by telephone line.
For example down loading a file from a local BBS.
BBS(bulletin board system)-AN Electronic mailbox that user
can access to send or receive massages.
However, there seems to be countless numbers of ways to
become infected. Every-time you down loads a program from
somewhere or borrowed a disk from a friend, you are taking
a risk of getting infected.
<4>DAMAGES AND SIGNS OF INFECTION:
a.> Fill Up your P.C. with Garbage:
As a virus reproduces, it takes up space. This space cannot
be used by the operator. As more copies of the virus are
made, the memory space is lessened.
b.> Mess Up Files:
Computer files have a fixed method of being stored. With
this being the case, it is very easy for a computer virus to
affect the system so some parts of the accessed files cannot
be located.
c.> Mess Up FAT:
FAT(the File Allocation Table) is the method used to contain
the information required about the location of files stored
on a disk. Any allocation to this information can cause
endless trouble.
d.> Mess Up The Boot Sector:
The boot sector is the special information found on a disk.
Changing the boot sector could result in the inability of the
computer to run.
e.> Format a Disk/ Diskette:
A virus can simply format a disk as the operator would with
the format or initialise command.
f.> Reset The Computer:
To reset the computer, the operator or the user only has to
press a few keys. The virus can do this by sending the codes
to the operating system.
g.> Slowing Things Down:
As the name implies, the object of the virus is to slow down
the running line of the program.
h.> Redefine Keys:
The computer has been program to recognize that certain
codes/ signals symbolize a certain keystroke. The virus
could change the definition of these keystrokes.
i.> Lock The Keyboard: redefining all keys into an empty key.
<5>WHAT TO DO AFTER VIRUS ATTACKS:
When signs of a virus attack have been recognized,
the virus has already reproduced itself several times.
Thus, to get rid of the virus, the user has to hunt down
and destroy each one of these copies. The easier way is to:
1. Have the original write-protected backup copy of your
operating system on a diskette.
2. Power down the machine.
3. Boot up the system from the original system diskette.
4. Format the hard disk.
5. Restore all backups and all executable programs.
*If this is not effective, power down and seek professional help*
<6> TYPES OF VIRUSES:
a.> OVER-WRITING VIRUSES
b.> NON-OVERWRITING VIRUSES
c.> MEMORY-RESIDENT VIRUSES
<7>PRACTICE SAFE HEX:
Viruses are a day-to-day reality. Different activities
lead to different levels of exposure. To protect oneself from a
virus, several things can be done:
1. Avoid them in the first place.
2. Discover and get rid of them.
3. Repair the damage.
The simple things that can cut down on the exposure rate are to
avoid pirated software, to check programs that have been
downloaded from a BBS before running them, and to make sure
that you have sufficient backups.
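One simple way to check a downloaded program before running it is to compare a
checksum of the file against a value published by its author. The sketch below,
in Python, assumes a hypothetical file name and a placeholder for the published value:

    import hashlib

    def file_checksum(path, algorithm="sha256"):
        """Return the hex digest of a file, read in chunks so large downloads fit in memory."""
        digest = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # "known_good" stands in for a checksum published alongside the download.
    known_good = "replace-with-published-value"
    actual = file_checksum("downloaded_program.exe")
    if actual != known_good:
        print("Checksum mismatch - do not run this program.")
    else:
        print("Checksum matches the published value.")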
<8> ANTIVIRUS PRODUCT COMPANIES:
The pace at which new antiviral products have been pouring
onto the market has accelerated rapidly since the major
infection of 1988. Indeed, by early 1989, there were over 60
proprietary products making varied claims for effectiveness in
preventing or detecting virus attacks.
For: IBM PCs & Compatibles
DISK DEFENDER
DIRECTOR TECHNOLOGIES
906 University Place
Evanston, IL 60201
TEL: (408) 727-4559
Price: $ 240.00 U.S.
Class : HARD.2
PC SAFE
THE VOICE CONNECTION
17835 Skypark Circle
Irvine, CA 92714
TEL: (714) 261-2366
Price: $ 45.00 U.S.
Class : SOFT.1
McAFEE SCAN
McAFEE ASSOCIATES
4423 Cheeney Street
Santa Clara, CA 95054
TEL: (408) 988-3832
Price: $ 80.00 U.S.
Class : SOFT.3
For: Macintosh Plus, SE, & II(Apple)
VIREX
HJC SOFTWARE
P.O. BOX 51816
Durham, NC 27717
TEL: (919) 490-1277
Price: $ 99.95 U.S.
Class : SOFT.3
*Class 1 (infection prevention class)*
Most Class 1 products are unable to distinguish between an
acceptable and an unacceptable access to an executable program.
For example, a simple DOS COPY command might cause the warning
to appear on screen.
*Class 2 (infection detection class)*
All Class 2 products are able to distinguish all DOS
commands. In addition to Class 1's prevention function, they are
able to protect all COM and EXE files from infection.
*Class 3 (top class)*
Class 3 products are capable of both prevention and detection
functions, and they are capable of removing the infecting viruses.
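As a rough illustration of the detection side (Class 2 and Class 3 behaviour),
the Python sketch below scans COM and EXE files for known byte patterns. The
signatures here are invented placeholders; real products maintain databases of
thousands of them:

    import os

    # Invented signatures for illustration only.
    SIGNATURES = {
        "EXAMPLE-VIRUS-A": b"\xde\xad\xbe\xef",
        "EXAMPLE-VIRUS-B": b"This program is sick!",
    }

    def scan_file(path):
        """Return the names of any known signatures found in one file."""
        with open(path, "rb") as f:
            data = f.read()
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    def scan_directory(root):
        """Walk a directory tree and report executable files that match a signature."""
        for folder, _dirs, files in os.walk(root):
            for filename in files:
                if filename.lower().endswith((".com", ".exe")):
                    path = os.path.join(folder, filename)
                    hits = scan_file(path)
                    if hits:
                        print(f"{path}: possible infection ({', '.join(hits)})")

    scan_directory(".")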
<1>. COMPUTER VIRUSES: A HIGH-TECH DISEASE
WRITTEN BY: RALF BURGER
PUBLISHED BY: ABACUS, U.S.A.
<2>. DATA THEFT
WRITTEN BY: HUGO CORNWALL
PUBLISHED BY: PONTING-GREEN, LONDON
<3>. COMPUTER VIRUSES, WORMS, DATA DIDDLERS, KILLER
PROGRAMS, AND OTHER THREATS TO YOUR SYSTEM
WRITTEN BY: JOHN McAFEE & COLIN HAYNES
PUBLISHED BY: ST. MARTIN'S PRESS, U.S.A.
<4>. COMPUTER VIRUSES CRISIS
WRITTEN BY: PHILIP E. FITES
<5>. THE SECRET WORLD OF COMPUTER
WRITTEN BY: ALLAN LUNDELL
<6>. COMPUTE'S COMPUTER VIRUSES
WRITTEN BY: RALPH ROBERTS
f:\12000 essays\sciences (985)\Computer\Computers 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tim Gash 1
CRS-07
Mr. Drohan
January 31, 1997
The History and Future of Computers
With the advances in computer technology it is now possible for more and more Canadians to have personal computers in their homes. Breakthroughs in computer processing speeds and storage capacity, combined with the reduced size of the computer, have allowed even the smallest apartment to hold a computer. In the past the only places to have computers were military institutes and some universities; this was because of their immense size and price. Today, with falling computer prices and the opportunity to access larger networks, the share of homes with computers has grown from just 10% in 1986 to 25% in 1994. Also, of that 25%, 34% were equipped with modems, which allow for connection to on-line services via telephone lines.
The primitive start of the computer came about around 4000 BC with the invention of the abacus by the Chinese. It was a rack with beads strung on wires that could be moved to make calculations. The first digital computer is usually credited to Blaise Pascal. In 1642 he made the device to aid his father, who was a tax collector. In 1694 Gottfried Leibniz improved the machine so that with the rearrangement of a few parts it could be used to multiply. The next logical advance came from Thomas of Colmar in 1820, who produced a machine that could perform all four of the basic operations: addition, subtraction, multiplication and division. With this added versatility the device was in operation up until the First World War.
Thomas of Colmar made the common calculator, but the real start of computers as they are known today comes from Charles Babbage. Babbage designed a machine that he called a Difference Engine. It was designed to make many long calculations automatically and print out the results. A working model was built in 1822 and fabrication began in
1823. Babbage worked on his invention for 10 years before he lost interest in it. His loss of interest was caused by a new idea he thought up. The Difference Engine was limited in adaptability as well as applicability. The new idea would be a general-purpose, automatic, mechanical digital computer that would be fully program controlled. He called this the Analytical Engine. It would have conditional control transfer capability so that commands could be input in any order, not just the way that it had been programmed. The machine was supposed to use punched cards which were to be read into the machine from several reading stations. The machine was supposed to operate automatically by steam power and require only one person to operate it. Babbage's machines were never completed for reasons such as imprecise machining techniques, the interest of too few people, and the fact that the steam power required for the devices was not readily available.
The next advance in computing came from Herman Hollerith and James Powers. They made devices that were able to read cards that information had been punched into, automatically. This advance was a huge step, because it provided memory storage capability. Companies such as IBM and Remington made improved versions of the machine that lasted for over fifty years.
ENIAC, which was thought up in 1942, was in use from 1946 to 1955. It was conceived by J. Presper Eckert and his associates. The computer was the first high-speed digital computer and was one thousand times faster than its predecessors, the relay computers. ENIAC was very bulky, taking up 1,800 square feet of floor space and containing 18,000 vacuum tubes. It was also very limited in programmability, but it was very efficient in the programs that it had been designed for.
In 1945 John von Neumann, along with the University of Pennsylvania, came up with what is known as the stored-program technique. Also, due to the increasing speed of the computer, subroutines needed to be repeated so that the computer could be kept busy. It
would also be better if instructions to the computer could be changed during a computation so that there would be a different outcome in the computation. Von Neumann fulfilled these needs by creating a command that is called a conditional control transfer. The conditional control transfer allows program sequences to be started and stopped at any time. Instruction programs were also stored together so that they could be arithmetically changed just like data. This generation of computers included ones using RAM, as well as the first commercially available computers, EDVAC and UNIVAC. These computers used punched-card or punched-tape reading devices. Also, some of the later ones were only about the size of a grand piano and contained 2,500 electron tubes, which was much smaller than ENIAC.
During the fifties and sixties the two most important advances were magnetic core memory and the transistor. These discoveries increased RAM sizes from 8,000 to 64,000 words in commercially available computers. The first supercomputers were made with this new technology. During this period successful commercial computers were made by Burroughs, IBM, Sperry-Rand, Honeywell and Control Data. These computers could now have printers, disk storage, tape storage, stored programs and memory operating systems. These computers were usually owned by industry, government and private laboratories.
The next advance came in the form of a chip. Transistors and vacuum tubes created vast amounts of heat, and this damaged the delicate internal parts of the computer. The heat problem was eliminated through quartz. The integrated circuit made in 1958 consisted of three components placed on a silicon disc that was made of quartz. As technology advanced, more and more components were fit onto individual chips, and this resulted in smaller and smaller computers. There was also an operating system created during this stage that allowed many programs to be run at once, with one central program that had the ability to monitor and coordinate computer memory.
f:\12000 essays\sciences (985)\Computer\Computers 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. The virus is made up of five parts and is in the size range of 10 nm-300 nm in
diameter. The first is the coat made up of protein that protects the virus to a point. Next
is the head that contains the genetic material for the virus. The genetic material for a virus
is DNA. The two other parts are the tail sheath and the tail fibers that are used for odd
jobs. I believe that a virus is not considered to be a living creature due to the fact it is a
parasitic reproducer. To me it is just like ripping up a piece of paper because it is still the
same thing and it isn't carrying out any other function besides reproduction. Since the
virus cannot continue to do its functions without taking from a host and being a parasite, it
is considered an obligate parasite.
2. The adult fern plant in its dominant generation (sporophyte) develops
sporangium on one side of its leaf. When meiosis is finished inside the sporangia and the
spores are completed the annulus dries out releasing the spores. The spore germinates and
grows into a prothallus which is the gametophyte generation. The antheridia and the
archegonia are developed on the bottom of the prothallus. The archegonia are at the
notch of the prothallus and the antheridia are located near the tip. Fertilization occurs
when outside moisture is present and the sperm from the antheridia swim to the eggs of
the archegonia. A zygote is formed on the prothallus and a new sporophyte grows.
4. Flowering plants have unique characteristics that help them survive. One is the
flower itself that contains the reproductive structures. The color of the flower helps
because it may attract birds and insects that spread the plant's pollen, which diversifies the
later generations of plants. Flowers also produce fruits that protect their seeds and
disperse them with the help of fruit-eating animals.
5. Fungi, Animalia, and Plantae are all believed to have evolved from Protista. All 3
of these kingdoms are eukaryotic, and their cells have a nucleus and all the other
organelles. Fungi live on organic material they digest, plants produce their own organic
material, and animals go out and find their food. Animalia are heterotrophic whereas
Plantae are photosynthetic. Fungi, which digest their food on the outside, are different
from animals, which digest their food on the inside. Plants and animals both have organ
systems, but animals have organized muscle fibers and plants do not.
8. The Gastropoda, Pelecypoda, and the Cephalopoda all have three of the
same characteristics. The first one is the visceral mass that includes internal organs like a
highly specialized digestive tract, paired kidneys, and reproductive organs. The mantle is
the second one. It is a covering that doesn't completely cover the visceral mass. The last
one is the foot, which can be used for movement, attachment, food capture, or a combination
of these. The Gastropods are the snails and slugs. They use their foot for crawling and
their mantle (shell) to protect their visceral mass. The class Pelecypoda consists of clams,
oysters, scallops, and mussels. These animals have two shells that are hinged together by
a strong muscle, and these shells protect the visceral mass. They use their foot for making
threads so they can attach to things. Cephalopods consist of octopuses, squids, and
nautiluses. These animals use their mantle cavity to squeeze water out, which causes
locomotion. The foot has evolved into tentacles around the head that are used to catch
prey. Nautiluses have an external shell, squids have a smaller but internal shell, and
octopuses lack shells entirely.
9. The word Arthropod means jointed foot, which refers to some of the features of
an arthropod: the jointed appendages, compound eyes, an exoskeleton, and a brain
with a ventral solid nerve cord. The class Crustacea has compound eyes and five pairs of
appendages, two of which are sensory antennae. Some examples are shrimp, crayfish, lobsters,
and crabs. Insecta has 900,000 species in its class. For example, grasshoppers
have compound eyes and five pairs of appendages: three pairs are legs, one of which is for
hopping, and two pairs are wings. Spiders, which belong to the class Arachnida, have six pairs
of appendages. The first pair of appendages are modified fangs and the second pair are
used for chewing. The other four are walking legs ending in claws. Spiders don't have
compound eyes; instead, they have simple eyes. More examples are scorpions, ticks, mites,
and chiggers. Two similar classes are Diplopoda and
Chilopoda because they are segmented in the same way and each segment has a pair of
walking legs, but in the Diplopoda some segments fuse together and seem to have two pairs
of legs per segment.
10.The Phylum Chordata contains creatures that would have bilateral symmetry, well
developed coelom, and segmentation. In order to be placed in this phylum they must have
had a dorsal hollow nerve cord, a dorsal supporting rod called a notochord, and gill slits
or pharyngeal pouches sometime in their life history. In the subphylum Urochordata the
only one of the three traits they carry into adulthood is the gill slits. In the tadpole
stage of their life they contained all three of these characteristics. The subphylum
Cephalochordata retains all three qualifications into adult form and has segmented bodies.
In the subphylum Vertebrata all three traits are present as usual, but the notochord is
replaced by a vertebral column.
11. In these fish the sac-like lungs were placed at the end of the fish's digestive
tract. In their case when the oxygen level in the water they were in was low they could
still collect oxygen by breathing. After time these sac-like lungs became swim bladders
that control the up and down motion of a fish.
12. The reptiles' most helpful advancement in reproduction that helped them live
on land was the use of internal fertilization and the ability to lay eggs that are protected by
shells. The shells got rid of the swimming larva stage and the eggs did everything inside
the shell. The eggs have extraembryonic membranes that protect the embryo, get rid of
wastes, and give the embryo oxygen, food, and water. Inside the shell there is a
membrane called the amnion, which is filled with fluid and is used as a pond where the
embryo develops; it keeps the embryo from drying out.
13. The three subclasses of Mammalia all have hair and mammary glands that
produce milk. Each of these subclasses also has well developed sense organs, limbs for
movement, and an enlarged brain. In the subclass Prototheria the animals lay their eggs in
a burrow and incubate them. When the young hatch they receive milk by licking it off the
modified sweat glands that are seeping milk. In the subclass Metatheria the young begin
developing inside the female but are born at a very immature age. The newborn crawl into
their mother's pouch and begin nursing. While they are nursing they continue to develop.
With the subclass Eutheria the organisms contain a placenta that exchanges maternal
blood with fetal blood. The young develop inside the mother's uterus and exchange
nutrients and wastes until they are ready to be born.
f:\12000 essays\sciences (985)\Computer\Computers 4.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers are used to track reservations for the airline industry, process billions of dollars for banks, manufacture products for industry, and conduct major transactions for businesses, because more and more people now have computers at home and at the office.
People commit computer crimes because of society's declining ethical standards more than any economic need. According to experts, gender is the only bias. The profile of today's non-professional thieves crosses all races, age groups and economic strata. Computer criminals tend to be relatively honest and in a position of trust: few would do anything to harm another human, and most do not consider their crime to be truly dishonest. Most are males: women have tended to be accomplices, though of late they are becoming more aggressive. Computer criminals tend to be "between the ages of 14-30, they are usually bright, eager, highly motivated, adventuresome, and willing to accept technical challenges."(Shannon, 16:2)
"It is tempting to liken computer criminals to other criminals, ascribing characteristics somehow different from
'normal' individuals, but that is not the case."(Sharp, 18:3) It is believed that the computer criminal "often marches to the same drum as the potential victim but follows an unanticipated path."(Blumenthal, 1:2) There is no actual profile of a computer criminal because they range from young teens to elders, from black to white, from short to tall.
Definitions of computer crime have changed over the years as the users and misusers of computers have expanded into new areas. "When computers were first introduced into businesses, computer crime was defined simply as a form of white-collar crime committed inside a computer system."(2600:Summer 92,p.13)
Some new terms have been added to the computer criminal vocabulary. "Trojan Horse is a hidden code put into a computer program. Logic bombs are implanted so that the perpetrator doesn't have to physically present himself or herself." (Phrack 12,p.43) Another form of hidden code is the "salami." The name came from the big salami loaves sold in delis years ago: small slices would be shaved off the loaves, which were then secretly returned to the shelves in the hope that no one would notice the missing portions; a salami attack likewise skims small amounts from many accounts.(Phrack 12,p.44)
Congress has been reacting to the outbreak of computer crimes. "The U.S. House Judiciary Committee approved a bipartisan computer crime bill that was expanded to make it a federal crime to hack into credit and other data bases protected by federal privacy statutes."(Markoff, B 13:1) This bill generally creates several categories of federal misdemeanors and felonies for unauthorized access to computers to obtain money, goods or services or classified information. This also applies to computers used by the federal government or used in interstate or foreign commerce, which would cover any system accessed by interstate telecommunication systems.
"Computer crime often requires more sophistications than people realize it."(Sullivan, 40:4) Many U.S. businesses have ended up in bankruptcy court unaware that they have been victimized by disgruntled employees. American businesses wishes that the computer security nightmare would vanish like a fairy tale. Information processing has grown into a gigantic industry. "It accounted for $33 billion in services in 1983, and in 1988 it was accounted to be $88 billion." (Blumenthal, B 1:2)
All this information is vulnerable to greedy employees, nosy teenagers and general carelessness, yet no one knows whether the sea of computer crimes is "only as big as the Gulf of Mexico or as huge as the North Atlantic." (Blumenthal,B 1:2) Vulnerability is likely to increase in the future. And by the turn of the century, "nearly all of the software to run computers will be bought from vendors rather than developed in houses, standardized software will make theft easier." (Carley, A 1:1)
A two-year Secret Service investigation code-named Operation Sun Devil targeted companies all over the United States and led to numerous seizures. Critics of Operation Sun Devil claim that the Secret Service and the FBI, which ran an almost identical operation, conducted unreasonable searches and seizures, disrupted the lives and livelihoods of many people, and generally conducted themselves in an unconstitutional manner. "My whole life changed because of that operation. They charged me and I had to take them to court. I have to thank 2600 and Emmanuel Goldstein for publishing my story. I owe a lot to the fellow hackers and the Electronic Frontier Foundation for coming up with the blunt of the legal fees so we could fight for our rights." (Interview with Steve Jackson, fellow hacker, who was charged in Operation Sun Devil) The case of Steve Jackson Games vs. the Secret Service has yet to come to a verdict but should very soon. The Secret Service seized all of Steve Jackson's computer materials, which he made a living on. They charged that he made games that published information on how to commit computer crimes. He was being charged with running an underground hack system. "I told them it was only a game and that I was angry and that was the way that I tell a story. I never thought Hacker [Steve Jackson's game] would cause such a problem. My biggest problem was that they seized the BBS (Bulletin Board System) and because of that I had to make drastic cuts, so we laid off eight people out of 18. If the Secret Service had just come with a subpoena we could have showed or copied every file in the building for them."(Steve Jackson Interview)
Computer professionals are grappling not only with issues of free speech and civil liberties, but also with how to educate the public and the media about the difference between on-line computer experimenters and actual criminals. They also point out that, while computer networks and the abuses committed over them are a new kind of crime, they are covered by the same laws and freedoms as any real-world domain.
"A 14-year old boy connects his home computer to a television line, and taps into the computer at his neighborhood bank and regularly transfers money into his personnel account."(2600:Spring 93,p.19) On paper and on screens a popular new mythology is growing quickly in which computer criminals are the 'Butch Cassidys' of the electronic age. "These true tales of computer capers are far from being futuristic fantasies."(2600:Spring 93:p.19) They are inspired by scores of real life cases. Computer crimes are not just crimes against the computer, but it is also against the theft of money, information, software, benefits and welfare and many more.
"With the average damage from a computer crime amounting to about $.5 million, sophisticated computer crimes can rock the industry."(Phrack 25,p.6) Computer crimes can take on many forms. Swindling or stealing of money is one of the most common computer crime. An example of this kind of crime is the Well Fargo Bank that discovered an employee was using the banks computer to embezzle $21.3 million, it is the largest U.S. electronic bank fraud on record. (Phrack 23,p.46)
Credit card scams are also a type of computer crime. This is one that frightens many people, and for good reason. A fellow computer hacker who goes by the handle of Raven is someone who uses his computer to access credit databases. In a talk that I had with him he tried to explain what he did and how he did it. He is a very intelligent person because he gained illegal access to a credit database and obtained the credit histories of local residents. He then allegedly used the residents' names and credit information to apply for 24 Mastercards and Visa cards. He used the cards to issue himself at least $40,000 in cash from a number of automatic teller machines. He was caught once, but he was only withdrawing $200, it was a minor larceny, and they couldn't prove that he was the one who did the other ones, so he was put on probation. "I was 17 and I needed money and the people in the underground taught me many things. I would not go back and not do what I did but I would try not to get caught next time. I am the leader of HTH (High Tech Hoods) and we are currently devising other ways to make money. If it weren't for my computer my life would be nothing like it is today."(Interview w/Raven)
"Finally, one of the thefts involving the computer is the theft of computer time. Most of us don't realize this as a crime, but the congress consider this as a crime."(Ball,V85) Everyday people are urged to use the computer but sometimes the use becomes excessive or improper or both. For example, at most colleges computer time is thought of as free-good students and faculty often computerizes mailing lists for their churches or fraternity organizations which might be written off as good public relations. But, use of the computers for private consulting projects without payment of the university is clearly improper.
In business it is similar. Management often looks the other way when employees play computer games or generate a Snoopy calendar. But if this becomes excessive, the employee is stealing work time. And computers can process only so many tasks at once. Although considered less severe than other computer crimes, such activities can represent a major business loss.
"While most attention is currently being given to the criminal aspects of computer abuses, it is likely that civil action will have an equally important effect on long term security problems."(Alexander, V119) The issue of computer crimes draw attention to the civil or liability aspects in computing environments. In the future there may tend to be more individual and class action suits.
CONCLUSION
Computer crimes are growing fast because the evolution of technology is fast, but the evolution of law is slow. While a variety of states have passed legislation relating to computer crime, the situation is a national problem that requires a national solution. Controls can be instituted within industries to prevent such crimes. Protection measures such as hardware identification, access control software and disconnecting critical bank applications should be devised. However, computers don't commit crimes; people do. The perpetrator's best advantage is ignorance on the part of those protecting the system. Proper internal controls reduce the opportunity for fraud.
BIBLIOGRAPHY
Alexander, Charles, "Crackdown on Computer Capers,"
Time, Feb. 8, 1982, V119.
Ball, Leslie D., "Computer Crime," Technology Review,
April 1982, V85.
Blumenthal,R. "Going Undercover in the Computer Underworld". New York Times, Jan. 26, 1993, B, 1:2.
Carley, W. "As Computers Flip, People Lose Grip in Saga of Sabatoge at Printing Firm". Wall Street Journal, Aug. 27, 1992, A, 1:1.
Carley, W. "In-House Hackers: Rigging Computers for Fraud or Malice Is Often an Inside Job". Wall Street Journal, Aug 27, 1992, A, 7:5.
Markoff, J. "Hackers Indicted on Spy Charges". New York Times, Dec. 8, 1992, B, 13:1.
Finn, Nancy and Peter, "Don't Rely on the Law to Stop Computer Crime," Computer World, Dec. 19, 1984, V18.
Phrack Magazine issues 1-46. Compiled by Knight Lightning and Phiber Optik.
Shannon, L. R. "The Happy Hacker". New York Times, Mar. 21, 1993, 7, 16:2.
Sharp, B. "The Hacker Crackdown". New York Times, Dec. 20, 1992, 7, 18:3.
Sullivan, D. "U.S. Charges Young Hackers". New York Times, Nov. 15, 1992, 1, 40:4.
2600: The Hacker Quarterly. Issues Summer 92-Spring 93. Compiled by Emmanuel Goldstein.
f:\12000 essays\sciences (985)\Computer\Computers and Marketing.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTERS AND MARKETING
Marketing is the process by which goods are sold and purchased. The aim of
marketing is to acquire, retain, and satisfy customers. Modern marketing has evolved into
a complex and diverse field. This field includes a wide variety of special functions such as
advertising, mail-order business, public relations, retailing and merchandising, sales,
market research, and pricing of goods.
Businesses, and particularly the marketing aspect of businesses, rely a great deal on the
use of computers. Computers play a significant role in inventory control, processing and
handling orders, communication between satellite companies in an organization, design and
production of goods, manufacturing, product and market analysis, advertising, producing
the company newsletter, and in some cases, complete control of company operations.
In today's extremely competitive business environment businesses are searching for
ways to improve profitability and to maintain their position in the marketplace. As
competition becomes more intense the formula for success becomes more difficult. Two
particular things have greatly aided companies in their quests to accomplish these goals.
They are the innovative software products of CAD/CAM and, last but not least, the World
Wide Web.
An important program has aided companies all over the world. Computer-aided design
and computer-aided manufacturing (CAD/CAM) is the integration of two technologies. It
has often been called the new industrial revolution. In CAD, engineers and designers use
specialized computer software to create models that represent characteristics of objects.
These models are analyzed by computer and redesigned as necessary. This allows
companies needed flexibility in studying different and daring designs without the high
costs of building and testing actual models, saving millions of dollars. In CAM, designers
and engineers use computers for planning manufacturing processes, testing finished parts,
controlling manufacturing operations, and managing entire plants. CAM is linked to CAD
through a database that is shared by design and manufacturing engineers.
The major applications of CAD/CAM are mechanical design and electronic design.
Computer-aided mechanical design is usually done with automated drafting programs that
use interactive computer graphics. Information is entered into the computer to create
basic elements such as circles, lines, and points. Elements can be rotated, mirrored,
moved, and scaled, and users can zoom in on details. Computerized drafting is quicker
and more accurate than manual drafting. It makes modifications much easier.
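The geometric operations mentioned above are simple coordinate transformations. A
minimal sketch in Python (the triangle and the numbers are arbitrary examples, not
taken from any particular CAD package) shows how points in a drawing can be rotated,
mirrored, moved, and scaled:

    import math

    def rotate(point, angle_degrees):
        """Rotate a 2-D point about the origin."""
        x, y = point
        a = math.radians(angle_degrees)
        return (x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a))

    def mirror_x(point):
        """Mirror a point across the x-axis."""
        x, y = point
        return (x, -y)

    def move(point, dx, dy):
        """Translate (move) a point."""
        x, y = point
        return (x + dx, y + dy)

    def scale(point, factor):
        """Scale a point relative to the origin, as when zooming a drawing element."""
        x, y = point
        return (x * factor, y * factor)

    # A tiny "drawing": a triangle rotated 90 degrees, doubled in size, then moved.
    triangle = [(0, 0), (4, 0), (0, 3)]
    print([move(scale(rotate(p, 90), 2.0), 10, 5) for p in triangle])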
Desktop manufacturing enables a designer to construct a model directly from data
which is stored in computer memory. These software programs help designers to consider
both function and manufacturing consequences at early stages, when designs are easily
modified.
More and more manufacturing businesses are integrating CAD/CAM with other
aspects of production, including inventory tracking, scheduling, and marketing. This idea,
known as computer-integrated manufacturing (CIM), speeds processing of orders, adds to
effective materials management, and creates considerable cost savings.
In addition to designing and manufacturing a product, a company must be effectively
able to advertise, market, and sell its product. Much of what passes for business is
nothing more than making connections with other people. What if you could pass out your
business card to thousands, maybe millions of potential clients and partners? You can,
twenty four hours a day, inexpensively and simply on the World Wide Web. Firms
communicate with their customers through various types of media. This media usually
follows passive one-to-many communication where a firm reaches many current and
potential customers through marketing efforts that allow limited forms of feedback on the
part of the customer. For several years a revolution has been developing that is
dramatically changing the traditional form of advertising and communication media. This
revolution is the Internet, a massive global network of interconnected computer networks
which has the potential to drastically change the way firms do business with their
customers.
The World Wide Web is a hypertext based information service. It provides access to
multimedia, complex documents, and databases. The Web is one of the most effective
vehicles to provide information because of its visual impact and advanced features. It can
be used as a complete presentation media for a company's corporate information or
information on all of its products and services.
The recent growth of the world wide web (WWW) has opened up new markets and
shattered boundaries to selling to a worldwide audience. For marketers the world wide
web can be used to create a client base, for product and market analysis, rapid information
access, wide scale information dissemination, rapid communication, cost-effective
document transfers, expert advice and help, recruiting new employees, peer
communications, and new business opportunities. The usefulness of the Internet or WWW
depends directly on the products or services of each business. There are different benefits
depending upon the type of business and whether you are a supplier, retailer, or
distributor. Let's examine these in more detail.
Finding new clients and new client bases is not always an easy task. This process
involves a careful market analysis, product marketing and consumer base testing. The
Internet is a ready base of several million people from all walks of life. One can easily find
new customers and clients from this massive group, provided that your presence on the
internet is known. If you could keep your customer informed of every reason why they
should do business with you, your business would definitely increase. Making business
information available is one of the most important ways to serve your customers. Before
people decide to become customers, they want to know about your company, what you
do and what you can do for them. This can be accomplished easily and inexpensively on
the World Wide Web.
Many users also do product analyses and comparisons and report their findings via the
World Wide Web. Quite frequently one can find others who may be familiar with a
product that you are currently testing. A company can get first hand reports on the
functionality of such products before spending a great deal of money. Also, the large base
of Internet users is a prime area for the distribution of surveys for an analysis of the
market for new product or service ideas. These surveys can reach millions of people and
potential clients with very little effort on the part of the surveyors. Once a product is
already marketed, you can examine the level of satisfaction that users have received from
the product. Getting customer feedback can lead to new and improved products.
Feedback will let you know what customers think of your product faster, easier and much
less expensively than any other market you may reach. For the cost of a page or two of
Web programming, you can have a crystal ball into where to position your product or
service in the marketplace.
Accessing information over the Internet is much faster on most occasions than
transmissions and transfers via fax or postal courier services. You can access information
and data from countries around the world and make interactive connections to remote
computer systems just about anywhere in the world. Electronic mail has also proved to be
an effective solution to the problem of telephone tag. Contacting others through email has
provided a unique method of communication which has the speed of telephone
conversations, yet still provides the advantages of postal mail. Email can be sent from just
about anywhere that there is an Internet service or access so that businessmen or travelers
can keep in touch with up to the minute details of the office.
Another benefit of the World Wide Web is wide scale information circulation. You can
place documents on the Internet and instantly make them accessible to millions of users
around the world. Hypertext documents provide an effective technique by which to
present information to subscribers, clients or the general public. Creating World Wide
Web documents and registering your site with larger Web sites improves the availability of
the documents to a client base larger, and cheaper, than the circulation of many major
newspapers and television media. You may not be able to use the mail, phone system
and regulation systems in all of your potential international markets. With the World Wide
Web, however, you can open up a dialogue with international markets as easily as with the
company across the street.
The Web is also more cost-effective than conventional advertising. Transferring on-
line documents via the Internet takes a minimal amount of time, saving a great deal of
money over postal or courier services which can also suffer late deliveries, losses or
damage. If a document transfer fails on the Internet, you can always try again since the
cost of the transfer is exactly the same. Current or potential clients are not lost due to late
or absent documents.
Beyond product and market analysis, there are a great number of experts on the
Internet who make their presence widely known and easily accessible. Quite often you
can get free advice and help with problems you might have from the same people whose
consulting services are sold at a high price to large organizations, magazines, and
other periodicals. Researchers and business executives alike have attested to the fact that
much of their communications over the Internet are with others in their line of research or
field of work. Communicating with peers allows the sharing of ideas, problems and
solutions among themselves. Often people find that others in their field have already
created solutions for problems similar to their own. They are able to obtain advice on
their own situations and create a solution based upon this shared knowledge.
Many businessmen and companies are continuously on the look-out for new and
innovative ideas as viable business ventures. Internet users are continuously coming up
with such new ideas because of the available research the Internet offers and also because
of the cooperative atmosphere that surrounds the internet. In addition, the Internet has
many job lists and resumes online for prospective employers. New resumes are constantly
posted to the Web to inform companies of the availability of new skills.
As competition intensifies in the business world, consumers are faced with more and
more products and services to choose from. The future of business is being decided right
now in the minds and wallets of customers. The successful business and marketing
approach utilizes everything possible to ensure that the choice the customer makes is to
choose their product or service. Computer technology is by far the most important and
impressive means by which to ensure a company's success. Computers play a significant
role in every aspect of a company's survival, from product design and manufacturing,
creating client databases, inventory control, market analysis, advertising and sales, and
even total company operations.
f:\12000 essays\sciences (985)\Computer\Computers and Society.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The decade of the 1980's saw an explosion in computer technology and computer usage that deeply changed society. Today computers are a part of everyday life; in their simplest form they are digital watches, while at their most complex they manage power grids, telephone networks, and the money of the world. Henry Grunwald, former US ambassador to Austria, best describes the computer's functions: "It enables the mind to ask questions, find answers, stockpile knowledge, and devise plans to move mountains, if not worlds." Society has embraced the computer and accepted it for its many powers, which can be used for business, education, research, and warfare.
The first mechanical calculator, a system of moving beads called the abacus, was invented in Babylonia around 500 BC. The abacus provided the fastest method of calculating until 1642, when the French scientist Pascal invented a calculator made of wheels and cogs. The concept of the modern computer was first outlined in 1833 by the British mathematician Charles Babbage. His design of an analytical engine contained all of the necessary components of a modern computer: input devices, a memory, a control unit, and output devices. Most of the actions of the analytical engine were to be done through the use of punched cards. Even though Babbage worked on the analytical engine for nearly 40 years, he never actually made a working machine.
In 1889 Herman Hollerith, an American inventor, patented a calculating machine that counted, collated, and sorted information stored on punched cards. His machine was first used to help sort statistical information for the 1890 United States census. In 1896 Hollerith founded the Tabulating Machine Company to produce similar machines. In 1924, the company changed its name to International Business Machines Corporation. IBM made punch-card office machinery that dominated business until the late 1960s, when a new generation of computers made the punch card machines obsolete.
The first fully electronic computer used vacuum tubes, and was so secret that its existence was not revealed until decades after it was built. Invented by the English mathematician Alan Turing in 1943, the Colossus was the computer that British cryptographers used to break secret German military codes. The first modern general-purpose electronic computer was ENIAC, or the Electronic Numerical Integrator and Calculator. Designed by two American engineers, John Mauchly and Presper Eckert, Jr., ENIAC was first used at the University of Pennsylvania in 1946.
The invention of the transistor in 1948 brought about a revolution in computer development; vacuum tubes were replaced by small transistors that generated little heat and functioned perfectly as switches. Another big breakthrough in computer miniaturization came in 1958, when Jack Kilby designed the first integrated circuit. It was a wafer that included transistors, resistors, and capacitors, the major components of electronic circuitry. Using less expensive silicon chips, engineers succeeded in putting more and more electronic components on each chip. Another revolution in microchip technology occurred in 1971 when the American engineer Marcian Hoff combined the basic elements of a computer on one tiny silicon chip, which he called a microprocessor. This microprocessor, the Intel 4004, and the hundreds of variations that followed are the dedicated computers that operate thousands of modern products and form the heart of almost every general-purpose electronic computer.
By the mid-1970s, microchips and microprocessors had reduced the cost of the thousands of electronic components required in a computer. The first affordable desktop computer designed specifically for personal use was called the Altair 8800, first sold in 1974. In 1977 Tandy Corporation became the first major electronics firm to produce a personal computer. Soon afterward, a company named Apple Computer, founded by Stephen Wozniak and Steven Jobs, began producing computers. IBM introduced its Personal Computer, or PC, in 1981, and as a result of competition from the makers of clones the price of personal computers fell drastically. Just recently Apple Computer allowed its computers to be cloned by competitors.
During this long period of computer evolution, business has grasped at the computer, hoping to use it to increase productivity and minimize costs. The computer has been put on assembly lines, controlling robots. In offices computers have popped up everywhere, sending information and allowing numbers to easily be processed. Two key words that apply today are downsizing and productivity. Companies hope to increase worker productivity, meaning less work is needed, which then allows for downsizing. The computer is supposed to be the magic wand that will make productivity shoot through the roof, but in some cases the computer was a waste of time and money.
Reliance Insurance is an example of computer technology falling flat on its face, wasting a great deal of money while producing little or no results. "Paper Free in 1983" was the slogan Reliance used because it had just spent millions of dollars to put computers everywhere and network them. The employees had E-mail and other programs that were to eliminate paper and increase productivity. The company chiefs sat back and waited for a boom in productivity that never arrived.
Other examples of the disappointments of computers are not hard to find. Citicorp bank lost $200 million developing a system in the 1980's that gave up to the minute updates on oil prices. Knight-Ridder tried to develop a home shopping network on the television, and lost $50 million. Wang Laboratories almost went under when they put all of their resources toward developing imaging technology that no one wanted. Ben & Jerry's ice cream put in an E-mail system and out of 200 employees less than 30% used the system. Everything attempted then is very common today; on-line services provide stock and commodities quotes, QVC is a home shopping channel on cable television, almost every picture in a magazine has been retouched with imaging technology, and even JRHS has an E-mail system that seems to be valuable.
Other corporations have seized computer technology and used it to reduce costs, but usually the human factor is lost. The McDonalds fast food chain is an example of a company that has embraced computers to help productivity and lower operating costs. The McDonalds kitchen has become a computer-timed machine: "You don't have to know how to cook, you don't have to know how to think. There's a procedure for everything and you just follow the procedure". The workers have in essence become robots controlled by the computer to achieve maximum productivity. The computer knows the procedure and alerts the worker of events in the procedure, and all the worker must do is execute what the beeper or buzzer means. With such little knowledge of the making of the food, workers have become disposable: "It takes a special kind of person to be able to move before he can think. We find people like that and use them until they quit."
McDonalds managers work even more closely with the computers that control them. The computer generates a graph of expected business and tells the manager how many people to schedule and when; all the manager does is fill in the blanks with names. McDonalds computers also keep close track of sales and expenditures: "The central office can check . . . how many Egg McMuffins were sold on Friday from 9 to 9:30 two weeks ago or two years ago, either in an entire store or at any particular register." The main thing computers do in a manual job is speed things up, since "Thinking generally slows this operation down," and for this reason computers have made manual jobs ones of extreme monotony and no creativity.
White collar jobs have remained virtually the same; computers have just helped to enhance creativity and attempted to raise productivity. E-mail, word processors, spreadsheets, and personal organization programs are widely used by white collar workers. These programs help to make impressive presentations, communicate, and keep track of everything so the worker can get more done, and therefore fewer workers are needed, dropping costs. This has not happened; over the last 30 years white collar worker productivity has remained the same, while blue collar productivity has almost quadrupled. This is due mainly to the fact that white collar workers are required to think and adapt to situations quickly, which computers at the moment are unable to do; they only follow code to give a planned response. The blue collar job requires less knowledge and skill, and so is easily replaceable by a computer.
Computers, though, have not been a failure in business; they allow information to be shared very quickly. The home office is a product of computers; people can work from home instead of going into an office. This has not become very popular due to the lack of touch between people, the loss of contact. It is the human factor that helps to make business run, the random thought that saves the day, something a computer is incapable of doing. Computers may help quicken business, but they will never replace people, only reduce their knowledge or creativity by automating the process.
Another form of computer is attempting to totally eliminate people from the picture. Expert systems are large mainframe computers that have the knowledge of an expert individual loaded into them, and they make decisions that are very complex. An expert in a field is chosen and interviewed, sometimes for over a year, about their job and how they make decisions. All of this knowledge is refined and put into a computer. Another person then enters some statistics into the finished machine and magically a large printout will come out of the machine in minutes with the answers. Expert systems are used mainly in large investing corporations, but some have been developed to help diagnose diseases. The hope is that one day a patient will lie down, a couple of sensors and probes will go over the body, and then a computer printout will have the name of the illness and the drug to cure it. Expert systems have been used very little, mainly due to their high price and because of the lack of trust in them.
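At its core, the knowledge an interviewed expert provides ends up as if-then rules. A minimal sketch in Python (the rules and facts here are invented for illustration, not taken from any real system) shows the idea:

    # Each rule pairs a set of required facts with a conclusion.
    RULES = [
        ({"fever", "cough", "congestion"}, "likely a common cold"),
        ({"fever", "stiff joints", "fatigue"}, "consider testing for influenza"),
        ({"rising earnings", "low debt"}, "candidate for long-term investment"),
    ]

    def infer(facts):
        """Return every conclusion whose required facts are all present."""
        facts = set(facts)
        return [conclusion for conditions, conclusion in RULES if conditions <= facts]

    print(infer(["fever", "cough", "congestion"]))   # ['likely a common cold']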
Computers have also reached into other places besides business: schools. Children sit in front of computers and are drilled or taught about certain subjects selected by the teacher. This method of teaching has come under fire; some people believe the computer should be a tool, not a teacher, while others ask why learn from a normal teacher when a computerized version of the best can teach. The technology of today could allow a teacher in another country to teach a class through video conferencing. The attempts to spread computer technology into the classroom have produced results and taught lessons as to how computers should be applied.
The Belridge school district in McKittrick, California was one of the most technological school districts in America. Every student had two computers, one at school and one at home, which contained many brand new teaching programs. The high school had a low powered television station that broadcast every day. The classes were small and parent involvement was high. Even with all of these wonderful things, one-third of the first grade class was below the national average in standardized tests after the first year. Parents were enraged that after all of the money spent nothing had happened, that the technology hadn't made the children become smarter, and so all of the computers were gone the next year and traditional teaching was put back in place.
Belridge is an extreme example of people expecting the computers to do magic and make the children learn faster and better, much like companies hoped to raise productivity. The children were left to learn from the computer, which they did, but nothing changed; things actually got worse. One parent realized that ". . . good teachers are the heart and soul of teaching.", because computers can only present facts and explain them to a certain extent, whereas a good teacher can explain to the student in many ways.
The US has about 2.7 million computers for 100,000 schools, a ratio of about 1 computer for every 16 students. Experts say that, "Computers work best when students are left with a goal to achieve. . ." , and students are allowed to achieve this goal with proper direction from a teacher. After many attempts in the 1980's to put computers into the classroom a Presidential Plan was drawn up:
1. Give computers to teachers before students.
2. Move them out of the labs and into classrooms.
3. One workstation at least for every two or three students.
4. Still use flashcards for practice.
5. Give teachers time to restructure around computers.
6. Expect to wait 5 to 6 years for change.
This plan was to help guide the use of computers into the classroom and maximize their ability as a learning tool. The computer will enhance the future classroom, but it cannot be expected to produce results quickly. One thing the use of computers in the classroom will help with is the fear of computers and their ability to confuse people. Early exposure to computers will help increase computer use in society years from now.
The biggest network of connected computers is broadly referred to as the internet, information superhighway or electronic highway. The internet was started by the Pentagon as a way for the military to exchange information through computers using modems. Over the years the internet has evolved into a public resource containing limitless amounts of information. The main parts of the internet are FTP (file transfer protocol), gopher, telnet, IRC (internet relay chat), and the world wide web. FTP is used to download large files from one computer to another quickly. Gopher is much like the world wide web, but without the graphical interface. Telnet is a remote computer login; this is where most of the hacking occurs. The IRC is just chat boards where people meet and type in their discussions, but IRC is becoming more involved with pictures of the people and 3-D landscapes. Besides IRC, these internet applications are becoming obsolete due to the world wide web.
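To show what an FTP transfer looks like in practice, here is a minimal Python sketch using the standard ftplib module; the host name and file path are hypothetical placeholders, not real sites:

    from ftplib import FTP

    HOST = "ftp.example.org"                        # placeholder server
    REMOTE_FILE = "pub/archive/large_dataset.zip"   # placeholder file

    ftp = FTP(HOST)
    ftp.login()          # anonymous login, as most public archives allow
    with open("large_dataset.zip", "wb") as local_file:
        # RETR streams the remote file to our callback in binary chunks.
        ftp.retrbinary(f"RETR {REMOTE_FILE}", local_file.write)
    ftp.quit()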
The most popular of the internet applications is the world wide web or WWW. It is a very graphical interface which can be easily designed and is easy to navigate. The WWW contains information on everything and anything possibly imaginable. Movies, sound bites, pictures, and other media are easily found on the WWW. It has also turned into a business venture; most large businesses have a "page" on the WWW. A "page" is a section of the WWW that has its own particular address; usually a large business will have a server with many "pages" on it. A sample internet address would be "http://www.sony.com/index.html"; the http stands for hypertext transfer protocol, or how the information will be transferred. "www.sony.com" is the server name; it is usually a mainframe computer with a T-1 up to T-3 fiber optic telephone line. The server is expensive not because of the computer but because of the telephone line; a T-1 line, which carries about 1.5 megabits of information per second, costs over $1000 a month, while a T-3 line carrying about 45 megabits per second can cost over $10,000 a month. The "index.html" is the name of the page on the server, of which the server could have hundreds.
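The pieces of that sample address can also be pulled apart programmatically; a short Python sketch using the standard urllib library illustrates the same breakdown:

    from urllib.parse import urlparse

    parts = urlparse("http://www.sony.com/index.html")

    print(parts.scheme)   # 'http' - the transfer protocol
    print(parts.netloc)   # 'www.sony.com' - the server name
    print(parts.path)     # '/index.html' - the page on that server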
The availability of all of this information has made for a virtual society. Virtual malls, virtual gambling, virtual identities, and even virtual sex have sprung up all over the internet wanting your credit card number or your First Virtual account number. First Virtual is a banking system which allows a set amount of money to be deposited at a local bank to be spent on the internet. Much of the internet has become a large mail order catalog. With all of these numbers and accounts, questions come up about the security of a person's money and private life, which aren't easily answered.
Being safe is a new craze today: protection from hackers and other people who will steal personal secrets and then rob someone blind, or protection from pornography or white supremacists or millions of other things on the internet. The recent communications bill that passed is supposed to ban pornography on the internet, but the effects aren't apparent. There are still many US "pages" with pornography that have consent pages warning the user of the pornography ahead. Even if US citizens stopped posting pornography, other nations still can, and the newsgroups are also international. Programs such as Surf Watch and Internet Nanny have become popular, blocking out pornographic sites. The main problem or beauty of the internet is the lack of a controlling party: "It has no officers, it has no policy making board or other entity, it has no rules or regulations and is not selective in terms of providing services." This is a society run by the masses that amounts to pure anarchy; nothing can be controlled or stopped. The internet is so vast that many things could be hidden and known to only a few, for a long time if not forever. The real problem with controlling the internet is self control and responsibility: don't go and don't see what you don't want to, and if that amounts to a boring time, then don't surf the net.
When speaking of computers and the internet, one person cannot go unmentioned: Bill Gates, the president of Microsoft. Microsoft has a basic monopoly on the computer world; they write the operating system and then the applications that run on that system, and when everyone catches up, they change the version. Bill Gates built the company's dominance in the early 1980s on DOS, or Disk Operating System, which was only recently made obsolete by Windows 95. Bill Gates has now ventured onto the internet and is tangling with Netscape, the company with the internet monopoly. Netscape gives away its software for free to people who want the basic version, but a version with all of the bells and whistles can be purchased. Microsoft is hard pressed to win the internet battle, but will take a sizable chunk of Netscape's business. Bill Gates will likely keep running the software industry; with his recent purchase of Lotus, a popular spreadsheet maker, he further cornered the market.
Computers are one of the most important items society possesses today. The computer will become even more deeply embedded in people's lives as the technology progresses. Businesses will become heavily dependent on it as video conferencing and working from home become increasingly feasible, so businesses will break down from large buildings into teams that communicate electronically. Classes may be taught by the best teachers possible, and software may even replace teachers, though that is highly unlikely. The internet will reach into people's lives, offering an escape from reality and an extremely vast source of information. Hopefully society will further embrace the computer as a tool, a tool that must be tended and assisted, not left to do its work alone. Even so, computers will always be present, because the dreams of today are made with computers, planned on computers, and then assembled by computers; the only thing the computer can't do is dream, at least right now.
f:\12000 essays\sciences (985)\Computer\Computers and the Disabled.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers and the Disabled
The computer age has changed many things for many people, but
for the disabled the computer has ultimately changed their entire
life. Not only has it made life exceedingly easier for all disabled
age groups, it has also made them more employable in the work
force. Previously unemployable people can now gain the self-esteem
that comes from fully supporting themselves. Computers have given
them the advantage of motion where it had not previously existed.
Disabled children now grow up knowing that they can one day be
competent adults who won't have to rely on someone else for their
every need. Windows 95 has made many interesting developments
toward making life easier for the nearly blind and for the deaf,
including on-screen text converted to synthesized speech or
Braille, and adaptive hardware that transforms a computer's
audible cues into a visual format. Computers have given the
limited back their freedom to be an active part of the human
race.
According to the Americans with Disabilities Act, any office
that has a staff of more than fifteen people now has to provide
adaptive hardware and software on its computers, so that workers
with disabilities can accomplish many tasks independently. Before
this Act was passed, the disabled were normally passed over for
jobs because of their handicap; now, however, employers can be
assured that people with disabilities can work in the workplace
just like people without disabilities. The self-esteem disabled
individuals have gained from the chance to work and be
self-supporting is immeasurable.
Computerized wheelchairs have given disabled people a whole new
perception of life. They have given them the mobility to go just
about anywhere they want to go, and the ability to explore an
unknown world and progress intellectually as well as spiritually.
Computerized vans allow many disabled people to drive, using
onboard computerized lifts to place the driver in the driver's
seat. Movement-sensitive hardware, as well as computerized
shifting devices, allows the disabled to control the van with very
little physical movement. Children with disabilities now have
access to many computerized devices that enable them to move
freely in their homes as well as outside. Battery-operated bigfoot
trucks, much like the ones we buy for our own children to play on,
have been adapted and computerized for children with special
needs. These trucks have been designed so that even some of the
most limited children can operate them with ease. With the newest
technology these children can now go to public schools with their
peers and have an active social life. They also are learning that
there is a place in this fast-paced world for them, and are
teaching the rest of us that with strength and the will to
succeed, all things are possible.
The Windows 95 help system was designed to help users with
hearing, motor, and some visual disabilities; it includes
information on the built-in access features. The controls for
these features are centralized in the Accessibility Options
Control Panel. This specialized control panel lets the user
activate and deactivate certain access features and customize
timing and feedback for a limited individual. A feature for the
disabled called StickyKeys helps a person who doesn't have much
control over hand movement to use a computer's delete command, or
any other command that normally uses both hands. StickyKeys
allows a disabled person to hit one key at a time so that they can
issue a multi-key command without pressing multiple keys
simultaneously; it also allows for mistakes by discarding any
accidentally hit key that isn't held down for the set amount of
time. To use a mouse, a person normally needs complete control of
hand movement. MouseKeys assists the disabled by letting the arrow
keys on the keyboard's numeric keypad move the mouse pointer
around the screen. ToggleKeys is another feature that aids the
disabled; it provides audio feedback for certain keystrokes,
sounding high- and low-pitched beeps that tell the current status
of the Caps Lock, Num Lock, and Scroll Lock keys. Windows 95 also
offers several features for those with limited sight, including a
high-contrast layout that can be scaled to multiple sizes for easy
reading. Its ShowSounds option lets you set a global flag that
presents sounds in a visual format.
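The idea behind StickyKeys can be sketched in a few lines of
Python. This is only a toy model of the latching behavior
described above, not Microsoft's actual implementation.

    # Toy model: modifier keys pressed one at a time are latched until a
    # normal key arrives, so a multi-key command needs no simultaneous press.
    MODIFIERS = {"CTRL", "ALT", "SHIFT"}

    def sticky_sequence(keystrokes):
        latched = set()
        commands = []
        for key in keystrokes:
            if key in MODIFIERS:
                latched.add(key)                   # latch instead of requiring a hold
            else:
                commands.append(latched | {key})   # the latch releases on a normal key
                latched = set()
        return commands

    # Pressing Ctrl, Alt, and Delete one at a time still yields the combined command.
    print(sticky_sequence(["CTRL", "ALT", "DELETE"]))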
In an age when computers seem to be used in just about every
aspect of life, the disabled have found something that makes their
lives more endurable. Considering the limitations that they face
in their everyday lives, the disabled should be commended for the
strength and will that have let them overcome, at least somewhat,
the difficulties the world provides. The computer age has brought
them many changes, and they have adapted to and excelled in them.
With Windows 95 and programs like it, the computer world has been
brought to almost everyone, even people born with limited
abilities.
f:\12000 essays\sciences (985)\Computer\Computers I dont like computers So why cant iI get a job .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers, I don't like Computers. So why can't I get a job?
Today most jobs use computers, and employees are probably going to use one on the job. A lot of people are being refused jobs because they don't have enough (if any) computer-related experience. We are moving into the technology age, where almost everything is going to be run by computers. Is this good? That doesn't matter, because people are trying to find the fastest way to get things done, and computers are the answer. One of my relatives is having trouble finding a job in a new city he just moved to. I feel sorry for him because he was not introduced to computers when he began his career. If only he had been born closer to the technology age, he might have been more receptive to computers and would therefore have more experience with them, and thus more of a chance of getting a high-paying job. However, computers are getting easier to operate as we speak. Bill Gates said that Microsoft's key goal for Windows 95 was to make the operating system easier for the average person to operate. My grandma is a key example; she was born well before there were any PCs or networked offices. She remembers the big punch-card monsters that she would have to insert cards into to give them instructions. But my point is that she was not exposed to a computer as part of everyday life, so now she is, so to speak, really behind in the computing world.
Computers back then were huge; they were usually stored in warehouses. The earlier ones used paper with holes in it to give them instructions. Later the pre-PC machines used tape cartridges to store data. Then, in 1977, the first real personal computer came along when Apple came out on the market with its Apple II. Four years later IBM came out with their version of the personal computer and entered the PC market. Apple's biggest mistake was not making MS-DOS their operating system, and they lost the market because of software. The computer was software driven then, just as it is today; the computer is just a paperweight without the software to go along with it. Microsoft's success came from getting into the market early with software for the IBM personal computer, which Apple was not doing as well as IBM and Microsoft were.
We are now in the information age, where information and computers are one, and the information age is going to be responsible for most of the world's changes. In the future PCs are going to be connected as they are now, but with greater speeds, making it possible to video teleconference with your friends and co-workers. TV will change too; it will be more interactive and direct, and you will be able to watch shows when you want to watch them. So the future of computing has a long way to go before it will slow down. I encourage everyone to become familiar with computers now rather than later, when they most need it.
f:\12000 essays\sciences (985)\Computer\Computers in modern society.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Looking around at daily life, I noticed a pattern of computer-oriented devices that make life easier and allow us to be lazier. These devices are involved in most daily activities, ranging from waking up to a computerized alarm clock to watching the news on a computerized television before going to bed. All of these computerized facets of our society help to increase our daily productivity and help us to do whatever we need to accomplish in the day. The computer age is upon us, and it will continue to grow in influence until society revolves around it daily.
In personal computers, the industry has begun to create faster machines that can store much more information. For speed, the internal microprocessor has been tweaked to run at ever higher rates. One such microprocessor is the Intel Pentium chip, the fastest commercial microprocessor on the market. In addition to internal speed, and to allow faster hook-up to the Internet, faster telephone lines, most notably digital and fiber optic lines, can be added for an extra charge to transfer data about 4 times faster than conventional phone lines (from about 28,000 bits per second up to about 128,000 bits per second). As speed increases, more memory and storage space are needed to hold the extra information. EDO RAM is a new, faster memory module that helps transfer RAM data twice as fast as normal RAM. For long-term storage of large amounts of data, hard drives have been under constant performance upgrades, and it is not uncommon to find hard drives of about 8-9 gigabytes on the market.
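The arithmetic behind that "about 4 times faster" figure is easy to check; the short Python calculation below uses a 1-megabyte file as an assumed example size.

    # Time to move a 1 MB file (an assumed example) at the two line speeds above.
    FILE_BITS = 1_000_000 * 8   # one megabyte expressed in bits

    for name, bits_per_second in [("28,000 bps modem line", 28_000),
                                  ("128,000 bps digital line", 128_000)]:
        seconds = FILE_BITS / bits_per_second
        print(f"{name}: about {seconds:.0f} seconds")
    # Roughly 286 seconds versus 63 seconds - more than four times faster.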
Along with raw technology, an ease-of-use factor has been instilled in modern-day PCs. The most notable ease-of-use enhancement is the GUI (Graphical User Interface), which allows the user to see a choice instead of reading about the choice. This is accomplished by using pictures and windows to simplify the choices. Windows 95 and the Macintosh OS both use a GUI to simplify use. Another change in technology has nearly driven the manufacturing of typewriters into extinction. Offices are increasingly turning to computers instead of typewriters because computers integrate many office tasks in one machine, most notably word processing. With the use of word processors on a computer comes the use of spell checking, which is only offered on a few typewriters.
The fastest-growing part of the computer-oriented world is the Internet. It allows users to send electronic mail (E-mail) faster and more conveniently than conventional, or "snail," mail. In addition to the text sent, the user may opt to attach a program or picture to the letter. Beyond electronic mail, the Internet is also used to provide information on almost any topic. It is a tool now common in college research because it offers millions of sources, and there is almost no limit to the information that can be found. It is not only a tool for research but also a tool for business. Businesses use it to advertise and to try to sell items on-line. These companies set up their own web sites and place the sites' addresses in their television and radio ads. Business use is not limited to advertising and selling; companies also buy from and sell to other companies faster than conventional methods allow.
Technology is all around us, and there are many practical applications of computer technology. For example, the government uses the information superhighway to verify drivers' licenses and Social Security numbers. The Internet is used by congressional committees to conduct research related to their current problems. Technology is used in automobiles to calculate the right gas-to-air mixture in fuel-injected cars. In auto garages, technology is used to align the wheels and to find electrical system problems. Another example is radio and television, two of the most important things in many lives. These devices would not be able to do what they do without the help of mini-computers that decipher the incoming signal, and on digitized radios and televisions there are computers that control the volume level. Banks are even installing technology into their operations by using Automated Teller Machines (ATMs). These machines take the place of human tellers and process transactions faster, with built-in logic control to prevent over-withdrawal. Even businesses use technology during non-business hours by having automated telephones that continue to do business long after the last person has gone home. They accomplish this by using prerecorded messages and logic control that lets a caller with a touch-tone phone get information. This increases business productivity with minimal maintenance costs. So by using computers, businesses have educated their consumers without having to speak to them in person.
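The "logic control to prevent over-withdrawal" mentioned above amounts to a simple balance check; the Python sketch below is purely illustrative and does not reflect any real bank's rules.

    # Illustrative only: decline any withdrawal larger than the current balance.
    def withdraw(balance, amount):
        if amount <= 0:
            return balance, "Invalid amount"
        if amount > balance:
            return balance, "Declined: insufficient funds"   # the over-withdrawal check
        return balance - amount, "Cash dispensed"

    balance = 120.00
    balance, message = withdraw(balance, 200.00)
    print(message, balance)   # Declined: insufficient funds 120.0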
There are many possibilities for future uses of computers to simplify daily life and enhance the life experience. The first, which is already under development, is automated highway navigation. This lets cars drive themselves and should make the roads safer. To enhance personal life, video phones will let you see who you are talking to; this technology will depend on how we develop the data transfer and deciphering capabilities that are too expensive to use now. We will be able to use almost every major household luxury on the PCs we have now: we could watch TV, listen to the radio, and talk on the telephone. These technologies are under development and are available in limited form, except for the radio, which can be used in final form now.
Society is changing rapidly. This change is attributable to the ease of use of computerized goods. These luxuries, which will become standard living tools, are creating a society in which computers will rule. We will continue to develop technology until life is as automated as it can get. Almost every daily task will be computerized, and computers will dominate the world.
f:\12000 essays\sciences (985)\Computer\Computers Related to Turf Grass Industries.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Science
Term paper
Turfgrass Science
Dennis Zedrick
October 23, 1996
The field of turfgrass science and golf course management has become very sophisticated in just the few short years that I have been involved. Much of the equipment has gone higher tech, with electric motors and more computerized technology. Many golf course superintendents are now "online via the web." If there is a question concerning a new disease or fertilizer, one can log on to the Texas A&M home page and hopefully find a solution to the problem. The technology in the computer field has also advanced irrigation technology in agriculture; irrigation systems can now be turned on with the touch of a button through an IBM or Macintosh personal computer. New computer technology will continue to make leaps and bounds for the turfgrass industry.
Ransome Industries, maker of fine turfgrass mowing equipment, has come out with the first electric mowing machine. I myself am not in favor of this, nor, I would guess, is anyone in the petroleum industry for that matter. There has been a greater demand for environmental concern along the nation's coastlines and nationwide, and most of the world's great golf courses are located along the coasts. Ransome was banking on an electric mowing machine fitting that need, but it has been slow to catch on as of late. Its main benefit is an almost silent, no-noise machine (Beard 302). Many country club members would become outraged when the superintendents sent out the greensmowers daily at 6:00 A.M.; the diesel and gasoline powered engines are noisy and would wake up many members who live along the golf course. The second benefit is no cost for gasoline or oil, and therefore no chance of a petroleum leak or spill. Their downfall lies in their initial cost: "$15,000 for a gasoline triplex mower, and $20,000 for an electric powered mower." Another real downfall is that they can only mow nine holes before they have to be charged for ten hours, rendering them useless for the rest of the day. Hopefully technology can produce an environmentally friendly machine while not putting the oil industry in a bind, "and also keep the government's hands out of the cookie jar with new environmental taxes"!
The Internet has become a very important tool for people in the turfgrass industry. At any given time a golf course superintendent can log onto various companies' home pages to learn something about their products (Beard 101). If one day I am searching for a new fairway mower, I can bypass the phone calls and written estimates and go straight to the information. Toro, Ransome, Jacobsen, and even John Deere all have home pages. You can inquire about a certain mower model, engine size, or anything else you need to know, and the page will list a price and even the shipping and handling and the salesman's commission. Perhaps the best part about the Internet is all the turfgrass-related information that is at your fingertips (Beard 120). One can access the three dominant turfgrass schools in just seconds (Beard 122); those three schools are Texas A&M, Mississippi State, and Oklahoma State. If it is the middle of the summer and there is a big tournament coming up, they can be of great help. If your putting greens start to die in spots in the heat of the summer, one could log on to the Texas A&M home page and root around for information on what type of disease might be causing it (Beard 420). They give identifying characteristics for each disease that help in a quick diagnosis of the problem, and they even offer helpful tips on what chemicals will best control the problem and how much to spray. If that's not enough, they give tips on employee management and possible job opportunities with the college.
How, I might ask, can the Internet and computer technology possibly make my future job any easier? Well, that is an easy question to answer. Toro, Rainbird, and Flowtronics PSI have found a way to make water management an easy task. Automatic irrigation systems have been around since the early seventies. First they were run off a mechanical pin-and-timer system for home lawn use; this was a very reliable system, but it lacked flexibility (Wikshire 95). Next came the automatic timer systems, which run off an electronic timer plugged into a 110-volt wall outlet. These are still in use today, and they are a very good system (Wikshire 112). Last but not least has come the water management system run from your personal Macintosh or IBM-compatible computer. The personal computer actually works as the brain for the irrigation system (Wikshire 200). You download the program into the computer and, bam, it does all the work for you. It has a water sensor located outside that tells the system to shut off if it has rained too much, or to come on if it is getting extremely dry on a hot summer day. It can also measure the amount of nitrogen, phosphorus, and potassium in the soil, if necessary, and it will test the water and tell you the amount of salt or nitrates in it. Once a watering program is started, it is also easily changed to another program if so desired (Wikshire 202).
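The decision the sensor drives can be sketched in a few lines of Python; the threshold values here are made-up examples, not the actual Rainbird or Toro logic.

    # Assumed example thresholds, not taken from any real controller.
    RAIN_SHUTOFF_INCHES = 0.25   # skip watering after this much recent rain
    DRY_SOIL_PERCENT = 20        # water when soil moisture drops below this

    def should_irrigate(recent_rain_inches, soil_moisture_percent):
        if recent_rain_inches >= RAIN_SHUTOFF_INCHES:
            return False         # "shut off if it has rained too much"
        if soil_moisture_percent < DRY_SOIL_PERCENT:
            return True          # "come on if it is getting extremely dry"
        return False

    print(should_irrigate(0.40, 35))   # False - recent rain, stay off
    print(should_irrigate(0.00, 12))   # True  - dry turf, start watering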
This has benefited the turfgrass industry in many ways. It has saved superintendents from having to come in and shut the irrigation off in the middle of the night if it starts raining hard. Most importantly, it has saved money in the labor part of the budget, keeping hourly employees occupied with other tasks rather than having to turn on individual sprinkler heads every day. The most popular programs by far are the Rainbird Vari-Time V and VI programs (Wikshire 250); these two programs are leaps and bounds above the rest.
Having knowledge of computers and computer-related programs will be very beneficial to me in the turfgrass industry. The technology will benefit me and others, from new high-tech electric mowing machines to non-hydraulic mowers. The Internet could be the most useful tool for me in my job: it will give me useful knowledge of what is going on in the world, and it could also help save me from a mistake in disease control that could cost me my job.
The computer industry has also made great strides when it comes to water conservation management. These programs can be downloaded onto your personal computer. They are great labor savers and, most of all, effective time management tools. I hope that the technology will keep advancing and make my future job as a golf course superintendent much easier.
Works Cited
Beard, James. Turf Management for Golf Courses. New York: Macmillan Publishing
Company, 1992.
Beard, James. The Science of Agronomy. New York: Macmillan Publishing Company,
1994.
Wikshire, Don, and Charles Cason. The Principles and Technology of Irrigation and
Drainage. Englewood Cliffs: Prentice-Hall Inc., 1995.
f:\12000 essays\sciences (985)\Computer\COMPUTERS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[System Detection: 02/14/96 - 17:08:21]
Parameters "f;q;g=3;s=net", InfParams "", Flags=01004a6f
SDMVer=0400.950, WinVer=0616030a, Build=00.00.0, WinFlags=00000419
LogCrash: crash log not found or invalid
Devices verified: 0
Checking for: Manual Devices
Checking for: Programmable Interrupt Controller
QueryIOMem: Caller=DETECTPIC, rcQuery=0
IO=20-21,a0-a1
Detected: *PNP0000\0000 = [1] Programmable interrupt controller
IO=20-21,a0-a1
IRQ=2
Checking for: Direct Memory Access Controller
QueryIOMem: Caller=DETECTDMA, rcQuery=0
IO=0-f,81-83,87-87,89-8b,8f-8f,c0-df
Detected: *PNP0200\0000 = [2] Direct memory access controller
IO=0-f,81-83,87-87,89-8b,8f-8f,c0-df
DMA=4
Checking for: System CMOS/Real Time Clock
QueryIOMem: Caller=DETECTCMOS, rcQuery=0
IO=70-71
Detected: *PNP0B00\0000 = [3] System CMOS/real time clock
IO=70-71
IRQ=8
Checking for: System Timer
QueryIOMem: Caller=DETECTTIMER, rcQuery=0
IO=40-43
Detected: *PNP0100\0000 = [4] System timer
IO=40-43
IRQ=0
Checking for:
f:\12000 essays\sciences (985)\Computer\ComputerScience.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Science
Even before the first computer was conceptualized, data had already been
stored on hard copy medium and used with a machine. As early as 1801, the
punched card was used as a control device for mechanical looms. One and
one-half centuries later, IBM joined punched cards to computers, encoding
binary information as patterns of small rectangular holes. Today, punch
cards are rarely used with computers. Instead, they are used for a handful
of train tickets and election ballots. Although some may find it
surprising, a computer printout is another type of hard copy medium.
Pictures, barcodes, and term papers are modern examples of data storage
that can later be retrieved using optical technology. Although they
consume physical space and require proper care, non-acidic paper
printouts can hold information for centuries. If long-term storage is not
of prime concern, magnetic medium can retain tremendous amounts of data
and consume less space than a single piece of paper.
The magnetic technology used for computer data storage is the same
technology used in the various forms of magnetic tape from audiocassette
to videocassette recorders. One of the first computer storage devices was
the magnetic tape drive. Magnetic tape is a sequential data storage
medium. To read data, a tape drive must wind through the spool of tape to
the exact location of the desired information. To write, the tape drive
encodes data sequentially on the tape. Because tape drives cannot randomly
access or write data like disk drives, and are thus much slower, they have
been replaced as the primary storage device by the hard drive. The hard
drive is composed of thin layers of rigid magnetic platters stacked on top
of one another like records in a jukebox, and the heads that read and
write data to the spinning platters resemble the arm of a record player.
Floppy disks are another common magnetic storage medium. They offer
relatively small storage capacity when compared to hard drives, but unlike
hard drives, are portable. Floppy disks are constructed of a flexible
disk covered by a thin layer of iron oxide that stores data in the form of
magnetic dots. A plastic casing protects the disk: soft for the 5.25-inch
disk, and hard for the 3.5-inch disk. Magnetic storage medium, for all
its advantages, only has a life expectancy of twenty years.
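The difference between sequential and random access can be illustrated
with a small Python sketch; the record counts are made-up numbers chosen
only to show why winding through a tape is slower than jumping straight
to a disk block.

    # Reading record 9000 from "tape" means stepping past everything before it,
    # while a disk-style structure can jump straight to the block it wants.
    tape = list(range(10_000))                        # records laid end to end
    disk = {i: f"block {i}" for i in range(10_000)}   # directly addressable blocks

    def read_from_tape(target):
        steps = 0
        for record in tape:        # wind through the spool record by record
            steps += 1
            if record == target:
                break
        return steps

    print(read_from_tape(9_000))   # 9001 records passed before the data is reached
    print(disk[9_000])             # one direct lookup, no winding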
Data can be stored on electronic medium, such as memory chips. Every
modern personal computer utilizes electronic circuits to hold data and
instructions. These devices are categorized as RAM (random access memory)
or ROM (read-only memory), and are compact, reliable, and efficient. RAM
is volatile, and is primarily used for the temporary storage of programs
that are running. ROM is non-volatile, and usually holds the basic
instruction sets a computer needs to operate. Electronic medium is
susceptible to static electricity damage and has a limited life
expectancy, but in the modern personal computer, electronic hardware
usually becomes obsolete before it fails. Optical storage medium, on the
other hand, will last indefinitely.
Optical storage is an increasingly popular method of storing data.
Optical disk drives use lasers to read and write to their medium. When
writing to an optical disk, a laser creates pits on its surface to
represent data. Areas not burned into pits by the laser are called lands.
The laser reads back the data on the optical disk by scanning for pits and
lands. There are three primary optical disk mediums available for storage:
CD-ROM (compact disc read-only memory), WORM (write once read many), and
rewritable optical disks. The CD-ROM is, by far, the most popular form of
optical disk storage; however, CD-ROMs are read-only. At the factory,
lasers are used to create a master CD-ROM, and a mold is made from the
master and used to create copies. WORM drives are used almost exclusively
for archival storage where it is important that the data cannot be changed
or erased after it is written, for example, financial record storage.
Rewritable optical disks are typically used for data backup and archiving
massive amounts of data, such as image databases.
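A deliberately simplified model of reading pits and lands is sketched
below in Python; it treats each pit as a 0 and each land as a 1, which
ignores the far more elaborate encoding and error correction a real
CD-ROM uses, but it conveys the scanning idea.

    # Toy model only: one spot on the track = one bit. Real discs are not this simple.
    track = ["pit", "land", "land", "pit", "land", "pit", "pit", "land"]

    bits = "".join("0" if spot == "pit" else "1" for spot in track)
    print(bits)            # 01101001
    print(int(bits, 2))    # 105 - the byte value those eight spots would carry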
Although there are many manufacturers of the data storage devices used in
the modern personal computer, each fits into one of four technological
classes according to the material and methods it uses to record
information. Hardcopy medium existed before the invention of the
computer, and magnetic medium is predominantly used today. Electronic
medium is used by every computer system, and is used to store instructions
or temporarily hold data. Finally, optical storage medium utilizes lasers
to read and write information to a disk that lasts indefinitely. Each
type of medium is suitable for certain functions that computer users
require. Although they use differing technologies, they all have equal
importance in the modern personal computer system.
f:\12000 essays\sciences (985)\Computer\ComputerVirus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It Is Contagious
"Traces of the Stealth_c Virus have been found in memory. Reboot to a
clean system disk before continuing with this installation..." This was the
message staring back at me from one of the computer monitors at my office.
Questions raced through my mind. "Stealth_c?" "What's a system disk?" "How
am I supposed to install anti-virus software if the computer system
already has a virus?" As a discouraging feeling of helplessness came over
me, I thought of all the people who had loaded something from disk on this
box or who had used this box to access the Internet. Because there was no
virus protection in the first place, it was going to be very difficult to
determine how many floppy disks and hard drives had been infected. I
wished I had learned about computer viruses a long time ago. What is a
computer virus, anyway? Is it a computer with a cold? A computer "virus"
is called a virus because of three distinct similarities to a biological
virus. They are these: it must have the ability to make copies of, or
replicate, itself; it must have a "host," or functional program, to
which it can attach; and it must do some kind of harm to the computer
system, or at least cause some kind of unexpected or unwanted
behavior. Sometimes computer viruses just eat up memory or display
annoying messages, but the more dangerous ones can destroy data, give
false information, or completely freeze up a computer. The Stealth_c virus
is a boot sector virus, meaning that it resides in the boot sectors of a
computer disk and loads into memory with the normal boot-up programs. The
"stealth" in the name comes from the capability of this virus to possibly
hide from anti-virus software. Virtually any media that can carry
computer data can carry a virus. Computer viruses are usually spread by
data diskettes, but can be downloaded from the Internet, private bulletin
boards, or over a local area network. This makes it extremely easy for a
virus to spread once it has infected a system. The aforementioned
Stealth_c virus was transported by the least likely avenue; it was
packaged with commercial software. This is an extremely rare occurrence,
as most software companies go to great lengths to provide "clean"
software. There is a huge commercial interest in keeping computers
virus-free. Companies stand to lose literally thousands of dollars if they
lose computer data to a virus. An immense amount of time can be lost from
more productive endeavors if someone has to check or clean each computer
and floppy diskette of the virus because, no matter what, it will continue
to replicate itself until it uses every bit of memory available. To
service this market, companies sell anti-virus software, which scans
programs, searching for viruses. If one is found, a user can "kill" it by
cleaning the file, deleting the file itself, moving the file to a disk, or
ignoring it. Ignoring a possible virus is an option provided because some of
the newer software utilizes heuristic algorithms to detect possible
viruses. This method of detection is highly effective but, because of the
sensitivity of the programs, false hits can occur. It is also very
important to keep your anti-virus software current. By some estimates,
forty to one hundred new virus programs are written every week by less
than ethical programmers. Most software companies put out new "vaccines"
every month. It is like an ongoing battle: the bad guys write a new virus
or even a new "species" of virus; the good guys get a copy from some poor
soul whose computer has been infected, and they write a vaccine. Some of
the more paranoid, or perhaps astute, have theorized that the companies
writing anti-virus software and the programmers writing viruses are one
and the same. However, the author of a computer virus means nothing to one
whose machine has lost data or has crashed due to infection. Detecting and
deleting the virus becomes the immediate action needed. This is impossible
without anti-virus software, and would be much simpler if the software were
already installed on the system. So, keep your computers "vaccinated," because it is contagious.
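The simplest of the techniques described above, signature scanning, can
be sketched in a few lines of Python. The signature bytes and virus name
below are fabricated examples; real anti-virus products rely on large
signature databases plus the heuristic checks mentioned earlier.

    # Fabricated signature for illustration only.
    SIGNATURES = {"Example.Test.Virus": bytes.fromhex("deadbeef0102")}

    def scan_bytes(data):
        """Return the names of any known signatures found in the data."""
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    clean = b"just an ordinary program image"
    infected = b"header" + bytes.fromhex("deadbeef0102") + b"payload"

    print(scan_bytes(clean))      # []
    print(scan_bytes(infected))   # ['Example.Test.Virus']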
f:\12000 essays\sciences (985)\Computer\Conduit Technology versus Communication.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
~sKePtic~ : hi green, wanna drink  ->  ~sKePtic~ : Hi visitor, want a drink?
Ann_Organa : skep, what are u the welcome party  ->  Ann_Organa : Skep, are you the welcoming party now? (Laughs out loud)
~sKePtic~ : ...maybe...trying to entertain in more ways then one :)  ->  ~sKePtic~ : (Grins) Maybe, I try to entertain in more ways than one (smiles)
neXus : You 2 married, cuz you sound like it...IMHO  ->  neXus : In my humble opinion, you sound like a married couple!
Ann_Organa : only in your dreams, skep...he-he  ->  Ann_Organa : Only in your dreams, Skep (laughs)
~sKePtic~ : :(  ->  ~sKePtic~ : (Frowns, in disappointment)
neXus : BRB, email...  ->  neXus : Be right back, I have email to check.
neXus dropped.  ->  neXus has left the chat room.
~sKePtic~ : Boring now.. :-/  ->  ~sKePtic~ : It seems boring now (has a wry facial expression)
Ann_Organa : one more drink for the night..  ->  Ann_Organa : One more drink for the night, please.
~sKePtic~ : BEER.WAV sent <>  ->  ~sKePtic~ : (a sound of beer being poured is heard; smiles happily)
Ann_Organa : THX  ->  Ann_Organa : Thank you!
* * *
Old English? Nope. Shakespearean dialect? Not exactly. Foreign language? Not really. Ebonics? Could be. English? Somewhat. What are the true meanings behind the symbols above? To many it is a form of communication, where symbols reflect expressions and spelling does not count. It is a dialect that continues to grow in the technological corners of society, developed and controlled by a machine we so commonly call "the computer." Has the way we communicate been degraded to the acronyms of the Internet? Or is a new language gradually developing, one held true by a younger generation living by the standards of a machine? Language has usually followed the norm of its time, expanding to accommodate new terms and slang expressions. Yet time now flows at an accelerated rate, as our own dialects are challenged by technology's swift momentum. In the past, our way of communicating kept technology in check. Now we find our very own dialect, and ourselves, bending to the rules of technology.
In order to comprehend how technology creates variation within language, we must first understand how languages spoken in the United States progressively become linguistically diverse. All languages have both dialectical variations and registral variations. These variations, or dialects, can differ in lexicon, phonology, and/or syntax from the Standard Language that we often think of as the "Correct Language", although they are not necessarily less proper than, say, "Standard English". It depends on where, by whom,
and in what situation the dialect is used as to whether or not it is appropriate. Before computers, only factors of location, ethnicity, education, and age heavily influenced language. Most people are familiar with regional dialects, such as Boston, Brooklyn, or Southern. These types of variation usually occur because of immigration and settlement patterns. People with the same social class, education, and occupation tend to seek out others like themselves. While occupations often produce their own jargons, a person's occupation will also determine what style of speech is used; a lawyer and a laborer would not be likely to use the same dialect on the job. Likewise, a person with little education is not likely to use the same style of speech as a college professor. Customarily, those working together develop a certain, direct dialect in which they can communicate and ultimately understand each other. Ethnicity is also a contributing factor that produces language variation, particularly among immigrants. The rather widespread survival of dialects such as Ebonics and Chicano English seems to stem from the social isolation of the speakers (discrimination, segregation), which tends to make the variation more obvious. Furthermore, age factors into language variation in two ways. First, there are generational differences: as the younger members of a speech community adopt new variants, the older members may not be affected, opting instead to use their traditional dialects. The aged populace will communicate with the words they learned decades ago, while the younger generation communicates on the street using slang and catchphrases. The second way age produces change is over time, corresponding with the various stages of an individual's life. This is particularly evident in teen slang: while this kind of slang does not generally hold over from one generation to the next, the teens who used it generally do not carry it into middle age, either.
Technology resembles our teen generation in that both continue to grow. The computer has developed into a communication conduit, casually learning to transmit and acquire human conceptions through the services and programs it can offer. Feelings of humor, excitement, and sadness have been captured in a plastic keyboard containing a jumbled alphabet. Gradually, as technology expands, more
terms are thrust upon the open public. The advent of IRC (Internet Relay Chat), the Internet, and chat rooms brings forth a new way for people to communicate. Without warning, laughter can be generated by a seemingly infinite number of acronym combinations. Today, there are over 513 different characters describing human feelings, emotions, and ideas in the computer age. As seen in the example chat session, feelings of sorrow, laughter, and happiness have been captured by the symbols LOL, BRB, IMHO, and THX.
However, with the absence of any visual or aural communication, these electronic characters slowly strip away the true emotions usually conveyed by body movement and gestures. Many emotions can be interpreted incorrectly, and one can gain a totally different image of what is implied. Take the feelings of laughter and love. Both are compressed into the same three-letter abbreviation: LOL. Yet the two expressions are nothing alike: one is defined as Lots of Laughter, with the flip side being Lots of Love. How can one interpret the difference? Only with knowledge of a certain situation or location can true intentions be known. Interpretations like this become difficult in real life for some, while others find a knack for it in chat sessions.
Far from producing the stereotypical computer user who shuns all social contact and withdraws to a room to play with the computer, IRC allows a wide range of social interaction on a level unthinkable in the past. Some users stay on the computer an average of five hours a night - not to play video games, but to talk with other users all over the world. Through chatting (or real-time conferencing), friends meet over the E-mail lines. Some members fall in love without ever meeting face to face. Some E-mail subscribers have even gone through "virtual marriages" while maintaining a traditional family life on the other side of the computer monitor.
Because E-mail systems are text based, communication between people who do not know or cannot see each other sometimes can be difficult. Fortunately, a system of keyboard characters has been developed to give added meaning to messages and clear up misunderstandings. Named "emoticons" or
"smileys," these characteristics are used to convey pleasure, sadness, or sarcasm. Message writers use hundreds of types of smileys. Letters in place of long phrases called shorthands, also speed up communication. Some of the most widely used smiley and shorthand symbols are included in the conversation on the first page.
With the introduction of these characters comes an ever-growing threat to our language. Even now, we can slowly see the rippling effects. The computer has altered our language, and that altered language now rules the network lines of communication, held fast together by the very computers that created it.
"What is a my name now? The 'Net has stripped away our identities as human beings..."
A typical chat session is filled with members greeting each other, making trivial conversation, and flirting. Members send messages that include aliases they have created, BB jargon, typographical errors, smileys, and shorthands. For example, take our above chat session, within "The StarSide Bar and Grill." Conversation is light, as a friendly bartender always wants to give you a free, VR alcoholic drink. (Perhaps these E-mail bars are the answer to the drinking and driving problem.) In the absence of aural or visual communication, smileys are necessary to convey feelings and emotions. When visually based VR systems become common, such a symbol set may no longer be needed. Yet chat sessions have been slowly threatening our identity as people and questioning our human morality. Through personal interviews, I gained a deeper insight into this new form of interaction. "What's a name now? My true identity has been ripped apart from me, and I feel helpless. Has my enriched heritage name been degraded to the nicknames of the web, like "Q-BarfMan" and "-CornHulio-"?" Another told me, "To chat is like practice - for the real world. Here you can talk to girls without getting nervous or embarrassed, or start a fight without ever getting a scratch." The decency of chat rooms is still being disputed and argued today. And, like technology, the debate will never rest.
However, text-based systems will always be an important form of E-mail communication. For example, with an alias, no one can tell if the message writer on the other end of the line is male, female, Asian, Anglo, young, old, wheelchair-bound, or deaf. Consequently, people in an E-mail world are judged not by their physical attributes but by the content of their messages.
"...because of e-mail and chat, people haven't picked up a pencil in years..." (Heslop 401)
In his essay "Return to Sender," Brent Heslop argues that E-mail is more advantageous to the individual than one may think. "With the Internet, E-mail has become the ultimate convenience," (402) Heslop writes. The arguments are presented through the eyes of businessmen: the speed and relative cheapness of E-mail offer more opportunities than lifting a pen. "E-mail programs enable users to send attachments of other documents, and transmit them in a fraction of the time it would take a courier service or U.S. postal system to deliver them." (Heslop 401) Because of the speed, reliability, and efficiency of E-mail, the postal services have lost thousands of dollars, their delivery system has been ridiculed, and letters have been retitled snail mail. However, downsides do occur. What happens when network servers go down and fail to deliver documents? Have we come to depend more on keyboards than on pen in hand to deliver information? And what is to become of the beauty of poetry? Can the same rhetorical sense of Shakespeare and Yeats be understood in the acronyms and abbreviations of Internet lingo? Again, only the advancement of a single machine can determine how we will deliver information to each other in the near future.
In his science fiction novels, William Gibson uses the word cyberspace to describe the ethereal world of the electronic highway where the unusual and unlimited communication links are available. Space on the electronic highway comprises not asphalt or concrete, but electricity and light. Writer John Perry Barlow describes cyberspace as having:
...a lot in common with the 19th Century West. It is vast, unmapped, culturally, and legally ambiguous, verbally terse (unless you happen to be a court stenographer), hard to get around in, and up for grabs...In this silent world, all conversation is typed. To enter it, one forsakes both body and place and becomes a thing of words alone....It is, of course, a perfect breeding ground for both outlaws and new ideas. (South 63)
The Internet has become our brave new world. It has succeeded in fabricating a new environment of language, emotion, and expression. Every day it threatens the existence of our language and challenges our values as human beings. Our present world as we know it is being transformed, slowly sucked into the virtual worlds originated by computers and their creators. The Internet has become a secondary world in which we must talk to others without truly seeing them, and speak a peculiar dialect consisting of smileys, acronyms, and shorthand. Ironically, we have not done anything to change this, until now.
"The Internet has changed my life - I no longer have one..."
It is estimated that an alteration in our language occurs every minute. The Internet is the new frontier that pushes additional words into existence. However, many want to stop this rapid succession of wordplay, including the government. Those against it want to transform the Internet into a grammatically correct world of communication. Presently, the Internet is under scrupulous observation for indecent material by not only the government but the nation's communities as well. For me this theme is a complex one, and a tad ironic. Agreed, there are many X-rated sites on the Internet that can be easily accessible to children, but wouldn't government control of a system based on information be an infringement of the First Amendment? It seems we have placed ourselves between a rock and a hard place. My explanation, part argument and part solution, narrows down to the virtue of the user. The morality and values of the individual at the keyboard should ultimately decide what that individual views. For whose
right is it to control what you want to view on the Internet? Is it mine? Yours? Someone else's? Or Uncle Sam's? Even today I was placed on the hot seat when I entered an argument with someone passionately fighting for the Decency Act (the act under which the government would have the right to abolish what it feels is indecent on the web). First we disputed what the true definitions of indecency and decency were, and how the government and CDA officials may hold a different meaning of what is decent and what is not. Surely, an adult's ideas of what is decent differ considerably from a 16-year-old teenager's. "The Internet may have some good aspects, but as a whole it is terrible. It is the number one method that child molesters find their victims, and one of the number one causes in the kidnapping of children exposed on the web." I shot back with, "How do you know that this is true, do you have facts? Do you surf the 'Net daily looking for child molesting sites, and if so, how decent is that? Do you have any proof, and if so, where? If you got it off the 'Net, how decent is it, and if you said the majority of the Internet is bad, how do I know your data is reliable?" From her, all I got was silence. The next thing you know I'm talking to an empty line, as her phone hits the receiver with disgust and anger. It will become an interesting battle, and an issue that will be long talked about in the future.
Ironically, those fighting for the Decency Act have failed to see the obvious. With all the fighting circulated around pictures (such as pornography), 'Net frauds, and sites that aim to scam, we have neglected the changing language and its diversity found online. We have heard the expression "An image can be worth a thousand words," but can that apply to those placed on the Internet, especially the unacceptable ones? Why are we only fighting the images? Should we not also fight for our language, too? If we want to stop the spread of variance, and stop the acronyms and smileys, shouldn't we fight for a language decency act first? Why have we become so blind? The injustices do not just lie in the images, but in the text as well. In order to stop change, the Internet must be reviewed as a whole, not just in pieces. I am not implying that English professors should swamp Congress in order to change the written laws of the Internet. If we want to stop the torment
that language receives from technology, we should first make the text decent enough to understand. Then we could make decent sense in communicating to each other what is appropriate in online images. Only then will language progress with technology: as a joint effort, not a cat-and-mouse game.
Technology will always continue to push communication to the edge. For some, the edge is where they have to be. We find ourselves bending to the rules of communication in order to stay in constant contact with friends and loved ones, and to keep up with current news. For us to keep ahead of time itself, we have to play by the rules of technology, regardless of age, location, or ethnic background. But how can we eliminate the problems technology poses for communication? The only answer is to wait, and let the patience of time go to work. Eventually we may lose this dialect as years go by, as younger generations continue to introduce new terms.
If history repeats itself, variations in our languages may create new dialects that will replace our current ones, for the better, or worse.
Discussion Page
I must say, I really did enjoy this paper. Not only were the writing and research fun, they were also exciting. How many times can you look back on history, only to find out that the answer to the future of communication may actually lie in the past? What I tried to show was a two-sided argument on this theme, especially as the Internet faces constant gridlock now that official hearings have begun on its decency as an information service provider. Since I, like others, am heavily affected by computers and the Internet, I wanted to share some of my personal thoughts and opinions with the reader; I only hope that this gives the paper more flavor and does not distract from the main issue.
Even though there was excitement, this was also one of the most perplexing papers that I have worked on. I could have easily rambled on and on and basically made this paper around 20 pages! So many tangents opened up along the way! But I wanted to show you (the reader) the hardcore facts, and the strength of the evidence used to back them up. What became so perplexing was the way language intertwines with technology. It was as if I had one huge jigsaw puzzle spread out on a table, and slowly I had to piece each part together to make the whole picture become visible and clear.
I must say (and question), on a personal level, that technology does have its benefits, yet has it done more evil to society than good? If we zip to the past, we can see that it was technology that created the gun, the atomic bomb, and the plane that was used to carry it. And today, we build even more deadly machines using even more sophisticated technology. These new weapons of destruction, created in the hands of technology, now have a better kill ratio and better efficiency at destroying their prey.
Ironically, technology can affect us on the subtle, personal levels, too. As I type this, there is a little assistant modeled after Albert Einstein in the lower right-hand corner. With my technology, I have upgraded
to the newest version of my word processor, and now this little assistant guy pops up. He just sits there: staring, looking at his clock, yawning at times, and once in a while falling asleep when all is well. When a spelling error occurs he suddenly wakes up, asking me whether he should correct it or not. Is this some sort of sick joke? Have humans become so terrible and lazy that we cannot remember how to spell, so now we have to have this "virtual" assistant attend to our own errors? He stares at me now, and I wonder if he is aware that I am typing about him. It is almost scary when you think about it.
So, to stop myself from making this discussion page a whole other essay, I'm going to stop here. I hope that I have sparked ideas and new horizons in you, as this paper did for me while researching and ultimately creating it. And in one last word, I want to comment on the cover page picture. Okay, so it might be a bit obscene, but it truly shows our eventual link and bond to machines and to computers. Besides, you'll never guess where I got it. (The Internet, of course.)
f:\12000 essays\sciences (985)\Computer\Coping with Computers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CIS 101
DECEMBER 20, 1996 PROF. GARTNER
COPING WITH COMPUTERS
While the twentieth century has proven to be a technological revolution, there has not been a single development with as much impact on our day-to-day lives as the computer. For many, the development of the modern computer has provided more widespread business opportunities, greater production efficiency, and greater convenience at both work and home than any other innovation.
Many of the degrees earned today did not exist twenty years ago. Many computer science degrees are based on technologies that had not even been developed until quite recently. The resulting situation is a workforce that has been caught with its 'pants down.' Many of the senior members of this workforce are at a disadvantage when it comes to competing with newer college graduates in today's computer world. This article deals with the feelings of one particular person in this position.
Linda Ellerbee, a journalist and author, owns a television production company. She also has her own column in Windows magazine. Her experiences with modern computer technologies range from the terminals of the 1970s all the way through to today's Internet and e-mail.
One of her first experiences with a computer involved sending a message over the AP news wire. As it turns out, she expressed her candid opinion on some very sensitive topics at the time, including but not limited to the Vietnam War. Consequently, the AP was not amused with the message and she was fired. At the time, this incident was popular enough to make it into Newsweek magazine.
Later on, she moved into television as a reporter, but now owns her own production company, Lucky Duck Productions. Here, she realized that computers act as the driving force in a technologically based industry. She also realized that the younger generations are certainly more comfortable and at home with personal computers.
While running her production company, she tells of her experience with her favorite 'ghost employee.' In her efforts to find a system administrator, she was referred to Columbia University's Center for Telecommunication Research. There, she negotiated a salary via e-mail, and whenever a system needs to be set up, the ghost does it over the Internet. Of course, the bill is sent by e-mail as well. To this day, she still has never seen the system administrator.
Despite her negative or unusual experiences with the technological revolution, Ellerbee admits that she does appreciate the technology that she and her office use. She says that she has made peace with technology, and I would have to say that her adaptation to this new system of operating is very admirable. Unfortunately, not everybody in Ellerbee's position is as adaptive to this type of change as she was. However, with children working with computers early in grade school, it is doubtful that many upcoming professionals will suffer from computer-phobia as so many do today.
f:\12000 essays\sciences (985)\Computer\CYBER CHIPS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What is a V-chip? This term has become a buzzword in any discussion involving
telecommunications regulation and television ratings, but not many reports define the
new technology in its fullest form. A basic definition of the V-chip is: a microprocessor
that can decipher information sent in the vertical blanking interval of the NTSC signal,
specifically for the control of violent or controversial subject matter. Yet the scope of
the new chip is much greater than any working definition can encompass. A discussion of the
V-chip must include a consideration of the technical and ethical issues, in addition to
an examination of the constitutionality of any law that might concern standards set by the US
government. In the space provided for this essay, however, the focus will be the technical
aspects and costs of the new chip. It is impossible to assume in general that the V-chip
will solve the violence problem of broadcast television, or that adding this little device
to every set will be a First Amendment infringement. We can, however, find clues by
examining the cold facts of broadcast television and the impact of a mandatory regulation
on that free broadcast. One definition of the V-chip, offered by Al Marquis of Zilog
Technology, is: "Utilizing the EIA's Recommended Practice for Line 21 Data
Service (EIA-608) specification, these chips decode EDS (Extended Data Services) program
ratings, compare these ratings to viewer standards, and can be programmed to take a variety
of actions, including complete blanking of programs." Neither the FCC nor Capitol Hill has
set any standards for V-chip technology; this has allowed many different companies to
construct chips that are similar yet not identical, and possibly not compatible. Each chip
has advantages and disadvantages for the ratings system soon to be developed. For example,
some units, such as VCRs and the Zilog product, use on-screen programming, while others are
considering set-top options. Also, different companies are using different methods of
parental control over the chip.
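To make the decode-and-compare step in Marquis's definition concrete, here is a minimal sketch in Python of the kind of comparison a V-chip performs once a program rating has been extracted from line 21. The rating labels, threshold scheme, and function names are illustrative assumptions, not any manufacturer's actual design.

```python
# Minimal sketch of V-chip-style blocking logic (illustrative only; the rating
# labels are placeholders, since the broadcast ratings system was still being
# developed when this was written).

RATING_LEVELS = {"all-ages": 0, "caution": 1, "mature": 2, "adult": 3}

def should_blank(program_rating, household_limit):
    """Blank the program if its decoded EDS rating exceeds the household setting."""
    return RATING_LEVELS[program_rating] > RATING_LEVELS[household_limit]

if __name__ == "__main__":
    # A set programmed to allow "caution" blanks a "mature" program.
    print(should_blank("mature", "caution"))    # True  -> blank the program
    print(should_blank("all-ages", "caution"))  # False -> display normally
```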
Another problem that these new devices may create when included in every television is
space. The NTSC signal includes extra information space known as the subcarrier and the
vertical blanking interval. As explained in the quotation from Mr. Marquis, the V-chip will
use a certain section of this space to send simple rating numbers and points that will be
compared to the personality settings in the chip. Many new technologies are being developed
for smart TV and data broadcast on this part of the NTSC signal. Essentially, the V-chip will
severely limit the bandwidth available for high-performance transmission of data on the NTSC
signal.
There is also a cost to this new technology, which will be passed on to consumers.
Estimates are that each chip will cost six dollars wholesale and must be designed into the
television's logic. The V-chip could easily push the price of televisions up by twenty-five
or more dollars during the first years of production. The much simpler solution of set-top
boxes allows control for those who need it and lets consumers who don't save their money
and use new data technology. Another cost will almost certainly be levied on television
advertisers for the upgrade of transmitting equipment. Whether the V-chip encoding signal is
added upstream of the transmitter or directly into uplink units and other equipment intended
for broadcast, this cost will have to be compensated for in advertising sales and prices.
The V-chip regulation may also require another staff employee at most stations to
effectively rate locally aired programs and events. So far, these three questions have been
addressed in only minimal detail. Most debate has focused upon the new rating system and its
implementation. Though equally important, that debate doesn't deal with the ground-floor
concerns of the television producing and broadcasting industries. Now, as members of the
industry, we must hold our breath until either the federal government knocks the wind from
free broadcast with mandatory ratings devices, or allows natural regulation to continue.
f:\12000 essays\sciences (985)\Computer\Cyber rights.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cyberspace and the American Dream: A Magna Carta for the Knowledge Age
Release 1.2, August 22, 1994
This statement represents the cumulative wisdom and innovation of many dozens of people. It is based primarily on the thoughts of four "co-authors": Ms. Esther Dyson; Mr. George Gilder; Dr. George Keyworth; and Dr. Alvin Toffler. This release 1.2 has the final "imprimatur" of no one. In the spirit of the age: It is copyrighted solely for the purpose of preventing someone else from doing so. If you have it, you can use it any way you want. However, major passages are from works copyrighted individually by the authors, used here by permission; these will be duly acknowledged in release 2.0. It is a living document. Release 2.0 will be released in October 1994. We hope you'll use it to tell us how to make it better. Do so by:
o Sending E-Mail to MAIL@PFF.ORG
o Faxing 202/484-9326 or calling 202/484-2312
o Sending POM (plain old mail) to 1301 K Street Suite 650 West, Washington, DC 20005
(The Progress & Freedom Foundation is a not-for-profit research and educational organization dedicated to creating a positive vision of the future founded in the historic principles of the American idea.)
Preamble
The central event of the 20th century is the overthrow of matter. In technology, economics, and the politics of nations, wealth -- in the form of physical resources -- has been losing value and significance. The powers of mind are everywhere ascendant over the brute force of things.
In a First Wave economy, land and farm labor are the main "factors of production." In a Second Wave economy, the land remains valuable while the "labor" becomes massified around machines and larger industries. In a Third Wave economy, the central resource -- a single word broadly encompassing data, information, images, symbols, culture, ideology, and values -- is actionable knowledge.
The industrial age is not fully over. In fact, classic Second Wave sectors (oil, steel, auto-production) have learned how to benefit from Third Wave technological breakthroughs -- just as the First Wave's agricultural productivity benefited exponentially from the Second Wave's farm-mechanization.
But the Third Wave, and the Knowledge Age it has opened, will not deliver on its potential unless it adds social and political dominance to its accelerating technological and economic strength. This means repealing Second Wave laws and retiring Second Wave attitudes. It also gives to leaders of the advanced democracies a special responsibility -- to facilitate, hasten, and explain the transition.
As humankind explores this new "electronic frontier" of knowledge, it must confront again the most profound questions of how to organize itself for the common good. The meaning of freedom, structures of self-government, definition of property, nature of competition, conditions for cooperation, sense of community and nature of progress will each be redefined for the Knowledge Age -- just as they were redefined for a new age of industry some 250 years ago.
What our 20th-century countrymen came to think of as the "American dream," and what resonant thinkers referred to as "the promise of American life" or "the American Idea," emerged from the turmoil of 19th-century industrialization. Now it's our turn: The knowledge revolution, and the Third Wave of historical change it powers, summon us to renew the dream and enhance the promise.
The Nature of Cyberspace
The Internet -- the huge (2.2 million computers), global (135 countries), rapidly growing (10-15% a month) network that has captured the American imagination -- is only a tiny part of cyberspace. So just what is cyberspace?
More ecosystem than machine, cyberspace is a bioelectronic environment that is literally universal: It exists everywhere there are telephone wires, coaxial cables, fiber-optic lines or electromagnetic waves.
This environment is "inhabited" by knowledge, including incorrect ideas, existing in electronic form. It is connected to the physical environment by portals which allow people to see what's inside, to put knowledge in, to alter it, and to take knowledge out. Some of these portals are one-way (e.g. television receivers and television transmitters); others are two-way (e.g. telephones, computer modems).
Most of the knowledge in cyberspace lives the most temporary (or so we think) existence: Your voice, on a telephone wire or microwave, travels through space at the speed of light, reaches the ear of your listener, and is gone forever.
But people are increasingly building cyberspatial "warehouses" of data, knowledge, information and misinformation in digital form, the ones and zeros of binary computer code. The storehouses themselves display a physical form (discs, tapes, CD-ROMs) -- but what they contain is accessible only to those with the right kind of portal and the right kind of key.
The key is software, a special form of electronic knowledge that allows people to navigate through the cyberspace environment and make its contents understandable to the human senses in the form of written language, pictures and sound.
People are adding to cyberspace -- creating it, defining it, expanding it -- at a rate that is already explosive and getting faster. Faster computers, cheaper means of electronic storage, improved software and more capable communications channels (satellites, fiber-optic lines) -- each of these factors independently adds to cyberspace. But the real explosion comes from the combination of all of them, working together in ways we still do not understand.
The bioelectronic frontier is an appropriate metaphor for what is happening in cyberspace, calling to mind as it does the spirit of invention and discovery that led ancient mariners to explore the world, generations of pioneers to tame the American continent and, more recently, to man's first exploration of outer space.
But the exploration of cyberspace brings both greater opportunity, and in some ways more difficult challenges, than any previous human adventure.
Cyberspace is the land of knowledge, and the exploration of that land can be a civilization's truest, highest calling. The opportunity is now before us to empower every person to pursue that calling in his or her own way.
The challenge is as daunting as the opportunity is great. The Third Wave has profound implications for the nature and meaning of property, of the marketplace, of community and of individual freedom. As it emerges, it shapes new codes of behavior that move each organism and institution -- family, neighborhood, church group, company, government, nation -- inexorably beyond standardization and centralization, as well as beyond the materialist's obsession with energy, money and control.
Turning the economics of mass-production inside out, new information technologies are driving the financial costs of diversity -- both product and personal -- down toward zero, "demassifying" our institutions and our culture. Accelerating demassification creates the potential for vastly increased human freedom.
It also spells the death of the central institutional paradigm of modern life, the bureaucratic organization. (Governments, including the American government, are the last great redoubt of bureaucratic power on the face of the planet, and for them the coming change will be profound and probably traumatic.)
In this context, the one metaphor that is perhaps least helpful in thinking about cyberspace is -- unhappily -- the one that has gained the most currency: The Information Superhighway. Can you imagine a phrase less descriptive of the nature of cyberspace, or more misleading in thinking about its implications? Consider the following set of polarities:
Information Superhighway / Cyberspace
Limited Matter / Unlimited Knowledge
Centralized / Decentralized
Moving on a grid / Moving in space
Government ownership / A vast array of ownerships
Bureaucracy / Empowerment
Efficient but not hospitable / Hospitable if you customize it
Withstand the elements / Flow, float and fine-tune
Unions and contractors / Associations and volunteers
Liberation from First Wave / Liberation from Second Wave
Culmination of Second Wave / Riding the Third Wave
"The highway analogy is all wrong," explained Peter Huber in Forbes this spring, "for reasons rooted in basic economics. Solid things obey immutable laws of conservation -- what goes south on the highway must go back north, or you end up with a mountain of cars in Miami. By the same token, production and consumption must balance. The average Joe can consume only as much wheat as the average Jane can grow. Information is completely different. It can be replicated at almost no cost -- so every individual can (in theory) consume society's entire output. Rich and poor alike, we all run information deficits. We all take in more than we put out."
The Nature and Ownership of Property
Clear and enforceable property rights are essential for markets to work. Defining them is a central function of government. Most of us have "known" that for a long time. But to create the new cyberspace environment is to create new property -- that is, new means of creating goods (including ideas) that serve people.
The property that makes up cyberspace comes in several forms: Wires, coaxial cable, computers and other "hardware"; the electromagnetic spectrum; and "intellectual property" -- the knowledge that dwells in and defines cyberspace.
In each of these areas, two questions must be answered. First, what does "ownership" mean? What is the nature of the property itself, and what does it mean to own it? Second, once we understand what ownership means, who is the owner? At the level of first principles, should ownership be public (i.e. government) or private (i.e. individuals)?
The answers to these two questions will set the basic terms upon which America and the world will enter the Third Wave. For the most part, however, these questions are not yet even being asked. Instead, at least in America, governments are attempting to take Second Wave concepts of property and ownership and apply them to the Third Wave. Or they are ignoring the problem altogether.
For example, a great deal of attention has been focused recently on the nature of "intellectual property" -- i.e. the fact that knowledge is what economists call a "public good," and thus requires special treatment in the form of copyright and patent protection.
Major changes in U.S. copyright and patent law during the past two decades have broadened these protections to incorporate "electronic property." In essence, these reforms have attempted to take a body of law that originated in the 15th century, with Gutenberg's invention of the printing press, and apply it to the electronically stored and transmitted knowledge of the Third Wave.
A more sophisticated approach starts with recognizing how the Third Wave has fundamentally altered the nature of knowledge as a "good," and that the operative effect is not technology per se (the shift from printed books to electronic storage and retrieval systems), but rather the shift from a mass-production, mass-media, mass-culture civilization to a demassified civilization.
The big change, in other words, is the demassification of actionable knowledge.
The dominant form of new knowledge in the Third Wave is perishable, transient, customized knowledge: The right information, combined with the right software and presentation, at precisely the right time. Unlike the mass knowledge of the Second Wave -- "public good" knowledge that was useful to everyone because most people's information needs were standardized -- Third Wave customized knowledge is by nature a private good.
If this analysis is correct, copyright and patent protection of knowledge (or at least many forms of it) may no longer be necessary. In fact, the marketplace may already be creating vehicles to compensate creators of customized knowledge outside the cumbersome copyright/patent process, as suggested last year by John Perry Barlow:
"One existing model for the future conveyance of intellectual property is real-time performance, a medium currently used only in theater, music, lectures, stand-up comedy and pedagogy. I believe the concept of performance will expand to include most of the information economy, from multi-casted soap operas to stock analysis. In these instances, commercial exchange will be more like ticket sales to a continuous show than the purchase of discrete bundles of that which is being shown. The other model, of course, is service. The entire professional class -- doctors, lawyers, consultants, architects, etc. -- are already being paid directly for their intellectual property. Who needs copyright when you're on a retainer?"
Copyright, patent and intellectual property represent only a few of the "rights" issues now at hand. Here are some of the others:
o Ownership of the electromagnetic spectrum, traditionally considered to be "public property," is now being "auctioned" by the Federal Communications Commission to private companies. Or is it? Is the very limited "bundle of rights" sold in those auctions really property, or more in the nature of a use permit -- the right to use a part of the spectrum for a limited time, for limited purposes? In either case, are the rights being auctioned defined in a way that makes technological sense?
o Ownership over the infrastructure of wires, coaxial cable and fiber-optic lines that are such prominent features in the geography of cyberspace is today much less clear than might be imagined. Regulation, especially price regulation, of this property can be tantamount to confiscation, as America's cable operators recently learned when the Federal government imposed price limits on them and effectively confiscated an estimated $___ billion of their net worth. (Whatever one's stance on the FCC's decision and the law behind it, there is no disagreeing with the proposition that one's ownership of a good is less meaningful when the government can step in, at will, and dramatically reduce its value.)
o The nature of capital in the Third Wave -- tangible capital as well as intangible -- is to depreciate in real value much faster than industrial-age capital -- driven, if nothing else, by Moore's Law, which states that the processing power of the microchip doubles at least every 18 months. Yet accounting and tax regulations still require property to be depreciated over periods as long as 30 years. The result is a heavy bias in favor of "heavy industry" and against nimble, fast-moving baby businesses.
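To illustrate the size of the mismatch described in the item above, here is a small, purely illustrative calculation of my own. It assumes market value tracks the Moore's-Law pace of halving every 18 months and compares that with 30-year straight-line book value; neither figure comes from the document itself.

```python
# Illustrative comparison (assumption: an asset's market value halves every
# 18 months, while accounting rules depreciate it over 30 years).

def straight_line_value(cost, years, life_years=30):
    """Book value under 30-year straight-line depreciation."""
    return max(cost * (1 - years / life_years), 0)

def moore_value(cost, years, halving_years=1.5):
    """Rough market value if usefulness halves every 18 months."""
    return cost * 0.5 ** (years / halving_years)

if __name__ == "__main__":
    for years in (3, 6, 10):
        book = straight_line_value(1000, years)
        market = moore_value(1000, years)
        print(f"year {years}: book value {book:.0f}, Moore's-Law estimate {market:.1f}")
    # After 6 years the books still carry 80% of the original cost, while the
    # Moore's-Law estimate is about 6% -- the bias the authors describe.
```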
Who will define the nature of cyberspace property rights, and how? How can we strike a balance between interoperable open systems and protection of property?
The Nature Of The Marketplace
Inexpensive knowledge destroys economies-of-scale. Customized knowledge permits "just in time" production for an ever rising number of goods. Technological progress creates new means of serving old markets, turning one-time monopolies into competitive battlegrounds.
These phenomena are altering the nature of the marketplace, not just for information technology but for all goods and materials, shipping and services. In cyberspace itself, market after market is being transformed by technological progress from a "natural monopoly" to one in which competition is the rule. Three recent examples:
o The market for "mail" has been made competitive by the development of fax machines and overnight delivery -- even though the "private express statutes" that technically grant the U.S. Postal Service a monopoly over mail delivery remain in place.
o During the past 20 years, the market for television has been transformed from one in which there were at most a few broadcast TV stations to one in which consumers can choose among broadcast, cable and satellite services.
o The market for local telephone services, until recently a monopoly based on twisted-pair copper cables, is rapidly being made competitive by the advent of wireless service and the entry of cable television into voice communication. In England, Mexico, New Zealand and a host of developing countries, government restrictions preventing such competition have already been removed and consumers actually have the freedom to choose.
The advent of new technology and new products creates the potential for dynamic competition -- competition between and among technologies and industries, each seeking to find the best way of serving customers' needs. Dynamic competition is different from static competition, in which many providers compete to sell essentially similar products at the lowest price.
Static competition is good, because it forces costs and prices to the lowest levels possible for a given product. Dynamic competition is better, because it allows competing technologies and new products to challenge the old ones and, if they really are better, to replace them. Static competition might lead to faster and stronger horses. Dynamic competition gives us the automobile.
Such dynamic competition -- the essence of what Austrian economist Joseph Schumpeter called "creative destruction" -- creates winners and losers on a massive scale. New technologies can render instantly obsolete billions of dollars of embedded infrastructure, accumulated over decades. The transformation of the U.S. computer industry since 1980 is a case in point.
In 1980, everyone knew who led in computer technology. Apart from the minicomputer boom, mainframe computers were the market, and America's dominance was largely based upon the position of a dominant vendor -- IBM, with over 50% world market-share.
Then the personal-computing industry exploded, leaving older-style big-business-focused computing with a stagnant piece of a burgeoning total market. As IBM lost market-share, many people became convinced that America had lost the ability to compete. By the mid-1980s, such alarmism had reached from Washington all the way into the heart of Silicon Valley.
But the real story was the renaissance of American business and technological leadership. In the transition from mainframes to PCs, a vast new market was created. This market was characterized by dynamic competition consisting of easy access and low barriers to entry. Start-ups by the dozens took on the larger established companies -- and won.
After a decade of angst, the surprising outcome is that America is not only competitive internationally, but, by any measurable standard, America dominates the growth sectors in world economics -- telecommunications, microelectronics, computer networking (or "connected computing") and software systems and applications.
The reason for America's victory in the computer wars of the 1980s is that dynamic competition was allowed to occur, in an area so breakneck and pell-mell that government would've had a hard time controlling it _even had it been paying attention_. The challenge for policy in the 1990s is to permit, even encourage, dynamic competition in every aspect of the cyberspace marketplace.
The Nature of Freedom
Overseas friends of America sometimes point out that the U.S. Constitution is unique -- because it states explicitly that power resides with the people, who delegate it to the government, rather than the other way around.
This idea -- central to our free society -- was the result of more than 150 years of intellectual and political ferment, from the Mayflower Compact to the U.S. Constitution, as explorers struggled to establish the terms under which they would tame a new frontier.
And as America continued to explore new frontiers -- from the Northwest Territory to the Oklahoma land-rush -- it consistently returned to this fundamental principle of rights, reaffirming, time after time, that power resides with the people.
Cyberspace is the latest American frontier. As this and other societies make ever deeper forays into it, the proposition that ownership of this frontier resides first with the people is central to achieving its true potential.
To some people, that statement will seem melodramatic. America, after all, remains a land of individual freedom, and this freedom clearly extends to cyberspace. How else to explain the uniquely American phenomenon of the hacker, who ignored every social pressure and violated every rule to develop a set of skills through an early and intense exposure to low-cost, ubiquitous computing.
Those skills eventually made him or her highly marketable, whether in developing applications-software or implementing networks. The hacker became a technician, an inventor and, in case after case, a creator of new wealth in the form of the baby businesses that have given America the lead in cyberspatial exploration and settlement.
It is hard to imagine hackers surviving, let alone thriving, in the more formalized and regulated democracies of Europe and Japan. In America, they've become vital for economic growth and trade leadership. Why? Because Americans still celebrate individuality over conformity, reward achievement over consensus and militantly protect the right to be different.
But the need to affirm the basic principles of freedom is real. Such an affirmation is needed in part because we are entering new territory, where there are as yet no rules -- just as there were no rules on the American continent in 1620, or in the Northwest Territory in 1787.
Centuries later, an affirmation of freedom -- by this document and similar efforts -- is needed for a second reason: We are at the end of a century dominated by the mass institutions of the industrial age. The industrial age encouraged conformity and relied on standardization. And the institutions of the day -- corporate and government bureaucracies, huge civilian and military administrations, schools of all types -- reflected these priorities. Individual liberty suffered -- sometimes only a little, sometimes a lot:
o In a Second Wave world, it might make sense for government to insist on the right to peer into every computer by requiring that each contain a special "clipper chip."
o In a Second Wave world, it might make sense for government to assume ownership over the broadcast spectrum and demand massive payments from citizens for the right to use it.
o In a Second Wave world, it might make sense for government to prohibit entrepreneurs from entering new markets and providing new services.
o And, in a Second Wave world, dominated by a few old-fashioned, one-way media "networks," it might even make sense for government to influence which political viewpoints would be carried over the airwaves.
All of these interventions might have made sense in a Second Wave world, where standardization dominated and where it was assumed that the scarcity of knowledge (plus a scarcity of telecommunications capacity) made bureaucracies and other elites better able to make decisions than the average person.
But, whether they made sense before or not, these and literally thousands of other infringements on individual rights now taken for granted make no sense at all in the Third Wave.
For a century, those who lean ideologically in favor of freedom have found themselves at war not only with their ideological opponents, but with a time in history when the value of conformity was at its peak. However desirable as an ideal, individual freedom often seemed impractical. The mass institutions of the Second Wave required us to give up freedom in order for the system to "work."
The coming of the Third Wave turns that equation inside-out. The complexity of Third Wave society is too great for any centrally planned bureaucracy to manage. Demassification, customization, individuality, freedom -- these are the keys to success for Third Wave civilization.
The Essence of Community
If the transition to the Third Wave is so positive, why are we experiencing so much anxiety? Why are the statistics of social decay at or near all-time highs? Why does cyberspatial "rapture" strike millions of prosperous Westerners as lifestyle rupture? Why do the principles that have held us together as a nation seem no longer sufficient -- or even wrong?
The incoherence of political life is mirrored in disintegrating personalities. Whether 100% covered by health plans or not, psychotherapists and gurus do a land-office business, as people wander aimlessly amid competing therapies. People slip into cults and covens or, alternatively, into a pathological privatism, convinced that reality is absurd, insane or meaningless. "If things are so good," Forbes magazine asked recently, "why do we feel so bad?"
In part, this is why: Because we constitute the final generation of an old civilization and, at the very same time, the first generation of a new one. Much of our personal confusion and social disorientation is traceable to conflict within us and within our political institutions -- between the dying Second Wave civilization and the emergent Third Wave civilization thundering in to take its place.
Second Wave ideologues routinely lament the breakup of mass society. Rather than seeing this enriched diversity as an opportunity for human development, they attack it as "fragmentation" and "balkanization." But to reconstitute democracy in Third Wave terms, we need to jettison the frightening but false assumption that more diversity automatically brings more tension and conflict in society.
Indeed, the exact reverse can be true: If 100 people all desperately want the same brass ring, they may be forced to fight for it. On the other hand, if each of the 100 has a different objective, it is far more rewarding for them to trade, cooperate, and form symbiotic relationships. Given appropriate social arrangements, diversity can make for a secure and stable civilization.
No one knows what the Third Wave communities of the future will look like, or where "demassification" will ultimately lead. It is clear, however, that cyberspace will play an important role in knitting together the diverse communities of tomorrow, facilitating the creation of "electronic neighborhoods" bound together not by geography but by shared interests.
Socially, putting advanced computing power in the hands of entire populations will alleviate pressure on highways, reduce air pollution, allow people to live further away from crowded or dangerous urban areas, and expand family time.
The late Phil Salin (in Release 1.0 11/25/91) offered this perspective: "[B]y 2000, multiple cyberspaces will have emerged, diverse and increasingly rich. Contrary to naive views, these cyberspaces will not all be the same, and they will not all be open to the general public. The global network is a connected 'platform' for a collection of diverse communities, but only a loose, heterogeneous community itself. Just as access to homes, offices, churches and department stores is controlled by their owners or managers, most virtual locations will exist as distinct places of private property."
"But unlike the private property of today," Salin continued, "the potential variations on design and prevailing customs will explode, because many variations can be implemented cheaply in software. And the 'externalities' associated with variations can drop; what happens in one cyberspace can be kept from affecting other cyberspaces."
"Cyberspaces" is a wonderful pluralistic word to open more minds to the Third Wave's civilizing potential. Rather than being a centrifugal force helping to tear society apart, cyberspace can be one of the main forms of glue holding together an increasingly free and diverse society.
The Role of Government
The current Administration has identified the right goal: Reinventing government for the 21st Century. To accomplish that goal is another matter, and for reasons explained in the next and final section, it is not likely to be fully accomplished in the immediate future. This said, it is essential that we understand what it really means to create a Third Wave government and begin the process of transformation.
Eventually, the Third Wave will affect virtually everything government does. The most pressing need, however, is to revamp the policies and programs that are slowing the creation of cyberspace. Second Wave programs for Second Wave industries -- the status quo for the status quo -- will do littl
f:\12000 essays\sciences (985)\Computer\Cyberporn On a Screen Near You.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cybersex. This word brings to mind a barrage of images, which might come from Star Trek or the
virtual-reality video by Aerosmith. Sex is everywhere today -- in books, magazines, films, the Internet,
television, and music videos. Something about the combination of sex and computers seems to make
children, and adults for that matter, a little crazy.
In an 18-month study, the team surveyed 917,840 sexually explicit pictures on the Internet. Trading in
explicit imagery is now "one of the largest recreational applications of users of computer networks."
The great majority (71%) of the sexual images on the newsgroups originate from adult-oriented
bulletin-board systems (BBSs). According to the BBS figures, 98.9% of the consumers of online porn are
men; women account for only 1.1% of the activity in chat rooms and on bulletin boards. Perhaps because
hard-core sex pictures are so widely available elsewhere, the adult BBS market seems to be driven by
demand for images that can't be found on the average magazine rack, such as pedophilia, hebephilia, and
paraphilia.
While groups like the Family Research Council insist that online child molesters represent a clear and
present danger, there is no evidence that this danger is any greater than the thousands of other threats
children face every day. The Exon bill proposed to outlaw obscene material and impose fines of up to
$100,000 and prison terms of up to two years on anyone who knowingly makes "indecent" material available
to children under the age of 18.
Robert Thomas spends his days like any other inmate at the U.S. Medical Center for Federal Prisoners in
Springfield. Thomas, 39, an amateur BBS operator in California, made headlines last year when he and his
wife were indicted for transmitting pornographic material to a government agent in Tennessee. This case
shows how tight a squeeze the government is putting on Internet freedom.
f:\12000 essays\sciences (985)\Computer\Cyberspace Freedom.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Being one of the millions of surfers on the Internet, I see that fundamental civil liberties are
as important in cyberspace as they are in traditional contexts. Webster's Tenth Edition dictionary
defines cyberspace as the on-line world of computer networks. The right to speak and publish
using a virtual pen has its roots in a long tradition dating back to the very founding of democracy
in this country. With the passage of the 1996 Telecommunications Act, Congress has prepared
to turn the Internet from one of the greatest resources of cultural, social, and scientific
information into the online equivalent of a children's reading room. By invoking the overbroad
and vague term "indecent" as the standard by which electronic communication should be
censored, Congress has ensured that information providers seeking to avoid criminal prosecution
will close the gates on anything but the most tame information and discussions.
The Communications Decency Act calls for two years of jail time for anyone caught
using "indecent" language over the net, as if reading profanities online affects us more
dramatically than reading them on paper. Our First Amendment states, "Congress shall make no
law respecting an establishment of religion, or prohibiting the free exercise thereof, or abridging
the freedom of speech, or of the press...." The Act takes away this right. The
Constitution-defying traitors creating these useless laws do not understand the medium
they're trying to control. What they claim is that they are trying to protect our children from
morally threatening content.
This "protect our helpless children" ideology is bogus. If more government officials
were more knowledgeable about online information they would realize the huge flaw the
Communication Decency Act contains. We don't need the government to patrol fruitlessly on
the Internet when parents can simply install software like Net Nanny or Surf Watch. These
programs block all "sensitive" material from entering one's modem line. What's more,
legislators have already passed effective laws against obscenity and child pornography. We
don't need a redundant Act to accomplish what has already been written.
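As a rough illustration of how client-side filters of this sort can work, here is a minimal keyword-blocking sketch in Python. It is not how Net Nanny or Surf Watch are actually implemented; the word list, function name, and matching rule are assumptions made only for the example.

```python
# Minimal sketch of client-side keyword filtering (illustrative only; real
# filtering products use far larger lists and more sophisticated techniques).

BLOCKED_WORDS = {"blockedword1", "blockedword2"}  # placeholder household block list

def is_blocked(page_text):
    """Return True if the page contains any word on the block list."""
    words = {word.strip(".,;:!?\"'").lower() for word in page_text.split()}
    return not BLOCKED_WORDS.isdisjoint(words)

if __name__ == "__main__":
    print(is_blocked("An article that mentions blockedword1 in passing."))  # True
    print(is_blocked("A perfectly tame page about astronomy."))             # False
```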
Over 17 million Web pages float through cyberspace. Never before has information
been so instant and so global. And never before has our government been so spooked by the
potential power "little people" have at their fingertips. The ability of anyone to send pictures
and words cheaply and quickly to potentially millions of others seems to terrify the government
and the control freaks. Thus, the Communications Decency Act destroys our own constitutional
rights and insults the dreams of Jefferson, Washington, Madison, Mill, Brandeis, and de Tocqueville.
It's funny: now that we finally have a medium that truly allows us to exercise our First
Amendment right, the government is trying to censor it. Forget them! Continue to engage in
free speech on the net. It's the only way to win the battle.
David Hembree
October 23, 1996
Dr. Willis
f:\12000 essays\sciences (985)\Computer\Cyberspace in Perspective.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1.
If a survey were being done on how people experience cyberspace, one would immediately notice that no two answers would be the same. Experiencing cyberspace is something that is different for every individual. I myself experience cyberspace psychologically; I experience it in my mind. There have been many attempts to define the abstruse term, but to date, no one has pinned the tail on the donkey. There cannot be one solid definition for a word that possesses so many meanings. I personally associate the word cyberspace with the idea of being able to travel to distant places without ever leaving my chair. Obviously, I know that there is no possible way of visiting different places or countries via my home computer, but in my mind, when I see the location that I am connected to, it feels as though a part of me is there. The best part is that I can switch from scenario to scenario without having to travel any ground. I do not feel a sense of distance or location, except when it takes a prolonged amount of time to connect to a host. When I travel from place to place (site to site), I do not cover any known physical distances, but instead I cover visual distance.
Just as many people do, I refer to the places that I visit as virtual worlds. I like calling them this because I never actually get to see the reality of the "world". I only get to see it electronically and digitally. The feeling that I experience while in cyberspace is knowing that I possess the power to visit anywhere I want. When I click one of the buttons on the mouse, or what I refer to as a transporter, I feel as though all the power in the world rests at the end of my fingertips. I am in my own sort of fantasy land. Once I land in a desired location, or website, I have the opportunity to click on pictures and words that take me to new worlds. These pictures and words have the power to make my virtual tour even more pleasing by introducing me to new and exciting things. People have referred to experiences in cyberspace, experiences such as mine, as a basic extension of the mind. I definitely agree with this statement. I believe that it takes imagination and creativity to experience all of the things that cyberspace has to offer. With all the colors, strange text and mind-boggling graphics, cyberspace is something that everyone must experience on their own. No two people experience it in the same way and it takes practice to learn different ways of experiencing all that it has to offer. I guess everyone must find their own little 'cyber-niche'.
2.
In today's technologically oriented society, it is difficult to go about one's daily life without interacting in some form or another with digital components. Communication is the perfect example of how people interact with digital technology. Talking to loved ones who live on the other side of the globe, faxing a friend, or simply calling in sick to work are all forms of communication, but these examples are taken for granted.
A popular form of digital communication, whether people realize it or not, is the cellular phone. Cellular phones have become very popular toys over the past few years and they are 100% digital. For people who are constantly on the go, the "Cell Phone" is a convenient digital advancement. I find the cellular phone to be of much help in stickier situations, like when I am being forced to change a flat tire in -20° weather, when there's heavy traffic, when I want to find a quicker route to where I'm going, or when I get lost in an unfamiliar region. They are relatively expensive to use, but in most cases, I would say that they are well worth their price. I don't have five-hour conversations using them, but I can let people know what I want to tell them in a short amount of time, which I find extremely handy. Before the ease of world-wide portable phones, there was a different breed of digital communication devices: beepers. Beepers, or pagers as they are commonly called, go hand in hand with cell phones. If a person does not own a cellular phone, a pager is a great alternative. It is a piece of digital equipment that allows the carrier to be notified when someone is trying to get in touch with them. I find that pagers are better in the sense that they cost less than cell phones do. I was given a pager years ago, and I still use the same one today. It's much easier to answer a pager than a cell phone when I am driving. Even though they are not as great a communication tool as a cellular phone, I would probably never give mine up. I also communicate with the pager because I read its LCD display when I am interested in finding out who is disturbing me.
Still on the topic of communication, I cannot forget to mention television. It is a huge form of communication today, more so than ever before. Although televisions have changed dramatically in a short amount of time, TVs communicate several different messages. Everyone watches television at some point in time, and when they do, they are most likely interacting with a digital TV. I watch television for relaxation, or to keep up to date on world events. By simply changing channels, or doing more complex things such as programming channels, times, and dates, I am interacting with the television. Television is visually pleasing. My concentration becomes fixed to the point that I have no idea, or very little idea, as to what is going on around me.
VCRs go hand in hand with the television. As I do with the TV, I have to give the VCR commands as well. Whether I tell it to PLAY, STOP or REWIND, I am interacting with it. As with the television, I am fixing all of my attention on what it is showing, or playing.
Going back to raw communication, a real piece of digital equipment that I have found to be handy is the fax machine. Fax machines transmit messages back and forth from one party to another. Fax machines are not as accessible as TVs or VCRs, but those that use them get the chance to interact with digital technology. I have found this type of digital equipment extremely useful when looking for a job. I have, in the past, faxed resumes to different employers who were looking for workers. I have also used them on many other occasions, but getting access to one is relatively difficult without spending money to send copies. Another type of digital component that I interact with is my computer. I use it on a daily basis to type assignments, send e-mail, and gather information off the Internet. Also, being a student at Trent University, I have a student card which digitally allows me to take out books, eat, and get in and out of special events all by simply swiping it through a scanner. Digital pocket agendas have become quite common for students. I use one almost every time I use the phone. It contains several phone numbers, important dates, and many reminders that make organization much easier. There are many other digital components that I interact with, but I feel that the ones I have mentioned are the most prevalent in my life.
3.
Just as there are many forms of digital components that I interact with, there are many analog components as well. The most popular analog component that I use is the telephone. Some use fiber-optic lines, but I will refer to the older, more traditional phones. There are many reasons why I use the phone, such as keeping in touch with family and friends, calling my boss at work, or ordering a pizza. In today's hi-tech environment, I can even call JoJo to find out what tomorrow will bring. For me, the telephone is the easiest way to get in touch with people. I find that it is much easier to express feelings over the phone for the simple fact that I do not have to be vis-à-vis with the person. Being face to face is sometimes much too personal, and I am far more interested in telling the boss that I cannot come into work without him having to see that I'm not actually sick. Therefore, it is clear that there are many reasons for using a telephone, and that these examples apply not only to me, but to many others who use a telephone.
Some feel that because we are in such a hi-tech world, they have to purchase the newest high-priced advancements, for example a high-quality digital watch. I think this is very unnecessary. I own an old-fashioned analog watch. The good old-fashioned glow-in-the-dark dial works best for me. I think it's just a status symbol to have the best watch, but then again, I've never seen a 24K gold digital watch. Also, when I am not listening to a CD, I still use cassette tapes. For me there is not much difference in the sound, but I can tell a difference when I want to cue (fast forward or rewind) a song. Obviously, CDs are the 'hip thing' of the 90's, but I find nothing wrong with a cassette. There are many forms of music that I listen to that are not digital. Many people still have tape players in their vehicles, which goes to show that I am not the only person who still does not mind analog.
4.
When trying to figure out how far apart two phones are from each other, many forms of calculation come into play. It is easy to estimate that your phone is approximately thirty feet from your neighbor's, but that is not very accurate; there are ways to measure the exact distance between two phones. When the police send out a signal that bounces off a moving object and back to their radar gun, a computer program measures how long it took for the signal to come back and performs the calculation. A programmer could easily write a loop program that sends a signal to a far-away computer (modem) and then waits for the signal to come back. This loop would allow the computer to figure out how long it took for the signal to reach its destination and come back, thus telling the person how far apart the two phones are from each other.
By looking at two phone numbers, one can tell from their format roughly how far apart they may be. For example, if one of the phone numbers being compared was (519) 498-0872 and the other was 011-356-7951, one might take a wild guess that these numbers are far from each other. Long distance calls, for some reason, also seem to sound a little different than local calls. When one notices that the voice on the other end sounds faded or full of static, one can guess that the call is coming from far away. One could get extremely technical and measure the resistance between two phones, but that is difficult for the average person to do. The easiest way to tell how far apart two phones are is to estimate, by voice quality, by the length of the phone number, or by running a loop program.
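Here is a minimal sketch of that "loop program" idea in Python: bounce one byte off a remote echo service, time the round trip, and convert it to a rough upper bound on distance using the speed of light. The host name is a placeholder, and in practice modem, switching, and software delays dwarf propagation time, so the result is only a crude estimate.

```python
import socket
import time

SPEED_OF_LIGHT_KM_PER_S = 299_792  # a signal can propagate no faster than this

def round_trip_seconds(host, port=7):
    """Send one byte to a TCP echo service (port 7) and time the round trip."""
    with socket.create_connection((host, port), timeout=5) as sock:
        start = time.perf_counter()
        sock.sendall(b"x")
        sock.recv(1)
        return time.perf_counter() - start

def rough_distance_km(rtt):
    """Upper bound on one-way distance, ignoring modem, switch, and software delays."""
    return (rtt / 2) * SPEED_OF_LIGHT_KM_PER_S

if __name__ == "__main__":
    rtt = round_trip_seconds("echo.example.net")  # placeholder host
    print(f"round trip {rtt * 1000:.1f} ms -> at most about {rough_distance_km(rtt):.0f} km away")
```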
f:\12000 essays\sciences (985)\Computer\CYBERSPACE.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As described by William Gibson in his science fiction novel Neuromancer,
cyberspace was a "consensual hallucination that felt and looked like a physical space
but actually was a computer-generated construct representing abstract data." Years
later, mankind has realized that Gibson's vision is very close to reality. The term
cyberspace is frequently used to describe the process in which two computers connect
with each other over telephone lines; in this communication between the two systems
there seems to be no distance between them.
There are now four categories that describe the major components of today's
cyberspace. One of those is commercial on-line services. These large computer
systems can host thousands of users simultaneously. When computer users purchase
an account from the company, they receive a screen name and a password, which they
can then use to log on and use the system. Most of the on-line systems have chat
rooms where users can chat in real time with one another. Some users even think of
on-line services as a community.
The second category involves bulletin board systems (BBSs). These services offer user
accounts like their larger on-line service cousins, but BBSs have fewer users because
they run on smaller computers. The system operators, more commonly known as sysops,
run the boards. Since most BBSs are hobbies, there is usually no charge for an account.
As with on-line services, users use BBSs for trades, games, and chatting with other
users. Since bulletin boards are so easy to set up, there are thousands of them located
around the world. Each board has a theme; these themes range from astronomy to racist
neo-Nazi crap. A board's theme helps users in their search for a board that will
satisfy their personal preference.
A third category is the private system. These private systems sometimes run bulletin
boards privately, not letting the public access them. In these private systems users
can perform specialized computer operations or access data. Through this private
network, users within a company can send mail, faxes, and other messages to each other
over the company's computer network. If a worker needed to look up a customer's
information, he could access it through the company's private network. The public
cannot get access to a company's private system unless he or she knows the system's
password.
The fourth and last category is computer networks. These are collections of connected
computers that exchange information. One of the most well known is the Internet, the
so-called "network of networks." Through the Internet a user can transfer files to and
from systems. The program that allows this is called FTP (File Transfer Protocol). It
allows users to send anything from faxes to software from one computer to another. The
file is taken from one computer and sent across the phone lines to the receiving
computer, which reassembles the information.
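As a concrete illustration of the transfer just described, here is a minimal sketch using Python's standard ftplib module. The server and file names are placeholders; a real transfer would point at an actual FTP site and an existing remote file.

```python
from ftplib import FTP

def fetch_file(host, remote_name, local_name):
    """Download one file over FTP (placeholder host and file names)."""
    with FTP(host) as ftp:
        ftp.login()  # anonymous login, as offered by many public archives
        with open(local_name, "wb") as out:
            # RETR asks the server to send the file; each received chunk is written locally.
            ftp.retrbinary(f"RETR {remote_name}", out.write)

if __name__ == "__main__":
    fetch_file("ftp.example.org", "readme.txt", "readme.txt")
```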
In cyberspace there are a number of tools a user can use. E-mail is a popular tool
which allows the transfer of electronic mail between users. This mail is more
convenient than postal mail because it travels over phone lines. Software exchange is
also a popular tool; some systems sell software, while other times it comes free of
charge. The FTP program is the reason transfers are so speedy. Games and entertainment
are another resource: a user on-line can play a game against someone who is hundreds of
miles away. It is now possible to go shopping from the privacy of your own computer
without even having to leave your home. The chat rooms that are mostly found on-line
allow users to communicate with a variety of people together in a virtual room.
Sometimes services will allow guest speakers to have access to the rooms so multiple
users can ask questions. One popular resource is education. A user can find endless
amounts of information on the Internet, on-line services, chat rooms, or even personal
computer software. The Internet is bigger than any library, and it is possible to find
almost any type of information needed.
As described by Wiliam Gibson in his science fiction novel Neuromancer,
cyberspace was a "Consensual hallucination that felt and looked like a physical space
but actuallly was a computer-generated construct representing abstract data." Years
later, mankind has realized that Gibson's vision is very close to reality. The term
cyberspace was frequently used to explain or describe the process in which two
computers connect with each other through various telephone lines. In this
communication between the two systems there seems to be no distance between them.
There are now four catagories that describe the major components of todays
cyber space. One oof those is commercial on-line services. These large computer
systems can host thousands of users simultaneously. When a computer user purchases
an account from the company they recieve a screen name and a password. The user
then can use his or her screen name and password to log on and use the system. Most
of the online systems have chat rooms where users can chat in real time with one
another. some users even think of on-line services as a community.
The second catagory involves Bulletin Boards or (BBS's). These services allow the
user accounts like their larger on-line service cousins. These BBS's have less users
because they run on smaller computers. The system operators, more commonly
known as sysops, are running the boards. Since most BBS's are hobbies there is
usually no charge for an account. The same as on-line services, users use BBS's for
trades, games, and to chat among other users. Since bulletin boeard are so easy to
set up there are thousands of them located around the world. Each board has a
theme. These themes range from astronomy to racist neo-nazi crap. A boards theme
helps users in their search for a board that will satisfy their personal preference.
A third catagory is the Private System. These private systems sometime run
bulletin boards privately, not letting the public acess. In these private systems users
can perform specialized computer operations, or access to data. Through this private
network users within a company can send mail, faxes, and other messages to each
other through the companies computer network. If a worker was to look up a
customers information he could access it through the companies private network. The
public can not get access to the companies private system unless he or she knows the
systems password.
The fourth and last catagory is computer networks. These collections are a group
of connected computers that exchange information. One of the most well known is
the internet. The internet is the so called "network of networks." Through the
internet a user can transfer files to and from systems. The program that allows this
is called (FTP) File Transfer Protocol. This program allows users to send anything
from faxes to software from one to another. The progam is taken from one computer
and sent across the phone lines to the recieving computer which compiles the
information.
In cyberspace there are a number of tools a user can use. E-mail is a popular one, allowing the transfer of electronic mail between users; it is more convenient than postal mail because it travels over phone lines. Software exchange is also popular: some systems sell software, while on others it comes free of charge, and the FTP program is what makes these transfers so fast. Games and entertainment are another resource; a user on-line can play a game against someone who is hundreds of miles away. It is now even possible to go shopping from the privacy of your own computer without leaving home. The chat rooms found mostly on on-line services let users talk with a variety of people together in a virtual room, and sometimes services give guest speakers access to a room so that multiple users can ask questions. One popular resource is education. A user can find endless amounts of information on the Internet, on on-line services, in chat rooms, or even in personal computer software. The Internet is bigger than any library, and it is possible to find almost any type of information needed.
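Returning to the e-mail tool mentioned above, a minimal sketch of handing a message to a mail server with Python's standard smtplib might look like this; the server name and addresses are invented placeholders, not part of the original essay.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"          # hypothetical sender
    msg["To"] = "bob@example.org"              # hypothetical recipient
    msg["Subject"] = "Hello from cyberspace"
    msg.set_content("No stamp, no envelope, just a few packets over the phone line.")

    # Hand the message to a mail server, which relays it toward the recipient.
    with smtplib.SMTP("mail.example.org") as server:
        server.send_message(msg)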
f:\12000 essays\sciences (985)\Computer\Datorbrott.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hacking
and
other computer crimes
by
Silvio Usilla
Fredrik Larsson
Patrik Bäckström
&
Anders Westerberg
Autumn term 1992
Contents
Summary
Hacking
The history of the hacker
How it began
Then came the computer
The term hacker changes meaning, again
The typical hacker
The modern hacker
Computer viruses
Computer crime in Sweden
Security
Confidentiality problems
Authorization
Password routines
Discussion
Sources
Computer crime: summary.
We have covered the subjects of hacking, computer crime and unlawful computer intrusion. On hacking we have covered its history and what a hacker actually does, since there are so many rumours about hackers. We have treated computer crimes as offences that are indirectly connected with computers, and we have also written about unlawful computer intrusion.
Hacking began with the telephone. It was often blind youngsters who started reaching out into the world by phone. The meaning of the word changed when computers arrived; then it was computer-obsessed youngsters who came to be called hackers. The mass media describe hackers as hooligans who do nothing but break into computer systems to sabotage them, but this is not correct: it is crackers who dial in to sabotage. A hacker receives the title from another hacker; otherwise it is worth nothing. Hackers often work together and help one another with problems, but are at the same time careful about who receives the information, so that no novice misuses it. The mass media often paint hackers as serious criminals, but that is usually not the case. Hackers break into systems almost exclusively to learn a system and gain knowledge of how it works, not to destroy or steal information. A cracker, on the other hand, is more destructive; one could say that a cracker is a hacker with a destructive bent.
Computer viruses: what are they, and what can be done about them? A virus is a program that has been written to destroy. The only real protection is to keep a guinea-pig machine on which a suspect program can be planted and observed.
Computer crime in Sweden is not as common as one might think, and it shows no tendency to increase either. This is shown by a survey that the Ministry of Public Administration (Civildepartementet) carried out among companies around the country. The ministry asked, among other things, whether the companies felt worried about the future; they answered that they did not. Nor had they suffered any serious computer crime in the past, so they looked to the future with confidence.
It is of the utmost importance that registers and the like are protected from unauthorized access; a leak can have terrible consequences for national security, and also for life on the personal level. There are several ways of reading the information on a computer screen. One is RöS (compromising emanations): with an antenna and a monitor you can see what comes up on a screen without being in the room, which is at the more sophisticated end of the scale. To protect data in transit you can use encryption (hidden text), though you should avoid writing your own encryption programs. The most common measure is that a password must be given when connecting to a computer system. The conclusion is: protect yourself according to the importance of the information, and at a cost you consider reasonable, because there is no such thing as 100% protection.
What is a hacker?
How it began.
The first people to be called hackers were a number of individuals in the 1960s, scattered across the world, who learned to manipulate the telephone network. Most of them were youngsters, often blind or lonely, who wanted contact with other people. It started with a blind boy of about ten who amused himself by making telephone calls; it was the most fun he knew. But he was not only interested in telephones, he also had a superb ear for music. By chance he discovered that if he whistled a certain tone he could make the telephone connect to another number. In this way he learned to reroute calls from his own telephone without having to pay for them, and he came into contact with other people who shared his interest in all sorts of places around the world. He taught them the tricks he had worked out, and boxes began to be built that could generate the tones needed to switch the calls. These were called Blue Boxes. Some companies even began manufacturing the boxes, and you can guess whether they made money on it!
Then came the computer.
A little way into the seventies, home computers slowly began to appear. The word hacker then changed its meaning to roughly "a person who works, sleeps and lives with computers". There were a number of young people in the USA who lived this way. The language they spoke was understood only by hackers; an ordinary person would be left standing like a question mark. These youngsters have meant a great deal to the development of computers, especially the concept of time sharing, or multitasking as it is also called, which means that more than one person can use the same computer at the same time.
The term hacker changes meaning, again.
At the end of the seventies the telephone network began to be used to connect computers that might be in a completely different town or country. At the same time, private individuals and companies began opening databases that you could call with a so-called modem (MOdulator/DEModulator), a device that converts the digital signals from the computer into analogue signals that can be carried over the telephone network. On these databases you could find information of all kinds, exchange experiences with other computer enthusiasts, and so on. Around these databases grew up groups of people whose greatest interest was trying to get into systems they were not allowed to enter, armed with a modem and a terminal. These people now began to be called hackers, and they still are today.
The typical hacker.
The typical hacker is usually:
* Male
* 15-25 years old
* Not previously convicted
* Unusually intelligent and persistent
* Interested only in computers
* Of the opinion that school is too easy and boring
Most hackers do nothing directly criminal beyond the intrusion itself. They do it only to gain knowledge about different systems. That is their greatest interest: to acquire as much knowledge as possible about everything that has to do with computers. They can often sit up for days on end to get into a system.
The modern hacker.
"Hacker is a title of honour. It is only worth something when you are given it by others," Jörgen Nissen quotes a hacker as saying in Ny Teknik 1990, no. 49.
A hacker, if he really deserves the title, is usually a member of a group of other hackers. Within the group the hacker spends his time maintaining equipment, programming and refining various pieces of software. The groups hold meetings with other hacker groups where they socialize and discuss a bit of everything. The rumour that they copy software for one another at these meetings is probably not entirely false, but since that is not illegal the matter is not really at issue; do not misunderstand me, I do not mean that copying programs is beside the point, only that the discussion is not relevant here.
Computers have suddenly created a new field in which the young often outdo their teachers and almost always their parents. This has given rise to a number of myths in hacker circles, for example the one about the young caretaker who happens to pass a group of computer experts stuck on a particular problem; the caretaker, who is a young hacker, types in a few commands and the problem solves itself. This is usually pure invention, but there is probably a grain of truth in it. Many hackers moonlight for various computer companies: some install computers, others adapt systems to their users, and there are even those who solve problems over the phone.
When the mass media take up the subject of hacking it is almost exclusively about illegal acts: hackers who have got into a company's computer system and sabotaged it, or wormed their way into a bank computer and grabbed money one way or another. But calling this hacking is wrong. The correct term is cracking; a hacker is not necessarily a cracker, but a cracker is almost always a hacker. A cracker deals with everything from breaking the copy protection on games to advanced crime and outright sabotage.
When a hacker does break into a computer system, it is not to cause chaos but to prove his competence to his friends and to himself. Writing computer viruses is very unusual; it is against the hacker's code of honour. And since viruses have now come up, I had better explain them. One often hears about viruses causing havoc, but what exactly is a virus?
Computer viruses
A computer virus is a program whose purpose is to destroy other people's information. It can hide inside other, innocent programs, or simply be a program that you believe is harmless, a so-called Trojan horse.
In some cases the virus just lies in wait for a particular moment before it suddenly, without any warning, goes into action and starts to sabotage, as with the Friday the 13th virus, which only activates on a Friday the 13th.
Viruses used to spread mostly through portable storage media such as floppy disks and magnetic tape. Now that data communication has taken off, transfer via modem (a kind of telephone for the computer) has become the most common route. A so-called cracker, note, not a hacker, can break into a computer system and release his viruses. If the infected computer is connected to a lot of other computers, a so-called network, you can count on them becoming infected too. If a private company is hit and the virus destroys information that is important to it, the losses can run into millions; if it is a state-owned organization handling, say, personal records, the consequences can be catastrophic. As far as is known, however, it is mainly personal computers that have been affected.
There are vaccination programs against viruses, but they are not very reliable: different viruses have different characteristics, and it is almost impossible to get a vaccination program to recognize them all, especially as new ones appear all the time. To be really sure that a program is completely virus-free you would first have to examine it with several vaccination programs and then test-run it on an isolated computer for a while. If nothing has happened after about a year, you can be fairly sure the program is virus-free. But who has the energy to do all that? Besides, by then the program is rather dated too. So far there has been no serious damage to computer systems, since the viruses found have been fairly harmless, but the mere fact that they exist should be a warning to us all. No one is safe. Now that I have discussed viruses and hacking a little, it is time to get a taste of computer crime...
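As an aside before moving on, the core idea behind such a vaccination program, recognizing known virus patterns in files, can be sketched in a few lines of Python; the signatures below are invented placeholders, and a real scanner is of course far more sophisticated.

    # Naive signature scanner: flags files containing known byte patterns.
    KNOWN_SIGNATURES = {
        "demo-virus-a": b"\xde\xad\xbe\xef",   # invented example signature
        "demo-virus-b": b"\x13\x37\xf0\x0d",   # invented example signature
    }

    def scan_file(path):
        data = open(path, "rb").read()
        return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

    # A scanner like this only finds what it already knows about,
    # which is exactly why new viruses slip past it.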
Computer crime in Sweden
The Ministry of Public Administration has carried out a survey in Sweden of how much companies are affected by computer crime. It turned out that in fact nobody felt affected by serious computer crime. Major coups such as the SPP/VPC affair, in which 53 million kronor were embezzled, have never been repeated.
Reports in the mass media have been misleading in their treatment of computer-related fraud and the like. When the various cases are examined in depth, it turns out that they do not even fall under the strict definition of computer crime. The conclusion is thus that hardly any occurrence can be demonstrated at all.
The most common computer crimes are said to be embezzlement and fraud. This is in fact ordinary, traditional crime, with the sole difference that computer technology has been used; it is called computer-related crime. It naturally occurs, but to a limited extent. The most common case is that a person at, say, a bank, a post office or the social insurance office, who has access to a terminal, falls for the temptation to make illegal transactions. As noted, this is unusual and of marginal economic significance, according to some sources. Encouragingly, it does not look as though it will increase to any appreciable extent either.
In some cases where the perpetrator has been convicted, it has emerged that the crime had been going on for several years, sometimes four or five. The sources interviewed express no concern, even though the number of unreported cases may be relatively high. How the various cases are discovered varies: some are caught by routine internal checks, but most often it is chance that puts a spoke in the perpetrator's wheel. A colleague happens to notice a small irregularity, and the carousel is off.
A good example of what sheer chance can accomplish is the embezzlement at Union Dime, admittedly not in Sweden but still a good example. A man, whom we can call Bruce Banner, worked as a head cashier at the Union Dime savings bank. He felt unfairly treated by the company's management, and this served as his motivation. He began manipulating the main computer so that it printed regular reports saying that all accounts were in order, even though they actually were not. What Banner really did was first to falsify the bank's records and then move fictitious sums of money back and forth in the records in such a way that the whole thing was practically impossible to detect. He targeted primarily accounts that held a lot of money and saw few transactions. The bank's system required customers who wanted to make a transaction to bring their passbooks to the bank, so that it could be recorded both in the passbook and in the computer system. After the bank closed, he studied the day's transactions and ticked off the accounts with large balances. If he saw an account holding, say, 100,000 dollars, he went to his terminal, which he was entitled to use, and made a so-called supervisory override, a correction, ordering the computer to change the amount to 50,000 dollars. Then he opened the vault and took 50,000 in cash. As long as the account holder made no major withdrawal, this would never be discovered. And sure enough, he was not caught until three years later, and not because of a colleague or anything of that sort. It so happened that Banner was a heavy gambler who bet on horses, and he had begun financing his gambling with the stolen money at a local illegal betting parlour. The police later raided it and found Banner's name. The reason they traced him was that they had seen that he had staked up to 30,000 dollars in a single day, and they wanted to know where he got the money. He then confessed and said he regretted what he had done. Banner was given twenty months, a light sentence, and was released after fifteen for good behaviour.
How, then, could he pull it all off? What he needed was a bit of quick thinking and a knowledge of the system's weaknesses gained through experience. He was no genius at programming or anything of the kind; all he had was the training he had been given to do his ordinary job.
Other incidents involve schoolchildren who have used other people's passwords in the videotex system, though without causing any great damage. Sources say that this kind of crime does not currently constitute a problem in Sweden.
Of course, not all computer crime is about swindling money. There is more, much more. Next I will say a little about security: what is worth protecting, how to protect yourself, and so on.
Security
There is a great deal worth protecting, for example various registers with mixed contents. Suppose the armed forces' register of emergency depots, where weapons, canned food, gas masks and various medical supplies are stored, got out, and suppose it also became known where the armed forces have their hidden bases. If this reached, shall we say, Norway, the consequences could be devastating if they took it into their heads to start a war. That is admittedly unlikely, but suppose instead that VAM got hold of the registers; they could then try to kill various public figures. If they could also get hold of the security companies' registers of alarms and so on, they could exploit those to commit further crimes. As you can see, the consequences could be devastating if just these two or three registers fell into the wrong hands.
Consider, then, that there are thousands of similar registers, perhaps not of equal importance to national security; but if the records of the social insurance office or the social welfare office got out, it could have terrible consequences at a workplace. Imagine finding out that someone at your workplace had been convicted of assaulting children; how would you react? Or that someone in your circle of acquaintances had contracted various contagious diseases during a trip to, shall we say, Thailand. It is therefore of the utmost importance that such registers do not become public.
Registers of this kind are protected by the Secrecy Act, for whatever good that does against hacking and cracking (unlawful intrusion achieved by forcing passwords and other security barriers). But it does not help to write a lot of sophisticated security programs as long as the phenomenon of RöS (compromising emanations) exists, and it is not long since people began thinking about RöS in civilian contexts as well.
RöS can consist of various things, including sound, electromagnetic signals, video signals, radio signals and superimposed signals that leak out into the power grid. Computers, terminals and even electric typewriters emit RöS. RöS is not particularly difficult to pick up over the air; it can be done with an ordinary, usually portable, TV set and a fairly simple antenna. An electric typewriter is usually not very interesting, but a computer on which drafts of secret documents are written is clearly much more so. This method is not normally used by the ordinary hacker; it is used mainly for industrial espionage and for obtaining military secrets. How, then, do you protect yourself against RöS? Protection can be achieved by building fifteen-metre-thick concrete walls around the computer equipment, but there are somewhat simpler methods, such as shielding, various kinds of interference-suppression filters and sound insulation. These are all rather clumsy solutions; research is being pursued intensively, and the goal is to be able to build fully shielded computer equipment at a reasonable price.
There is also wiretapping (tapping information off the telephone network), again perhaps not something for the ordinary hacker, but it does occur. Wiretapping is prohibited under Swedish law, but both the telephone authority (Televerket) and the police know that it happens. The simplest way to protect yourself against this tapping is to encrypt particularly important information. The most common approach is to scramble the text according to a predetermined pattern shared between sender and receiver, though you should beware of devising your own coding schemes, since the pattern can then be all too easy to find. There are also programs in which the computer encodes the message essentially at random, but this requires the receiver to have the same program. How this works in detail we do not know, but the result can look completely illogical to the ordinary layman.
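As a minimal sketch of the "predetermined pattern shared between sender and receiver" idea, here is a simple XOR cipher with a shared key in Python; it illustrates the principle only, and the warning above against home-made schemes applies to it in full.

    from itertools import cycle

    def xor_crypt(data: bytes, key: bytes) -> bytes:
        # The same function both scrambles and unscrambles the data,
        # as long as sender and receiver share the same key.
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    secret = xor_crypt(b"meet at noon", b"hemligt")      # 'hemligt' is Swedish for 'secret'
    assert xor_crypt(secret, b"hemligt") == b"meet at noon"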
There are many different ways to get into computer systems, and to protect yourself against those who try; the examples above are only a fraction of them. I mentioned security programs a sentence or two back, which leads me neatly to the next subject.
Confidentiality problems
When a crime is committed or prepared, it is common for the criminal, who usually works at the company, making it a so-called inside job, to insert certain code into a program that is used daily. When the criminal then gets home, or to another suitable terminal, he can dial in, run the program, perhaps type a command, and presto, he is past the confidentiality barriers. That is why so-called check-ups are important. These are carried out either manually or automatically. In certain special systems there are programs that constantly check up on one another; these are very sophisticated programs and are not normally used by other systems, but in any case they are one version of an automatic check-up program that checks whether anyone has tampered with other programs. A manual check-up is carried out by an employee who goes through the program piece by piece. This is probably the safest method, but also the most expensive and time-consuming, which is why automatic programs are usually used.
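A simple automatic check-up of the kind described can be built around checksums: record a fingerprint of each program while it is known to be clean, then compare later. A minimal Python sketch, with file names chosen only for illustration:

    import hashlib, json

    def fingerprint(path):
        # SHA-256 digest of the file's current contents
        return hashlib.sha256(open(path, "rb").read()).hexdigest()

    def record_baseline(paths, baseline_file="baseline.json"):
        with open(baseline_file, "w") as f:
            json.dump({p: fingerprint(p) for p in paths}, f)

    def check_up(baseline_file="baseline.json"):
        with open(baseline_file) as f:
            baseline = json.load(f)
        # Any file whose digest has changed since the baseline is reported
        # as possibly tampered with.
        return [p for p, digest in baseline.items() if fingerprint(p) != digest]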
But there are other ways to break in. To make things a little harder for the criminal, there are special program packages that ask for a name and a password. This can of course be cracked, but only if the criminal knows a name that is in use and the password that goes with it. As the section on RöS shows, obtaining these is entirely possible. And even if the criminal only has a vague idea of what is right (how he gets that idea I will come to later), he can try his way forward. But everyone who tries to get into the computer system is logged; even a wrong password is written down in a special file. This means that if the criminal did not get in with the name and password he had worked out from somewhere, the names and passwords he tried are still logged, so it is possible to check where the leak is; by a leak I mean either carelessness or someone selling information. These log files can only be read by a person with a certain status and are not meant for just any member of staff. That brings us straight to the subject of authorization.
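The logging of attempts described above amounts to only a few lines of code. A minimal Python sketch, assuming a hypothetical user table and log file:

    import datetime

    USERS = {"anna": "132örn"}                 # hypothetical account and password

    def log_in(name, password, log_file="attempts.log"):
        ok = USERS.get(name) == password
        # Every attempt, right or wrong, is written to the log file,
        # so a leaked name/password pair still leaves a trace.
        with open(log_file, "a") as log:
            log.write(f"{datetime.datetime.now()} {name!r} "
                      f"{'OK' if ok else 'FAILED'}\n")
        return ok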
Authorization
When a person has typed in his name and password to get into the system, it does not mean that he immediately has access to all the information in it. Simply to make sure that information is not spread to just anyone, there are certain so-called classification levels, which mean that the boss, for example, has the highest level and thus access to everything, while the ordinary employees lower down only have a normal level; they have access to no more than they need for their work. How many such levels there are depends entirely on the company. We have now talked so much about passwords that I feel more or less obliged to explain them at somewhat greater length.
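The classification levels can be thought of as a simple numeric comparison: a user may read a record only if his clearance is at least the record's level. A minimal Python sketch under that assumption, with roles and records invented for illustration:

    # Higher number = higher clearance.
    CLEARANCE = {"boss": 3, "payroll clerk": 2, "trainee": 1}

    RECORDS = [
        ("canteen menu", 1),
        ("salary list", 2),
        ("merger plans", 3),
    ]

    def readable_by(role):
        level = CLEARANCE[role]
        return [name for name, needed in RECORDS if level >= needed]

    print(readable_by("payroll clerk"))        # ['canteen menu', 'salary list']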
Password routines
Once such an authorization system is in place, you obviously need some form of identification. The simplest is to type in your name and possibly a password. But it is not a bad idea to have passwords here and there throughout the system: for instance a shared password for logging into the system, in addition to the other identifications. It is also quite smart to use a code word instead of your real name, and changing passwords often is not a bad idea either. To really protect yourself, you should have a combination of either shared or personal passwords for the following:
- The terminal (keyboard and screen)
- The computer system, and/or parts of it
- Areas of working memory
- Programs, and/or parts of them
- Information registers, and/or parts of them
- Special categories of information
- Special functions
This brings us to the gigantic problem posed by personnel. The staff seem to be more or less incapable of remembering their passwords. You should preferably avoid writing passwords down, and if you absolutely must, the slip of paper should be kept in a very, very safe place, such as a safe. But not even this works; it is actually quite common to find notes with passwords stuck to the terminals.
A password should contain at least five characters; the reason the number is so low is so that the staff can remember their passwords more easily. It is becoming very common for passwords to be changed, preferably at irregular intervals, and the person responsible for security at the company is responsible for making sure this actually happens. Systems that do not do this also show much higher rates of computer intrusion than those that do. Why, then, does not everyone switch to this new and better system of irregular password changes? Firstly, it creates even more trouble for staff trying to remember their passwords; secondly, there is a great deal of carelessness when security systems are installed; and the perhaps silliest but still very common reason is that it is so hard to come up with a password that is good yet simple.
Personally I use a system I think is very good. First you pick a book from your bookshelf, which you then use every time you change your password. You decide on a few pages to use, for example 23, 110 and 132; you must not forget these pages, and it is not too risky to note them down somewhere, because a potential intruder will not think of the following step. You choose a suitable word from one of the pages and combine it in some clever way with the page number. For example, a combination of 132 and björn (bear) might become 132örn or something similar. If you should forget the password, it is quite easy to turn to that page, find the word you used and work out your password again.
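The book-page scheme can even be written down as a tiny routine; the page number and word are the author's own example, and the exact way of combining them here is just one possible choice.

    def book_password(page: int, word: str) -> str:
        # Combine the page number with the tail of the word,
        # as in the example above: 132 + 'björn' -> '132örn'.
        return f"{page}{word[2:]}"

    print(book_password(132, "björn"))         # 132örn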
Discussion
It has been interesting to do a project like this, even though it has been hard to put together a good compilation of the material. I do not know whether it is wise to have as many as four people in a group; it is more fun, but it is harder to combine all the texts when they come from so many directions. The communication worked well, I think, largely because we live near one another; it would probably be harder if you lived in different parts of town and had to discuss the whole project over the phone and at school.
You also notice how lazy you can be; you put everything off as long as possible, why do today what you can do tomorrow. I think one of the hardest parts was the summary: firstly nobody wants to write it, and secondly it is difficult to summarize material that has already been cut down.
Looking at the project itself, I have come away with two impressions while working on this subject. One is that the general public believes computer crime occurs far more often than it actually does, though this may be because most people do not really know what a computer crime consists of; from this statement I exclude program copying, which is very common. The other is that computer crime in which the computer is used as a tool is not regarded as very serious; it can even be seen as rather amusing and fascinating when someone skilled with a computer manages to break into a company's system. I myself belong to this crowd of people who find it quite fascinating how you can get past various barriers and passwords and into secret systems and files.
That is all I have to say in my part of the discussion.
Fredrik
Well... so it is time for this too... I think it has been quite fun to do this project, at least as long as it kept moving. When we had to write the summary it went VERY slowly. It was very hard to get anything done; we just kept putting it off. But in the end we got to grips with it and did quite a lot of work. Then things ground to a halt again. It was lucky we did not have much left by then... But all in all it went reasonably well on average. I do not have much desire to work in the same group again. Working with people you know as well as we know each other was not as good as I thought it would be; I think that when you work with someone you do not know so well, you do your best not to seem lazy. As for the project itself, it was hard to make the pieces fit together, but I think it turned out quite well in the end. As I understood it, the media seem to have spread poor information about hackers.
As we wrote earlier, hackers do not commit computer crimes to make money. And when it comes to other computer crimes and confidentiality, people did not seem to care very much about them. But like Fredrik, I too find people who break into computer systems fascinating. That was what I had to say.
Anders
OK, then I suppose I should also make an intelligent contribution to the discussion page. Now that we have put this project together, I can say that it will do. It is not the best thing I have done, but not the worst either. From me it gets an average mark, and for certain reasons.
One of the reasons is the material. We did not get hold of as much material as we had planned at the outset. All the good books were out on loan, so we had a plentiful supply of bad ones. We simply had to pick the best of the worst, which was not much. Writing on the basis of poor information was not easy, but we did our best. On the whole I can say I am satisfied. Nor was there anything wrong with how the work progressed. Well... we may have postponed things a little too much, but it was never a problem; what we put off we calmly fixed later anyway. We had the situation fully under control. I had of course expected the work to go a bit more smoothly, since we live quite close to one another.
I do not have any particularly divergent opinions about the content of the project as such. I think computer intrusion and other computer crimes are criminal and should be stopped. The fact that a computer was used in the crime should not be regarded as a mitigating circumstance; it is still just as criminal as anything else. One good thing is that computer crime is not very common in Sweden. Then again, it could perhaps be considered mitigating when a youngster has merely broken into a system without causing any damage. Software piracy is not a crime in Sweden, and that is probably by far the best rule in the Data Act. I do not think
f:\12000 essays\sciences (985)\Computer\Devopment of Computers and Technology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers in some form are in almost everything these days. From toasters to televisions, just about every electronic device has some kind of processor in it. This is a very large change from the way things used to be, when a computer that took up an entire room and weighed several tons had about the same amount of power as a scientific calculator. The changes that computers have undergone in the last 40 years have been colossal. So much has changed from the ENIAC, which had very little power, broke down once every 15 minutes, and took another 15 minutes to repair, to our Pentium Pro 200s and the powerful Silicon Graphics workstations, yet the core of the machine has stayed basically the same. The only thing that has really changed in the processor is the speed at which it translates commands from 1's and 0's into data that actually means something to a normal computer user. Just in the last few years, computers have undergone major changes. PC users went from MS-DOS and Windows 3.1 to Windows 95, a whole new operating system. Computer speeds have also increased hugely, from 1995, when a normal computer was a 486 running at 33 MHz, to 1997, when a blazing-fast Pentium (a.k.a. 586) runs at 200 MHz and more. The next generation of processors is slated to come out this year as well: the next CPU from Intel, code-named Merced, running at 233 MHz and up. Another major innovation has been the Internet. This is a massive change not only for the computer world but for the entire world. The Internet has many different facets, ranging from newsgroups, where you can pick almost any topic to discuss with all kinds of other people, from university professors and professionals in the field of your choice to the average person; to IRC, where you can chat in real time with other people around the world; to the World Wide Web, a mass of information networked from places around the world. Nowadays, no matter where you look, computers are somewhere, doing something.
Changes in computer hardware and software have taken great leaps since the first video games and word processors. Video games started out with a game called Pong: monochrome (two colors, typically amber and black, or green and black), two controller paddles, and gameplay that resembled a slow version of air hockey. The first word processors had their roots in MS-DOS; they were not very sophisticated and not much better than a good typewriter at the time. About the only benefits were the editing tools that came with them. But since these first two dinosaurs of software, both have gone through major changes. Video games are now set in fully 3-D environments, and word processors can now check your grammar and spelling.
Hardware has also undergone some fairly major changes. When computers entered their fourth generation with the 8088 processor, the machine was just a base computer with a physically large but weak processor running at 3-4 MHz, and there was no sound to speak of other than blips and bleeps from an internal speaker. Graphics cards were limited to two colors (monochrome), and RAM was limited to 640 KB or less. By this time, though, computers had already undergone massive changes. The first computers were massive beasts that weighed thousands of pounds. The first was the ENIAC: it was the size of a room, used punched cards as input, and did not have much more power than a calculator. The reason it was so large is that it used vacuum tubes to process data. It also broke down very often, to the tune of once every fifteen minutes, and it would then take another 15 minutes to locate the problem and fix it. This beast also used massive amounts of power, and people used to joke that the lights dimmed in its home city whenever the computer was switched on.
The Early Days of Computers
The very first computer, in the roughest sense of the term, was the abacus. Consisting of beads strung on wires, the abacus was the very first desktop calculator. The first actual mechanical computer came from Blaise Pascal, who built an adding machine based on gears and wheels. This invention was not significantly improved until Charles Babbage came along and designed a machine called the difference engine. It is for this that Babbage is known as the "Father of the Computer."
Born in England in 1791, Babbage was a mathematician and an inventor. He decided a machine could be built to solve polynomial equations more easily and accurately by calculating the differences between them. His model was named the Difference Engine. It was so well received that he began to build a full-scale working version with grant money from the British Government.
Babbage soon found that even the tightest design specifications could not produce an accurate machine: the smallest imperfection was enough to throw off the tons of mechanical rods and gears and put the entire machine out of whack. After 17,000 pounds had been spent, the British Government withdrew its financial support. Even though this was a major setback, Babbage was not discouraged. He came up with another machine of wheels and cogs, which he called the analytical engine, and which he hoped would carry out many different kinds of calculations. This too was never built, at least not by Babbage (although a model was later put together by his son), but its great importance is that it manifested five key concepts of modern computers --
· Input device
· Processor or Number calculator
· Storage unit to hold numbers waiting to be processed
· Control unit to direct the task being performed and the sequence of calculations
· Output device
Parts of Babbage's inventions were similar to an invention by Joseph Jacquard. Jacquard, noting the repetitive work of weavers at their looms, came up with a stiff card punched with a series of holes that let certain threads into the loom while blocking others from completing the weave. Babbage saw that this punched-card system could be used to control the calculations of the analytical engine, and incorporated it into his machine.
Ada Lovelace is known as the first computer programmer. The daughter of the English poet Lord Byron, she went to work with Babbage and helped develop instructions for doing calculations on the analytical engine. Lovelace's contributions were very great: her interest gave Babbage encouragement, she was able to see that his approach was workable, and she published a series of notes that led others to complete what he had envisioned.
Since 1790, the US Congress has required that a census of the population be taken every ten years. Counting the census of 1880 took 7 1/2 years, because all counting had to be done by hand, and there was considerable apprehension in official circles as to whether the counting of the next census could be completed before the next century.
A competition was held to find some way to speed up the counting process. In the final test, involving a count of the population of St. Louis, Herman Hollerith's tabulating machine completed the count in only 5 1/2 hours. As a result of his system's adoption, an unofficial count of the 1890 population was announced only six weeks after the census was taken. Like the cards that Jacquard used for the loom, Hollerith's punched cards were stiff paper with holes punched at certain points. In his tabulating machine, rods passed through the holes to complete a circuit, which caused a counter to advance one unit. This capability points up the principal difference between the analytical engine and the tabulating machine: Hollerith was able to use electrical power rather than mechanical power to drive the device.
Hollerith, who had been a statistician with the Census Bureau, realized that punched-card processing had high sales potential. In 1896 he started the Tabulating Machine Company, which was very successful in selling machines to railroads and other clients. This company later merged with two other companies to form what in 1924 became the International Business Machines Corporation, still well known today as IBM.
IBM, Aiken & Watson
For over 30 years, from 1924 to 1956, Thomas Watson, Sr., ruled IBM with an iron grip. Before becoming the head of IBM, Watson had worked for the Tabulating Machine Company; while there, he had a running battle with Hollerith, whose business talent did not match his technical abilities. Under Watson's leadership, IBM became a force to be reckoned with in the business machine market, first as a purveyor of calculators, then as a developer of computers.
IBM's entry into computers was started by a young man named Howard Aiken. In 1936, after reading Babbage's and Lovelace's notes, Aiken became convinced that a modern analytical engine could be built. The important difference was that this new analytical engine would be electromechanical. Because IBM was such a power in the market, with plenty of money and resources, Aiken worked out a proposal and approached Thomas Watson. Watson approved the deal and gave him one million dollars with which to build the new machine, later called the Harvard Mark I, which began the modern era of computers.
Nothing close to the Mark I had ever been built before. It was 55 feet long and 8 feet high, and when it processed information it made a clicking sound equivalent, according to one observer, to a room full of people knitting with metal needles. When it was unveiled in 1944, the occasion was marked by the presence of many uniformed Navy officers: World War II was under way, and Aiken had become a naval lieutenant, released to Harvard to help build the computer that was supposed to solve the Navy's calculation problems.
During the war, German scientists made impressive advances in computer design. In 1940 they even made a formal development proposal to Hitler, who rejected further work on the scheme, thinking the war was already won. In Britain, however, scientists succeeded in building a computer called Colossus, which helped crack supposedly unbreakable German radio codes; the Nazis unsuspectingly continued to use these codes throughout the war. As great as this accomplishment was, imagine the possibilities if the reverse had been true, and the Nazis had had the computer technology and the British had not.
In the same time frame, American military officers approached Dr. Mauchly at the University of Pennsylvania and asked him to develop a machine that would quickly calculate the trajectories for artillery and missiles. Mauchly and his student, Presper Eckert, relied on the work of Dr. John Atanasoff, a professor of physics at Iowa State University.
During the late 1930s, Atanasoff had spent time trying to build an electronic calculating device to help his students solve complicated math problems. One night the idea came to him for linking the computer's memory and its associated logic. Later, he and an associate, Clifford Berry, succeeded in building the "ABC," for Atanasoff-Berry Computer. After Mauchly met with Atanasoff and Berry, he used the ABC as the basis for his next computer development. From this association would ultimately come a lawsuit concerning attempts to patent a commercial version of the machine Mauchly built. The suit was finally decided in 1974, when it was ruled that Atanasoff had been the true originator of the ideas required to make an electronic digital computer actually work, although some computer historians dispute this decision. But during the war years, Mauchly and Eckert were able to apply the ABC's principles to dramatic effect in creating the ENIAC.
Computers Become More Powerful
The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 per second by finding the value of each product in a multiplication table stored in its memory. ENIAC was about 1,000 times faster than the previous generation of computers. It used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched-card input, one multiplier, one divider/square-rooter, and 20 adders using decimal ring counters, which served as adders and also as quick-access (0.0002-second) read-write register storage. The executable instructions making up a program were embodied in the separate "units" of ENIAC, which were plugged together to form a "route" for the flow of information. The problem with ENIAC was that the average life of a vacuum tube was about 3,000 hours, so with so many tubes in the machine one would burn out roughly every 15 minutes, and it then took about 15 minutes on average to find the burnt-out tube and replace it.
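As a rough sanity check on those figures (a back-of-the-envelope calculation, not a number taken from the essay's sources), dividing the average tube life by the number of tubes gives the expected time between failures:

    tubes = 18_000
    tube_life_hours = 3_000
    minutes_between_failures = tube_life_hours / tubes * 60
    print(minutes_between_failures)    # 10.0 -- the same order of magnitude as the quoted 15 minutes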
Enthralled by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, a study of computation which showed that a computer could have a very simple, fixed physical structure and yet carry out any kind of computation by means of proper programmed control, without any change to the hardware itself. Von Neumann contributed a new understanding of how practical, fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential to future generations of high-speed digital computers and were universally adopted. The stored-program technique involves many features of computer design and function besides the one it is named after; in combination, these features make very high-speed operation attainable. An impression may be gained by considering what 1,000 operations per second means: if each instruction in a job program were used only once in consecutive order, no human programmer could write instructions fast enough to keep the computer busy. Arrangements must therefore be made for parts of the program (called subroutines) to be used repeatedly, in a manner that depends on how the computation goes. It would also clearly be helpful if instructions could be changed during a computation to make them behave differently when needed. Von Neumann met these two requirements by introducing a special type of machine instruction called a conditional control transfer, which allowed the program sequence to be interrupted and resumed at any point, and by storing all instruction programs together with data in the same memory unit, so that, when needed, instructions could be modified in the same way as data.
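As a toy illustration of the stored-program idea (purely a sketch in modern Python, not any historical machine's instruction set), the program below sits in the same memory structure as its data, and a conditional control transfer jumps back to an earlier instruction until a condition is met:

    # memory holds both the program and its data
    memory = {
        "program": [
            ("add", 1),                  # step 0: add 1 to the accumulator
            ("jump_if_less", 5, 0),      # step 1: if accumulator < 5, jump back to step 0
            ("halt",),                   # step 2: stop
        ],
        "accumulator": 0,
    }

    pc = 0                               # program counter
    while True:
        op = memory["program"][pc]
        if op[0] == "add":
            memory["accumulator"] += op[1]
            pc += 1
        elif op[0] == "jump_if_less":
            pc = op[2] if memory["accumulator"] < op[1] else pc + 1
        elif op[0] == "halt":
            break

    print(memory["accumulator"])         # 5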
As a result of these techniques, computing and programming became much faster, more flexible, and more efficient. Regularly used subroutines did not have to be reprogrammed for each new program, but could be kept in "libraries" and read into memory only when needed; hence, much of a given program could be assembled from the subroutine library. The computer memory became the place in which all parts of a long computation were kept, worked on piece by piece, and put together to form the final result. When the advantage of these techniques became clear, they became standard practice.
The first generation of modern programmed electronic computers to take advantage of these improvements was built in 1947. This group included computers using random-access memory (RAM), a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or tape I/O devices and were physically much smaller than ENIAC; some were about the size of a grand piano and used only 2,500 electron tubes, far fewer than ENIAC had required. The first-generation stored-program computers needed a lot of maintenance, reached roughly 70 to 80 percent reliability of operation, and were used for 8 to 12 years. This group included EDVAC and UNIVAC, the first commercially available computers.
Early in the 1950s, two important engineering discoveries changed the image of the electronic computer field from one of fast but unreliable hardware to one of relatively high reliability and even greater capability: the magnetic-core memory and the transistor circuit element. These discoveries quickly found their way into new models of digital computers. RAM capacities in commercially available machines increased from 8,000 to 64,000 words by the 1960s, with access times of 2 to 3 microseconds. These machines were very expensive to purchase or even to rent, and were particularly expensive to operate because of the cost of programming. Such computers were mostly found in large computer centers operated by industry, government, and private laboratories, staffed with many programmers and support personnel. This situation led to modes of operation that allowed the machines' capacity to be shared among many users. During this time another important development was the move from machine language to assembly language, also known as symbolic language. Assembly languages use abbreviations for instructions rather than numbers, which made programming a computer a great deal easier.
After the implementation of assembly languages came high-level languages. The first to be universally accepted was FORTRAN, developed in the mid-1950s as an engineering, mathematical, and scientific language. Then, in 1959, COBOL was developed for business programming. Both languages, still in use today, are more English-like than assembly. Higher-level languages allow programmers to give more attention to solving problems rather than coping with the minute details of the machines themselves. Disk storage complemented magnetic tape systems and enabled users to have rapid access to the data they required.
All these new developments made second-generation computers easier and less costly to operate. This began a surge of growth in computer systems, although computers were still mostly used by business, university, and government establishments; they had not yet reached the general public. The real computer revolution was about to begin.
One of the most abundant elements in the earth is silicon, a non-metallic substance found in sand as well as in most rocks and clay. The element has given rise to the name "Silicon Valley" for Santa Clara County, about 50 km south of San Francisco. In 1965, Silicon Valley became the principal site of the computer industry, making the so-called silicon chip.
An integrated circuit is a complete electronic circuit on a small chip of silicon. The chip may be less than 3 mm square and contain hundreds to thousands of electronic components. Beginning in 1965, the integrated circuit began to replace the transistor in machines now called third-generation computers. An integrated circuit could replace an entire circuit board of transistors with one chip of silicon much smaller than a single transistor. Silicon is used because it is a semiconductor: a crystalline substance that will conduct electric current once it has been doped with chemical impurities. A cylinder of silicon is sliced into wafers, each about 76 mm in diameter. The wafer is then etched repeatedly with a pattern of electrical circuitry; up to ten layers may be etched onto a single wafer. The wafer is then divided into several hundred chips, each with a circuit so small it is half the size of a fingernail, yet under a microscope as complex as a railroad yard. A chip one centimeter square is powerful enough to hold 10,000 words, about the amount of text in an average newspaper.
Integrated circuits entered the market with the simultaneous announcement in 1959 by Texas Instruments and Fairchild Semiconductor that they had each independently produced chips containing several complete electronic circuits. The chips were hailed as a generational breakthrough because they had four desirable characteristics.
· Reliability - They could be used over and over again without failure, whereas vacuum tubes failed every fifteen minutes. Chips rarely failed -- perhaps one failure in 33 million hours of operation. This reliability was due not only to the fact that they had no moving parts, but also to the rigid work/no-work testing that semiconductor firms gave them.
· Compactness - Circuitry packed into a small space reduces equipment size. The machine speed is increased because circuits are closer together, thereby reducing the travel time for the electricity.
· Low Cost - Mass-production techniques have made possible the manufacture of inexpensive integrated circuits. That is, miniaturization has allowed manufacturers to produce many chips cheaply.
· Low power use -- Miniaturization of integrated circuits has meant that less power is required for computer use than was required in previous generations. In an energy-conscious time, this was important.
The Microprocessor
Throughout the 1970s, computers gained dramatically in speed, reliability, and storage capacity, but the entry into the fourth generation was evolutionary rather than revolutionary. The fourth generation was, in fact, an extension of the third. Early in the third generation, specialized chips were developed for memory and logic, so all the parts were in place for the next technological development: the microprocessor, a general-purpose processor on a chip. Ted Hoff of Intel designed such a chip in 1969, and the microprocessor became commercially available in 1971.
Nowadays microprocessors are everywhere. From watches to calculators to computers, processors can be found in virtually every machine in the home or business. The environments computers require have changed too: there is no more need for climate-controlled rooms, and most models of microcomputer can be placed almost anywhere.
New Stuff
After the technological improvements of the 1960s and 1970s, computers have not become much different, aside from being faster, smaller, and more user-friendly. The base architecture of the computer itself is fundamentally the same. The improvements from the 1980s onward have been mostly "comfort" features: sound cards (for high-quality sound and music), CD-ROMs (high-capacity storage disks), bigger monitors, and faster video cards. Computers have come a long way, but architecture-wise there have not really been vast technological improvements.
f:\12000 essays\sciences (985)\Computer\Does Microsoft Have Too Much Power .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Initially, there is nothing. Then there is Bill Gates, the founder of Microsoft. Once a young, eager teenager running a small business of other teenagers, he is now the richest man in the world, controlling an operating system that practically every IBM-compatible computer in the world uses. Computers are not the only thing Microsoft desires; now it wishes to influence the Internet. With all the opportunities the Internet offers, many companies race to develop software to get people and businesses online. Many dislike the power Microsoft has come to possess and might gain more of, but is there anything anybody can do? IBM has taken on the leader of software with an innovative operating system known as OS/2, but does it stand a chance? Microsoft may be unstoppable with its foundation, influence, and power, but is that enough to practically own the computerized world as we know it?
Usually, when we mention Microsoft in any form, we must place the registered trademark symbol right after the word. The name is a well-known word in virtually everyone's life. Although it is the super-empire it is today, Microsoft was once a small software business run by a young Bill Gates in a tiny office. Consisting of a few young adults, the company was not progressing as much as it would have liked. Its competitor, Digital Research, had created the first operating system for these machines, known as CP/M-86; though not glamorized, CP/M did exist. The competition had it a little worse: a husband-and-wife operation working out of a not-so-tidy two-story house. The massive change occurred when a couple of IBM representatives showed up at the door of the CP/M founders only to be turned away, a very rare thing to happen, since IBM was so highly respected by programmers at the time. IBM was then introduced to a young man named Bill Gates, at first mistaken for an office helper, who later struck a serious deal to supply Microsoft products. The one program unavailable at the time was an operating system, soon to be called QDOS, a raw form of the Disk Operating System we know today. When called upon by IBM, Bill Gates discovered that a man had created an operating system that could be pre-installed on the new IBM machine, scheduled to be released in 1981; it was similar to the CP/M-86 system created by Digital Research. The deal would make Bill Gates the wealthiest man in the United States, with an estimated worth of over thirteen billion dollars. Today, the Microsoft Corporation is the world's most lucrative software empire and still has dreams for the future.
Computers today are very popular among homeowners, businesses and schools. Microsoft began to cater to this population by creating user-friendly programs such as the ever-popular Windows. This graphical interface served as a bridge for the computer illiterate, and so began the reign of Microsoft over the population. Only a small minority of UNIX users and users of other DOS-like systems would later remain untouched by the reach of Microsoft. Various programs were made just for Windows, which of course ran on DOS. OS/2 at this point had already been made, but it was not well known and not very popular. Ironically, Bill Gates worked closely with IBM in 1983 to help develop OS/2, even conceding to IBM that OS/2 would one day overtake Microsoft's own attempt at a graphical interface, Windows. However, Windows advanced in its versions and graphics capabilities, as did DOS. In 1995, Microsoft announced its new creation, which it said would revolutionize computers everywhere. Windows 95 was introduced as a powerful operating system with an astounding graphical and user-friendly interface. The proprietary nature of the Apple Macintosh operating system and OS/2 led to small market acceptance, and Windows and DOS became the world's leading personal computer operating systems. The message Microsoft is trying to send to consumers is simple: "Windows 95 is it; if you don't use it, buy it; if your computer can't run it, replace it."
At present, Microsoft has extended its reach toward Windows 97, which includes some minor adjustments such as faster loading along with better Internet/TCP/IP components. Along with the Windows empire, Microsoft is moving towards the Internet. There are currently two competitors fighting for control of this vast information network: Microsoft and Netscape. To control the Internet, a corporation would have to seize control of all browsing, server and client programs, and any other application granting the consumer access to the Internet. In most online haunts, supporters and users of niche products like OS/2, the Macintosh and all other competitive operating systems are drowned out by jeering proponents of Windows. In its hopes of winning over users, Microsoft has stated that Microsoft Explorer is free for the taking. Netscape Navigator is free for a trial period, after which users will have to lighten their wallets if they wish to use the browser further. The future may not look bright for the Internet if Microsoft takes command.
Microsoft is definitely the software empire of the 20th century. The question is, should it have that much power? Only time will tell. With its history of power and business intuition, Microsoft may never be defeated. Advancements in the last ten years alone have been remarkable; it is sobering to think how far we will be in the next decade. Will Microsoft still be the software giant it is today, or will somebody take its place? Business ventures can easily be described as unpredictable. Still, users may have to ask themselves one question: does Microsoft have the power to monopolize the computerized generation?
f:\12000 essays\sciences (985)\Computer\DTP Project.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ANALYSIS OF THE SYSTEM
My objectives for the new system are a web page for Doha College and a
newsletter for the parents. I carried out a survey to find out how people want the newsletter and web page of Doha College to be. The newsletter which I am going to produce will be the way people want it: colourful, with background information about Doha College. I am going to produce the newsletter using Microsoft Word, Microsoft Publisher and 3D Studio.
I am going to use this software because it is really useful. For example, Microsoft Word has useful tools which other software does not have, and it is really easy to use. As another example, I am going to use 3D Studio because it is the most powerful software for creating graphics and animation.
I am going to use a scanner, a video camera and a printer to help me produce my new system.
The software which I am going to use to produce a web page for Doha College is Microsoft FrontPage, Microsoft PowerPoint, 3D Studio and Excel. I am going to use FrontPage because I can create a professional web page without programming, which means it is much easier and quicker to produce a professional web page for Doha College. I am going to use MS PowerPoint because it is really powerful for creating graphics: I am going to create a picture in MS PowerPoint and then move it to FrontPage, and the same with 3D Studio, where I am going to create graphics and copy them into FrontPage. I might use MS Word to write some documents to put on the Doha College web site.
I am going to use a scanner, a video camera and a printer to help me produce my two new systems.
To produce my two new systems quickly and to a publishable standard, I require a high-speed CPU (central processing unit), a lot of RAM (random access memory), and so on.
Here are the technical specifications of the hardware I require:
1. CPU: Pentium 133 MHz
2. Motherboard: Pentium, with 256 KB cache memory
3. RAM: 16 MB minimum
4. VGA card: Stealth 3D 3000 (4 MB VRAM), 64-bit processing
5. HDD: 1.2 GB minimum
6. I/O card: 32-bit
7. Modem: 33.6 Kbps
8. Scanner: 16 million colours
9. Printer: ink-jet printer, 700 x 700 dpi
10. Camera: one which can be connected to the computer
11. CD-ROM: 6x-speed CD-ROM minimum
These are the specifications which I require to produce my two new systems.
The software which I need to produce my two new systems is:
1 . Microsoft Windows 95
2 . DOS Version 6.22(Disk Operating System)
3 . Microsoft Office 95 ( Professional )
4 . 3D Studio version 4
5 . Microsoft Front Page
6 . Corel Draw
7 . Graphics Work Shop
8 . Paint Shop Pro
This is the software which I need to produce my two new systems.
Here are definitions of the hardware terms for the machine on which I am going to produce my two new systems:
1. Central Processing Unit (CPU): the main part of the computer, consisting of the registers, the arithmetic logic unit (ALU) and the control unit.
2. Motherboard: the printed circuit board (PCB) that holds the principal components of a microcomputer; components such as the microprocessor and clock chips are either plugged into the motherboard or soldered to it. Another name for the motherboard is the main board.
3. RAM (Random Access Memory): memory that has the same access time for all locations.
4. Video adapter: the circuitry which generates the signals needed for a video output device to display computer data. VRAM is a separate high-speed memory into which the processor writes the screen data, which is then read out to the screen for display. This avoids using any main memory to hold screen data.
5. I/O card: an interface for peripheral units, which can be used either as an input or as an output device.
6. Modem (Modulator-Demodulator): a data communication device for sending and receiving data between computers over telephone circuits.
7. Scanner: a device that scans a drawing and turns it into a bit map.
8. HDD (Hard Disk Drive): the unit made up of the mechanism that rotates the disks between the read/write heads and the mechanism that controls the heads. It uses rigid magnetic disks enclosed in a sealed container.
9. CD-ROM: a drive that reads compact discs; some drives include a mechanism for automatically changing the current disc for another selected disc.
10. Camera capture: a camera connected to the computer through a parallel cable; the camera converts the pictures it captures to digital form so that they can be sent to the computer through the parallel cable (the parallel port is connected to the I/O card).
An explanation of how the software which I am going to use will be used:
Microsoft Word: to run Microsoft Word you first have to run Windows, because MS Word works under Windows. To insert objects, click on Insert/Object; you can insert many kinds of objects, for example video clips, sound clips, clip art, etc.
Microsoft Publisher: MS Publisher runs under Windows.
Microsoft PowerPoint: it runs under Windows.
3D Studio: it works under DOS.
End.
f:\12000 essays\sciences (985)\Computer\Ecodisk.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ECODISC
Ecodisc is a program which allows the user to take on the
role of a Nature Reserve Manager. It was designed by a man
named Peter Bratt, an Englishman in South Devon. Ecodisc
is designed so that the user can see what effects certain
changes can make on the environment without actually making
the changes. Ecodisc is a good educational tool showing new
users the effects of certain decisions. It can also be used
as a map, because it lets you see various parts of the nature
reserve without actually going there.
Ecodisc allows the user to take on the role of a nature
reserve manager, which is the person who basically decides
what changes will be made to the nature reserve. With aid of
the Ecodisc, the results of decisions can be shown without
actually doing anything, or doing any harm to the environment.
Ecodisc allows users to explore various parts of the
nature reserve and view it from different positions. You can
see the area from any direction (north, south, east or west),
and even from a helicopter position. Ecodisc lets you see the
areas of the reserve at any time of the year. For example,
even in the middle of winter you could view the reserve and see
what it looks like in summer.
Ecodisc is one of the first interactive programmes, and there
are hopes that some day there will be interactive broadcast
television. This is a breakthrough in visual entertainment,
because while television lets you see a place, interactive
video will let you explore it. Interactive video is where the
viewer decides the plot and characters of a movie, or show.
The viewer will basically be able to write their own scripts
and produce the movie at the same time.
Ecodisc would be very good for students (or anyone else)
interested in managing nature reserves, working for national
parks, or simply curious about the subject.
Ecodisc is an invention which would greatly help both the
computer and television industries as well as the nature and
wildlife organisations across the world. Already there are
programmes that enable the user to take control of what is
happening, and Ecodisc, being one of the first, has greatly
aided the production of the others. Ecodisc is the start of a
new way of life in visual entertainment and may also aid
things like scientific research and study.
f:\12000 essays\sciences (985)\Computer\Electronic Commerce.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Initially, the Internet was designed to be used by government and academic users, but now it is rapidly becoming commercialized. It has on-line "shops", even electronic "shopping malls". Customers, browsing at their computers, can view products, read descriptions, and sometimes even try samples. What they lack is the means to buy from their keyboard, on impulse. They could pay by credit card, transmitting the necessary data by modem; but intercepting messages on the Internet is trivially easy for a smart hacker, so sending a credit-card number in an unscrambled message is inviting trouble. It would be relatively safe to send a credit card number encrypted with a hard-to-break code. That would require either a general adoption across the Internet of standard encoding protocols, or the making of prior arrangements between buyers and sellers. Both consumers and merchants could see a windfall if these problems are solved. For merchants, a secure and easily divisible supply of electronic money will motivate more Internet surfers to become on-line shoppers. Electronic money will also make it easier for smaller businesses to achieve a level of automation already enjoyed by many large corporations whose Electronic Data Interchange heritage means streams of electronic bits now flow instead of cash in back-end financial processes. We need to resolve four key technology issues before consumers and merchants anoint electronic money with the same real and perceived values as our tangible bills and coins. These four key areas are: Security, Authentication, Anonymity, and Divisibility.
Commercial R&D departments and university labs are developing measures to address security for both Internet and private-network transactions. The venerable answer to securing sensitive information, like credit-card numbers, is to encrypt the data before you send it out. MIT's Kerberos, which is named after the three-headed watchdog of Greek mythology, is one of the best-known private-key encryption technologies. It creates an encrypted data packet, called a ticket, which securely identifies the user. To make a purchase, you generate the ticket during a series of coded messages you exchange with a Kerberos server, which sits between your computer system and the one you are communicating with. These latter two systems share a secret key with the Kerberos server to protect information from prying eyes and to assure that your data has not been altered during the transmission. But this technology has a potentially weak link: Breach the server, and the watchdog rolls over and plays dead. An alternative to private-key cryptography is a public-key system that directly connects consumers and merchants. Businesses need two keys in public-key encryption: one to encrypt, the other to decrypt the message. Everyone who expects to receive a message publishes a key. To send digital cash to someone, you look up the public key and use the algorithm to encrypt the payment. The recipient then uses the private half of the key pair for decryption. Although encryption fortifies our electronic transactions against thieves, there is a cost: The processing overhead of encryption/decryption makes high-volume, low-value payments prohibitively expensive. Processing time for a reasonably safe digital signature conspires against keeping costs per transaction low. Depending on key length, an average machine can only sign between twenty and fifty messages per second. Decryption is faster. One way to factor out the overhead is to use a trustee organization, one that collects batches of small transactions before passing them on to the credit-card organization for processing. First Virtual, an Internet-based banking organization, relies on this approach. Consumers register their credit cards with First Virtual over the phone to eliminate security risks, and from then on, they use personal identification numbers (PINs) to make purchases.
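To make the public-key idea concrete, here is a minimal sketch in Python using a deliberately tiny, textbook-style RSA key pair. The primes, exponents and the sample payment value are illustrative assumptions only, far too small for real security, and the sketch does not describe any particular product mentioned above.

# Toy illustration of public-key encryption (NOT secure: textbook RSA, tiny primes).
# All numbers here are illustrative assumptions chosen to keep the arithmetic visible.
p, q = 61, 53            # two small primes (a real key uses primes hundreds of digits long)
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120, used to derive the private exponent
e = 17                   # public exponent, published by the payment recipient
d = pow(e, -1, phi)      # private exponent, kept secret (needs Python 3.8+): 2753

def encrypt(message: int) -> int:
    """Anyone can encrypt with the public key (e, n)."""
    return pow(message, e, n)

def decrypt(ciphertext: int) -> int:
    """Only the holder of the private key (d, n) can decrypt."""
    return pow(ciphertext, d, n)

payment = 1234                      # stand-in for a card number or coin value (must be < n)
ciphertext = encrypt(payment)
assert decrypt(ciphertext) == payment
print(f"plaintext {payment} -> ciphertext {ciphertext} -> decrypted {decrypt(ciphertext)}")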
Encryption may help make electronic money more secure, but we also need guarantees that no one alters the data--most notably the denomination of the currency--at either end of the transaction. One form of verification is secure hash algorithms, which represent a large file of multiple megabytes with a relatively short number consisting of a few hundred bits. We use the surrogate file--whose smaller size saves computing time--to verify the integrity of a larger block of data. Hash algorithms work similarly to the checksums used in communications protocols: The sender adds up all the bytes in a data packet and appends the sum to the packet. The recipient performs the same calculation and compares the two sums to make sure everything arrived correctly. One possible implementation of secure hash functions is in a zero-knowledge-proof system, which relies on challenge/response protocols. The server poses a question, and the system seeking access offers an answer. If the answer checks out, access is granted. In practice, developers could incorporate the common knowledge into software or a hardware encryption device, and the challenge could then consist of a random-number string. The device might, for example, submit the number to a secure hash function to generate the response.
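As a rough illustration of the difference between a simple additive checksum and a secure-hash challenge/response of the kind described above, consider the Python sketch below; the shared secret and message contents are hypothetical stand-ins, not drawn from any real product.

import hashlib
import os

def checksum(packet: bytes) -> int:
    """Simple communications-style checksum: add up all the bytes."""
    return sum(packet) % 65536

def hash_digest(data: bytes) -> str:
    """Secure hash: a short fixed-size fingerprint of an arbitrarily large block."""
    return hashlib.sha256(data).hexdigest()

# Integrity check: sender appends the digest, recipient recomputes and compares.
payment_record = b"pay 5.00 to vendor 42"
sent = (payment_record, hash_digest(payment_record))
received_data, received_digest = sent
assert hash_digest(received_data) == received_digest   # data arrived unaltered

# Challenge/response: prove knowledge of a shared secret without transmitting it.
shared_secret = b"hypothetical shared secret"           # embedded in software or a hardware device
challenge = os.urandom(16)                              # random-number string posed by the server
response = hash_digest(shared_secret + challenge)       # client answers with a hash of both
assert response == hash_digest(shared_secret + challenge)  # server verifies the same way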
The third component of the electronic-currency infrastructure is anonymity--the ability to buy and sell as we please without threatening our fundamental freedom of privacy. If unchecked, all our transactions, as well as analyses of our spending habits, could eventually reside on the corporate databases of individual companies or in central clearinghouses, like those that now track our credit histories. Serial numbers offer the greatest opportunity for broadcasting our spending habits to the outside world. Today's paper money floats so freely throughout the economy that serial numbers reveal nothing about our spending habits. But a company that mints an electronic dollar could keep a database of serial numbers that records who spent the currency and what the dollars purchased. It is therefore important to build a degree of anonymity into electronic money. Blind signatures are one answer. Devised by a company named DigiCash, blind signatures let consumers scramble serial numbers. When a consumer makes an E-cash withdrawal, the PC calculates the number of digital coins needed and generates random serial numbers for the coins. The PC specifies a blinding factor, a random number that it uses to multiply the coin serial numbers. A bank encodes the blinded numbers using its own secret key and debits the consumer's account. The bank then sends the authenticated coins back to the consumer, who removes the blinding factor. The consumer can spend bank-validated coins, but the bank itself has no record of how the coins were spent.
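The blinding arithmetic itself can be sketched with the same kind of toy RSA numbers as before; these values are again illustrative assumptions, and a production system such as DigiCash's would use far larger keys and additional safeguards.

# Toy RSA blind signature (illustrative only; insecure textbook parameters).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))       # bank's private signing key (needs Python 3.8+)

coin_serial = 2025                      # random serial number chosen by the consumer's PC
r = 57                                  # blinding factor: a random number coprime to n

blinded = (coin_serial * pow(r, e, n)) % n        # consumer blinds the serial number
signed_blinded = pow(blinded, d, n)               # bank signs without seeing the serial
signature = (signed_blinded * pow(r, -1, n)) % n  # consumer removes the blinding factor

# Anyone, including the bank, can verify the signature against the public key,
# but the bank never saw which serial number it actually signed.
assert pow(signature, e, n) == coin_serial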
The fourth technical component in the evolution of electronic money is divisibility. Everything may work fine if transactions use nice round dollar amounts, but that changes when a company sells information for a few cents or even fractions of a cent per
page, a business model that's evolving on the Internet. Electronic-money systems must be able to handle high volume at a marginal cost per transaction. Millicent, a micropayment system developed at Digital Equipment, may achieve this goal. Millicent uses a variation on the digital-check model with decentralized validation at the vendor's server. Millicent relies on third-party organizations that take care of account management, billing, and other administrative duties. Millicent transactions use scrip, digital money that is valid only within Millicent. Scrip consists of a digital signature, a serial number, and a stated value (typically a cent or less). To authenticate transactions, Millicent uses a variation of the zero-knowledge-proof system. Consumers receive a secret code when they obtain scrip. This proves ownership of the currency when it's being spent. The vendor that issues the scrip value uses a master-customer secret to verify the consumer's secret. The system hasn't yet been launched commercially, but Digital says internal tests of transactions across TCP/IP networks
indicate the system can validate approximately 1000 requests per second, with TCP connection handling taking up most of the processing time. Digital sees the system as a way for companies to charge for information that Internet users obtain from Web
sites.
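A minimal sketch of that kind of decentralized scrip check, assuming an HMAC-style keyed hash and hypothetical field names (the real Millicent message formats are not reproduced here), might look like this:

import hmac, hashlib

def keyed_hash(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

master_customer_secret = b"hypothetical vendor master secret"

def issue_scrip(customer_id: str, value_cents: int, serial: int):
    """Vendor derives the customer's secret from its master secret; no per-customer database."""
    customer_secret = keyed_hash(master_customer_secret, customer_id.encode())
    scrip = f"{serial}:{value_cents}:{customer_id}"
    signature = keyed_hash(customer_secret.encode(), scrip.encode())
    return scrip, signature, customer_secret      # customer_secret travels to the consumer

def verify_spend(scrip: str, signature: str) -> bool:
    """Vendor re-derives the secret from the scrip's customer id and checks the signature locally."""
    customer_id = scrip.split(":")[2]
    customer_secret = keyed_hash(master_customer_secret, customer_id.encode())
    return hmac.compare_digest(signature, keyed_hash(customer_secret.encode(), scrip.encode()))

scrip, sig, secret = issue_scrip("alice", value_cents=1, serial=31337)
assert verify_spend(scrip, sig)                               # genuine scrip validates at the vendor
assert not verify_spend(scrip.replace(":1:", ":100:"), sig)   # tampering with the value fails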
Security, authentication, anonymity, and divisibility all have developers working to produce the collective answers that may open the floodgates to electronic commerce in the near future. The fact is that the electronic-money genie is already out of the bottle. The market will demand electronic money because of the accompanying new efficiencies that will shave costs in both consumer and supplier transactions. Consumers everywhere will want the bounty of a global marketplace, not one that's tied to bankers' hours. These efficiencies will push developers to overcome today's technical hurdles, allowing bits to replace paper as our most trusted medium of exchange.
f:\12000 essays\sciences (985)\Computer\Electronic Monitoring Vs Health Concerns.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Electronic Monitoring vs. Health Concerns
Are privacy and electronic monitoring in the workplace becoming a problem? More employees are being monitored today than ever before, and the companies that do it aren't letting up. While electronic monitoring in the workplace may be the cause of increased stress levels and tension, the benefits far exceed the harm that it may cause.
Employees don't realize how often electronic monitoring happens in their workplace. An estimated twenty million Americans are subjected to monitoring at work, commonly in the form of phone monitoring, e-mail searches, and searches through the files on their hard drives (Paranoid 435). A poll by MacWorld states that over twenty-one percent of all employees are monitored at work, and the larger the company, the higher the percentage (Privacy 445). Under this kind of scrutiny, most employees are often not working at their peak performance.
The majority of Americans believe that electronic monitoring should not be allowed. Supreme Court Justice Louis D. Brandeis states that of all of the freedoms that Americans enjoy, privacy "is the right most valued by civilized men (Privacy 441)." A poll taken by Yankelovich Clancy Shulman for Time, states that ninety-five percent of Americans believe that electronic monitoring should not be allowed (Privacy 444). Harriet Ternipsede, who is a travel agent, gave a lengthy testimonial on how electronic monitoring at her job caused her undue stress and several health problems including muscle aches, mental confusion, weakened eyesight, severe sleep disturbance, nausea, and exhaustion. Ternipsede was later diagnosed with Chronic Fatigue Immune Dysfunction Syndrome (Electronic 446). A study done by the University of Wisconsin found that eighty-seven percent of employees subjected to electronic monitoring suffered from higher stress levels and increased tension while only sixty-seven percent of those employees that were not subjected to monitoring had those same symptoms (Paranoid 436).
It is obvious that most employees are against electronic monitoring, and its use contributes to increased stress levels in employees; even so, the advantages derived from electronic monitoring far outweigh the disadvantages. Through the use of employee monitoring, companies can save money in overall operations cost by weeding out those employees who don't pull their weight, and can cut down on employee theft. By monitoring employees, it is possible to measure their performance and see if they are meeting standards. By getting rid of those employees who don't meet standards, the burden of daily tasks is lifted from every other employee in that department. Eighty to ninety percent of business theft is internal (Paranoid 432). Through the use of employee monitoring, the amount of money lost to theft can be dramatically reduced.
While electronic monitoring in the workplace may contribute to employee stress, the benefits are far greater than the disadvantages. Not only do companies save money lost to employee theft, sabotage, and vandalism, but employees can feel more confident that coworkers who don't pull their own weight will be terminated. When the company and the employees both benefit from increased profits, I would call this a win-win situation. If the savings are passed on to the customer, you could even have a win-win-win situation.
Works Cited
CQ Researcher. "Privacy in the Workplace." Writing and Reading Across the Curriculum. Ed. Laurence Behrens and Leonard Rosen. 6th ed. New York: HarperCollins, 1997. 441-445.
Ternipsede, Harriet. "Is Electronic Monitoring of Workers Really Necessary?" Writing and Reading Across the Curriculum. Ed. Laurence Behrens and Leonard Rosen. 6th ed. New York: HarperCollins, 1997. 446-448.
Whalen, John. "You're Not Paranoid: They Really Are Watching You." Writing and Reading Across the Curriculum. Ed. Laurence Behrens and Leonard Rosen. 6th ed. New York: HarperCollins, 1997. 430-440.
f:\12000 essays\sciences (985)\Computer\Employment Skills.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Thiru Thirunavukarasu April 1, 1996
Employment Skills
By: Thiru Thirunavukarasu
Introduction
In my essay I will talk about the skills required to get a good
job nowadays. There are three main points I will be discussing:
academic, personal management, and teamwork skills. I will give
examples of these skills, and reasons why each skill is important
for getting a job.
Academic Skills
Academic skills are probably the most important skills you
will need to get a job. They are among the first things an employer
looks for in an employee. They are skills which give you the basic
foundation to acquire, hold on to, and advance in a job, and to
achieve the best results. Academic skills can be further divided into three
sub-groups: communication, thinking, and learning skills.
Communicate. Communication skills require you to understand and
speak the languages in which business is conducted. You must be a good
listener, and be able to understand things easily. One of the most important
communication skills is reading: you should be able to comprehend
and use written materials, including things such as graphs, charts, and
displays. One of the newest things we can add to communication skills
is the Internet; since it is so widely used all around the world, you
should have a good understanding of what it is and how to use it.
Think. Thinking critically and acting logically to evaluate situations
will get you far in your job. Thinking skills consist of things such as solving
mathematical problems and using new technology, instruments, tools, and
information systems effectively. Areas where these skills apply include
technology, physical science, the arts, skilled trades, social science, and much
more.
Learn. Learning is very important for any job. For example, if your
company gets some new software, you must be able to learn how to use it
quickly and effectively after a few tutorials. You must continue doing this for
the rest of your career. It is one thing that will always be useful in any
situation, not just jobs.
Personal Management Skills
Personal management skills is the combination of attitudes, skills, and
behaviors required to get, keep, and progress on a job and to achieve the
best results. Personal management skills can be further divided into three
sub-groups just as academic skills, which are positive attitudes and
behaviors, responsibility, and adaptability.
Positive Attitudes And Behaviors. These are also very important for keeping
a job. You must have good self-esteem and confidence in yourself. You
must be honest, have integrity, and have personal ethics. You must show your
employer you are happy at what you are doing and have positive attitudes
toward learning, growth, and personal health. Show energy and persistence
to get the job done; these can help you get promoted or earn a raise.
Responsibility. Responsibility is the ability to set goals and priorities
in work and personal life. It is the ability to plan and manage time, money, and
other resources to achieve goals, and to be accountable for actions taken.
Adaptability. Have a positive attitude toward changes in your job, and
show recognition of and respect for people's diversity and individual differences.
Creativity is also important: you must have the ability to identify and suggest
new ideas to get the job done.
Teamwork Skills
Teamwork skills are those skills needed to work with others co-
operatively on a job and to achieve the best results. You should show your
employer you are able to work with others, and that you understand and contribute to the
organization's goals. Involve yourself in the group, make good decisions
with others and support the outcomes. Don't be narrow-minded; listen to
what others have to say and give your thoughts on their comments. Be a
leader, not a loner, in the group.
Conclusion
In conclusion I would like to say that all the skills I have discussed
are critical to getting, keeping, and progressing in a job and to achieving the best results
possible for you. Of these skills, I think academic skills are the most
important you will learn. So if you keep at these skills you will
be happy with what you are doing, unlike a lot of people who are forced to
take jobs that they do not like.
f:\12000 essays\sciences (985)\Computer\Escapism and Virtual Reality.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Escapism and Virtual Reality
ABSTRACT
The use of computers in society provides obvious benefits and some
drawbacks. `Virtual Reality', a new method of interacting with any computer,
is presented and its advantages and disadvantages are considered. The human
aspect of computing and computers as a form of escapism are developed, with
especial reference to possible future technological developments. The
consequences of a weakening of the sense of reality based upon the physical
world are also considered. Finally, some ways to reduce the unpleasant
aspects of this potential dislocation are examined. A glossary of computing
terms is also included.
Computers as Machines
The progression of the machine into all aspects of human life has continued
unabated since the medieval watchmakers of Europe and the Renaissance study
of science that followed Clocks . Whilst this change has been
exceedingly rapid from a historical perspective, it can nevertheless be
divided into distinct periods, though rather arbitrarily, by some criteria
such as how people travelled or how information was transferred over long
distances. However these periods are defined, their lengths have become
increasingly shorter, with each new technological breakthrough now taking less
than ten years to become accepted (recent examples include facsimile
machines, video recorders and microwave ovens).
One of the most recent, and hence most rapidly absorbed periods, has been that
of the computer. The Age of Computing began with
Charles Babbage in the late 19th century Babbage , grew in the
calculating machines between the wars EarlyIBM , continued during the
cryptanalysis efforts of World War II Turing,Bletchley and
finally blossomed in the late 1970's with mass market applications in the
developed countries (e.g. JapanSord ). Computers have gone through several
`generations' of development in the last fifty years and their rate of change
fits neatly to exponential curves Graphs , suggesting that the length of
each generation will become shorter and shorter, decreasing until some
unforeseen limit is reached. This pattern agrees with the more general
decrease of length between other technological periods.
The great strength of computers whether viewed as complex machines, or more
abstractly as merely another type of tool, lies in their enormous flexibility.
This flexibility is designed into a computer from the moment of its conception
and accounts for much of the remarkable complexity that is inherent in each
design. For this very reason, the uses of computers are now too many to ever
consider listing exhaustively and so only a representative selection are
considered below.
Computers are now used to control any other machine that is subject to a
varying environment, (e.g. washing machines, electric drills and car
engines). Artificial environments such as hotels, offices and homes are
maintained in pre-determined states of comfort by computers in the thermostats
and lighting circuits. Within a high street shop or major business, every
financial or stockkeeping transaction will be recorded and acknowledged using
some form of computer.
The small number of applications suggested above are so common to our
experiences in developed countries that we rarely consider that the element which
permits them to function is a computer. The word `microprocessor' is used to
refer to a `stand-alone' computer that operates within these sorts of
applications. Microprocessors are chips at the heart of every computer, but
without the ability to modify the way they are configured, only a tiny
proportion of their flexibility is actually used. The word `computer' is now
defined as a machine with a microprocessor, a keyboard and a visual display
unit (VDU), which permits modification by the user of the way that the
microprocessor is used.
Computers in this sense are used to handle more complex information than
that with which microprocessors deal, for example, text, pictures and large amounts of
information in databases. They are almost as widespread as the microprocessors
described above, having displaced the typewriter as the standard writing tool
in many offices and supplanted company books as the most reliably current form
of accountancy information. In both these examples, a computer permits a
larger amount of information to be stored and modified in a less
time-consuming fashion than any other method used previously.
Another less often considered application is that of communication. Telephone
networks are today controlled almost entirely by computers, unseen by the
customer, but actively involved in every telephone call phones . The
linking of computers themselves by telephone and other networks has led
people to communicate with each other by using the computer to both write the
text (a word-processor) and to send it to its destination. This is known as
electronic mail, or `email'.
The all-pervasive nature of the computer and its obvious benefits have not
prevented a growing number of people from becoming vociferously concerned with the
risks of widespread application of what is still an undeniably novel
technology comp.risks,ACMrisks . Far from being reactionary prophets of
doom, such people are often employed within the computer industry itself and
yet have become wary of the pace of change. They are not opposed to the use of
computers in appropriate environments, but worry deeply when critical areas of
inherently dangerous operations are performed entirely by computers. Examples
of such operations include correctly delivering small but regular doses of
drugs into a human body and automatically correcting (and hence preventing)
aerodynamic stability problems in an aircraft plane1,plane2 . Both
operations are typical `risky' environments for a computer since they contain
elements that are tedious (and therefore error-prone) for a human being to
perform, yet require the human capacity to intervene rapidly when the
unexpected occurs. Another instance of the application of computers to a
problem actually increasing the risks attached is the gathering of statistical
information about patients in a hospital. Whilst the overall information about
standards of health care is relatively insensitive, the comparative costs of
treatment by different physicians is obviously highly sensitive information.
Restricting the `flow' of such information is a complex and time-consuming
business.
Predictions for future developments in computing applications are notoriously
difficult to cast with any accuracy, since the technology which is driving the
developments changes so rapidly. Interestingly, much of what has been
developed so far has its conceptual roots in science fiction stories of the
late 1950's. Pocket televisions, lightning fast calculating machines and
weapons of pin-point accuracy were all first considered in fanciful fiction.
Whilst such a source of fruitful ideas has yet to be fully mined out, and
indeed, Virtual Reality (see below) has been used extensively
Neuromancer and others, many more concepts are now appearing that
have no fictional precursors.
Some such future concepts, in which computers would be of vital importance,
might be the performance of delicate surgical procedures by robot, controlled
by a computer, guided in turn by a human surgeon; the control of the flow of
traffic in a large city according to information gathered by remote sensors;
prediction of earthquakes and national weather changes using large computers
to simulate likely progressions from a known current state weather ; the development of
cheap, fast and secure coding machines to permit guaranteed security in international
communications; automatic translation from one language to another as quickly as the words
are spoken; the simulation of new drugs' chemical reactions
with the human body. These are a small fraction of the possible future
applications of computers, taken from a recent prediction of likely developments
JapanFuture . One current development which has relevance to all the above, is the concept
known as `Virtual Reality' and is discussed further below.
Virtual Reality
Virtual Reality, or VR, is a concept that was first formally proposed in the
early Seventies by Ted Nelson ComputerDreams , though this work appears
to be in part a summary of the current thinking at that time. The basic idea
is that human beings should design machines that can be operated in a manner
that is as natural as possible, for the human beings, not the computers.
For instance, the standard QWERTY keyboard is a moderately good instrument for
entering exactly the letters which have been chosen to make up a word and
hence to construct sentences. Human communication, however, is often
most fluent in speech, and so a computer that could understand spoken words
(preferably of all languages) and display them in a standard format such as
printed characters, would be far easier to use, especially since the skills of
speech exist from an early age, but typing has to be learnt, often painfully.
All other human senses have similar analogies when considering
their use with tools. Pictures are easier than words for us to digest
quickly. A full range of sounds provides more useful information than beeps
and bells do. It is easier to point at an item that we can see than to specify
it by name. All of these ideas had to wait until the technology had advanced
sufficiently to permit their implementation in an efficient manner, that is,
both fast enough not to irritate the user and cheap enough for
mass production.
The `state of the art' in VR consists of the following. A pair of rather
bulky goggles, which when worn display two images of a computer-generated
picture. The two images differ slightly, one for each eye, and provide stereo
vision and hence a sense of depth. They change at least fifty times per
second, providing the brain with the illusion of continuous motion (just as with
television). Attached to the goggles are a pair of conventional high-quality
headphones, fed from a computer-generated sound source. Different delays in
the same sound reaching each ear provide a sense of aural depth. There is
also a pair of cumbersome gloves, rather like padded ice-hockey gloves, which
permit limited flexing in all natural directions and feed information about
the current position of each hand and finger to a computer.
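As a rough sketch of how one such delay might be computed for a simulated sound source, the following Python fragment uses a simplified geometric model; the head width, the speed of sound and the formula itself are simplifying assumptions rather than a description of any particular VR system.

import math

SPEED_OF_SOUND = 343.0   # metres per second, in air at room temperature
HEAD_WIDTH = 0.18        # assumed distance between the ears, in metres

def interaural_delay(azimuth_degrees: float) -> float:
    """Approximate extra time (seconds) for a distant sound to reach the far ear,
    for a source at the given angle from straight ahead."""
    theta = math.radians(azimuth_degrees)
    return (HEAD_WIDTH * math.sin(theta)) / SPEED_OF_SOUND

# A source 90 degrees to one side arrives at the far ear roughly half a millisecond later.
print(f"{interaural_delay(90) * 1000:.3f} ms")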
All information from the VR
equipment is passed to the controlling computer and, most importantly, all
information perceived by the user is generated by the computer. The last
distinction is the essence of the reality that is `virtual', or
computer-created, in VR.
The second critical feature is that the computer should be able to modify the information
sent to the user according to the information that it received from the user.
In a typical situation this might involve drawing a picture of a room on the
screens in the goggles and superimposing upon it a picture of a hand, which
moves and changes shape just as the user's hand moves and changes shape. Thus,
the user moves his hand and sees something that looks like a hand move in
front of him.
The power of VR again lies in the flexibility of the computer. Since the
picture that is displayed need not be a hand, but could in fact be any created object
at all, one of the first uses of VR might be to permit complex objects to be
manipulated on the screen as though they existed in a tangible form.
Representations of large molecules might be grasped, examined from all sides
and fitted to other molecules. A building could be constructed from virtual
architectural components and then lit from differing angles to consider how
different rooms are illuminated. It could even be populated with imaginary
occupants and the human traffic bottlenecks displayed as `hot spots' within
the building.
One long-standing area of interest in VR has been the simulation of military
conflicts in the most realistic form possible.
The flight simulator trainers of the 1970's had basic visual displays and large hydraulic
rams to actually move the trainee pilot as the real aeroplane would have moved. This has
been largely replaced in more modern simulators by a massive increase in the amount of
information displayed on the screen, leading to the mind convincing itself that the physical
movements are occurring, with reduced emphasis on attempts to provide the actual movements.
Such an approach is both cheaper in equipment and more flexible in configuration, since
changing the aeroplane from a fighter to a commercial airliner need only involve
changing the simulator's program, not the hydraulics.
Escapism
Escapism can be rather loosely defined as the desire to be in a more pleasant
mental and physical state than the present one. It is universal to human experience
across all cultures, ages and also across historical periods. Perhaps for this
reason, little quantitative data exists on how much time is spent practicing
some form of escapism, and there is only speculation as to why it should feel so
important to be able to do so.
One line of thought would suggest that all conscious thought is a form of
escapism and that in fact any activity that involves concentration on
sensations from the external world is a denial of our ability to escape
completely.
This hypothesis might imply that all thought is practice, in some sense, for
situations that might occur in the future. Thoughts about the past are only
of use for extrapolation into possible future scenarios.
However, this hypothesis fails to include the pleasurable parts of escapist
thinking, which may either be recalling past experiences or, more importantly
for this study, the sense of security and safety that can exist within
situations that exist only in our minds. A more general hypothesis would note
the
separate concepts of pleasure and necessity as equally valid reasons for any
thought.
Can particular traits in a person's character be identified with a tendency to
escapist thoughts that lead to patterns of behaviour that are considered extreme
by their society? It seems unlikely that a combination of hereditary
intelligence and social or emotional deprivation can be the only causes of
such behaviour, but they are certainly not unusual ones, judging by the common
stereotypes of such people.
The line of thinking that will be pursued throughout this essay is the
idea that a person who enjoys extreme forms of escapist thoughts will often feel most
comfortable with machines in general and with computers in particular.
Certainly, excessive escapist tendencies have existed in all societies and
have been tolerated or more crucially, made use of, in many different ways.
For instance, apparent absent-mindedness would be acceptable in a
hunter/gatherer society in the gatherers but not for a hunter. A society with
a wide-spread network of bartering would value a combination of both the
ability to plan a large exchange and the interpersonal skills necessary to
conclude a barter, which are not particularly abstract. In a society with
complex military struggles, the need to plan and imagine victories becomes an
essential skill (for a fraction of the combatants).
Moving from the need for abstract thought to its use, there is a scale of
thought required to use the various levels of machines that have been
mentioned earlier. A tool that has no electronics usually has a function that
is easy to perceive (for example, a paperclip). A machine with a
microprocessor often has a larger range of possible uses and may
require an instruction manual telling the operator how to use it (e.g. a
modern washing machine or a television). Both of these examples can be used
without abstract thought, merely trusting that they will do what they either
obviously do, or have been assured by the manual that they will do.
The next level is the use of computers as tools, for example, for
word-processing. Now a manual becomes essential and some time will have to be
spent before use of the tool is habitual. Even then, many operations will
remain difficult and require some while to consider how to perform them. A
`feel' for the tool has to be acquired before it can be used effectively.
The top level of complexity on this scale is the use of computers as flexible
tools and the construction of the series of instructions known as programs to
control the operation of the computer. Escapist thoughts begin when the
operations of the programs have to be understood. In many cases, it is either
too risky or time-consuming to set the programs into action without
considering their likely consequences (in minute detail) first. Such detailed
comprehension of the action of a program often requires the person constructing the lists of
instructions (the programmer) to enter a separate world, where the symbols and values of the
program have their physical counterparts. Variables take on emotional significance and
routines have their purpose described in graphic `action' language. A cursory examination of
most programmers' programs will reveal this in the comments that are left to help them
understand each program's purpose. Interestingly, even apparently unemotional people
visualise their programs in this anthropomorphic manner Weizenbaum76,Catt73 .
Without this ability to trace the action of a program before it is performed in
real life, the computing industry would cease to exist. This ability is so
closely related to what we do naturally and call `escapism', that the two have
begun to merge for many people involved in the construction of programs.
For some, what began as work has become what is done for pleasurable relaxation, which is a
fortunate discovery for large computer-related businesses. The need for time-clocks and
foremen has been largely eliminated, since the workers look forward to coming to work,
often to escape the mundane aspect of reality.
There are problems associated with this form of work motivation. One major
discovery is that it can be difficult to work as a team in this kind of
activity. Assigning each programmer a section of the project is the usual
solution, but maintaining a coherent grasp of the project's state then becomes
increasingly difficult. Indeed, this problem means that there are now
computers whose design cannot be completely understood by one person
MMMonth . Misunderstandings that result from this problem and the
inherent ambiguities of human languages are often the cause of long delays in
completion of projects involving computers. (The current statistics are that
cost over-runs of 300% are not uncommon, especially for larger projects, and
time over-runs of 50% are common SWEng ).
Another common problem is that of developed social inadequacy amongst groups
of programmers and their businesses. The awkwardness of communicating complex
ideas to other (especially non-technical) members of the group can lead
them to avoid other people in person and to communicate solely by messages and
manuals (whether electronic or paper).
Up to now, most absorption of the information necessary to `escape' in this
fashion has been from a small number of sources located in an environment full
of other distractions. The introduction of Virtual Reality, especially with
regard to the construction of programs, will eliminate many of these external
distractions. In return, it will provide a `concentrated' version of the world
in which the programmer is working. The flexible nature of VR means that
abstract objects such as programs can be viewed in reality (on the goggles'
screens) in any format at all. Most likely, they will be viewed in a manner
that is significant for each individual programmer, corresponding to how he or
she views programs when they have escaped into the world that contains them.
Thus, what were originally only abstract thoughts in one human mind can now be
made real and repeatable and may be distributed in a form that has meaning for
other people. The difference between this and books or paintings is the amount
of information that can be conveyed and the flexibility with which it can be
constructed.
The Dangers of Virtual Reality
As implied above, the uses of Virtual Reality can be understood in two ways.
Firstly, VR can be viewed as a more effective way of communicating concepts,
abstract or concrete, to other people. For example, as a teaching tool, a VR
interface to a database of operation techniques would permit a surgeon to try
out different approaches on the same simulated patient or to teach a junior
basic techniques. An architect might use a VR interface to allow clients to
walk around a building that exists only in the design stage ArchieMag .
Secondly, VR can be used as a visualisation tool for each individual. Our own
preferences could be added to a VR system to such an extent that anyone else
using it would be baffled by the range of personalised symbols and concepts.
An analogy to this would be redefining all the keys on a typewriter for each
typist. This would be a direct extension of our ability to conceive objects,
since the machine would deal with much of the tedious notation and the many
symbols currently necessary in complex subjects such as nuclear physics. In
this form, VR would provide artificial support for a human mind's native
abilities of construct building and imagination.
It is the second view of VR, and derivations from it, that are of concern to
many experts. On a smaller scale, the artificial support of mental activities
has shown that once support is available, the mind tends to become lazy about
developing what is already present. The classic case of this is, of course,
electronic calculators. The basic tedious arithmetic that is necessary to
solve a complicated problem in physics or mathematics is the same whether
performed by machine or human, and in fact plays very little part in
understanding (or discovering) the concepts that lie behind the problem.
However, if the ability to perform basic arithmetic at the lowest level is
neglected, then the ability to cope with more complex problems does seem to
be impaired in some fashion. Another example is the ability to spell
words correctly. A mis-spelt word only rarely alters the semantic content of a
piece of writing, yet obvious idleness or inability in correct use of the
small words used to construct larger concepts often leaves the reader with a
sense of unease as to the validity of the larger concept.
Extending the examples, a worrying prediction is that the extensive use of VR
to support our own internal visualisations of concepts would reduce our
ability to perform abstract and escapist thoughts without the machine's
presence. This would be evident in a massive upsurge in computer-related
entertainment, both in games and interactive entertainment and would be
accompanied by a reduction of the appreciation and study of written
literature,
since the effort required to imagine the contents would be more than would
now be considered reasonable.
Another danger of VR is its potential medical applications. If a convincing
set of images and sound can be collected, it might become possible to treat
victims of trauma or brain-injured people by providing a `safe' VR environment
for them to recover in. As noted Whalley , there are several
difficult ethical decisions associated with this sort of work. Firstly, the
decision to disconnect a chronically disturbed patient from VR would become
analogous to removing pain-killers from a patient in chronic pain. Another
problem is that since much of what we perceive as ourselves is due to the way
that we react to stimuli, whatever the VR creator defines as the available
stimuli become the limiting extent of our reactions. Our individuality would
be reduced and our innate human flexibility with it. To quote Whalley directly:
``virtual reality devices may possess the potential to
distort substantially [those] patients' own perceptions of themselves and
how others see them. Such distortions may persist and may not necessarily be
universally welcomed. In our present ignorance about the lasting effects of
these devices, it is certainly impossible to advise anyone, not only mental
patients, of the likely hazards of their use."
Following on from these thoughts, one can imagine many other abuses of VR.
`Mental anaesthesia' or `permanent calming' could be used to control long-term
inmates of mental institutions. A horrendous form of torture by deprivation of
reality could be imagined, with a victim being forced to perceive only what
the torturers choose as reality. Users who experienced VR at work as a tool may
choose to use it as a recreational drug, much as television is sometimes used
today, and just as foreseen in the `feelies' of Aldous Huxley's Brave New World
BNW .
Conclusions
Computers are now an accepted part of many people's working lives and yet
still retain an aura of mystery for many who use them. Perhaps the commonest
misapprehension is to perceive them as an inflexible tool; once a machine is
viewed as a word processor, it can be awkward to have to redefine it in our
minds as a database, full of information ordered in a different fashion.
Some of what people find difficult to use about today's machines will hopefully be
alleviated by the introduction of Virtual Reality interfaces. These should
allow us to deal with computers in a more intuitive manner.
If there ever comes a time when it is necessary to construct a list of tests to
distinguish VR from reality, some of the following observations might be of
use.
The most difficult sense to deceive over a long period of time will probably be
that of vision. The part of the human brain that deals with vision processing
uses depth of focus as one of its mechanisms to interpret distances. Flat
screens cannot provide this without a massive amount of processing to
deliberately bring the object that the eyes are focussed upon into a sharper
relief than its surroundings. Since this is unlikely to be economical in the
near future, the uniform appearance of VR will remain an indication of its
falsehood.
Another sign may be the lack of tactile feedback all over the body. Whilst
most tactile information, such as the sensation of wearing a watch on one's
wrist, is ignored by the brain, a conscious effort of detection will usually reveal its
presence. Even the most sophisticated feedback mechanisms will be hard-pressed to duplicate
such sensations or the exact sensations of an egg being crushed or walking barefoot on
pebbles, for example.
The sense of smell may prove to be yet another tell-tale sign of reality. The
human sense of smell is so subtle (compared to our present ability to
recreate odours) and is interpreted constantly, though we are often unaware of
it, that to mimic the myriad smells of life may be too complex to ever achieve
convincingly.
The computer industry will continue to depend upon employees who satisfy some
part of their escapist needs by programming for pleasure. In the near future,
the need for increased efficiency and better estimates of the duration of
projects may demand that those who spend their hours escaping are organised by
those who do not. This would lead to yet another form of stratification within
a society, namely, the dreamers (who are in fact now the direct labour force)
and their `minders'. It should also encourage societies to value the power of
abstract thought more highly, since direct reward will be seen to come from
it.
Virtual Reality is yet another significant shift in the way that we can
understand both what is around us and what exists only in our minds. A
considerable risk
associated with VR is that our flexibility as human beings means that we may
adapt our thoughts to our tool, instead of the other way round. Though
computers and our interaction with them by VR is highly flexible, this flexibility
is as nothing compared to the potential human range of actions.
Acknowledgements: My thanks go to Glenford Mapp of Cambridge University
Computer Laboratory and Olivetti Research Laboratory, Dr. Alan Macfarlane of
the Department of Social Anthropology, Cambridge University, Dr. John Doar
and Alan Finch for many useful discussions. Their comments have been fertile
starting grounds for many of the above ideas.
This essay contains approximately 4,500 words, excluding Abstract, Glossary
and Bibliography.
Glossary
Chip Short for microchip, the small black tile-like objects that make up
electronic machines.
Computer A machine with a microprocessor and an interface that permits change
by the user.
Database A collection of information stored on a computer which permits access
to the information in several ways, rather like having multiple indexes
in a book.
Email Electronic mail. Text typed into one machine can be transferred
to another, remote machine.
Microprocessor A stand-alone computer, with little option for change by the user.
Program A series of instructions to control the operation of a microprocessor.
Risk The often unforeseen dangers of applying computer-related technology to
new applications.
Stand-alone Not connected to the rest of the electronic world.
User The human who uses the machine or computer.
VDU Visual Display Unit. The television-like screen attached to a computer.
Virtual Used to mean `imaginary' or `existing only inside a computer'.
VR Virtual Reality. Loosely, an interface to any computer that allows
the user to use the computer in a more `involved' fashion.
Word processor The application of a computer to editing and printing text.
Clocks
L. Mumford,
Technics and Civilisation ,
Harcourt Brace Jovanovich, New York, 1963, pp.13--15.
Babbage
J.M. Dubbey,
The Mathematical Work of Charles Babbage ,
Cambridge University Press, 1978.
EarlyIBM
William Aspray,
Computing Before Computers ,
Iowa State University Press, 1990.
Turing
B.E. Carpenter and R.W. Doran (Editors),
A.M. Turing's ACE report of 1946 and other papers ,
The MIT Press, 1980.
Bletchley
David Kahn,
The Codebreakers ,
Sphere, London, 1978.
JapanSord
Takeo Miyauchi,
The Flame from Japan ,
SORD Computer Systems Inc., 1982.
Graphs
J.L. Hennessy and D.A. Patterson,
Computer Architecture : A Quantitative Approach ,
Morgan Kaufmann, California, 1990.
phones
Amos E. Joel,
Electronic Switching : Digital Central Office Systems of the World ,
Wiley, 1982.
comp.risks
comp.risks , a moderated bulletin board available world-wide on computer
networks. Its purpose is the discussion of computer-related risks.
f:\12000 essays\sciences (985)\Computer\Essay On Hacking.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Essay On Hacking by Philip Smith
A topic that I know very well is computers and computer hacking. Computers
seem very complicated and very hard to learn, but, given time, a computer can be very
useful and very fun. Have you ever heard all of that weird computer terminology? For
example, JavaScript. JavaScript is basically a computer language used when programming
internet web pages. Have you ever been on the internet and seen words go across the
screen or moving images? This is all done with the JavaScript language. If you do not see
moving images then it's because your web browser cannot read JavaScript. If you don't
know what a web browser is then I will tell you: a web browser is simply a tool used to
view the various websites on the internet. All web browsers are different; some only
interpret the HTML language, which is another programming language used to design web pages,
and then there are some browsers that can play videos and sounds.
Have you ever wondered why, when you want to go to a website, you have to type
http://name of site.com? Well, I have been wondering for ages and still can't figure that out, but
sometimes you type ftp:// before the name of the site instead. This simply means File Transfer
Protocol. You use this when downloading image files or any other files. Now, on to
hacking. Most people stereotype computer whizzes simply as "HACKERS," but what they don't
know is that there are really three different types.
First, there are hackers. Hackers simply make viruses and fool around on the
internet and try to bug people. Making a virus is simple: they get a program called a
virus creation kit. This program simply makes the virus of the beholder's choice. It can
make viruses that simply put a constant beep in your computer speakers, or it can be
disastrous and ruin your computer's hard drive. Hackers also go into chat rooms and
cause trouble. Chat rooms are simply a service given by internet providers to allow
people all over the world to talk. As I was saying, hackers go into these rooms and
basically try to take over, because in chat rooms there is one person in control. This
person has the ability to put you in control or simply ban you. These hackers use
programs that allow them to take full control over any room and, potentially, make the
computers on the other side overload with commands, which, in the end, makes those
computers collapse.
Another type of computer whiz is called a cracker; crackers are sort of malicious.
Crackers take the security programs used by system operators and use them for evil purposes. System
operators use these programs to search the net for any problems, but they can be used for
other purposes. When crackers get into systems they usually just fool around but never
destroy things.
The last computer whiz is called a phreaker. Don't let the name fool you:
phreakers are very malicious and will destroy any information found when breaking into a
system. Phreakers use the same techniques as crackers, but they go a step further.
Once into systems, phreakers usually plant viruses and steal information.
Now that you know some important things about computers and the internet, it will
take you no time to surf the web. But remember: never get into hacking, cracking, or
phreaking, because no matter how much you know about computers, you should never use
it for malicious purposes.
f:\12000 essays\sciences (985)\Computer\ethics in cyberspace.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cyberspace is a global community of people using computers in networks. In order to function well, the virtual communities supported by the Internet depend upon rules of conduct, the same as any society. Librarians and information technologists must be knowledgeable about ethical issues for the welfare of their organizations and to protect and advise users.
What is ethics? Ethics is the art of determining what is right or good. It can also be defined as a general pattern or way of life, a set of rules of conduct or moral code. Ethical guidelines are based on values.
The Association for Computing Machinery (ACM) is one national organization which has developed a statement of its values. Every member of ACM is expected to uphold the Code of Ethics and Professional Conduct which includes these general moral imperatives:
1) contribute to society and human well-being
2) avoid harm to others
3) be honest and trustworthy
4) be fair and take action not to discriminate
5) honor property rights including copyrights and patents
6) give proper credit for intellectual property
7) respect the privacy of others
8) honor confidentiality.
The very nature of electronic communication raises new moral issues. Individuals and organizations should be proactive in examining these concerns and developing policies which protect against liability. Issues which need to be addressed include: privacy of mail, personal identities, access and control of the network, pornographic or unwanted messages, copyright, and commercial uses of the network. An Acceptable Use Policy (AUP) is recommended as the way an organization should inform users of expectations and responsibilities. Sample AUPs are available on the Internet at gopher sites and can be retrieved by using Veronica to search keywords "acceptable use policies" or "ethics."
The Computer Ethics Institute in Washington, D.C. has developed a "Ten Commandments of Computing":
1) Thou shalt not use a computer to harm other people.
2) Thou shalt not interfere with other people's computer work.
3) Thou shalt not snoop around in other people's computer files.
4) Thou shalt not use a computer to steal.
5) Thou shalt not use a computer to bear false witness.
6) Thou shalt not copy or use proprietary software for which you have not paid.
7) Thou shalt not use other people's computer resources without authorization or proper compensation.
8) Thou shalt not appropriate other people's intellectual output.
9) Thou shalt think about the social consequences of the program you are writing or the system you are designing.
10) Thou shalt always use a computer in ways that show consideration and respect for your fellow humans (Washington Post, 15 June 1992: WB3).
The University of Southern California Network Ethics Statement specifically identifies types of network misconduct which are forbidden: intentionally disrupting network traffic or crashing the network and connected systems; commercial or fraudulent use of university computing resources; theft of data, equipment, or intellectual property; unauthorized access of others' files; disruptive or destructive behavior in public user rooms; and forgery of electronic mail messages.
What should an organization do when an ethical crisis occurs? One strategy has been proposed by Ouellette and Associates Consulting (Rifkin, Computerworld 25, 14 Oct. 1991:
84).
1. Specify the FACTS of the situation.
2. Define the moral DILEMMA.
3. Identify the CONSTITUENCIES and their interests.
4. Clarify and prioritize the VALUES and PRINCIPLES at stake.
5. Formulate your OPTIONS.
6. Identify the potential CONSEQUENCES.
Other ethical concerns include issues such as 1) Influence: Who determines organizational policy? Who is liable in the event of lawsuit? What is the role of the computer center or the library in relation to the parent organization in setting policy? 2) Integrity: Who is responsible for data integrity? How much effort is made to ensure that integrity? 3) Privacy: How is personal information collected, used and protected? How is corporate information transmitted and protected? Who should have access to what? 4) Impact: What are the consequences on staff in the up- or down-skilling of jobs? What are the effects on staff and organizational climate when computers are used for surveillance, monitoring and measuring?
As the schools incorporate Internet resources and services into the curriculum and the number of children using the Internet increases, other ethical issues must be addressed. Should children be allowed to roam cyberspace without restriction or supervision? How should schools handle student Internet accounts? What guidelines are reasonable for children?
Organizations need to be proactive in identifying and discussing the ethical ramifications of Internet access. By having acceptable use policies and expecting responsible behavior, organizations can contribute to keeping cyberspace safe.
Selected Resources on Information Ethics
"Computer Ethics Statement." College & Research Libraries News
54, no. 6 (June 1993): 331-332.
Dilemmas in Ethical Uses of Information Project. "The Ethics
Kit." EDUCOM/EUIT, 1112 16th Street, NW, Suite 600,
Washington, D.C. 20036. phone: (202) 872-4200; fax: (202)
872-4318; e-mail: ethics@bitnic.educom.edu
"Electronic Communications Privacy Act of 1986." P.L. 99-508.
Approved Oct. 21, 1986. [5, sec 2703]
Feinberg, Andrew. "Netiquette." Lotus 6, no. 9 (1990): 66-69.
Goode, Joanne and Maggie Johnson. "Putting Out the Flames: The
Etiquette and Law of E-Mail." ONLINE 61 (Nov. 1991): 61-65.
Gotterbarn, Donald. "Computer Ethics: Responsibility Regained."
National Forum 71, no. 3 (Summer 1991): 26-31.
Hauptman, Robert, ed. "Ethics and the Dissemination of
Information." Library Trends 40, no. 2 (Fall 1991): 199-
375.
Johnson, Deborah G. "Computers and Ethics." National Forum 71,
no. 3 (Summer 1991): 15-17.
Journal of Information Ethics (1061-9321). McFarland, 1992-
Kapor, M. "Civil Liberties in Cyberspace." Scientific American
265, no. 3 (1991): 158-164.
Research Center on Computing and Society, Southern Connecticut
State University and Educational Media Resources, Inc.
"Starter Kit." phone: (203) 397-4423; fax: (203-397-4681;
e-mail: rccs @csu.ctstate.edu
Rifkin, Glenn. "The Ethics Gap." Computerworld 25, no. 41 (14
Oct. 1991): 83-85.
Shapiro, Norman and Robert Anderson. "Toward an Ethics and
Etiquette for Electronic Mail." Santa Monica, Calif.: Rand
Corporation, 1985. Available as Rand Document R-3283-NSF/RC
and ERIC Document ED 169 003.
Using Software: A Guide to the Ethical and Legal Use of Software
for Members of the Academic Community. EDUCOM and ITAA,
1992.
Welsh, Greg. "Developing Policies for Campus Network
Communications." EDUCOM Review 27, no. 3 (May/June 1992):
42-45.
f:\12000 essays\sciences (985)\Computer\Feasibility of complete system protection .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Viruses: Infection Vectors, and
Feasibility of Complete Protection
A computer virus is a program which, after being loaded into a computer's memory, copies itself with the purpose of spreading to other computers.
Most people, from the corporate level power programmer down to the computer hobbyist, have had either personal experience with a virus or know someone who has. And the rate of infection is rising monthly. This has caused a widespread interest in viruses and what can be done to protect the data now entrusted to the computer systems throughout the world.
A virus can gain access to a computer system via any one of four vectors:
1. Disk usage: in this case, infected files contained on a diskette (including, on occasion, diskettes supplied by software manufacturers) are loaded and used in a previously uninfected system, thus allowing the virus to spread.
2. Local Area Network: a LAN allows multiple computers to share the same data, and programs. However, this data sharing can allow a virus to spread rapidly to computers that have otherwise been protected from external contamination.
3. Telecommunications: also known as a Wide Area Network, this entails the connection of computer systems to each other via modems, and telephone lines. This is the vector most feared by computer users, with infected files being rapidly passed along the emerging information super-highway, then downloaded from public services and then used, thus infecting the new system.
4. Spontaneous Generation: this last vector is at the same time the least thought of and the least likely. However, because virus programs tend to be small, the possibility exists that the code necessary for a self-replicating program could be randomly generated and executed in the normal operation of any computer system.
Even disregarding the fourth infection vector, it can be seen that the only way to completely protect a computer system is to isolate it from all contact with the outside world. This would include the user programming all of the necessary code to operate the system, as even commercial products have been known to be shipped already infected with viruses.
In conclusion, because a virus can enter a computer in so many different ways, perhaps the best approach is a form of damage control rather than prevention: maintain current backups of your data, keep your original software disks write-protected and away from the computer, and use a good virus detection program.
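As a rough illustration of what a signature-based virus detection program does, here is a minimal Python sketch. The signature byte patterns, virus names, and the choice to scan the current directory are made-up examples for illustration, not taken from any real product.

import os

# Hypothetical signatures: byte patterns assumed to appear in known viruses.
SIGNATURES = {
    "example-virus-a": b"\xde\xad\xbe\xef",
    "example-virus-b": b"EVIL_PAYLOAD",
}

def scan_file(path):
    # Read the whole file and report which known signatures it contains.
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_directory(root):
    # Walk every file under root and flag possible infections.
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            hits = scan_file(path)
            if hits:
                print(path, "possible infection:", ", ".join(hits))

scan_directory(".")

A real detection program also checks boot sectors and memory and keeps its signature list current, which is why the advice is to use a good, up-to-date product rather than rely on any single check.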
Sources Cited
Burger, Ralf. Computer Viruses and Data Protection. Grand Rapids: Abacus, 1991.
Fites, Philip, Peter Johnston, and Martin Kratz. The Computer Virus Crisis. New York: Van Nostrand Reinhold, 1989: 6-81.
McAfee, John, and Colin Haynes. Computer Viruses, Worms, Data Diddlers, Killer Programs, and Other Threats to Your System. New York: St. Martin's Press, 1989: i-195.
Roberts, Ralph. Compute!'s Computer Viruses. Greensboro: Compute! Publications, Inc., 1988: 29-82.
Outline
Thesis: Complete protection of a computer system from viruses is not possible, so efforts should be concentrated on recovery rather than prevention.
I. Introduction, with definition.
A. Define Computer Virus.
B. Define interest group.
C. Define problem.
II. Discuss the ways that a virus can infect a computer.
A. Disk exchange and use.
B. Local Area Network.
C. Telecommunications also known as Wide Area Network.
D. Spontaneous Generation.
III. Summarize threat, and alternatives.
A. Must isolate from outside world.
B. Must write own programs.
C. Propose alternative of damage control.
f:\12000 essays\sciences (985)\Computer\Fiber Optics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fiber Optics
Fiber Optic Cable Facts
"A relatively new technology with vast potential importance, fiber optics is the
channeled transmission of light through hair-thin glass fibers."
- Less expensive than copper cables
- Raw material is silica sand
- Less expensive to maintain
- If damaged, restoration time is faster
  (although more users are affected)
- Backbone to the Information Superhighway
Information (data and voice) is transmitted through the fiber digitally by the use
of high speed LASERs (Light Amplification by the Stimulated Emission of
Radiation) or LEDs (Light Emitting Diodes). Each of these methods creates a highly
focused beam of light that is cycled on and off at very high speeds. Computers at the
transmitting end convert data or voice into "bits" of information. The information is
then sent through the fiber by the presence, or lack, of light. Computers on the
receiving end convert the light back into data or voice, so it can be used.
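As a rough sketch of how data becomes a stream of on/off light pulses, the short Python example below turns a text message into its bit pattern; the message itself is an arbitrary example.

message = "HI"                       # arbitrary example data
# Each character becomes 8 bits; a 1 means the light source is pulsed on,
# and a 0 means it stays off for that time slot.
bits = "".join(format(byte, "08b") for byte in message.encode("ascii"))
print(bits)                          # 0100100001001001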
ORIGIN OF FIBER OPTICS
All of this seems to be a very "modern" concept, and the technology we use is.
The concept, though, was the idea of Alexander Graham Bell in the late 1800s. He just
didn't have a dependable light source... some days the sun doesn't shine! He thought
of the idea that our voices could be transmitted by pulses of light. The people who
realized that audio, video, and other forms of data could be transmitted by light through
cables were present-day scientists. Most of the things that are possible today,
Alexander Graham Bell could never even have dreamed of.
Although the possibility of lightwave communications occurred to Alexander
Graham Bell (who invented the telephone), his ideas couldn't be used until the LASER
or LED had been invented. Most of these advances occurred in the 1970s, and by 1977
glass-purifying and other fiber-optic manufacturing techniques had also reached the
stage where interoffice lightwave communications were possible. With further
technological development, many intercity routes were in operation by 1985, and some
transoceanic routes had been completed by 1990. Now, in the mid-90's, worldwide
connections are possible through the Internet.
The light is prevented from escaping the fiber by total internal reflection, a
process that takes place when a light ray travels through a medium with an Index of
Refraction higher than that of the medium surrounding it. Here the fiber core has a
higher refractive index than the material around the core, and light hitting that material
is reflected back into the core, where it continues to travel down the fiber.
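As a worked example of total internal reflection, the critical angle beyond which light stays inside the core can be computed from the two refractive indices; the index values below are typical illustrative numbers rather than figures quoted in this essay.

import math

n_core = 1.48        # assumed refractive index of the fiber core
n_cladding = 1.46    # assumed refractive index of the surrounding material

# Light striking the boundary at more than theta_c (measured from the
# normal) is totally internally reflected: theta_c = arcsin(n2 / n1).
theta_c = math.degrees(math.asin(n_cladding / n_core))
print(round(theta_c, 1), "degrees")  # about 80.6 degrees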
THE PROPAGATION OF LIGHT
AND LOSS OF SIGNALS
The glass fibers used in present-day fiber-optic systems are based on ultrapure
fused silica (sand). Fiber made from ordinary glass is so dirty that impurities reduce
signal intensity by a factor of one million in only about 16 ft of fiber. These impurities
must be removed before useful long-haul fibers can be made. But even perfectly pure
glass is not completely transparent. It weakens light in two ways. One, occurring at
shorter wavelengths, is a scattering caused by unavoidable density changes within the
fiber. In other words, when the light changes mediums, the change in density causes
interference. The other is a longer wavelength absorption by atomic vibrations. For
silica, the maximum transparency occurs at wavelengths in the near infrared, at about
1.5 µm (micrometers).
APPLICATIONS
Fiber-optic technology has been applied in many areas, although its greatest
impact has come in the field of telecommunications, where optical fiber offers the
ability to transmit audio, video, and data information as coded light pulses. Fiber optics
are also used in the field of medicine; the wire cameras and lights used there are forms of
fiber optic cable. In fact, fiber optics have quickly become the preferred mode of
transmitting communications of all kinds. Its advantages over older methods of
transmitting data are many, and include greatly increased carrying capacity (due to the
very high frequency of light), lower transmission losses, lower cost of basic materials,
much smaller cable size, and almost complete immunity to any interference. Other
applications include the simple transmission of light for illumination in awkward places,
image guiding for remote viewing, and sensing.
ADVANTAGES OF FIBER OPTIC CABLE
This copper cable contains 3000 individual wires.
It takes two wires to handle one two-way conversation.
That means 1500 calls can be transmitted simultaneously on each
cable.
Each fiber optic cable contains twelve fiber wires.
Two fibers will carry the same number of simultaneous
conversations as one whole copper cable.
Therefore, this fiber cable replaces six of the larger copper ones.
And 90,000 calls can be transmitted simultaneously on one fiber
optic cable.
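A quick arithmetic check of the comparison above, using only the numbers stated in this section, can be written as a few lines of Python:

copper_wires = 3000
wires_per_call = 2
copper_calls = copper_wires // wires_per_call            # 1500 simultaneous calls

fibers_per_cable = 12
fibers_equal_to_one_copper_cable = 2                      # two fibers match one copper cable
copper_cables_replaced = fibers_per_cable // fibers_equal_to_one_copper_cable
print(copper_calls, copper_cables_replaced)               # 1500 calls, 6 cables replaced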
LONG DISTANCE
FIBER-OPTIC COMMUNICATIONS SYSTEMS
AT&T's Northeast Corridor Network, which runs from Virginia to
Massachusetts, uses fiber cables carrying more than 50 fiber pairs. Using a
semiconductor LASER or a light-emitting diode (LED) as the light source, a transmitter
codes the audio or visual input into a series of light pulses, called bits. These travel
along a fiber at a bit-rate of 90 million bits per second (or 90 thousand kbps). Pulses
need boosting, about every 6.2 miles, and finally reach a receiver, containing a
semiconductor photodiode detector (light sensor), which amplifies, decodes, and
regenerates the original audio or visual information. Silicon integrated circuits control
and adjust both transmitter and receiver operations.
THE FUTURE OF FIBER OPTICS
Light injected into a fiber can adopt any of several zigzag paths, or modes. When
a large number of modes are present they may overlap, for each mode has a different
velocity along the fiber. Mode numbers decrease with decreasing fiber diameter and
with a decreasing difference in refractive index between the fiber core and the
surrounding area. Individual fiber production is quite practical, and today most
high-capacity systems use single-mode fibers. The present pace of technological advance
remains impressive, with the fiber capacity of new systems doubling every 18 to 24
months. The newest systems operate at more than two billion bits per second per fiber
pair. During the 1990s optical fiber technology is expected to extend to include both
residential telephone and cable television service.
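Given the doubling rate quoted above, a short Python sketch shows how quickly per-fiber capacity compounds; the ten-year horizon is an arbitrary example, and the 24-month doubling period is simply the slower end of the quoted range.

start_rate = 2e9                  # "more than two billion bits per second"
doubling_period_years = 2         # slower end of the 18-24 month range
years = 10                        # arbitrary example horizon

capacity = start_rate * 2 ** (years / doubling_period_years)
print(capacity)                   # 64000000000.0, i.e. 64 billion bits per second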
Currently Bell South is placing fiber cables containing up to 216 fibers, and
manufacturers are starting to build larger ones. Bell South has been placing fiber cables
in the Orlando area since the early 1980s, and currently has hundreds of miles in
service to business and residential customers.
BIBLIOGRAPHY
1. 1995 Grolier Multimedia Encyclopedia, Grolier Electronic Publishing, Inc.
2. 1994 Compton's Interactive Encyclopedia, Compton's NewMedia.
3. Fiber Optics and Lightwave Communications Standard Dictionary, Martin H.
Weik, D.Sc., Van Nostrand Reinhold Company, New York, New York, 1981.
4. Fiber Optics and Laser Handbook, 2nd Edition, Edward L. Stafford, Jr. and John
A. McCann, Tab Books, Inc., Blue Ridge Summit, Pennsylvania, 1988.
5. Fiber Optics and Optoelectronics, Second Edition, Peter K. Cheo, Prentice Hall,
Englewood Cliffs, New Jersey, 1990.
f:\12000 essays\sciences (985)\Computer\First generation of computers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The First Generation
The first generation of computers, beginning around the end of World War
II and continuing until around 1957, included computers that used
vacuum tubes, drum memories, and programming in machine code.
Computers at that time were mammoth machines that did not have the
power of our present-day desktop microcomputers.
In 1950, the first real-time, interactive computer was completed by a
design team at MIT. The "Whirlwind Computer," as it was called, was a
revamped U.S. Navy project for developing an aircraft simulator. The
Whirlwind used a cathode ray tube and a light gun to provide interactivity.
The Whirlwind was linked to a series of radars and could identify unfriendly
aircraft and direct interceptor fighters to their projected locations. It was to
be the prototype for a network of computers and radar sites (SAGE) acting
as an important element of U.S. air defense for a quarter-century after 1958.
In 1951, the first commercially-available computer was delivered to the
Bureau of the Census by the Eckert Mauchly Computer Corporation. The
UNIVAC (Universal Automatic Computer) was the first computer which was
not a one-of-a-kind laboratory instrument. The UNIVAC became a
household word in 1952 when it was used on a televised newscast to project
the winner of the Eisenhower-Stevenson presidential race with stunning
accuracy. That same year Maurice V. Wilkes (developer of EDSAC) laid the
foundation for the concepts of microprogramming, which was to become the
guide for computer design and construction.
In 1954, the first general-purpose computer to be completely
transistorized was built at Bell Laboratories. TRADIC (Transistorized
Airborne Digital Computer) held 800 transistors and bettered its
predecessors by functioning well aboard airplanes.
In 1956, the first system for storing files to be accessed randomly was
completed. The RAMAC (Random-Access Method for Accounting and
Control) 305 could access any of 50 magnetic disks. It was capable of
storing 5 million characters, any of which could be accessed within a second. In 1962, the concept was
expanded with research in replaceable disk packs.
f:\12000 essays\sciences (985)\Computer\From the Abacus to the Mac.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AnyView Professional 1.1 -- README.TXT -- 01/16/95
***************************************************************
Welcome to AnyView Professional! Whether you're a software
developer, a graphic artist or a computer "novice," you're
about to watch your computer perform in ways never before
thought possible.
***************************************************************
Table of Contents
1. Installation Notes
1a. System Requirements
1b. Installation on a Single-User System
1c. Quick Installation
1d. Custom Installation
1e. Network Client Station Installation
1f. Installation Using Microsoft Compliant OEM Setup Diskettes
2. Trouble Shooting
2a. Installation and reboot problems/errors
2b. AnyView driver not loaded
2c. Norton Desktop 3.0
2d. PC Tools
2e. Adobe Premiere
2f. BroderBund's Math Workshop
2g. The Learning Center's Reader Rabbit
2h. Number #9 GXE, GXE 64, GXE 64 Pro
2i. ATI Mach8
2j. ATI Mach32/64
2k. Cirrus Logic 5434
2l. Diamond Viper Pro
2m. Western Digital WD90C31
2n. Special notes regarding OEM provided display utilities
2o. Menu modifying applications
2p. Get-out-of-trouble Hot Keys
2q. Right Mouse Button Calls Toolbar
3. Technical notes on Color Depth Switching and Global Memory.
4. Changing Video Controllers and Reinstalling AnyView
Professional.
5. Uninstalling AnyView Professional.
***************************************************************
1. INSTALLATION NOTES
These are installation instructions for AnyView Professional.
AnyView Professional may be installed over previous versions of
itself.
***************************************************************
***************************************************************
1a. System Requirements
***************************************************************
Before you install AnyView Professional, please confirm that
your system is equipped with the following hardware and
software:
* An 80386 or better microprocessor (CPU), four megabytes or
more of RAM (eight megabytes or more is recommended), and
approximately two megabytes of hard disk space
* MS-DOS or PC-DOS version 3.1 or later, Microsoft Windows
version 3.1 or Windows for Workgroups 3.1 or 3.11, Enhanced Mode
* You should have all of your Windows display drivers for all the
resolutions and color depths that your video controller supports
installed on your system before installing AnyView Professional.
Also, the install may go more smoothly if you run Windows with
your 256 color drivers when installing AnyView Professional.
***************************************************************
1b. Installation On a Single-User System
***************************************************************
First, check AnyView Professional's requirements on the
preceding page to ensure that your system has the appropriate
resources to run this software. Next, start Windows as usual
and proceed as follows:
Starting the Installation
1. Place the AnyView Professional diskette in the appropriate
diskette drive (A or B).
2. Open the File menu in the Windows Program Manager or File
Manager and select Run.
3. Type [drive]:SETUP.EXE, where [drive] is the letter of the
drive where you placed the AnyView Professional diskette, and
press [Enter]. After Setup initializes your system, a screen
appears with four options: Quick Installation, Custom
Installation, View README.TXT and Exit. Choosing Quick
Installation will completely set up your system with a minimum
of effort. You can press Custom Installation to choose which
features and options to install. View README.TXT opens the
README.TXT file in the Notepad application. Exit terminates
the program.
***************************************************************
1c. Quick Installation
***************************************************************
Sit back and relax! Choosing Quick Installation automatically
creates a directory on the same drive as your Windows
directory, copies all of the AnyView Professional program
files, creates an icon group and deletes any older versions of
AnyView Professional that might be found. Press Restart
Windows when prompted and AnyView Professional will be up and
running.
Though Quick Installation recognizes most drivers using its
intuitive method of installation, there is a chance you may be
prompted to enter an OEMSETUP.INF disk. In this case, see the
section later in this chapter entitled "Installation Using a
Microsoft Compliant OEMSETUP Diskette."
After completing the Quick Installation, you may skip ahead to
the chapter entitled, "Getting to Know AnyView Professional."
Note: If you receive an advisory message from the Setup
application regarding drivers that are not installed on your
system, simply install them from their floppy disk or
directory using your video controller's installation program.
After Windows reloads, double click the AnyView Professional
icon in our program group to restart Windows with the AnyView
Professional driver.
***************************************************************
1d. Custom Installation
***************************************************************
1. After pressing Custom Installation, the Setup application
displays the directory into which it plans to install AnyView
Professional. The default directory is AVPRO on the drive
which contains the Windows directory. To accept the default
directory, select Continue. To select a different drive or
directory, type it in and then select Continue.
2. The Custom Window appears and you are asked to select which
features and options should be enabled or disabled. These
aren't permanent changes -- all can be changed from within the
AnyView Professional application.
Features: Though all of AnyView Professional's features are
installed when you run the Setup application, you can choose
to disable any of them should you desire to do so. These
features include Catalyst, Color WYZARD, DPI WYZARD, Green
Screen, OptiMemm, and True Switch Color Depth/Resolution
Switching. All of these features are enabled by default.
Start Up Options: By default, the Setup application creates an
AnyView Professional group for your Program Manager and puts
an AnyView Professional icon into the Startup group. Either of
these options may be disabled.
Monitor Selection: Choose the monitor selection which most
reflects your own. For information regarding interlaced and
non-interlaced monitors, consult your monitor's documentation.
Driver Installation: There may be times when you may choose
to install AnyView Professional without using the Setup
Application's intuitive method of locating and installing
video drivers. For instance, you may have received new drivers
from your video controller manufacturer and you would like to
install them using a Microsoft Compliant OEM setup diskette.
The Setup Application's intuitive method of driver
installation is chosen by default.
Mouse Interface Call: By default, you can bring the AnyView
Professional Toolbar directly to your mouse by double clicking
on your right mouse button. Because there are other
applications which may use this same setting, the Setup
application gives you the choice of either setting the middle
mouse button as the interface call or of having none set at
all.
Interface: AnyView Professional's features can be accessed
through either the Toolbar or the Control Panel. Though the
Toolbar is smaller and less obtrusive than its counterpart,
the Control Panel is more detailed and helps you to monitor
your system. "Always Topmost" indicates that you want the
interface to always be visible no matter what application you
may be using. By default, the Toolbar is the chosen interface
and is always topmost.
Video Memory: This setting allows you to choose the amount of
video memory that is installed on your video controller. One
megabyte of memory is chosen as default.
After making your setting choices, select Continue.
3. If you have previously installed AnyView or Screen Commander
for Windows, the Setup application asks whether you'd like to
have it removed from your system. We recommend that you allow
the Setup application to make it so.
4. Restart Windows as prompted and AnyView Professional is
ready for use.
***************************************************************
1e. Network Client Station Installation
***************************************************************
It's possible to install AnyView Professional on most client
stations on Windows compliant networks. There are no special
steps, even with networked versions of Windows.
***************************************************************
1f. Installation Using Microsoft Compliant OEM Setup Diskettes
***************************************************************
There are times when AnyView Professional cannot install
intuitively because it does not have enough information
regarding your video controller and video drivers. When this
situation occurs, the installation program will bring up a
dialog screen entitled "Installation with a Microsoft
Compliant OEMSETUP Diskette." At this time, please insert the
diskette which was shipped with your video controller and
indicate the OEMSETUP.INF file using the directory tree on the
right side of the window. You can also point to an
OEMSETUP.INF file that has already been installed onto your
hard disk if you like -- however, this would be a directory
other than the System directory. An OEMSETUP.INF file in your
System directory may describe a different piece of hardware
and installation would not be able to continue.
Note: If you reach this window but do not have a Microsoft
compliant OEMSETUP diskette, go ahead and click the Cancel
button. Without access to the OEMSETUP.INF file, the
TrueSwitch color depth and resolution changing features cannot
be activated; however, all of AnyView Professional's other
features can still be installed. If you would like to
continue, choose Custom install and then uncheck the
resolution and color depth switching checkboxes. Continue
installation as discussed earlier in this section. You may
then contact our technical support department for information
on how to properly install for your video controller. Please
see the "Troubleshooting" section in Appendix A for more
information.
After you have pointed to the proper OEMSETUP.INF file, a
dialog box will appear for your display driver set up. For the
next few minutes, you will confirm the OEM supplied drivers
for the specific resolutions and color depths available for
your video controller. The installation program will list the
resolution for which it needs a driver. Below, in a list box,
the installation program will highlight the driver it believes
matches that resolution. If the suggestion is correct, simply
choose Select. If the choice is not correct, highlight the
correct driver and choose Select. If the installation program
cannot locate the driver to match the resolution, the Skip
button is highlighted. Because there are few video controllers
that are capable of running every resolution in every color
depth, it is likely you will need to skip some resolutions.
If you make a mistake at any time, you can choose Start Over
to make new choices.
***************************************************************
2. Trouble Shooting
Contains special notes and known incompatibilities pertaining
to specific applications and display controllers.
***************************************************************
***************************************************************
2a. Installation and reboot problems/errors
***************************************************************
If you encounter installation problems please refer to section
5, entitled "Uninstalling AnyView Professional."
***************************************************************
2b. AnyView driver not loaded
***************************************************************
If you reboot or shut off your computer without exiting
Windows, the next time you run Windows, the AnyView
Professional driver may not be loaded. Just run the AnyView
Professional application and select 'Yes' when prompted to
restart Windows.
***************************************************************
2c. Norton Desktop Version 3.0
***************************************************************
NDW 3.0 is not compatible with Color Switching On-the-Fly.
Dragging an icon in 16 or 24 bit with Color Switching
enabled causes an error. To avoid this you can either disable
Color Switching - see the AnyView Professional Desktop Dialog
Box - or do not re-arrange icons on the desktop while in 16 or
24 bit mode.
***************************************************************
2d. PC Tools for Windows
***************************************************************
With Color Switching enabled, icons created by PC Tools may
become distorted if you are not in 256 color mode. When
creating a new icon from the File menu, when importing a
group, or when installing new software, you should switch
to 256 color mode before doing so.
***************************************************************
2e. Adobe Premiere
***************************************************************
Switching resolutions or color depths while running Adobe
Premiere will cause an error. Switch to your desired resolution
and color depth before running Premiere.
***************************************************************
2f. Broderbund's Math Workshop
***************************************************************
The first time Math Workshop is run, it self configures by
profiling your system. This configuration will cause an error
with AnyView's OptiMemm set on High. Set OptiMemm to low to
install and first-run Math Workshop.
***************************************************************
2g. The Learning Center's Reader Rabbit
***************************************************************
Reader Rabbit is not compatible with Color Switching. In order
to run Reader Rabbit, you must disable Color Switching from
the AnyView Professional Desktop Dialog.
***************************************************************
2h. Number #9 GXE, GXE 64, & GXE 64 Pro
***************************************************************
The #9 GXE display drivers are not compatible with AnyView's
Color Switching On-the-Fly. (However, the GXE 64 and GXE 64 Pro
are supported.)
On the GXE 64 and GXE 64 Pro, AnyView Professional only works
with version 1.36 (or earlier) of the #9 drivers.
***************************************************************
2i. ATI Mach 8
***************************************************************
AnyView Professional is not compatible with the new ATI Mach 8
drivers (machw3.drv). You must install the Mach 32 type
drivers (mach.drv).
***************************************************************
2j. ATI Mach 32 and Mach 64
***************************************************************
Color and Resolution Switching on-the-fly are incompatible
with ATI's Crystal fonts. If you want to use Crystal Fonts,
disable Color and Resolution Switching from the AnyView
Professional Desktop Dialog, and then enable Crystal Fonts.
On some Mach 32 cards, Resolution or Color Switching may
take several seconds, during which time your screen will
remain black. This is normal for the ATI Mach 32.
***************************************************************
2k. Cirrus Logic 5434
***************************************************************
AnyView is not compatible with Version 1.2x of the Cirrus Logic
5434 display drivers. It is compatible with all previous versions.
***************************************************************
2l. Diamond Viper Pro
***************************************************************
If you encounter difficulties switching into 16 million color
mode (24 bit), try changing the following lines in the AVPRO.INI
file, [AnyViewProSupport] section, to read as follows:
Driver640x480x16M=p9100_32.drv
Driver800x600x16M=p9100_32.drv
***************************************************************
2m. Western Digital WD90C31
***************************************************************
At the time of AnyView Professional's release, the Western
Digital display drivers for the WD90C31 chipset will not work
with Color Switching On-the-Fly. AnyView Professional will
automatically disable this feature when it is installed.
***************************************************************
2n. Special notes regarding OEM provided display utilities
***************************************************************
If you use a display configuration utility other than
AnyView Professional to switch resolutions or color depths,
the AnyView driver will not be loaded after Windows
restarts. To reinstall the AnyView driver, run the AnyView
Professional application and select 'Yes' when prompted to
restart Windows. If you use a display configuration utility
to set your monitor's refresh rates, you may have to do the
same. If, however, the display configuration utility allows
you to continue rather than just restarting Windows, select
this and then use AnyView Professional to change away from
your current resolution, and then back. With some cards
(Diamond Stealths, Orchid Celsius, Weitek P9x00, and more)
this action will save the new refresh rate information
without having to restart Windows.
***************************************************************
2o. Menu modifying applications
***************************************************************
Applications such as Icon Hear-it or Plug-in that modify other
application's menus may not work with the OptiMemm feature set
to High. If this occurs, set OptiMemm to low.
***************************************************************
2p. Get-out-of-trouble Hot Keys
***************************************************************
AnyView Professional has provided you with four hot keys to get
you out of trouble quickly should you select a configuration
that doesn't work properly or if you "get lost" on the Virtual
Desktop. Their default settings are listed in the upper left
of the "Interface" file folder:
* [CTRL]+[ALT]+[R] -- Restore to Last Mode. Choosing this hot
key causes AnyView Professional to return you to your last
screen mode. This option is useful when exiting the Hardware
Zoom or when you have chosen an invalid screen mode that
renders the screen unviewable.
* [CTRL]+[ALT]+[C] -- Center AnyView to Screen. This hot key
brings the AnyView Professional Toolbar or Control Panel to the
center of the screen.
* [CTRL]+[ALT]+[V] -- Restart with VGA.DRV. This key sequence
is used to restore Microsoft's VGA.DRV. We recommend that you
try [CTRL]+[ALT]+[R] before resorting to this key sequence. Try
using [CTRL]+[ALT]+[V] if AnyView Professional does not install
correctly and you are presented with a blank screen.
* [CTRL]+[ALT]+[6] -- Reset Resolution to 640x480. This hot key
restores the display to 640 by 480 in 256 colors.
You may change these hot keys to any [CTRL]+[ALT] combination
that you would find convenient. This is helpful if you find
that one or more of your hot key combinations conflict with
those of another application.
***************************************************************
2q. Right Mouse Button Calls Toolbar
***************************************************************
There are two ways to change the mouse button call. The
first is to reinstall and choose "Custom". Then choose
to change the mouse button under "Interface." The
second way is to edit the AVPRO.INI in your AVPRO
directory. If MouseHook isn't present, add it to the
[AnyViewPro] section. MouseHook=on for the right
mouse call button, MouseHook=middle for center mouse
call button, and MouseHook=off for no mouse call.
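For example, with the right mouse button selected, the relevant
part of AVPRO.INI might read as follows:
[AnyViewPro]
MouseHook=on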
***************************************************************
3. Technical notes on Color Depth Switching and Global Memory
***************************************************************
When running image editing software, it is recommended that
you switch to your desired color depth before running the
application. Editing a bitmap and then switching color
depths may degrade the quality of the bitmap. Saving the
bitmap file at that time will save the degraded version of
the bitmap.
Some monochrome bitmaps (dragging an icon, for instance)
won't display in color modes above 256 color mode. This is
done intentionally to prevent a bug that is in many display
drivers that occurs when running with AnyView Professional.
Your display drivers may not have this bug, and you can
determine this by adding the following line to the
[AnyViewPro] section of the AVPRO.INI file located in the
AVPRO directory:
DIB8to1on=on
When color switching is enabled, Windows will only boot up
initially in 256 color mode. AnyView Professional can be
configured to automatically switch you to the color mode
that you were in when you exited Windows. To do this, add
the following line to the [AnyViewPro] section of the
AVPRO.INI file located in the AVPRO directory:
BootRestoreColor=on
If the system or applications you are running are displaying
large bitmaps or a large number of bitmaps, a color switch
may delay for a long time. This is because AnyView
Professional needs to convert all of the bitmaps to the new
color mode.
When color switching is enabled, bitmaps that the system or
applications display require more memory than with color
switching disabled. It is recommended that you do not
configure your desktop to display a large bitmap as your
background wallpaper. (From Windows Control Panel's Desktop
configuration) It is also recommended that you do not open
multiple large bitmap files simultaneously, particularly when
you are in a high color (32K,64K,16M) mode. If you experience
memory problems, try increasing your Windows swap file size
to 4 or 8 megabytes.
Global Memory:
With the Color Switch feature enabled, the amount of global
memory available to Windows will decrease. OptiMemm cannot
help with this issue, as it increases the amount of Windows
Resource memory available, thus allowing you to run more
applications, but not the amount of global memory available.
An example of a need for a large amount of global memory is
opening a very large bitmap in a high color mode, or opening
multiple bitmap files simultaneously.
***************************************************************
4. Changing Video Controllers and Reinstalling AnyView
Professional
***************************************************************
We recommend that you reset Windows to the Microsoft VGA
driver before changing your video controller. You can do this
by running SETUP from the Windows directory while at the DOS
level. Please see your Windows manual for more information.
After inserting your new video controller, install the video
controller's drivers using the manufacturer's instructions.
When you have restarted Windows, run SETUP from your AnyView
Professional distribution diskette and follow the installation
instructions listed earlier in this section.
***************************************************************
5. Uninstalling AnyView Professional
***************************************************************
Getting Back into Windows if the installation process has
failed at the DOS level:
If you install AnyView Professional and find that Windows will
not restart after the initial reboot, SETUP has failed to
configure your system correctly. In order to fix this problem
and get you back into Windows, we have included a DOS level
uninstaller. The DOS uninstall is located in the directory
that is assigned to AnyView Professional during installation (the
default installation directory is "AVPRO"). From within
AnyView Professional's directory, type "AVUNINST" at the DOS
prompt. This command will reset Windows to the original
display driver, but it will not delete the AnyView Professional
files/components.
After returning to Windows you can perform a complete uninstall
of AnyView Professional by using the uninstall icon located in
the AnyView Professional Program Group.
Uninstalling from within Windows:
Should you decide for one reason or another to uninstall
AnyView Professional, the provided Uninstall program will do
the job quickly and efficiently. Use the AnyView Uninstall
icon in the AnyView Professional program group.
1. Click on the Uninstall icon in your AnyView Professional
group or run the AVUNINST.EXE file in your AnyView
Pro directory from the File Manager or Program Manager.
2. After Uninstall deletes AnyView Professional's icons from
the Program Manager, a dialog box will ask you to restart
Windows. Uninstall will be complete after you restart.
f:\12000 essays\sciences (985)\Computer\Gemstone 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Josh Barrie
1/15/97
Thomas Alva Edison
CAPTURE: I.) What would we do without incandescent light bulbs, phonographs, motion picture cameras - things we all take for granted?
A.) They were all invented by me, Thomas Alva
Edison.
B.) I made more than 1,000 inventions.
MOTIVATE: II.) How can I capture sound and motion in time? Is there a way to make a practical light bulb? Can people talk to each other from great distances? These are some of the questions I asked myself to make some of my most famous inventions.
ASSERT: III.) All of my inventions have changed many lives. I'm here to talk about these.
PREVIEW: IV.) Let's discuss the most popular and important inventions I completed during my 84-year life.
A.) My incandescent light bulb
B.) My phonograph
C.) My improved telephone
POINT
SUPPORT: V.) My first invention I am going to talk about is my incandescent light bulb.
A.) There was already an electric light bulb out, called an arc bulb, but it was way too bright for practical use.
B.) My incandescent light bulb has a special wire, or filament, made out of carbon.
VI.) Another invention I'll refer to is the phonograph.
A.) You may know this as a recorder and player.
B.) It was crude, but it worked and was used for a long time.
C.) It had a long tube with a funnel at the end where you talk into it. The rest of the machine looked sort of like a typewriter.
VII.) My final invention I'll talk about is my improved telephone.
A.) There were already telephones out, but they had low quality sound and short range.
B.) I improved it so it sounded better and had a greater range.
ENDING: VIII.) When you turn on a light, go to a movie, call your friend on the phone, or listen to CDs, remember, it was all made possible by me, Thomas Edison, and my innovative mind.
f:\12000 essays\sciences (985)\Computer\Get Informed.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Get Informed!
Buying the right computer can be complicated. Because of this, many people are deterred from using or purchasing a very beneficial machine. Some people have questions about memory, Windows 95, and choosing the best system to purchase. Hopefully, I can clear up some of these terms and inform you about what hardware is available.
How much memory do you really need? As much as you can get. Due to today's sloppy programmers, you can't have too much memory. Today's software is about 50 percent wasted code. That means that there is a bunch of memory being used by your computer to do absolutely nothing. It's not like in the past, when a programmer had to get a program to run under 512K. Programmers think you have unlimited memory. As a result, the programmers don't worry about how much memory they use. When writing a program, programmers use compilers like Visual C++. When they use prewritten routines from compilers, it adds a lot of useless data. Instead of removing this useless data, the lazy programmer leaves it. Not only does this affect your memory, it also affects how much hard drive space you need. The bigger the program, the more space it takes to save physically. I wouldn't suggest buying anything under a 2-gigabyte hard drive. Why? Because by the time you load your system (Windows 95, DOS) and other software, your hard drive is already filled up. How are you going to save the document you wrote in WordPerfect when your hard drive is full? It's usually cheaper in the long run to buy the biggest hard drive available. Plus, you always want to have room for your games. After all, who wants to spend their whole life working?
As far as processors go, I suggest the Cyrix 6x86 166+. It's the best processor for the buck. It's one of the fastest. The processor costs about $300 less than the Pentium version. It's got plenty of processing power to play those high-graphics 3D games and make your Internet browser fly. It's also a necessity for programs like AutoCAD 3D and Adobe Photoshop.
For video, I suggest at least a 2 meg, MPEG-compatible video card. The best all-around video card, I think, is the Matrox Millennium 3D. It comes in 2 meg, 4 meg, and 8 meg cards. The 4 meg card runs around $230.00. You can't beat that. The reason you want the most memory on your video card that you can afford is that the more memory you have, the faster the graphics and the more colors you can display. The memory on a video card is used for loading up screen pages in advance, before they're on your screen. For example, when you're watching an AVI or MPEG movie, the computer has already loaded four screens of that movie before it needs them. This means you don't wait for them to load. A sign of not having enough video memory is when you're watching an AVI movie and you see flicker or the movie stalls. This is because you're waiting for the computer to load the images.
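As a rough illustration of why video memory matters, a short Python sketch estimates the memory needed just to hold one screen image; the resolution and color depth below are example values, not recommendations.

width, height = 1024, 768        # example screen resolution
bits_per_pixel = 16              # example color depth (65,536 colors)

# One full screen must fit in video memory; any extra lets the card hold
# additional screens (or movie frames) in advance, as described above.
bytes_needed = width * height * bits_per_pixel // 8
print(bytes_needed / (1024 * 1024), "MB per screen")   # 1.5 MB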
Windows 95. Is all the hype true? NO! Windows 95 has a lot of bugs in it. Most of the problems I've seen are in the installation process. When you go to install new hardware or software, you don't have complete control over what your computer does. Windows 95 wants to make all the decisions for you. Unfortunately, most of the time it doesn't make the right decisions. There are ways to get around this; it just takes a little patience. The biggest problem I've had is removing software and hardware. Windows deletes all the drivers and the programs, but never cleans out the main system file. This means the program is gone but your system thinks it's still there. This can give you a lot of errors and, in some cases, cause your computer to crash. There is, thankfully, software being written right now to solve this problem. Whether or not you like Windows 95, Microsoft has cornered the market and most software written today is for Windows 95. I personally think Windows 95 could be a great system if Microsoft would take the time to fix all the bugs and minor irritations instead of spending their time trying to figure out a new way to scam the PC user, like making Windows 97.
Hopefully, I haven't confused you; instead, I hope I have cleared some things up. My best advice to soon-to-be computer owners is to take your time in buying your system. Do some research. Don't believe all the hype. Computer salesmen don't make money helping you out; they make money selling you a computer at the highest possible profit.
f:\12000 essays\sciences (985)\Computer\Global Village Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Imagine a place where people interact in business situations, shop, play video games, do research, or study and get tutoring. Now imagine that there are no office buildings, no shopping centers, no arcades, no libraries, and no schools. These places all exist in a location called the Internet - "an anarchic system (to use an oxymoron) of public and private computer networks that span the globe" (Clark 3). This technological advance not only benefits people of the present but also brings forth future innovations. People use the Internet for many purposes, but three are especially popular. First, there is the sending and receiving of messages through electronic mail. Second, there are discussion groups covering a wide range of topics that people can join. Finally, people are free to browse the vast collections of resources (or databases) of the World Wide Web.
Electronic mail (e-mail) brings a new perspective to the way we communicate. Although it has not replaced traditional means of communication such as letters and telephone calls, it has created a more efficient method of transmitting information. E-mail cuts the interval between sending and receiving a message: a message sent halfway around the world can arrive at its destination within a minute or two, whereas a letter can take from a few days to a couple of weeks, depending on the distance it travels. Furthermore, e-mail is inexpensive; the cost of an Internet connection is lower than that of cable television. Evidently, e-mail is both time-saving and cost-effective.
Discussion groups are a great way to interact with others around the world and to expand one's horizons. The response is instantaneous, just like the telephone, except that it is non-verbal (typed). Discussion groups are on-line services that make use of this non-verbal communication in the interest of the user; services can range from tutoring sessions to chat lines where people just want to mingle. Communication through the Internet is a way of meeting new people. There is no racial judgement in meeting on the Internet because physical appearance is not perceived; however, attitude and personal character are evident from the style in which a person talks (or types). This kind of communication helps narrow the gap between people and cultural differences. Communicating in discussion groups sometimes leads to one-to-one conversations that soon enough become links to friendship. Connections are made when people meet each other; therefore, information on interesting Web sites can be passed on.
The World Wide Web (WWW) holds information that answers users' questions. The main purpose of the WWW is to provide a variety of information, ranging from literature to world geography. The WWW contains Web sites created by everyone from government agencies and institutions to businesses and individuals. It carries text, graphics, and sound to catch the interest of people browsing through the different Web sites. New Web sites are added daily, while existing sites are revised to reflect more up-to-date information and interests. This growth of information will soon become a world library of topics on anything one can imagine. A person using the Internet for one day encounters more information than a person reading in the library for a whole year; it is the convenience of the Internet that allows a person to go through an enormous amount of information in a short period of time. This information community can pull the minds of users closer together, thus making the world smaller.
The Internet is full of people who are requesting and giving out information to those who are interested, since "information wants to be free." - Stewart Brand (Van der Leun 25). Hypothetically, if everyone is connected to at least one other person on the Internet, eventually everyone will meet everyone else. In other words, the world will gradually evolve into a "global village," which can be defined as "the world, especially of the late 1900's, thought of as a village, a condition arising from shrinking distance by instantaneous world-wide electronic communication" (Nault 907). Thus, the Internet is a wonderful tool and medium through which people can interact with the information society. After all, information is like the building blocks of technological advancement.
f:\12000 essays\sciences (985)\Computer\Government Intervention on the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Government Intervention on the Internet
CIS 302 - Information Systems I
John J. Doe
XXX-XX-XXXX
March 12, 1997
During the last decade, our society has become based on the sole ability to
move large amounts of information across great distances quickly. Computerization
has influenced everyone's life in numerous ways. The natural evolution of computer
technology and this need for ultra-fast communications has caused a global network
of interconnected computers to develop. This global network allows a person to send
E-mail across the world in mere fractions of a second, and allows a common person
to access a wealth of information worldwide. This newfound global network,
originally called ARPAnet, was developed and funded solely by and for the U.S.
government. It was to be used in the event of a nuclear attack in order to keep
communications lines open across the country by rerouting information through
different servers across the country. Does this mean that the government owns the
Internet, or is it no longer a tool limited by the powers that govern? Generalities such
as these have sparked great debates within our nation's government. This paper will
attempt to focus on two high profile ethical aspects concerning the Internet and its
usage. These subjects are Internet privacy and Internet censorship.
At the moment, the Internet is the epitome of our First Amendment right to free speech. It
is a place where people can speak their minds without being reprimanded for what
they say or how they choose to say it. But the Internet also contains a huge
collection of obscene graphics, Anarchists' cookbooks, and countless other things that
offend many people. There are over 30 million Internet surfers in the U.S. alone, and
much is to be said about what offends whom and how.
As with many new technologies, today's laws don't apply well when it comes
to the Internet. Is the Internet like a bookstore, where servers cannot be expected to
review every title? Is it like a phone company, which must ignore what it carries
because of privacy? Or is it like a broadcast medium, where the government monitors
what is broadcast? The problem we are facing today is that the Internet can be all or
none of the above depending on how it is used.
What does Internet censorship mean? Is it possible to censor amounts of
information that are, by themselves, almost unimaginable? The Internet was originally designed to
"find a way around" in case of broken communications lines, and it seems that
explicit material keeps finding its "way around" too. I am opposed to such content on
the Internet and therefore am a firm believer in Internet censorship. However, the
question at hand is just how much censorship the government should impose. Because the
Internet has become the largest source of information in the world, legislative
safeguards are indeed imminent. Explicit material is not readily available over the
mail or telephone and distribution of obscene material is illegal. Therefore, there is
no reason this stuff should go unimpeded across the Internet. Sure, there are some
blocking devices, but they are no substitute for well-reasoned law. To counter this,
the United States has set regulations to determine what is categorized as obscenity
and what is not. By laws set previously by the government, obscene material should
not be accessible through the Internet. The problem society is now facing is that
cyberspace is like a neighborhood without a police department. "Outlaws" are now
able to use powerful cryptography to send and receive uncrackable communications
across the Internet. Devices set up to filter certain communications cannot filter that
which cannot be read, which leads to my other topic of interest: data encryption.
By nature, the Internet is an insecure method of transferring data. A single E-
mail packet may pass through hundreds of computers between its source and
destination. At each computer, there is a chance that the data will be archived and
someone may intercept the data, private or not. Credit card numbers are a frequent
target of hackers. Encryption is a means of encoding data so that only someone with
the proper "key" can decode it. So far, recent attempts by the government to control
data encryption have failed. They are concerned that encryption will block their
monitoring capabilities, but there is nothing wrong with asserting our privacy.
Privacy is an inalienable right given to us by our constitution.
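As a minimal sketch of the idea that only someone holding the key can read the data, here is a toy example in Python. The XOR cipher below is purely illustrative and nowhere near as strong as the algorithms discussed in this paper:

    # Toy illustration only: XOR "encryption" with a shared key.
    # Real systems use vetted algorithms; this just shows the basic idea that
    # the ciphertext is unreadable without the key, and the same key decodes it.
    from itertools import cycle

    def xor_crypt(data, key):
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    key = b"not-so-secret-key"
    message = b"Credit card 1234 5678 9012 3456"

    ciphertext = xor_crypt(message, key)     # what an eavesdropper would see
    recovered = xor_crypt(ciphertext, key)   # XOR with the same key restores it

    print(ciphertext)
    print(recovered == message)              # True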
For example, your E-mail may be legitimate enough that encryption is
unnecessary. But if we do indeed have nothing to hide, then why don't we send our
paper mail on postcards? Are we trying to hide something? In comparison, is it
wrong to encrypt E-mail?
Before the advent of the Internet, the U.S. government controlled most new
encryption techniques. But with the development of the WWW and faster home
computers, they no longer have the control they once had. New algorithms have been
discovered that are reportedly uncrackable even by the FBI and NSA. The
government is concerned that they will be unable to maintain the ability to conduct
electronic surveillance into the digital age. To stop the spread of data encryption
software, they have imposed very strict laws on its exportation. One programmer,
Phil Zimmerman, wrote an encryption program he called PGP (Pretty Good Privacy).
When he heard of the government's intent to ban the distribution of encryption software, he
immediately released the program to the public for free. PGP is among the
most powerful public encryption tools available.
The government has not been totally blind to the need for encryption. The
banks have sponsored an algorithm called DES, which has been used by banks for
decades. To some, its usage by banks may seem more ethical, but what makes
it unethical for everyone else to use encryption too? The government is now
developing a new encryption method that relies on a microchip that may be placed
inside just about any type of electronic equipment. It is called the Clipper chip; it is
16 million times more powerful than DES, and today's fastest computers would take
approximately 400 billion years to decipher it. At the time of manufacture, the chips
are loaded with their own unique key, and the government gets a copy. But don't
worry: the government promises that they will use these keys only to read traffic when
duly authorized by law. But before this new chip can be used effectively, the
government must get rid of all other forms of cryptography.
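As a rough check on those numbers, here is a back-of-the-envelope sketch in Python. The key sizes (56-bit DES, 80-bit Clipper/Skipjack) are published figures; the guessing rate is an assumed, illustrative number, and the years-to-crack result depends entirely on it:

    # Back-of-the-envelope brute-force estimate (illustrative assumptions).
    SECONDS_PER_YEAR = 365 * 24 * 3600
    guesses_per_second = 1e9            # assumed rate; pick your own estimate

    des_keys = 2 ** 56                  # DES key space
    clipper_keys = 2 ** 80              # Clipper/Skipjack key space

    # 2**80 / 2**56 = 2**24, roughly 16.8 million: the "16 million times" figure.
    print("Clipper key space is", clipper_keys // des_keys, "times larger than DES")
    print("Years to try every Clipper key at that rate:",
          f"{clipper_keys / guesses_per_second / SECONDS_PER_YEAR:.2e}")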
The relevance of my two topics of choice seems to have been conveniently
overlooked by our government. Internet privacy through data encryption and Internet
censorship are linked in one important way. If everyone used encryption, there would
be no way that an innocent bystander could stumble upon something they weren't
meant to see. Only the intended receiver of an encrypted message can decode it and
view its contents; the sender isn't even able to view such contents. Each coded
message also has an encrypted signature verifying the sender's identity. Gone would
be the hate mail that causes many problems, as well as the ability to forge a document
with someone else's address. If the government didn't have ulterior motives, they
would mandate encryption, not outlaw it.
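As a sketch of how such a signature check might look in practice, here is an example using the third-party Python package "cryptography" (my choice of tool for illustration; the paper does not name one):

    # Sketch: signing a message and verifying the sender's identity.
    # Assumes the third-party "cryptography" package is installed.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"This mail really is from me."
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Only the holder of the private key can produce this signature.
    signature = private_key.sign(message, pss, hashes.SHA256())

    try:
        public_key.verify(signature, message, pss, hashes.SHA256())
        print("Signature checks out: the message was not forged.")
    except InvalidSignature:
        print("Forged or tampered message.")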
As the Internet grows throughout the world, more governments may try to
impose their views onto the rest of the world through regulations and censorship. If
too many regulations are enacted, then the Internet as a tool will become nearly
useless, and our mass communication device, a place of freedom for our mind's
thoughts will fade away. We must regulate ourselves so as not to force the government
to regulate us. If encryption is allowed to catch on, there will no longer be a need for
the government to intervene on the Internet, and the biggest problem may work itself
out. As a whole, we all need to rethink our approach to censorship and encryption
and allow the Internet to continue to grow and mature.
Works Cited
Compiled Texts. University of Miami. Miami, Florida.
http://www.law.miami.edu/c6.html.
Lehrer, Dan. "The Secret Shares: Clipper Chips and Cyberpunks." The Nation.
Oct. 10, 1994, 376-379.
Messmer, Ellen. "Fighting for Justice on the New Frontier." Network World.
CD-ROM database. Jan. 11, 1993.
Messmer, Ellen. "Policing Cyberspace." U.S. News & World Report.
Jan. 23, 1995, 55-60.
Webcrawler Search Results. Webcrawler. Query: Internet, censorship, and ethics.
March 12, 1997.
Zimmerman, Phil. Pretty Good Privacy v2.62, Online. Ftp://net-dist.mit.edu
Directory: /pub/pgp/dist/pgp262dc.zip.
f:\12000 essays\sciences (985)\Computer\Hackerne.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INFORMATION TECHNOLOGY
Hackers
Written by: Philip Svendsen
Subject: Information Technology C+B
Teacher: Arne
HACKERS
On Wednesday, November 10, 1993, a telephone rings at the University of Copenhagen: "Hello, this is the security chief. Some hackers are trying to misuse your access to the university's computer system. Give me your password so I can get in and stop them." The man at the university is naturally skeptical, but the voice on the phone clearly belongs to a computer expert, and the name of the security chief is correct too, so after a little persuasion he gives up his secret password. He should never have done that. The voice at the other end is anything but a security expert. He is a hacker. And to him the password is as valuable as a crowbar and a lockpick are to a burglar. After breaking into the university's computers he begins to snoop around the system. He is looking for neither research results nor exam papers. To him the university's computers are only a platform that can be used to get further out into the world. Every time a hacker gains control of a new computer, he has won a victory. That is really what it is all about: tricking the computer into giving him unconditional power over the whole system. As soon as he has it, the computer becomes uninteresting, and he hops onward onto the electronic information highway, where several hundred thousand computers are connected in networks around the world. There he looks for the way out. What the hacker and his colleagues do not know, however, is that the security man whose name they misused to obtain the secret password is right on their heels.
They managed to hack in eleven countries before the trap snapped shut
When the break-in at the University of Copenhagen happened, security staff at Uni-C, Denmark's computing centre for research and education, had known for months that a group of Danish hackers was highly active. But they had not yet managed to get a clear enough picture of the hackers' movements to say anything precise about where they were operating from.
That changed barely a month later. On Wednesday, December 8, the trap snapped shut on four hackers. The four young men, with the cover names "Le Cerveau", "Dixie", "Zephyr" and "Wedlock", were all between 17 and 23 years old, and with the arrests began the first chapter of what developed into Scandinavia's largest hacker case to date. In the last days before the arrests the hackers "worked" almost around the clock. They broke into some fifty computer systems every day and, besides Denmark, managed to visit Belgium, Brazil, England, Greece, Japan, Israel, Norway, Sweden, Germany and the USA.
Small irregularities give the hackers away
As in earlier hacking cases, security consultant Jørgen Bo Madsen of Uni-C was also part of the hunt for "Le Cerveau", "Dixie", "Zephyr" and "Wedlock". But since the case has not yet been concluded, he declines to discuss it directly and will only speak in general terms: "A hacker hunt always starts with someone noticing that something is not the way it usually is." It may be obvious traces, such as parts of a computer system having been deleted, or an unusually large number of users typing wrong passwords when they dial in to the system. In nine cases out of ten there is a natural explanation. But in the tenth case the suspicion of a hacker attack turns out to be well founded. And then the hunt is on. "Sometimes we arrive too late. We can see clear traces of the hackers - they may, for example, have set up a back door. But they are not using it any more, and then the party is over," Jørgen Bo Madsen explains. A back door is a secret hole in the computer's security system which the hackers create themselves after breaking into the system. The back door always stands open, so the hackers can break into the computer at any time without supplying so much as a password. If, on the other hand, the hackers are still moving around in the computer, the security staff quietly begin to monitor their activities. "We watch what they are doing and try to get an overview of their movements. For us it is a matter of finding the common thread in their activities before they lose interest in the machine and hop on to another one." The biggest problem for Jørgen Bo Madsen and his colleagues is that the hackers take many long detours to cover their tracks. It is not unusual for a hacker to start by breaking into a computer in Denmark and from there hop on to a system in, say, the USA. There he may jump between four different universities before coming back, via Germany, to the computer in Denmark he really wanted to test his strength against. "That makes it very difficult for us to trace them. Within a short time, 30 different system administrators around the world can be involved in the same case."
There isn't much James Bond about it
In contrast to American films, where slick digital detectives zoom in on the hackers and nail them to a virtual crime scene with a few keystrokes, reality is considerably less action-packed. "There isn't much James Bond about what we do," says Jørgen Bo Madsen, and explains: "By far the most of our time is spent communicating with other system administrators and reading log files." A log file is, figuratively speaking, a "recording" of everything that happens on a computer. And even though the log files are a couple of days old by the time Jørgen Bo Madsen gets them home, the long columns of numbers and letters are still exciting reading for him. "As I get an overview of the log files, I also get to know the hackers. For instance, I can tell whether it was the clumsy one, who always swaps the letters E and R, who has been there. Or I can recognize the highly skilled one who checks every corner of the system in no time and builds himself a back door so he can always come back." By the time Jørgen Bo Madsen reaches the point where he can survey the hackers' movements and knows their habits, three months have typically passed. For roughly a hundred days he has followed the trail and has probably been around the world several times. But it is not only the hacker hunter who lives with the hackers day and night; the hackers also live with the hunter. And hackers, who master the art of hiding in computers like no one else, are also experts at picking up the scent of others trying to hide in the same computers. Hackers have several concealment tactics. One of the more refined, which they can only use if they have full control of the computer, is to delete the log files that bear witness to their movements. Another, far more common, tactic is to change route at regular intervals, setting the hunter back a couple of days in the tracing work.
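As a small illustration of the kind of log reading described above, here is a Python sketch that counts failed logins per account; the file name, line format, and threshold are invented for the example and will differ on real systems:

    # Sketch: spotting "an unusually large number of wrong passwords" in a log.
    # The file name and line format are hypothetical.
    from collections import Counter

    failed = Counter()
    with open("auth.log", encoding="utf-8") as log:
        for line in log:
            if "FAILED LOGIN" in line:             # e.g. "... FAILED LOGIN user=jbm ..."
                user = line.split("user=")[-1].split()[0]
                failed[user] += 1

    for user, count in failed.most_common():
        if count > 10:                             # arbitrary threshold for "unusual"
            print(f"{user}: {count} failed attempts - worth a closer look")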
The hunter is not one bit better than his prey
But the hunter also lays down smokescreens and sets traps.
"We take chances all the time. If, for example, we want them to take a different route where our chances of tracing them are better, or if a computer is too risky to leave standing open, we shut the machine down. To avoid arousing the hackers' suspicion we post a fake message saying that the system is down for maintenance or the like," Jørgen Bo Madsen explains.
If the hackers behave the way hackers usually do, however, sooner or later they end up giving the hunters a helping hand quite involuntarily.
They do so because the number of computers they control grows explosively within a few months. Each system gives access to a whole series of new systems, which in turn give access to... The many possibilities often make the hackers careless. They get sloppy, forget that they are not invulnerable, and begin to relax their "security". Typically they cut down on the number of computers they use as relay stations, which makes it easier for the hunters to follow their trail.
Not everyone dares report the hackers to the police
When and if Jørgen Bo Madsen succeeds in tracing the telephone line the hackers are working from, his job is essentially done. All he has left to do is call the owner of the computer system in question and tell them what he knows. Only if the owner wants to report it to the police can an actual trace of the telephone line come into play. But if the owner of the first computer the hackers use on their worldwide burglary tour is, say, a bank that does not want to risk the public spotlight, the hunt ends a few metres from the prey. Often several companies are attacked at the same time. In the current case against the four Danish hackers there were solid traces of 13 digital break-ins in Denmark; only nine of the victims went to the police. One of the reasons that evidence was found of "only" 13 cases of hacking in Denmark was that many of those who held incriminating material managed to delete their traces. They did so because they were warned, and the warning was spread across Denmark by the press. When the rumour of the first arrests reached the hard core of the hacker community, they contacted the press and said that a large hacker raid was under way. Thanks to the press, it took only a few hours for the rumour to reach the remotest corner of the country. That takes some nerve!
f:\12000 essays\sciences (985)\Computer\Hackers Information Warfare.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Abstract
The popularity of the Internet has grown immeasurably in the past few years. Along with it, the so-called "hacker" community has grown and risen to a level where it's less of a black-market scenario and more of an "A Current Affair" scenario. Misconceptions as to what a hacker is and does run rampant in everyone who thinks they understand the Internet after using it a few times. In the next few pages I'm going to do my best to establish the true definition of what a hacker is and how global economic electronic warfare ties into it, give some background on the Internet, and throw in a plethora of scatological material purely for your reading enjoyment. I will attempt to use the least technical computer terms I can, but in order to make my point, at times I have no choice.
Geoff Stafford
Dr. Clark
PHL 233
There are many misconceptions as to the definition of what a hacker truly is. In all my research, this is the best definition I've found:
Pretend you're walking down the street, the same street you have always walked down. One day, you see a big wooden or metal box with wires coming out of it sitting on the sidewalk where there had been none.
Many people won't even notice. Others might say, "Oh, a box on the street." A few might wonder what it does and then move on. The hacker, the true hacker, will see the box, stop, examine it, wonder about it, and spend mental time trying to figure it out. Given the proper circumstances, he might come back later to look closely at the wiring, or even be so bold as to open the box. Not maliciously, just out of curiosity. The hacker wants to know how things work.(8)
Hackers truly are "America's Most Valuable Resource,"(4:264) as ex-CIA officer Robert Steele has said. But if we don't stop screwing over our own countrymen, we will never be looked at as anything more than common gutter trash. Hacking computers for the sole purpose of collecting systems like space-age baseball cards is stupid and pointless, and can only lead to a quick trip up the river.
Let's say that everyone was given an opportunity to hack without any worry of prosecution with free access to a safe system to hack from, with the only catch being that you could not hack certain systems. Military, government, financial, commercial and university systems would all still be fair game. Every operating system, every application, every network type all open to your curious minds.
Would this be a good alternative? Could you follow a few simple guidelines for the offer of virtually unlimited hacking with no worry of governmental interference?
Where am I going with this?
Right now we are at war. You may not realize it, but we all feel the implications of this war, because it's a war with no allies, and enormous stakes. It's a war of economics.
The very countries that shake our hands over the conference tables of NATO and the United Nations are picking our pockets. Whether it be the blatant theft of American R&D by Japanese firms, or the clandestine and governmentally-sanctioned bugging of Air France first-class seating, or the cloak-and-dagger hacking of the SWIFT network (1:24) by the German BND's Project Rahab(1:24), America is getting screwed.
Every country on the planet is coming at us. Let's face it, we are the leaders in everything. Period. Every important discovery in this century has been by an American or by an American company. Certainly other countries have better profited by our discoveries, but nonetheless, we are the world's think-tank.
So, is it fair that we keep getting shafted by these so-called "allies"? Is it fair that we sit idly by, like some old hound too lazy to scratch at the ticks sucking out our life's blood by the gallon? Hell no.
Let's say that an enterprising group of computer hackers decided to strike back. Using equipment bought legally, using network connections obtained and paid for legally, and making sure that all usage was tracked and paid for, this same group began a systematic attack of foreign computers. Then, upon having gained access, gave any and all information obtained to American corporations and the Federal government.
What laws would be broken? Federal Computer Crime Statutes specifically target so-called "Federal Interest Computers."(6:133) (i.e.: banks, telecommunications, military, etc.) Since these attacks would involve foreign systems, those statutes would not apply. If all calls and network connections were promptly paid for, no toll-fraud or other communications related laws would apply.
International law is so muddled that the chances of getting extradited by a country like France for breaking into systems in Paris from Albuquerque is slim at best. Even more slim when factoring in that the information gained was given to the CIA and American corporations.
Every hacking case involving international break-ins has been tried and convicted based on other crimes. Although the media may spray headlines like "Dutch Hackers Invade Internet" or "German Hackers Raid NASA," those hackers were tried for breaking into systems within THEIR OWN COUNTRIES...not somewhere else. A hacker who uses the handle of 8lgm in England got press for hacking world-wide, but got nailed hacking locally(3). Australia's 'Realm Hackers': Phoenix, Electron & Nom hacked almost exclusively other countries, but use of AT&T calling cards rather than Australian Telecom got them a charge of defrauding the Australian government(3). Dutch hacker RGB got huge press hacking a US military site and creating a "dquayle" account, but got nailed while hacking a local university(3). The list goes on and on.
I asked several people about the workability of my proposal. Most seemed to concur that it was highly unlikely that anyone would have to fear any action by American law enforcement, or of extradition to foreign soil to face charges there. The most likely form of retribution would be eradication by agents of that government.
Well, I'm willing to take that chance, but only after I get further information from as many different sources as I can. I'm not looking for anyone to condone these actions, nor to finance them. I'm only interested in any possible legal action that may interfere with my freedom.
We must take the offensive, and attack the electronic borders of other countries as vigorously as they attack us, if not more so. This is indeed a war, and America must not lose.
There have always been confrontations online. On the net, as in life, unpleasantness is unavoidable. However, on the net the behavior is far more pronounced, since it provokes a much greater response in the limited online environment than it would in the real world. People behind such behavior in the real world can be dealt with or avoided, but online they cannot.
In the real world, annoying people don't impersonate you in national forums. In the real world, annoying people don't walk into your room and go through your desk and run through the town showing everyone your private papers or possessions. In the real world, people can't readily imitate your handwriting or voice and insult your friends and family by letter or telephone. In the real world people don't rob or vandalize and leave your fingerprints behind.
The Internet is not the real world.
All of the above continually happens on the Internet, and there is little anyone can do to stop it. The perpetrators know full well how impervious they are to retribution, since the only people who can put their activities to a complete halt are reluctant to open cases against computer criminals due to the complex nature of the crimes.
The Internet still clings to the anarchy of the Arpanet that spawned it, and many people would love for the status quo to remain. However, the actions of a few miscreants will force lasting changes on the net as a whole. The wanton destruction of sites, the petty forgeries, the needless break-ins and the poor blackmail attempts do not go unnoticed by the authorities.
I personally couldn't care less what people do on the net. I know it is fantasy land. I know it exists only in our minds, and should not have any long-lasting effect in the real world. Unfortunately, as the net's presence grows larger and larger, and the world begins to accept it as an entity in and of itself, it will become harder to convince inexperienced users that the net is not real.
I have always played by certain rules and they have worked well for me in the years I've been online. These rules can best be summed up by the following quote, "We are taught to love all our neighbors. Be courteous. Be peaceful. But if someone lays his hands on you, send them to the cemetery."
The moment someone crosses the line, and interferes with my
well-being in any setting (even one that is arguably unreal such as the Internet) I will do whatever necessary to ensure that I can once again go about minding my own business unmolested. I am not alone in this feeling. There are hundreds of net-loving anarchists who don't want the extra attention and bad press brought to our little fantasy land by people who never learned how to play well as children. Even these diehard anti-authoritarians are finding themselves caught in a serious quandary: do they do nothing and suffer attacks, or do they make the phone call to Washington and try to get the situation resolved?
Many people cannot afford the risk of striking back electronically, as some people may suggest. Other people do not have the skill set needed to orchestrate an all out electronic assault against an unknown, even if they pay no heed to the legal risk. Even so, should anyone attempt such retribution electronically, the assailant will merely move to a new site and begin anew.
People do not like to deal with police. No one LOVES to call up their local law enforcement office and have a nice chat. Almost everyone feels somewhat nervous dealing with these figures knowing that they may just as well decide to turn their focus on you rather than the people causing problems. Even if you live your life crime-free, there is always that underlying nervousness; even in the real world.
However, begin an assault directed against any individual, and I guarantee he or she will overcome such feelings and make the needed phone call. It isn't the "hacking" per se that will cause anyone's downfall or bring about governmental regulation of the net, but the unchecked attitudes and gross disregard for human dignity that run rampant online.
What good can come from any of this? Surely people will regain the freedom to go about their business, but what of the added governmental attentions?
Electronic Anti-Stalking Laws?
Electronic Trespass?
Electronic Forgery?
False Electronic Identification?
Electronic Shoplifting?
Electronic Burglary?
Electronic Assault?
Electronic Loitering?
Illegal Packet Sniffing equated as Illegal Wiretaps? (7:69)
The potential for new legislation is immense. As the networks further permeate our real lives, the continued unacceptable behavior and the public outcry that follows will force the ruling bodies to draft such laws. And who will enforce these laws? And who will watch the watchmen? Oftentimes these issues are left to resolve
themselves after the laws have passed.
Is this the future we want? One of increased legislation and governmental regulation? With the development of the supposed National Information Super-Highway, the tools will be in place for a new body to continually monitor traffic for suspect activity and uphold any newly passed legislation. Do not think that the ruling forces have
not considered that potential.
The Information Age has arrived, and most people don't recognize the serious nature behind it. Computers and the related technology can either be the answer to the human race's problems or a cause of its demise. Right now we rely on computers too much and have too little security to protect us if they fail. In the coming years, we will see amazing technology permeate every part of our lives; some of it will be welcomed, some won't be, and some will be used against us. If we don't learn to handle the power that computers give us in the next few years, we will all pay dearly for it. Remember the warning: the future is here now, and most people aren't ready to handle it.
References
1. Timothy Haight, "High Tech Spies", Time Magazine, July 5, 1993, p.24
2. Mark Ludwig, "Beyond van Eck Phreaking", Consumertronics, 1988, p.47
3. 2600: The Hacker Quarterly, Summer 1992
4. Winn Schwartau. Chaos on the Electronic Superhighway. New York, NY; Thunder Mouth's Press. 1994, p.264-267.
5. Phrack, Issue #46
6. Neil Munro, "Microwave Weapon Stuns Iraqis", Defense News, April 15, 1992, p.133.
7. Alvin and Heidi Toffler, War and Anti-War. Pittsburgh, PA. Little, Brown and Co., 1993, p.69.
8. Hactic, Issue #16 - Fall 1994
Note: References 3, 5, and 8 are underground electronic magazines published and spread entirely through the Internet and bulletin boards. There are no page numbers, no authors' names are ever given (for security reasons due to content), and obviously no publisher.
f:\12000 essays\sciences (985)\Computer\Hackers Manifesto.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hackers Manifesto -
Another one got caught today, it's all over the papers. "Teenager Arrested
in Computer Crime Scandal", "Hacker Arrested after Bank Tampering"...
Damn kids. They're all alike.
But did you, in your three-piece psychology and 1950's technobrain, ever
take a look behind the eyes of the hacker? Did you ever wonder what made
him tick, what forces shaped him, what may have molded him?
I am a hacker, enter my world...
Mine is a world that begins with school...
I'm smarter than most of the other kids, this crap they teach us bores me...
Damn underachiever. They're all alike.
I'm in junior high or high school. I've listened to teachers explain for the
fifteenth time how to reduce a fraction. I understand it. "No, Ms. Smith, I
didn't show my work. I did it in my head..."
Damn kid. Probably copied it. They're all alike.
I made a discovery today. I found a computer. Wait a second, this is cool. It
does what I want it to. If it makes a mistake, it's because I screwed it up.
Not because it doesn't like me...
Or feels threatened by me...
Or thinks I'm a smart ass...
Or doesn't like teaching and shouldn't be here...
Damn kid. All he does is play games. They're all alike.
And then it happened...
a door opened to a world...
rushing through the phone line like heroin through an addict's veins, an electronic pulse is sent out, a refuge from the day-to-day incompetencies is sought...
a board is found.
"This is it... this is where I belong..."
I know everyone here...
even if I've never met them, never talked to them, may never hear from them again...
I know you all...
Damn kid. Tying up the phone line again. They're all alike...
You bet your ass we're all alike...
we've been spoon-fed baby food at school when we hungered for steak...
the bits of meat that you did let slip through were pre-chewed and tasteless. We've been dominated by sadists, or ignored by the apathetic. The few that had something to teach found us willing pupils, but those few are like drops of water in the desert.
This is our world now...
the world of the electron and the switch, the beauty of the baud. We make use of a service already existing without paying for what could be dirt-cheap if it wasn't run by profiteering gluttons, and you call us criminals.
We explore...
and you call us criminals.
We seek after knowledge...
and you call us criminals.
We exist without skin color, without
nationality, without religious bias...
and you call us criminals. You build atomic bombs, you wage wars, you murder, cheat, and lie to us and try to make us believe it's for our own good, yet we're the criminals. Yes, I am a criminal. My crime is that of curiosity. My crime is that of judging people by what they say and think, not what they look like. My crime is that of outsmarting you, something that you will never forgive me for.
I am a hacker, and this is my manifesto.
You may stop this individual, but you can't stop us all...
after all, we're all alike.
f:\12000 essays\sciences (985)\Computer\Hacking to Peaces.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hacking to Peaces
The "Information Superhighway" shares common traits with a regular highway. People travel on it daily and attempt to get to a predetermined destination, and there are criminals who want to prey on other travelers in any way possible. A reckless driver who runs another off the road is like a skilled hacker: hacking is the way to torment people on the Internet. Much of the mainstream hacking community feels it is their right to confound others for their own entertainment. Simply stated, hacking is the intrusion into a computer for personal benefit. The motive does not have to be profit; many do it out of curiosity. Hackers seek to fill an emptiness left by an inadequate education. Do hackers have the right to explore wherever they want on the Internet (with or without permission), or does the general population have the right to be safe from their trespasses?
To tackle this question, people have to know what a hacker is. The connotation of the word 'hacker' is a person that does mischief to computer systems, like computer viruses and cybercrimes. "There is no single widely-used definition of computer-related crime, [so] computer network users and law enforcement officials must distinguish between illegal or deliberate network abuse versus behavior that is merely annoying. Legal systems everywhere are busily studying ways of dealing with crimes and criminals on the Internet" (Voss, 1996, p. 2).
There are ultimately three different views on the hacker controversy. The first is that hacking or any intrusion on a computer is just like trespassing. Any electric medium should be treated just like it were tangible, and all laws should be followed as such. On the other extreme are the people that see hacking as a privilege that falls under the right of free speech. The limits of the law should be pushed to their farthest extent. They believe that hacking is a right that belongs to the individual. The third group is the people that are in the middle of the two groups. These people feel that stealing information is a crime, and that privacy is something that hackers should not invade. They are not as right wing as the people that feel that hackers should be eliminated.
Hackers have their own ideals about how the Internet should operate. The fewer laws there are to impede a hacker's right to say and do what they want, the better they feel. Most people who hack fit a certain profile. Most of them are disappointed with school, feeling "I'm smarter than most of the other kids, this crap they teach us bores me" (Mentor, 1986, p. 70). Computers are these hackers' only refuge, and the Internet gives them a way to express themselves. The hacker environment hinges on people's First Amendment right to freedom of speech. Some justify their hacking by saying that what they do is legitimate.
Some hackers feel their pastime is legitimate and do it only for the information; others do it for the challenge. Still other hackers feel it is their right to correct offenses done to people by large corporations or the government. Hackers have brought it to the public's attention that the government keeps information on people without the consent of the individual. Was it a crime for hackers to show that the government was intruding on the privacy of the public? The government hit the panic stage when reports stated that over 65% of the government's computers could be hacked into 95% of the time (Anthes, 1996, p. 21). Other hackers uncover dubious business practices that large corporations try to get away with. People find this information helpful and disturbing. However, the public may not feel that the benefits outweigh the problems that hackers can cause. When companies find intruders in their computer systems, they strengthen their security, which costs money. Reports indicate that hackers cost companies a total of $150 to $300 billion a year (Steffora & Cheek, 1994, p. 43). Security systems are necessary to prevent losses, and the money that companies invest in security goes into the cost of the products they sell. This, in turn, raises prices, which is not popular with the public.
The government feels that it should step in and make the choices when it comes to the control of cyberspace. However, the government has a tremendous amount of trouble with handling the laws dealing with hacking. What most of the law enforcement agencies follow is the "Computer Fraud and Abuse Act of 1986." "Violations of the Computer Fraud and Abuse Act include intrusions into government, financial, most medical, and Federal interest computers. Federal interest computers are defined by law as two or more computers involved in the criminal offense, which are located in different states. Therefore, a commercial computer which is the victim of an intrusion coming from another state is a "Federal interest" computer" (Federal, 1996, p. 1). Most of the time, the laws have to be extremely specific, and hackers find loopholes in these laws, ultimately getting around them. Another problem with the laws is the people that make the laws. Legislators have to be familiar with high-tech materials that these hackers are using, but most of them know very little about computer systems. The current law system is unfair; it tramples over the rights of the individual, and is not productive, as illustrated in the following case. David LaMacchia used his computers as "distribution centers for illegally copied software. In this case, the law was not prepared to handle whatever crimes may have been committed. The judge ruled that there was no conspiracy and dismissed the case. If statutes were in place to address the liability taken on by a BBS operator for the materials contained on the system, situations like this might be handled very differently" (Voss, 1996, p. 2). The government is not ready to handle the continually expanding reaches of the Internet.
If the government cannot handle the hackers, then who should judge the limits of hacking? This decision has to be placed in the hands of the public, but in all probability, hackers will never be stopped. The hacker's mentality stems from boredom and a need for adventure, and no laws or public beliefs that try to suppress it can succeed. Every institution hackers have encountered has oppressed them, and hacking is their only means of release; the government and the public cannot take that away from them. That is not necessarily a bad thing. Hacking can bring some good results, especially bringing oppressive bodies (like the government and large corporations) to their knees by releasing information that shows how suppressive they have been. However, people who hack to annoy or to destroy are not valid in their reasoning. Nothing is accomplished by mindless destruction; other than being a phallic display, it serves no purpose. Laws and regulations should limit these people's capability to cause havoc. Hacking will continue to be a debate in and out of the computer field, but maybe someday the public will accept hackers. Conversely, maybe the extreme hackers will calm down and follow accepted behavior.
References
Anthes, G. H. (1996, September 16). Few Gains Made Against Hackers. Computerworld, 30(38). 21.
Federal Bureau of Investigation. (1997, February). Federal Bureau of Investigation National Computer Crime Squad. [Internet]. Available: World Wide Web, http://www.fbi.gov/ programs/nccs/compcrim.htm
Mentor, The. (1986). Hacker's Manifesto, or The Conscience of a Hacker. In Victor J. Vitanza (Ed.), CyberReader (pp. 70-71). Boston: Allyn and Bacon.
Steffora, A., & Cheek, M. (1994, February 7). Hacking Goes Legit. Industry Week, 243(3). 43-44, 46.
Voss, Natalie D. (1996, December). Crime on the Internet. Jones Telecommunication and Multimedia Encyclopedia. [Internet]. Available: World Wide Web, http://www.digitalcentury.com/encyclo/update/crime.html
f:\12000 essays\sciences (985)\Computer\Hebrew Text and Fonts.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hebrew Text and Fonts
Today's written language is quickly becoming history. Just as the carved tablet has become a conversation piece in the archeologist's living room, handwriting is quickly becoming as ancient as the Dead Sea Scrolls.
A new form of visual communication is taking over the entire world. Languages from across this widespread planet are now becoming more accessible to every culture. As the pen and pencil begin to disappear into the history books, keyboards and monitors are making it easier for people to communicate in fast and effective ways.
The Hebrew language has always been mysterious and bastardized, composed of ancient Greek and Egyptian symbol derivatives. The language eventually became independent, although it remains very mysterious, and is used mainly by Israelis. Hebrew writing has now taken a new form, a form the English language has taken for many years. This new form, called "type," is not new by any means; however, up until a few years ago it was impossible to find a Hebrew typeface on any word processing unit unless it was a specialized typewriter made in Jerusalem. Hebrew type has now been transformed into a computer-compatible typeface found in two forms: script and print. The script form of Hebrew type is the equivalent of the commonly used italic form of an English typeface, while the print form is a more linear and boxy form of the Hebrew lettering.
Hebrew fonts and word processing software are easily downloadable by anyone with access to the Internet. These programs are not compatible with English software but work on their own to allow for easy typing and printing of Hebrew documents. They also allow for communication in, and access to, the Hebrew language through the Internet and e-mail.
Through this new step we see that written language has taken another step forward in its evolution. Language has become more easily understood by other cultures, diminishing the distance and the miscommunication between what at times seem to be completely different worlds.
f:\12000 essays\sciences (985)\Computer\history of computers in America.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
History of the Computer Industry in America
Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such a device, one that changes the way we work, live, and play, is a special one indeed. A machine that has done all this and more now exists in nearly every business in the U.S. and in one out of every two households (Hall, 156). This incredible invention is the computer. The electronic computer has been around for over half a century, but its ancestors have been around for 2000 years. However, only in the last 40 years has it changed American society. From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of people's lives for the better.
The earliest existence of the modern day computer ancestor is the abacus. These date back almost 2000 years. It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to "programming" rules that the user must memorize, all ordinary arithmetic operations can be performed (Soma, 14). The next innovation in computers took place in 1642, when Blaise Pascal invented the first digital calculating machine. It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal's father, who was a tax collector (Soma, 32).
In the early 1800s, a mathematics professor named Charles Babbage designed an automatic calculating machine. It was steam powered and could store up to 1000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need. It was programmed by and stored data on cards with holes punched in them, appropriately called punch cards. His inventions were failures for the most part because of the lack of precision machining techniques used at the time and the lack of demand
for such a device (Soma, 46).
After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle the interest (Osborne, 45). Many of these new advances involved complex calculations and formulas that were very time consuming for human calculation. The first major use for a computer in the U.S. was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read information on cards without human intervention (Gulliver, 82). Since the population of the U.S. was increasing so fast, the computer was an essential tool in tabulating the totals.
These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines (IBM), Remington-Rand, Burroughs, and other corporations. By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world's business computing and a good portion of the computing work in science (Chposky, 73).
By the late 1930's punched-card machine techniques had become so
well established and reliable that Howard Hathaway Aiken, in
collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken's machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations. Also, it had
special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter. It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention (Chposky, 103).
The outbreak of World War II produced a desperate need for computing capability, especially for the military. New weapons systems were produced which needed trajectory tables and other essential data. In 1942, J. Presper Eckert, John W. Mauchly, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC, for "Electronic Numerical Integrator And Calculator". It could multiply two numbers at the rate of 300 products per second, by finding the value of
each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers (Dolotta, 47).
ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet of floor space, and used about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially re-wire it to perform whatever
task he wanted the computer to do. It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955
(Dolotta, 50).
Mathematician John von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed structure and yet be able to execute any kind of computation effectively by means of properly
programmed control without the need for any changes in hardware. Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers. These ideas, which came to be referred to as the stored-program technique, became fundamental for
future generations of high-speed digital computers and were universally adopted (Hall, 73).
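As a toy sketch of the stored-program idea, here is a tiny Python "machine" in which instructions and data share the same memory; the instruction set is invented purely for illustration:

    # Toy stored-program machine: the program and its data live in one memory,
    # so changing what the machine does means changing memory, not rewiring it.
    memory = [
        ("LOAD", 7),    # copy the value at address 7 into the accumulator
        ("ADD", 8),     # add the value at address 8
        ("STORE", 9),   # write the result to address 9
        ("HALT", 0),
        0, 0, 0,        # unused cells (addresses 4-6)
        2, 3, 0,        # addresses 7, 8, 9: the data
    ]

    acc, pc = 0, 0
    while True:
        op, addr = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            break

    print(memory[9])    # 5: the machine computed 2 + 3 from its own memory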
The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information (Hall, 75). These machines had punched-card or punched-tape input and output devices and RAMs of 1000-word capacity. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program
computers required considerable maintenance, usually attained 70% to 80% reliable operation, and were used for 8 to 12 years. Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. This group of machines included EDVAC and UNIVAC, the first commercially available computers (Hazewindus, 102).
The UNIVAC was developed by John W. Mauchly and J. Presper Eckert in the 1950's. Together they had formed the Eckert-Mauchly Computer Corporation, America's first computer company, in the 1940's. During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer. It was delivered to the U.S. Census Bureau in 1951, where it was used to help tabulate the U.S. population (Hazewindus, 124).
Early in the 1950s two important engineering discoveries changed the electronic computer field. The first computers were made with vacuum tubes, but by the late 1950's computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient (Shallis, 40). In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit. Gone was the bulky, unreliable, but fast machine; now computers began to
become more compact and more reliable, and to have greater capacity (Shallis, 49).
These new technical discoveries rapidly found their way into new models of digital computers. Memory storage capacities increased 800% in commercially available machines by the early 1960s and speeds increased by an equally large margin. These machines were very
expensive to purchase or to rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran. Such computers were typically found in large computer centres--operated by industry, government, and private
laboratories--staffed with many programmers and support personnel (Rogers, 77). By 1956, 76 of IBM's large computer mainframes were in use, compared with only 46 UNIVACs (Chposky, 125).
In the 1960s efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a core memory of 98,000 words and multiplied in 10 microseconds. Stretch was provided with several ranks of memory having slower access for the ranks of greater capacity, the fastest access time being less than 1 microsecond and the total capacity in the vicinity of 100 million words (Chposky, 147).
During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment. These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays,
and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in business for such applications as accounting, payroll, inventory control, ordering supplies, and billing. Central processing units (CPUs) for such purposes did not need to be
very fast arithmetically and were primarily used to access large amounts of records on file. The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were
also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds (Rogers, 98).
The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centres and toward a broader range of applications for less-costly computer systems. Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities. In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer
installations, but great advances in applications programming languages removed these obstacles. Applications languages became available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks (Osborne, 146). In 1971 Marcian E. Hoff, Jr., an engineer at the Intel Corporation,
invented the microprocessor and another stage in the development of the computer began (Shallis, 121).
A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what are called large-scale integration techniques. In the 1950s it was realized that "scaling down" the size of electronic
digital computer circuits and parts would increase speed and efficiency and improve performance. However, at that time the manufacturing methods were not good enough to accomplish such a task. About 1960 photo printing of conductive circuit boards to eliminate wiring became highly developed. Then it became possible to build resistors and capacitors into the circuitry by photographic means (Rogers, 142). In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon. In the 1980s very large scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common. Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages. The
size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (Rogers, 153).
One of the first of such machines was introduced in January 1975. Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380 (Rose, 32). The computer was called the Altair 8800. Its programming involved pushing buttons and flipping switches on the front
of the box. It didn't include a monitor or keyboard, and its applications were very limited (Jacobs, 53). Even so, many orders came in for it, and several famous founders of computer and software manufacturing companies got their start in computing through the Altair.
For example, Steve Jobs and Steve Wozniak, founders of Apple Computer, built a much cheaper, yet more productive version of the Altair and turned their hobby into a business (Fluegelman, 16).
After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had set the computer industry standard for decades. They held their position as the standard when they introduced their first personal
computer, the IBM Model 60 in 1975 (Chposky, 156). However, the newly formed Apple Computer company was releasing its own personal computer, the Apple II (the Apple I was the first computer designed by Jobs and Wozniak in Wozniak's garage, which was not produced on a wide scale). Software was needed to run the computers as well. Microsoft developed a
Disk Operating System (MS-DOS) for the IBM computer while Apple developed its own software system (Rose, 37). Because Microsoft had now set the software standard for IBMs, every software manufacturer had to make their software compatible with Microsoft's. This would lead to huge profits for Microsoft (Cringley, 163).
The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity. Nearly every computer manufacturer accomplished this and computers popped up everywhere. Computers were in businesses keeping track of inventories. Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in society and built up a huge industry (Cringley, 174).
The future is promising for the computer industry and its technology. The speed of processors is expected to double every year and a half in the coming years. As manufacturing techniques are further perfected the prices of computer systems are expected to steadily fall.
However, since microprocessor technology will keep advancing, its higher costs will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but technology will steadily increase (Zachary, 42).
Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States. It now comprises thousands of companies, making everything from multi-million dollar high-speed
supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year (Malone, 192). Surely, the computer has impacted every aspect of people's lives. It has affected the way people work and play. It has
made everyone's life easier by doing difficult work for people. The computer truly is one of the most incredible inventions in history.
Works Cited
Chposky, James. Blue Magic. New York: Facts on File Publishing. 1988.
Cringley, Robert X. Accidental Empires. Reading, MA: Addison Wesley
Publishing, 1992.
Dolotta, T.A. Data Processing: 1940-1985. New York: John Wiley & Sons,
1985.
Fluegelman, Andrew. A New World, MacWorld. San Jose, Ca: MacWorld
Publishing, February 1984 (Premiere Issue).
Hall, Peter. Silicon Landscapes. Boston: Allen & Unwin, 1985.
Gulliver, David. Silicon Valley and Beyond. Berkeley, Ca: Berkeley Area
Government Press, 1981.
Hazewindus, Nico. The U.S. Microelectronics Industry. New York:
Pergamon Press, 1988.
Jacobs, Christopher W. The Altair 8800, Popular Electronics. New
York: Popular Electronics Publishing, January 1975.
Malone, Michael S. The Big Scare: The U.S. Computer Industry. Garden
City, NY: Doubleday & Co., 1985.
Osborne, Adam. Hypergrowth. Berkeley, Ca: Idthekkethan Publishing
Company, 1984.
Rogers, Everett M. Silicon Valley Fever. New York: Basic Books, Inc.
Publishing, 1984.
Rose, Frank. West of Eden. New York: Viking Publishing, 1989.
Shallis, Michael. The Silicon Idol. New York: Shocken Books, 1984.
Soma, John T. The History of the Computer. Toronto: Lexington Books,
1976.
Zachary, William. The Future of Computing, Byte. Boston: Byte
Publishing, August 1994.
f:\12000 essays\sciences (985)\Computer\History of Computers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ENG 121
The volume and use of computers in the world are so great that they have become impossible to ignore. Computers appear to us in so many ways that we often fail to see them for what they actually are. People interact with a computer when they purchase their morning coffee at the vending machine. As they drive to work, the traffic lights that so often hamper them are controlled by computers in an attempt to speed the journey. Accept it or not, the computer has invaded our lives.
The origins and roots of computers started out as many other inventions and technologies have in the past. They evolved from a relatively simple idea or plan designed to help perform functions more easily and quickly. The first basic types of computers were designed to do just that: compute. They performed basic math functions such as multiplication and division and displayed the results in a variety of ways. Some computers displayed results as a binary representation on electronic lamps. Binary means using only ones and zeros; thus, lit lamps represented ones and unlit lamps represented zeros. The irony of this is that people needed to perform another mathematical step, translating binary to decimal, to make the result readable to the user.
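As a small illustration of that translation step, the sketch below (in Python, with the eight-lamp row chosen arbitrarily) reads a row of lamps as a binary number and converts it to decimal.

    # Lit lamps are written as 1, unlit lamps as 0; this row is just an example.
    lamps = [1, 0, 1, 1, 0, 1, 0, 0]   # most significant lamp first

    # Each lamp doubles the running total and then adds its own value.
    value = 0
    for bit in lamps:
        value = value * 2 + bit
    print(value)   # prints 180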
One of the first computers was called ENIAC. It was huge and monstrous, nearly the size of a standard railroad car. It contained electronic tubes, heavy-gauge wiring, angle iron, and knife switches, to name just a few of the components. It is difficult to believe that computers have since evolved into the suitcase-sized microcomputers of the 1990's.
Computers eventually evolved into less archaic looking devices near the end of the 1960's. Their size had been reduced to that of a small automobile and they were processing segments of information at faster rates than older models. Most computers at this time were termed "mainframes" due to the fact that many computers were linked together to perform a given function. The primary users of these types of computers were military agencies and large corporations such as Bell, AT&T, General Electric, and Boeing. Organizations such as these had the funds to afford such technologies. However, operation of these computers required extensive intelligence and manpower resources. The average person could not have fathomed trying to operate and use these million-dollar processors.
The United States is credited with pioneering the computer. It was not until the early 1970's that nations such as Japan and the United Kingdom started utilizing technology of their own for the development of the computer. This resulted in newer components and smaller sized computers. The use and operation of computers had developed into a form that people of average intelligence could handle and manipulate without too much ado. When the economies of other nations started to compete with the United States, the computer industry expanded at a great rate. Prices dropped dramatically and computers became more affordable to the average household. Like the invention of the wheel, the computer is here to stay.
The operation and use of computers in our present era of the 1990's has become so easy and simple that perhaps we may have taken too much for granted. Almost everything of use in society requires some form of training or education. Many people say that the predecessor to the computer was the typewriter. The typewriter definitely required training and experience in order to operate it at a usable and efficient level. Children are being taught basic computer skills in the classroom in order to prepare them for the future evolution of the computer age.
The history of computers started out about 2000 years ago, at the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them. When these beads are moved around, according to programming rules memorized by the user, all regular arithmetic problems can be done. Another important invention around the same time was the Astrolabe, used for navigation.
Blaise Pascal is usually credited with building the first digital computer in 1642. It added numbers entered with dials and was made to help his father, a tax collector. In 1671, Gottfried Wilhelm von Leibniz invented a computer that was built in 1694. It could add and, after changing some things around, multiply. Leibniz invented a special stepped gear mechanism for introducing the addend digits, and this is still being used.
The prototypes made by Pascal and Leibniz were not used in many places, and were considered weird until a little more than a century later, when Thomas of Colmar (a.k.a. Charles Xavier Thomas) created the first successful mechanical calculator that could add, subtract, multiply, and divide. A lot of improved desktop calculators by many inventors followed, so that by about 1890 the range of improvements included: accumulation of partial results, storage and automatic reentry of past results (a memory function), and printing of the results. Each of these required manual installation. These improvements were mainly made for commercial users, and not for the needs of science.
While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computers was started in Cambridge, England, by Charles Babbage (after whom the computer store "Babbage's" is named), a mathematics professor. In 1812, Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated. From this he suspected that it should be possible to do these automatically. He began to design an automatic mechanical calculating machine, which he called a difference engine. By 1822, he had a working model to demonstrate. Financial help from the British Government was attained and Babbage started fabrication of a difference engine in 1823. It was intended to be steam powered and fully automatic, including the printing of the resulting tables, and commanded by a fixed instruction program. The difference engine, although having limited adaptability and applicability, was really a great advance. Babbage continued to work on it for the next 10 years, but in 1833 he lost interest because he thought he had a better idea: the construction of what would now be called a general purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The ideas of this design showed a lot of foresight, although this couldn't be appreciated until a full century later.
The plans for this engine required a decimal computer operating on numbers of 50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such digits. The built-in operations were supposed to include everything that a modern general-purpose computer would need, even the all-important conditional control transfer capability that would allow commands to be executed in any order, not just the order in which they were programmed.
As people can see, it took quite a large amount of intelligence and fortitude to come to the 1990's style and use of computers. People have assumed that computers are a natural development in society and take them for granted. Just as people have learned to drive an automobile, it also takes skill and learning to utilize a computer.
Computers in society have become difficult to understand. Exactly what they consisted of and what actions they performed were highly dependent upon the type of computer. To say a person had a typical computer doesn't necessarily narrow down just what the capabilities of that computer were. Computer styles and types covered so many different functions and actions that it was difficult to name them all. The purpose of the original computers of the 1940's was easy to define when they were first invented: they primarily performed mathematical functions many times faster than any person could have calculated. However, the evolution of the computer had created many styles and types that were greatly dependent on a well-defined purpose.
The computers of the 1990's roughly fell into three groups consisting of mainframes, networking units, and personal computers. Mainframe computers were extremely large sized modules and had the capabilities of processing and storing massive amounts of data in the form of numbers and words. Mainframes were the first types of computers developed in the 1940's. Users of these types of computers ranged from banking firms to large corporations and government agencies. They usually were very expensive in cost but designed to last at least five to ten years. They also required well-educated and experienced manpower to be operated and maintained. Harry Wulforst, in his book Breakthrough to the Computer Age, describes the old mainframes of the 1940's compared to those of the 1990's by speculating, "...the contrast to the sound of the sputtering motor powering the first flights of the Wright Brothers at Kitty Hawk and the roar of the mighty engines on a Cape Canaveral launching pad" (126).
Networking computers derived from the idea of bettering communications. They were medium sized computers specifically designed to link and communicate with other computers. The United States government initially designed and utilized these type of computers in the 1960's in order to better the national response to nuclear threats and attacks. The Internet developed as a direct result of this communication system. In the 1990's, there were literally thousands of these communication computers scattered all over the world and they served as the communication traffic managers for the entire Internet. One source stated it best concerning the volume of Internet computers by revealing, "... the number of hosts on the Internet began an explosive growth. By 1988 there were over 50,000 hosts. A year later, there were three times that many" (Campbell-Kelly and Aspray 297).
The personal computers that are in large abundance in the 1990's are actually very simple machines. Their basic purpose is to provide a usable platform for a person to perform given tasks more easily and quickly. They perform word processing, spreadsheet functions and person-to-person communications, just to name a few. They are also a great form of enjoyment, as many games have been developed to play on these types of computers. These computers are the most numerous types in the world due to their relatively small cost and size.
The internal workings and mechanics of personal computers primarily consisted of a central processing unit, a keyboard, a video monitor and possibly a printer unit. The central processing unit is the heart and brains of the system. The functions of the central processing unit were based on a unit called the Von Neumann computer designed in 1952. As stated in the book The Dream Machine, the Von Neumann computer consisted of an input, memory, control, arithmetic unit and output as basic processes of a central processing unit. It has become the basic design and fundamental basis for the development of most computers (Palfreman and Swade 48).
Works Cited
Wulforst, Harry. Breakthrough to the Computer Age. New York: Charles Scribner's Sons, 1982.
Palfreman, Jon and Doron Swade. The Dream Machine. London: BBC Books, 1991.
Campbell-Kelly, Martin and William Aspray. Computer, A History of the Information Machine. New York: BasicBooks, 1996.
f:\12000 essays\sciences (985)\Computer\History of The Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
History of The Internet
-------------
The Internet is a worldwide connection of thousands of computer
networks. All of them speak the same language, TCP/IP, the standard
protocol. The Internet allows people with access to these networks to share
information and knowledge. Resources available on the Internet are chat
groups, e-mail, newsgroups, file transfers, and the World Wide Web. The
Internet has no centralized authority and it is uncensored. The Internet
belongs to everyone and to no one.
The Internet is structured in a hierarchy. At the top, each country
has at least one public backbone network. Backbone networks are made of
high speed lines that connect to other backbones. There are thousands of
service providers and networks that connect home or college users to the
backbone networks. Today, there are more than fifty-thousand networks in
more than one-hundred countries worldwide. However, it all started with one
network.
In the early 1960's the Cold War was escalating and the United States
Government was faced with a problem. How could the country communicate
after a nuclear war? The Pentagon's Advanced Research Projects Agency,
ARPA, had a solution. They would create a non-centralized network that
linked from city to city, and base to base. The network was designed to
function when parts of it were destroyed. The network could not have a
center because it would be a primary target for enemies. In 1969, ARPANET
was created, named after its original Pentagon sponsor. There were four
supercomputer stations, called nodes, on this high speed network.
ARPANET grew during the 1970's as more and more supercomputer stations
were added. The users of ARPANET had changed the high speed network to an
electronic post office. Scientists and researchers used ARPANET to
collaborate on projects and to trade notes. Eventually, people used ARPANET
for leisure activities such as chatting. Soon after, the mailing list was
developed. Mailing lists were discussion groups of people who would send
their messages via e-mail to a group address, and also receive messages.
This could be done twenty-four hours a day. Interestingly, the first
group's topic was called Science Fiction Lovers.
As ARPANET became larger, a more sophisticated and standard protocol
was needed. The protocol would have to link users from other small networks
to ARPANET, the main network. The standard protocol invented in 1977 was
called TCP/IP. Because of TCP/IP, connecting to ARPANET by any other
network was made possible. In 1983, the military portion of ARPANET broke
off and formed MILNET. The same year, TCP/IP was made a standard and it was
being used by everyone. It linked all parts of the branching complex
networks, which soon came to be called the Internet.
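As a rough illustration of what a common protocol buys, the sketch below
uses Python's standard socket library to open a TCP connection between two
programs on one machine; the address and port are arbitrary illustration
values, not anything from the ARPANET.

    # A tiny TCP exchange: one thread listens, the main thread connects.
    import socket
    import threading

    srv = socket.create_server(("127.0.0.1", 9099))   # bind and listen

    def serve_once():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)   # reply to whatever arrived

    threading.Thread(target=serve_once, daemon=True).start()

    with socket.create_connection(("127.0.0.1", 9099)) as client:
        client.sendall(b"hello")
        print(client.recv(1024))   # prints b'echo: hello'
    srv.close()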
In 1985, the National Science Foundation (NSF) began a program to
establish Internet access centered on its six powerful supercomputer
stations across the United States. They created a backbone called NSFNET to
connect college campuses via regional networks to its supercomputer
centers. ARPANET officially expired in 1989. Most of the networks were
gained by NSFNET. The others became parts of smaller networks. The Defense
Communications Agency shut down ARPANET because its functions had been
taken over by NSFNET. Amazingly, when ARPANET was turned off in June of
1990, no one except the network staff noticed.
In the early 1990's the Internet experienced explosive growth. It was
estimated that the number of computers connected to the Internet was
doubling every year. It was also estimated that at this rapid rate of
growth, everyone would have an e-mail address by the year 2020. The main
cause of this growth was the creation of the World Wide Web.
The World Wide Web was created at CERN, a physics laboratory in
Geneva, Switzerland. The Web's development was based on the transmission of
web pages over the Internet, using the Hypertext Transfer Protocol, or
HTTP. It is an interactive system for the dissemination and retrieval of
information through web pages. The pages may consist of text, pictures,
sound, music, voice, animations, and video. Web pages can link to other web
pages by hypertext links. When there is hypertext on a page, the user can
simply click on the link and be taken to the new page. Previously, the
Internet was black and white, text, and files. The web added color. Web
pages can provide entertainment, information, or commercial advertisement.
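As a small illustration of that request-and-retrieve cycle, the sketch
below uses Python's standard library to fetch one page over HTTP; the
address is only a placeholder.

    # Fetch a single web page over HTTP; the URL is a placeholder example.
    from urllib.request import urlopen

    with urlopen("http://example.com/") as response:
        page = response.read().decode("utf-8", errors="replace")

    # The returned HTML holds the text and the hypertext links (<a href=...>)
    # that a browser would render and let the user click.
    print(page[:200])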
The World Wide Web is the fastest growing Internet resource. In conclusion,
the Internet has dramatically changed from its original purpose. It was
formed by the United States government for exclusive use of government
officials and the military to communicate after a nuclear war. Today, the
Internet is used globally for a variety of purposes. People can send their
friends an electronic "hello." They can download a recipe for a new type of
lasagna. They can argue about politics on-line, and even shop and bank
electronically in their homes. The number of people signing on-line is
still increasing and the end is not in sight. As we approach the 21st
century, we are experiencing a great transformation due to the Internet and
the World Wide Web. We are breaking through the restrictions of the printed
page and the boundaries of nations and cultures.
-------------
Phillip Johnson
f:\12000 essays\sciences (985)\Computer\History of UNIX.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Where did UNIX come from and why are there different versions of UNIX?
The first efforts at developing a multi-user, multi-tasking operating system were begun in the 1960's in a development project
called MULTICS. While working for Bell Telephone Laboratories in 1969 and 1970, Ken Thompson and Dennis Ritchie
began to develop their own single-user, multi-tasking small operating system and they chose the name UNIX. Their initial goal
was simply to operate their DEC PDP machines more effectively. In 1971, UNIX became multi-user and multi-tasking, but it
was still just being developed by a small group of programmers who were trying to take advantage of the machines they had at
hand. (In other words, this operating system that they were developing did not run on any machine made by Bell!)
In 1973, Dennis Ritchie rewrote the UNIX operating system in C (a language he had developed). And in 1975, the portability
of the C programming language was used to "port" UNIX to a wide variety of hardware platforms. For legal reasons, Bell
Labs was not able to market UNIX in the 1970's, though they did share this operating system with many universities - most
notably UC-Berkeley. This led to some of the variations in UNIX which we see today. After the divestiture of the Bell System,
their parent company, AT&T, became much more interested in marketing a commercial version of UNIX. And today we see
that many companies have now licensed their own version:
AT&T's System V,
Versions of System V such as SCO's Xenix and IBM's AIX
Berkeley's UNIX (called "BSD" for "Berkeley Software Distribution"),
Versions of Berkeley UNIX such as Sun Microsystems' SunOS, DEC's Ultrix and Carnegie Mellon University's Mach
(used on the NeXT).
f:\12000 essays\sciences (985)\Computer\Hollywood and Computer Animation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IS 490
SPECIAL TOPICS
Computer Graphics
Lance Allen
May 6, 1996
Table of Contents
Introduction
How It Was
How It All Began
Times Were Changing
Industry's First Attempts
The Second Wave
How the Magic is Made
Modeling
Animation
Rendering
Conclusion
Bibliography
Introduction
Hollywood has gone digital, and the old ways of doing things are dying. Animation and
special effects created with computers have been embraced by television networks,
advertisers, and movie studios alike. Film editors, who for decades worked by painstakingly
cutting and gluing film segments together, are now sitting in front of computer screens.
There, they edit entire features while adding sound that is not only stored digitally, but
also has been created and manipulated with computers. Viewers are witnessing the results of
all this in the form of stories and experiences that they never dreamed of before. Perhaps
the most surprising aspect of all this, however, is that the entire digital effects and
animation industry is still in its infancy. The future looks bright.
How It Was
In the beginning, computer graphics were as cumbersome and as hard to control as dinosaurs
must have been in their own time. Like dinosaurs, the hardware systems, or muscles, of
early computer graphics were huge and ungainly. The machines often filled entire buildings.
Also like dinosaurs, the software programs or brains of computer graphics were hopelessly
underdeveloped. Fortunately for the visual arts, the evolution of both brains and brawn of
computer graphics did not take eons to develop. It has, instead, taken only three decades
to move from science fiction to current technological trends. With computers out of the
stone age, we have moved into the leading edge of the silicon era. Imagine sitting at a
computer without any visual feedback on a monitor. There would be no spreadsheets, no word
processors, not even simple games like solitaire. This is what it was like in the early
days of computers. The only way to interact with a computer at that time was through toggle
switches, flashing lights, punchcards, and Teletype printouts.
How It All Began
In 1962, all this began to change. In that year, Ivan Sutherland, a Ph.D. student at the Massachusetts Institute of Technology (MIT),
created the science of computer graphics. For his dissertation, he wrote a program called
Sketchpad that allowed him to draw lines of light directly on a cathode ray tube (CRT). The
results were simple and primitive. They were a cube, a series of lines, and groups of
geometric shapes. This offered an entirely new vision on how computers could be used. In
1964, Sutherland teamed up with Dr. David Evans at the University of Utah to develop the
world's first academic computer graphics department. Their goal was to attract only the most
gifted students from across the country by creating a unique department that combined hard
science with the creative arts. They knew they were starting a brand new industry and wanted
people who would be able to lead that industry out of its infancy. Out of this unique mix of
science and art, a basic understanding of computer graphics began to grow. Algorithms for
the creation of solid objects, their modeling, lighting, and shading were developed. These
are the roots on which virtually every aspect of today's computer graphics industry is based.
Everything from desktop publishing to virtual reality finds its beginnings in the basic
research that came out of the University of Utah in the 60's and 70's. During this time,
Evans and Sutherland also founded the first computer graphics company. Aptly named Evans &
Sutherland (E&S), the company was established in 1968 and rolled out its first computer
graphics systems in 1969. Up until this time, the only computers available that could
create pictures were custom-designed for the military and prohibitively expensive. E&S's
computer system could draw wireframe images extremely rapidly, and was the first commercial
"workstation" created for computer-aided design (CAD). It found its earliest customers in
both the automotive and aerospace industries.
Times Were Changing
Throughout its early years, the University of Utah's Computer Science Department was
generously supported by a series of research grants from the Department of Defense. The
1970's, with its anti-war and anti-military protests, brought increasing restriction to the
flows of academic grants, which had a direct impact on the Utah department's ability to
carry out research. Fortunately, as the program wound down, Dr. Alexander Schure, founder
and president of New York Institute of Technology (NYIT), stepped forward with his dream of
creating computer-animated feature films. To accomplish this task, Schure hired Edwin
Catmull, a University of Utah Ph.D., to head the NYIT computer graphics lab and then
equipped the lab with the best computer graphics hardware available at that time. When
completed, the lab boasted over $2 million worth of equipment. Many of the staff came from
the University of Utah and were given free rein to develop both two- and three-dimensional
computer graphics tools. Their goal was to soon produce a full-length computer animated
feature film. The effort, which began in 1973, produced dozens of research papers and
hundreds of new discoveries, but in the end, it was far too early for such a complex
undertaking. The computers of that time were simply too expensive and too underpowered, and
the software not nearly developed enough. In fact, the first full-length computer-generated
feature film was not completed until 1995. By 1978, Schure could no longer
justify funding such an expensive effort, and the lab's funding was cut back. The ironic
thing is that had the Institute decided to patent many more of its researchers' discoveries
than it did, it would control much of the technology in use today. Fortunately for the
computer industry as a whole, however, this did not happen. Instead, research was made
available to whoever could make good use of it, thus accelerating the technology's
development.
Industry's First Attempts
As NYIT's influence started to wane, the first wave of commercial computer graphics studios
began to appear. Film visionary George Lucas (creator of Star Wars and Indiana Jones
trilogies) hired Catmull from NYIT in 1978 to start the Lucasfilm Computer Development
Division, and a group of over a half-dozen computer graphics studios around the country opened
for business. While Lucas's computer division began researching how to apply digital
technology to filmmaking, the other studios began creating flying logos and broadcast
graphics for various corporations including TRW, Gillette, the National Football League, and
television programs, such as "The NBC Nightly News" and "ABC World News Tonight." Although
it was a dream of these initial computer graphics companies to make movies with their
computers, virtually all the early commercial computer graphics were created for television.
It was and still is easier and far more profitable to create graphics for television
commercials than for film. A typical frame of film requires many more computer calculations
than a similar image created for television, while the per-second budget for film work brings
in perhaps one-third as much income. The actual wake-up call to the entertainment industry was
not to come until much later in 1982 with the release of Star Trek II: The Wrath of Khan.
That movie contained a monumental sixty seconds of the most exciting full-color computer
graphics yet seen. Called the "Genesis Effect," the sequence starts out with a view of a
dead planet hanging lifeless in space. The camera follows a missile's trail into the planet,
which is hit with the Genesis Torpedo. Flames arc outwards and race across the surface of
the planet. The camera zooms in and follows the planet's transformation from molten lava to
cool blues of oceans and mountains shooting out of the ground. The final scene spirals the
camera back out into space, revealing the cloud-covered newly born planet. These sixty
seconds may sound uneventful in light of current digital effects, but this remarkable scene
represents many firsts. It required the development of several radically new computer
graphics algorithms, including one for creating convincing computer fire and another to
produce realistic mountains and shorelines from fractal equations. This was all created by
the team at Lucasfilm's Computer Division. In addition, this sequence was the first time
computer graphics were used as the center of attention, instead of being used merely as a
prop to support other action. No one in the entertainment industry had seen anything like
it, and it unleashed a flood of queries from Hollywood directors seeking to find out both
how it was done and whether an entire film could be created in this fashion. Unfortunately,
with the release of TRON later that same year and The Last Starfighter in 1984, the answer
was still a decided no.
Both of these films were touted as a technological tour-de-force, which, in fact, they
were. The films' graphics were extremely well executed, the best seen up to that point, but
they could not save the films from weak scripts. Unfortunately, the technology was greatly
oversold during the films' promotion, and so in the end it was the technology that was blamed
for the films' failure. With the 1980s came the age of personal computers and dedicated
workstations. Workstations are minicomputers that were cheap enough to buy for one person.
Smaller was better, faster, and much, much cheaper. Advances in silicon chip technologies
brought massive and very rapid increases in power to smaller computers along with drastic
price reductions. The costs of commercial graphics plunged to match, to the point where
the major studios suddenly could no longer cover the mountains of debt coming due on their
overpriced centralized mainframe hardware.
With their expenses mounting, and without the extra capital to upgrade to the newer cheaper
computers, virtually every independent computer graphics studio went out of business by
1987. All of them, that is, except PDI, which went on to become the largest commercial
computer graphics house in the business and to serve as a model for the next wave of
studios.
The Second Wave
Burned twice by TRON and The Last Starfighter, and frightened by the financial failure of
virtually the entire industry, Hollywood steered clear of computer graphics for several
years. Behind the scenes, however, it was building back and waiting for the next big break.
The break materialized in the form of a watery creation for the James Cameron 1989 film,
The Abyss. For this film, the group at George Lucas' Industrial Light and Magic (ILM)
created the first completely computer-generated entirely organic looking and thoroughly
believable creature to be realistically integrated with live action footage and characters.
This was the watery pseudopod that snaked its way into the underwater research lab to get a
closer look at its human inhabitants. In this stunning effect, ILM overcame two very
difficult problems: producing a soft-edged, bulgy, and irregular shaped object, and
convincingly anchoring that object in a live-action sequence. Just as the 1982 Genesis
sequence served as a wake-up call for early film computer graphics, this sequence for The
Abyss was the announcement that computer graphics had finally come of age. A massive
outpouring of computer-generated film graphics has since ensued with studios from across
the entire spectrum participating in the action. From that point on, digital technology
spread so rapidly that the movies using digital effects have become too numerous to list in
their entirety. However, they include the likes of Total Recall, Toys, Terminator 2: Judgment
Day, The Babe, In the Line of Fire, Death Becomes Her, and of course, Jurassic Park.
How the Magic is Made
Creating computer graphics is essentially about three things: Modeling, Animation, and
Rendering. Modeling is the process by which 3-dimensional objects are built inside the
computer; animation is about making those objects come to life with movement, and rendering
is about giving them their ultimate appearance and looks.
Hardware is the brains and brawn of computer graphics, but it is powerless without the
right software. It is the software that allows the modeler to build a computer graphic
object, that helps the animator bring this object to life, and that, in the end, gives the
image its final look. Sophisticated computer graphics software for commercial studios is
either purchased for $30,000 to $50,000, or developed in-house by computer programmers.
Most studios use a combination of both, developing new software to meet new project needs.
Modeling
Modeling is the first step in creating any 3D computer graphics. Modeling in computer
graphics is a little like sculpting, a little like building models with wood, plastic and
glue, and a lot like CAD. Its flexibility and potential are unmatched in any other art form.
With computer graphics it is possible to build entire worlds and entire realities. Each
can have its own laws, its own looks, and its own scale of time and space.
Access to these 3-dimensional computer realities is almost always through the 2-dimensional
window of a computer monitor. This can lead to the misunderstanding that 3-D modeling is
merely the production of perspective drawings. This is very far from the truth. All elements
created during any modeling session possess three full dimensions and at any time can be
rotated, turned upside down, and viewed from any angle or perspective. In addition, they
may be re-scaled, reshaped, or resized whenever the modeler chooses. Modeling is the first
step in creating any 3-dimensional computer animation. It requires the artist's ability to
visualize mentally the objects being built, and the craftsperson's painstaking attention to
detail to bring it to completion. To create an object, a modeler starts with a blank screen
and sets the scale of the computer's coordinate system for that element. The scale can be
anything from microns to light years across in size. It is important that scale stays
consistent with all elements in a project. A chair built in inches will be lost in a living
room built in miles. The model is then created by building up layers of lines and patches
that define the shape of the object.
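As a toy sketch of what a model is underneath, the lines below (Python, with an arbitrary
cube and angle) hold a shape as a list of 3-dimensional points that can be rescaled or
rotated at any time.

    # A model is just 3-D geometry; here, the eight corner points of a unit cube.
    import math

    cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

    def scale(points, factor):
        return [(x * factor, y * factor, z * factor) for x, y, z in points]

    def rotate_y(points, degrees):
        a = math.radians(degrees)
        return [(x * math.cos(a) + z * math.sin(a), y,
                 -x * math.sin(a) + z * math.cos(a)) for x, y, z in points]

    # Half-size the cube and turn it 30 degrees; every point is transformed.
    model = rotate_y(scale(cube, 0.5), 30)
    print(model[0], model[7])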
Animation
While it is the modeler that contains the power of creation, it is the animator who
provides the illusion of life. The animator uses the tools at his disposal to make objects
move. Every animation process begins essentially the same way, with a storyboard.
A storyboard is a series of still images that shows how the elements will move and interact
with each other. This process is essential so that the animator knows what movements need
to be assigned to objects in the animation. Using the storyboard, the animator sets up key
points of movements for each object in the scene. The computer then produces motion for
each object on a frame-by-frame basis. The final result, when assembled, gives the form of
fluid movement.
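As a toy sketch of that in-betweening step, the lines below (Python, with arbitrary key
values) take the key positions an animator might set and fill in every frame between them
by simple linear interpolation.

    # Key positions set by the animator: frame number -> x position (arbitrary values).
    keys = {0: 0.0, 12: 10.0, 24: 4.0}

    def position_at(frame, keys):
        frames = sorted(keys)
        for lo, hi in zip(frames, frames[1:]):
            if lo <= frame <= hi:
                t = (frame - lo) / (hi - lo)          # how far between the two keys
                return keys[lo] + t * (keys[hi] - keys[lo])
        return keys[frames[-1]]

    # The computer fills in all 25 frames from the three keys.
    for frame in range(25):
        print(frame, round(position_at(frame, keys), 2))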
Rendering
The modeler gives form, the animator provides motion, but still the animation process is not
complete. The objects and elements are nothing but empty or hollow forms without any
surface. They are merely outlines until the rendering process is applied. Rendering is the
most computational time demanding aspect of the entire animation process. During the
rendering process, the computer does virtually all the work using software that has been
purchased or written in-house. It is here that the animation achieves its final
look. Objects are given surfaces that make them look like solid forms. Any type of look can
be achieved by varying the looks of the surfaces. The objects finally look concrete. Next,
the objects are lighted. The look of the lighting is affected by the surfaces of the
objects, the types of lights, and the mathematical models used to calculate the behavior of
light. Once the lighting is completed, it is now time to create what the camera will see.
The computer calculates what the camera can see following the designs of the objects in the
scene. Keep in mind that all the objects have tops, sides, bottoms, and possibly insides.
Types of camera lens, fog, smoke, and other effects all have to be calculated. To create
the final 2-D image, the computer scans the resulting 3D world and pulls out the pixels that
the camera can see. The image is then sent to the monitor, to videotape, or to a film
recorder for display. The multiple 2D still frames, when all assembled, produce the final
animation.
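As a toy sketch of that last projection step, the lines below (Python, with an arbitrary
focal length and sample points) map 3-dimensional points onto the 2-dimensional image plane
of a simple pinhole camera looking down the z axis.

    # Project 3-D points onto a 2-D image plane; values are purely illustrative.
    focal = 2.0
    points = [(0.5, 0.25, 4.0), (-1.0, 0.5, 8.0)]

    for x, y, z in points:
        u = focal * x / z   # screen coordinates: nearer points spread outward,
        v = focal * y / z   # farther points crowd toward the center of the image
        print((round(u, 3), round(v, 3)))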
Conclusion
Much has happened in the commercial computer graphics industry since the decline of the
first wave of studios and the rise of the second. Software and hardware costs have
plummeted. The number of well-trained animators and programmers has increased dramatically.
And at last, Hollywood and the advertising community have acknowledged that the digital age
has finally arrived, this time not to disappear. All these factors have led to an explosion
in both the size of existing studios and the number of new enterprises opening their doors.
As the digital tide continues to rise, only one thing is certain. We have just begun to see
how computer technology will change the visual arts.
BIBLIOGRAPHY
How Did They Do It? Computer Illusion in Film & TV , Alpha Books 1994;
Christopher W. Baker
Computer Graphics World, Volume 19, Number 3; March 1996;
Evan Hirsch, "Beyond Reality"
Computer Graphics World, Volume 19, Number 4; April 1996;
Evan Marc Hirsch, "A Changing Landscape"
Windows NT Magazine, Issue #7, March 1996;
Joel Sloss, "There's No Business Like Show Business"
Cinescape, Volume 1, Number 5; February 1995;
Beth Laski, "Ocean of Dreams"
f:\12000 essays\sciences (985)\Computer\How Magnets Affect Computer Disks.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How Magnets Affect Computer Disks
Background
One of the most commonly used computer data storage mediums is a computer
disk, or floppy. These are used in everyday life, in either our workplace or at home.
These disks have many purposes, such as:
Storing data: Floppies can be used to store software/data for short periods of time,
Transferring data: Floppies are used to transfer/copy data from one computer to
another.
Hiding data: Floppies are also sometimes used to hide sensitive or confidential data,
because of the disk's small size it can be hidden very easily.
Advertising: Because floppies are cheap to buy, they are used to advertise different types
of software, such as software for the Internet advertised on America Online floppies.
Floppies are also considered to be very sensitive data storage mediums. These
disks have numerous advantages and disadvantages. Even though floppies are used so
commonly, they are also not very dependable. They have numerous conditions under which
they should normally be kept. For example: the actual magnetic disk inside the hard cover
of the disk must NEVER be touched, the magnetic disk inside must be protected by the
metallic sliding shield, the disk must always be kept within the temperature range of 50° to 140°
Fahrenheit, and the disk must never be brought near a magnet! (3M Diskettes)
There are many such hazards to computer disks. Problems caused by magnets are
very common. A floppy can be damaged unknowingly if it is kept near a magnet, which may
be in the open or inside a device, such as the speakers of a computer or
stereo, or a telephone. And because of the common use of magnets in everyday life, more
and more floppies are damaged every day.
Even though protective coverings against magnets and other electrical hazards are
available for floppies, they are not used very commonly. Therefore, floppies are not a very
safe medium for storage, even though they are convenient.
Some of the most commonly used diskettes are by 3M and Sony and other such
companies. The floppies are sold in boxes with instructions on them not to bring
floppies near magnets, along with other DOs and DON'Ts. These instructions must
always be followed.
Floppies have different capacities such as 720 KB (kilobytes) and 1.44 MB
(megabytes). Floppies also have different sizes, 3.5" and 5.25". The most commonly used
floppy is usually 3.5". It is not soft and cannot be bent, where as a 5.25" disk is soft and
can be bent!
A floppy is a round, flat piece of Mylar coated with ferric oxide, a rustlike substance
containing tiny particles capable of holding a magnetic field, and encased in a protective
plastic cover, the disk jacket. Data is stored on a floppy disk by the disk drive's read/write
head, which alters the magnetic orientation of the particles. Orientation in one direction
represents binary 1; orientation in the other, binary 0.
Purpose
The purpose of my experiment was to test floppies to see how delicate they are
near magnets and how much damage can be done to the disks and to the software on them
by a single magnet. I also hope my project will help others to be aware that computer
disks are very delicate and sensitive to temperature, weather, magnets, etc.
Hypothesis
When the magnets are brought near the disk, the disk should be damaged internally
along with the software on it. The weakest magnet should cause the least damage, and
the strongest magnet should cause the most damage.
Experimentation
Material:
Four 3.5" Floppy Diskettes.
Four different Magnets
One Personal Home Computer
Printer
Software:
Windows95
Norton Disk Doctor
Dos (Ver 4.00.950)
Procedure:
Every floppy diskette has 2,874 sectors. This was calculated by dividing the total
number of bytes on a disk by the number of bytes every sector occupies. There is a total of
1,457,664 bytes on every floppy, and every sector occupies 512 bytes; dividing 1,457,664 by
512 gives the total number of sectors on every floppy.
First, I obtained the four 3.5" IBM formatted floppy diskettes (Highland™). Next I
obtained the four different magnets of different strengths and sizes and tested and verified
their strengths by bringing iron filings near each of them and observing how much of iron
filings each one of them attracted and then noting which magnet was the strongest and
which was the weakest in order. Then I tested each of the disks for existing errors by
using a program called Norton Disk Doctor (NDD) which has the ability to detect and fix
errors on a disk. There were no errors on any of the four disks.
Next, I decided to hold the magnets near the disks for the experimentation for
about 30 seconds, at about the same place on each disk. I did so on all four of the disks.
Then, I brought the disks home and tested all four of the disks in a disk testing and
repair program called Norton Disk Doctor. I noticed that each one of the disks suffered
damage.
Every one of the four disks was numbered. The floppy exposed to the weakest magnet
was "Disk 1" and the floppy exposed to the strongest magnet was "Disk 4". This
was done to avoid possible confusion between the disks.
Result
After every Floppy had been tested, I noted all the results. The results were as
follows:
Disk 1:
Total Bytes on Disk: 1,457,664
Total Bytes in Bad Sectors: 3584
Total Number of Sectors: 2874
Total Number of Bad Sectors: 7
Total Number of Good Sectors: 2867
Disk 2:
Total Bytes on Disk: 1,457,664
Total Bytes in Bad Sectors: 5632
Total Number of Sectors: 2874
Total Number of Bad Sectors: 11
Total Number of Good Sectors: 2863
Disk 3:
Total Bytes on Disk: 1,457,664
Total Bytes in Bad Sectors: 15360
Total Number of Sectors: 2874
Total Number of Bad Sectors: 30
Total Number of Good Sectors: 2844
Disk 4:
Total Bytes on Disk: 1,457,664
Total Bytes in Bad Sectors: 19968
Total Number of Sectors: 2874
Total Number of Bad Sectors: 39
Total Number of Good Sectors: 2833
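The byte figures above follow directly from the bad-sector counts, since every sector
holds 512 bytes; a quick check of that arithmetic (in Python, using the counts reported
above) is:

    # Bytes in bad sectors = bad sectors x 512 bytes per sector.
    BYTES_PER_SECTOR = 512
    bad_sectors = {"Disk 1": 7, "Disk 2": 11, "Disk 3": 30, "Disk 4": 39}

    for disk, bad in bad_sectors.items():
        print(disk, bad * BYTES_PER_SECTOR, "bytes in bad sectors")
    # Prints 3584, 5632, 15360 and 19968 bytes, matching the figures above.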
After the testing, I discovered that even the smallest of the magnets could cause
bad sectors and damage both the disk and the data on the disk. Even though the damage
wasn't very big, it was big enough to corrupt any program on the disk, because every part
of a file is necessary for its correct use, and any bad sectors would almost
destroy the file and make it worthless.
Conclusion:
In conclusion, this experiment proved that floppies are very sensitive to magnets
and should not be brought near them at any time. When the magnets were brought near the
floppies, the disks were damaged and the weakest magnet caused the least damage and the
strongest magnet caused the most damage.
f:\12000 essays\sciences (985)\Computer\How the Internet Affects Us.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How Technology Affects Modern America
U.S. Wage Trends
The microeconomic picture of the U.S. has changed immensely since 1973, and the trends
are proving to be consistently downward for the nation's high school graduates and high
school drop-outs. "Of all the reasons given for the wage squeeze - international
competition, technology, deregulation, the decline of unions and defense cuts - technology
is probably the most critical. It has favored the educated and the skilled," says M. B.
Zuckerman, editor-in-chief of U.S. News & World Report (7/31/95). Since 1973, wages
adjusted for inflation have declined by about a quarter for high school dropouts, by a sixth
for high school graduates, and by about 7% for those with some college education. Only
the wages of college graduates are up.
Of the fastest growing technical jobs, software engineering tops the list. Carnegie Mellon
University reports, "recruitment of its software engineering students is up this year by over
20%." All engineering jobs are paying well, proving that highly skilled labor is what
employers want! "There is clear evidence that the supply of workers in the [unskilled labor]
categories already exceeds the demand for their services," says L. Mishel, Research Director
of Welfare Reform Network.
In view of these facts, I wonder if these trends are good or bad for society. "The danger of
the information age is that while in the short run it may be cheaper to replace workers with
technology, in the long run it is potentially self-destructive because there will not be enough
purchasing power to grow the economy," says M. B. Zuckerman. My feeling is that the trend
from unskilled labor to highly technical, skilled labor is a good one! But, political action
must be taken to ensure that this societal evolution is beneficial to all of us. "Back in 1970,
a high school diploma could still be a ticket to the middle income bracket, a nice car in the
driveway and a house in the suburbs. Today all it gets is a clunker parked on the street, and
a dingy apartment in a low rent building," says Time Magazine (Jan 30, 1995 issue).
However, in 1970, our government provided our children with a free education, allowing
the vast majority of our population to earn a high school diploma. This means that anyone,
regardless of family income, could be educated to a level that would allow them a
comfortable place in the middle class. Even restrictions upon child labor hours kept
children in school, since they are not allowed to work full time while under the age of 18.
This government policy was conducive to our economic markets, and allowed our country
to prosper from 1950 through 1970. Now, our own prosperity has moved us into a highly
technical world that requires highly skilled labor. The natural answer to this problem is
that the U.S. Government's education policy must keep pace with the demands of the
highly technical job market. If a middle class income of 1970 required a high school
diploma, and the middle class income of 1990 requires a college diploma, then it should be
as easy for the children of the 90's to get a college diploma, as it was for the children of the
70's to get a high school diploma. This brings me to the issue of our country's political
process, in a technologically advanced world.
Voting & Poisoned Political Process in The U.S.
The advance of mass communication is natural in a technologically advanced society. In
our country's short history, we have seen the development of the printing press, the radio,
the television, and now the Internet, all of them able to reach millions of people. Equally
natural is the poisoning and corruption of these media to benefit a few.
From the 1950's until today, television has been the preferred medium. Because it captures
the minds of most Americans, it is the preferred method of persuasion by political figures,
multinational corporate advertising, and the upper 2% of the elite, who have an interest in
controlling public opinion. Newspapers and radio experienced this same history, but are
now somewhat obsolete in the science of changing public opinion. Though I do not
expect television to become completely obsolete within the next 20 years, I do see the
Internet being used by the same political figures, multinational corporations, and upper 2%
elite, for the same purposes. At this time, in the Internet's young history, it is largely
unregulated, and can be accessed and changed by any person with a computer and a
modem; no license required, and no need for millions of dollars of equipment. But, in
reviewing our history, we find that newspaper, radio and television were once unregulated
too. It is easy to see why government has such an interest in regulating the Internet these
days. Though public opinion supports regulating sexual material on the Internet, it is just
the first step in total regulation, as experienced by every other popular mass media in our
history. This is why it is imperative to educate people about the Internet, and make it
known that any regulation of it is destructive to us, not constructive! I have been a daily
user of the Internet for 5 years (and a daily user of BBS communications for 9 years), which
makes me a senior among us. I have seen the moves to regulate this type of
communication, and have always openly opposed it.
My feelings about technology, the Internet, and political process are simple. In light of the
history of mass communication, there is nothing we can do to protect any media from the
"sound byte" or any other form of commercial poisoning. But, our country's public
opinion doesn't have to fall into a nose-dive of lies and corruption, because of it! The first
experience I had in a course on Critical Thinking came when I entered college. As many
good things as I have learned in college, I found this course to be most valuable to my basic
education. I was angry that I hadn't had access to the power of critical thought over my
twelve years of basic education. Simple forms of critical thinking can be taught as early as
kindergarten. It isn't hard to teach a young person to understand the patterns of
persuasion, and be able to defend themselves against them. Television doesn't have to be a
weapon against us, used to sway our opinions to serve people who care about their
own prosperity, not ours. With the power of a critical thinking education, we can stop
being swayed by the sound bite and instead laugh at it as a cheap attempt to
persuade us.
In conclusion, I feel that the advance of technology is a good trend for our society;
however, it must come in conjunction with advances in education so that society is able to
master and understand technology. We can be the masters of technology, and not let it be
the master of us.
Bibliography
Where have the good jobs gone?, By: Mortimer B. Zuckerman
U.S. News & World Report, volume 119, pg 68 (July 31, 1995)
Wealth: Static Wages, Except for the Rich, By: John Rothchild
Time Magazine, volume 145, pg 60 (January 30, 1995)
Welfare Reform, By: Lawrence Mishel
http://epn.org/epi/epwelf.html (Feb 22, 1994)
20 Hot Job Tracks, By: K.T. Beddingfield, R. M. Bennefield, J. Chetwynd,
T. M. Ito, K. Pollack & A. R. Wright
U.S. News & World Report, volume 119, pg 98 (Oct 30, 1995)
f:\12000 essays\sciences (985)\Computer\HOW THE INTERNET GOT STARTED.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HOW THE INTERNET GOT STARTED
Some thirty years ago, the RAND Corporation, America's foremost cold war think
tank, faced a strange strategic problem. How could the US authorities successfully
communicate after a nuclear war?
Postnuclear America would need a command-and-control network, linked from city
to city, state to state, base to base. But no matter how thoroughly that network was
armored or protected, its switches and wiring would always be vulnerable to the impact
of atomic bombs. A nuclear attack would reduce any conceivable network to tatters.
And how would the network itself be commanded and controlled? Any central authority,
any network citadel, would be an obvious and immediate target for an enemy
missile. The center of the network would be the very first place to go.
RAND mulled over this grim puzzle in deep military secrecy, and in 1964 arrived at a
daring solution. The principles were simple. The network itself would be
assumed to be unreliable at all times. It would be designed from the get-go to transcend
its own unreliability. All the nodes in the network would be equal in status to all other
nodes, each node with its own authority to originate, pass, and receive messages. The
messages would be divided into packets, each packet separately addressed. Each packet
would begin at some specified source node and end at some other specified destination
node. Each packet would wind its way through the network on an individual basis. In fall
1969, the first such node was installed at UCLA. By December 1969, there were four nodes
on the infant network, which was named ARPANET, after its Pentagon sponsor.
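To make the packet idea concrete, here is a minimal sketch in Python; the node names, packet size, and message are invented for illustration, and it only shows the general idea of splitting a message into separately addressed packets that can be reassembled at the destination even if they arrive out of order.

import random
from dataclasses import dataclass

PACKET_SIZE = 8  # bytes per packet; an arbitrary size chosen for illustration

@dataclass
class Packet:
    source: str       # originating node
    destination: str  # target node
    seq: int          # sequence number used to reassemble the message
    payload: bytes

def split_into_packets(message: bytes, source: str, destination: str) -> list[Packet]:
    """Divide a message into separately addressed packets."""
    return [
        Packet(source, destination, seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list[Packet]) -> bytes:
    """Rebuild the message even if packets arrive out of order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = split_into_packets(b"How the Internet got started", "UCLA", "SRI")
random.shuffle(packets)  # each packet winds its way through the network independently
assert reassemble(packets) == b"How the Internet got started"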
The four computers could even be programmed remotely from the other nodes. Thanks to
ARPANET, scientists and researchers could share one another's computer facilities over
long distances. This was a very handy service, for computer time was precious in the
early '70s. In 1971 there were fifteen nodes in ARPANET; by 1972, thirty-seven nodes. And it
was good.
As early as 1977, TCP/IP was being used by other networks to link to
ARPANET. ARPANET itself remained fairly tightly controlled, at least until 1983, when its
military segment broke off and became MILNET. As TCP/IP became more common, entire
other networks fell into the digital embrace of the Internet, and messily adhered. Since the
software called TCP/IP was public domain and the basic technology was decentralized and
rather anarchic by its very nature, it was difficult to stop people from barging in and
linking up somewhere or other. Nobody wanted to stop them from joining this branching
complex of networks, which came to be known as the "Internet".
Connecting to the Internet cost the taxpayer little or nothing, since each node was
independent and had to handle its own financing and its own technical requirements. The
more, the merrier. Like the phone network, the computer network became steadily more
valuable as it embraced larger and larger territories of people and resources.
A fax machine is only valuable if everybody else has a fax machine. Until they do, a fax is
just a curiosity. ARPANET, too, was a curiosity for a while. Then computer networking
became an utter necessity.
In 1984 the National Science Foundation got into the act, through its Office of
Advanced Scientific Computing.
The new NSFNET set a blistering pace for technical advancement, linking
newer, faster, shinier supercomputers through thicker, faster links, upgraded and
expanded again and again in 1986, 1988, and 1990. And other government agencies leapt
in: NASA, the National Institutes of Health, the Department of Energy, each of them
maintaining a digital satrapy in the Internet confederation.
The nodes in this growing network-of-networks were divided up into basic
varieties. Foreign computers, and a few American ones, chose to be denoted by their
geographical locations. The others were grouped by the six basic Internet domains: gov
(government), mil (military), edu (education), com (commercial), org (organization), and
net (network gateways). Gov, mil, and edu were, of course, the pioneers.
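As a rough illustration of that grouping, the sketch below (a hypothetical helper, not part of any real DNS software) maps a hostname's top-level domain onto the categories just described, treating two-letter endings as geographical country codes.

# Hypothetical classifier for the domain conventions described above.
DOMAIN_CATEGORIES = {
    "gov": "government",
    "mil": "military",
    "edu": "education",
    "com": "commercial",
    "org": "organization",
    "net": "network gateway",
}

def classify_host(hostname: str) -> str:
    """Return the category implied by a hostname's top-level domain."""
    tld = hostname.rstrip(".").rsplit(".", 1)[-1].lower()
    if len(tld) == 2:
        return f"geographical ({tld.upper()})"  # two-letter country codes, e.g. .uk
    return DOMAIN_CATEGORIES.get(tld, "unknown")

print(classify_host("www.ucla.edu"))      # education
print(classify_host("rand.org"))          # organization
print(classify_host("some-host.gov.uk"))  # geographical (UK)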
Just think: in 1997 the standards for computer networking are global. In 1969, there
were only four nodes in the ARPANET network. Today there are tens of thousands of
nodes in the Internet, scattered over forty-two countries, with more coming on line every
single day. By one estimate, as of December 1996 over 50 million people use this network.
Probably the most important scientific instrument of the late twentieth century is the
Internet. It is spreading faster than cellular phones, faster than fax machines. The
Internet offers simple freedom. There are no censors, no bosses. There are only
technical rules, not social or political ones. It is a bargain: you can talk to anyone anywhere,
and it doesn't charge for long distance service. It belongs to everyone and no one.
The most widely used part of the "Net" is the World Wide Web. Internet mail, or e-mail,
is a lot faster than US Postal Service mail; Internet regulars call the US mail
"snail mail." File transfers allow Internet users to access remote machines and retrieve
programs or text. Many Internet computers allow any person to access them anonymously
and simply copy their public files, free of charge. Entire books can be transferred through
direct access in a matter of minutes.
Finding a link to the Internet will become easier and cheaper. At the turn of the
century, network literacy will be forcing itself into every individual's life.
f:\12000 essays\sciences (985)\Computer\How to buy a computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Knowledge is Power
Buying a personal computer can be as difficult as buying a car. No matter how much one investigates, how many dealers a person visits, and how much bargaining a person has done on the price, he still may not be really certain that he has gotten a good deal. There are good reasons for this uncertainty. Computers change at a much faster rate than any other kind of product. A two-year-old car will always get a person where he wants to go, but a two-year-old computer may be completely inadequate for his needs. Also, the average person is not technically savvy enough to make an informed decision on the best processor to buy, the right size for a hard drive, or how much memory he or she really needs. Just because buying a computer can be confusing does not mean one should throw up his hands and put himself at the mercy of some salesman who may not know much more than he does. If one follows a few basic guidelines, he can be assured of making a wise purchase decision.
A computer has only one purpose: to run programs. Some programs require more computing power than others. In order to figure out how powerful a computer the consumer needs, therefore, a person must first determine which programs he wants to run. For many buyers, this creates a problem. They cannot buy a computer until they know what they want to do with it, but they cannot really know all of the uses there are for a computer until they own one. This problem is not as tough as it seems, however. The consumer should go to his local computer store, and look at the software that's available. Most programs explain their minimum hardware requirements right on the box. After looking at a few packages, it should be pretty clear to the consumer that any mid-range system will run 99% of the available software. A person should only need a top-of-the-line system for professional applications such as graphic design, video production, or engineering. Software tends to lag behind hardware, because it's written to reach the widest possible audience. A program that only works on the fastest Pentium Pro system has very limited sales potential, so most programs written in 1995 work just fine on a fast '486, or an entry-level Pentium system. More importantly, very few programs are optimized to take advantage of a Pentium's power. That means that even if the consumer pays a large premium for the fastest possible system, he may not see a corresponding increase in performance.
Buying the latest computer system is like buying a fancy new car. One pays a high premium just to get the newest model. When the consumer drives the car out of the showroom, it becomes a used car, and its value goes down several thousand dollars. Similarly, when a new computer model comes out in a few weeks, his "latest and greatest" becomes a has-been, and its value plummets. Some people think that if they only buy the most powerful computer available, they will not have to upgrade for a long time. These people forget, however, that a generation of computer technology lasts less than a year. By computer standards, a two-year-old model is really old, and a three-year-old model is practically worthless. Sinking a lot of money into today's top-of-the-line computer makes one less willing (and less financially able) to upgrade a couple of years from now, when a person may really need it. Here's something else to consider. While a faster processor will usually increase the speed of a system, merely doubling the processor speed usually will not double the performance. A 133MHz Pentium system may only be 50% faster than a 75MHz Pentium system, for example. That's because there are a lot of other limiting factors. Memory is a prime example. One may be better off buying a 75MHz Pentium system with 16MB of RAM than a 133 MHz system with 8MB. Even if buying the top machine did double a machine's performance, however, it still might not make as big a difference as a person might think. If his software performs any given task in under a second, doubling its speed saves the consumer less than half a second.
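To see why a faster clock does not translate one-for-one into a faster machine, here is a back-of-the-envelope sketch; the CPU-bound fractions below are illustrative guesses, not measurements of any real system.

# Only the CPU-bound share of the work speeds up with a faster processor;
# memory and disk time stay roughly the same.
def effective_speedup(cpu_fraction: float, cpu_speedup: float) -> float:
    """Overall speedup when only the CPU-bound fraction of the work gets faster."""
    other_fraction = 1.0 - cpu_fraction
    return 1.0 / (other_fraction + cpu_fraction / cpu_speedup)

clock_ratio = 133 / 75          # raw clock-speed ratio, about 1.77x
for cpu_fraction in (0.5, 0.7, 0.9):
    print(f"{cpu_fraction:.0%} CPU-bound -> "
          f"{effective_speedup(cpu_fraction, clock_ratio):.2f}x overall")
# With about 70% of the time CPU-bound, the 133MHz machine ends up roughly
# 1.4x (around 40-50%) faster than the 75MHz one, not 1.77x.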
No products change as quickly as computers. Considering the pace of this change, it does not make sense to buy a computer today without planning for tomorrow. Every computer claims to be upgradeable, but there are varying degrees of expandability. A truly expandable unit has:
At least two empty SIMM sockets for memory upgrades
At least three empty expansion slots (preferably local-bus PCI slots)
A standard-sized motherboard that one can replace with a newer model
A large case with lots of room inside (I prefer the "mini-tower" design.)
The last two items require a bit of explanation. The motherboard is the computer's main circuit board, which holds the processor (such as a '486 or Pentium chip) and the memory (RAM). Even if the consumer buys the fastest Pentium Pro system available today, at some point he is going to need to go to a faster processor. Some motherboards try to provide a way to add a faster processor later. The problem is, computer manufacturers do not really know what features computers will have two years from now. The best way to guarantee that he will be able to upgrade his processor, therefore, is to make sure that the consumer can replace the motherboard. A person might think that it would be very expensive to replace the motherboard, but actually, it can be a very cost-effective way to upgrade a computer. For example, a friend of mine had an old 25 MHz 386SX computer with 2MB of RAM. By current standards, this computer was almost too slow to use. I replaced the '386 motherboard with one containing a 100MHz '486 DX/4 processor for about $200, including installation. The resulting computer is fast enough to run any of today's software, and the price was a lot less than Intel charges for its "Overdrive" chips, which add a fast new processor to a current (slow) motherboard. The reason I was able to perform the upgrade so inexpensively is that the original computer had an industry-standard sized motherboard in a roomy mini-tower case. I just slid out the old motherboard, and popped in the new one, using the same graphics card, sound card, hard drive, floppy drive, and memory modules as the original machine. The result was a unit identical to the previous one, only ten times as fast.
Unfortunately, upgrading is not always so easy. Many systems from "big-name" manufacturers such as Compaq, IBM, and Packard Bell, use proprietary motherboards and slim-line cases. The small size of these units makes them fit easily on a desktop, but does not leave much room inside for expansion. These factors make the compact desktop units a nightmare to service and to upgrade. What is a buyer to do? He should make sure that the computer he buys has a full-sized case. Such a computer should be made up of individual components, each of which can be upgraded or replaced individually, and none of which costs more than about $200. This makes the unit easy to upgrade, and easy to service should something break later on. How does one make sure that the computer uses industry-standard parts, instead of some weird proprietary technology? One quick way is to look at the expansion slots on the back. If the computer is a desktop unit (one that is wider than it is tall), the slots should go up and down, perpendicular to the desk. If it's in a tower configuration (taller than it is wide), the slots should go left to right, parallel to the desk. The number of slots should be a tip-off. The right kind of case will have space for at least seven slots. Also, the consumer should look to see where the peripherals plug in. If there is a separate video card, for example, the monitor plug will be located on the rear bracket of an expansion slot. The more individual components a computer has, the easier it is to upgrade and replace them.
Computer technology changes so quickly that it does not make sense to pay a high premium for the fastest system on the market. Today's speed demon is tomorrow's has-been. If one is looking to get the best value for his money, look to the middle of the pack. Today, for example, Pentium systems go from the 75MHz systems on the low end to 133 MHz systems on the high end. The middle systems, the 100 MHz and 120 MHz systems, are where he will find his best buys. This situation will no doubt change as 150 MHz and 166 MHz systems are introduced, and the 100 MHz systems become the new low end. The aspect that will not change is the fact that he will get the best buy with a system that falls somewhere in the middle. Mid-priced computers cost only a little more than the "El cheapo" systems, but perform almost as well as top-of-the-line models. They will not become obsolete as fast as the cheapest computers will, but they'll still leave the consumer with enough money so that one feels comfortable upgrading in a couple of years.
f:\12000 essays\sciences (985)\Computer\How to maintain a computer system.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to Maintain A Computer System
Start a notebook that includes information on your system. This notebook
should be a single source of information about your entire system, both hardware
and software. Each time you make a change to your system, adding or removing
hardware or software, record the change. Always include the serial numbers of all
equipment, vendor support numbers, and printouts of key system files.
Secondly, periodically review disk directories and delete unneeded files.
Files have a way of building up and can quickly use up your disk space. If you think
you may need a file in the future, back it up to a disk. At a minimum, you should
have a system disk with your command.com, autoexec.bat, and config.sys files. If
your system crashes, these files will help you get it going again. In addition, back up
any files with an extension of .sys. For Windows systems, all files with an extension
of .ini and .grp should also be backed up.
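As one way to automate that backup step, here is a minimal sketch; the directory paths are only examples, and you would point them at wherever your system and Windows files actually live and at your own backup disk or folder.

# Copy key system and configuration files to a backup location.
import shutil
from pathlib import Path

SYSTEM_DIR = Path("C:/")            # where command.com, autoexec.bat, config.sys live
WINDOWS_DIR = Path("C:/WINDOWS")    # where .ini and .grp files live
BACKUP_DIR = Path("A:/backup")      # a floppy or any other safe location
BACKUP_DIR.mkdir(parents=True, exist_ok=True)

for pattern in ("command.com", "autoexec.bat", "config.sys", "*.sys"):
    for file in SYSTEM_DIR.glob(pattern):
        shutil.copy2(file, BACKUP_DIR)

for pattern in ("*.ini", "*.grp"):          # Windows-specific settings files
    for file in WINDOWS_DIR.glob(pattern):
        shutil.copy2(file, BACKUP_DIR)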
Next, any time you work inside your computer, turn off the power and
disconnect the equipment from the power source. Before you touch anything inside
the computer, touch an unpainted metal surface such as the power supply. This
will help to discharge any static electricity that could damage internal components.
Now, you should periodically defragment your hard disk. Defragmenting
your hard disk reorganizes files so they are in contiguous clusters and makes disk
operations faster. Defragmentation programs have been known to damage files, so
make sure your disk is backed up first.
A good thing to do now would be to protect your system from computer
viruses. Computer viruses are programs designed to infect computer systems by
copying themselves into other computer files. The virus program spreads when the
infected files are used by or copied to another system. Virus programs are
dangerous because they are often designed to damage the files in a system. You
can protect yourself from viruses by installing an anti-virus program.
Lastly, learn to use system diagnostic programs; if they did not come with
your system, obtain a set. These programs help you identify and possibly solve
problems before you call for technical assistance. Some system manufacturers now
include diagnostic programs with their systems and ask that you run the programs
before you call for help.
f:\12000 essays\sciences (985)\Computer\How to make a webpage!.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For my science project I chose to create a web (Internet) page dealing with science. This project consists of using a computer and an HTML editor to create a page that can be found on the Internet. The next paragraph will explain how to make an Internet page.
The steps to making a web page to post on the Internet are very easy. Most web pages are written in a code called HTML, which is what I am using to make my science web page. HTML is an acronym for HyperText Markup Language. The HTML codes are very easy to use and remember. If you want to spice up your web page, you may want to use another language called Java. The word Java is not an acronym; it comes from its maker, Sun Microsystems, which is a tremendously huge company that deals with web page making and the Internet. Java enables you to have those neat scrolling words at the bottom of your web browser, and the other neat moving things that you may find in web pages around the net. Another way to spice up your web page would be CGI. CGI stands for Common Gateway Interface; it is used to submit information over the Internet. You can get a book at your local library that explains how to use HTML, Java, and CGI. You now need to select one of the many programs that allow you to make a web page using HTML, Java, and CGI. Once you find this program, you may start to enter your HTML, Java, and CGI code. After long hours of work you may test your web page; depending on the program you are using, there is usually a button that you may press that enables you to look at the web page you have made. After revising and checking your web page, it is time to place it on the Internet. To do this, you may have to contact your Internet provider and ask them if they allow their customers to place Internet documents on their World Wide Web server. Once you have it on the net, tell all your friends about it so you can get traffic on your page, and maybe one day you will win an award for it, and all that work will have paid off.
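For a sense of what an HTML editor produces, here is a minimal sketch that writes a bare-bones HTML page to disk; the title, text, and file name are placeholders for illustration, not part of any particular project.

# Write a minimal HTML page that a browser can open directly.
PAGE = """<html>
<head><title>My Science Project</title></head>
<body>
  <h1>My Science Project</h1>
  <p>This page was written in plain HTML before being posted on the Internet.</p>
</body>
</html>
"""

with open("science_project.html", "w") as f:
    f.write(PAGE)
print("Open science_project.html in a browser to preview the page.")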
Every time I make a web page for something or someone, I always learn new HTML, Java, and CGI commands, because I always like to try new things, to see if they will work, or to see what they will do. When I made this web page, I learned how to do different things all at once, which I had never done before. Making web pages is fun for people who are experienced with web page making, and for people who are very computer literate.
f:\12000 essays\sciences (985)\Computer\How to make phones ring.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Samuri Presents
"Makin' Fones Ring"
Ok, this is easy. This is not really phreaking, but still kind of
phun. This only works on Bell Atlantic fones and pay fones. All right,
to make a Bell Atlantic fone ring all you have to do is dial 811 then
the last four digits of the number from which you are calling. You will
hear a dial tone as soon as you do this. Hang up for about 3 seconds
then pick up again, you will hear a strange tone. Hang up, in 5 seconds
the fone will ring. When someone picks up the fone they will hear the same
tone you just heard. When the fone is hung up again it will reset to normal.
You can do this with home fones AND pay fones. The phun part is IT WILL
KEEP RINGING UNTIL SOMEONE PICKS UP. You can do this with your own fone
and annoy your parents or when you go over to someone's house. This is
phun to do at places with rows of pay fones, you can get them all to ring
at once.
----The
f:\12000 essays\sciences (985)\Computer\How to Surf the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to Surf the Internet
The term "Internet," or "The Net" as it is commonly in known in the computer
business, is best described as an assortment of over one thousand computer networks
with each using a common set of technical transfers to create a worldwide
communications medium. The Internet is changing, most profoundly, the way people
conduct research and will in the near future be the chief source of mass information. No
longer will a student have to rely on the local library to finish a research essay - anybody
with a computer, a modem, and an Internet Service Provider can find a wealth of information
on the Net. Anybody with a disease or illness and who has access to the Internet can obtain
the vital information they need. And, most importantly, businesses are
flourishing at this present day because of the great potential the Internet holds.
First of all, for a person to even consider doing research on the Internet privately
they must own a computer. A computer that is fast, reliable, and one that has a great deal
of memory is greatly beneficial. A person also needs a modem (a device that transmits data
from a network on the Internet to the user's computer). A modem's quality and speed are
measured by its baud rate, roughly how fast the modem transmits data in bits and
kilobits per second (related the way grams and kilograms are: a kilobit is a thousand bits).
For example, if somebody were to go out and purchase a 2400 baud modem,
they would be buying a modem that transmits data at about 2400 bits per second, which is
definitely not the speed of a modem you want if you're thinking of getting onto the Internet.
Modem speeds then double in the number of bits that can be transmitted per
second, going from 4800 baud to 9600 baud and so on, eventually getting up to 28,800 baud
(which is the fastest modem on the market right now). To surf the Internet successfully, a
person will have to own a 9600 baud modem or higher, and with recent advancements the
Internet has offered, the recommended speed is a 14,400 bits per second (14.4 kbps) modem. A modem ranges in
price, depending on the type of modem you want, the speed you need, and whether it is an external
or internal type; modems range from as low as $20 to as high as $300. If a person is
not equipped with a computer, most local libraries and nonprofit organizations provide Internet
access where research can be done freely. Having Internet access in libraries is extremely
beneficial for citizens who do not have access to the Internet as it gives them a chance to
survey the vast amount of information available on the Net. And it is absolutely true that the
Internet is evolving into the greatest tool for searching and retrieving information on any
particular subject.
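For a rough feel of what those speeds mean in practice, the sketch below estimates how long a one-megabyte file would take to download at each modem speed mentioned above; real transfers are somewhat slower because of protocol overhead and line noise.

# Back-of-the-envelope download times for a 1 MB file.
FILE_SIZE_BITS = 1_000_000 * 8  # a 1 MB file expressed in bits

for bits_per_second in (2400, 9600, 14_400, 28_800):
    seconds = FILE_SIZE_BITS / bits_per_second
    print(f"{bits_per_second:>6} bps -> about {seconds / 60:.1f} minutes")
# 2400 bps needs roughly 56 minutes for 1 MB, while 28,800 bps needs under 5.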
Searching for information on The Internet using libraries and other nonprofit
organizations can be a bit uncomfortable. For those people who already own a computer
and a modem, and are ready to take hold of the highway of information the Internet
provides, they might want to consider getting a commercial account with an Internet
Service Provider or ISP (a company or organization that charges a monthly fee and
provides people with basic Internet services). Choosing your ISP may be the most difficult
decision you must make when trying to get on the Net. You must choose a service that has a
local dial-in number so you do not end up with monstrous long distance charges. You must
also choose an ISP that is reliable, fast, and has a good technical support team who are
there when you're in trouble or have a problem. Typically, most ISPs charge around $25 to
$30 per month and allocate approximately 90 hours per month for you to use the service.
You must be aware that even though there are some ISPs that charge only $10 to $15 per
month for unlimited access, they may not live up to your expectations; so it would be
advisable to spend the extra $15 or $20 per month to get the best possible service. No
matter how a person gets connected to The Internet, they will always be able to search for
information about any topic that enters their minds. And it is the Internet that is
changing the traditional methods of how people research specific topics. The tools that
simplify the research processes make the Internet another invaluable method of obtaining
information.
Most people who already know how to surf the Internet properly have no trouble
finding information quickly and logically. However, for new people who are just starting to
use the Net, the process can be quite troublesome. Some of the tools used for searching the
Internet include Electronic Mail, or E-mail, which is a messaging system that allows you to
send documents, reports, and facsimiles to users on the Internet. Every user on the
Internet has their own E-mail address and can send messages to anyone as long as they know
another person's E-mail address. One easy way of obtaining information about any topic is
to join a mailing list where mail sent directly to one user will cause the information to be
distributed to all members of that particular list. Mailing lists are a fun and easy way
of gaining the important information a person may find on the Net. This shows yet another
way the Internet is, and can be, useful.
Another way a person can gain information through Electronic Mail is by people
exchanging messages publicly over the Net; these messages are sorted into different
areas called newsgroups, often referred to as Usenet News. There are currently over
13,000 newsgroups which any user with access to the Net can use. People send and
receive messages about whatever topic the newsgroup is devoted to, and this is an
excellent way of gaining information quickly and easily. Usenet news is also a way to
receive up-to-the-minute information about timely topics.
A further tool for exploring the Internet is gopher, which is perhaps the
most popular non-graphic way of searching the Internet. It provides interconnected links
between files on different computers around the Net. Gopher provides access to an enormous
amount of text files, documents, games, reference files, software utilities, and much more.
Gopher is menu-oriented making it fun and easy to search for information because the only
thing the user has to do is point and click.
The World Wide Web is a lot like gopher; the main difference is that it uses a
mixture of text and graphics to display a wide assortment of information. The Web is one of
the most effective methods to provide information because of its visual impact and
multimedia foundation. Many search tools are available on the Web to help the user more
easily search for materials that are of interest to him or her.
There are some users who fret about having an information overload. They see
themselves surfing a sea of random facts, information of varying quality, humour and
entertainment references, people and places. The on-line world contains chaos as does
the real world. Although some say the Internet World contains too much information for
people to make sense of, there is tremendous proof that people will find their place on the
Internet with plenty of help. And everybody will grow up to make sense of the information
available just as millions of users already do.
f:\12000 essays\sciences (985)\Computer\How will our future be.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HOW WILL OUR FUTURE BE?
The way the future is heading seems very clear, but as before, things may change. The time to come will never reveal itself until it has actually arrived. From this point of view I will try to describe the way I see the future coming our way.
One of the major aspects when discussing the future is how the law will be handled and how power will be dealt with. Will we be able to decide for ourselves what we want to do with our lives, and will the rights of every individual be respected, as written in the constitution? There is no way I could be forced to believe otherwise. Our society today has to decide whether every citizen in Denmark should have some sort of card used for multiple things: health insurance, driver's licence, personal identification and many other things. Some people say that this is the beginning of a completely government-controlled society where your every move is followed by the administration.
The year is 2096. We are standing in the airport near Copenhagen. A lot of people are walking by with their net-agents: small computer programs that have been trained to inform you about all the things that you find interesting. To identify themselves they have their citizen-card plugged into the device. An agent is calling our net-computer. He wishes to inform us about all the activities in Copenhagen today, but of course only the ones he knows we might be interested in. The agents are a very handy invention which was created in the late nineties by a small company called Micro-help. Nowadays everybody has one or more. The net-agents work 24 hours a day on the global, fibre-optical network. The network is so fast you never experience the bottleneck when transferring data like in the old days. This is a major advantage for the large number of people working from their own homes. They use a technology called net-meeting if they have to discuss some paperwork over the net. It is possible both to look at your colleague in the video-chatter and at the same time write in the word-processor while the other person is watching and commenting. Schools all over the world are using this technique to exchange information. There is also a separate net called cyber-net. This is even more advanced than the other net. It is the ultimate cyber-space where you can virtually be anywhere on the entire planet, and you can even visit the other planets in our solar system. This net is based on the sophisticated Virtual Reality Nirvana 3D technology. In spite of the wide spread of computers in all the layers of society, it is only the really big companies and their employees that take VRN-3D into use. Imagine a world which is like a movie where there will only be good things. No pain.
In this society of the year 2096, almost everyone is living in the cities, from where they control their daily functions. Even the farmers live in the city. They have given up the dirty work and started to maintain their acres and their stocks from computers. They have agents checking on the cattle 24 hours a day. They do this through a neural implant called the CAT-Tracer. This implant can interface with the brain and thereby sense if everything is all right. The agents can even carry out medical treatment if needed.
Copenhagen, like every other major city, has stopped growing wider and begun to grow upwards and downwards. This small refinement protects the environment from being run over by bulldozers. Every time more settlements are needed, they just build another floor on top, because it is cheaper than digging under the big city. This means that there is no lack of residences.
On the educational side of society there is a lot more to learn now than there was earlier. You have to be completely into how a computer works and what its major potential is. The functionality of the programs. In other words, you have to be an expert in this area. Furthermore, you have to get an education in the field of the industrially oriented direction you may want to work with later on. The learning process is sped up by an implant for better perception and memorising. For years the human race thought that genetic manipulation was the way to a better race. Today we know that nature is much better at selecting the fittest. A lot of money is saved by not doing those extremely expensive experiments. The reason why we have chosen to use implants is that they can be removed at any time and are not a permanent change, as genetic manipulation was.
Generally I think humanity has finally learned not to repeat the tragedies of history. We have to work with and for nature and understand that it is our "BIG BROTHER", watching out for us and our every little step.
f:\12000 essays\sciences (985)\Computer\Httpwww CHANGE com.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HTTP://WWW.CHANGE.COM
Joe the mailman will no longer be coming to your door. You won't have to go pick
up your newspaper in the bushes at 6:00am anymore. Libraries will be a thing of the past.
Why is this all happening? Welcome to the information age.
"You've got mail!" is the sound most people are listening to. No more licking
stamps, just click on the "send" icon, and express delivery service will take on a whole
new meaning.
The future is here. Now, a mouse is better known as a computer device rather than
a rodent. Surfing is being done over the Internet instead of at the beach. Games are no
longer bought at toy stores, but are downloaded into our computers. All of this new
technology sounds fascinating, but will it benefit more than it will hurt?
Think about my opening sentence, catchy, right? Well, think about it again. What is
going to happen to good 'ole Joe? And those nice librarians, what about them? Will they
be out of a job? Will they be forced to operate computers that are foreign to them? How
do we as a society adjust to technological change? The answer lies in society's ability to
effectively measure the costs and benefits of technological change.
The rapid growth of technology brings with it a massive amount of hope, but also
despair. Kids are growing up with computers. They are learning more and faster than
other generations could. This is wonderful, right? Maybe not. Will computers deplete the
social skills kids need to mature? Will being a member of America OnLine rather than a
youth group prove to be helpful or the opposite? Our generation will need to lead this
technological revolution in the right direction. We need to offset the obstacles in our path.
We need to make sure the flow of change is going to be a positive one. The answer lies in
our hands.
We need to utilize the technology given to us, and make sure it is used in a positive
sense. We need to take the Internet and the World Wide Web and rid it of its evils. We
need to make sure terrorist secrets and bomb recipes are not being exchanged, and make
sure educational tools are. We need to make the Internet a source to help find jobs, rather
than a catalyst to replace them. These are the hardships we must get rid of.
So what are we going to do about it? We need to educate everyone young and old,
and make computer illiteracy a thing of the past. We need to maximize computer security
to its fullest extent. Computers shouldn't replace jobs, but rather be a tool in them. Our
generation is being handed great technology and we have to rid it of its flaws. This is what
needs to be done to make technological change great.
The possibilities suggested by technology are endless. There are numerous problems
that arise from such powerful technology. However, with the number of smart minds out
there, it is likely that these problems will find solutions and information technology will
live up to its glamorous expectations.
So, Joe the mailman can keep on delivering that mail, but maybe with a computer
to help organize and make his deliveries quicker. The librarians can keep putting books on
the shelf along with software and multimedia too. Welcome to the future.
f:\12000 essays\sciences (985)\Computer\Human memory organization.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Human memory organisation.
Human memory organisation, from the outside, seems to be quite a difficult thing to analyse,
and even more difficult to explain in black and white. This is because of one main reason:
no two humans are the same, and from this it follows that no two brains are the same.
However, after saying that, it must be true that everyone's memory works in roughly the same
way; otherwise we would not be the race called humans. The way the memory is arranged is
probably the most important part of our bodies, as it is our memory that controls us.
I think that it is reasonable to suggest that our memory is ordered in some way, and it is
probably easy to think of it as three different sections: short term, medium term, and long
term memory.
Short Term: This is where all of the perceptions we receive arrive, from the eyes, nose,
ears, nerves, etc. They come in at such a rate that there needs to be a part of memory that
is fast and can sift through all of these signals, and then pass them down the line for use
or storage. Short term memory probably has no real capacity for storage.
Medium Term: This is where all of the information from the short term memory comes to be
processed. It analyses it, and then decides what to do with it (use it, or store it). Here
also is where stored information is recalled for processing when needed. This kind of
memory has some kind of limited storage space, which is used when processing information;
however, the trade-off is that it is slower than short term memory.
Long Term: Long term memory is the dumping ground for all of the used information. Here is
where the medium term memory puts its information and takes it back from when needed. It
has a large amount of space, but is relatively slow in comparison with the other kinds of
memory, and the way that the memory is stored is dubious, as we are all known to
forget things.
There is quite a good analogy in Sommerfield (fourth edition, p24-p25). Short term memory is
comparable to a computer's registers, medium term (working memory) is like a volatile
storage place for information, and long term memory is like hard disk storage.
I think that this is quite a good way of describing our own memory hierarchy.
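As a loose sketch of that computer analogy (and only the analogy, not a model of real cognition), the toy class below gives each level the trade-off described: the fast level is tiny and loses old items, while the slow level keeps everything.

# Register / RAM / hard-disk style hierarchy: each level trades speed for capacity.
from collections import deque

class MemoryHierarchy:
    def __init__(self):
        self.short_term = deque(maxlen=5)   # tiny and fast: new perceptions push old ones out
        self.working = {}                    # limited space used while processing
        self.long_term = {}                  # large, slow "dumping ground" for stored items

    def perceive(self, item):
        """New signals land in short term memory first."""
        self.short_term.append(item)

    def process(self):
        """Working memory takes items from short term and decides to use or store them."""
        while self.short_term:
            item = self.short_term.popleft()
            self.working[item] = "being processed"
            self.long_term[item] = item      # store for later recall

    def recall(self, item):
        """Stored information is pulled back from long term memory when needed."""
        return self.long_term.get(item)

mem = MemoryHierarchy()
mem.perceive("red cup")
mem.perceive("black cup")
mem.process()
print(mem.recall("red cup"))  # -> 'red cup'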
It seems that when information is being processed, and then in turn stored, it is not being
stored as raw information such as black, round etc., but is being stored as what we see. For
example, if we see a red cup, we store the information about the cup together, i.e. that it is
red, how high it is, what shape it is. Now if we see a black cup, we still recognise that it
is a cup, even though the colour has changed.
Now, it is clear that if the small amount of storage capacity in short term memory did not
pass on the information quickly to the working memory (medium term memory), then as new
information came in, the old information would be forgotten. Likewise, if working memory
tried to store too much, with more being passed to it from short term, again there would be
information loss.
The way that memory gets around this problem is not unlike structured programming.
Here, tasks are divided into different steps (while loops and if statements), so that the
different tasks contained in one problem can be tackled by the short term memory in stages.
This means that all of the related information is loaded in stages, the single task is
solved, and the memory gets updated with the next task, until the whole problem is solved.
This way of working means that there is no need to load unrelated information at the same
time, saving time and work that the memory has to do.
f:\12000 essays\sciences (985)\Computer\Identification of designing a web page for your school.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Identification
The Doha College is a coeducational school for students aged between 11 and 18 years. It is situated in the Arabian Gulf, in the capital city of the State of Qatar. It provides a British curriculum to students from over 40 different countries. Although the culture here does not resemble European and Western culture, the environment of the classes is the same. The Doha College is one of the few schools in Qatar that represents the British system of teaching. The College's foundation was the work of many members of the British Community, including the British Ambassador, who became the first ever Chairman of the Board of Governors. The Qatar Government approved the project and continues to give its full and appreciated support plus its encouragement. The Doha College was opened in September 1980. It moved from the old school to the new one, located on the Salwa Road, in April 1988.
The college has now become one of the best schools in Doha. Over 650 students from all over the world are currently being taught in the school. Over a hundred students join the school every year. A lot of events take place around the school, for example, fairs, sports events and school parties. All of these events are publicised around the school premises to tell the students about them. These published documents are found mainly on notice boards.
The College publicises itself in many ways. For example:
College uniform.
Word of mouth (rumours)
Year Books
Sports events
School production
Fairs and fund raisers
Posters sent around schools
College prospectus
Adverts and logos in newspapers
Communication between parents and school, e.g. newsletters and parent-teacher meetings.
The Task
Much of the printed publicity of the College tends to be rather dull. Although the printed publicity of the College is often very informative, it doesn't really attract anyone to actually sit down and read it. The printed publicity is often printed on a laser printer, photocopied about 20 times and given to the class students. This publicity comes out in two colours, black and white, which makes the document very unattractive. This publicity is very unoriginal. The letters and information given to the parents by the school can be particularly straightforward, but they still contain too much information, normally known as information overload.
We have decided to concentrate on designing our own new publicity system for the Doha College. This system could include:
Letters home to parents
Poster advertising about the College
News Letters
Option Booklets
School Prospectus
Year Books
These publicity systems have to be:
Colourful
Attractive
Informative but not dull
Eye catching
Original
Modern
Something that the students, or anyone else, won't forget.
We have decided to produce a newsletter as our publicity system. We are going to design our own newsletter, mentioning all the events taking place around the school. For example, we could mention any sports events or fixtures taking place around the school. We could also mention the school fairs, or any other special activities that are taking place around the premises.
To check whether our newsletter is satisfactory, we could hand copies out to different people of different ages. We could take into consideration all their different opinions and make any changes needed in our new publicity system. We are going to compare our new publicity system (the newsletter) with the other newsletters that are found in the college.
We have looked at the newsletters that are sent around by the school, and as you may already know they are particularly boring! So we are going to make our publicity system:-
a) Attractive:- It has got to make parents and students read it; if it is unattractive, no one will even look at it.
b) Interesting:- It has got to be interesting, to allow parents and students to read the publicity system and, at the same time, enjoy what they are reading.
c) Original:- The newsletter will have to be original, because if it isn't, parents and students will not be attracted to it, thinking it is going to be the same type of document again!
Hardware:
We have already identified what our publicity system will look like. Now we must describe the different types of hardware we are going to use. Mainly we will be using DTP (desktop publishing) to work on the newsletter.
1. The Mouse:- The mouse is one of the most important tools to work in DTP. The mouse helps us to move around the document easily and allows us to transmit the movement of our hand to the computer.
2. The scanner:- The scanner will be a very important tool to work with in DTP. This is because we may want to incorporate our own pictures or photographs into our document.
3. The Printer:- The printer, whether it is an ink-jet or a laser printer, could be used to output the document. Obviously, to check how our new publicity system looks, we are going to use the printer.
4. The Visual Display Unit:- The monitor should be as large as you can afford, because it will help avoid eye strain. Also, you might have to work with two screens at the same time, which is why we need a good VDU.
5. The Digital Camera:- The digital camera may be used to take pictures of the school premises or photographs of the people in the school.
We would prefer to use this sort of hardware to work on our publicity system, mainly the digital camera or even the scanner, to produce our publicity system. The newsletter has to be attractive, which is why we are mainly going to use the hardware that will provide us with pictures.
During the year we have already done some rough ideas and sketches of how our new publicity system will look, experimenting with Microsoft Publisher and Visio to do some rough designs of what our leaflet will look like. I personally believe this is a good idea, as it gets us into the habit of working with DTP.
Web Page
After some research we have decided that we are not going to create a newsletter, but a web page.
Identification of Web Page project:
The Internet is quickly becoming the fastest medium of communication, with over 60 million users worldwide. Recently, Q-tel introduced its own server for Internet access. While this project is new and the Q-tel file server can hold a very limited number of customers at one time, the service is very cheap (QR6 per hour) and has enjoyed a fair amount of success. The introduction of this technology to Qatar has provided people with a huge amount of information and, more importantly, a chance to interact with people around the world through e-mail, newsgroups and homepages. This is where our project comes in. We plan to create a web page for the Doha College. Our plan will include using our knowledge of the Internet standard language, HTML, as well as a web authoring program. The one I have at present is Microsoft Front Page, but shareware versions of Hot Dog and other tools are available online for free. After creating the page we plan to publicise it through popular search engines such as 'Yahoo!' and 'Webcrawler'. Finally, we will show the page to the IT department for evaluation. Once the page is up and running, heads of different departments can write and update different material on their part of the page. This goes one step further than the Doha College's current publicity system because it will make the school known to Internet users around the world.
Our first draft for this project will be on a web site that offers its users free web sites. We already have an account on this site which includes a free homepage (http://www.geocities.com/sunsetstrip/alley/3321) and an email address which we have not used yet. The disadvantage of this is that Geocities is a huge web site that gets millions of hits a day, so access is slow and sometimes, during peak hours, impossible. Even though a web page at Geocities is initially free, upgrades for the page cost a lot of money over the long run. Things like memory upgrades, personalised voice greetings and Java applets are either hard to operate or have a monthly charge. This charge is tiny, though, when compared to the cost of getting an original .com URL. Something like HTTP://WWW.DOHACOLLEGE.COM would cost an initial fee of $100.00. Banners on the top web pages like Yahoo! can cost up to ten times that amount per week.
The advantages definitely outweigh the disadvantages though. A very small, obscure web site gets visited by about ten people a day. With very high publicity inside the college and Doha in general, the Doha College homepage can guarantee at least triple that amount in one day from Q-tel users alone. Regular listing in all the top search engines is usually free. Things like counters are available from the web counter homepage at HTTP://WWW.DIGITS.COM. The counters count how many people visit a page. HTML text can be copied from one page to another through any browser. The latest versions of web authoring tools are becoming less expensive and are capable of many more tasks such as Java and background sound. The recent introduction of the new Qatar homepage (HTTP://WWW.QATAR-ONLINE.COM) and a surge in Internet users in the Gulf has made publicity easy. URLs of pages can be added at no cost to sites such as Qatar and Emirates-online. If we face a problem with any aspect of building the web page, online help is widely available, with thousands of pages dedicated to help on one aspect alone. The web page construction will also be influenced by feedback from visitors through email.
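As a rough idea of how such a counter works behind the scenes (this is only a sketch of the general technique, not the actual www.digits.com service), a page counter simply increments a number stored on the server each time the page is requested.

# Minimal file-based hit counter; the file name is a placeholder.
from pathlib import Path

COUNTER_FILE = Path("hits.txt")  # hypothetical file kept next to the page on the server

def record_visit() -> int:
    """Increment and return the number of visits to the page."""
    hits = int(COUNTER_FILE.read_text()) if COUNTER_FILE.exists() else 0
    hits += 1
    COUNTER_FILE.write_text(str(hits))
    return hits

print(f"You are visitor number {record_visit()} to the homepage.")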
We will need a lot of software to make the web page complete. The first thing we will need is an Internet connection with a server. We have one with Q-tel, which includes access to the Internet (QR6.00 an hour) and an email account (akkad@qatar.net.qa). The Q-tel account can be very slow at times, and if a lot of users connect at one time the server may overload. The dial-up screen of Q-tel is not very big on security; many people have managed to hack into the password files. But Q-tel is improving, and access is much quicker than it was three months ago. The second thing we need is the latest version of a popular browser such as Netscape Navigator 3.0+ or Microsoft Internet Explorer 3.01+. Browser models are being upgraded all the time, with both 4.0 versions of I.E. and Navigator being released soon. The capabilities of the best browsers include Java and ActiveX controls, which allow animation and other multimedia to be viewed on the Internet. They can also view the HTML source of documents on the Internet, which can be copied onto other sites. Plug-ins are available for all browsers and platforms which can make the Internet more dynamic, with CD-quality sound and movies that can be played with plug-ins such as Shockwave and RealAudio. Most browsers come with mail and news programs to send and receive mail and to post to and read newsgroups.
f:\12000 essays\sciences (985)\Computer\Identity theft speech.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Boo! Are you scared? You should be. You see, I'm a ghost, and every day I tap into the information cyber world. And every day I have access to you. Worse yet, I could be you. According to the Secret Service, approximately half a billion dollars is lost every year to identity theft online. What people don't seem to realize is that the Internet world is just like any other community, so it's safe to assume the cyberworld acts as any natural community would, with entrepreneurs, vast corporations, little guys selling warez, doctors visiting patients in their cyber offices, church organizations, and cyber crime as well as cyber criminals. With so many new users signing on every day, the cyber world has become a tourist trap, so to speak, where nameless, faceless con artists work the crowds. Ghosts. Anybody can fall victim to a ghost. Nobody is really, truly safe, because our personal information is distributed like candy on Halloween. We use our social security numbers as identification numbers. Credit card numbers are printed on every receipt. And our license number is used on a daily basis.
Despite this, there are ways to prevent yourself from falling victim to identity theft. You see, a criminal likes easy prey. They don't want to have to work for it. It's like locking your car at the mall: sure, someone might break in anyway, but chances are if your doors are locked they will probably move on to another car. First off, never give your credit card number out online unless you are positive that the company you are dealing with is legitimate and reputable. If you aren't sure, call the Better Business Bureau. Never give out your social security number unless you absolutely have to; the only times you are legally obligated to give out your social security number are when you are requesting government aid of some kind or for employment reasons. Also, I have information packets that I will hand out regarding a company that, for a small cost, has information about everybody. The packets have detailed information on how to have your name and your family members' names removed from their database system.
Now you might be thinking, "Granted, I can see why you wouldn't want to give out your credit card numbers, but what could actually be done with my social security number?" Everything. This is your most vital information. Say I were a cybercriminal. Say I came across your social security number while perusing the school database. With your social security number I can obtain information about you through the school by, oh, requesting a transcript, for instance. Later I could sign on to my anonymous account and fill out an application for American Express, and maybe a MasterCard, and oh, I could use a new beeper too, this one is shot. By changing your address to a PO box or an abandoned apartment box, I could pick up my new cards and legitimately become you. I could also take it another step and request a new birth certificate, because it's not at all difficult to find out where you were born; the service I mentioned earlier has all of this information for me. I can even get a photo license with your information. Scared yet? So I charge you thousands of dollars and you don't even know it until you try to take out a loan for your daughter's new car. Granted, in most cases you might not have to pay for the monetary damage directly, but it will take you years to fix your credit. This is identity theft.
As I said earlier, anybody can become a victim. But now you have vital information that could prevent you from becoming one. So never give out your information unless you absolutely have to. Do yourself a favor and do your transactions in person. The information cyberworld is a wonderful place to visit, but just like in Tijuana, don't let the little Mexican guy sell you a gold necklace for 80 bucks, and don't fall prey to the ghosts.
f:\12000 essays\sciences (985)\Computer\IMPLEMENTING A CAD SYSTEM TO REDUCE COSTS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IMPLEMENTING A CAD SYSTEM TO REDUCE COSTS
Introduction
This report will analyze a proposal on how Woodbridge Foam could become more competitive
through improvements in technology. This includes saving the company money, shortening the
design time for new products, decreasing quoting time and improving quality overall. By
implementing a company-wide CAD system, networked together with each customer and all
plants, these improvements could be achieved. Research will include interviewing various
employees as to how business is done and what influences the winning or losing of a
contract. Research will also include a study of both customer and competitor systems.
Project Scope & Current Evaluation
Goals Supported by CAD Initiative:
In converting to a completely independent CAD system, there are a few aspects of operation
which would be greatly improved. The first of the improvements would be the elimination of
paper communication. The need to transfer large drawings using mylars would cease, thus
helping provide a paperless environment. Another improvement as a result of CAD would be
the achievement of much tighter tolerances in building new products. Using a CAD system,
part designs could be received in an electronic format such as a math model. These models
are currently in use by customers such as GM, BMW and Mercedes. Having math models of all
new products would enable a quicker turnaround in both quoting and production of products.
CAD Vendors & Hardware Suppliers:
Upon observing the various systems used by several customers and suppliers, the major CAD
vendors worth consideration have been identified. Manufacturers of high-quality workstations
which have been distinguished are: Hewlett Packard (HP); IBM; Silicon Graphics (SGI); SUN.
Premium, fully functional CAD solutions are: CATIA (Dassault / IBM); Computervision
(Computervision / TSI); SDRC (SDRC / CAD Solutions); Unigraphics (EDS).
Current System Description
Success Factors:
In implementing a new, otherwise foreign system into an established, habitual way of doing
things, there are several success factors which must be examined. If these factors are
carefully thought over, a favorable shift from old to new may be obtained. Some critical
success factors are as follows: Vendor availability - Will the chosen system supplier be
readily available for technical support? Product engineering acceptance - Will those who
are set in their ways be willing to abandon their habitual manner of operating? Training -
Thorough training of all related employees must be completed before introduction of the new
system. Data management - A new manner of recording all vital information must be
established and proper procedures documented. Customer interface - Will the chosen system
be compatible with those used by our customers, and will needed data be easily convertible?
Company Weaknesses:
Currently, there are many aspects of our situation which present problems in coping with
changing times and which, in turn, hold back the development of technology. Some weaknesses
in the company which keep us from matching the developmental progress of our customers and
suppliers are: We cannot easily accept electronic data; We must deal in manual drawings; We
have many copies of similar drawings; We have multiple ECN levels; We have minimal CAD
knowledge; We must perform manual volume calculations.
Threats to Business:
If steps are not taken to improve on the present company weaknesses, there are bona fide
threats which could potentially harm future progress and business. Once the weaknesses in
the company have been addressed, the following threats to our business may be eliminated or
greatly reduced. The immediate threats are: Suppliers may assume the design role;
Competitors able to accept electronic input; No business with new products; Deterioration
of communications; Lost productivity.
Process Description:
As in most large corporations, our process generally follows a standard order of operations.
There are several departments or areas, each with its own function. Based on the function of
a department or area, a focus area is established and followed. The departments and areas,
and their functions, are:
Customer - Designs seat
Product Engineering - Designs tool to manufacture seat
Supplier - Builds tools and supplies components needed to manufacture and construct seat
Product Evaluation & Costing - Costs seat based on foam and components used, manufacturing costs and assembly
Purchasing - Locates seat component suppliers and oversees development and manufacture of components
Plant - Manufactures and assembles seat
Quality Control - Ensures that products meet our own and customer standards
Sales / Marketing - Processes orders and manages overall customer relationships
New System Requirements
CAD System Requirements:
The CAD system which is chosen must be capable of performing several specific tasks. In
order for a new system to be of any use to the company and an aid to its advancement, it
must present an improvement in various areas. Some of the short-term requirements of a new
CAD system are: Capable of 3D modeling, including solids; May be used for simple or complex
drafting applications; Suited to quickly performing volume calculations; Able to translate
various forms of math data.
Product Evaluation & Costing (P.E.C.) Requirements:
With respect to all the various areas of the company, the role of the P.E.C. department is
one of the most important in the area of profit. Once the costing department receives a
part request from a customer, it is the responsibility of the costing department to ensure
that the life cycle of the part development is managed cost efficiently. When a current
product undergoes an engineering change, it is the responsibility of the Costing team
members to note the changes. The product must be re-costed, accounting for variances in
foam and components. If an increase in foam is noted, the change must be calculated. Using
manual calculations, the new part volume is derived and the customer is charged accordingly.
Because foam variances are obtained manually, customers may at times not be fully charged
for the added cost of foam. Using a CAD system to perform a volume calculation, the answer
would be definitive. The time needed to ship a print is approximately two days. If math
models of products were sent via E-mail, the information needed by the costing department
would be obtained two days earlier. Once complete, a costing package would in turn, arrive
at a plant, also two days earlier from costing. In effect, a total of four days could be
eliminated from the time needed to begin manufacturing a product.
Solution Evaluation & Recommendation
Benefits of CAD System:
In utilizing a CAD system, there are many areas of
operation which are directly or indirectly affected. Because of the speed and accuracy with
which a professional CAD system operates, time, and thus money, may be saved. Potential CAD
project benefits include: Improved accuracy in quotes and design; Reduction in copying and
courier costs; Faster and more accurate calculations of complex volumes; Management of
expanding drawing database; Improved electronic communication with customers and suppliers.
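To make the volume-calculation benefit above concrete, the sketch below shows one common way a program can compute the enclosed volume of a part directly from a triangulated surface of the kind a math-model export might contain: each triangle contributes the signed volume of the tetrahedron it forms with the origin. This is only an illustrative Python sketch under the assumption of a closed, consistently wound triangle mesh; it is not Unigraphics code, and the cube at the end is made-up example data.

    # Illustrative sketch only (not Unigraphics code): part volume from a
    # closed, triangulated surface model, via signed tetrahedron volumes.

    def signed_tet_volume(a, b, c):
        """Signed volume of the tetrahedron formed by the origin and points a, b, c."""
        return (a[0] * (b[1] * c[2] - b[2] * c[1])
                - a[1] * (b[0] * c[2] - b[2] * c[0])
                + a[2] * (b[0] * c[1] - b[1] * c[0])) / 6.0

    def mesh_volume(vertices, triangles):
        """Volume enclosed by a closed triangle mesh with consistent winding."""
        return abs(sum(signed_tet_volume(vertices[i], vertices[j], vertices[k])
                       for i, j, k in triangles))

    # Made-up example: a unit cube (8 vertices, 12 triangles).
    verts = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
    tris = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
            (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
    print(mesh_volume(verts, tris))   # approximately 1.0 for the unit cube

Because the result follows directly from the geometry data, it removes the approximation error inherent in manual volume calculations.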
Recommended Vendor/Supplier
Based on thorough presentations made to executives of Woodbridge Foam by each candidate and
the penetration of these amongst key Woodbridge customers, it is recommended that
Unigraphics be implemented as the solution. The Unigraphics system is currently used by 40%
of Woodbridge customers. This system is also capable of performing all of the previously
mentioned tasks such as 3D modeling, drafting, volume calculations and translating different
forms of math data.
Justification of CAD & Unigraphics
CAD justification includes:
Elimination of Mylars;
Encouragement of a paper-less environment;
Reduction in copy and reproduction costs;
Reduction in courier costs;
Faster and more accurate part volume calculations.
Unigraphics justification includes:
Used by key customers such as Chrysler and GM;
Ability to convert data used by all customers;
Extra commitment and availability for technical support;
Extensive research into company prior to presentation.
Workstation Cost:
One-time costs for one workstation:
Unigraphics Software License: $30 000
Hewlett Packard Workstation: $45 000
EDS Assistance (Assessment/Help): $5 000
Training (UG Education): $10 000
Consulting Assistance: $7 500
Printer and Plotter: $30 000
Hummingbird/Exceed PC Access Software: $10 000
One-time total cost: $137 500
Annual Maintenance Costs: $3 750
Cost Reductions:
As previously mentioned, the implementation of a CAD system will reduce costs in several
areas. By eliminating the need for physical prints, the cost of reproducing and shipping
prints will be eliminated. Some potential cost reductions in dollars are:
Prints: 35,000
Mylars: 75,000
Courier: 5,500
Travel: 16,000
Plants (saved travel): 90,000*
Productivity Improvement: 75,000*
TOTAL SAVED: 296,500
Productivity Improvements:
There are some improvements in productivity which do not present a direct monetary value. These
improvements, however, will benefit the company and customer relations. These non-monetary
productivity improvements are: Improved accuracy; Improved customer satisfaction; Support
for higher tolerance of products; Improved on-line access to information; Improved internal
communication between Woodbridge departments.
Conclusion:
As advancements in technology continue to be the norm, it is essential that those who wish
to remain competitive keep pace with these advancements. In the case of the Woodbridge Foam
Corporation, maintaining an equal standing with technological advancements will allow for
improvements in the company as a whole. Cost savings may be realized in the areas of print
and courier costs, while the need for paper transfer is eliminated. Tolerances, quoting
time and an overall improvement in quality will in turn improve the satisfaction of our
customers. Because of these advancements in technology within the company, the saying "a
satisfied customer is a return customer" may be brought to life.
f:\12000 essays\sciences (985)\Computer\Improvements to the School Districts Local Area Network.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
This evaluation of our school district's network of personal computers will closely examine the current system and identify potential improvements to it. The evaluation covers the administrative departments in the school district, which are handled separately from the educational departments.
Logistics of the Network
The logistics of the computer systems currently in use break out as follows: approximately 500 personal computers, 21 file servers, a Digital VAX, and an Ethernet network running at 10 megabits per second that spans 52 buildings, with wide area network links running at 56 kilobits per second to 1.55 megabits per second. The primary operating system in use is Microsoft Windows 3.11, but Windows 95 is being phased in to become the primary workstation operating system. Novell Netware 3.12 is the primary network operating system, but Windows NT is being phased in to become the primary network operating system.
Each personal computer in use is utilized, for the most part, by only one user. This means that each system has a standard configuration at the system level, but at the GUI level the users are free to set up their environment however they wish. Virtually every administrative employee in the School District has a personal computer on their desk that they need in order to perform their assigned tasks. Each user of the network processes mainly text-based documents. In some cases graphics are embedded into documents to increase the professional look of the document, but this minor use of graphics would not be considered desktop publishing. Since the school district's departments are separated by function, there is no need for video teleconferencing and it is not being used.
Evaluation of System Configuration
Currently each building is handled as if it were a separate organization. Each site has its own server, and all of the accounts are stored on the local server. Also, every one of the applications in use is stored on each client's workstation. This configuration is excellent for fault tolerance because each server can operate without the presence of any other server.
a. Servers
The servers were evaluated for manufacturer service support, available disk space, Random Access Memory present, processor speed, built in fault tolerance, and the type of network interface card being used.
The servers in use are actually personal computers that have had additional memory and larger hard drives installed so that they could be used in a server capacity. The manufacturer of the computers being used as servers is Gateway 2000. This company's service support is handled through phone and mail only. If there is a hardware problem with a component of the server, it will take at least 24 hours to receive a replacement part. This turnaround time is not acceptable because as many as 200 employees would not be able to work while the server that they use is down.
The processors in use are Intel Pentium 90 MHz processors. This processor speed is adequate for the average demands being put on the server. This speed could become a problem with upgrades to different network operating systems, but for current utilization the processors are adequate. This is supported by the fact that average server utilization is 14% of the server's capability and by the fact that the servers are used as file servers only.
Built-in fault tolerance on the servers is non-existent. The server configuration has only one hard drive, one controller card and usually no tape backup. If any component of the server fails, then the server will be down. The only backup being done on any server is from one tape backup unit on the main server in the MIS department. This is not effective because not all servers can be backed up every night; a bi-weekly or monthly backup may be the only backup available for any server. This is an extremely weak area in this network.
The network interface cards being used are SMC's 32-bit Peripheral Component Interconnect (PCI) cards. These cards have a lifetime warranty and are replaced within 24 hours by the manufacturer. This type of card is adequate for the traffic being put on them by the users of the network. Current network speeds only reach up to 10 million bits per second, and these cards support that very well. A nice feature of this type of card is that it can be ordered with the BNC, thinnet type of connector or the 10-base-T twisted pair connector. This allows greater flexibility when implementing their use.
b. Client "IBM Compatible" Personal Computers
The systems being used by the school district vary greatly in age, speed, and configuration. The average system being used is a 486 66 MHz Gateway PC with 8 megabytes of RAM and a 540 megabyte hard drive. There are also some 386-25 MHz computers still being used, but they are being replaced with Pentium 100 MHz systems. It would be impossible to examine each system's configuration and include it in this report, but the district's standard configuration which is being implemented has been included. This represents the software being put on the systems and where it is stored.
The Configuration of the district's personal computers follows.
I. Operating System
1. Microsoft Windows 95
II. Default Applications.
1. Microsoft Office Version 7.0
a. Microsoft Word 7.0
b. Microsoft Access 7.0
c. Microsoft Excel 7.0
d. Microsoft Presentation 7.0
e. Microsoft Scheduler 7.0
2. Word Perfect Version 6.1
3. Quattro Pro Version 5.1
4. Insync Co-Session Remote Version 7.0
5. Reflections Version 5.1
6. F-Prot Professional for Windows 95 Version 2.22.1
III. Protocols
1. Microsoft IPX/SPX
2. Microsoft Netbeui
3. Walker Richer & Quinn's (WRQ) LAT protocol Version 4.03
IV. Clients
1. Microsoft Windows Client
2. Microsoft Netware Client
V. Installed Printer Drivers
1. Hewlett Packard Laser Jet
2. Hewlett Packard Laser Jet Series II
3. Hewlett Packard Laser Jet 4/4M Plus
VI. Network Interface Card
1. Hewlett Packard Ethertwist Plus (27245B)
VII. Display configuration
1. Resolution and Refresh rate.
a. Super VGA 640 X 480
b. 75 Hertz
I. Windows 95 Environment Configuration
1. Auto Arrange: On
2. Accessibility Options: Off
3. Time Zone: Mountain
4. Screen Saver: Flying Windows (10 min delay)
5. Background: Blue Rivets
6. Installation Type: Typical
7. Desktop Icons:
a. Recycle Bin
b. Microsoft Internet
c. My Computer (User Specific)
d. Network Neighborhood (School District )
e. Microsoft Network
f. Word Perfect Shortcut
g. Quattro Pro Shortcut
h. Microsoft Word Shortcut
i. Reflection Shortcut (District VAX)
j. My Briefcase
8. Toolbar Icons
a. F-Prot Dynamic Virus Protection
b. STB Vision or ATI
9. Microsoft Office Professional Toolbar
II. Default Application Configuration
1. Microsoft Office Version 7.0
a. Microsoft Word Version 7.0
1. Default File Path: C:\mydocu~1 (C:\My documents)
2. Timed Backup: 10 Mins
3. Backup Location: C:\mydocu~1
b. Microsoft Access Version 7.0
1. Default File Path: C:\mydocu~1 (C:\My documents)
2. Timed Backup: 10 Mins
3. Backup Location: C:\mydocu~1
c. Microsoft Excel
1. Default File Path: C:\mydocu~1 (C:\My documents)
2. Timed Backup: 10 Mins
3. Backup Location: C:\mydocu~1
d. Microsoft Presentation
1. Default File Path: C:\mydocu~1 (C:\My documents)
2. Timed Backup: 10 Mins
3. Backup Location: C:\mydocu~1
e. Microsoft Scheduler
1. No custom settings made.
2. Word Perfect 6.1
a. Default File Path: C:\mydocu~1 (C:\My documents)
b. Timed backup: 10 mins
c. Backup Location: C:\office\wpwin\wpdocs
d. Application Location: C:\office\wpwin
3. Quattro Pro
a. Default File Path: C:\mydocu~1 (C:\My documents)
b. Timed backup: 10 mins
c. Backup Location: C:\office\qpw
d. Application Location: C:\office\qpw
4. Insync Co-Session Remote Version 7.0
a. Protocols Supported
1. SPX
2. Netbeui
b. Only Host Installed
5. Reflections Version 5.1
a. Connection: via LAT
b. Static Host List:
1. CSPS01
2. CSPS02
3. CSPS03
4. CSPS04
c. Color: PC Default 2
d. Settings File: C:\rwin\settings.r2w
e. Key Remap: VT => PC Keyboard F1-F4 keys
f. Runs in a maximized window
6. F-Prot Professional
a. Floppy A: protection: Disinfect/Query
b. Floppy B: protection: Report Only
c. Fixed Disk C:\ protection: Disinfect/Query
d. Network Drives: Report Only
e. Dynamic Virus Protection (DVP): Disinfect/Query
1. Scan first full 1 MB of memory
2. Run in minimum amount of memory
3. No schedule set for full scan
III. Default Protocols
1. Microsoft IPX/SPX Compatible Protocol
a. Set as the default protocol
b. Auto configures to 802.2 or 802.3
2. Microsoft Netbeui
3. Walker Richer & Quinn's (WRQ) LAT Protocol
a. Static Host List
1. CSPS01
2. CSPS02
3. CSPS03
4. CSPS04
IV. Clients
1. Microsoft Windows Client
a. Not set to log into a domain
2. Microsoft Netware Client
a. Preferred Server is local server (disabled on NT clients)
V. Installed Printer Drivers
1. Hewlett Packard LaserJet
2. Any local printer drivers
VI. Network Interface Card
1. Hewlett Packard Ethertwist
a. Interrupt Request: 10
b. Input / Output Base Address: 330
c. Set to 16-Bit Real Mode Driver (To support WRQ LAT)
VII. Display Configuration
1. Set to PC local display driver
2. Set to 640 X 480 Resolution
3. Set to 75 Hz Refresh Rate
4. Set to Large Icons
Comments on Evaluation
As the systems were being evaluated, it was apparent that the systems are meant to be self-sufficient and almost completely independent of the server. Again, for fault tolerance reasons this is a good decision. This means that if one of the servers were to go down, the only effect on the workstation would be that there wouldn't be any file sharing available and shared printing could not be done. These two factors would not prohibit employees from getting work done effectively. It would add some inconvenience, but the employees could still function.
The choice of Windows 95 as the operating system was based on the fact that the computers being used were IBM compatible, which would demand an IBM compatible operating system. Also, the users of the PCs would mostly be using the computer for one or two applications that were not processor demanding. In addition, Windows 95 is superior to Windows 3.11 in maintainability, security, and multitasking. It would seem that an operating system such as Windows NT would be too powerful and too costly to implement. OS/2 would also be too powerful and is not as compatible as Windows 95 is with DOS based applications. Therefore, it seems that Windows 95 was a good choice for this type of environment.
It is also apparent that the systems have been configured to be managed and repaired remotely with the application Co-Session Remote. This application is configured to allow a workstation to be remotely controlled by a system administrator from a PC on the same network. It has been configured for use over IPX/SPX and Netbeui, which means that the connection would be very fast. So, instead of using dial-up connections at 28.8 kbps, the system can be controlled at 10 Mbps, which is significantly faster.
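As a rough illustration of that speed difference, the short calculation below compares how long a single screen update might take over each link. The 100 KB update size is an assumed figure chosen only for illustration, not a measured value from Co-Session Remote.

    # Rough, illustrative comparison of remote-control responsiveness over a
    # 28.8 kbps dial-up link versus a 10 Mbps LAN link (assumed update size).
    dialup_bps = 28_800                 # 28.8 kilobits per second
    lan_bps = 10_000_000                # 10 megabits per second
    screen_update_bits = 100_000 * 8    # assume a ~100 KB compressed screen update

    print(screen_update_bits / dialup_bps)   # about 27.8 seconds over dial-up
    print(screen_update_bits / lan_bps)      # about 0.08 seconds over the LAN
    print(lan_bps / dialup_bps)              # raw bandwidth ratio: roughly 347x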
One weakness of this configuration is the necessity to load drivers in real mode instead of the 32-bit mode of Windows 95. This is necessary because the system must connect to a VAX using the LAT protocol, and the LAT protocol runs only in 16-bit real mode. This limitation does not significantly slow down the workstation, but it does cause communication with the server to be slightly slower. As soon as the LAT protocol is upgraded to allow it to run in the faster 32-bit environment, the configuration of all Windows 95 based machines should be upgraded.
Potential Improvements
After performing an in-depth study of the systems being used by the school district, the issue that needs the most attention is data backup. Currently there is no routine procedure in place to safeguard the district's data. This should be a major concern, and steps should be taken to resolve this problem before a disaster occurs. Additionally, the use of real mode network drivers needs to be phased out as soon as possible. The users currently do not see degradation in performance, but as their applications become more network intensive the problems will also become greater. Outside of the backup problem and the real mode drivers, all other critical areas have been sufficiently addressed to give the users a robust system that can be easily upgraded and managed.
Conclusion
The school district's personal computer network is one that is used to provide employees with a means to compile, process and disseminate information that is relevant to business operations. Currently the primary type of information being processed is text based, with some use of embedded graphical images. No other medium, such as video teleconferencing, is being used over the network. The district is currently in the process of providing employees with Internet access at their desktops, which is used for such activities as funds acquisition, consulting State bid lists and personal e-mail. The support for these 500+ systems comes from only one support professional, who has to cover over 50 separate buildings. The result is that the district needs systems that are fast, reliable, inexpensive, low maintenance and able to communicate with many other personal computers and servers.
This evaluation found that the district is not at the level it needs to be, but steps in the right direction are being made to get there. The computers have good software configurations, and most users have all of the hardware they need to perform their job functions. If the district can acquire more personnel to support this network and come up with a routine backup plan, then the users of the network could continue to support the school district effectively with the use of this well-designed technology.
f:\12000 essays\sciences (985)\Computer\Improving Cyberspace.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Improving Cyberspace
by
Jason Crandall
Honors English III
Research Paper
26 February 1996
Improving Cyberspace
Thesis: Though governments cannot physically regulate the
Internet, cyberspace needs regulations to prevent
illegal activity, the destruction of morals, and child
access to pornography.
I. Introduction.
II. Illegal activity online costs America millions and hurts
our economy.
A. It is impossible for our government to physically
regulate cyberspace.
1. One government cannot regulate the Internet by
itself.
2. The basic design of the Internet prohibits
censorship.
B. It is possible for America to censor the Internet.
1. All sites in America receive their address from
the government.
2. The government could destroy the address for
inappropriate material.
3. Existing federal laws regulate BBS's from
inappropriate material.
III. Censoring the Internet would establish moral standards.
A. Pornography online is more harsh than any other
media.
1. The material out there is highly perverse and
sickening.
2. Some is not only illegal, but focuses on
children.
B. Many industries face problems from illegal activity
online.
1. Floods of copyrighted material are illegally
published online.
2. Innocent fans face problems for being good fans.
IV. Online pornography is easily and illegally accessible
to minors.
A. In Michigan, anyone can access anything in
cyberspace for free.
1. Mich-Net offers most of Michigan access with a
local call.
2. The new Communications Decency Act could
terminate Mich-net.
B. BBS's offer callers access to adult material
illegally.
1. Most BBS operators don't require proof of age.
2. Calls to BBS's are undetectable to a child's
parents.
V. Conclusion.
Improving Cyberspace
"People don't inadvertently tune into alt.sex.pedophile
while driving to a Sunday picnic with Aunt Gwendolyn" (Huber).
For some reason, many people believe this philosophy and
therefore think the Internet and other online areas should not be
subject to censorship. The truth is, however, that computerized
networks like the Internet are in desperate need of regulations.
People can say, do, or create anything they wish, and as America
has proved in the past, this type of situation just doesn't work.
Though governments cannot physically regulate the Internet,
cyberspace needs regulations to prevent illegal activity, the
destruction of morals, and child access to pornography.
First, censoring the online community would ease the tension
on the computer software industry. Since the creation of the
first computer networks, people have been exchanging data back
and forth, but eventually people stopped transferring text, and
started sending binaries, otherwise known as computer programs.
Users like the idea; why would someone buy two software packages
when they could buy one and trade for a copy of another with a
friend? This philosophy has cost the computer industry millions,
and companies like Microsoft have simply given up. Laws exist
against exchanging computer software; violators face up to a
$200,000 fine and/or five years imprisonment, but these laws are
simply unenforced. Most businesses are violators as well.
Software companies require that every computer that uses one of
their packages has a separate license for that software
purchased, yet companies rarely purchase their required minimum.
All these illegal copies cost computer companies millions in
profits, hurting the company, and eventually hurting the American
economy.
On the other hand, many people believe that the government
cannot censor the Internet. They argue that the Internet is an
international network and that one government should not have the
power to censor another nation's telecommunications. For
example, American censors can block violence on American
television, but they cannot touch Japanese television. The
Internet is open to all nations, and one nation cannot appoint
itself police of the Internet. Others argue that the design of
the Internet prohibits censorship. A different site runs every
page on the Internet, and usually the location of the site is
undetectable. If censors cannot find the site, they can't shut
it down. Most critics believe that America cannot possibly
censor the Internet.
Indeed, the American government can censor the Internet.
Currently, the National Science Foundation administers all
Internet addresses, such as web addresses. The organization
could employ censors, who would check every American site
monthly. Any site the censors find with illegal material could
immediately lose its address, thus shutting down the site.
Some might complain about cost, but if the government raised the
annual price to hold an address from a modest $50 to say $500,
they could easily afford to pay for the censors. This would not
present a problem, because mostly businesses own addresses; it
would not affect use by normal people. For example,
microsoft.com is the address for Microsoft, but addresses like
crandall.com just do not exist. Bulletin Board Systems (BBS's)
are another computer media in need of censorship. Like the
Internet, some spots contain hard core pornography, yet some have
good content. Operators usually orient their BBS's for the local
community, but some operators open their system to users across
the world. The government can shut down a BBS if it transfers
illegal material across a state border according to federal law.
As a postal worker in Tennessee showed, shutting down a BBS with
illegal pornography is an easy process. When he called a BBS in
California and found illegal child pornography, he called his
local police. Two days later the police had closed the BBS and
Robert Thomas was awaiting prosecution in a Tennessee jail
(Elmer-Dewitt). If the government were to employ censors like
that postal worker, thousands of BBS's transmitting illegal
material across state borders could be shut down immediately.
Secondly, censoring cyberspace would help establish moral
standards. According to a local survey, 83% of adults online
have downloaded pornographic material from a BBS. 47% of minors
online have downloaded pornographic material from a local BBS
(Crandall). In another world wide survey, only 22% of 571
responders thought the Internet needed regulation to prevent
minors from obtaining adult material (C|Net). Obviously,
something is wrong with America's morals. A child cannot walk
into a video store and walk out with X-rated movies. A minor
cannot walk out of a bookstore with a copy of Playboy. Why can
children sit in the privacy of their home and look at
pornographic material and we do nothing about it? It is time
America does something to establish moral standards.
Certainly, people accepted the fact that pornography exists
many years ago. In addition, however, they set limits as to how
far pornography could go, yet cyberspace somehow snuck past these
limits. Just after the vote on the Exon bill, Senator Exon said
"I knew it was bad, but when I got out of there, it made Playboy
and Hustler look like Sunday-School stuff" (Elmer-Dewitt). He
was talking about the folder of images from the Internet he
received to show the Senate just before the vote. An hour later,
the vote had passed 84 to 16. Demand drives the market, and it
focuses on images people can't find in a magazine or video.
Images of "pedophilia (nude photos of children), hebephilia
(youths) and what experts call paraphilia -- a grab bag of
'deviant' material that includes images of bondage,
sadomasochism, urination, defecation, and sex acts with a
barnyard full of animals" (Elmer-Dewitt) floods cyberspace. Some
wonder how much of this is available; a Carnegie Mellon study
released last June showed that the Internet transmitted 917,410
sexually explicit pictures, films, or short stories over the 18
months of the study. Over 83% of all pictures posted on USENET,
the public message center of the Internet, were pornographic
(Elmer-Dewitt). What happened to our Information Superhighway?
Is this what we are fighting to put into our schools?
Furthermore, illegal material other than pornography is
making its way online. When companies such as Paramount and FOX
realized they were loosing money because they were not online,
they took action. They realized that people make money online
just like they do on television. Several people make fan pages
with sound and video clips of their favorite television programs.
When companies heard of this, they wanted to do it themselves,
and sell advertising positions on their pages like with
television. Now these companies are pushing for court orders to
shut down these fan pages due to copyright infringement
(Heyman 78). If someone censored these pages for copyrighted
material in the first place, neither the company nor the owner of
the page would waste time and money in these legal matters. Now,
the company can sue the owner of the page for copyright
infringement. All this because some Star Trek fan wanted to
share some sound clips with other fans.
Most important, online pornography is easily accessible to
minors. What are parents to do? Usually it is the child in the
family who is computer literate. If the child were accessing
pornographic material with computers, odds are the parents would
never know. Even if the parents are computer literate, children
can find it, even without looking for it. When 10 year old
Anders Urmachen of New York City hangs out with other kids in
America On Line's Treehouse chat room, he has good clean fun.
One day, however, he received an e-mail message with a
file and instructions on how to download it, and he did. When he
opened the file, 10 clips of couples engaged in heterosexual
intercourse appeared on the screen. He called his mother who
said, "I was not aware this stuff was online, children should not
be subject to these images" (Elmer-Dewitt). Poor Anders Urmachen
didn't go looking for pornography; it snuck up on him, and as
long as America allows it to happen, parents are going to have to
accept the chance that their children may run into that stuff.
In addition, for several years the people of Michigan have
enjoyed access to the Internet through the state funded program
called Mich-Net. The program offers the public free access to
the Internet, along with schools throughout the state. On the
other hand, the Mich-Net program has one flaw. The program gives
anonymity, allowing anyone, of any age, to access anything on the
Internet. According to the new Communications Decency Act, which
Clinton signed into law February 8, 1996, the government could
terminate the entire Mich-Net program because a minor can access
pornography through it. This would be a huge loss to the state
of Michigan and its schools. If we were to censor the Internet,
minors wouldn't be able to access the material, and the program
would have no problems.
Furthermore, BBS's offer minors adult material at no cost.
While some BBS's only offer adult material to adults, others
make access very simple. Some simply say "Type YES if you are
over 18." This is simply unexplainable and unacceptable. Others
require a photocopy of a driver's license showing the user is over
18, and other operators even require meeting their users. If all
it takes to access adult material is hitting three keys, what is
stopping children from doing so? Most young children do not have the
ability to decide where they should go and where they should not.
If it is available, they are going to want to see what it is. To
extend the problem further, these BBS's are usually undetectable
to a child's parents. Most BBS's are local phone calls, and are
free; the parents will never know if the child is accessing it.
For example, the Muskegon area has about 15 BBS's running 24
hours daily. Of these 15, about five operators devote their BBS
to adult material. Of these five, only one BBS requires that the
user meet the operator before receiving access, while three of
the boards simply ask for a photocopy of a driver's license. The
fifth board has no security whatsoever, and anyone can access
anything. None of the five boards charge for access. This is
simply unacceptable; we cannot let children access adult material
in this manner.
Every day thousands of children tune into sex in cyberspace.
We do not subject our children to sex on television or other
media, and even if we do, parents have ways to block it. Yet we
allowed computers to slip through the grips of parents.
Censoring the online community will also strengthen the computer
industry and eventually our economy. The longer we wait, the
more we hurt ourselves; let's regulate cyberspace before it is
too late.
Works Cited
C|Net. Survey Internet: 29 July 1995.
Crandall, Jason. Survey Muskegon, Michigan: 29 Jan. 1996.
Elmer-Dewitt, Philip. "On a Screen Near You: Cyberporn." Time
3 July 1995: Proquest.
Heyman, Karen. "War on the Web." Net Guide Feb. 1996: 76-80.
Huber, Peter. "Electronic Smut." Forbes 31 July 1995: 110.
f:\12000 essays\sciences (985)\Computer\In the Name of Malace or for Business.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IN THE NAME OF BUSINESS OR FOR MALICE
A look into the computer virus
by, Michael Ross
Engineering 201.02
January 22, 1997
Most of us swap disks with friends and browse the Net looking for downloads. Rarely do we ever consider that we are also exchanging files with anyone and everyone who has ever handled them in the past. If that sounds like a warning about social diseases, it might as well be. Computer viruses are every bit as insidious and destructive, and they come in a vast variety of strains. A computer virus can tear up your hard drive and bring down your network. However, computer viruses are almost always curable once diagnosed, and cures for new strains are usually just a matter of days, not months or years, away.
A virus is a program that "infects" computer files (usually other executable programs) by inserting copies of itself into those files. This is usually done in such a manner that the copies will be executed when the file is loaded into memory, allowing them to infect still other files, and so on. Viruses often have damaging side effects, sometimes intentionally, sometimes not. (Microsoft Encarta 1996)
Most viruses are created out of curiosity. Viruses have always been viewed as well-written, creative products of software engineering. I admit there are many out there who create them out of malice, but far more people are simply meeting a challenge in software design. The people who make anti-virus software have much more to gain from the creation of new viruses; this is not a slam, just an observation. A common type of virus is the Trojan Horse, a destructive program disguised as a game, a utility, or an application. When run, a Trojan Horse does something devious to the computer system while appearing to do something useful (Microsoft Encarta, 1996). A worm is also a popular type of virus. A worm is a program that spreads itself across computers, usually by spawning copies of itself in each computer's memory. A worm might duplicate itself in one computer so often that it causes the computer to crash. Sometimes written in separate "segments," a worm is introduced secretly into a host system either for "fun" or with intent to damage or destroy information. The term "worm" comes from a science-fiction novel (Microsoft Encarta 1996).
Some viruses destroy programs on computers, although the better-written ones do not. Most virus authors incorporate code that destroys data only after the virus determines that certain criteria have been met, such as a date or a certain number of replications. Many viruses do not do a good job of infecting other programs and end up corrupting the program they are trying to infect or making it completely unusable. The purpose of a virus, in many cases, is to infect as many files as possible with little or no noticeable difference to the user.
How does a virus scanner work?
Most virus scanners use a very simple method: searching for a particular sequence of bytes that makes each virus unique, like a DNA sequence. When a new virus is discovered, a fairly long sequence of bytes from it is inserted into the anti-virus software's database; that's why you need to keep the database updated. Any virus scanner you buy should handle at least three tasks: virus detection, prevention, and removal. Some virus scanners use a method called heuristic scanning. They use "rules of thumb" to identify some viruses that have not even been added to the virus database yet. What are the rules of thumb? They are basic assembly-language clues that make a file suspicious, such as a JMP instruction at the top of the file. No virus scanner is infallible, and anyone who tells you otherwise has no idea what they are talking about. The two best virus scanners, in my opinion, are F-PROT and THUNDERBYTE. They use the heuristic method described above.
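As an illustration of the signature-matching idea described above, here is a minimal Python sketch of a scanner that looks for known byte sequences in files. The signature database entries are invented placeholders, not real virus signatures, and a real product such as F-PROT does far more than this (wildcards, heuristics, disinfection).

    # Minimal sketch of signature-based scanning. The "signatures" below are
    # made-up byte strings for illustration only, not real virus signatures.

    SIGNATURE_DB = {
        "Example.Virus.A": bytes.fromhex("deadbeef00414243"),   # hypothetical
        "Example.Virus.B": bytes.fromhex("9090e80000005b81"),   # hypothetical
    }

    def scan_file(path):
        """Return the names of any known signatures found in the file at 'path'."""
        with open(path, "rb") as f:
            data = f.read()
        return [name for name, sig in SIGNATURE_DB.items() if sig in data]

    if __name__ == "__main__":
        import sys
        for filename in sys.argv[1:]:
            hits = scan_file(filename)
            print(filename, "->", hits if hits else "clean")

A heuristic scanner works the same way at the top level, but instead of exact byte matches it scores files on suspicious traits (such as the JMP-at-entry clue mentioned above) and flags anything above a threshold.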
In conclusion, viruses are, and always will be, a part of the computing world. They have been around since programming began and will continue to thrive as long as computers are used. Technology will force us to adapt and to be aware that any information we place on a computer may not be safe.
References
Daley, James. "Deadly New Computer Viruses Want To Kill Your PC usability." http://www.headlines.yahoo.com/news/stories. Originally published in Computer Shopper, December 1996.
Microsoft Encarta 96. Reference material. Microsoft Corporation.
f:\12000 essays\sciences (985)\Computer\INTEGRATION OF UMTS AND BISDN IS IT POSSIBLE OR DESIRABLE.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTEGRATION OF UMTS AND B-ISDN - IS IT POSSIBLE OR DESIRABLE?
INTRODUCTION
In the future, existing fixed networks will be complemented by
mobile networks with similar numbers of users. These mobile
users will have identical requirements and expectations to the
fixed users, for on-demand applications of telecommunications
requiring high bit-rate channels. It will be necessary for
these fixed and mobile networks to interoperate in order to
pass data, in real time and at high speeds, between their
users.
But how far must this interoperation be taken? How much
integration of the fixed and mobile network structures is
needed? Here, a fixed network, B-ISDN, and a mobile network,
UMTS, under development at the same time, are examined to see
how well and closely they should work together in order to
meet expected user needs. Work already taking place on this is
discussed.
BACKGROUND
The Universal Mobile Telecommunication System (UMTS), the third
generation of mobile networks, is presently being specified as
part of the European RACE technology initiative. The aim of
UMTS is to implement terminal mobility and personal mobility
within its systems, providing a single world mobile standard.
Outside Europe, UMTS is now known as International Mobile
Telecommunications 2000 (IMT2000), which replaces its previous
name of Future Public Land Mobile Telecommunication System
(FPLMTS). [BUIT95]
UMTS is envisaged as providing the infrastructure needed to
support a wide range of multimedia digital services, or
teleservices [CHEU94], requiring channel bit-rates of less
than the UMTS upper ceiling of 2 Mbits/second, as allocated to
it in the World Administrative Radio Conference (WARC) '92
bands. UMTS must also support the traditional mobile services
presently offered by separate networks, including cordless,
cellular, paging, wireless local loop, and satellite services.
[BUIT95] Mobile teleservices requiring higher bit rates, from
2 to 155 Mbits/second, are expected to be catered for by
Mobile Broadband Services (MBS), the eventual successor to
UMTS, which is still under study. [RACED732]
Broadband Integrated Services Digital Network (B-ISDN),
conceived as an all-purpose digital network that will
supersede Narrowband ISDN (N-ISDN or ISDN), is also still
being specified. B-ISDN, with its transport layer of
Asynchronous Transfer Mode (ATM) is expected to be the
backbone of future fixed digital networks. [MINZ89]
It is anticipated that, by the year 2005, up to 50% of all
communication terminals will be mobile. [CHEU94] The Mobile
Green Paper, issued by the European Commission in 1994,
predicts 40 million mobile users in the European Union by
2000, rising to 80 million by 2010. This gives mobile users an
importance ranking alongside fixed-network users. [BUIT95]
One result of this growth in mobile telecommunications will be
the increase in teleservice operations that originate in
either the fixed or mobile network, but terminate in the
other, crossing the boundary between the two. UMTS is expected
to be introduced within the next ten years, and integration
with narrowband and broadband ISDN is possible in this time.
Interoperability between UMTS and ISDN in some fashion will be
necessary to support the interoperability between the fixed
and mobile networks that users have already come to expect
with existing mobile networks, and to meet the expectation of
consistency of fixed/mobile service provision laid out in the
initial RACE vision. [SWAI94]
One way of making UMTS attractive to potential customers is to
offer the same range of services that B-ISDN will offer,
within the bounds of the lower 2 Mbits/second ceiling of UMTS.
[BUIT95]
So, with the twin goals of meeting existing expectations and
making UMTS as flexible as possible to attract customers, how
closely integrated must UMTS be with B-ISDN to achieve this?
ALTERNATIVES FOR INTEGRATING UMTS WITH OTHER NETWORKS
The UMTS network could be developed along one of the following
alternative integration paths:
1. Developing an 'optimised' network structure and signalling
protocols tailored for the special mobile requirements of
UMTS. This would be incompatible with anything else. Services
from all fixed networks would be passed through via gateways.
This design-from-scratch method would result in highly
efficient intra-network operation, at the expense of highly
inefficient inter-network operation, high development cost,
scepticism relating to non-standard technology, and slow
market take-up. True integration with fixed networks is not
possible in this scenario.
Given the drawbacks, this is not a realistic option, and it
has not been considered in depth. One of the RACE goals was to
design UMTS not as a separate overlay network, but to allow
integration with a fixed network; this option is undesirable.
[BUIT95]
2. Integration with and evolution from the existing Global
System for Mobile telecommunication. (GSM, formerly standing
for Groupe Spécial Mobile during early French-led specification,
is now taken as meaning Global System for Mobile
communications by the non-French-speaking world.) GSM is
currently being introduced on the European market.
This option has the advantage of using already-existing mobile
infrastructure with a ready and captive market, but at the
expense of limiting channel bit-rate considerably, which in
turn limits the services that can be made available over UMTS.
Some of the technical assumptions of UMTS, such as advanced
security algorithms and distributed databases, would require
new protocols to implement over GSM. GSM would be limiting the
capabilities of UMTS. [BROE93a]
3. Integration with N-ISDN. Like the GSM option above, this
initially limits UMTS's channel bit-rate for services, but has
a distinct advantage over integration with B-ISDN - N-ISDN is
widely available, right now. However, integrating UMTS and
N-ISDN would require effective use of the intelligent network
concept for the implementation of mobile functions, and
modification to existing fixed network protocols to support
mobile access.
Integrating UMTS with N-ISDN makes possible widespread early
introduction and interoperability of UMTS in areas that do not
yet have B-ISDN available. This allows wider market
penetration, as investment in new B-ISDN equipment is not
required, and removes the dependency of UMTS on successful
uptake of B-ISDN for interoperability with fixed networks.
Eventual interoperability with B-ISDN, albeit with
constrictions imposed on UMTS by the initial N-ISDN
compatibility, is not prevented. [BROE93a]
4. Integration with B-ISDN. This scenario was the target of
MONET (MObile NETwork), or RACE Project R2066. Unlike the
above options, B-ISDN's high available bandwidth and feature
set does not impose limitations on the service provisioning in
UMTS. Fewer restrictions are placed on the possible uses and
marketability of UMTS as a result. Development of B-ISDN is
taking place at the same time as UMTS, making smooth
integration and adaptation of the standards to each other
possible.
For these reasons, integration of UMTS with B-ISDN has been
accepted as the eventual goal for interoperability of future
fixed and mobile networks using these standards, and this
integration has been discussed in depth. [BROE93a, BROE93b,
BUIT95, NORP94]
At present, existing B-ISDN standards cannot support the
mobile-specific functions required by a mobile system like
UMTS. Enhancements supporting mobile functions, such as call
handover between cells, are needed before B-ISDN can act as
the core network of UMTS.
Flexible support of fixed, multi-party calls, to allow B-ISDN
to be used in conferencing and broadcasting applications, has
many of the same requirements as support for mobile switching,
so providing common solutions to allow both could minimise the
number of mobile-specific extensions that B-ISDN needs.
As an example of how B-ISDN can be adjusted to meet UMTS's
needs, let's look at that mobile requirement for support for
call handover. Within RACE a multiparty-capable enhancement of
B-ISDN, upwardly compatible with Q.2931, has already been
developed, and implementing UMTS with this has been studied.
For example, a UMTS handover can be handled as a multi-party
call, where the cell the mobile is moving to is added to the
call as a new party, and the old cell is dropped as a party
leaving the call, using ADD(_party) and DROP (_party)
primitives. Other mobile functions can be handled by similar
adaptations to the B-ISDN protocols.
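The following minimal sketch, written in Python rather than real Q.2931 signalling, is only meant to illustrate the add-then-drop idea described above: the target cell joins the call as a new party before the old cell is removed, so the call is never left without a serving cell. The class, method and cell names are invented for illustration and do not correspond to actual B-ISDN protocol elements.

    # Illustrative sketch only (not Q.2931/B-ISDN signalling code): a UMTS
    # handover expressed with multi-party call primitives.

    class MultiPartyCall:
        def __init__(self, call_id, initial_party):
            self.call_id = call_id
            self.parties = {initial_party}

        def add_party(self, party):      # stands in for the ADD(_party) primitive
            self.parties.add(party)

        def drop_party(self, party):     # stands in for the DROP(_party) primitive
            self.parties.discard(party)

    def handover(call, old_cell, new_cell):
        """Handover modelled as add-then-drop, so the call always has a serving cell."""
        call.add_party(new_cell)    # new cell joins the call as a party
        # ... the radio link is switched to the new cell here ...
        call.drop_party(old_cell)   # old cell leaves the call as a departing party

    call = MultiPartyCall("call-42", "cell-A")
    handover(call, "cell-A", "cell-B")
    print(call.parties)   # -> {'cell-B'}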
The enhancements to B-ISDN Release 2 and 3 that are required
for UMTS support are minimal enough to be able to form an
integral part of future B-ISDN standards, without impacting on
existing B-ISDN work. [BUIT95]
These modifications only concern high-level B-ISDN signalling
protocols, and do not alter the transport mechanisms. The
underlying ATM layers, including the ATM adaptation layer
(AAL) are unaffected by this.
THE INTELLIGENT NETWORK
The Intelligent Network (IN) is a means for service providers
to create new services and rapidly introduce them on existing
networks. As the IN was considered useful for implementing
mobility procedures in UMTS, it was studied as part of MONET,
and is now specified in the Q.1200 series of the ITU-T
recommendations.
The intelligent network separates service control and service
data from basic call control. Service control is then
activated by 'trigger points' in the basic call. This means
that services can be developed on computers independent of the
network switches responsible for basic call and connection
control. This gives flexibility to the network operators and
service providers, as well as the potential to support the
services on any network that supports the trigger points.
Eventually, IN can be expanded to control the network itself,
such as handling all UMTS mobile functions. [BROE93a]
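To illustrate the separation just described, the sketch below keeps service logic in a table held outside of "basic call control", which does nothing except fire named trigger points. All of the names here (the trigger point, the service, the numbers) are invented for illustration and do not correspond to the actual Q.1200 service set.

    # Illustrative sketch of the Intelligent Network idea: call control only
    # fires trigger points; service logic lives outside the switch.

    service_logic = {}   # trigger point name -> list of service handlers

    def register_service(trigger, handler):
        service_logic.setdefault(trigger, []).append(handler)

    def fire_trigger(trigger, call_context):
        """Called by basic call control, which knows nothing about the services."""
        for handler in service_logic.get(trigger, []):
            handler(call_context)

    # A service provider adds a new service without touching call control:
    def number_translation(ctx):
        if ctx["dialled"] == "0800-EXAMPLE":        # invented example number
            ctx["routed_to"] = "+44-1234-567890"    # invented routing target

    register_service("collected_info", number_translation)

    ctx = {"dialled": "0800-EXAMPLE"}
    fire_trigger("collected_info", ctx)   # basic call control hits a trigger point
    print(ctx)

The point of the design is visible in the sketch: a new service is just another handler registered against an existing trigger point, so it can be developed and deployed independently of the switches.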
Any network supporting the intelligent network service set
will be able to support new services using that service set
easily, making integration of networks easier and transparent
to the user of those services. The intelligent network is thus
an important factor in the integration of B-ISDN and UMTS.
UMTS, B-ISDN and the intelligent network set are all being
developed at the same time, allowing each to influence the
others in producing a coherent, integrated whole. [BUIT95]
CONCLUSION
In order to be accepted by users as useful and to provide as
wide a variety of services as possible, UMTS needs some form
of interoperability or integration with a fixed network.
Integration of UMTS with B-ISDN offers the most flexibility in
providing services when compared to other network integration
options, and constrains UMTS the least.
With the increase in the number of services that will be made
available in UMTS and B-ISDN over present standalone services,
it is unrealistic to develop two separate, and incompatible,
versions of each service for the fixed and mobile networks.
Integrating UMTS and B-ISDN makes the same service set
available to both sets of users in the same timescale,
reducing development costs for the services, and promoting
uptake and use in the market. The intelligent network concept
allows the easy provision of additional services with little
extra development cost. Integrating UMTS with B-ISDN, and with
the intelligent network set, is therefore desirable.
Work on this integration indicates that the mobile
requirements of UMTS can be met by extending existing B-ISDN
signalling to handle them, without significantly modifying
B-ISDN. Integration of UMTS with B-ISDN is therefore
technically feasible.
REFERENCES
[BROE93a]
W. van den Broek, A. N. Brydon, J. M. Cullen, S. Kukkonen,
A. Lensink, P. C. Mason, A. Tuoriniemi,
"RACE 2066: Functional models of UMTS and integration into
future networks",
IEE Electronics and Communication Engineering Journal, June
1993.
[BROE93b]
W. van den Broek and A. Lensink,
"A UMTS architecture based on IN and B-ISDN developments",
Proceedings of the Mobile and Personal Communications
Conference, 13-15 December 1993.
IEE Conference Publication 387.
[BUIT95]
E. Buitenwerf, G. Colombo, H. Mitts, P. Wright,
"UMTS: Fixed network issues and design options",
IEEE Personal Communications, February 1995.
[CHEU94]
J. C. S. Cheung, M. A. Beach and J. P. McGeehan,
"Network planning for third-generation mobile radio systems",
IEEE Communications Magazine, November 1994.
[MINZ89]
S. E. Minzer,
"Broadband ISDN and Asynchronous Transfer Mode (ATM)",
IEEE Communications Magazine, September 1989.
[NORP94]
T. Norp and A. J. M. Roovers,
"UMTS integrated with B-ISDN",
IEEE Communications Magazine, November 1994.
[RACED732]
IBC Common Functional Specification, Issue D.
Race D732: Service Aspects.
[SWAI94]
R. S. Swain,
"UMTS - a 21st century system: a RACE mobile project line
assembly vision"
END.
f:\12000 essays\sciences (985)\Computer\Internet addiction.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INVESTIGATIVE REPORT OF INTERNET ADDICTION
Prepared for
Dr. Jere Mitchum
By
Marwan
November 4, 1996
TABLE OF CONTENTS
LIST OF ILLUSTRATIONS
ABSTRACT
INTRODUCTION
    Purpose
    Growth Of The Internet
THE ADDICTION
    What Causes It
    Symptoms
    How To Overcome The Addiction
        The Elements Of Any Addiction
CONCLUSION
    One Last Interesting Question
REFERENCES
LIST OF ILLUSTRATIONS
Figures
1. The number of networks connected to the Internet vs. Time.
2. The percentage of the Internet domains
3. Will the equation people = Internet Users be true in 2001?
ABSTRACT
Investigative Report of Internet Addiction
The problem of Internet addiction is not very noticeable now, and that is why not many people are taking it seriously, but what these people are failing to see is the connection between the very rapid growth of the Internet and the addiction problem. The logic is simple: the bigger the Internet gets, the more users there will be, which will lead to a bigger number of addicts whose lives, as well as the lives of others, can be corrupted by this behavior. The main objective of this paper is to make sure that all readers know and understand what Internet addiction is and how it can be solved or avoided. I cannot offer a professional psychiatric solution, but I believe that the more a person knows about the addiction, the better chance they have to help themselves as well as others; that is why I have included a short summary of the elements of addiction.
I hope that by the time you finish reading this paper you will have a better understanding of this issue, and that it will keep you, as well as others, from taking Internet addiction lightly.
INTRODUCTION
Purpose
The purpose of this paper is to make you, the reader, alert and more aware of the newest type of addiction, Internet addiction. Many people would call it an exaggeration to classify spending a lot of time on the Internet as an addiction, but since the subject is fairly new, not everybody is taking it as seriously as they should.
Growth of the Internet
I am sure that everybody knows what the Internet is and has used it at least a couple of times, so there is no need for me to explain it here. However, the incredible growth of the size and technology of the Internet is a fact well worth mentioning.
Ever since the Internet was commercially introduced to the public late in 1989, the number of networks that form the Internet has been increasing exponentially. As you can see in Figure 1, in the United States a new network is connected to the Internet every 30 minutes.
Figure 1 Number of Networks connected (Source: ftp://nic.merit.edu/statistics/nsfnet)
Not all these networks are commercial; some are educational, some belong to organizations, and some are simply networks that provide Internet services. All these different kinds of networks can be identified on the Internet by their domain extension, or in other words the last three letters of the address (e.g. http://www.arabia.com is a commercial site because of the .com). Figure 2 shows the percentage of each of the four major domains, and it is obvious that the biggest share goes to the commercial domains. It does not take a genius to figure out that since the Internet has attracted that much commercial interest, more and more people are using the Internet, and even more are willing to spend time and money on it.
Figure 2 (Source of data: http://www.nw.com)
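The idea that the domain extension identifies the kind of network can be shown with a small sketch; the addresses below (other than www.arabia.com and www.nw.com, which appear in the text) are made up, and the classification table is only illustrative.

    # Sketch: classify addresses by their domain extension (the letters after the last dot).
    from urllib.parse import urlparse

    KIND = {"com": "commercial", "edu": "educational",
            "org": "organization", "net": "network provider"}

    def domain_kind(url):
        host = urlparse(url).hostname or ""
        extension = host.rsplit(".", 1)[-1]
        return KIND.get(extension, "other")

    for url in ["http://www.arabia.com", "http://www.mit.edu", "http://www.nw.com"]:
        print(url, "->", domain_kind(url))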
THE ADDICTION
With such vast growth of the Internet, what is now considered a small problem can grow along with the Internet and cause an even bigger problem. In a recent publication in the Los Angeles Times, Matthew McAllester reported on a survey conducted on the Internet by Victor Brenner, which produced the following results: "17% said that they spend more than 40 hours a week online, 31% said that their work performance had deteriorated since they started using the Internet, 7% got "into hot water" with their employers or schools for Internet related activities" (LA Times, 5/5/1996, pp. A-18). However, Brenner acknowledges that his survey is unscientific in many ways; respondents are self-selected and many may be Internet researchers. On the other hand, Dr. Kimberly Young from the University of Pittsburgh at Bradford conducted a more accurate survey that included 396 men and women. In her view, the heavy on-line users in her study all met the psychiatric criteria for clinical dependence applied to alcoholics and drug addicts. They had lost control over their Net usage and could not end it despite harmful effects on their personal and professional lives.
What Causes It
Finding a reason for Internet addiction can be as hard as finding a reason for smoking addiction; however, there are a couple of reasons that are obvious for some addicts:
* The power of instant access to all sorts of information and all kinds of people is a positive that can be overused.
* A different kind of community that can draw people who tend to "shy out" in the real world, because this new virtual community does not require the social skills that real life does; all you have to do is be good on the keyboard.
* Adopting new personas and playing your favorite kind of personality is not hard when others cannot see or hear you.
* Last but not least is the fascination with technology. This might be the best excuse (if there is such a thing) to be addicted to the Internet, the information superhighway, or cyberspace.
Symptoms
When I was trying to collect more information about the symptoms of Internet addiction, I was surprised to find out that almost half of the sites I visited treated Internet addiction as a joke. So, as part of the research, I decided to give you the top ten signs that you may be addicted to the Internet:
10. You wake up at 3 a.m. to go to the bathroom and stop and check your e-mail on the way back to bed.
9. You get a tattoo that reads "This body best viewed with Netscape Navigator 2.0 or higher."
8. You write down your URL when asked for your home address.
7. You turn off your modem and get this awful empty feeling, like you just pulled the plug on a loved one.
6. You spend half of the plane trip with your laptop on your lap...and your child in the overhead compartment.
5. Your home page sees more action than you do.
4. You start to notice how much this list describes you.
3. People ask why you turn your head to the side when you smile, i.e. :-) .
2. The last girl you picked up was a JPEG image.
1. Your modem burns up. You haven't logged in for two hours. You start to twitch. You pick up the phone and manually dial your service provider access number. You try to hum to communicate with the network. You succeed !!
On the more serious side, an Internet-based support group for people who suffer from Internet addiction, called the Internet Addiction Support Group (IASG), has defined Internet Addiction Disorder (IAD) as the following:
A maladaptive pattern of Internet use, leading to clinically significant impairment or distress as manifested by three (or more) of the following, occurring at any time in the same 12-month period:
(I) tolerance, as defined by either of the following:
(A) A need for markedly increased amounts of time on Internet to achieve satisfaction.
(B) markedly diminished effect with continued use of the same amount of time on Internet.
(II) withdrawal, as manifested by either of the following :
(A) the characteristic withdrawal syndrome
(1) Cessation of (or reduction) in Internet use that has been heavy and prolonged.
(2) Two (or more) of the following, developing within several days to a month after Criterion 1:
(a) psychomotor agitation.
(b) anxiety.
(c) obsessive thinking about what is happening on Internet.
(d) fantasies or dreams about Internet.
(e) voluntary or involuntary typing movements of the fingers.
(3) The symptoms in Criterion 2 cause distress or impairment in social, occupational or another important area of functioning.
(B) Use of Internet or a similar on-line service is engaged in to relieve or avoid withdrawal symptoms.
(III) Internet is often accessed more often or for longer periods of time than was intended.
(IV) There is a persistent desire or unsuccessful efforts to cut down or control Internet use.
(V) A great deal of time is spent in activities related to Internet use (e.g., buying Internet books, trying out new WWW browsers, researching Internet vendors, organizing files of downloaded materials.)
(VI) Important social, occupational, or recreational activities are given up or reduced because of Internet use.
(VII) Internet use is continued despite knowledge of having a persistent or recurrent physical, social, occupational, or psychological problem that is likely to have been caused or exacerbated by Internet use (sleep deprivation, marital difficulties, lateness for early morning appointments, neglect of occupational duties, or feelings of abandonment in significant others.)
(Source: John Suler, Ph.D. - Rider University May 1996
http://www1.rider.edu/~suler/psycyber/SUPPORTGP.HTML)
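The "three (or more) of the following ... in the same 12-month period" rule above boils down to a simple count. As an illustration only, here is a small sketch with invented answers; the criterion names are shorthand for the items listed above, and this is in no way a diagnostic tool.

    # Sketch of the "three or more criteria" threshold, with made-up answers.
    criteria_met = {
        "tolerance": True,
        "withdrawal": False,
        "used_longer_than_intended": True,
        "persistent_desire_to_cut_down": True,
        "much_time_on_related_activities": False,
        "activities_given_up": False,
        "continued_despite_problems": False,
    }

    count = sum(criteria_met.values())
    print("criteria met:", count,
          "- meets the IAD threshold" if count >= 3 else "- below the threshold")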
How To Overcome The Addiction
Now that the problem has been established and given a fancy abbreviation (IAD), the next question is what to do about it. Several groups of people have created support groups dedicated to helping people who suffer from IAD. Some of the most famous are the IASG, which can be reached by e-mail at listserv@netcom.com, and the Webaholics support group, which can be reached at http://www.webaholics.com. However, the main key to getting rid of, or even avoiding, any type of addiction is to understand the basic elements of the addiction. Once you understand these elements you will have a better chance of overcoming the addiction or even not getting it at all.
The elements of addiction are :
(I) Denial
All people who are addicted (to anything) have some degree of denial. Without denial, most addictions would not have become established in the first place.
Denial can take many forms. At the milder extremes, a person may believe "I can handle this problem whenever I decide to do so." The fact that one has a problem is at least acknowledged. At the other extreme, denial often takes the form of: "What problem? I don't have a problem. You've got the problem, Dude. And besides, you're beginning to tick me off!"
(II) Failing to Ask for Help
The second trademark of most addictions is that people affected are very reluctant to ask for help. The mindset of most addicts is: "I can beat this myself." Not only are they reluctant to ask other people for help, but even when they do, they don't accept the advice of others easily.
The best thing to do is to look for individuals or professionals who know how to cure addicted people. While these resource people are rare, you should keep looking for them. If you hook up with someone who claims to have this ability, look at your results and don't hang around too long with this person if you don't see yourself making progress. Keep looking for the right experienced helper and you will eventually find one that works well with you.
(III) Lack of Pleasure in Other Activities
One thing that is true about most addictions is they are often either the only or the strongest source of pleasure and satisfaction in a person's life.
People who become addicted often do so because their lives are not fulfilling. They can't seem to find passion, enjoyment, adventure, or pleasure from life itself, so they have to get these pleasures in other ways.
This becomes important when you try to end your addiction. If you try to eliminate your main source of pleasure in life without being able to replace it immediately with other sources of pleasure, it is doubtful you will be able to stay away from your addictive behaviour very long.
(IV) Underlying Deficiencies in Other Aspects of Life
Addiction should never be viewed as a problem in and of itself. Addictions are much better viewed as a symptom of other underlying problems and deficiencies. This is why most addiction therapies are so universally unsuccessful.
To cure most addictions, you must look beyond the addiction itself and deal with underlying deficiencies in coping and life management skills that have given rise to it.
For example, people who become addicted to alcohol and other drugs usually have serious deficiencies in their life management, stress management, and interpersonal skills. Early on in life, they experience a great deal of pain and personal suffering that they can't figure out how to deal with effectively. This drives them to seek external relief and comfort in the form of alcohol or other substances. As this pattern of behaviour gets repeated over time, their bodies become physically addicted to the chemical substance, and the addiction then becomes even more difficult to end.
The same is true for cigarette addiction. Many people find that smoking helps them cope with stress or keep their weight under control. Even if they are successful at beating the physical part of cigarette addiction, they often quickly return to smoking because they fail to improve their repertoire of coping skills.
So if you are trying to deal with the problem of Internet Addiction, or any addiction for that matter, you should ask yourself the following questions:
1. What stress management skills or life management skills do I lack that led me to become addicted?
2. What problems in life do I have that my addiction helps me to avoid or to "solve"?
3. What would I need to learn how to do in order to let go of my addictive behaviour?
4. What "benefits" or payoffs am I getting from my addictive behaviour?
(V) Giving in to Temptation
Once you decide to eliminate an established addiction, there are certain requirements and pitfalls you must be prepared for. One of these is dealing with temptation.
Whenever you try to stay away from something that previously gave you great pleasure, you're going to be tempted to return to that behaviour. Sometimes, the temptation may be very strong. But even if it is, you must be prepared to resist it.
Temptation, in truth, is nothing more than a powerful internal feeling state, i.e. a desire. It is often accompanied by thoughts as well that are designed to make you "cave in" and satisfy your intense internal cravings.
You, however, are always much stronger than any of your internal thoughts, feelings, or other internal states. You have the power to consistently ignore or to choose not to respond to your thoughts and demanding feelings. Thoughts and feelings have very little power at all (even though many people mistakenly "feel" that their thoughts and feelings are much more powerful than they are).
Once you take on the challenge of dealing with any addiction, you will need to marshal your ability to successfully deal with temptation. If you don't have a sense that you have this power to succeed, you can use your addiction as an opportunity to discover that you really do have this important capability.
(VI) Failing to Keep Your Word
In order to change any established habit, be it an addiction or not, you must be able to give your word to yourself and KEEP YOUR WORD NO MATTER WHAT HAPPENS. All behaviour change involves deciding what actions are needed to break the established pattern and then taking those actions on a consistent basis over time. This is just another way of saying "you must give your word to yourself every day that you will do this or that or not do this or that. Then you must keep your word, no matter what happens around you or what temptations or seductive excuses you encounter."
Many addiction treatment programs fail because addicts are not empowered to rehabilitate their ability to give and keep their word. Many addicts, experience has shown, are very accomplished liars. Their promises and statements to others often can't be trusted. And their ability to keep promises to themselves is similarly impaired.
Without the ability to give and keep your word, especially to yourself, you've got very little chance of curing any addiction. On the other hand, if you make this goal part of your overall game plan, you may be able to emerge from your addiction a stronger, healthier, and more trustworthy human being.
(VII) Failing to Do What May Be Necessary
Be very clear about this one important point: ALL ADDICTIONS CAN BE CURED AS LONG AS THE ADDICT AGREES TO DO WHATEVER MIGHT BE NECESSARY. One reason most addictions appear to be "incurable" is because people shy away from the types of actions that are often necessary.
What types of actions are these? Well, they can be numerous, diverse, and highly specific for any individual. They might include any or all of the following (using Internet Addiction as an example):
1. Setting an absolute schedule or time limit for how much time you spend on the Internet (see the short timer sketch below).
2. Forcing yourself to stay away from the Internet for several days at a time.
3. Placing self-imposed computer "blocks" on certain types of recreational programs, which include the web browser.
4. Setting an absolute policy for yourself of never signing on to the net at work (unless this is required for your study).
5. Establishing meaningful (but not harmful) consequences for yourself for failing to keep your word.
6. Applying these self-imposed consequences until you do regain your ability to keep your word consistently.
7. Forcing yourself to do other things instead of spending time on the net.
8. Resolving to learn how to derive other more healthy sources of pleasure in life to replace or even exceed the pleasure you got from being on the Internet.
9. Asking for help whenever you feel you are not being successful.
10. Avoiding people or environments that might encourage you to return to your addictive behaviour; this might be impossible in college, but it is still a good idea.
These are not the only actions that can be taken, but many of them will work for a majority of individuals. The point is that in order to cure an addiction, you've got to be willing to do things that may seem drastic or outrageous but not harmful to yourself or others.
So if you have a history of failing to make any type of desired behaviour change, all this may mean is that you weren't willing to do what is necessary. All addictions (and other dysfunctional behaviours) can ultimately be cured. It's just a matter of figuring out what specific actions will work (and will not cause you or others harm) and then executing those actions despite any thoughts or feelings you might have to the contrary.
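Item 1 in the list above mentions an absolute time limit. Purely as an illustration of what "absolute" might look like in practice, here is a tiny sketch; the two-hour figure and the variable names are made up, and nothing here is a recommended limit.

    # Sketch: warn when a self-imposed daily Internet time limit is exceeded.
    import time

    DAILY_LIMIT_SECONDS = 2 * 60 * 60       # example limit: two hours per day
    used_today = 0                          # seconds already spent online today
    session_start = time.time()

    def time_remaining():
        elapsed = time.time() - session_start
        return DAILY_LIMIT_SECONDS - used_today - elapsed

    if time_remaining() <= 0:
        print("Daily limit reached - log off and keep your word.")
    else:
        print("Minutes left today:", int(time_remaining() // 60))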
(VIII) Failing to Anticipate and Deal With Relapses
No matter how much initial success you have in eliminating an addiction, unintended relapses are just around the corner. Something unexpected might happen in your life or you might otherwise succumb to a moment of weakness.
Good addiction treatment plans anticipate that such relapses commonly occur and prepare individuals to deal with them successfully.
A relapse does not mean that you have failed in your efforts to cure yourself of an addiction. If you stay away from cigarettes for 3 months and then smoke again for two days in a row, you can view this as a "failure" if you want, or you can focus on the fact that of the last 92 days, you successfully abstained on 90 of them, which is nearly 98%. That's pretty good.
The trick is to keep 2 days from becoming 5 days, or 5 days from becoming 10 days, etc. Here you will need a game plan to keep an occasional relapse from triggering a return to the addiction.
Once you understand these elements, chances are you will not be an addict for long. And for those who have come close, I believe you are now smart enough not to get sucked in.
CONCLUSION
Internet addiction is a serious addiction that should not be taken lightly. It might not be life-threatening like some drug addictions, but it can be very harmful to a person's professional and personal life. The key to staying away from this addiction is to understand its elements and to have the willpower to resist all the temptations that the Internet might provide.
One Last Interesting Question
We all know that more and more people are gaining access to the Internet in some way or another, but not everybody has had the chance to look at Figure 3!
Figure 3. Will the equation people = Internet Users be true in 2001? (Source: ftp://nic.merit.edu/statistics/nsfnet)
REFERENCES
Elias, M. (7/7/1996), Net overuse called "true addiction", USA Today, pp. 1-A.
McAllester, M. (5/5/1996), Study says some may be addicted to the Net; Bulldog Edition, Los Angeles Times, pp. A-18.
Network Wizards, [online]
Available URL: http://www.nw.com/zone/
Rodgers, J. (1994), Treatments that work, Psychology Today, Vol. 27, pp. 34.
Young, K., Centre for On-line Addiction (COLA), [online]
Available URL: http://www.pitt.edu/~ksy/
Merit Network Inc., [online]
Available URL: ftp://nic.merit.edu/statistics/nsfnet/
f:\12000 essays\sciences (985)\Computer\internet beyond human control.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Internet Beyond Human Control
The Internet has started to change the way of the world during this decade. More homes,
companies, and schools are getting hooked online with the Internet during the past few years.
This change has started to become the new way of life present and future. The Internet system is
so advanced it is ahead of our time. This system is becoming predominately used everyday, but
every which way it works out this system ends up in a negative way.
The Internet System has started to migrate in many schools. The Schools that are hooked
online are mostly colleges. This is because the Internet is capable of flashing up pornographic
picture or comments at anytime. Also their is many different chat lines that consist of a lot of
profanity and violence. A majority of high school students are minors. This is why most colleges
are hooked up online to the Internet system. The government is trying to figure out ways to
police the Internet so this will not happen. The problem with that is it is a very hard task to do. It
is almost guaranteed this will not happen for another five to ten years.
Being hooked up online helps make high school easy to slide through. There is a student
at Chichester Senior High School that has a home computer hooked online with the Internet
system. So when he has a term paper due all he does is down load a term paper on the system
with the same topic. He just puts his name on the paper, hands it in, and receives an A. In return
when he hits college life he will not know how to write a term paper. This will cause him to drop
out. I know other students do the same thing he does. Now students will come out of high school
not well educated.
The Internet system is set up in a way we can give and receive mail. This mail is
called electronic mail usually known as e-mail. This mail will be sent to where you want it the
second you click send with the mouse. The regular U.S. mail takes two days if you are sending
mail from Philadelphia to Media. Now if you mail from coast to coast that could take up to two
weeks. When my parents went to Mexico for two weeks they tried to send me a postcard, but I
didn't receive it till the next day they came back. This could very well end up to become a
problem. Soon no one will even want to use U.S. mail. A big part of the government money is
from the costs of stamps. If no one uses U.S. mail ; there will be nobody buying stamps. Now
the government is not bringing the money they need. So for that the government must raise taxes.
The Internet system will start taking over a large amount of jobs. Through the Internet
anyone can buy items like CDs, tapes, or sheet music. Within a matter of five to ten years there
will not be any music stores in business. They will all be online through the Internet. The problem
is already happening. Out on the west coast a person can go grocery shopping on the Internet.
All a person has to do is go to the net search and type food stores. Than go to Pathmark or
Acme, make your list, and order the food on Visa. After that is all done go to the store you
ordered from, and pick up your groceries already pack in bags. This will eventually drift across to
the east coast. If this happens we are looking at the biggest percent of unemployment ever.
There is at least two hundred employees working in just one grocery store, and there millions of
grocery stores located in the U.S. That means their will be no need for any grocery stores to stay
in business. Where are all these employees going to go?
When a person registers for the Internet system, they have to give their full name and
address. If anyone wants to look up a person they can easily. All a person has to do is type
"finger: e-mail number" at the prompt. So nothing is confidential in the Internet system. If you
start talking to someone on a chat line he could know your address. If he was a serial killer or
burglar you are a easy target. If the person is a hacker he can find your social security number
and change your whole identity. The movie called "The Net" is a good example because that
could actually happen. That movie is about a woman who worked for a computer company. She
spent all of her freetime on the Internet chat lines. She had a hold of a disk that someone wanted
on that chatline. So she went away on vacation, and when she returned her whole identity was
changed. She was a completely different person.
The Internet system is so advanced nobody knows how to deal with the negativity. The
government should of never allowed this system to be released until their was some way to police
it. Now this could cause all of these problems and there is no way to deal with it happening. The
only way people will have jobs if they know computers. Soon there will have to be blockers in
the system to stop people from finding so much information about other people. If nothing
happens soon their will be nothing the government can do. If there is anyway the Internet could be
shut down till they find a way to solve these problems the Internet will work to our advantage.
f:\12000 essays\sciences (985)\Computer\Internet Censorship.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For centuries governments have tried to regulate materials deemed inappropriate or
offensive. The history of western censorship was said to have begun when Socrates was
accused "firstly, of denying the gods recognized by the State and introducing new
divinities, and secondly of corrupting the young." He was sentenced to death for these
crimes. Many modern governments are attempting to control access to the Internet. They
are passing regulations that restrict the freedom people once took for granted.
The Internet is a worldwide network that should not be regulated or censored by
any one country. It is a complex and limitless network which allows boundless possibilities
and would be affected negatively by the regulations and censorship that some countries are
intent on establishing. Laws that are meant for other types of communication will not
necessarily apply in this medium. There are no physical locations where communications
take place, making it difficult to determine where violations of the law should be
prosecuted. There is anonymity on the Internet, so ages and identities are not known;
this makes it hard to determine whether illegal activities are taking place with regard to people
under the legal age. As well, it is difficult to completely delete speech once it has been
posted, which means that distributing materials that are obscene or banned becomes easy.
The American Library Association (ALA) has a definition that states censorship is
"the change in the access status of material, made by a governing authority or its
representatives. Such changes include: exclusion, restriction, removal, or age/grade level
changes." This definition, however, has a flaw in that it only recognizes one form of
censorship: governmental censorship.
Cyberspace, a common name for the Net, has been defined by one author as being
"made up of millions of people who communicate with one another through computers." It
is also the "information stored on millions of computers worldwide, accessible to others
through telephone lines and other communication channels" that makes up what is known
as cyberspace. The same author went on to say that "the term itself is elusive, since it is not so
much a physical entity as a description of an intangible."
The complexity of the Internet is demonstrated through its many components. The
most readily identifiable part is the World Wide Web (WWW). This consists of web pages
that can be accessed through the use of a web browser. Web pages are created using a
simple markup language. Another easily identified section of the Internet is e-mail.
Once again it is a relatively user-friendly communication device. Some other less
publicized sections of the Internet include: Internet Relay Chat (IRC), which allows real
time chatting to occur among thousands of people, Gopher, which works similarly to the
WWW but for a more academic purpose, and File Transfer Protocol (FTP), which allows
the transfer of files from one computer to another. Another service that is not Internet but
is carried along with it in many instances is Usenet or News. In Usenet there are many
newsgroups which center their conversations on varied topics. For example,
rec.music.beatles would focus the discussion on the Beatles. This would be done through
posts or articles, almost like letters sent into a large pot where everyone can read and
reply. Many controversial newsgroups exist and they are created easily. It is possible to
transfer obscene and pornographic material through these newsgroups. There is no
accurate way to determine how many people are connected to the Internet because the
number grows so rapidly everyday. Figures become obsolete before they can be
published. "[The Internet] started as a military strategy and, over thirty years later, has
evolved into the massive networking of over 3 million computers worldwide". One of the
most prominent features of the young Internet was its freedom. It is "a rare example
of a true, modern, functional anarchy...there are no official censors, no bosses, no board of
directors, no stockholders". It is an open forum where the only thing holding anyone back
is a conscience. The Internet has "no central authority", and this makes it difficult
to censor. As a result of these characteristics and more, the Internet offers the potential for a true
democracy.
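Of the components listed above, FTP is perhaps the easiest to show in a few lines. The following sketch uses Python's standard ftplib module; the host name, directory and file name are placeholders, not real resources.

    # Sketch: retrieving a file with FTP, one of the Internet components described above.
    from ftplib import FTP

    with FTP("ftp.example.org") as ftp:          # connect to an (imaginary) FTP server
        ftp.login()                              # anonymous login
        ftp.cwd("/pub")                          # change to the public directory
        with open("readme.txt", "wb") as out:
            ftp.retrbinary("RETR readme.txt", out.write)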
The freedom of speech that was possible on the Internet could now be subjected to
governmental approvals. For example, China is attempting to restrict political expression,
in the name of security and social stability. It requires users of the Internet and e-mail to
register, so that it may monitor their activities. In the United Kingdom, state secrets and
personal attacks are off limits on the Internet. Laws are strict and the government is
extremely interested in regulating the Internet, especially concerning these issues. Laws intended for
other types of communication will not necessarily apply in this medium. Through all the
components of the Internet it becomes easy to transfer material that particular
governments might find objectionable. However, all of these ways of communicating on
the Internet make up a large and vast system. For inspectors to monitor every E-mail,
Webpage, IRC channel, Gopher site, newsgroup, and FTP site would be nearly impossible.
This attempt to censor the Internet would violate the freedom of speech rights that are
included in democratic constitutions and international laws. It would be a violation of the
First Amendment. The Constitution of the United States of America declares that
"Congress shall make no law respecting an establishment of religion,
or prohibiting the free exercise thereof; or abridging the freedom of
speech, or of the press; or the right of the people peaceably to
assemble, and to petition the Government for a redress of grievances"
Therefore it would be unconstitutional for any sort of censorship to occur on the Internet
and affiliated services. Despite being illegal, restrictions on Internet access and
content are increasing world-wide under all forms of government. In France, a country
where the press generally have a large amount of freedom, the Internet has recently been
in the spotlight.
"To enforce censorship of the Internet, free societies find that they become more
repressive and closed societies find new ways to crush political expression and opposition"
Vice-President Al Gore, while at an international conference in Brussels about the
Internet, in a keynote address said that "[Cyberspace] is about protecting and enlarging
freedom of expression for all our citizens...Ideas should not be checked at the border"
Another person attending that conference was Ann Breeson of the American Civil
Liberties Union, an organization dedicated to preserving many things including free
speech. She is quoted as saying "Our big victory at Brussels was that we pressured them
enough so that Al Gore in his keynote address make a big point of stressing the
importance of free speech on the Internet." Many other organizations have fought against
laws and have succeeded. A good example of this is the fight that various groups put on
against the recent Communication Decency Act (CDA) of the U.S. Senate. The Citizens
Internet Empowerment Coalition on February 26, 1996, filed a historic lawsuit in
Philadelphia against the U.S. Department of Justice and Attorney General Janet Reno to
make certain that the First Amendment of the U.S.A. would not be compromised by the
CDA. The plaintiffs alone, including American Booksellers Association, the Freedom to
Read Foundation, Apple, Microsoft, America Online, the Society of Professional
Journalists, the Commercial Internet eXchange Association, Wired, and HotWired, along
with thousands of netizens (citizens of the Internet), show the dedication that is felt by
many different people and groups to the cause of free speech on the Internet.
Just recently in France, a high court has struck down a bill that promoted the
censorship of the Internet. Other countries have attempted similar moves. The Internet
cannot be regulated in the way of other mediums simply because it is not the same as
anything else that we have. It is a totally new and unique form of communication and
deserves to be given a chance to prove itself. The laws of one country cannot simply be
applied to the Internet, because the Internet has no borders.
Although North America (mainly the U.S.A.) has the largest share of servers, the
Internet is still a world-wide network. This means that domestic regulations can not
oversee the rules of foreign countries. It would be just as easy for an American teen to
download (receive) pornographic material from England, as it would be from down the
street. One of the major problems is the lack of physical boundaries, making it difficult to
determine where violations of the law should be prosecuted. There is no one place
through which all information passes. That was one of the key points that was stressed
during the original days of the Internet, then called ARPANET. It started out as a defense
project that would allow communication in the event of an emergency such as nuclear
attack. Without a central authority, information would pass around until it got where it
was going. It is something like a road system: it is not necessary to take any specific route;
any route that works will do. In the same way, the information on the Internet starts out and
eventually gets to its destination.
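The "road system" idea can be illustrated with a small sketch: if one link between nodes is removed, a breadth-first search over the remaining links still finds another path. The tiny four-node network below is made up and stands in for the much larger real network.

    # Sketch: remove one link and a breadth-first search still finds another route.
    from collections import deque

    links = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}

    def find_route(start, goal, broken=frozenset()):
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == goal:
                return path
            for nxt in links[node]:
                hop = frozenset((node, nxt))
                if nxt not in seen and hop not in broken:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(find_route("A", "D"))                                   # a direct-ish route
    print(find_route("A", "D", broken={frozenset(("B", "D"))}))   # reroutes via C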
The Internet is full of anonymity. Since text is the standard form of
communication on the Internet it becomes difficult to determine the identity and/or age of
a specific person. Nothing is known for certain about a person accessing content. There
are no signatures or photo-ids on the Internet; therefore, it is difficult to certify that illegal
activities (regarding minors accessing restricted data) are taking place. Take for example
a conversation on IRC. Two people could be talking to one another, but all that they see
is text. It would be extremely difficult, if not impossible, to know for certain the gender
and/or age just from communication like this. Then if the conversationalist lies about any
points mentioned above it would be extremely difficult to know or prove otherwise. In
this way governments could not restrict access to certain sites on the basis of age. A
thirteen year old boy in British Columbia could decide that he wanted to download
pornography from an adult site in the U.S. The site may have warnings and age
restrictions but they have no way of stopping him from receiving their material if he says
he is 19 years old when prompted. The complexity in the way information is passed
around the Internet means that if information has been posted, deleting this material
becomes almost impossible. The millions of people that participate on the Internet
everyday have access to almost all of the data present. As well it becomes easy to copy
something that exists on the Internet with only a click of a button. The relative ease of
copying data means the second information is posted to the Internet it may be archived
somewhere else. There are in fact many sites on the Internet that are devoted to the
archiving of information including: Walnut Creek's cdrom.com, which archives an
incredible amount of software among others, The Internet Archive-www.archive.org,
which is working towards archiving as much of the WWW as possible, and The
Washington University Data Archive, which is dedicated to archiving software,
publications, and many other types of data. It becomes hard to censor material that might
be duplicated or triplicated within a matter of minutes.
The Internet is much too complex a network for censorship to occur effectively.
It is a totally new and unique environment in which communications take place. Existing
laws are not applicable to this medium. The lack of tangible boundaries causes confusion
as to where violations of law take place. The Internet is made up of nameless interaction
and anonymous communication. The complexity of the Internet makes it near impossible
to delete data that has been publicized. No one country should be allowed to, or could,
regulate or censor the Internet.
f:\12000 essays\sciences (985)\Computer\Internet in the Classroom.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Internet in the Classroom
The Internet is a network of millions of computers worldwide, connected together. It is an elaborate source of education, information, entertainment, and communication. Recently, President Bill Clinton expressed an idea to put the Internet into every classroom in America
by the year 2000 [4]. Considering the magnitude of this project and the costs involved, it is not realistically possible to set this as a goal.
The Internet allows the almost five million computers [1] and countless users of the system to collaborate easily and quickly either in pairs or in groups. Users are able to access
people and information, distribute information, and experiment with new technologies and services. The Internet has become a major global infrastructure used for education, research, professional learning, public service, and business.
The costs of setting up and maintaining Internet access are varied and changing. Let's take a look at some of the costs of setting up Internet service in a typical school. First comes the hardware. The hardware required is generally a standard Windows-based PC or Macintosh and a 14.4 Kbps or higher modem, which will cost about $1,000 apiece. If the average school has 50 classrooms, the cost has already risen to $50,000 per school for only one connection per classroom. Next you need actual Internet service. For 24-hour connections, expect to pay $100 or more per month, per account.
If a school plans to have more than a few individual Internet users, it will need to consider a network with a high-speed dedicated line connected to the Internet. This school network would probably be a small- or medium-sized network in a single building or within very few geographically close buildings. Connecting an entire school may require more than one specific LAN(Local Area Network).
Most high-speed Internet connections are provided through a dedicated leased line, which is a permanent connection between two points. This provides a high quality permanent Internet connection at all times. Most leased lines are provided by a telephone company, a cable television company, or a private network provider and cost $200 per month or more. The typical connection from a LAN or group of LANs to the Internet is a digital leased line with a Channel Service Unit/Data Service Unit (CSU/DSU), which costs between $600 and $1000.
When budgeting for a school's Internet connection there are a number of factors to consider that might not seem immediately obvious. Technical support and training will incur additional ongoing costs, even if those costs show up only as an individual's time spent. Equipment will need to be maintained and upgraded as time passes, and even when all teachers have received basic Internet training, they will most likely have questions as they explore and learn more on their own. A general rule for budget planning is this: for every dollar you spend on hardware and software, plan to spend three dollars to support the technology and those using it[2].
There are approximately 81,000 public schools in America. Within these schools, there are about 46.6 million children in kindergarten through 12th grade [3]. Considering an average of about 50 classrooms per school, at an average cost of $1,000 per classroom for one connection (an extremely low estimate), this gives President Clinton's idea a price tag of roughly $4 billion. This estimate does not even begin to take into account the costs of constant upgrades, full-time technicians, and structural changes required to install these systems.
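The arithmetic behind that figure can be checked in a few lines. The numbers below are the ones quoted in the text above; the final line applies the three-support-dollars-per-hardware-dollar budgeting rule quoted from [2] as an illustration of how quickly the estimate grows.

    # Reproducing the paper's rough estimate from the figures quoted above.
    schools = 81_000                 # public schools in America
    classrooms_per_school = 50       # average assumed in the text
    hardware_per_classroom = 1_000   # dollars for one PC/modem connection

    hardware_total = schools * classrooms_per_school * hardware_per_classroom
    print(f"hardware alone: ${hardware_total:,}")    # $4,050,000,000 -- roughly $4 billion

    # Budgeting rule from [2]: plan three support dollars per hardware dollar.
    print(f"with support:   ${hardware_total * 4:,}")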
When you look into the actual facts of a problem, sometimes you see that certain ideas are not at all plausible. Putting Internet access into our nation's schools is an excellent idea, but do we really need it? Considering that all major and most minor colleges offer a wide range of Internet services, it is not necessary to have that same service in our public schools. Bill Clinton's idea of putting Internet service into every classroom in America by the year 2000 is not realistically possible. When you look into the facts, it is obvious that this plan has not been thought out at all, and will not be put into effect.
References
[1] Malkin, G., and A. Marine, "FYI on Questions and Answers:
Answers to Commonly Asked 'New Internet User' Questions", FYI
4, RFC 1325, Xylogics, SRI, May 1992.
[2] Sellers, J., "Answers to Commonly Asked 'Primary and Secondary School
Internet User' Questions", NASA NREN/Sterling Software.
[3] National Center for Education Statistics, E.D. TABS, July 1995.
[4] The Whit, Rowan College newspaper.
f:\12000 essays\sciences (985)\Computer\Internet Inventions.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Several inventions have changed the way people communicate with each other. From the old
fashioned telegraph to today's modern electronic forms of communicating, people have
been creating easier ways to correspond. Electronic communication, such as e-mail and other
internet offerings, has created a cheap and incredibly fast communications system which is
gaining steady popularity.
E-mail is basically information, usually in letter form, addressed to a destination on the internet.
The internet is an international web of interconnected networks--in essence, a network of
networks; these consist of government, education, and business networks. Software on these
networks between the source and destination networks "read" the addresses on packets and
forward them toward their destinations. E-mail is a very fast and efficient way of sending
information to any internet location. Once an e-mail is sent, it arrives at its destination almost
instantly. This provides people with a way to communicate with people anywhere in the world
quickly without the costs of other forms of communicating such as telephone calls or postage for
letters. The savings to be gained from e-mail were enough of an inducement for many
businesses to invest heavily in equipment and network connections in the early 1990s. The
employees of a large corporation may send hundreds of thousands of pieces of E-mail over the
Internet every month, thereby cutting back on postal and telephone costs. It is not uncommon to
find internet providers charging from twenty to thirty dollars a month for unlimited access to internet
features. Many online services such as America Online and Prodigy offer e-mail software and
internet connections which work in an almost identical way; however, they cost more.
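As a minimal sketch of what "addressed to a destination on the internet" looks like in practice, here is how a message could be handed to an outgoing mail relay with Python's standard smtplib module. The addresses and mail server name are placeholders, not real accounts.

    # Sketch: sending an e-mail message to an Internet address.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Hello from the Internet"
    msg.set_content("E-mail arrives at its destination almost instantly.")

    with smtplib.SMTP("mail.example.com") as server:   # the sender's outgoing mail relay
        server.send_message(msg)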
The World Wide Web (WWW) and USENET Newsgroups are among other internet offerings
which have changed the way people communicate with each other. The WWW can be
compared to an electronic bulletin board where information consisting of anything can be posted.
One can create visual pages consisting of text and graphics which become viewable to anyone
with WWW access. Anything from advertisements to providing people with information and
services can be found on the WWW. File transfers between networks can also be accomplished
on the WWW through Gopher and FTP (File Transfer Protocol) sites. Newsgroups are very
similar, but run in a different way. Newsgroups basically create a forum where people can
discuss a vast array of subjects. There are thousands of newsgroups available. Once one finds
a subject that interests them, they may post notes which are visible to anyone visiting that
particular newsgroup, and others may respond to such notes. Again, this can be advertising,
information, or, more commonly, gossip.
Though the internet can be a convenient way of communicating, it can become problematic.
Networks can shut down, resulting in lost e-mail and in WWW sites and newsgroups being down for
a period of time. Another problem is the addictive factor associated with most online services.
One can become attached to an online service, thrilled at being able to meet people all over
the world. Much spare time can be spent e-mailing and surfing the net, creating a lack of real
human interaction for such an individual. Though this may not be a big concern for most people,
it is considered more healthy to be active rather than sitting in front of a computer for hours a
day. Also, the need for variety can cause one to subscribe to many providers with varying costs,
creating large monthly bills.
Though the lack of human interaction may seem like a problem, technology is continuing to
create new ways to interact more fully with people on the internet. New inventions such as the
I-Phone and miniature video cameras are further changing the way we communicate with each
other. Now, with the I-Phone, one can actually talk with people over the internet by
telephone without normal long distance calling charges. Also, with the new video cameras
which can be connected to the computer, people can actually see who they are talking
to, regardless of location. No longer are people confining themselves to a room typing
information to one another; they are interacting in richer ways.
Electronic communication is proving to be the way of the future. This affordable and efficient
system of exchanging information is still gaining popularity, and people, as well as businesses,
are utilizing its many services.
f:\12000 essays\sciences (985)\Computer\Internet regulation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Government Intervention of the Internet
During the past decade, our society has become based solely on the ability to move large amounts of information across large distances quickly. Computerization has influenced everyone's life. The natural evolution of computers and this need for ultra-fast communications has caused a global network of interconnected computers to develop. This global net allows a person to send E-mail across the world in mere fractions of a second, and enables even the common person to access information world-wide. With advances such as software that allows users with a sound card to use the Internet as a carrier for long distance voice calls and video conferencing, this network is key to the future of the knowledge society. At present, this net is the epitome of the first amendment: free speech. It is a place where people can speak
their mind without being reprimanded for what they say, or how they choose to say it. The key to the world-wide success of the Internet is its protection of free speech, not only in America, but in other countries where free speech is not protected by a constitution. To be found on the Internet is a huge collection of obscene graphics, Anarchists' cookbooks and countless other things that offend some people. With over 30 million Internet users in the U.S. alone (only 3 million of which surf the net from home), everything is bound to offend someone. The newest wave of laws floating through lawmaking bodies around the world threatens to stifle this area of spontaneity. Recently, Congress has been considering passing laws that will make it a crime punishable by jail to send "vulgar" language over the net, and to export encryption software. No matter how small, any attempt at government intervention in the Internet will stifle the greatest communication innovation of this century. The government wants to maintain control over this new form of communication, and they are trying to use the protection of children as a smoke screen to pass laws that will allow them to regulate and censor the Internet, while banning techniques that could eliminate the need for regulation. Censorship of the Internet threatens to destroy its freelance atmosphere, while widespread encryption could help prevent the need for government intervention.
Jim Exon, a Democratic senator from Nebraska, wants to pass a decency bill regulating the Internet. If the bill passes, certain commercial servers that post pictures of unclad beings, like those run by Penthouse or Playboy, would of course be shut down immediately or risk prosecution. The same goes for any amateur web site that features nudity, sex talk, or rough language. Posting any dirty words in a Usenet discussion group, which occurs routinely, could make one liable for a $50,000 fine and six months in jail. Even worse, if a magazine that commonly runs some of those nasty words in its pages, The New Yorker for instance, decided to post its contents on-line, its leaders would be held responsible for a $100,000 fine and two years in jail. Why does it suddenly become illegal to post something that has been legal for years in print? Exon's bill apparently would also "criminalize private mail," ... "I can call my brother on the phone and say anything--but if I say it on the Internet, it's illegal" (Levy 53).
Congress, in their pursuit of regulations, seems to have overlooked the fact that the majority of the adult material on the Internet comes from overseas. Although many U.S. government sources helped fund Arpanet, the predecessor to the Internet, they no longer control it. Many of the new Internet technologies, including the World Wide Web, have come from overseas. There is no clear boundary between information held in the U.S. and information stored in other countries. Data held in foreign computers is just as accessible as data in America, all it takes is the click of a mouse to access. Even if our government tried to regulate the Internet, we have no control over what is posted in other countries, and we have no practical way to stop it.
The Internet's predecessor was originally designed to uphold communications after a nuclear attack by rerouting data to compensate for destroyed telephone lines and servers. Today's Internet still works on a similar design. The very nature of this design allows the Internet to overcome any kind of barriers put in its way. If a major line between two servers, say in two countries, is cut, then the Internet users will find another way around this obstacle. This obstacle avoidance makes it virtually impossible to separate an entire nation from indecent information in other countries. If it was physically possible to isolate America's computers from the rest of the world, it would be devastating to our economy.
Recently, a major university attempted to regulate what types of Internet access its students had, with results reminiscent of a 1960's protest. A research associate, Martin Rimm, at Carnegie Mellon University conducted a study of pornography on the school's computer networks. He put together quite a large picture collection (917,410 images) and he also tracked how often each image had been downloaded
(a total of 6.4 million). Pictures of similar content had recently been declared obscene by a local court, and the school feared they might be held responsible for the content of its network. The school administration quickly removed access to all these pictures, and to the newsgroups where most of this obscenity is suspected to come from. A total of 80 newsgroups were removed, causing a large disturbance among the student body, the American Civil Liberties Union, and the Electronic Frontier Foundation, all of whom felt this was unconstitutional. After only half a week, the college had backed down, and restored the newsgroups. This is a tiny example of what may happen if the government tries to impose censorship
(Elmer-Dewitt 102).
Currently, there is software being released that promises to block children's access to known X-rated Internet newsgroups and sites. However, since most adults rely on their computer literate children to set up these programs, the children will be able to find ways around them. This mimics real life, where these children would surely be able to get their hands on an adult magazine. Regardless of what types of software or safeguards are used to protect the children of the Information age, there will be ways around them. This necessitates the education of the children to deal with reality. Altered views of an electronic world translate easily into altered views of the real world. "When it comes to our children, censorship is a far less important issue than good parenting. We must teach our kids that the Internet is an extension and a reflection of the real world, and we have to show them how to enjoy the good things and avoid the bad things. This isn't the government's responsibility. It's ours (Miller 76)."
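The blocking software described at the start of that paragraph can be reduced, in spirit, to a host blocklist. The toy sketch below makes the point: the blocklist entries are made up, real products ship far larger lists, and, as the text notes, a determined child can still get around this kind of filter.

    # Toy version of site-blocking software: refuse URLs whose host is on a blocklist.
    from urllib.parse import urlparse

    BLOCKLIST = {"adultsite.example.com", "xxx.example.net"}   # made-up entries

    def allowed(url):
        host = (urlparse(url).hostname or "").lower()
        return host not in BLOCKLIST

    for url in ["http://xxx.example.net/page", "http://www.nasa.gov"]:
        print(url, "->", "allowed" if allowed(url) else "blocked")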
Not all restrictions on electronic speech are bad. Most of the major on-line communication companies place restrictions on what their users can "say." They must respect their customers' privacy, however: private E-mail content is off limits to them, but they may act swiftly against anyone who spouts obscenities in a public forum.
Self-regulation by users and servers is the key to avoiding government-imposed intervention. Many on-line sites, such as Playboy and Penthouse, have started to regulate themselves. Both post clear warnings that adult content lies ahead and list the countries where that content is illegal. The film and videogame industries subject themselves to ratings, and if Internet users want to avoid government-imposed regulation, it is time they began to regulate themselves. It all boils down to protecting children from adult material while protecting the First Amendment right to free speech between adults. Government attempts to regulate the Internet are not limited to obscenity and vulgar language; they also reach into other areas, such as data encryption.
By nature, the Internet is an insecure method of transferring data. A single E-mail message may pass through hundreds of computers on its way from source to destination, and at each one there is a chance the data will be archived or intercepted. Credit card numbers are a frequent target of hackers. Encryption is a means of encoding data so that only someone with the proper "key" can decode it.
"Why do you need PGP (encryption)? It's personal. It's private. And it's no one's business but yours. You may be planning a political campaign, discussing our taxes, or having an illicit affair. Or you may be doing something that you feel shouldn't be illegal, but is. Whatever it is, you don't want your private electronic mail (E-mail) or confidential documents read by anyone else. There's nothing wrong with asserting your privacy. Privacy is as apple-pie as the Constitution.
Perhaps you think your E-mail is legitimate enough that encryption is unwarranted. If you really are a law-abiding citizen with nothing to hide. What if everyone believed that law-abiding citizens should use postcards for their mail? If some brave soul tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope. There's safety in numbers. Analogously, it would be nice if everyone routinely used encryption for all their E-mail, innocent or not, so that no one drew suspicion by asserting their E-mail privacy with encryption. Think of it as a form of solidarity (Zimmerman)."
Until the development of the Internet, the U.S. government controlled most new encryption techniques. With the spread of faster home computers and a worldwide network, it no longer holds that control. New algorithms have been published that are reportedly uncrackable even by the FBI and the NSA. This is a major concern to the government, which wants to carry its ability to conduct wiretaps and other forms of electronic surveillance into the digital age. To stop the spread of data encryption software, the U.S. government has imposed very strict laws on its exportation.
One well-known example of this is the PGP (Pretty Good Privacy) scandal. PGP was written by Phil Zimmerman and is based on "public key" encryption. This system uses complex algorithms to produce two linked keys, one for encoding and one for decoding. To send an encoded message to someone, a copy of that person's "public" key is needed. The sender uses this public key to encrypt the data, and the recipient uses the matching "private" key to decode the message (a toy sketch of this exchange appears after the quotation below). As Zimmerman was finishing his program, he heard about a proposed Senate bill to ban cryptography. This prompted him to release his program for free, hoping it would become so popular that its use could not be stopped. One of the original users of PGP posted it to an Internet site where anyone from any country could download it, prompting federal investigators to open a case against Zimmerman for violating export restrictions on cryptographic software. As with any new technology, the program has allegedly been used for illegal purposes, and the FBI and NSA are believed to be unable to crack its code. When told about the illegal uses of his program, Zimmerman replied:
"If I had invented an automobile, and was told that criminals used it to rob banks, I
would feel bad, too. But most people agree the benefits to society that come from
automobiles -- taking the kids to school, grocery shopping and such -- outweigh
their drawbacks." (Levy 56).
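To make the public/private key exchange described above concrete, here is a minimal sketch in Python using tiny textbook-RSA numbers. It is an illustration of the idea only, not PGP's actual algorithm; real keys are hundreds of digits long, and the primes, exponents, and message here are chosen purely for readability.

# A toy illustration of the public/private key idea described above.
# Textbook RSA with deliberately tiny numbers -- not PGP's real implementation.

def make_toy_keys():
    p, q = 61, 53                 # two small primes (kept secret)
    n = p * q                     # 3233, shared by both keys
    phi = (p - 1) * (q - 1)       # 3120
    e = 17                        # public exponent
    d = pow(e, -1, phi)           # private exponent (2753); needs Python 3.8+
    return (e, n), (d, n)         # (public key, private key)

def encrypt(message_number, public_key):
    e, n = public_key
    return pow(message_number, e, n)

def decrypt(ciphertext, private_key):
    d, n = private_key
    return pow(ciphertext, d, n)

public, private = make_toy_keys()
secret = 42                               # a message encoded as a number
scrambled = encrypt(secret, public)       # anyone holding the public key can do this
recovered = decrypt(scrambled, private)   # only the private-key holder can undo it
print(scrambled, recovered)               # prints: 2557 42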
The government has not been totally blind to the need for encryption. For nearly two decades, a government-sponsored algorithm, the Data Encryption Standard (DES), has been used primarily by banks, and the government has always maintained the ability to decipher that code with its powerful supercomputers. Now that new forms of encryption have been devised that the government cannot decipher, it is proposing a new standard to replace DES. This new standard, called Clipper, is not software but a microchip that can be incorporated into just about anything (televisions, telephones, etc.). Its algorithm uses a much longer key, making it roughly 16 million times stronger than DES; it is estimated that today's fastest computers would take 400 billion years to break the code by trying every possible key (Lehrer 378). "The catch: At the time of manufacture, each Clipper chip will be loaded with its own unique key, and the Government gets to keep a copy, placed in escrow. Not to worry, though; the Government promises that they will use these keys to read your traffic only when duly authorized by law. Of course, to make Clipper completely effective, the next logical step would be to outlaw other forms of cryptography (Zimmerman)."
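As a back-of-the-envelope check on those figures, the sketch below assumes DES's 56-bit key and the 80-bit key generally reported for Clipper; the search speed is an invented assumption used only to show how a "hundreds of billions of years" estimate arises.

# Rough check of the key-size figures quoted above, assuming 56-bit DES keys
# and 80-bit Clipper keys. The keys-per-second rate is an assumption.

DES_BITS = 56
CLIPPER_BITS = 80

des_keys = 2 ** DES_BITS          # about 7.2e16 possible DES keys
clipper_keys = 2 ** CLIPPER_BITS  # about 1.2e24 possible Clipper keys

ratio = clipper_keys / des_keys   # 2**24 = 16,777,216 -- the "16 million times" figure
print(f"Clipper key space is {ratio:,.0f} times larger than DES")

keys_per_second = 100_000         # assumed search speed for illustration only
seconds = clipper_keys / keys_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"Exhaustive search at that speed: about {years:.0e} years")  # on the order of 1e11 years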
The most important benefits of encryption have been conveniently overlooked by the government. If everyone used encryption, there would be no way for an innocent bystander to happen upon something they chose not to see: only the intended recipient of the data could decrypt it (using public key cryptography, not even the sender can decrypt it) and view its contents. Each coded message can also carry an encrypted signature verifying the sender's identity. The sender's secret key can be used to encrypt an enclosed signature message, thereby "signing" it. This creates a digital signature, which the recipient (or anyone else) can check by using the sender's public key to decrypt it. This proves that the sender was the true originator of the message and that the message has not been subsequently altered by anyone else, because the sender alone possesses the secret key that made that signature. "Forgery of a signed message is infeasible, and the sender cannot later disavow his signature (Zimmerman)." Gone would be the hate mail that causes so many problems, and gone would be the ability to forge a document under someone else's address. The government, if it did not have ulterior motives, should mandate encryption, not outlaw it.
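Here is a toy sketch of the signing and verification just described, reusing the same tiny textbook-RSA numbers as before; the message and key values are illustrative only, and a real system would use far larger keys.

# Toy digital-signature sketch: sign with the private key, verify with the public key.
import hashlib

e, n = 17, 3233            # public key (exponent, modulus), same toy values as above
d = 2753                   # matching private exponent

def digest(message):
    # Reduce a SHA-256 hash of the message into the tiny key's range.
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message, private_exponent):
    return pow(digest(message), private_exponent, n)   # only the sender can compute this

def verify(message, signature, public_exponent):
    return pow(signature, public_exponent, n) == digest(message)

msg = "Meet at noon."
sig = sign(msg, d)
print(verify(msg, sig, e))              # True: message is authentic and unaltered
print(verify("Meet at noon!", sig, e))  # almost certainly False: tampering breaks the signature
                                        # (collisions are conceivable only because these keys are tiny)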
As the Internet continues to grow throughout the world, more governments may try to impose their views onto the rest of the world through regulations and censorship. It will be a sad day when the world must adjust its views to conform to those of the most prudish regulatory government. If too many regulations are enacted, the Internet as a tool will become nearly useless, and the Internet as a mass communication device and a place for freedom of thought will cease to exist. The users, servers, and parents of the world must regulate themselves, so as not to invite government regulations that could stifle the best communication instrument in history. If encryption catches on and becomes as widespread as Phil Zimmerman predicts, there will no longer be a need for the government to meddle in the Internet, and the biggest problem will work itself out. The government should rethink its approach to the censorship and encryption issues and allow the Internet to continue to grow and mature.
Works Cited
Elmer-Dewitt, Philip. "Censoring Cyberspace: Carnegie Mellon's Attempt to Ban Sex from Its Campus Computer Network Sends a Chill Along the Info Highway." Time 21 Nov. 1994: 102-105.
Lehrer, Dan. "The Secret Sharers: Clipper Chips and Cypherpunks." The Nation 10 Oct. 1994: 376-379.
"Let the Internet Backlash Begin." Advertising Age 7 Nov. 1994: 24.
Levy, Steven. "The Encryption Wars: Is Privacy Good or Bad?" Newsweek 24 Apr. 1995: 55-57.
Miller, Michael. "Cybersex Shock." PC Magazine 10 Oct. 1995: 75-76.
Wilson, David. "The Internet Goes Crackers." Education Digest May 1995: 33-36.
Zimmerman, Phil. Pretty Good Privacy v2.62. 1995. [Online]. Available FTP: net-dist.mit.edu Directory: pub/pgp/dist File: Pgp262dc.zip
f:\12000 essays\sciences (985)\Computer\Internet security.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Internet Security
Many people today are familiar with the Internet and its use. A large number of its users, however, are not aware of the security problems they face when using the Internet. Most users feel they are anonymous when on-line, yet in actuality they are not, and there are some very easy ways for users to protect themselves from future problems.
The Internet has brought many advantages to its users but has also created some major problems. Most people believe that they are anonymous when they are using the Internet, and because of this thinking they are not careful with what they do and where they go when on the "net." Security is a major issue with the Internet because the general public now has access to it. When only the government and higher education had access, there was little worry about credit card numbers and other types of important data being taken. The Internet brings many advantages to its users, but there are also many problems with Internet security, especially when dealing with personal security, business security, and the government's involvement in protecting users.
The Internet is a new, barely regulated frontier, and there are many reasons to be concerned with security. The same features that make the Internet so appealing, such as interactivity, versatile communication, and customizability, also make it an ideal way for someone to keep a careful watch on a user without the user being aware of it (Lemmons 1). It may not seem like it, but it is completely possible to build a personal profile on someone just by tracking them in cyberspace. Every action a person takes while logged onto the Internet is recorded somewhere (Boyan, Codel, and Parekh 3).
An individual's personal security is the major issue surrounding the Internet. If a person cannot be secure and have privacy on the Internet, the whole system will fail. According to the Center for Democracy and Technology (CDT), any website can find out which server a person used to get on the Internet and where that server is located, whether his computer is Windows- or DOS-based, and which Internet browser was used. This is the only information that can be taken legally, but it can safely be assumed that in some cases much more data is actually taken (1). These are just a few of the many ways for people to find out the identity of an individual and what he is doing on the Internet.
One of the most common ways for webmasters to find out information about a user is passive recording of transactional information. This records the user's movements through a website: where the user came from, how long he stayed, what files he looked at, and where he went when he left. This information is entirely legal to obtain, and often the webmaster will use it to see which parts of his site attract the most attention so he can improve the site for the people who return often (Boyan, Codel, and Parekh 2).
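A minimal sketch of this kind of passive recording follows: a short script that summarizes a web server access log. The log entries are invented, but they follow the common log format most servers of the period wrote, and the script recovers exactly the sort of detail described above (who visited and which files they looked at).

# Summarizing an ordinary access log -- the raw material of "passive recording".
from collections import Counter

log_lines = [
    '192.0.2.10 - - [26/Jan/1997:10:02:11] "GET /index.html HTTP/1.0" 200 1043',
    '192.0.2.10 - - [26/Jan/1997:10:02:45] "GET /products.html HTTP/1.0" 200 2210',
    '198.51.100.7 - - [26/Jan/1997:10:03:02] "GET /index.html HTTP/1.0" 200 1043',
]

visitors = Counter()
pages = Counter()
for line in log_lines:
    address = line.split()[0]               # where the request came from
    page = line.split('"')[1].split()[1]    # which file was asked for
    visitors[address] += 1
    pages[page] += 1

print("Requests per visitor:", dict(visitors))
print("Most requested pages:", pages.most_common())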
There is a much more devious way that someone can gain access to information on a user's hard drive. In the past, the user did not need to be concerned about which browser he used; that changed when Netscape Navigator 2.0 was introduced. Netscape 2.0 takes advantage of a programming language called Java, whose programs run inside the browser to enhance the website the user is viewing. It is possible to write a Java program that transfers data from the user's computer back to the website without the user ever being aware that anything was taken. Netscape has issued new releases that fix some, but not all, of the two dozen holes in the program (Methvin 3).
Many people do not realize that they often give information to websites through something called direct disclosure. Direct disclosure is just that: the user gives the website information such as an e-mail address, real address, phone number, and any other information that is requested. Often, by giving up information, a user will receive special benefits for "registering," such as a better version of some software or admission to "members only" areas (Boyan, Codel, and Parekh 2).
E-mail is like a postcard, not like a letter sealed in an envelope. Every carrier that touches an e-mail can read it if they choose. Not only can the carriers see the message, but it can also be electronically intercepted and read by hackers, all without the sender or the receiver ever knowing anything has happened (Pepper 1). E-mail is especially intriguing to hackers because it can be full of important data, from secret corporate information to credit card numbers (Rothfeder, "Special Reports" 2).
The only way to secure e-mail is encryption, which makes an envelope the hacker cannot penetrate. The downside to using encryption on a huge network like the Internet is that both users must have compatible software (Rothfeder, "Special Reports" 2). Another way to protect a person's e-mail is to use an anonymous remailer. This gives the sender a "false" identity, which only the remailer knows, and makes it very difficult to trace the origin of the e-mail (Boyan, Codel, and Parekh 4).
Another, more controversial way of gathering data is the use of client-side persistent information, or "cookies" (Boyan, Codel, and Parekh 2). Cookies are merely small pieces of encoded data that a website sends to the browser, which stores them on the user's hard drive and sends them back when the user returns at a later time. Although cookies live on the user's hard drive, they are actually fairly harmless and can save the user time when visiting a website (Heim 2).
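A small sketch of the cookie mechanism, using Python's standard library; the cookie name and value are made up, and the point is only that the server hands the browser a piece of encoded data and reads it back on the next visit.

# Server side: build the Set-Cookie header that goes back with the page.
from http.cookies import SimpleCookie

outgoing = SimpleCookie()
outgoing["last_visit"] = "1997-01-26"
outgoing["last_visit"]["path"] = "/"
print(outgoing.output())              # Set-Cookie: last_visit=1997-01-26; Path=/

# Browser side, next visit: the stored value comes back in a Cookie header,
# so the site can recognize the returning user without asking again.
incoming = SimpleCookie()
incoming.load("last_visit=1997-01-26")
print(incoming["last_visit"].value)   # 1997-01-26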
Personal security is an important issue that needs to be dealt with, but business security is also a major concern. "An Ernst and Young survey of 1271 companies found that more than half had experienced computer-related break-ins during the past two years; 17 respondents had losses over $1 million" ("November 1995 Feature"). In a survey conducted by the Computer Security Institute and the FBI, 53 percent of 428 respondents said they were victims of computer viruses; 42 percent also said that unauthorized use of their systems had occurred within the last 12 months (Rothfeder, "November 1996 Feature" 1).
While electronic attacks are increasing more rapidly than any other kind, a large number of data break-ins come from the inside. Ray Jarvis, president of Jarvis International Intelligence, says, "In information crimes, it's not usually the janitor who's the culprit. It's more likely to be an angry manager who's already looking ahead to another job" (Rothfeder, "November 1996 Feature" 3).
While electronic espionage is increasing, so is the ability to protect computer systems. "The American Society for Industrial Security estimates that high-tech crimes, including unreported incidents, may be costing U.S. corporations as much as $63 billion a year" (Rothfeder, "November 1996 Feature" 1).
There are many ways for businesses to protect themselves. They can use a variety of techniques such as firewalls and encryption.
Firewalls are one of the most commonly used security devices. They are usually placed at the entrance to a network, where they keep unauthorized users out while admitting authorized users only to the areas of the network to which they should have access. There are two major problems with firewalls. The first is that they need to be installed at every point where the system comes in contact with other networks, such as the Internet (Rothfeder, "November 1996 Feature" 5). The second is that firewalls rely on passwords to keep intruders out, so a firewall is only as good as the identification scheme used to log onto the network (Rothfeder, "November 1996 Feature" 2).
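A minimal sketch of the rule checking a firewall performs at a network entry point; the zones, ports, and policy here are invented for illustration.

# Toy firewall policy: inside traffic passes; outsiders only reach public services.
ALLOWED_ZONES = {"internal"}
OPEN_PORTS = {25, 80}          # mail and web are reachable from outside

def allow(zone, port):
    if zone in ALLOWED_ZONES:
        return True                 # traffic from the trusted side passes
    return port in OPEN_PORTS       # everyone else only reaches open services

print(allow("internal", 5432))      # True  -- internal user, any service
print(allow("external", 80))        # True  -- public web server
print(allow("external", 5432))      # False -- the database stays protected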
Passwords, the key to most firewalls, are also the most basic of security measures. The user should avoid easily guessable passwords such as a child's name, a birthdate, or initials. Instead, he should use cryptic phrases and combine lowercase and capital letters, such as "THE crow flies AT midnight". Another easy way to avoid problems is to change the password or phrase at least once a month (Rothfeder, "November 1996 Feature" 5).
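A small sketch of that advice in Python: building a cryptic phrase from randomly chosen, randomly capitalized words rather than a guessable name or date. The word list is a stand-in; a real one would be much longer.

# Generate a passphrase of mixed-case random words using a cryptographic source.
import secrets

WORDS = ["crow", "midnight", "harbor", "violet", "anchor", "thunder", "meadow", "copper"]

def passphrase(word_count=4):
    chosen = []
    for _ in range(word_count):
        word = secrets.choice(WORDS)        # cryptographically strong choice
        if secrets.choice([True, False]):
            word = word.upper()             # mix small and capitalized letters
        chosen.append(word)
    return " ".join(chosen)

print(passphrase())   # e.g. "VIOLET anchor THUNDER meadow"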
In case an intruder does get through the first layer of security, a good backup is to have all the data on the system encrypted. Many browsers come with their own encryption schemes, but companies can buy stand-alone packages as well. Most encryption packages are based on a public-private key pair: the recipient's private key is what unlocks and deciphers a message. Encryption is the single best way to keep data from being read if it is stolen, and it is rather cost effective (Rothfeder, "November 1996 Feature" 5).
Businesses need protection but they cannot do it alone. The Federal government will have to do its part if the Internet is going to give us all the returns possible. Businesses will not use the Internet if they do not have support from the government.
In the United States there is no single set of laws that protects a person's privacy on the Internet. The closest thing to a standard of privacy is an assortment of laws beginning with the Constitution and continuing down to local ordinances. Unfortunately, these laws are not geared toward the Internet; they protect only a person's informational privacy in general (Boyan, Codel, and Parekh 3).
Now, because of the booming interest and activity on the Internet in both the personal and the business level, the government has started investigating the Internet and working on ways to protect the users.
The Federal Bureau of Investigation (FBI), the Central Intelligence Agency (CIA), and the National Security Agency have all devoted small units to fighting computer security crimes. After Senate hearings, the Justice Department proposed that a full-time task force be set up to study the vulnerability of the nation's information infrastructure. This would create a rapid-response team for investigating computer crimes. The department also proposed requiring all companies to report high-tech break-ins to the FBI (Rothfeder, "November 1996 Feature" 4).
Security on the Internet is improving; it is just that the use of the Internet is growing much faster. Security is a key issue for every user and should be addressed before a person ever logs on to the "net". At a minimum, all users should have passwords to protect themselves, and businesses need to put up firewalls at all points of entry. These are low-cost security measures that should not be overlooked in a potential multi-billion-dollar industry.
Works Cited
Boyan, Justin, Eddie Codel, and Sameer Parekh. Center for Democracy and Technology Web Page. Http://www.13x.com/cgi-bin/cdt/snoop.pl accessed January 26, 1997: 1-4.
Heim, Judy. "Here's How." PC World Online January 1997: 1-3.
Lemmons, Phil. "Up Front." PC World Online February 1997: 1-2.
Methvin, David W. "Safety on the Net." Windows Magazine Online (1996): 1-9.
"November 1995 Feature." PC World Online November 1995: 1-3.
Pepper, Jon. "Better Safe Than Sorry." PC World Online October 1996: 1-2.
Rothfeder, Jeffrey. "February 1997 Special Report." PC World Online February 1997: 1-6.
Rothfeder, Jeffrey. "November 1996 Features." PC World Online November 1996: 1-6.
f:\12000 essays\sciences (985)\Computer\internet servers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Anonymous
CGS 1000
Assignment 1
02/25/97
Internet Servers
Here is a list of four common and not-so-common Internet servers. The list contains the basic features each service promotes, the price of the service, and the basic software and hardware requirements to access the Internet.
THE MICROSOFT NETWORK:
Features:
The MSN has a very user-friendly environment; everything is done in an easily customized format. Customers can send and receive e-mail and get access to chat rooms, specialized-interest forums, and newsgroups. MSN also offers online shopping and MSNBC online news.
Cost:
To start, the Microsoft Network offers a free one-month unlimited-access trial; after that period, the first billing cycle begins. With MSN, customers have a few choices in how they pay their bills and get their access:
MSN premier plan: This is the easy way to get the best of the Internet. The premier plan provides exclusive MSN programming, plus five hours to explore everything MSN and the Internet have to offer, for $6.95 per month plus $2.50 for each additional hour.
MSN premier annual plan: Customers get twelve months of service for the price of ten; a single payment of $69.50 covers the annual membership. With this plan the customer still pays $2.50 for each additional hour after the first five hours each month.
MSN premier unlimited plan: Premier unlimited access gives everything MSN and the Internet have to offer for a flat rate of $19.95 per month, with no hourly charges.
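A quick worked comparison of the plans above shows where the flat-rate plan starts to pay for itself; the prices are the ones quoted in the text.

# Compare the premier plan (base fee plus hourly overage) with the unlimited plan.
def premier_cost(hours):
    extra = max(0, hours - 5)          # the first five hours are included
    return 6.95 + 2.50 * extra

UNLIMITED = 19.95

for hours in (5, 8, 10, 11, 15):
    print(hours, "hours:", f"premier ${premier_cost(hours):.2f}", f"vs unlimited ${UNLIMITED:.2f}")

# Break-even: 6.95 + 2.50 * (h - 5) = 19.95  =>  h = 10.2 hours per month.
# Anyone expecting more than about ten hours a month online is better off on the unlimited plan.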
Customer support:
Service is very friendly and helpful if you are willing to hold for up to twenty minutes; otherwise you are not going to be able to get in touch with them.
Hardware and software requirements:
-PC with 486dx or higher processor.
-Windows 95.
-CD-ROM drive.
-14.4 kbps modem.
-Mouse or compatible pointing device.
-VGA or higher resolution graphics card.
-8 Mb of memory required, 16 Mb recommended.
-50 Mb of additional hard disk space.
-Sound card recommended.
-Software is provided by MSN free of charge.
Comments: Very user-friendly interface. The new MSN is a little slower than the older version because it has more graphics to load. Customer support is reachable after a substantial wait on hold, but thereafter the people from MSN are very helpful.
NETCOM:
NETCOM is a nationwide company whose emphasis is on business and productivity-minded individuals. NETCOM subscribers get unlimited access to the Internet over a high-speed digital network, with over 330 access points nationwide. Only one rate is available for the moment: unlimited access for $19.95.
NETCOM claims to have very user-friendly software; a beginner should have it up and browsing in under ten minutes. Packaged browsers include Netscape Navigator, Microsoft Internet Explorer, and NETCOM's own NETCOMPLETE browser. NETCOM also partners with various software companies, offering McAfee WebScan, Eudora Pro, EasyPhoto, and SurfWatch, to name just a few. The company also claims that it is easy to send and receive e-mail, and with its Internet kit customers can connect to newsgroups, chat rooms, and web sites.
If a customer encounters any kind of problem related to NETCOM, customer support is available seven days a week, twenty-four hours a day, at no charge. Over the phone customers deal with an automated system; to reach a person, it has to be done via e-mail.
Main features:
Personal Services Portfolio:
This is offered to NETCOM customers as an extension of their base subscription rate of $19.95 per month. Some of these services are free of charge; others require a nominal service fee. The following are the currently available pieces of the Personal Services Portfolio.
Personal pages: Enables a customer to create a home page on the World Wide Web at no additional cost. A tutorial is also available to walk through each step of making a home page.
Two ways to get news: Personal News Page Direct enables the customer to receive an e-mail of the top twenty headlines and summaries based on news profiles predefined by the customer; the full text of the stories can be found on the News Page web site. The other way to retrieve the news is through the News Web Site, with up-to-the-minute news feeds on the web; customers can browse the top ten stories or use the ClariNet newsfeeds, which are searchable by keyword and category.
Personal Finances: A set of customized financial tools designed to provide the means to make intelligent investment decisions. Information is available on over 77,000 stocks, mutual funds, options, and industry groups; the customer can get a listing of the best and worst mutual funds and set up a personal portfolio with up to 150 entries.
SurfWatch: Allows the customer to block certain material on the Internet. This service is especially beneficial to parents who would like to keep their kids away from certain material. It can block WWW, FTP, Gopher, IRC, and other sites likely to contain objectionable material.
Hardware and Software requirements:
-PC with a 486dx or higher processor.
-Microsoft windows 3.1 or Windows 95 operating system.
-CD-ROM drive.
-Above 9600 baud modem.
-Mouse or compatible pointing device.
-VGA or higher resolution graphics card.
-8 MB of memory required.
-15 MB of additional hard disk space.
-The software required is provided by NETCOM.
Comments: Customer support did not take more than three minutes to pick up the phone, and they were very helpful. I am not familiar with their interface, but judging from the looks of it, it seems to be very easy to use.
COMPUSERVE
To start with COMPUSERVE, you get thirty days and ten hours to explore for free. The free month includes e-mail, news, weather, stock quotes, Internet access, and hundreds of special-interest forums, from cats and dogs to entertainment. The price plan they are trying to push is the standard plan: $9.95 per month with five hours, with each additional hour costing $2.95. They also have another plan, called the super value plan, that costs $24.95 per month with twenty hours; each additional hour costs $2.95.
Main features:
Redesigned Interface:
COMPUSERVE features a redesigned interface called COMPUSERVE 3.0. The new, extensively tested user interface helps members find content and features more quickly and easily, and it can even be customized.
Multitasking: COMPUSERVE claims that this feature will save its customers time and money. The customer does not have to wait for one task to finish before moving on to another; for example, the customer can chat while downloading a file. A to-do list enables multiple tasks to start up in a background session, making it more efficient than ever to retrieve files on line.
COMPUSERVE forums: COMPUSERVE forums are gathering places for people with similar interests, such as animals and fish, home computing, health, or business. The forum conference room is a more intimate chat environment than the conference center, and more interest-specific than the general chat sites.
COMPUSERVE no-modem e-mail: The COMPUSERVE communication card lets customers use any telephone to listen to e-mail via a text-to-voice synthesizer. The card also allows users to forward e-mail messages to a fax machine, receive voice mail, set up conference calls, use speed dial, and access information services such as news and travel. When used as a traditional calling card, the COMPUSERVE card allows savings of up to 58% compared to the rates of other calling cards.
Hardware and software requirements:
-486 DX or higher processor.
-300 baud modem with local access.
-8 MB of ram memory.
-10 MB of additional disk space.
-Mouse or equivalent.
-VGA or higher graphic resolution.
-Windows.
-Free software is provided by COMPUSERVE.
THE LIGHTHOUSE CONNECTION
Features:
Newsgroups and chat channels are available, as well as free e-mail addresses, and TLC offers its customers free disk space for their own web pages at no additional cost. The local access number is a free local call for Orange County and most of Seminole County, and TLC's modems are all 28.8 kbps or faster. TLC's support is available Monday through Friday from 9:30 a.m. to 7:30 p.m., and Saturday from 10:00 a.m. to 7:30 p.m.
Rates:
A flat rate of $10.95 per month for unlimited access service.
Hardware and software requirements:
-PC with a 486 dx or higher processor.
-VGA graphics card.
-mouse or other pointing device.
-9600 bps modem or faster.
-Windows.
-8 MB of RAM memory.
-10 MB free hard disk space.
-Free software is provided by TLC.
Comments: I found that TLC's customer support was very helpful; they walked me through the installation process, which I found to be more complicated than that of any other server. Their system seemed to be faster than any other I have tried, but their interface was not user-friendly at all; TLC and its users have to give up comfort for cost.
Service Recommendation: Personally, I would go with TLC, because once I get used to it, it will be just like any other server, and I will get the same basic services for half the price.
f:\12000 essays\sciences (985)\Computer\Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Internet
MEMORANDUM
Mrs. -----, I understand that some students that have already graduated from College are
having a bit of trouble getting their new businesses started. I know of a tool that will
be extremely helpful and is already available to them: the Internet. Up until a few years
ago, when a student graduated they were basically thrown out into the real world with just
their education and their wits. Most of the time this wasn't good enough because after
three or four years of college, the prospective entrepreneur either forgot too much of what
they were supposed to learn, or they just didn't have the finances. Then by the time they
saved sufficient money, they had again forgotten too much. I believe I have found the
answer. On the Internet your students will be able to find literally thousands of links to
help them with their future enterprises. In almost every city all across North America, no
matter where these students move to, they are able to link up and find everything they
need. They can find links like "Creative Ideas", a place they can go and retrieve ideas,
innovations, inventions, patents and licensing. Once they come up with their own products,
they can find free expert advice on how to market their products. There are easily
accessible links to experts, analysts, consultants and business leaders to guide their way
to starting up their own business, careers and lives. These experts can help push the
beginners in the right direction in every field of business, including every way to
generate start up revenue from better management of personal finances to diving into the
stock market. When the beginner has sufficient funds to actually open their own company,
they can't just expect the customers to come to them, they have to go out and attract them.
This is where the Internet becomes most useful, in advertising. On the Internet, in every
major consumer area in the world, there are dozens of ways to advertise. The easiest and
cheapest way, is to join groups such as "Entrepreneur Weekly". These groups offer weekly
newsletters sent all over the world to major and minor businesses informing them about new
companies on the market. It includes everything about your business from what you
make/sell and where to find you, to what you're worth. These groups also advertise to the
general public. The major portion of the advertising is done over the Internet, but this
is good because that is their target market. By now, hopefully their business is doing
well, sales are up and money is flowing in. How do they keep track of all their funds
without paying for an expensive accountant? Back to the Internet. They can find lots of
expert advice on where they should reinvest their money, including how many staff to hire
and how qualified they should be, what technical equipment to buy, and even what insurance to
purchase. This is where a lot of companies get into trouble, during expansion. Too many
entrepreneurs try to leap right into the highly competitive mid-size company world. On the
Internet, experts give their secrets on how to let a company's natural growth force its
way in. This way they are more financially stable for the rough road ahead. The Internet
isn't always going to give you the answers you are looking for, but it will always lead you
in the right direction. That is why I hope you will accept my proposal and make today's
students aware of this invaluable business tool.
f:\12000 essays\sciences (985)\Computer\Internet1.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Internet has an enormous impact on the American experience. First, it encourages the growth of businesses by providing new ways of advertising products to a large audience, and thus helps companies publicize their products. Second, it allows more Americans to find out what goes on in other countries by learning about other cultures and by exchanging opinions and ideas with people worldwide, which may well promote a better global understanding. Finally, by allowing people to access vast amounts of information easily, it will change how they make decisions and ultimately their lifestyles.
The Internet is a high-speed worldwide computer network that evolved from the Arpanet. The Arpanet was created by the Pentagon in late 1969 as a network for academic and defense researchers. In 1983, the National Science Foundation took over the management of the Internet. Now the Internet is growing faster than any other telecommunications system ever built. It is estimated that in three years, the system will be used by over 100 million people (Cooke 61).
Since the World Wide Web (WWW or W3) became popular through point-and-click programs that make it easier for non-technical people to use the Internet, over 21,000 businesses and corporations have become accessible through the Internet (Baig 81). These companies range from corporate giants like IBM, AT&T, Ford, and J.C. Penney to small law firms. "With the Internet, the whole globe is one marketplace and the Internet's information-rich WWW pages can help companies reach new customers," says Bill Washburn, former executive director of Commercial Internet Exchange (Baig 81).
Through the Internet, new opportunities to save money are created for companies. One of the bigger savings is the cost of transmission. It is estimated that the administrative cost of trade between companies in the U.S. amounts to $250 billion a year (Liosa 160). Sending an ordinary one-page e-mail message from New York to California via the Internet costs about a penny and a half, vs. 32 cents for a letter and $2 for a fax (Liosa 158).
Hale & Dorr for example, a Boston based law firm, uses the Internet to its advantage. If a client company requests a contract for a foreign distributor, it can send electronic mail over the Internet to a Hale & Dorr computer, where a draft document will be constructed from the text. A lawyer will then review the documents and ship them back over the Internet to the client, including a list of lawyers in the other country (Verity 81).
The ability to process orders quickly has always been an important factor in the business world, especially for mail-order companies. Traditional methods however tended to be fairly expensive. On the average it has cost mail-order companies from $10 to $15 to process a telephone or mail order, says Rodney Joffe, president of American Computer Group Inc. Over the Internet, this cost falls to $4, and it is much faster this way, too (Verity 84).
Advertising on the Internet is another way to promote products. Hyatt Hotels Corporation, for instance, advertises its hotels and resorts, and it even offers a discount for people who say they "saw it on the net" (Verity 81).
Hundreds of computer software companies now have their own Internet sites on the World Wide Web, where customers can get immediate support directly from the experts or buy and register new software online. Even magazine publishers are joining the Internet to regularly publish special Internet versions of their magazines which are read by millions of people worldwide.
The Internet attracts so many companies because they can use it as a tool for communication, marketing, advertising, sales, and customer support. It is not only faster and more efficient than using traditional methods, but it is also cheaper.
The Internet doesn't just promote the growth of businesses; it also creates new ways for Americans to get in touch with the rest of the world. It lets people expand their horizons and learn about different countries and cultures by getting insight into other people's lives across the globe. One of the many ways this can be done is through Internet Relay Chat (IRC). IRC is a multi-user chat system where people worldwide can convene on "channels" (a virtual place, usually with a topic of conversation) to talk in groups or privately. When people talk on IRC, everything they type is instantly transmitted around the world to other users who are connected at the time. Those users can then type something and respond to each other's messages.
Since starting in Finland, IRC has been used in over seventy-five countries spanning the globe. IRC is networked over much of North America, Europe, and Asia (Eddings 57). Topics of discussion on IRC are varied. Technical and political discussions are popular, especially when world events are in progress. Not all conversations need to have a topic however. Some people simply talk about their daily lives and experiences which they can share with thousands of other people. Most conversations are in English, but there are always channels in German, Japanese, and Finnish, and occasionally other languages. On the average, there are between five and six thousand people from many countries and cultures online at once.
In times when information from abroad is hard to acquire, it becomes clear how essential the Internet can be to global understanding. IRC gained international fame during the late Persian Gulf War, where updates from around the world came across the wire, and most people on IRC gathered on a single channel to hear these reports. Even during the coup attempt in Russia, people were providing live reports on the Internet about what was really going on (Eddings 48). These reports were widely circulated throughout the world over the Internet.
One startling instance that shows the importance of international communication through the Internet is taking place in Croatia. Halfway around the world, Wam Kat regularly types articles on the political situation and daily life in Zagreb, Croatia, on his computer. Kat's articles are not published in Yugoslav papers or magazines because the Croatian government owns all the media and has already prosecuted a group of journalists for treason. Kat's articles exist in cyberspace only. He transfers them to a German bulletin board system via modem, from where they are spread to computers worldwide through the Internet. "Electronic mail is the only link between me and the outside world," says Kat (Cooke 60).
Kat is not the only one who participates in this community without boundaries. During recent coup attempts and catastrophes around the world, like the earthquake in Japan for example, the Internet provided an instant, unfiltered link to the rest of the world. The Internet is changing the way people relate to one another. It is re-sorting society into "virtual communities," as one author calls it (Cooke 61). Now groups of people from a variety of cultures, religions, and countries can meet on the Internet, exchange ideas, and learn from each other, instead of being bound by geographical location.
Although the Internet already has an enormous impact on Americans right now, it will influence us even more in the near future. In 1994, the Clinton administration requested a National Information Infrastructure, which would link every business, home, school and college (Cooke 64). That is why the Clinton administration has made the building of an improved data highway the main component of a determined plan to strengthen the U.S. economy in the 21st century (Silverstein 8). This improved national computer network will be called The Information Superhighway, which is nothing but an improved version of the Internet with a much greater capability for transmitting data. "The world is on the eve of a new era. The Information Superhighway will be crucial in creating long-term economic growth and maintaining U.S. leadership in basic science, mathematics and engineering," says Vice President Al Gore, the Clinton administration's leading high-tech advocate (Silverstein 9).
The Information Superhighway will make it possible to merge today's broadcasting, 500-channel cable TV, general video, telephone, and computer industries all into one giant computer network, because it will have a much greater capacity than today's Internet. This is made possible by replacing ordinary telephone wires with fiberoptic cable, which is made up of hair-thin strands of glass and can transmit 250,000 times as much data as a conventional telephone wire (Silverstein 9).
Through the Information Superhighway, our everyday living standards will be greatly improved. While the Internet primarily moves words, and is only able to broadcast images and sound at a very slow rate, the Information Superhighway will easily allow us to transmit sound and images quickly, making real-time video conferencing and actual spoken conversations on the computer possible for people worldwide. New technology like this will introduce even more practical and convenient applications.
"Virtual Medicine" for example could help save people's lives. If it is very difficult for a patient to get to a medical specialist, surgery could be performed over the Information Superhighway, through what is called Tele-presence Surgery. To be successful, It requires video, a fine motor control, a tactile, and physical feedback. The information can be digitized and transmitted over the Information Superhighway. The doctor will wear virtual reality goggles which contain small video screens that create a 3D-image of the patient. Sensors in the doctor's gloves, which will control robot-like hands on the other end, will detect the position of the doctors fingers (Eddings 156). Since this method of surgery is intended to work between two distant sites, it makes it possible for specialized doctors at major hospitals to operate at rural clinics.
The so-called Virtual Library, which will be established once the Information Superhighway is inaugurated, will greatly enhance the amount of information that can be accessed through computers. Already, people can search the Internet for databases of newspaper clippings, lists of government offices, and Supreme Court rulings, and even get limited access to the Library of Congress through a system called MARVEL, which pulls together library catalogs from all over the world into one super catalog (Eddings 158). With the Information Superhighway, people will be able to retrieve even more massive amounts of information. In the future, instead of going to the library and checking out books, people will simply turn on their home computers, log into a library mainframe, and download large amounts of text as they wish. Especially for institutions like schools and colleges, the Information Superhighway will have great potential for the improvement of general education and the accessibility of important information.
The Internet is having a major influence on America, and its successor in the near future, the Information Superhighway, will continue to do so for a long time. By creating new ways of publicizing products and helping businesses, the Internet has strengthened and reinforced the U.S. economy. It also promotes a better global understanding by allowing millions of Americans to communicate with other people on an international level, because it provides a constant flow of instant, unbiased information for everyone at any time, anywhere. The ability to obtain information quickly and easily will become essential in the future, now that America is entering the information age. The Information Superhighway, once built, promises a good start into the new era.
Eddings, Joshua. How the Internet Works. California: Ziff-Davis Press, 1994.
Cooke, Kevin. "The whole world is talking." Nation. July 12, 1993: 60-65.
Verity, John. "The Internet." Business Week. November 14, 1994: 80-88.
Silverstein, Ken. "Paving the Infoway." Scholastic Update. September 2, 1994: 8-10.
Liosa, Patty. "Boom time on the new frontier." Fortune Autumn 1993: 153-161.
f:\12000 essays\sciences (985)\Computer\Intranets.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Abstract
These days Intranets are becoming more and more popular throughout the business world and other types of organizations. Many companies and organizations have already made this change, and many more are considering it. The advantages Intranets offer over other types of networks are many, and they come at a reduced cost for the owner. Less maintenance, less programming, and more flexibility in the network platform make the change attractive. Unlike other types of networks, Intranets allow the different machines and operating systems already at hand to operate on the same network platform. This reduces the cost of implementing this type of network, because existing machines and operating systems can still be used throughout the network without conflicting with one another. Quick access and easy programming are further considerations in favor of this type of network.
Intranets have only just started to be implemented throughout the world, and already a big change is being noticed. Companies are keeping track of all of their important information on web sites that are restricted to users who have the security codes to access them. Thanks to Internet technology, companies and other types of organizations are able to keep all of their information organized and easily accessible with the click of a button.
The Internet: how has it changed the world around us? Government, education, and business are all wrapping themselves around it. Is this because of all the information on it, its simplicity, or its quickness, where a simple point and click brings the information onto the screen?
The first intention of the Web, as it is referred to, was not to create a sea of web servers and surfers. The Department of Defense created it for its own use, to keep contact with all of its locations throughout the world, making it easier to retrieve and send information when desired. As businesses, government, and education discover the advantages of the Internet and web technologies, they are starting to implement them for internal use. This is better known as an Intranet, which represents a new model for internal information management, distribution, and collaborative computing, and offers a simple but powerful implementation of client/server computing.
Intranets are private Web-based networks, usually within corporation firewalls, that connect employees and business partners to vital corporate information. Thousands of organizations are finding that Intranets can help empower their employees through more timely and less costly information flow. They let companies speed information and software to employees and business partners. Intranets provide users with capabilities like looking up information, sending and receiving e-mail, and searching directories. They make it easy to find any piece of information or resources located on the network. Users can execute a single query that results in an organized list of all matching information across all servers throughout the enterprise and onto the Internet.
As recently as two years ago, Intranets didn't exist. Now the market for internal web servers is rapidly increasing, and Intranet technology is beginning to be used all over the world. Intranets present information in the same way to every computer. In doing so, they deliver what computer and software makers have long promised but never actually delivered: computers, software, and databases pulled together into a single system that enables users to find information wherever it resides. Intranets are only logically "internal" to an organization; physically they can span the globe, as long as access is limited to a defined community of interest.
Countless organizations are beginning to build Intranets, bringing Internet and Web technologies to bear on internal organizational problems traditionally addressed by proprietary databases, groupware, and workflow solutions. Two-thirds of all large companies either have an internal web server installed or are considering installing one.
Organizations that adopt Internet technologies on the corporate network generally move traditional paper-based information distribution on-line. Other types of information that might be put on-line include the following:
· competitive sales information
· human resources/employee benefits statements
· technical support/help desk applications
· financial
· company newsletters
· project management
These companies typically provide a corporate home page as a way for employees to find their way around the corporate Intranet site. This page may have links to internal financial information, marketing, manufacturing, human resources, and even non-business announcements. It may also have links to outside sites such as client home pages or other sites of interest.
Both the Internet and Intranets center around TCP/IP (Transmission Control Protocol/Internet Protocol) applications, which are used to transport information over both wide and local areas. Enterprise networks nowadays are a mixture of many protocols, the most popular being IPX, IP, and SNA, among others. This is beginning to change as these protocols are replaced with one, typically IP. IP can handle both LAN and WAN traffic, it is supported by the majority of computing platforms from Macintoshes to Windows NT to the largest mainframes, and on top of it all it is the protocol used by the Internet. Three application protocols built on TCP/IP are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), and HTTP (Hypertext Transfer Protocol). HTTP is a newer Internet protocol designed expressly for the rapid distribution of hypertext documents. It uses minimal network bandwidth; in addition, its simplicity makes it easier to design and implement a web server or client browser.
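A minimal sketch of the kind of HTTP exchange these protocols support: a client asks a web server for one hypertext document and gets back a status line, headers, and the page. The host name and path are placeholders, not a real intranet server.

# One HTTP GET request using Python's standard library.
import http.client

connection = http.client.HTTPConnection("intranet.example.com", 80, timeout=10)
connection.request("GET", "/hr/benefits.html")      # ask for one document
response = connection.getresponse()

print(response.status, response.reason)             # e.g. 200 OK
print(response.getheader("Content-Type"))           # e.g. text/html
page = response.read()                              # the HTML document itself
connection.close()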
Once a server is set up, almost anybody can create web pages. Everyone from top managers to line employees can create web pages with HTML, the World Wide Web's universal document format. Converting documents into HTML format is getting easier and easier with the use of new programs that do all the work for the user. This is considered another big advantage of using web technology, because fewer programmers are required to maintain it, thereby reducing a company's expenses. Intranets allow programmers to make one copy of any information and run it anywhere, even across both client and server platforms.
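A small sketch of that kind of conversion: plain paragraphs wrapped in the basic tags a browser needs. Real conversion tools do far more, but the core idea is just this mechanical; the document text here is invented.

# Convert a plain-text document into a minimal HTML page.
from html import escape

def text_to_html(title, text):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    body = "\n".join(f"<p>{escape(p)}</p>" for p in paragraphs)
    return f"<html><head><title>{escape(title)}</title></head>\n<body>\n{body}\n</body></html>"

report = "Sales rose in the first quarter.\n\nThe full figures are attached."
print(text_to_html("Quarterly Update", report))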
But why is this internal web so popular? There are typically three main reasons. First, internal webs contain text and non-text items, for example recorded speech, graphics, and even video clips. This allows users to listen to speeches, watch video clips, and look at graphics ranging from pictures to charts. Second, web sites can contain all types of information, depending on their content, author, and the effort put into them. Companies can make pages covering employee payroll, company sales, client contracts, and much more, without limitation. Finally, each Intranet web server can be cross-linked to others by means of hypertext links, whether they are located around the world or just down the street. It is this ability that gives the Intranet its power and its attraction for many corporations.
Intranets are easy to implement. Unlike most other types of networks, Intranets don't require the replacement of all existing systems, databases, and applications; they embrace the infrastructure investments already in place, including desktop computers, servers, mainframes, databases, applications, and networks. Proprietary network systems often do not let an organization mix different types of machines or operating systems on the same platform; for example, on many such LANs or WANs one could not easily use Macintosh computers on the same network as IBM PCs, or run different operating systems on different computers. Intranets, on the other hand, allow different types of machines and operating systems to be used on the same platform.
Security is also a big factor for Intranets. Protecting information on a private network is critical. Intranet security services provide ways for resources to be protected against unauthorized users, for communications to be encrypted and authenticated, and for the integrity of information to be verified. Corporations can issue and manage a security key infrastructure to give their employees the ability to conduct company business securely across the network.
The full potential of Intranet technologies is far from being realized. Over the next few years, Intranets will be enhanced with new services that will make them a top priority for any organization. Many companies and organizations are already changing to Intranets, and as they become more and more popular, many more will convert their LANs and WANs because of all the benefits Intranets offer. Money is a big factor when deciding to change from an existing network, but with Intranets this usually expensive change is drastically reduced, making it very attractive for companies to consider.
f:\12000 essays\sciences (985)\Computer\Introduction to Computers Question Sheet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Tutorial Question Sheet for the IBM PS/2
Name/Date:___________________________________
Section One: Short Answer Questions
1. What is the maximum number of letters/characters that can be in a filename?
________________________________________________________________________________________________________________________________________________
2. What are the main ways of exiting a program?
________________________________________________________________________________________________________________________________________________
3. What is the USUAL capacity of a standard, high density 3.5", formatted diskette?
________________________________________________________________________
4. Give three (3) examples of removable storage media.
________________________________________________________________________________________________________________________________________________
5. Give two (2) ways of adding peripherals to your computer.
________________________________________________________________________________________________________________________________________________
6. Give three (3) examples of output devices.
7. Give three (3) examples of input devices.
________________________________________________________________________
8. What is an OS?
________________________________________________________________________________________________________________________________________________
9. What does LAN stand for?
________________________________________________________________________
10. BONUS QUESTION:
WHAT DOES "MODEM" STAND FOR?
________________________________________________________________________
Section Two: The Computer, Inside and Out: Short Answer
1. What is the "brain" of your computer?
________________________________________________________________________
2. Define RAM.
3. Define ROM.
4. Give two (2) major areas of the keyboard.
5. What purpose does the microprocessor serve?
________________________________________________________________________
6. Give two (2) examples of memory.
________________________________________________________________________
7. Hardware + Software =___________________________________________________
8. What purpose do the cursor keys serve?
________________________________________________________________________
9. What purpose does the NUM LOCK key serve?
________________________________________________________________________
10. What is Hardware?
________________________________________________________________________
Section Three: True or False
Circle the correct letter.
1. T or F: Computers are perfect.
2. T or F: Computers can think for themselves.
3. T or F: Computers are going to take over all jobs previously done by humans.
4. T or F: Some computers become obsolete quickly.
5. T or F: Computers can process data quickly and efficiently.
6. T or F: Computers will eventually turn against computers because they are so smart.
7. T or F: Computers are used only in businesses for hard tasks.
8. T or F: Piracy of computer software is okay, everybody does it.
9. T or F: The microcomputer is the smallest of these: micro, mini, mainframe.
10. T or F: A computer is a high speed machine which performs arithmetic, makes comparisons, and remembers what it has done.
11. T or F: A mainframe computer is a computer which keeps track of all physical characteristics of the world and prints the results daily.
Section Four: Multiple Choice
1. Analog computers are:
a. machines capable of following instructions step by step with calculations
b. devices which measure physical quantities such as temperature and air pressure
c. books
d. toothbrushes
e. both a+b
2. Software is:
a. all instructions that make a typewriter work
b. all instructions that tell a person what to wear
c. all instructions that make a computer work in a required manner
d. both a+b+c
e. there is no E
3. Hardware is:
a. all the electronic or mechanical parts used to give instructions to people
b. all the electronic or mechanical parts that make up a computer
c. all the electronic or mechanical parts that make a computer think for itself
d. all of the above
e. none of the above
4. A computer is:
a. human
b. a tool
c. a machine
d. a hybrid
e. b+c
f. c+e
5. A computer has 5 standard features:
a. input, output, hardware, software, beachware
b. input, output, printout, control, modem
c. input, output, control, arithmetic logic, storage
d. input and output, CRT, t.v., modem
6. 2 examples of input are:
a. keyboard, printer
b. printer, mouse
c. scanner, monitor
d. joystick, scanner
7. Computer processing is:
a. work a food processor does
b. work the microcomputer does
c. work that the printer does
8. What is an input/output device in this list?
a. monitor
b. keyboard
c. CPU
d. disk drive
e. a+c
9. When data is being read, a copy is sent to:
a. the C.U.
b. the B.A.U.
c. the A.L.U.
d. the C.L.U.
e. the B.U.
Section Five: Software
1. What type of software would you use for keeping track of finance records?
2. What type of software would you use to connect to another computer with a modem?
________________________________________________________________________
3. What type of software would you use to keep track of a database?
________________________________________________________________________
4. What would you use for typing a professional letter?
________________________________________________________________________
5. What kind of software would you use for engineering?
________________________________________________________________________
6. What type of software would you use to create a video game on your PC?
________________________________________________________________________
7. What is multitasking?
8. What type of software would you use for faxing a document with a fax modem?
________________________________________________________________________
9. What type of software would you use to create a document filled with pictures and text?
________________________________________________________________________
10. What does a math co-processor do?
Computer Tutorial Question Sheet for the IBM PS/2
Name/Date:___________________________________
Section One: Misc. Short Answer Questions
1. What is the maximum number of letters/characters that can be in a filename?
There can be 8 characters, a separator (.), and an optional 3-character extension, for a total of 12.
2. What are the main ways of exiting a program?
Either clicking on the exit button with the mouse, pressing ESC, ALT-X or ALT-Q.
3. What is the USUAL capacity of a standard, high density 3.5", formatted diskette?
Approximately 1.44 megabytes.
4. Give three (3) examples of removable storage media.
Tape backup drive, removable hard drive (e.g. IOMEGA JAZ, SYQUEST EZDRIVE), CD-ROM, floppy disks, DAT tapes, etc.
5. Give two (2) ways of adding peripherals to your computer.
Plug to an external port outside the computer, or take apart computer case and add inside expansion slot.
6. Give three (3) examples of output devices.
Monitor, printer, speakers, web page etc.
7. Give three (3) examples of input devices.
Keyboard, digital computer camera, retina scanner, flatbed scanner etc.
8. What is an OS?
OS stands for OPERATING SYSTEM, which helps you navigate through, and manage files, computer resources and software on your computer.
9. What does LAN stand for?
Local area network
10. BONUS QUESTION:
WHAT DOES "MODEM" STAND FOR?
MODEM stands for modulator/demodulator.
Section Two: The Computer, Inside and Out: Short Answer
1. What is the "brain" of your computer?
The CPU/MPU or Central Processing Unit.
2. Define RAM.
Random Access Memory
3. Define ROM.
Read Only Memory
4. Give two (2) major areas of the keyboard.
Typing keys, computer keys, function keys, numeric keypad etc.
5. What purpose does the microprocessor serve?
Accepts your requests and executes them.
6. Give two (2) examples of memory.
RAM+ROM
7. Hardware + Software = FIRMWARE
8. What purpose do the cursor keys serve?
To navigate through certain programs/to select certain areas on screen.
9. What purpose does the NUM LOCK key serve?
To change the numeric keypad from cursor keys to numerals.
10. What is Hardware?
Hardware is any of the components/devices/peripherals which make up the computer.
Section Three: True or False
Circle the correct letter.
1. T or F: Computers are perfect.
2. T or F: Computers can think for themselves.
3. T or F: Computers are going to take over all jobs previously done by humans.
4. T or F: Some computers become obsolete quickly.
5. T or F: Computers can process data quickly and efficiently.
6. T or F: Computers will eventually turn against computers because they are so smart.
7. T or F: Computers are used only in businesses for hard tasks.
8. T or F: Piracy of computer software is okay, everybody does it.
9. T or F: The microcomputer is the smallest of these: micro, mini, mainframe.
10. T or F: A computer is a high speed machine which performs arithmetic, makes comparisons, and remembers what it has done.
11. T or F: A mainframe computer is a computer which keeps track of all physical characteristics of the world and prints the results daily.
Section Four: Multiple Choice
1. Analog computers are:
a. machines capable of following instructions step by step with calculations
b. devices which measure physical quantities such as temperature and air pressure
c. books
d. toothbrushes
e. both a+b
2. Software is:
a. all instructions that make a typewriter work
b. all instructions that tell a person what to wear
c. all instructions that make a computer work in a required manner
d. both a+b+c
e. there is no E
3. Hardware is:
a. all the electronic or mechanical parts used to give instructions to people
b. all the electronic or mechanical parts that make up a computer
c. all the electronic or mechanical parts that make a computer think for itself
d. all of the above
e. none of the above
4. A computer is:
a. human
b. a tool
c. a machine
d. a hybrid
e. b+c
f. c+e
5. A computer has 5 standard features:
a. input, output, hardware, software, beachware
b. input, output, printout, control, modem
c. input, output, control, arithmetic logic, storage
d. input and output, CRT, t.v., modem
6. 2 examples of input are:
a. keyboard, printer
b. printer, mouse
c. scanner, monitor
d. joystick, scanner
7. Computer processing is:
a. work a food processor does
b. work the microcomputer does
c. work that the printer does
8. What is an input/output device in this list?
a. monitor
b. keyboard
c. CPU
d. disk drive
e. a+c
9. When data is being read, a copy is sent to:
a. the C.U.
b. the B.A.U.
c. the A.L.U.
d. the C.L.U.
e. the B.U.
Section Five: Software
1. What type of software would you use for keeping track of finance records?
Spreadsheet software
2. What type of software would you use to connect to another computer with a modem?
Communications software
3. What type of software would you use to keep track of a database?
Database software!
4. What would you use for typing a professional letter?
Word-processing software
5. What kind of software would you use for engineering?
CAD software such as AUTOCAD
6. What type of software would you use to create a video game on your PC?
Programming software
7. What is multitasking?
Multitasking is the ability to run multiple applications simultaneously.
8. What type of software would you use for faxing a document with a fax modem?
Communications/fax software, yet again.
9. What type of software would you use to create a document filled with pictures and text?
Desktop publishing software
10. What does a math co-processor do?
It aids the CPU in performing complex mathematical tasks, helping to speed things up.
f:\12000 essays\sciences (985)\Computer\Is your information safe .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Is Your Information Safe?
He doesn't wear a stocking mask over his face, and he doesn't break a window to get into your house. He doesn't hold a gun to your head, nor does he ransack your personal possessions. Just the same, he's a thief. This is a thief you will never see, and you may not even realize right away that he has robbed you. The thief is a computer hacker, and he "enters" your home via your computer, accessing personal information -- such as credit card numbers -- which he could then use without your knowledge -- at least until you get that next credit card statement. Richard Bernes, supervisor of the FBI's Hi-Tech squad in San Jose, California, calls the Internet "the unlocked window in cyberspace through which thieves crawl" (Erickson 1). There seems to be an unlimited potential for theft of credit card numbers, bank statements and other financial and personal information transmitted over the Internet.
It's hard to imagine that anyone in today's technologically oriented world could function without computers. Personal computers are linked to business computers and financial networks, and all are linked
together via the Internet or other networks. More than a hundred million electronic messages travel through cyberspace every day, and every piece of information stored in a computer is vulnerable to attack (Icove-Seger-VonStorch
1). Yesterday's bank robbers have become today's computer hackers. They can walk away from a computer crime with millions of virtual dollars (in the form of information they can use or sell for an enormous profit). Walking away is precisely what they do. The National Computer Crimes Squad estimates that 85 to 97 percent of the time, theft of information from computers is not even detected (Icove-Seger-VonStorch 1).
Home computer users are vulnerable not only to theft of credit card information and login IDs; their files, disks, and other computer equipment and data are also subject to attack. Even if this information is not confidential, having to reconstruct what has been destroyed by a hacker can take days (Icove-Seger-VonStorch 1). William Cheswick, a network-security specialist at AT&T Bell Labs, says the home computers that use the Internet are singularly vulnerable to attack. "The Internet is like a vault with a screen door on the back," says Cheswick. "I don't need jackhammers and atom
bombs to get in when I can walk in through the door" (Quittner 44).
The use of the Internet has become one of the most popular ways to communicate. It's easy, fun, and you don't have to leave your home to do it. For many users, the advantage of not having to take the time to drive to the bank is so great that they never consider whether the information they store or transmit might not be safe. Many computer security professionals continue to speak out on how the lack of Internet security will result in a significant increase in computer fraud, and easier access to information previously considered private and confidential (Regan 26).
Gregory Regan, writing for Credit World, says that only certain types of tasks and features can be performed securely. Electronic banking is not one of them. "I would not recommend performing commercial business transactions," he advises, "or sending confidential information across networks attached to the Internet" (26).
In the business world, computer security can be just as easily compromised. More than a third of major U.S. corporations reported doing business over the Internet -- up from 26 percent a year ago -- but a quarter of them say
they've suffered attempted break-ins and losses, either in stolen data or cash (Denning 08A).
Dr. Gregory E. Shannon, president of InfoStructure Services and Technologies Inc., says that improving computer security is essential. Newly released computer tools are intended to help keep PC information secure, but they can just as easily be used by computer hackers, since they will be released as freeware (available, and free, to anyone) on the Internet (Cambridge 1). These freely distributed tools could make it far easier for hackers to break into systems. Presently, if a hacker is trying to break into a system, he has to keep probing a network for weaknesses. Before long, hackers will be able to point one of these freeware tools at a network and let it automatically probe for security holes, without any interaction on their part (Cambridge 1). Hackers, it seems, have no trouble staying ahead of the computer security experts.
Online service providers, such as America Online, CompuServe and Prodigy, are effective in providing additional protection for computer information. First of all, you need to use a "secret password" -- a customer ID that is typed in when you log on to the network. Then you can only send information, and retrieve your own e-mail,
through your own user access. Sometimes the service itself is even locked out of certain information. CompuServe, for example, with its 800-plus private bulletin boards, can't even read what's on them without gaining prior permission from the company paying for the service (Flanagan 34).
Perhaps in an attempt to show how secure they are, these information services will give out very little information about security itself. They all take measures to protect private information, and give frequent warnings to
new users about the danger in giving out a password, but there is also danger in making the service easy to use for the general public -- anything that is made easy enough for the novice computer user would not present much of a challenge for a computer hacker. Still, there is a certain amount of protection in using a service provider -- doing so is roughly equivalent to locking what might be an open door (Flanagan 34).
The latest weak spot that has been discovered is a flaw in the World Wide Web. The Web is the fastest-growing zone within the Internet, the area where most home computer users travel, as it's attractive and easy to use. According to an advisory issued on the Internet by a programmer in Germany, there is a "hole" in the software that runs most Web sites (Quittner 44). This entry point will provide an intruder
with access to any and all information, allowing him to do anything the owners of the site can do. Network-security
specialist Cheswick points out that most of the Web sites use software that puts them at risk. With more and more home computer users setting up their own home pages and Web sites, this is just one more way a hacker can gain access to personal information (Quittner 44).
Credit bureaus are aware of how financial information can be used or changed by computer hackers, which has a serious impact on their customers. Loans can be made with
false information (obtained by hackers from an unsuspecting computer user's database), and information can be changed for purposes of deceit, harassment or even blackmail. These things occur daily in the financial services industry, and the use of the Internet has only complicated how an organization or private individual keeps information private, confidential and, most importantly, correct (Regan 26).
Still, there are some measures that can be taken to help protect your information. If you use a virus protection program before downloading any files from the Internet, there is less of a chance a hacker can crack your system. Login passwords should be changed frequently (write
them down so you don't forget, but store them in a secure place), and they should never contain words or names that are easily guessed. It may be easier for you to remember your password if you use your son's name, but it's also easier for the hacker to guess it. Passwords should always be strictly private -- never tell anyone else what it is (Regan 26).
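As a rough illustration of that advice (not something described in the sources cited), a short program can generate a password that contains no words or names at all; the length and character set below are arbitrary choices, and the snippet uses a modern Python library.

    # Sketch: generate a random password with no dictionary words or names.
    # Length and alphabet are arbitrary; store the result somewhere secure.
    import secrets
    import string

    def make_password(length=12):
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(make_password())   # e.g. "k3Jq9xTf0bLm"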
Evaluate products for their security features before you buy any tool to access the Internet or service providers. Remember to change the default system password
-- the one you are initially given to set up the network on your computer (Regan 26).
Finally, and most importantly, it's best to realize that a computer system, regardless of the amount of precaution and protection you take, is never completely protected from outsiders. As protection software becomes more sophisticated, so do the hackers who want to break into your system. It's a good idea not to leave the silver on the dining table when you don't know for sure that a thief can't crawl through your window.
Works Cited
Cambridge Publishing Inc. "PC Security: Internet Security Tool to Deter Hackers." Cambridge Work-Group Jan. 1995: 1.
Denning, Dorothy E. "Privacy takes another hit from new computer rules." USA Today 12 Dec. 1996: 08A.
Erickson, Jim. "Crime on the Internet A Growing Concern." Seattle Post Intelligencer 15 Nov. 1995. http://technoculture.mira.net.au/hypermail/0032.html
Flanagan, Patrick. "Demystifying the information highway." Management Review 1 May 1994: 34.
Icove, David; Seger, Karl; VonStorch, William. "Fighting Computer Crime." http://www.pilgrim.umass.edu/pub/security/crime1.html
Quittner, Joshua. "Technology Cracks in the Net." Time 27 Feb. 1995: 44.
Regan, Gregory. "Securely accessing the Internet & the World Wide Web: Good or evil?" Credit World 85 (1 Oct. 1996): 26.
f:\12000 essays\sciences (985)\Computer\ISDN vs Cable modems.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1.0 Introduction
The Internet is a network of networks that interconnects computers around
the world, supporting both business and residential users. In 1994, a
multimedia Internet application known as the World Wide Web became
popular. The higher bandwidth needs of this application have highlighted
the limited Internet access speeds available to residential users. Even at 28.8
Kilobits per second (Kbps), the fastest residential access commonly
available at the time of this writing, the transfer of graphical images can be
frustratingly slow.
This report examines two enhancements to existing residential
communications infrastructure: Integrated Services Digital Network (ISDN),
and cable television networks upgraded to pass bi-directional digital traffic
(Cable Modems). It analyzes the potential of each enhancement to deliver
Internet access to residential users. It validates the hypothesis that upgraded
cable networks can deliver residential Internet access more cost-effectively,
while offering a broader range of services.
The research for this report consisted of case studies of two commercial
deployments of residential Internet access, each introduced in the spring of
1994:
· Continental Cablevision and Performance Systems International (PSI)
jointly developed PSICable, an Internet access service deployed over
upgraded cable plant in Cambridge, Massachusetts;
· Internex, Inc. began selling Internet access over ISDN telephone
circuits available from Pacific Bell. Internex's customers are residences and
small businesses in the "Silicon Valley" area south of San Francisco,
California.
2.0 The Internet
When a home is connected to the Internet, residential communications
infrastructure serves as the "last mile" of the connection between the
home computer and the rest of the computers on the Internet. This
section describes the Internet technology involved in that connection.
This section does not discuss other aspects of Internet technology in
detail; that is well done elsewhere. Rather, it focuses on the services
that need to be provided for home computer users to connect to the
Internet.
2.1 Evaluation criteria
ISDN and upgraded cable networks will each provide different functionality
(e.g. type and speed of access) and cost profiles for Internet connections. It
might seem simple enough to figure out which option can provide the needed
level of service for the least cost, and declare that option "better." A key
problem with this approach is that it is difficult to define exactly the needed
level of service for an Internet connection. The requirements depend on
the applications being run over the connection, but these applications are
constantly changing. As a result, so are the costs of meeting the applications'
requirements.
Until about twenty years ago, human conversation was by far the dominant
application running on the telephone network. The network was
consequently optimized to provide the type and quality of service needed for
conversation. Telephone traffic engineers measured aggregate statistical
conversational patterns and sized telephone networks accordingly.
Telephony's well-defined and stable service requirements are reflected in the
"3-3-3" rule of thumb relied on by traffic engineers: the average voice call
lasts three minutes, the user makes an average of three call attempts during
the peak busy hour, and the call travels over a bidirectional 3 KHz channel.
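As a worked example (an illustration, not part of the original studies), the 3-3-3 rule translates into an offered load per subscriber that traffic engineers can use to size trunk groups; a small Python sketch of the arithmetic:

    # Worked example of the 3-3-3 rule of thumb (illustrative only).
    attempts_per_busy_hour = 3     # call attempts per subscriber in the busy hour
    minutes_per_call = 3           # average call length

    # Offered load in erlangs = call-minutes of traffic per 60-minute hour.
    erlangs_per_subscriber = attempts_per_busy_hour * minutes_per_call / 60
    print(erlangs_per_subscriber)          # 0.15: each line is busy about 15% of the hour

    # A neighborhood of 1000 subscribers therefore offers about 150 erlangs,
    # which is the figure the trunk group would be engineered to carry.
    print(1000 * erlangs_per_subscriber)   # 150.0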
In contrast, data communications are far more difficult to characterize. Data
transmissions are generated by computer applications. Not only do existing
applications change frequently (e.g. because of software upgrades), but
entirely new categories, such as Web browsers, come into being quickly,
adding different levels and patterns of load to existing networks.
Researchers can barely measure these patterns as quickly as they are
generated, let alone plan future network capacity based on them.
The one generalization that does emerge from studies of both local and wide-
area data traffic over the years is that computer traffic is bursty. It does not
flow in constant streams; rather, "the level of traffic varies widely over
almost any measurement time scale" (Fowler and Leland, 1991). Dynamic
bandwidth allocations are therefore preferred for data traffic, since static
allocations waste unused resources and limit the flexibility to absorb bursts
of traffic.
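A toy calculation (with invented numbers, not drawn from the case studies) shows why dynamic allocation suits bursty sources better than static allocation:

    # Toy comparison of static vs. dynamic bandwidth allocation (numbers invented).
    link_capacity_kbps = 10_000    # a shared 10 Mbps link
    subscribers = 10
    duty_cycle = 0.10              # each subscriber transmits only 10% of the time

    # Static allocation: each subscriber owns a fixed 1 Mbps slice, so a burst
    # can never exceed 1 Mbps even while 90% of the link sits idle.
    static_peak_kbps = link_capacity_kbps / subscribers

    # Dynamic sharing: on average only about one subscriber is active at a time,
    # so an active subscriber can often burst at close to the full link rate.
    expected_active = subscribers * duty_cycle
    dynamic_peak_kbps = link_capacity_kbps / max(expected_active, 1)

    print(static_peak_kbps, dynamic_peak_kbps)   # 1000.0 vs 10000.0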
This requirement addresses traffic patterns, but it says nothing about the
absolute level of load. How can we evaluate a system when we never know
how much capacity is enough? In the personal computing industry, this
problem is solved by defining "enough" to be "however much I can afford
today," and relying on continuous price-performance improvements in digital
technology to increase that level in the near future. Since both of the
infrastructure upgrade options rely heavily on digital technology, another
criteria for evaluation is the extent to which rapidly advancing technology
can be immediately reflected in improved service offerings.
Cable networks satisfy these evaluation criteria more effectively than
telephone networks because:
· Coaxial cable is a higher quality transmission medium than twisted
copper wire pairs of the same length. Therefore, fewer wires, and
consequently fewer pieces of associated equipment, need to be
installed and maintained to provide the same level of aggregate
bandwidth to a neighborhood. The result should be cost savings and
easier upgrades.
· Cable's shared bandwidth approach is more flexible at allocating any
particular level of bandwidth among a group of subscribers. Since it
does not need to rely as much on forecasts of which subscribers will
sign up for the service, the cable architecture can adapt more readily
to the actual demand that materializes.
· Telephony's dedication of bandwidth to individual customers limits
the peak (i.e. burst) data rate that can be provided cost-effectively.
In contrast, the dynamic sharing enabled by cable's bus architecture
can, if the statistical aggregation properties of neighborhood traffic
cooperate, give a customer access to a faster peak data rate than the
expected average data rate.
2.2 Why focus on Internet access?
Internet access has several desirable properties as an application to
consider for exercising residential infrastructure. Internet technology is
based on a peer-to-peer model of communications. Internet usage
encompasses a wide mix of applications, including low- and high-
bandwidth as well as asynchronous and real-time communications.
Different Internet applications may create varying degrees of
symmetrical (both to and from the home) and asymmetrical traffic
flows. Supporting all of these properties poses a challenge for existing
residential communications infrastructures.
Internet access differs from the future services modeled by other studies
described below in that it is a real application today, with growing
demand. Aside from creating pragmatic interest in the topic, this factor
also makes it possible to perform case studies of real deployments.
Finally, the Internet's organization as an "Open Data Network" (in the
language of (Computer Science and Telecommunications Board of the
National Research Council, 1994)) makes it a service worthy of study
from a policy perspective. The Internet culture's expectation of
interconnection and cooperation among competing organizations may
clash with the monopoly-oriented cultures of traditional infrastructure
organizations, exposing policy issues. In addition, the Internet's status
as a public data network may make Internet access a service worth
encouraging for the public good. Therefore, analysis of costs to provide
this service may provide useful input to future policy debates.
3.0 Technologies
This chapter reviews the present state and technical evolution of
residential cable network infrastructure. It then discusses a topic not
covered much in the literature, namely, how this infrastructure can be
used to provide Internet access. It concludes with a qualitative
evaluation of the advantages and disadvantages of cable-based Internet
access. While ISDN is extensively described in the literature, its use as
an Internet access medium is less well-documented. This chapter
briefly reviews local telephone network technology, including ISDN
and future evolutionary technologies. It concludes with a qualitative
evaluation of the advantages and disadvantages of ISDN-based Internet
access.
3.1 Cable Technology
Residential cable TV networks follow the tree and branch architecture.
In each community, a head end is installed to receive satellite and
traditional over-the-air broadcast television signals. These signals are
then carried to subscribers' homes over coaxial cable that runs from the
head end throughout the community.
Figure 3.1: Coaxial cable tree-and-branch topology
To achieve geographical coverage of the community, the cables
emanating from the head end are split (or "branched") into multiple
cables. When the cable is physically split, a portion of the signal power
is split off to send down the branch. The signal content, however, is not
split: the same set of TV channels reach every subscriber in the
community. The network thus follows a logical bus architecture. With
this architecture, all channels reach every subscriber all the time,
whether or not the subscriber's TV is on. Just as an ordinary television
includes a tuner to select the over-the-air channel the viewer wishes to
watch, the subscriber's cable equipment includes a tuner to select
among all the channels received over the cable.
3.1.1. Technological evolution
The development of fiber-optic transmission technology has led cable
network developers to shift from the purely coaxial tree-and-branch
architecture to an approach referred to as Hybrid Fiber and Coax (HFC)
networks. Transmission over fiber-optic cable has two main advantages
over coaxial cable:
· A wider range of frequencies can be sent over the fiber, increasing
the bandwidth available for transmission;
· Signals can be transmitted greater distances without amplification.
The main disadvantage of fiber is that the optical components required
to send and receive data over it are expensive. Because lasers are still
too expensive to deploy to each subscriber, network developers have
adopted an intermediate Fiber to the Neighborhood (FTTN) approach.
Figure 3.3: Fiber to the Neighborhood (FTTN) architecture
Various locations along the existing cable are selected as sites for
neighborhood nodes. One or more fiber-optic cables are then run from
the head end to each neighborhood node. At the head end, the signal is
converted from electrical to optical form and transmitted via laser over
the fiber. At the neighborhood node, the signal is received via laser,
converted back from optical to electronic form, and transmitted to the
subscriber over the neighborhood's coaxial tree and branch network.
FTTN has proved to be an appealing architecture for telephone
companies as well as cable operators. Not only Continental
Cablevision and Time Warner, but also Pacific Bell and Southern New
England Telephone have announced plans to build FTTN networks.
Fiber to the neighborhood is one stage in a longer-range evolution of
the cable plant. These longer-term changes are not necessary to provide
Internet service today, but they might affect aspects of how Internet
service is provided in the future.
3.2 ISDN Technology
Unlike cable TV networks, which were built to provide only local
redistribution of television programming, telephone networks provide
switched, global connectivity: any telephone subscriber can call any
other telephone subscriber anywhere else in the world. A call placed
from a home travels first to the closest telephone company Central
Office (CO) switch. The CO switch routes the call to the destination
subscriber, who may be served by the same CO switch, another CO
switch in the same local area, or a CO switch reached through a long-
distance network.
Figure 4.1: The telephone network
The portion of the telephone network that connects the subscriber to
the closest CO switch is referred to as the local loop. Since all calls
enter and exit the network via the local loop, the nature of the local
connection directly affects the type of service a user gets from the
global telephone network.
With a separate pair of wires to serve each subscriber, the local
telephone network follows a logical star architecture. Since a Central
Office typically serves thousands of subscribers, it would be unwieldy
to string wires individually to each home. Instead, the wire pairs are
aggregated into groups, the largest of which are feeder cables. At
intervals along the feeder portion of the loop, junction boxes are placed.
In a junction box, wire pairs from feeder cables are spliced to wire pairs
in distribution cables that run into neighborhoods. At each subscriber
location, a drop wire pair (or pairs, if the subscriber has more than one
line) is spliced into the distribution cable.
Since distribution cables are either buried or aerial, they are disruptive
and expensive to change. Consequently, a distribution cable usually
contains as many wire pairs as a neighborhood might ever need, in
advance of actual demand.
Implementation of ISDN is hampered by the irregularity of the local
loop plant. Referring back to Figure 4.3, it is apparent that loops are of
different lengths, depending on the subscriber's distance from the
Central Office. ISDN cannot be provided over loops with loading coils
or loops longer than 18,000 feet (5.5 km).
4.0 Internet Access
This section contrasts access via the cable plant with access via the
local telephone network.
4.1 Internet Access Via Cable
The key question in providing residential Internet access is what kind of
network technology to use to connect the customer to the Internet. For
residential Internet delivered over the cable plant, the answer is
broadband LAN technology. This technology allows transmission of
digital data over one or more of the 6 MHz channels of a CATV cable.
Since video and audio signals can also be transmitted over other
channels of the same cable, broadband LAN technology can co-exist
with currently existing services.
Bandwidth
The speed of a cable LAN is described by the bit rate of the modems
used to send data over it. As this technology improves, cable LAN
speeds may change, but at the time of this writing, cable modems range
in speed from 500 Kbps to 10 Mbps, or roughly 17 to 340 times the bit
rate of the familiar 28.8 Kbps telephone modem. This speed represents
the peak rate at which a subscriber can send and receive data, during
the periods of time when the medium is allocated to that subscriber. It
does not imply that every subscriber can transfer data at that rate
simultaneously. The effective average bandwidth seen by each
subscriber depends on how busy the LAN is. Therefore, a cable LAN
will appear to provide a variable bandwidth connection to the Internet.
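The arithmetic behind that comparison, plus the effect of sharing, can be sketched as follows (the subscriber counts are invented for illustration):

    # Ratio of cable modem speeds to a 28.8 kbps telephone modem.
    dialup_kbps = 28.8
    print(500 / dialup_kbps)       # ~17x for a 500 kbps cable modem
    print(10_000 / dialup_kbps)    # ~347x for a 10 Mbps cable modem

    # Because the cable channel is shared, the average rate a subscriber sees
    # depends on how many neighbours are transferring at the same moment.
    def effective_average_kbps(peak_kbps, simultaneously_active):
        return peak_kbps / max(simultaneously_active, 1)

    print(effective_average_kbps(10_000, 25))   # 400 kbps with 25 active users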
Full-time connections
Cable LAN bandwidth is allocated dynamically to a subscriber only
when he has traffic to send. When he is not transferring traffic, he does
not consume transmission resources. Consequently, he can always be
connected to the Internet Point of Presence without requiring an
expensive dedication of transmission resources.
4.2 Internet Access Via Telephone Company
In contrast to the shared-bus architecture of a cable LAN, the telephone
network requires the residential Internet provider to maintain multiple
connection ports in order to serve multiple customers simultaneously.
Thus, the residential Internet provider faces problems of multiplexing
and concentration of individual subscriber lines very similar to those
faced in telephone Central Offices.
The point-to-point telephone network gives the residential Internet
provider an architecture to work with that is fundamentally different
from the cable plant. Instead of multiplexing the use of LAN
transmission bandwidth as it is needed, subscribers multiplex the use of
dedicated connections to the Internet provider over much longer time
intervals. As with ordinary phone calls, subscribers are allocated fixed
amounts of bandwidth for the duration of the connection. Each
subscriber that succeeds in becoming active (i.e. getting connected to
the residential Internet provider instead of getting a busy signal) is
guaranteed a particular level of bandwidth until hanging up the call.
Bandwidth
Although the predictability of this connection-oriented approach is
appealing, its major disadvantage is the limited level of bandwidth that
can be economically dedicated to each customer. At most, an ISDN
line can deliver 144 Kbps to a subscriber, roughly four times the
bandwidth available with POTS. This rate is both the average and the
peak data rate. A subscriber needing to burst data quickly, for example
to transfer a large file or engage in a video conference, may prefer a
shared-bandwidth architecture, such as a cable LAN, that allows a
higher peak data rate for each individual subscriber. A subscriber who
needs a full-time connection requires a dedicated port on a terminal
server. This is an expensive waste of resources when the subscriber is
connected but not transferring data.
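A quick check of the "roughly four times" figure, on the assumption (not stated in the text) that it compares ISDN's two 64 Kbps B channels with a fast analog modem:

    # ISDN Basic Rate Interface: 2 B channels + 1 D channel.
    b_channel_kbps = 64
    d_channel_kbps = 16
    isdn_total_kbps = 2 * b_channel_kbps + d_channel_kbps   # 144 kbps
    isdn_usable_kbps = 2 * b_channel_kbps                   # 128 kbps for user data

    modem_kbps = 28.8
    print(isdn_total_kbps)                    # 144
    print(isdn_usable_kbps / modem_kbps)      # ~4.4 times a 28.8 kbps modem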
5.0 Cost
Cable-based Internet access can provide the same average bandwidth
and higher peak bandwidth more economically than ISDN. For
example, 500 Kbps Internet access over cable can provide the same
average bandwidth and four times the peak bandwidth of ISDN access
for less than half the cost per subscriber. In the technology reference
model of the case study, the 4 Mbps cable service is targeted at
organizations. According to recent benchmarks, the 4 Mbps cable
service can provide the same average bandwidth and thirty-two times
the peak bandwidth of ISDN for only 20% more cost per subscriber.
When this reference model is altered to target 4 Mbps service to
individuals instead of organizations, 4 Mbps cable access costs 40%
less per subscriber than ISDN. The economy of the cable-based
approach is most evident when comparing the per-subscriber cost per
bit of peak bandwidth: $0.30 for Individual 4 Mbps, $0.60 for
Organizational 4 Mbps, and $2 for the 500 Kbps cable services, versus
close to $16 for ISDN. However, the potential penetration of cable-
based access is constrained in many cases (especially for the 500 Kbps
service) by limited upstream channel bandwidth. While the penetration
limits are quite sensitive to several of the input parameter assumptions,
the cost per subscriber is surprisingly less so.
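The shape of that comparison can be reproduced by dividing a per-subscriber cost by the peak bandwidth delivered. The dollar figures below are placeholders chosen so that the divisions land near the figures quoted above, on the reading that those figures are dollars per kbps of peak bandwidth; they are not numbers taken from the case studies.

    # Illustrative cost-per-peak-bandwidth calculation (costs are placeholders).
    def cost_per_kbps(cost_per_subscriber_dollars, peak_kbps):
        return cost_per_subscriber_dollars / peak_kbps

    options = {
        "ISDN (128 kbps usable)":    (2000, 128),
        "Cable, 500 kbps":           (1000, 500),
        "Cable, 4 Mbps individual":  (1200, 4000),
    }
    for name, (cost, peak) in options.items():
        print(name, round(cost_per_kbps(cost, peak), 2), "dollars per kbps of peak bandwidth")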
Because the models break down the costs of each approach into their
separate components, they also provide insight into the match between
what follows naturally from the technology and how existing business
entities are organized. For example, the models show that subscriber
equipment is the most significant component of average cost. When
subscribers are willing to pay for their own equipment, the access
provider's capital costs are low. This business model has been
successfully adopted by Internex, but it is foreign to the cable industry.
As the concluding chapter discusses, the resulting closed market
structure for cable subscriber equipment has not been as effective as the
open market for ISDN equipment at fostering the development of
needed technology. In addition, commercial development of both cable
and ISDN Internet access has been hindered by monopoly control of
the needed infrastructure, whether manifest as high ISDN tariffs or
simple lack of interest from cable operators.
f:\12000 essays\sciences (985)\Computer\ITT Trip Scheduling.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ITT Trip Scheduling
The Information, Tours and Tickets (ITT) office could use a system to assist them in creating trip schedules. In this paper I will outline a plan for a Decision Support System (DSS) that will assist ITT in creating schedules for their tours. This system will also track customer surveys and hold data about all of ITT's trips. They already have some computer systems, a spreadsheet program and a database management system (DBMS), which can all be used to build a small DSS. Using the DBMS and the spreadsheet software I have designed a system to assist them in making decisions about scheduling trips. This system also gives them access to information about all of ITT's trips and the feedback from customers about each trip. In the next few paragraphs I go through the major steps in developing a system of this nature. A system of this type goes through several phases in its development. I start with the planning phase and go on to discuss research, analysis and conceptual design. Then I talk a little about the models used in this system. I then describe the actual design, the construction, and the implementation of the new ITT system, and finish the paper with a discussion of maintaining it.
The first step in building any DSS is planning. Planning is basically defining the problem, and it also involves an assessment of exactly what is needed. In this case I deal with trip scheduling. In the case description this would include: how many trips to offer, the days of the week to have particular trips, and when to cancel trips. Obviously the scheduling ties in to other information such as profit and participation, but for this paper I will only cover the scheduling portion of ITT's problem. Therefore I have defined the problem as a basic scheduling problem. I see a need for ITT to better schedule trips using the information they have and the information they collect from customer surveys. With the problem defined we can now look at what information is needed to further analyze the problem.
After a problem is defined, information must be collected. The research phase of system development is just that: collecting information. The information collected will be used in the next phase of development to further analyze the problem, and it will be used in this case to build the databases. The databases will then be used with decision support system (DSS) models to assist ITT in making scheduling decisions. Information in this case can come from their current schedules and trip description fliers. Also during this stage of development the current resources are assessed. This would include ITT's current information systems and their current budget. Information such as Navy or ITT policies is also collected for reference. Once all the information is collected, the system can move to the next stage of development, analysis.
With all the data and information collected, analysis of it begins. In this stage we determine what needs to be done to solve the problem. No work on a new system is started yet, but a system is conceptualised and possible solutions are identified. Also in this stage a final solution to the problem is chosen, and the system passes to the next stage of development. For the ITT problem I have chosen a simple Management Information System (MIS) with small decision support models to aid in creating schedules. This system will provide ITT with the information they need to make decisions about scheduling their trips as well as allow them to create the schedules directly from computer models. I will discuss the models in the next paragraph. The system would not draw conclusions, but simply show the pros and cons of certain choices. The MIS portion of the system will simply provide information to the users and to the DSS. The DSS portion of the system will allow a schedule to be created using resources in an optimum manner. I decided to go with a small and simple system because of ITT's limited resources and because of high employee turnover. A complicated system would not be feasible in such an environment, where new employees would constantly have to be trained to use it.
In this paragraph I step aside from the development process a little to talk about the models used in the system. As stated earlier, the models used in this system should be kept simple and small if possible. Using standard spreadsheet software, models can be created that will show the optimal schedule for trips. The basic information required for these models should include bus schedules, reservation requirements, customer satisfaction information, and cost data. Other data can also be added to assist in decisions. The models would first approximate the participation for each proposed trip. Then another model would determine if the trip is feasible given the costs involved. The next model would determine if the trip is even possible considering what is required as far as reservations and transportation. Another model could also determine if the trip would be able to satisfy the customers given past customer input. Finally, after determining whether each trip is worth offering, the ITT employees could use the computer to generate a new schedule. Now that we know what the system should do, we can turn our attention to the design of the system.
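To make that chain of models concrete, here is a minimal sketch of how they might be expressed; every field name and threshold below is invented for illustration, since the real figures would come from ITT's own data.

    # Sketch of the chain of spreadsheet-style models described above
    # (participation -> cost feasibility -> logistics -> satisfaction).
    def trip_is_worth_offering(trip):
        expected_riders = trip["past_avg_riders"] * trip["season_factor"]      # participation model
        breaks_even = expected_riders * trip["ticket_price"] >= trip["cost"]   # cost model
        logistics_ok = (expected_riders <= trip["bus_seats"]
                        and trip["reservation_lead_days"] >= trip["required_lead_days"])
        satisfied = trip["avg_survey_score"] >= 3.5                            # satisfaction model (1-5 scale)
        return breaks_even and logistics_ok and satisfied

    sample = {"past_avg_riders": 30, "season_factor": 1.2, "ticket_price": 25,
              "cost": 700, "bus_seats": 44, "reservation_lead_days": 14,
              "required_lead_days": 10, "avg_survey_score": 4.1}
    print(trip_is_worth_offering(sample))   # True: offer this trip on the new schedule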
In the design phase of DSS development the new system is designed to solve the problem. Here the information collected and the resources identified are examined to decide exactly what must be done, and in what manner, to solve the problem. Diagrams may be drawn to show how the components will fit together. Also, the groundwork is laid for the construction phase. In the ITT case I designed a database that contains information on all their trips along with information obtained from the customer surveys. This information is then combined with bus and reservation data in a standard spreadsheet, where it is manipulated to optimize the trip schedule. A manager or an employee can then use the information and the data in the database to create a calendar of events using an inexpensive program, Calendar Creator, which is what ITT currently uses to create schedules manually.
The construction of a system is the bringing together of all the required parts and making the system do what it's supposed to do. In this case the system I have designed will only require a minimum of additional resources. I designed the system to work on their existing computers using their existing software. The databases and the spreadsheet models could be built by knowledgeable employees with minimal outside help. Once constructed the system could be run and results compared with the old system to determine if it is functioning properly. The results could also tell if the system is optimizing the schedule or just speeding up what is already done manually.
Implementing any system is the process of putting it into use. In this project the implementation phase should be a fairly easy conversion. The old way of manually deciding on trips and putting the results into Calendar Creator is simply replaced with an automated selection of trips that the employee can use to create a calendar. When the system is operating normally it should improve the way ITT does business.
After implementation the system will have to be maintained. New models will have to be added, and old models will have to be changed or removed. With the simple models used in this system that should not be difficult. The hardest component of this system to maintain will be the databases. They will have to contain the most current data in order for the system to operate properly. I suggest a data checking module be added at some point in order to maintain data consistency. Inconsistent data can degrade system performance and cause it to give inaccurate or incorrect information. A data checking module will ensure that the information entered into the system is accurate and consistent with the rest of the system.
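Such a data checking module could be as simple as a routine that rejects records with out-of-range values; the field names and rules below are again invented for illustration.

    # Hypothetical data-checking pass for the trip database (field names invented).
    def check_trip_record(record):
        errors = []
        if record.get("cost", -1) < 0:
            errors.append("cost must be non-negative")
        if not (0 <= record.get("avg_survey_score", 0) <= 5):
            errors.append("survey score must be between 0 and 5")
        if record.get("bus_seats", 0) <= 0:
            errors.append("bus_seats must be positive")
        return errors

    print(check_trip_record({"cost": 700, "avg_survey_score": 6, "bus_seats": 44}))
    # ['survey score must be between 0 and 5']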
There could be many solutions to this problem, but given the limited budget most are not feasible. The system I have designed should be more than sufficient to assist them in creating schedules faster and more efficiently. Also it will give customers more of what they want and should improve repeat business.
f:\12000 essays\sciences (985)\Computer\Journalism on the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Journalism on the Internet
The common forms of media in today's world each have both
advantages and disadvantages. The Internet has been around for an
almost equal amount of time as most of them, but only recently has
it become a popular way of retrieving information. The Internet
takes the best of all other media and combines them into a very
unique form. The Internet is the best way to retrieve information.
This combination of paper publishing, TV, radio, telephones, and
mail is the future of communications. The Internet has several
types of journalism, which can be divided into three sections. One
section is online magazines, online broadcasting, and other online
services. The next group is resource files and web pages. The third
is discussion groups/forums and e-mail. I will investigate these
areas of the net, showing the advantages and disadvantages of each
in comparison to the conventional forms.
In order to understand what all these topics are you must
first understand what the Internet is. The simple answer is that it
is computers all over the globe connected together by telephone
wires. It was first built by the military ("No one owns the
Internet") to have a network with no centre. That way it could
never be destroyed by nuclear war. Since then, universities have
used it and it has evolved into what it is today. It is a library
that contains mail, stories, news advertising, and just about
everything else. "In a sense, freenets are a literacy movement for
computer mediated communication today, as public libraries were to
reading for an earlier generation." Now that the term "the net" is
understood, let's look at some sections of the net.
An online magazine is a computer that lets users access it
through the net. This computer stores one or more magazines which
users can read. "PC magazine and other magazines are available on
the Web" "Maclean's Magazine and Canadian Business online; and
Reuters' Canadian Newsclips." This form is much better that
conventional publishing, "we are using the online service to
enhance the print magazine", for several reasons. It is
environmentally safe, "Publish without Paper", most are free, "$50
a month on CompuServe", you can get any article from any year at
the touch of a button, and you can search for key words. "Search
engines make it easy pinpointing just the information you need".
The articles don't have space limits so you will get a specially
edited full story version (depending on the reporter) and other
articles that didn't make the print. It is easy to compare the
story with another journalist's view, or get the story from a
journalist from another country. This way, the reader can make
informed decisions on anything, without bias. A few people complain
that there is too much information to receive, "mass jumble", but
there are filter programs that will cut the information to any set
amount. CNN online is a broadcast web page (another computer). CNN
not only has the articles to read but video, and sound clips too.
Anyone can get up to the minute news, and reports. "We will send a
reporter to the game, who will interview people like the coach and
uplink the story while the game is being played." This is an
excellent addition to TV. It is a mix of TV and publishing. TV has
a schedule to keep and might cut out parts simply for time but
there is no time limit online. Also, because it is interactive,
users will remember the information longer than if they watched TV.
An online service is a web page that sells something. It is easy to
order anything, from flowers to even airline tickets.
"...opportunity to buy tickets through TicketMaster." But even
this has problems, "the Internet is new and many possible types of
fraud must be dealt with," but the solution is software, "Secure
Courier...a secure means of transferring financial transactions".
This service is the home shopping, catalogue, and printed flier
replacement. Their advantage is that you can buy directly, or skip
them if you wish, unlike TV.
Web pages on the internet are computers that are dedicated to
letting people access them. Many companies have a web page that
offers help to customers, news, services, product updates, advice
from experts, even "information on elections, government programs,
and so forth." "These new, online services include daily industry
news, classified, a directory of suppliers, an interactive forum,
and tons of reference material, including government documents,
surveys, speeches, papers, and statistics." Even home businesses
can have a page and advertise their products or services. The only
other medium that comes close to what a web page can do is the help
telephone lines, but a web page is much more useful. Resource files
are like a library of information. By using a search program a user
can find files on any topic. They can get, digital books, reports,
pictures, statistics, university essays, sound files, video, and
even programs, "You can even download the federal budget
simulator". However, there is always going to be the possibility
of false information, but because it is so easy to speak your mind
on the net, this bad information is quickly found and deleted.
"Established sources such as universities, libraries, and
government agencies can be considered reasonably reliable....Then
comes the free-for-all." "You must be a critical viewer of both
the source and the content"
The final area is discussion groups or forums. There is a
forum for just about any topic. "The overall advantage is the
spread of ideas, information, and thoughts between people who would
not otherwise correspond. The result is a free flow of ideas with
little moderation or control". A forum is a mail group that allows
people all over the world to discuss a topic, trade information, etc.
"everything from uploaded works by Canadian artists to chats on
hockey and politics." Each forum has many users, each with their
own point of view. Anyone can talk, bias or not, loving or hating
the topic. "There are no rules about what can or can not go on the
Internet. Legal standards are almost impossible to establish and
even less likely to be enforced on a global link,". However, this
free flow of information can cause problems. These are evident in
adult forums and the EFF. The Electronic Frontier Foundation is a
group of people that want all information to be available to
anyone. This information can be anything, such as: how to build
car bombs, atomic bombs, working computer virus code, government
files, UFO info, hacking, cracking (copying software), and phreaking
(free telephone calls). This information is illegal in some
countries, and can be harmful or fatal if used. It is still
available because of the freedom of information act. The
information has always been available, but only lately has it
become this easy to get. Adult forums and web pages have created a
stir in the government. There are explicit pictures, novels,
catalog, stories, mail, and even child porn on the net. The
government has set out to stop the child porn but allowed the other
adult material to pass by. It would be improper for a young child
to access this information. To stop this, parents can install
programs to lock out these web pages, but a knowledgeable child can
still get access to them. The government is currently working on
this problem and setting up laws to protect the people who want to
be protected, while not infringing on the rights of the people who
want access to this information.
As you can see, the Internet has the potential to be the
world's #1 medium. With the ever-expanding Web and a growing number
of users, this is only a matter of time. Journalism on the Internet
is only one of many things that will be available through the net.
As these technologies advance, barriers will be broken, rules set,
and the world's knowledge will be a phone call and a mouse click
away.
Footnotes in Order
Bill Kempthorne, "Internet, So What?", The Computer Paper, September 1995, p. 20
Trueman, "The 1995 Canadian Internet Awards", The Computer Paper, September 1995, p. 94
Michael J. Miller, "Where Do I Want to Go Today", PC Magazine, March 28, 1995, p. 75
Sorelle Saidman, "Online Canadian Content Expanding despite Prodigy Setback", Toronto Computes, November 1995, p. 9
Doug Bennet, "Confessions of an online publisher", Toronto Computes, November 1995, p. 35
"The Internet Comes of Age", PC Magazine, May 30, 1995, p. 19
Casey Abell, "Letters", PC Magazine, May 30, 1995, p. 19
Rick Ayre and Don Willmott, "The Internet Means Business", PC Magazine, May 16, 1995, p. 197
Bill Kempthorne, "Internet, So What?", The Computer Paper, September 1995, p. 20
Chris Carder, "Sports on the Internet a winner", Toronto Computes, November 1995, p. 98
Chris Carder, "Sports on the Internet a winner", Toronto Computes, November 1995, p. 98
Patrick McKenna, "Netscape's Digital Envelope For Internet Transactions", The Computer Paper, September 1995, p. 90
Patrick McKenna, "Netscape's Digital Envelope For Internet Transactions", The Computer Paper, September 1995, p. 90
Michael J. Miller, "Where Do I Want to Go Today", PC Magazine, March 28, 1995, p. 75
Doug Bennet, "Confessions of an online publisher", Toronto Computes, November 1995, p. 37
Michael J. Miller, "Where Do I Want to Go Today", PC Magazine, March 28, 1995, p. 75
Bill Kempthorne, "Internet, So What?", The Computer Paper, September 1995, p. 21
Bill Kempthorne, "Internet, So What?", The Computer Paper, September 1995, p. 21
Bill Kempthorne, "Internet, So What?", The Computer Paper, September 1995, p. 21
Sorelle Saidman, "Online Canadian Content Expanding despite Prodigy Setback", Toronto Computes, November 1995, p. 9
Bill Kempthorne, "Internet, So What?", The Computer Paper, September 1995, p. 22
f:\12000 essays\sciences (985)\Computer\Lasers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
LASERS
The laser is a device that produces a beam of light that is of great use both scientifically and practically because it is coherent light. The beam is produced by a process known as stimulated emission, and the word "laser" is an acronym for the phrase "light amplification by stimulated emission of radiation."
Light is like radio waves in that it too can carry information. The information is encoded in the beam as variations in the frequency or shape of the light wave. The advantage is that since light waves have much higher frequencies, they can also carry much more information.
The photon is not only the smallest unit of light; it behaves as both a particle and a wave. In beams of light, whether ordinary, natural, or artificial, the photon waves do not travel together, because they are not emitted at exactly the same moment but instead in random short bursts. This is true even if the light is of a single frequency. A laser is useful because it produces light that is not only of essentially a single frequency but also coherent, with the light waves all moving along in unison.
Lasers consist of several components. The so-called active medium might consist of atoms of a gas, molecules in a liquid, or ions in a crystal. Another component is some method of introducing energy into the active medium, such as a flash lamp. Another component is the pair of mirrors on either side of the active medium, one of which transmits some of the radiation that hits it. If the active medium is a gas, then each atom is characterized by a set of energy states, or energy levels, in which it may exist. The energy states can be pictured as an unevenly spaced ladder, in which the higher rungs mean higher states of energy and the lower rungs mean lower states of energy. If left undisturbed for a long time, the atom will reach its ground state, or lowest state of energy. According to quantum mechanics, only one light frequency corresponds to a transition between a given pair of levels. There are three ways the atom can deal with the presence of light: it can absorb the light, spontaneous emission can occur, or stimulated emission can occur. This means that if the atom is in its lower state it may absorb the light and jump to its higher state. The second thing it may do, if it is in its higher state, is fall spontaneously to its lower state, emitting light as it does so. The third way is that light can stimulate an atom in its upper state to jump to its lower state, emitting extra light. Spontaneous emission is not affected by light; rather, it occurs on a time scale characteristic of the states involved, called the spontaneous lifetime. In stimulated emission the frequency of the emitted light is the same as the frequency of the light that stimulated it.
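To make the "one frequency per transition" point concrete, here is a small illustrative calculation of my own (not from any of the sources): the Planck relation E = hf ties an energy gap between two levels to a single light frequency. The 1.96 eV gap used below is an assumed example value, roughly the gap behind the familiar 632.8 nm red helium-neon line.

    /* Illustrative sketch: one energy gap corresponds to one light frequency.
     * The 1.96 eV figure is an assumed example value (close to the He-Ne red line). */
    #include <stdio.h>

    int main(void)
    {
        const double h  = 6.626e-34;   /* Planck constant, J*s        */
        const double eV = 1.602e-19;   /* one electron-volt in joules */
        const double c  = 2.998e8;     /* speed of light, m/s         */

        double gap_eV = 1.96;              /* assumed level spacing */
        double freq   = gap_eV * eV / h;   /* f = E / h             */
        double lambda = c / freq;          /* wavelength = c / f    */

        printf("energy gap : %.2f eV\n", gap_eV);
        printf("frequency  : %.3e Hz\n", freq);
        printf("wavelength : %.1f nm\n", lambda * 1e9);
        return 0;
    }

Run as written, this prints a frequency of about 4.7 x 10^14 Hz and a wavelength of about 633 nm, which is why the common helium-neon laser looks red.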
Carbon-monoxide, color center, excimer, free-electron, gas-dynamic, helium-cadmium, hydrogen-fluoride, deuterium-fluoride, iodine, Raman spin-flip, and rare-gas halide lasers are just a few of the many types of lasers in use today. The helium-neon laser is the most common and by far the cheapest, costing about $170. The diode laser is the smallest, being packed in a transistor-like package. Dye lasers are prized for their broad, continuously variable wavelength capabilities.
The theory of stimulated emission was first proposed by Albert Einstein in 1916, and population inversion was discussed by V. A. Fabrikant in 1940. This led to the building of the first ammonia maser in 1954 by J. P. Gordon, H. J. Zeiger, and Charles H. Townes. In July of 1960 Theodore H. Maiman announced the generation of a pulse of coherent red light by means of a ruby crystal: the first laser. In 1987 Gordon Gould won a patent he had been pursuing for nearly thirty years covering the first gas-discharge laser, which he had conceived in 1957. The helium-neon laser was included in that same patent.
Bibliography:
Bertolotti, M., Masers and Lasers: An Historical Approach (1983); Kasuya, T., and Tsukakoshi, M., Handbook of Laser Science and Technology (1988); Meyers, Robert, ed., Encyclopedia of Lasers, 3d ed. (1989); Steen, W. M., ed., Lasers in Manufacturing (1989); Whimmery, J. R., ed., Lasers: Invention to Application (1987); Young, M., Optics and Lasers, 3d rev. ed. (1986).
f:\12000 essays\sciences (985)\Computer\Macintosh Rules.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What computer is the fastest? What computer is the easiest to use? What computer is number one in education, and multimedia? That's right, the Macintosh line of computers. A strong competitor in the realm of computing for a number of years, the Macintosh is still going strong. The reasons are apparent, and numerous.
For starters, who wants a computer with no power? Macintosh sure doesn't! Independent tests prove that today's Power Macintosh computers, based on the PowerPC processor, outperform comparable machines based on the Intel Pentium processor. In a benchmark test conducted in June 1995, using 10 applications available for both Macintosh and Windows 3.1 systems, the 120-megahertz Power Macintosh 9500/120 was, on average, 51 percent faster than a 120-megahertz
Pentium processor-based PC. The 132-megahertz Power Macintosh 9500/132 was 80 percent faster when running scientific and engineering applications, and 102 percent faster when running graphics and publishing applications. You can understand why the education market is almost entirely Apple based.
Recent surveys confirm that from kindergarten through college, Apple has cornered the market in education, and remains number one in this U.S. market. Apple Macintosh computers account for 60% of the 5.9 million machines in U.S. schools for the 1995-96 school year. Only 29% of schools use the Microsoft/Intel platform, and DOS accounts for a measly 11%. It was also reported that 18.4% of four-year college students own a Macintosh. 55% of college students own a computer, and Apple is in the lead for that market too! Apple says the reason for this continued success is the Mac's ease of use.
There is no doubt that the Macintosh is the easiest computer around. The scrolling menu bar is the first example. If a Macintosh menu is too long to fit on the screen, you can scroll down to see all of the items. Windows 95 menus, by contrast,
don't scroll up or down. So if you put too many items into the Windows 95 Start button, some will remain out of reach, permanently! Windows 95 hierarchical menus can become confusing as they become more crowded. When you install so many applications onto a PC that they form two columns in the Start Programs menu, the menus may not flow well together; you have to jump quickly from menu list to menu list, which can be difficult to do. The second example I cite is the better integration of hardware and software. Because Apple makes both the hardware and the operating system, the two work together easily; when a change is made
at the hardware level, the software automatically recognizes it and acts accordingly. In the PC world, Microsoft develops Windows 95 and many different manufacturers make the hardware systems. So the software and hardware don't always work well together. Here are a few areas where the Macintosh is particularly strong: compatibility, floppy disks, memory management, monitor support, mouse support, adding peripherals, connecting to a network, and Internet access and publishing. And the last example I'll show is the ease of adding new resources. When you add capabilities to your Macintosh, it seems to anticipate what you're doing, and even tries to help. For example, to add fonts or desk accessories to the Macintosh, all you have to do is drag them to the System Folder. The Mac OS, or operating system, places all of the items where they need to go, automatically. Here are the steps for Windows 95:
1. Double-click on the C: drive in "My Computer."
2. Open the Windows folder.
3. Open the Fonts folder.
4. Click Install New Font in the File menu.
5. Click the drive and the folder that contain the font you want to add.
6. Double-click the name of the font you want to add.
As anyone can plainly see, the choice is obvious and the Mac's the best!
Multimedia is an exploding business throughout movies, advertising, and graphic design. Most multimedia developers create their applications on a Macintosh. According to one research company, Apple's Macintosh is the leading development platform for multimedia CD-ROM titles by a 72% to a 28% margin. As a recent article in the San Francisco Examiner puts it, "Walk into any newsroom, desktop publishing center, design studio, or online service office, and nine times out of 10 you will see a wall of Macs." That's quite a statement! There are definite reasons for this too.
Installing and using CD-ROM titles is easier with Macintosh computers than with PCs running Windows 95. Today's PCs have multiple standards for sound and graphics, and each standard and each piece of hardware requires a different
software driver. As a result, PC owners have problems matching the hardware and software in their systems to the hardware and software requirements of different CD-ROM titles, and different titles can run much differently. In contrast, CD-ROM titles for Macintosh are easier to install and use. Macintosh computers have a single, built-in
standard for sound and graphics, so no special drivers are required. And Macintosh was the first home computer to include built-in MPEG hardware playback for full-screen, full-motion video.
Apple's Power Macintosh 7500/100, and 8500/120 computers include nearly everything a user needs to quickly and easily begin videoconferencing. QuickTime Conferencing software, high-speed communications capability, and video/sound input are all included. Users need only connect a video camera to the Macintosh video-in connector. With Apple's QuickTime Conferencing software, users can call other videoconference participants over their existing local area networks. Users can see multiple participants at once, take snapshots during sessions, record sessions, and
work together on a shared document. Compare this simplicity and power with videoconferencing products in the Windows 95 world, where users must still purchase expensive add-on cards, and software totaling $1,400 or more, and then deal with the complexities of integrating the hardware and software themselves.
Speech integration with computers is the wave of the future, and guess who's got the jump in that department. With PlainTalk, you can open any Macintosh document or application by speaking its name. Just move an alias of the item into the Speakable Items folder, and the built-in PlainTalk and Speakable Items technologies
take care of the rest. For example, a user who wants to check her stock portfolio without opening several folders and launching an application can just say "check stocks," and the Macintosh will execute the necessary commands. Speakable items can also be AppleScript files, so users can execute an almost unlimited series of actions--including copying files, cleaning up the desktop, and so on, simply by speaking a command.
In conclusion, the Macintosh is the computer that can do it all. Handling business tasks, creating breathtaking multimedia, and lots, lots more, all at the fastest speed available. It is no wonder Apple has made such a name for itself, and will likely be in the market for a long time to come.
f:\12000 essays\sciences (985)\Computer\Macintosh vs IBM.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The IBM and Macintosh computers have been in competition with each other for years, and each
of them has its strong points. They both had their own ideas about where they should go in
the personal computer market. They also had many developments that propelled one ahead of
the other.
It all started when Thomas John Watson became president of Computing Tabulating Recording
in 1914, and in 1924 he renamed it to International Business Machines Corporation. He
eventually widened the company lines to include electronic computers, which was extremely new
in those days. In 1975 IBM introduced their first personal computer (PC), which was called the
Model 5100. It carried a price tag of about $9,000, which put it outside the mainstream of
personal computers. Even though their first computer did not get off to as big a start as they
had hoped, it did not stop them from continuing on. Later on IBM teamed up with Microsoft to
create an operating system to run their new computers, because their software division was not
able to meet a deadline. They also teamed up with Intel to supply its chips for the first IBM
personal computer. When the personal computer hit the market it was a major hit and IBM
became a strong power in electronic computers. Phoenix Technologies went through published
documentation to figure out the internal operating system (BIOS) in the IBM. In turn, they
designed a BIOS of their own which could be used with IBM computers. It stood up in court, and
with a non-IBM BIOS the clone was created. Many manufacturers jumped in and started
making their own IBM-compatible computers, and IBM eventually lost a big share of the desktop
computer market.
While IBM was just getting started in the personal computer market, Apple was also just getting
on its feet. It was founded by Steve Jobs and Steve Wozniak in 1976. They were both college
dropouts, Steve Jobs out of Reed College in Oregon and Steve Wozniak from the University of
Colorado. They ended up in Silicon Valley, which is located in northern California near San
Francisco. Wozniak was the person with the brains and Jobs was the one who put it all together.
For about $700 someone could buy a computer that they put together, which was called the
Apple I. They hired a multimillionaire, Armas Clifford Markkula, a 33-year-old, as the chief
executive in 1977. In the meantime Wozniak was working at Hewlett-Packard until Markkula
encouraged him to quit his job with them and to focus his attention on Apple. Apple went public
in 1980, at about $22 a share. In 1977 the Apple II was introduced, which set the standard for
many of the microcomputers to follow, including the IBM PC.
The Macintosh and IBM computer have been in competition ever since they put out their first
personal computers. By the early 1980s, the personal computer world was dominated by two types of
computer systems. One was the Apple II, which had a huge group of loyal users, and they also
had a large group of people developing software for the Apple II. The other system was the IBM-
Compatible, which for the most part all used the same software and plug in hardware. In 1983
Apple sold over $1 billion in computers and hardware. Now Apple was trying to appeal more to
the business world so they designed the Lisa computer that was a prototype for the Macintosh
and it cost around $10,000. It featured a never before seen graphical interface and the mouse,
which are as common as any other component on the computer today. Lotus introduced a
spreadsheet program for the IBM PC called 1-2-3, which caused anticipated sales of the Lisa
computer to drop to nearly half.
In order for Apple to compete with the IBM-Compatible they had to change some things around.
Jobs headed the development of the Macintosh, with the goal in mind of a "computer for the rest
of us." He wanted it to be easily set up out of the box and up in running in 15 minutes. The
developers of the Macintosh made it so that you could not upgrade it for they did not think that
you needed to open your computer. In 1984, they launched the Macintosh for $2,495. The
advertising for it cost around $500,000, plus more than $1.5 million to air it on Super Bowl
Sunday in 1984. They decided later that if they wanted to keep up with IBM they would have to
make the Macintosh cheaper and easier to upgrade in order to appeal to the business market. In
1991 Apple's desktop computing business was going downhill, and Motorola, their chip
manufacturer, was becoming known as the company that was always one step behind Intel. So
Apple lost developers for their personal computer.
[Figure: the label seen on many of the current chips being shipped today.]
One thing that is different between the IBM and Macintosh is the type of CPU architecture they
are using. The IBM computers have been using the same chip design as they did when IBM first
created the personal computer. They created their systems around a CPU design Intel created,
which used an architecture called CISC (Complex Instruction Set Computing). This also allowed
the IBM computer to stay compatible throughout the years with the older systems. For instance,
if you had some sort of typing program on an IBM-compatible computer that had a
286-12 CPU, you could run that same exact software on one of the newest Pentiums today. So
even after 10 years the same software could be used. This also has its downsides, because it
means we have been using an internal CPU architecture that is at least 20 years old. One thing
that IBM users can look forward to is the advancements that Intel is making with its CPUs. One
of the latest things that has hit the market is MMX, which allows programs that are more
graphically inclined to run faster, as well as programs that use sound. They already have chips
in the making going by the code name Klamath. These will be a cross form of the current
Pentium Pro chips and the Pentium MMX chips. They should be coming out in 1998, and will
have a MHz rating up to 400. Right now the MMX chips are shipping at 200 MHz and will soon
have one at 233 MHz. Intel is moving very swiftly in bringing us the top of the line technology.
Apple decided to go with a different CPU architecture. IBM created a RISC (Reduced Instruction
Set Computing) CPU that could run faster than the CISC model of the same MHz rating, so a
RISC chip with a MHz rating of 100 could run just as fast as a CISC chip with MHz rating of 133.
Now, from the definitions of CISC and RISC you would think that the RISC chip has fewer
instructions, but in fact it is just the opposite; since it started out with fewer
instructions than the CISC chip, it kept that name. Now IBM did not want to put it into their own
personal computers because of the compatibility issues. The computer would not be able to use
the current hardware or software, that was being made for the IBM-Compatible computers. So
IBM sought out a company that would be willing to buy their RISC chip, and Apple was the
company they found. Motorola had previously been designing the chips for Apple, but they were
not as fast as IBM so the Macintosh development slowed down in comparison to IBM. IBM could
design RISC chips for Apple with no problem. With this Apple needed to get developers to make
applications made to run specifically for the RISC chip. IBM decided to team up with Motorola
because they were not equipped to put out chips in high volume like Apple needed. Apple had
already been creating a mother board based on the Motorola chip design, so with IBM and
Motorola teaming up they did not have to redesign their mother boards. So now an Apple
computer could run faster than an IBM, in a certain sense. A 40 MHz Macintosh Quadra using the
Motorola 68040 chip would be faster than most 486DX-66 MHz CPUs. The reason is that
the Macintosh hardware and operating system were designed to run together. The operating system
in the Macintosh takes advantage of the hardware's capabilities, and the hardware takes
advantage of the operating system. With this interconnected system, it is faster
than a system not made to take advantage of every little thing in a piece of hardware.
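The arithmetic behind that claim can be sketched in a few lines (my own illustration; the instructions-per-clock figures below are assumptions, not measurements): effective throughput is roughly clock rate times the work done per clock, so a chip with a lower MHz rating can keep pace with a higher-clocked one if it completes more instructions per cycle.

    /* Back-of-the-envelope sketch: effective speed ~ clock rate * instructions per clock.
     * The 0.75 and 1.00 instructions-per-clock figures are illustrative assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double cisc_mhz = 133.0, cisc_ipc = 0.75;   /* assumed CISC figures */
        double risc_mhz = 100.0, risc_ipc = 1.00;   /* assumed RISC figures */

        printf("CISC @ %3.0f MHz -> %5.1f MIPS\n", cisc_mhz, cisc_mhz * cisc_ipc);
        printf("RISC @ %3.0f MHz -> %5.1f MIPS\n", risc_mhz, risc_mhz * risc_ipc);
        return 0;
    }

With these assumed numbers, both chips work out to roughly 100 million instructions per second, which is the sense in which a 100 MHz RISC chip can match a 133 MHz CISC chip.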
[Figure: Apple Macintosh mouse.]
With both companies in heated competition, the pressure was on for them to come out with
things that the other did not have. Apple came through very strongly in this area. They created
many devices that are used in many computers today. In 1984 Apple shipped the first mass-market GUI
(Graphical User Interface); this also brought about folders or directories, long file names, drag
and drop, and the trash can. All these devices are used in the more popular operating system
for the IBM-compatible computer, called Windows 95. Apple also popularized the mouse, which is as
common as the keyboard. One thing that helps the IBM-Compatible in the hardware area, is all
the third party developers. With the Apple computer, only Apple had the rights to develop
hardware for their computers. With IBM-Compatibles anyone can develop hardware for it, thus
we have many innovative accessories and hardware for the IBM-compatibles. One of the more
interesting devices for the IBM-compatible computers, that was featured at the 1997 Comdex
show in Vegas was a speaker system. It looks like a giant plastic dome that is placed above
your head pointing down towards you, and allows stereo sound to be heard only by the person
directly underneath it. One company that was showing it in action was Creative Labs, which is a
maker of Sound Cards and usually sets the standard for them. They had many computers
networked together and were running a popular game of 1996 called Quake, which is a first
person action game. They had put the dome shaped speakers above each computer station and
it allowed each player to hear what was going on around them, but it would not make any outside
noise or interfere with the person playing right next to them.
[Figure: Installing a card can be very easy.]
One of the latest things with computers these days is Plug 'n' Play. It was meant to alleviate the
fear of people upgrading their computer themselves, even though some people will always pay
someone good money to do it. If you are afraid of opening your computer it is strongly
suggested that you have a professional do it, for they have been doing that sort of thing for
years, and they know exactly what they are doing as well as what to do if they encounter any
problems that are uncommon to the regular consumer. The idea behind Plug 'n' Play is that it
allows you to install a new sound card or some other plug-in card and then just turn on your
computer without having to change any jumpers or configure it in any way. The Macintosh
computer and the Windows 95 operating system both have this feature built in, as do
some of the newer IBM-compatible BIOSes. There have been drawbacks to it, though: people
who prefer to configure cards themselves may find that the configuration software
will not allow the settings they wish to use.
Apple computers come with many things already built in that the IBM-compatibles do not
always have. For instance, they come with a 16-bit sound card that has voice recognition built
into it. With the voice recognition the operating system was designed to use it in every way you
could think of, you could do anything without typing or clicking on a thing. For instance you
could tell it to "Shut Down" and it will go through and turn off the computer, or you could write a
letter to a long lost relative just by speaking. The Macintosh computer was designed so that
everything you did was made as easy as possible, so that is why all the software has to be
redone when they add new hardware. If you wanted to eject a disk you stuck into it, you went up
into the pull down menus and told it to "eject disk." You could also shut off the computer from
the pull down menus. This is basically the total opposite of the IBM-Compatible computers. To
eject the disk you just plainly press the little button on the disk drive, and if you wanted to turn off
the computer you just press the power button. The Macintosh computer could run into problems,
say if you had a disk in there and somehow the computer locked up or the power was off, you
would not be able to get that disk out of there. Another thing that the latest Macintosh
computers have been coming with is a networking card built in already. If you wanted to
play a game or transfer files with a friend, you just grabbed a cord and plugged the two
computers together and then you are off. You could also do video conferencing and send email
over the network, as well.
With the way the Macintosh computer was designed you cannot upgrade the sound card for
everything is built into the system, but with an IBM-Compatible computer you could easily take
out one card and put in another. Anything that you add on to the Macintosh has to be put on the
outside, like CD-ROMs and Modems. Also because the Operating System of the Macintosh
relies on the computer's hardware and was designed for that particular hardware, if you ever
upgrade it you have to upgrade the operating system as well as many hardware components and
software that were made for that particular model. That is one reason many big
business users would not want to buy a Macintosh: they want their investment to last
a while, and if they needed to they would want to upgrade their systems as cheaply as possible,
and the IBM-compatible made it cheap for them to do so. The Macintosh computer itself usually
costs about two times as much as a comparable IBM computer. They also tend to confuse their
customers by bringing out many new models all the time. For instance, in 1993 alone, Apple
introduced 17 different models of their Macintosh computer.
Software for the Apple computers is harder to come by than for the IBM-compatible computer.
Apple controls all the software for their computers and will not license it to any other developer.
So you do not have the variety you do with the IBM computers. A big thing that has become
very popular in the last few years is something called the Internet. Almost everyone has
experienced the internet in some form or other. You can do almost anything you want
over the internet, from writing a message to some distant relative and having it arrive in
minutes, to playing a chess game with someone from Russia. You can also get almost any
program you are looking for over the internet, and many of these programs are only for
the IBM-compatible computer, for there are more people with an IBM computer and thus more
people making applications and games for the IBM computer. So basically there is just a ton of
software out there for people who own an IBM-compatible computer.
With the IBM-compatible computer you can continue to upgrade it, even someone who bought a
computer five years ago could have upgraded it so that it is just as fast as any computer of
today, but with the Macintosh you basically would have to buy a new system. Also since IBM
had used a third party for its operating system, other companies could license the operating
system to make their own compatible operating systems, as well as any other software for it.
Compatible hardware could easily be assembled, as well as peripherals and components that
improve the IBM-compatible computer, from common components like CD-ROMs, modems,
sound cards, and printers. You even have a choice of about 20 different
styles of mice that you could use on your system, from three basic groups: rollers, trackballs,
and touch pads. There are some other ones, like one that clips onto your monitor and shoots
infrared beams across the screen to detect movements by your finger, and so it basically turns
your monitor into a touch screen. As well as hand held ones that move the cursor based on the
position of your hand.
The Apple computer has almost always appealed to the school systems, with the IBM-
compatible computers going more towards businesses and personal use. The main reasons
behind this are that Apple had many types of software directed towards children and helping
them learn. Apple computers were also easier to use, and that appealed to the school systems,
for even five-year-old children would be able to use a computer with no problem. The
IBM computer went more with businesses, because of its ability to be upgraded and because
businesses would be able to get longer use out of it. They could more easily adapt an
IBM-compatible computer to their way of doing things, because of the many different software
packages out there as well as its ease of adding or upgrading capabilities. The IBM-compatible
computers have been becoming increasingly popular with the school systems, because of Apple
going downhill and having less and less software available for it.
The IBM and Macintosh computers have been in competition with each other for years, and each
of them has its strong points. Apple dominated the personal computer market when it first
started, but the creation of the IBM clone started its downfall. Some of Apple's earlier
decisions caused it to lose in the battle with IBM as well. Choosing Motorola as its chip
manufacturer caused it to be one step behind the Intel-based IBM-compatibles. Not licensing its
software so that third parties could create software for it was also a downfall. Now that
the IBM-compatible computer has strong support, it is very unlikely that Apple will be able to
bring back a large user group for its personal computer, even though their computers are faster.
f:\12000 essays\sciences (985)\Computer\Making Utilities for MSDOS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Michael Sokolov
English 4
Mr. Siedlecki
February 1, 1996
Making Utilities for MS-DOS
These days, when computers play an important role in virtually all aspects of our life, the issue of concern to many programmers is Microsoft's hiding of technical documentation. Microsoft is by far the most important system software developer. There can be no argument about that. Microsoft's MS-DOS operating system has become a de facto standard (IBM's PC-DOS is actually a licensed version of MS-DOS). And this should be so, because these systems are very well written. The people who designed them are perhaps the best software engineers in the world.
But making a computer platform that is a de facto standard should imply a good deal of responsibility toward the developers who make applications for that platform. In particular, proper documentation is essential for such a platform. Not providing enough documentation for a system that everyone uses can have disastrous results. Think of it: an operating system is useless by itself; its sole purpose is to provide services to applications. And who would be able to develop applications for an operating system if the documentation for that system is confidential and available only to the company that developed it? Obviously, only the company that developed that operating system would be able to develop software for it. And this is a violation of antitrust law.
And now I start having a suspicion that this is happening with Microsoft's operating systems. It should be no secret to anyone that MS-DOS contains a lot of undocumented system calls, data structures and other features. Numerous books have been written on this subject (see bibliography). Many of them are vital to system programming. There is no way to write a piece of system software, such as a multitasker, a local area network, or another operating system extension, without knowing this undocumented functionality in MS-DOS. And, sure enough, Microsoft is using this functionality extensively when developing operating system extensions. For example, Microsoft Windows, Microsoft Network, and Microsoft CD-ROM Extensions (MSCDEX) rely heavily on the undocumented internals of MS-DOS.
The reader can ask, "Why do they leave functionality undocumented?" To answer that question, we should look at what this "functionality" actually is. In MS-DOS, the undocumented "functionality" is actually the internal structures that MS-DOS uses to implement its documented INT 21h API. Any operating system must have some internal structures in which it keeps information about disk drives, open files, network connections, alien file systems, running tasks, etc. And MS-DOS (later I'll call it simply DOS) has internal structures too. These structures form the core of undocumented "functionality" in MS-DOS. This operating system also has some undocumented INT 21h API functions, but they serve merely to access the internal structures.
These internal structures are extremely version-dependent. Each new major MS-DOS version up to 4.00 introduced a significant change to these structures. Applications using them will always be unportable and suffer compatibility problems. Every computer science textbook would teach you not to mingle with operating system internals. That's exactly why these internal structures are undocumented.
This brings up another question: "Why does Microsoft rely on these structures in its own applications?" To answer this question, we should take a look at an important class of software products called utilities. Utilities are programs that don't serve end users directly, but extend an operating system to help applications serve end users. To put it another way, utilities are helper programs. Perhaps the best way to learn when you have to mingle with DOS internals is to spend some time developing a utility for MS-DOS. A good example is SteelBox, a utility for on-the-fly data encryption. This development project has made me think about the use of DOS internals in the first place, and it has inspired me to write this paper.
Utilities like SteelBox, Stacker, DoubleSpace, new versions of SmartDrive, etc. need to do the following trick: register with DOS as device drivers, get request packets from it, handle them in a certain way, and sometimes forward them to the driver for another DOS logical drive. The first three steps are rather straightforward and do not involve any "illicit" mingling with MS-DOS internals. The problems begin in the last step. MS-DOS doesn't provide any documented "legal" way to find and to call the driver for a logical drive. However, MS-DOS does have internal structures, called Disk Parameter Blocks (DPBs) which contain all information about all logical drives, including the pointers to their respective drivers. If you think of it, it becomes obvious that MS-DOS must have some internal structures like DPBs. Otherwise how would it be able to service the INT 21h API requests? How would it be able to locate the driver for a logical drive it needs to access?
Many people have found out about DPBs in some way (possibly through disassembly of DOS code). In the online community there is a very popular place for information obtained through reverse engineering, called The MS-DOS Interrupt List, maintained by Ralf Brown. This list is open to everyone's input, and the people who reverse engineer Microsoft's operating systems often send their discoveries to Ralf Brown, who includes them in his list. The DPB format and the INT 21h call used to get pointers to DPBs are also in the Interrupt List. As a result, many programmers, including me, have used this information in their utilities without much thought.
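To give a sense of how small this piece of folklore is, here is a minimal sketch of my own (not SteelBox's actual code, and assuming a 16-bit real-mode DOS compiler with a Borland-style <dos.h>) of the INT 21h function 32h call described in the Interrupt List for fetching a pointer to a drive's DPB. Because the DPB layout is version-dependent, the sketch only retrieves and prints the raw segment:offset address.

    /* Sketch only, not SteelBox: INT 21h/AH=32h returns DS:BX -> the DPB for a drive.
     * Assumes a 16-bit DOS compiler (Turbo C style <dos.h>). */
    #include <stdio.h>
    #include <dos.h>

    int main(void)
    {
        union REGS  r;
        struct SREGS s;

        segread(&s);             /* start from the program's own segment registers */
        r.h.ah = 0x32;           /* Get Drive Parameter Block                       */
        r.h.dl = 3;              /* drive number: 0 = default, 1 = A:, 3 = C:       */
        int86x(0x21, &r, &r, &s);

        if (r.h.al == 0xFF) {
            printf("invalid drive\n");
            return 1;
        }
        printf("DPB for drive C: at %04X:%04X\n", s.ds, r.x.bx);
        return 0;
    }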
However, this is not a good thing to do. DPBs have existed since the first release of MS-DOS as IBM PC-DOS version 1.00, but the DPB format has changed three times throughout its history. The first change occurred in MS-DOS version 2.00, when hard disk support, installable device drivers and UNIX-like nested directories were introduced. The second change occurred in MS-DOS version 3.00, when the array of Current Directory Structures (CDSs), a new internal structure, was introduced to support local area networks and the JOIN/SUBST commands. The third change occurred in MS-DOS version 4.00, when 32-bit sector addressing was introduced and an oversight in storing the number of sectors in a File Allocation Table (FAT) was fixed. The reader can see that each new major MS-DOS version up to 4.00 introduced a change in the DPB format. And this is typical of all MS-DOS undocumented internal structures.
Although one can probably ignore DOS versions earlier than 3.10, he would still have to deal with two different DPB formats. And prior to DOS version 5.00, where DPBs were finally documented, no one could be sure that a new DOS version wouldn't change the DPB format once again. In the first version of SteelBox, my utility that needs to know about DPBs in order to do its work, I simply compared the DOS version number obtained via INT 21h/AH=30h with 4.00. If the DOS version was earlier than 4.00, I assumed that it had the same DPB format as IBM PC-DOS versions 3.10-3.30. If the DOS version was 4.00 or later, I assumed that it had the same DPB format as IBM PC-DOS version 4.xx. However, there are problems with such assumptions. First, there are some versions of MS-DOS other than IBM PC-DOS, and some of them have internal structures different from those of standard MS-DOS and PC-DOS. For example, European MS-DOS 4.00 returns the same version number as IBM PC-DOS version 4.00, but its internal structures much more closely resemble those of PC-DOS version 3.xx. Second, prior to Microsoft's documenting of DPBs in MS-DOS version 5.00, there was no guarantee that the DPB format wouldn't change with a new DOS version.
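Concretely, the version test just described amounts to only a few lines. The following is a sketch of it (mine, not the actual SteelBox source, again assuming a 16-bit DOS compiler with <dos.h>); the printed messages merely stand in for whatever DPB layout the real utility would select.

    /* Sketch of the DOS version check: INT 21h/AH=30h returns major in AL, minor in AH. */
    #include <stdio.h>
    #include <dos.h>

    int main(void)
    {
        union REGS r;
        unsigned major, minor;

        r.h.ah = 0x30;
        r.h.al = 0x00;
        int86(0x21, &r, &r);
        major = r.h.al;
        minor = r.h.ah;

        printf("DOS version %u.%02u reported\n", major, minor);
        if (major < 4)
            printf("assume the DOS 3.1x-3.3x DPB layout\n");
        else
            printf("assume the DOS 4.00+ DPB layout\n");
        return 0;
    }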
When I was developing a new version of SteelBox, I started to think about how to use DPBs properly and in a version-independent manner. I justified the use of DOS internals in the first place because I know that a lot of Microsoft's own utilities use them extensively. The examples are MS-DOS external commands like SHARE, JOIN, and SUBST, Microsoft Network, Microsoft Windows, Microsoft CD-ROM Extensions (MSCDEX), etc. Before we go any further, it should be noted that we mustn't be dumping unfairly on Microsoft. Originally I thought that DOS internals are absolutely safe to use and that Microsoft doesn't document them intentionally in order to get an unfair advantage over its competitors. My reasoning for this was that Microsoft's own utilities have never stopped working with a new DOS version.
To find the magic of "correct" use of DOS internals, I started disassembling Microsoft's utilities. First I looked at three DOS external commands, SHARE, JOIN, and SUBST. All three programs check for exact DOS version number match. This means that they can work only with one specific version of MS-DOS. This makes sense, given that these utilities are bundled with MS-DOS and can be considered to be parts of MS-DOS. One of the utilities, SHARE, unlike other DOS external commands, accesses the DOS kernel variables by absolute offsets in DOSGROUP, the DOS kernel data segment, in addition to getting pointers to certain DOS internal structures and accessing their fields. SHARE not only checks the MS-DOS version number, but also checks the flag at offset 4 in DOSGROUP. In DOS Internals, Geoff Chappell says that this flag indicates the format (or style) of DOSGROUP layout (501). If you look at the MS-DOS source code (I'll explain how to do it in a few paragraphs), you'll see that programs like SHARE access the kernel variables in the following way:
The kernel modules defining these variables in DOSGROUP are linked in with SHARE's own modules. Since the assembler always works the same way, the DOS kernel variables get the same offsets in SHARE's copy of DOSGROUP as in the DOS kernel's copy. When SHARE needs to access a DOS kernel variable, it loads the real DOSGROUP segment into a segment register, tells the assembler that the segment register points to SHARE's own copy of DOSGROUP, and accesses the variable through that segment register. Although the segment register points to one copy of DOSGROUP and the assembler thinks that it points to another one, everything works correctly because they have the same format. The reader can draw the following conclusion from this aside: MS-DOS designers have made the MS-DOS internal structures accessible to other programs only for DOS's own use (since linking DOS modules in with a program is acceptable only for the parts of MS-DOS itself).
Having seen that DOS external commands are not a good example for a program that wants to be compatible with all DOS versions, I turned to Microsoft Network. One of its utilities, REDIR, is very similar to SHARE in its operation. Like SHARE, it accesses the DOS kernel variables by absolute offsets. I thought that, unlike SHARE, REDIR was not tied to a specific DOS version. Unfortunately, I wasn't able to disassemble it, because as a high school student, I don't have a copy of Microsoft Network. However, Geoff Chappell says that it has separate versions for different versions of DOS, just like SHARE. Therefore, I turned to yet another utility.
My next stop was MSCDEX, the utility for accessing the High Sierra and ISO-9660 file systems used by CD-ROMs. Unlike SHARE and REDIR, MSCDEX is not tied to one specific DOS version. I'm using MSCDEX version 2.21 with MS-DOS version 5.00, but the same version of MSCDEX can be used with PC-DOS version 3.30. However, it accesses the DOS kernel variables by absolute offsets in DOSGROUP, just like SHARE and REDIR. Of course, my question was "How does it do that in a version-independent manner?" When I disassembled it, I saw that it takes the flag at offset 4 in DOSGROUP and uses it to determine the absolute offsets of all the variables it needs. If this flag equals 0, MSCDEX assumes that all offsets it's interested in are the same as in DOS versions 3.10-3.30. If this flag equals 1, MSCDEX assumes that all offsets it's interested in are the same as in DOS versions 4.00-5.00. For all other values of this flag MSCDEX refuses to load.
Sharp-eyed readers might notice that this check already makes MSCDEX potentially incompatible with future DOS versions. The comments in the source code for MS-DOS version 3.30 (the DOS\MULT.INC file) refer to MSCDEX; therefore, it already existed at the time of MS-DOS version 3.30. It is very doubtful that anyone, including the author of MSCDEX, could have known at that time what offsets the kernel variables in DOS version 4.00 would have. If this is true, an MSCDEX version that predates MS-DOS version 4.00 won't run under DOS versions 4.00 and later.
MSCDEX uses the flag at offset 4 in DOSGROUP to determine not only the absolute offsets of the kernel variables, but also the "style" of all other DOS internals that had changed with DOS version 4.00. My first thought was that I can use this flag in my utilities when I need to cope with different "styles" of DOS internals. However, my next discovery really surprised me and gave me a real understanding of what I'm doing when I mingle with DOS internals. MSCDEX version 2.21 refuses to run under DOS versions 6.00 and later. So much for the idea that "Microsoft's own utilities have never stopped working with a new DOS version." In fact, Geoff Chappell refers to this in DOS Internals (501).
The last utility I looked at was Microsoft SmartDrive version 4.00, which is bundled with Microsoft Windows version 3.10. This utility also uses the DOS internal structures, including the version-dependent ones. However, unlike MSCDEX, SmartDrive doesn't have a "top" DOS version number. It compares the DOS version number with 4.00 and assumes that DOS is similar to versions 3.10-3.30 if it is lower than 4.00 and to versions 4.00-5.00 if it is 4.00 or higher. SmartDrive assumes that all future DOS versions will be compatible with MS-DOS version 5.00 at the level of the internal structures.
The lack of a clear pattern in the usage of the undocumented DOS internal structures by Microsoft's own utilities made me think seriously about the possibility of safe use of the DOS internals in the first place. Originally I thought that Microsoft had some internal confidential document that explains how to use the DOS internals safely, and that anyone having that magic document could use the undocumented DOS internals as safely as the normal documented INT 21h API. However, the evidence I have obtained through reverse engineering of Microsoft's utilities puts the existence of that magic document in question. In Undocumented DOS Andrew Schulman notes that it is possible that on some occasions Microsoft's programmers have found out about the MS-DOS internals not from the source code or some other internal confidential documents, but from general PC folklore, just like third-party software developers. For example, the MWAVABSI.DLL file from the Microsoft Anti-Virus provides a function called AIO_GetListofLists(). This function calls INT 21h/AH=52h to get the pointer to one extremely important DOS internal structure. In the MS-DOS source code this structure is called SysInitVars. However, in Ralf Brown's Interrupt List and in general PC folklore it is called the List of Lists. This is an indication that Microsoft's programmers sometimes act just like third-party software developers (Schulman et al., Undocumented DOS, 44).
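As an illustration of how small that call is, here is a sketch of my own (not Microsoft's code, and assuming a 16-bit DOS compiler with <dos.h>) that issues the undocumented INT 21h/AH=52h call and prints where the List of Lists lives; the layout of the structure itself is version-dependent and deliberately not examined.

    /* Sketch only: INT 21h/AH=52h (undocumented) returns ES:BX -> the DOS
     * "List of Lists" (SysInitVars).  Only the address is retrieved here. */
    #include <stdio.h>
    #include <dos.h>

    int main(void)
    {
        union REGS  r;
        struct SREGS s;

        segread(&s);
        r.h.ah = 0x52;
        int86x(0x21, &r, &r, &s);

        printf("List of Lists at %04X:%04X\n", s.es, r.x.bx);
        return 0;
    }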
On several occasions I have made references to the MS-DOS source code. However, most programmers know that the MS-DOS source code is unavailable to non-Microsoft employees. Therefore, before we go any further, I need to explain how I could look at the MS-DOS source code. Microsoft gives it to certain companies, mostly Original Equipment Manufacturers (OEMs). Some people can claim that they are OEMs and get the Microsoft documents available only to OEMs (however, this costs a lot of money). And then some people who don't care too much about laws start distributing the confidential information they have. This is especially easy in Russia, where copyright laws are not enforced. So one way or another, knowledge of some parts of the MS-DOS source code spreads among the people. The MS-DOS OEM Adaptation Kit (OAK) contains commented source code for some MS-DOS modules and include files, and .OBJ files made from some other modules.
Let's summarize what we've seen so far. MS-DOS, like any other operating system, has internal structures. Every computer science textbook would teach you not to rely on an operating system's internals. In MS-DOS, the internal structures are undocumented. Microsoft's own utilities do rely on them. By reverse engineering these utilities, looking at the MS-DOS source code, and thinking the problem through one can come to the conclusion that there is absolutely no safe way of using the MS-DOS internal structures. The only proper way of using them is not using them at all.
No sooner had I come to this conclusion than my SteelBox development project brought me back to reality. No matter how bad it is to use the MS-DOS internals, utility developers like me have to do it because they have no other choice. Now I'm almost sure that this is precisely why Microsoft uses the MS-DOS internals itself. Before we go any further, I need to clarify one important detail.
Once a programmer asked Microsoft to document the INT 2Fh/AH=11h interface, generally known as the network redirector interface. Microsoft responded:
The INT 2fh interface to the network is an undocumented interface. Only INT 2fh, function 1100h (get installed state) of the network services is documented.
Some third parties have reverse engineered and documented the interface (i.e., "Undocumented DOS" by Shulman [sic], Addison-Wesley), but Microsoft provides absolutely no support for programming on that API, and we do not guarantee that the API will exist in future versions of MS-DOS.
This sounds like Microsoft saying, "Here's where you get the info, but you better not use it." (Schulman et al., Undocumented DOS, 495). Some people might think that Microsoft has internal confidential documents describing the MS-DOS internals much better than Andrew Schulman's Undocumented DOS, but there are indications that the MS-DOS source code is the only "document" Microsoft has (I'll address this issue in a few paragraphs). Perhaps Microsoft's programmers themselves use the same documentation as third parties.
So far we have seen that MS-DOS is not a perfect operating system, and that it gives utility developers no choice but to use its undocumented, version-dependent internals. The reader might ask, "What can we do about it?" First of all, some of the formerly undocumented DOS functionality was documented in DOS version 5.00. The reason probably was that some INT 21h functions used by DOS external commands like PRINT don't actually deal with any DOS internals at all, and Microsoft had simply underestimated the usefulness of these functions originally. Microsoft has even documented the DPBs. However, Microsoft's documentation says that the DPBs are available only in DOS versions 5.00 and later, and the reader should remember that the DPB format has changed several times over DOS history. So by documenting the DPBs, Microsoft has actually restricted its own ability to make changes in MS-DOS.
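To make the DPB example concrete, here is a minimal sketch of the documented way to reach a DPB, namely INT 21h/AH=32h ("Get Drive Parameter Block"). The same assumptions as in the earlier sketch apply (a 16-bit DOS C compiler such as Borland Turbo C with <dos.h>); the function name is mine, and the structure the returned pointer refers to is exactly the version-dependent DPB discussed above.

    #include <dos.h>

    /* drive: 0 = default, 1 = A:, 2 = B:, and so on. */
    void far *get_dpb(unsigned char drive)
    {
        union REGS   r;
        struct SREGS s;

        r.h.ah = 0x32;                /* Get Drive Parameter Block      */
        r.h.dl = drive;
        segread(&s);
        int86x(0x21, &r, &r, &s);
        if (r.h.al == 0xFF)           /* invalid drive                  */
            return (void far *)0;
        return MK_FP(s.ds, r.x.bx);   /* DS:BX -> DPB for that drive    */
    }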
However, there are still a lot of undocumented internals in MS-DOS. It should be noted that documenting them is out of the question. This would make it impossible to make significant changes in MS-DOS, thereby stalling its enhancement. In Undocumented DOS Andrew Schulman suggests that Microsoft could make an add-in to MS-DOS that would provide "clean" documented services and thereby eliminate the need to use DOS internals. Microsoft actually did this once, when it introduced the IFSFUNC utility in MS-DOS version 4.00. This utility converted the "dirty" and extremely version-dependent redirector interface into a device-driver-like interface. However, this utility was removed from MS-DOS versions 5.00 and later (I'll explain why in a few paragraphs).
Fortunately, the ill-fated IFSFUNC utility was not the only effort to enhance MS-DOS. In Microsoft Windows versions 3.00 through 3.11, there is a component called Win386. It got its name from Windows/386, its ancestor. In early beta releases of Microsoft's Chicago operating system this component was called DOS386. When Chicago was renamed Windows 95, this component was given the uninteresting name VMM32. Finally, the beta release of Microsoft C/C++ Compiler version 7.00 included this component from Microsoft Windows under the name MSDPMI. I think that the best name for this component is DOS386, so that is what I'll call it.
The reader will probably ask, "What is this component?" DOS386 is a multitasking protected-mode operating system. A close inspection of DOS386 reveals that it has almost nothing to do with Windows, and a lot to do with DOS (that's why I prefer the name DOS386 over Win386). Two of DOS386's subcomponents, DOSMGR and IFSMGR, are perhaps the heaviest users of DOS internals. These modules know a lot about the internals of MS-DOS, and they provide their own interfaces which in fact can help a utility avoid using DOS internals. For example, let's return to our SteelBox utility.
This utility needs to access a file from inside an INT 21h call. Most DOS programmers know that the DOS INT 21h API is non-reentrant: no INT 21h calls can be made while an INT 21h call is already being serviced. Therefore, a utility like SteelBox would have to play tricks with DOS internals, with all the attendant consequences. On the other hand, DOS386's IFSMGR subcomponent provides an interface that replaces INT 21h. Unfortunately, IFSMGR is documented only in the Windows 95 Device Development Kit (DDK), and I don't have a copy of it yet. However, it is quite possible that the IFSMGR interface is reentrant. If it is, all problems with SteelBox would be solved immediately, and it wouldn't contain a single undocumented DOS call. Keep in mind, however, that DOS386 is relatively new, and perhaps its current version doesn't provide all the desired functionality. But DOS386 is certainly a good foundation for a new operating system.
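For comparison, here is a minimal sketch of the classic documented work-around that resident utilities use against INT 21h non-reentrancy: fetching the address of the InDOS flag with INT 21h/AH=34h and issuing file I/O only while the flag is zero. It does not solve SteelBox's real problem, which needs file access while an INT 21h call is in progress; it only shows the kind of machinery involved. The same 16-bit DOS C compiler assumptions apply, and the function names are mine.

    #include <dos.h>

    static unsigned char far *indos_flag;

    void init_indos_pointer(void)
    {
        union REGS   r;
        struct SREGS s;

        r.h.ah = 0x34;                 /* Get InDOS flag address        */
        segread(&s);
        int86x(0x21, &r, &r, &s);
        indos_flag = (unsigned char far *)MK_FP(s.es, r.x.bx);
    }

    int dos_is_busy(void)
    {
        return *indos_flag != 0;       /* non-zero while DOS is inside
                                          an INT 21h call               */
    }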
Although I definitely don't want to blame Microsoft excessively, I have to say some unpleasant truths about this company. In its pursuit of profit, Microsoft violates some principles of free enterprise. In other words, it tries to build a monopoly. One of the unfair things Microsoft does is called discriminatory documentation. Although the source code for MS-DOS, Microsoft Network, and other Microsoft products is supposedly unavailable to anyone, Microsoft has made the source code of some utilities available to selected vendors (Schulman et al., Undocumented DOS, 495).
Another example is the deliberate incompatibility of some Microsoft products with Digital Research's DR-DOS. Some programs, including the Microsoft Windows version 3.10 beta and Microsoft C Compiler version 6.00, contain special code whose sole purpose is to make them incompatible with DR-DOS and other DOS workalikes. Although I'm definitely not a supporter of DOS workalikes, I think that Microsoft should use fair methods of competition.
Finally, there is a big problem with Microsoft's packaging of MS-DOS and DOS386. The most important problem with DOS386 is that it's currently available to users only as Win386 in Microsoft Windows. Furthermore, the usual Windows technical documentation (except the DDK) doesn't even mention the existence of Win386, because it's actually not a part of Windows. As a result, an amazing number of programmers don't even know about DOS386 (or Win386), and many of those who do greatly underestimate its tremendous importance.
Now Windows 95 comes into play. In this package, MS-DOS, DOS386, and Windows are thrown into one melting pot. First of all, the integration of MS-DOS and DOS386 is a very good step. Given the volatility of DOS internals, the DOSMGR subcomponent of DOS386 (which, remember, is perhaps the heaviest user of DOS internals) certainly should be tied to one specific DOS version. However, the tie between DOS/DOS386 and Windows is largely artificial. Try a simple experiment. Rename the KRNL386.EXE file in your WINDOWS\SYSTEM directory to something else, and put something else (COMMAND.COM fits nicely) into that directory under the name KRNL386.EXE. Then try to run Windows. Instead of Windows, this loads and activates Win386 without loading Windows itself. And there is no magic in this simple experiment. KRNL386.EXE is the first module of Windows, and Win386 runs it when it completes its own initialization. By putting something else in place of KRNL386.EXE, one can break the artificial tie between Windows and DOS386.
At some point Microsoft probably thought of making a version of DOS386 that would not be tied to Windows. There was a utility called MSDPMI in the beta release of Microsoft C/C++ Compiler version 7.00, which was exactly that: DOS386 without Windows. But now Microsoft is abandoning MS-DOS and everything else that is not Windows. Microsoft tries to persuade users that Windows 95 doesn't contain a DOS component, but this is not true. It is simply tied into Windows.
Now let's summarize the above. Microsoft is ignoring the minority of users who don't like Windows and who want to use MS-DOS and DOS386 without Windows, because Microsoft cares only about its profit. One person cannot stop them from doing that. Therefore, we, the programmers, should unite. If I call Microsoft alone, no one will listen to me. But if thousands of us do it together, we might achieve something. If you have any questions or suggestions about creating an association of programmers against Microsoft, please send e-mail to Michael Sokolov at gq696@cleveland.freenet.edu.
Bibliography
Brown, Ralf. The MS-DOS Interrupt List. Not published on paper; available online for free.
Chappell, Geoff. DOS Internals. New York: Addison-Wesley Publishing Company, 1994.
Microsoft Corporation. Microsoft Windows Device Development Kit. Computer software. Redmond: Microsoft, 1990.
Pietrek, Matt. Windows Internals: The Implementation of the Windows Operating Environment. New York: Addison-Wesley Publishing Company, 1993.
Schulman, Andrew, Ralf Brown, David Maxey, Raymond J. Michels, and Jim Kyle. Undocumented DOS: A Programmer's Guide to Reserved MS-DOS Functions and Data Structures. New York: Addison-Wesley Publishing Company, 1994.
Schulman, Andrew, David Maxey, and Matt Pietrek. Undocumented Windows: A Programmer's Guide to Reserved Microsoft Windows API Functions. New York: Addison-Wesley Publishing Company, 1992.
f:\12000 essays\sciences (985)\Computer\Meet Mr Computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Have you ever seen a computer in a store and said, "Whoa! What a chick!"?
I am sure you would have, if you were familiar with the new 16xCD-ROM
and extra wide SCSI-2 9.0 GB hard drive it features, or if you knew about
the dual 225 MHz Pentium Pro MMX chips blazing up its performance. To
tell you all about computers, it takes a total computer nut like me. After
working with computers almost all my life, I can tell you that a computer
is an electrical device, without which a guy like me probably cannot
survive. If you have no idea of what I am beeping about, read on. Experts,
I report no error in reading further.
Computers are very productive tools in our everyday lives. To
maximize the utility of a computer, what you need to do is get going with
the program. To do that, the minimum system requirements are a C.P.U.
or the central processing unit, a keyboard, a monitor, a mouse, and if you
want, a printer and a CD - ROM drive. The C.P.U. is that part of a
computer that faithfully does what his master tells him to do, with the
help of input devices like a keyboard or a mouse. After all this so called
sophisticated, next generation equipment, you need some sort of software.
Software is a set of instructions to the C.P.U. from a source such as a
floppy disk, a hard drive or a CD - ROM drive, in zillions of 1's and 0's.
Each of these tiny 1's and 0's is a bit. Eight of them assemble
to form a byte. Bytes make up a program, which you run to use the
computer's various applications.
Now that you know more about computers than Einstein did, let me
tell you something more about them, so that you will beat the President in
the field of computing. In your computer, you require a good amount of
RAM, which is there to randomly access memory. That is required to
speed up your computer, so that it gives you more error messages in less
time. The faster the error messages it gives, the faster you call technical
help at 1-800-NeedHelp. The service is open 24 hours a day, but to get
through, you will have to wait, at least, until the next Halley's comet passes
by. The only thing now required, for you to become the master of this
part of the world, is to have a very BOLD determination to become a
computer geek. Since you have learnt everything about the basics, I would
like to transfer command to the owner's manual that came with your
computer, to help you master the specific applications.
While learning the basic fifth generation of PCs, let's not forget the
choice of the new generation, network computing on the Internet and the
world wide web. The Internet is probably the most important development in
the history of human beings since the evolution of the Macintosh. The
Internet can do all the projects and presentations your teachers demand
of you. It can also buy you some pizzas from Pizza Hut and help you book
a ticket for your flight to Ithaca. But as every benefit has a big loophole, in
this case the problem is that once you dial up your Internet service provider,
you are welcomed by a busy signal! So boy, are you glad when, after half an hour
or so, you finally meet with success getting on-line. After you go
on-line, you open the Netscape Navigator browser to go find what you
want. You go to a search engine, and then another search engine, and then
yet another search engine, and then you finally find out that what you
want is just what you don't get in this terrible world of advertisement. So
you quit and go join a chat group, talking with the weirdest of people you
can think of, thinking of the fun you are having in this beautiful world,
without knowing who it is that you are talking to, and forgetting the fact
that the $$$ meter is rising and climbing and mounting every hour you
are on-line.
Finally, you know that the typical use of computers is not only for
typing and calculating, but also for learning the masterful art of patience
and how to cope with the mistakes others make without cursing them. Life
is not possibly possible without this abnormally useful machine in these
good old 90's. Since all that starts well, ends well, to end this reading you
might want to close this page with your thumb and your forefinger, or
else you might get an error message, and then you will have to read this
all over again.
f:\12000 essays\sciences (985)\Computer\Microarchitecture of the Pentium Pro Processor.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A Tour of the Pentium(r) Pro Processor Microarchitecture
Introduction
One of the Pentium(r) Pro processor's primary goals was to significantly exceed the performance
of the 100MHz Pentium(r) processor while being manufactured on the same semiconductor process. Using the same process as a volume production processor practically assured that the Pentium Pro processor would be manufacturable, but it meant that Intel had to focus on an improved microarchitecture for ALL of the performance gains. This guided tour describes how multiple architectural techniques - some proven in mainframe computers, some proposed in academia and some we innovated ourselves - were carefully interwoven, modified, enhanced, tuned and implemented to produce the Pentium Pro microprocessor. This unique combination of architectural features, which Intel describes as Dynamic Execution, enabled the first Pentium Pro processor silicon to exceed the original performance goal.
Building from an already high platform
The Pentium processor set an impressive performance standard with its pipelined,
superscalar microarchitecture. The Pentium processor's pipelined implementation uses five
stages to extract high throughput from the silicon - the Pentium Pro processor moves to a
decoupled, 12-stage, superpipelined implementation, trading less work per pipestage for
more stages. The Pentium Pro processor reduced its pipestage time by 33 percent, compared
with a Pentium processor, which means the Pentium Pro processor can have a 33% higher clock
speed than a Pentium processor and still be equally easy to produce from a semiconductor
manufacturing process (i.e., transistor speed) perspective.
The Pentium processor's superscalar microarchitecture, with its ability to execute two
instructions per clock, would be difficult to exceed without a new approach.
The new approach used by the Pentium Pro processor removes the constraint of linear
instruction sequencing between the traditional "fetch" and "execute" phases, and opens up
a wide instruction window using an instruction pool. This approach allows the "execute"
phase of the Pentium Pro processor to have much more visibility into the program's
instruction stream so that better scheduling may take place. It requires the instruction
"fetch/decode" phase of the Pentium Pro processor to be much more intelligent in terms of
predicting program flow. Optimized scheduling requires the fundamental "execute" phase to
be replaced by decoupled "dispatch/execute" and "retire" phases. This allows instructions
to be started in any order but always be completed in the original program order. The
Pentium Pro processor is implemented as three independent engines coupled with an
instruction pool as shown in Figure 1 below.
What is the fundamental problem to solve?
Before starting our tour on how the Pentium Pro processor achieves its high performance, it
is important to note why this three-independent-engine approach was taken. A fundamental
fact of today's microprocessor implementations must be appreciated: most CPU cores are not
fully utilized. Consider the code fragment in Figure 2 below:
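The original Figure 2 is not reproduced in this text, so the following C fragment is only a plausible stand-in matching the description that follows: one load that misses the cache, one instruction that depends on it, and two instructions that do not. The variable names r1 through r6 are merely meant to suggest registers.

    int example(const int *a, int r3, int r4, int r5, int r6)
    {
        int r1 = *a;          /* instruction 1: load, assume a cache miss */
        int r2 = r1 + r3;     /* instruction 2: depends on r1, must wait  */
        r5 = r5 + 1;          /* instruction 3: independent of the load   */
        r6 = r6 - r4;         /* instruction 4: independent of the load   */
        return r2 + r5 + r6;
    }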
The first instruction in this example is a load of r1 that, at run time, causes a cache miss.
A traditional CPU core must wait for its bus interface unit to read this data from main
memory and return it before moving on to instruction 2. This CPU stalls while waiting for
this data and is thus being under-utilized.
While CPU speeds have increased 10-fold over the past 10 years, the speed of main memory
devices has only increased by 60 percent. This increasing memory latency, relative to the
CPU core speed, is a fundamental problem that the Pentium Pro processor set out to solve.
One approach would be to place the burden of this problem onto the chipset but a
high-performance CPU that needs very high speed, specialized, support components is not a
good solution for a volume production system.
A brute-force approach to this problem is, of course, increasing the size of the L2 cache to reduce the miss ratio. While effective, this is another expensive solution, especially considering the speed requirements of today's L2 cache SRAM components. Instead, the Pentium Pro processor is designed from an overall system implementation perspective which will allow higher performance systems to be designed with cheaper memory subsystem designs.
Pentium Pro processor takes an innovative approach
To avoid this memory latency problem the Pentium Pro processor "looks-ahead" into its instruction pool at subsequent instructions and will do useful work rather than be stalled. In the example in Figure 2, instruction 2 is not executable since it depends upon the result of instruction 1; however both instructions 3 and 4 are executable. The Pentium Pro processor speculatively executes instructions 3 and 4. We cannot commit the results of this speculative execution to permanent machine state (i.e., the programmer-visible registers) since we must maintain the original program order, so the results are instead stored back in the instruction pool awaiting in-order retirement. The core executes instructions depending upon their readiness to execute and not on their original program order (it is a true dataflow engine). This approach has the side effect that instructions are typically executed out-of-order.
The cache miss on instruction 1 will take many internal clocks, so the Pentium Pro processor core continues to look ahead for other instructions that could be speculatively executed and is typically looking 20 to 30 instructions in front of the program counter. Within this 20- to 30-instruction window there will be, on average, five branches that the fetch/decode unit must correctly predict if the dispatch/execute unit is to do useful work. The sparse register set of an Intel Architecture (IA) processor will create many false dependencies on registers so the dispatch/execute unit will rename the IA registers to enable additional forward progress. The retire unit owns the physical IA register set and results are only committed to permanent machine state when it removes completed instructions from the pool in original program order.
Dynamic Execution technology can be summarized as optimally adjusting instruction execution by predicting program flow, analysing the program's dataflow graph to choose the best order to execute the instructions, then having the ability to speculatively execute instructions in the preferred order. The Pentium Pro processor dynamically adjusts its work, as defined by the incoming instruction stream, to minimize overall execution time.
Overview of the stops on the tour
We have previewed how the Pentium Pro processor takes an innovative approach to overcome a key system constraint. Now let's take a closer look inside the Pentium Pro processor to understand how it implements Dynamic Execution. Figure 3 below extends the basic block diagram to include the cache and memory interfaces - these will also be stops on our tour. We shall travel down the Pentium Pro processor pipeline to understand the role of each unit:
•The FETCH/DECODE unit: An in-order unit that takes as input the user program's instruction stream from the instruction cache and decodes the instructions into a series of micro-operations (uops) that represent the dataflow of that instruction stream. The program pre-fetch is itself speculative.
•The DISPATCH/EXECUTE unit: An out-of-order unit that accepts the dataflow stream, schedules execution of the uops subject to data dependencies and resource availability and temporarily stores the results of these speculative executions.
•The RETIRE unit: An in-order unit that knows how and when to commit ("retire") the temporary, speculative results to permanent architectural state.
•The BUS INTERFACE unit: A partially ordered unit responsible for connecting the three internal units to the real world. The bus interface unit communicates directly with the L2 cache supporting up to four concurrent cache accesses. The bus interface unit also controls a transaction bus, with MESI snooping protocol, to system memory.
Tour stop #1: The FETCH/DECODE unit.
Figure 4 shows a more detailed view of the fetch/decode unit:
Let's start the tour at the Instruction Cache (ICache), a nearby place for instructions to reside so that they can be looked up quickly when the CPU needs them. The Next_IP unit provides the ICache index, based on inputs from the Branch Target Buffer (BTB), trap/interrupt status, and branch-misprediction indications from the integer execution section. The 512 entry BTB uses an extension of Yeh's algorithm to provide greater than 90 percent prediction accuracy. For now, let's assume that nothing exceptional is happening, and that the BTB is correct in its predictions. (The Pentium Pro processor integrates features that allow for the rapid recovery from a mis-prediction, but more of that later.)
The ICache fetches the cache line corresponding to the index from the Next_IP, and the next line, and presents 16 aligned bytes to the decoder. Two lines are read because the IA instruction stream is byte-aligned, and code often branches to the middle or end of a cache line. This part of the pipeline takes three clocks, including the time to rotate the prefetched bytes so that they are justified for the instruction decoders (ID). The beginning and end of the IA instructions are marked.
Three parallel decoders accept this stream of marked bytes, and proceed to find and decode the IA instructions contained therein. The decoder converts the IA instructions into triadic uops (two logical sources, one logical destination per uop). Most IA instructions are converted directly into single uops, some instructions are decoded into one-to-four uops and the complex instructions require microcode (the box labeled MIS in Figure 4, this microcode is just a set of preprogrammed sequences of normal uops). Some instructions, called prefix bytes, modify the following instruction giving the decoder a lot of work to do. The uops are enqueued, and sent to the Register Alias Table (RAT) unit, where the logical IA-based register references are converted into Pentium Pro processor physical register references, and to the Allocator stage, which adds status information to the uops and enters them into the instruction pool. The instruction pool is implemented as an array of Content Addressable Memory called the ReOrder Buffer (ROB).
We have now reached the end of the in-order pipe.
Tour stop #2: The DISPATCH/EXECUTE unit
The dispatch unit selects uops from the instruction pool depending upon their status. If the status indicates that a uop has all of its operands then the dispatch unit checks to see if the execution resource needed by that uop is also available. If both are true, it removes that uop and sends it to the resource where it is executed. The results of the uop are later returned to the pool. There are five ports on the Reservation Station and the multiple resources are accessed as shown in Figure 5 below:
The Pentium Pro processor can schedule at a peak rate of 5 uops per clock, one to each resource port, but a sustained rate of 3 uops per clock is typical. The activity of this scheduling process is the quintessential out-of-order process; uops are dispatched to the execution resources strictly according to dataflow constraints and resource availability, without regard to the original ordering of the program.
Note that the actual algorithm employed by this execution-scheduling process is vitally important to performance. If only one uop per resource becomes data-ready per clock cycle, then there is no choice. But if several are available, which should it choose? It could choose randomly, or first-come-first-served. Ideally it would choose whichever uop would shorten the overall dataflow graph of the program being run. Since there is no way to really know that at run-time, it approximates by using a pseudo FIFO scheduling algorithm favoring back-to-back uops.
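As a purely software analogy (not the hardware implementation, whose exact policy is not spelled out here, and which additionally favors back-to-back uops), the oldest-ready choice described above can be sketched as: scan the pool in program order and dispatch the oldest uop that is data-ready and whose execution port is free. Everything in this C sketch is invented for illustration.

    #include <stddef.h>

    struct uop {
        int ready;        /* all operands available?            */
        int port;         /* which execution port it needs      */
        int dispatched;   /* already sent to an execution unit? */
    };

    /* Return the index of the oldest ready, undispatched uop whose port
       is free, or -1 if nothing can be dispatched this cycle. */
    int pick_uop(struct uop pool[], size_t n, const int port_free[])
    {
        size_t i;
        for (i = 0; i < n; i++) {                 /* oldest first */
            if (!pool[i].dispatched && pool[i].ready
                && port_free[pool[i].port])
                return (int)i;
        }
        return -1;
    }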
Note that many of the uops are branches, because many IA instructions are branches. The Branch Target Buffer will correctly predict most of these branches but it can't correctly predict them all. Consider a BTB that's correctly predicting the backward branch at the bottom of a loop: eventually that loop is going to terminate, and when it does, that branch will be mispredicted. Branch uops are tagged (in the in-order pipeline) with their fallthrough address and the destination that was predicted for them. When the branch executes, what the branch actually did is compared against what the prediction hardware said it would do. If those coincide, then the branch eventually retires, and most of the speculatively executed work behind it in the instruction pool is good.
But if they do not coincide (a branch was predicted as taken but fell through, or was predicted as not taken and it actually did take the branch) then the Jump Execution Unit (JEU) changes the status of all of the uops behind the branch to remove them from the instruction pool. In that case the proper branch destination is provided to the BTB which restarts the whole pipeline from the new target address.
Tour stop #3: The RETIRE unit
Figure 6 shows a more detailed view of the retire unit:
The retire unit is also checking the status of uops in the instruction pool - it is looking for uops that have executed and can be removed from the pool. Once removed, the uops' original architectural target is written as per the original IA instruction. The retirement unit must not only notice which uops are complete, it must also re-impose the original program order on them. It must also do this in the face of interrupts, traps, faults, breakpoints and mispredictions.
There are two clock cycles devoted to the retirement process. The retirement unit must first read the instruction pool to find the potential candidates for retirement and determine which of these candidates are next in the original program order. Then it writes the results of this cycle's retirements to both the Instruction Pool and the RRF (the retirement register file). The retirement unit is capable of retiring 3 uops per clock.
Tour stop #4: BUS INTERFACE unit
Figure 7 shows a more detailed view of the bus interface unit:
There are two types of memory access: loads and stores. Loads only need to specify the memory address to be accessed, the width of the data being retrieved, and the destination register. Loads are encoded into a single uop. Stores need to provide a memory address, a data width, and the data to be written. Stores therefore require two uops, one to generate the address, one to generate the data. These uops are scheduled independently to maximize their concurrency, but must re-combine in the store buffer for the store to complete.
Stores are never performed speculatively, there being no transparent way to undo them. Stores are also never re-ordered among themselves. The Store Buffer dispatches a store only when the store has both its address and its data, and there are no older stores awaiting dispatch.
What impact will a speculative core have on the real world? Early in the Pentium Pro processor project, we studied the importance of memory access reordering. The basic conclusions were as follows:
•Stores must be constrained from passing other stores, for only a small impact on performance.
•Stores can be constrained from passing loads, for an inconsequential performance loss.
•Constraining loads from passing other loads or from passing stores creates a significant impact on performance.
So what we need is a memory subsystem architecture that allows loads to pass stores. And we need to make it possible for loads to pass loads. The Memory Order Buffer (MOB) accomplishes this task by acting like a reservation station and Re-Order Buffer, in that it holds suspended loads and stores, redispatching them when the blocking condition (dependency or resource) disappears.
Tour Summary
It is the unique combination of improved branch prediction (to offer the core many instructions), data flow analysis (choosing the best order), and speculative execution (executing instructions in the preferred order) that enables the Pentium Pro processor to deliver its performance boost over the Pentium processor. This unique combination is called Dynamic Execution, and it is similar in impact to what "Superscalar" was for previous-generation Intel Architecture processors. While all your PC applications run on the Pentium Pro processor, today's powerful 32-bit applications take best advantage of Pentium Pro processor performance.
And while our architects were honing the Pentium Pro processor microarchitecture, our silicon technologists were working on an advanced manufacturing process - the 0.35 micron process. The result is that the initial Pentium Pro Processor CPU core speeds range up to 200MHz.
f:\12000 essays\sciences (985)\Computer\Microprocessors.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Inside of the mysterious box that perches ominously on your desk is one of the marvels of the modern world. This marvel is also a total enigma to most of the population. This enigma is, of course, the microprocessor. To an average observer a microprocessor is simply a small piece of black plastic that is found inside of almost everything.
In How Microprocessors Work they are defined as a computer's central processing unit, usually contained on a single integrated circuit (Wyant and Hammerstrom, 193). In plain English this simply means that a microprocessor is the brain of a computer, and it fits on a single chip. Winn L. Rosch compares one to an electronic equivalent of a knee joint which, when struck with the proper digital stimulus, reacts in exactly the same way each time (Rosch, 37). More practically, a microprocessor is a multitude of transistors squeezed onto as small a piece of silicon as possible to do math problems as fast as possible.
Microprocessors are made of many smaller components which all work together to make the chip work. A really good analogy for the way the inner workings of a chip operate can be found in How Microprocessors Work. In their book, Wyant and Hammerstrom describe a microprocessor as a factory and all of the inner workings of the chip as the various parts of a factory (Wyant and Hammerstrom, 71-103). Basically a microprocessor can be seen as a factory because, like a factory, it is sent something and is told what to do with it. The microprocessor factory processes information. The most basic unit of this information is the bit. A bit is simply on or off. It is either a one or a zero. Bits are put into 8-bit groups called bytes. The number 8 is used because it offers enough combinations to encode our entire language (2^8=256). If only 4 bits are used, only 2^4=16 combinations would be possible. This is enough to encode 9 digits and some operations. (The first microprocessors powered calculators.) A half byte is called a nibble and consists of 4 bits. In the world of computer graphics the combination of bits is more easily seen. In computer graphics bits are used to make color combinations, thus with more bits more colors are possible. Eight-bit graphics will display 256 colors, 16-bit will display 65,536, and 24-bit graphics will display 16.7 million colors. The bus unit is described as the shipping dock because it controls data transfers and functions between the individual pieces of the chip. The part of the chip that performs the role of a purchasing department is called the prefetch unit. Its job is to make certain that enough data is on hand to keep the chip busy. The decode unit performs the role of a receiving department. It breaks down complicated instructions from the rest of the computer into smaller pieces that the chip can manipulate more readily. The control unit is compared to the person who oversees the workings of the entire factory. It is the part of the chip that keeps all the other parts working together and coordinates their actions. The arithmetic logic unit is compared to the assembly line of the factory. It is the part of the microprocessor that performs the math operations. It consists of circuitry that performs the math and the registers which hold the necessary information. The memory management unit is likened to the shipping department of this digital factory. It is responsible for sending data to the bus unit. Together all of the individual pieces support each other to make this digital symbiosis work as fast as possible.
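A quick way to check the bit-count arithmetic above is to compute 2^n for the usual widths. This small, self-contained C snippet (the names and output format are mine) prints the number of distinct values that 4, 8, 16, and 24 bits can encode.

    #include <stdio.h>

    int main(void)
    {
        int widths[] = { 4, 8, 16, 24 };
        int i;

        for (i = 0; i < 4; i++) {
            unsigned long combos = 1UL << widths[i];   /* 2^n */
            printf("%2d bits -> %lu combinations\n", widths[i], combos);
        }
        return 0;   /* prints 16, 256, 65536, 16777216 */
    }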
To an outsider, computer nerd vernacular and all other forms of computer-people esoterica may or may not be frightening. Probably the most confused term in microprocessor performance is megahertz (MHz). Basically these are millions of cycles per second. This is a measurement of chip speed but is better considered the RPM of the chip (Knorr, 135). For example, a 100 MHz 486 processor cannot touch the speed of a Pentium running at only 60 MHz. This is because the Pentium packs more power and can do more per clock cycle. The computer bus is the data line that connects the microprocessor to the rest of the computer. The width of the bus (how many bits it consists of) controls how much data can be sent to the chip per clock cycle. MIPS, or millions of instructions per second, is simply how many instructions the chip can perform in one second divided by 1,000,000. RISC is a commonly used term in the computing world also. It is an acronym for Reduced Instruction Set Computer. Chips that incorporate RISC technology basically rely on simplicity to enhance performance. Motorola chips use this technology. The opposite of RISC is CISC, which stands for Complex Instruction Set Computer. These chips use more hardwired instructions to speed up processing. All Intel PC products fall into this category. Pipelining, superscalar architecture, and branch prediction logic are current technological buzzwords in the computer community. These technologies can be found in newer chips. Pipelining allows the chip to seek out new data while the old data is still being worked on (Wyant and Hammerstrom, 161). Superscalar architecture allows complex instructions to be broken down into smaller ones and then processed simultaneously through separate pipelines (Wyant and Hammerstrom, 161-163). Branch prediction logic uses information about the way a program has behaved in the past to try to predict what the program will do next (Wyant and Hammerstrom, 165). Bus speed is simply the speed in MHz at which the data bus runs. This determines how fast the microprocessor can communicate with the rest of the computer. A register is the part of the chip that holds the information that the chip is currently manipulating. The width of the register in bits determines how much data the chip can process simultaneously. Using very long instruction words is simply using instructions larger than 16 bits to increase the amount of data the chip can be sent at once. A newer tool for making chips run faster is to place a cache on the chip. This cache holds the data that the chip is most likely to need first. Since the data is stored inside the chip, the access time is lowered dramatically. In the future more and more of the computer will be integrated on the main processing unit. Line width is also a sign of the technological times. It is simply how small the smallest feature is on a chip. Basically, the smaller the lines, the more transistors can be squeezed onto the wafer, increasing performance while cutting manufacturing costs.
Non-technological issues also have a major effect on the microprocessor world. One such issue is heat. This may sound trivial, but a Pentium chip can and will burn the skin of a person who touches one that has been running for longer than a few minutes. Without a fan, most modern chips will melt and/or destroy themselves. To combat this, large aluminum heat sinks are attached to the chips and a large fan is placed in the case. Some users prefer to use a separate fan above the heat sink for added insurance. Operating voltages can also add to the heat problem. Chips run from either 3.3 or 5 volts. The 3.3-volt supply is preferred now because with less power less heat is generated, and in the case of laptops, battery life is extended.
Credit for the invention of the microprocessor is given to Intel. This first microprocessor was the 4004, released in 1971. This single chip matched the performance of the room-sized ENIAC computer of the 1940s (Wyant and Hammerstrom, 19). This chip could only support a four-bit bus. These four bits only offered the possibility of coding 16 symbols (2^4=16). Sixteen symbols was enough for digits 1-9 and then some operators. This limited the 4004 to calculator usage. The 4004 ran at 108 kHz, which is about 1/10 of 1 MHz (Rosch, 66). The smallest feature on the chip measured 10 microns, and the chip contained 2,300 transistors.
The next generation of Intel chips used an 8-bit data bus. The first member of this generation was released in 1972 and was called the 8008. This chip was the same as the 4004, but it had 4 more bits on each register. This chip had enough bits to code 256 symbols (2^8=256). This number is easily enough to encode our alphabet, numerals, punctuation marks, etc. The 8008 also ran a little faster than the 4004, with its speedy clock of 200 kHz. The 8008 contained 3,500 transistors and had line widths of 10 microns. Both chips had a MIPS rating of 0.06 (Rosch, 66).
The next member of the Intel family was born in 1974 and was called the 8080. This chip was intended to handle byte-sized data (8 bits). The 8080 contained 6,000 transistors and used 6-micron technology. This chip performed at 0.65 MIPS and had an internal clock speed of 2 MHz. This was one of the first chips capable of running a small computer (Rosch, 66).
In June of 1978 the 8086 family was released by Intel. These chips used 16-bit registers. The fastest chip in this series ran at 10 MHz and could execute 0.75 MIPS. This chip forced engineers of the time to begin developing fully 16-bit devices, which were more expensive than their 8-bit brethren. Because of this, the 8086 family was considered ahead of its time (Rosch, 67-68).
A year later Intel introduced the 8088. This chip was a step backwards in chip evolution with its 8-bit data bus. The 8088 could process 0.64 MIPS with its 6,000 transistors and used 6-micron technology. This chip is worth mentioning primarily because IBM chose to use it in its first personal computer. IBM was able to use the 8088 with existing 8-bit hardware, which was more cost effective. Later IBM began using the 8086 in its newer systems (Rosch, 68).
In 1982 Intel released the 80286. The 286 family was available in clock speeds of 8, 10, and 12 MHz, which could execute 1.2, 1.5, and 1.66 MIPS respectively. The 80286 contained 134,000 transistors with 1.5-micron technology. These chips all used a 16-bit data bus and were used by IBM in its AT models. This was also the first Intel chip to support virtual memory, that is, using disk space as if it were RAM (Random Access Memory). To allow full downward compatibility the 286 was designed to have two operating modes, real mode and protected mode. Real mode mimics the operation of an 8086. Protected mode allows multiple applications to be run simultaneously without interfering with each other (Rosch, 70-71).
The next member of the Intel family was added in November 1985 and was the 80386. These chips are offered in speeds of 16, 20, 25, and 33 MHz and can process 5.5, 6.5, 8.5, and 11.4 MIPS respectively. The 80386 contains 275,000 transistors and uses 1.5-micron technology. The 386 family doubled the register size to 32 bits. Also, the 386 uses 16 bytes of prefetch cache in which the chip stores the next few instructions. The 386 comes in three models, called the 386DX, 386SX, and 386SL. The 386DX was the original and most powerful. The 386SX is a more economical sibling to the DX; it is basically a scaled-down, less powerful DX. Also, the SX uses a 16-bit data bus. The SL also uses 16-bit buses, but it includes power-saving features targeted at notebook usage. The SL uses 1.0-micron technology and contains 855,000 transistors (Rosch, 72-78).
The 80486 family was introduced in April 1989 and became a "better 386" (Rosch, 78). The 486 was originally released in a DX model with speeds of 25, 33, and 50 MHz that processed 20, 27, and 41 MIPS respectively. The DX also contains a math coprocessor, or floating point unit, that helps speed up math operations. The 486DX uses a 32-bit bus and contains 1,200,000 transistors. It uses 1.0-micron technology in the 25 and 33 MHz models, but the 50 MHz model uses 0.8. The next to be released was the 486SX. The SX was designed to cut cost at the price of not having a math coprocessor. As a result, the SX will not perform as well as the DX in math-intensive operations. The SX contains 1,185,000 transistors and uses the same technology as the DX. The SX is available in 16, 20, 25, and 33 MHz models that process 13, 16.5, 20, and 27 MIPS respectively. To add the power of an FPU (Floating Point Unit) to the SX, Intel released the OverDrive upgrade processors in March 1992. The first, the 486DX2, incorporated clock-doubling technology. These chips operate at double the bus speed. They are available in 50 and 66 MHz models that can process 41 and 54 MIPS respectively. The 50 MHz model was designed to replace the 25 MHz 486SX and the 66 MHz model the 33 MHz 486SX. The OverDrive chips contain 1.2 million transistors. The next to be released was the SL model, which was, like the 386SL, targeted at laptop usage. The SL contains 1.4 million transistors and can process 15.4, 19, and 25 MIPS while running at 20, 25, and 33 MHz respectively. The 486DX4 was the next OverDrive chip to be released. It contains clock-tripling technology. The DX4 can turn 33 and 25 MHz 486s into a DX4-100 and a DX4-75 respectively. These chips can process 60 and 81 MIPS running at 75 and 100 MHz respectively. The DX4 uses 0.6-micron technology (Rosch, 84-85).
The next addition to the Intel family was the Pentium. The Pentium was originally released in a 60 MHz model that operated at 5 volts. This chip contains 3,100,000 transistors and can process 100 MIPS. The next to be released was the 66 MHz model. It uses the same technology but is a 3.3-volt chip and can process 112 MIPS. Currently the Pentium is available in 66, 75, 90, 100, 120, 133, 150, and 166 MHz models. Beyond the 75 MHz model, all Pentiums use 0.6-micron technology. A 180 MHz model is slated for future release. The Pentium family, like all of Intel's chips, uses CISC technology. Pentiums also use pipelining, superscalar architecture, and branch prediction logic. A Pentium OverDrive is also available for upgrading 486 systems to Pentium technology. The Pentium OverDrive is available in 63 and 83 MHz versions (Rosch, 85-87).
After the Pentium, the only more advanced chip Intel offers for personal use is the Pentium Pro. This chip has only been available for a short time and is targeted at workstation and server usage. It only runs Windows NT and native 32-bit software at an increased speed. When running 16-bit software, the less powerful Pentium will outperform its larger sibling. The Pentium Pro also contains 256K (256,000 bytes) of on-chip cache memory.
The only certainty in the future of microprocessors is constant improvement. One prediction for the future is called Moore's Law. This prediction is named after Intel cofounder Gordon Moore, who presented it in 1965. The law states that transistor densities will double every two years. Line width is also continuing to shrink and is estimated to be at 0.2 microns by the turn of the century. When all is considered, the future of computers is very exciting (Wyant and Hammerstrom, 184-185).
Knorr, Eric. "From 586 to Pentium Pro: Choosing Your Dream PC." PC World February 1996: 133-142.
Rosch, Winn L. The Hardware Bible. Indianapolis: SAMS, 1994.
Wyant, Gregg, and Tucker Hammerstrom. How Microprocessors Work. Emeryville: Ziff-Davis, 1994.
f:\12000 essays\sciences (985)\Computer\Microsoft .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
* Get More Information About Windows 95 *
For more information about Microsoft Windows 95, take
a look at Microsoft's WinNews file sections, which can
be found on most major online services and networks.
On the Internet use ftp or the World-Wide-Web
(ftp://ftp.microsoft.com/PerOpSys/Win_News,
http://www.microsoft.com).
On The Microsoft Network, open Computers and
Software\Software Companies\Microsoft\Windows 95\
WinNews.
On CompuServe, type GO WINNEWS.
On Prodigy JUMP WINNEWS.
On America Online, use keyword WINNEWS.
On GEnie, download files from the WinNews area under
the Windows RTC.
NEW SERVICE: To receive regular biweekly updates on
the progress of Windows 95, subscribe to Microsoft's
WinNews Electronic Newsletter. These updates are
e-mailed directly to you, saving you the time and
trouble of checking our WinNews servers for updates.
To subscribe to the Electronic Newsletter, send
Internet e-mail to enews@microsoft.nwnet.com with
the words SUBSCRIBE WINNEWS as the only text in your
message.
f:\12000 essays\sciences (985)\Computer\Microsoft Access An Overview.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Executive Summary
Microsoft Access 97 for the Windows 95 and Windows NT operating systems provides relational database power for your programs. Its visual design and event-driven nature make Access a powerful tool that is easy to learn. Access is quick and easy to use, which makes it a popular tool with home users. Small business owners benefit greatly from Access because they can develop their own database applications and eliminate the cost of third-party developers.
Microsoft Access 97 makes it easy to turn data into answers and includes tools that help even first time users get up and running quickly. For example, the Database Wizard can automatically build custom databases in minutes. The Table Analyzer Wizard quickly transforms linear lists or spreadsheets into powerful relational databases.
Access 97 offers greatly enhanced 32-bit performance with smaller forms, more efficient compilation, and better data manipulation technology for quicker queries and responses. Other features further improve execution time and help you build fast business solutions. The Performance Analyzer Wizard automatically recommends the best way to speed up your database. Additionally, Visual Basic for Applications and OLE make it simple to build quick solutions and integrate them with other Microsoft Office programs.
Table of Contents
WHAT IS ACCESS?
A BRIEF HISTORY OF ACCESS
ACCESS 1.X / ACCESS 2.0
ACCESS 95 / ACCESS 97
HARDWARE REQUIREMENTS
RAPID APPLICATION DEVELOPMENT
THE EVENT DRIVEN MODEL
VBA IN ACCESS 95 / 97
THE JET DATABASE ENGINE
WHERE IS ACCESS TYPICALLY USED?
ACCESS IN CORPORATE BUSINESSES
ACCESS IN SMALL BUSINESSES
ACCESS AT HOME
FEATURES OF ACCESS 97
INTEGRATING ACCESS WITH OTHER APPLICATIONS
MICROSOFT OFFICE
ACCESS AND VISUAL BASIC TOGETHER
CONCLUSION
GLOSSARY
ENDNOTES
BIBLIOGRAPHY
What is Access?
Microsoft Access for Windows is a relational database management system. Access uses the graphical abilities of Windows so that you can easily view and work with your data in a convenient manner. Access makes your data available to you quickly and easily, and presents it in an effective and readable way. Its ability to locate information using query by example eliminates keystrokes and consequently speeds up the development process.
Access lets you examine your data in a variety of ways. Sometimes the information in a record is easier to understand if the record's fields are arranged on a form or a report in a visually pleasing way; sometimes you need to see the maximum number of data records possible on your screen.
A Microsoft Access form is a special window that is used for data entry. You can use the visual capabilities of Windows to create a custom form using a combination of graphics and text. Forms can present data in a format that is easier to read and understand than a data sheet. The form wizard is an internal tool that helps you create data entry forms by asking the user to answer a number of predefined questions. The wizard asks about how the data is to be displayed and then sets up the layout based on the responses.
Overall, Access is a tool that allows users to create, edit, and maintain sophisticated databases. Users can accomplish this without programming skills. However, Access Basic provides programmers with additional abilities to automate and extend the functionality of their database programs.
A Brief History of Access
Access 1.x / Access 2.0
Access 1.x and 2.0 run under Windows 3.1. Access 1.0 debuted in 1993 and set the standard for databases in Windows. At this point, Access was a little too much for home use. The typical home PC user was not at the same level as Access with respect to application development and understanding of relational data modeling. Access had only a few wizards and thus a long learning curve. Additionally, few home PCs had adequate RAM and processor speeds to accommodate Access.
The arrival of Access 2.0 changed things quite a lot, offering more wizards and add-ins to supplement the package.
Access 95 / Access 97
Access 95 and 97 are 32-bit applications which run under Windows 95 or Windows NT. Access 95 is also frequently referred to as Access 7.0. Microsoft has improved these products over previous versions of Access so that they integrate better with other applications in the Microsoft Office suite. The 32-bit versions are also more heavily oriented towards the home user than previous versions.
Access 95 introduced Visual Basic for Applications as a means of integration and compatibility. Users could use Access to define data structures and their relationships, then export the generated schema to VB. This gave Access 95 an edge over competing products such as Paradox and Delphi.
Access 97 is a product with multiple personalities. On the surface, a quick tour of Access leads you to believe that it was created primarily for novice database programmers. Like a friendly personal assistant, Access helps to organize and store information by using features such as the following:
· A well organized Database window
· Wizards for constructing database objects
· A variety of built-in properties to define each object
· A simplified macro scripting language
Below the surface lies a completely different infrastructure. Access 97 has the following additional abilities:
· Automation allows Access to print reports from within Visual Basic or to edit Access 97 table data while inside an Excel worksheet.
· The Visual Basic for Applications programming language gives you the building blocks for creating robust applications in Access 97 and for automating complex business processes.
· The Access relational data model and Structured Query Language (SQL) foundation allow you to make uncomplicated representations of complex data.
· The Jet Database Engine exposes programmable Data Access Objects that provide your program code direct access to database data and structures.
Throughout the different versions of Access its user friendly interface has not changed much. It still makes designing a database look relatively easy, but it has become more flexible and powerful.
Hardware Requirements
Access is a resource-hungry application. However, the hardware requirements for developers and end users are different. Be sure to note the actual, as opposed to recommended, requirements.
What Hardware Does Your System Require?
According to Microsoft documentation, the official minimum requirements to run Microsoft Access 7.0 for Windows 95 are as follows:
· 386DX processor
· Windows 95 or Windows NT 3.51 or later
· 12 megabytes of RAM on a Windows 95 machine
· 16 megabytes of RAM on a Windows NT machine
· 14 to 42 megabytes hard-disk space, depending on whether you perform a Compact, Typical, or Custom installation
· 3 1/2-inch high-density disk drive
· VGA or higher resolution (SVGA 256-color recommended)
· Pointing device
Recommended specifications for a development machine are much higher because you will probably run other applications along with Microsoft Access. In addition to Microsoft's requirements, these are the recommended requirements:
· A Pentium or Pentium Pro processor - 100 MHZ or faster
· A fast ATA-2 or SCSI hard drive
· At least 20 megabytes of RAM for Windows 95, and 24 megabytes for Windows NT. Increase this amount if you like to run multiple applications simultaneously.
· A high-resolution monitor (larger is better) and SVGA graphics
The bottom line for hardware requirements is that the more you have, the better off you are. The increased speed and performance will make you much happier when you use Access or any other large, powerful program.
Rapid Application Development
Visual application design tools such as Access, Delphi, Visual Basic, and Oracle Forms allow the user to begin program development with the user interface. This approach is radically different from traditional program development, where the user interface is typically designed last. Access allows the programmer to draw the individual components of the program on the screen, then link code to each object on that form. The programmer creates the interface much as he would use a paint program: different objects and painting tools are selected from toolbars and applied to the form with a click of the mouse.
This process is commonly referred to as Rapid Application Development, or RAD. RAD allows the programmer to develop applications quickly, with very little turnaround time and a minimal amount of coding. What RAD eliminates is duplication of effort: GUI design elements common to all Windows applications do not have to be recreated for each program.
The Event Driven Model
Event-driven programming is a concept which goes hand in hand with RAD tools. Access and almost all competing RAD products fully support the event-driven model. Traditional programs have a well-defined flow of control: they execute sequentially from beginning to end. Event-driven programs, on the other hand, do not have a logical beginning or ending point. The program will actually do nothing - until an event occurs. Once an event occurs, the program responds accordingly, depending on the type of event. Some examples of events include an application being run, mouse clicks, mouse movements, and keystrokes.
Unknown to the user, Windows traps events and notifies the application behind the scenes. Access traps these notifications, called messages, and allows the programmer to design his program around those events. For example, a double-click event on an OK button could initiate a database query or anything else the programmer desires.
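To show what those messages look like one level below Access, here is a minimal sketch in C against the raw Win32 API (an ANSI build is assumed; the API names are standard Win32, but the program itself is only an illustration, not anything an Access or VBA developer would actually write). The window class is registered with CS_DBLCLKS so that double-click messages arrive, and the message loop at the bottom is the part that sits idle until an event occurs.

    #include <windows.h>

    /* The window procedure: Windows dispatches each message (event) here. */
    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_LBUTTONDBLCLK:
            /* A double-click "event"; a real application could start
               a database query (or anything else) at this point. */
            MessageBox(hwnd, "Double-click event received", "Event", MB_OK);
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
    {
        WNDCLASS wc = {0};
        MSG m;

        wc.style         = CS_DBLCLKS;      /* needed to receive double-clicks */
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = "EventDemo";
        RegisterClass(&wc);

        CreateWindow("EventDemo", "Event-driven demo",
                     WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                     CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                     NULL, NULL, hInst, NULL);

        /* The message loop: the program does nothing until a message
           arrives, then hands it to WndProc above. */
        while (GetMessage(&m, NULL, 0, 0) > 0) {
            TranslateMessage(&m);
            DispatchMessage(&m);
        }
        return 0;
    }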
VBA in Access 95 / 97
Visual Basic for Applications is the development language for Microsoft Access 95. It provides a consistent language for application development within the Microsoft Office suite. The core language, its constructs, and the environment are the same within Microsoft Access for Windows 95, Microsoft Visual Basic, Microsoft Excel, and Microsoft Project.
The early versions of Access used a coding engine called Access Basic, or EB (Embedded Basic). It had some similarities to its siblings, VBA and the Basic dialects in Excel and Project. However, a major difference is that Access Basic was written in assembly language, while VBA was written entirely in C.
Microsoft was highly motivated to implement one common Basic engine for all of its development applications. The benefits of this standardization to the developer are:
· Reduced learning curve. Microsoft is distributing Basic more widely each year, adding it to everything from the entire Office suite to its Internet browsers and servers. As a solution developer, you can now learn one rendition of the Basic language and one development interface, then carry your skills and experience with Access VBA into your work with other VBA host products.
· Code portability. One of the current developer buzzwords is reusable objects, a term that describes self-contained servers (or something that provides services to an application). In order for a code procedure to qualify as a reusable object, you must be able to carry code from one host application into another to use it unmodified. VBA provides this capability.
· Shared resources. By sharing a centralized coding and run-time environment, multiple tools and applications on your machine share the same dynamic link libraries and type libraries. The performance of your workstation improves when you have fewer resources loaded to memory, and this speeds up your development efforts. Disk space consumption, application deployment efforts, and version control issues are all favorably impacted when multiple applications on your machine share central services.
Simple Access applications can be written using macros. Although macros are great for quick prototyping and very basic application development, most serious Access development is done using the VBA language. Unlike macros, VBA provides the ability to do the following (a brief code sketch follows the list):
· Work with complex logic structures
· Utilize constants and variables
· Take advantage of functions and actions not available in macros
· Loop through and perform actions on table rows
· Perform transaction processing
· Programmatically create and work with database objects
· Implement error handling
· Create libraries of user-defined functions
· Call Windows API functions
· Perform complex DDE and OLE automation commands
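As a minimal sketch of a few items from this list, the following VBA procedure loops through the rows of a table, wraps the work in a transaction, and traps errors. The table and field names (Customers, Balance) are invented for illustration only:

    Sub RaiseBalances()
        On Error GoTo ErrHandler                ' error handling
        Dim ws As Workspace, db As Database, rs As Recordset
        Set ws = DBEngine.Workspaces(0)
        Set db = CurrentDb
        ws.BeginTrans                           ' transaction processing
        Set rs = db.OpenRecordset("Customers")
        Do Until rs.EOF                         ' loop through table rows
            rs.Edit
            rs!Balance = rs!Balance * 1.05      ' apply a 5 percent increase
            rs.Update
            rs.MoveNext
        Loop
        ws.CommitTrans
        rs.Close
        Exit Sub
    ErrHandler:
        ws.Rollback                             ' undo the partial work on error
        MsgBox "Update failed: " & Err.Description
    End Sub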
The Jet Database Engine
Microsoft Access 97 ships with the Microsoft Jet database engine. This is the same engine that ships with Visual Basic and with Microsoft Office. Microsoft Jet is a 32-bit, multithreaded database engine that is optimized for decision-support applications and is an excellent workgroup engine.
Microsoft Jet has advanced capabilities that have typically been unavailable on desktop databases. These include:
· Access to different data sources. Microsoft Jet provides transparent access, via industry standard ODBC drivers, to over 170 different data formats. These formats include dBASE, Paradox, Oracle, Microsoft SQL Server, and IBM DB2. Developers can build applications in which users read and update data simultaneously in virtually any data format.
· Engine-level referential integrity and data validation. Microsoft Jet has built-in support for primary and foreign keys, database-specific rules, and cascading updates and deletes. This frees the developer from having to write procedural code to enforce data integrity. The engine itself consistently enforces these rules, so they are available to all application programs.
· Advanced workgroup security features. Microsoft Jet stores user and group accounts in a separate database, typically located on the network. Object permissions for database objects are stored in each database. By separating account information from permission information, Microsoft Jet makes it much easier for system administrators to manage one set of accounts for all databases on a network.
· Updateable dynasets. As opposed to many database engines that return query results in temporary views or snapshots, Microsoft Jet returns a dynaset that automatically propagates any changes users make back to the original tables. This means that the results of a query, even those based on multiple tables, can be treated as tables themselves. Queries can even be based on other queries.
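To illustrate the updateable dynaset idea, the short sketch below edits the result of a saved query and lets Jet write the change back to the underlying table. The query and field names (qryCustomerOrders, ShipCity) are assumptions for the example only:

    Sub UpdateFromQuery()
        Dim db As Database, rs As Recordset
        Set db = CurrentDb
        ' Open the query's results as a dynaset rather than a static snapshot.
        Set rs = db.OpenRecordset("qryCustomerOrders", dbOpenDynaset)
        If Not rs.EOF Then
            rs.Edit
            rs!ShipCity = "Seattle"   ' change a field in the joined result
            rs.Update                 ' Jet propagates the edit to the base table
        End If
        rs.Close
    End Sub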
Where is Access Typically Used?
Access in Corporate Businesses
Many midsize and large companies rely heavily on Access, but none rely exclusively on Access. Companies of any significant size usually have complex data needs, multiple database platforms and dozens to thousands of application users. In such an environment, no single product is sufficient to satisfy all needs. Access becomes one piece of an often complex puzzle of application development tools.
Virtually all technology companies with more than one hundred employees have some in-house development staff. These departments are usually called Information Systems (IS) or Information Technology (IT). Corporations with changing technology have the challenge of efficiently retraining their application development staff. Access wins big in such a circumstance for two main reasons.
First, Access has a reasonable learning and implementation cycle. It is neither the easiest nor the hardest development tool to learn. There are enough books, videos, courses, and conferences built around Access that companies can shop competitively and select the best staff retraining option they can find. There are also thousands of consultants and contractors that can help the IT staff make the transition to Access without wandering in the dark.
Second, Access is flexible. Access fits well into corporate development models because it can be extended in the following ways:
· Access coexists with other applications. Companies using Excel or Word find Access easy to add to existing desktops. Users are comfortable with the Office style user interface, appreciate the built-in data links between each of the products, and enjoy features like drag-and-drop. The IT staff can use Automation to add extra capabilities to the exchange of information between these products.
· Access connects to existing data. Using ODBC technology and ISAM drivers, Access can import or link to text file data, spreadsheet data, Xbase data, Paradox data, Web pages, and SQL based data residing on platforms ranging from PC servers to mainframes. Companies can continue to use data stored in non-Access formats and easily convert such data to native Access data when required.
· Access uses Basic. Many IT programmers have been writing in some dialect of Basic for years and find the transition to programming in Access only slightly challenging. Also, where Visual Basic is already part of an IT department's tool set, Access fits in well due to its many similarities to and compatibility with VB.
Access in Small Businesses
Access is best suited for small businesses. Microsoft had this market in mind when it started creating wizards in the Office product line. Because this market is composed of people short on both time and money, they will not use Access if it cannot solve their problems quickly and cost effectively.
Many small business owners and managers use Access themselves as a productivity and decision support tool. Small businesses frequently have only a few computer literate employees on staff, so the ability of Access to manage a few dozen simultaneous users is quite adequate. Business owners on a tight budget find that they can learn enough about Access to produce a simple but effective custom application with a few weeks of training and a few more weeks of development time.
Of course, a very small business may not even need Access for the application development power it provides. Even without an application and its forms, you can be productive with Access by entering data into table datasheets, running summary queries, exporting data to Excel for analysis, and printing reports.
Access at Home
Four years ago, if you thought using Access 1.x at home was like using a sledgehammer to swat a fly, you were correct. At that time, home PC users lacked sophistication and most could not grasp the relational data model. Access had only a few wizards and a long learning curve. Few home PCs had the 16 megabytes of memory and 486 or Pentium processors that Access demands.
The current home marketplace is quite different. The explosion of multimedia PCs has given many home PC users more than enough power to run Access. Microsoft Office Professional, of which Access is a part, is convenient for home users who want to use the same software at home that they have already learned to use at work.
If you use or intend to use Access at home, you most likely fit into one of two categories:
· you are a business user bringing Access work home
· you are a home user who knows Access through your job, and you want your home machine to resemble your work machine.
It is natural to reason that if Access can manage your business data, it can certainly handle your personal data as well.
When you create a new database in Access 97 you can select a template for the Database Wizard to meet your specific purpose. Some of these database templates, such as Book Collection, Donations, and Household Inventory, are quite obviously designed for home PC users.
Features of Access 97
· Database Wizard. This can help you create a database to manage home data using a standard template. The resulting application can then be modified.
· Table Wizard. This steps you through the process of creating commonly used tables and relationships.
· Form Wizard. This tool saves time by removing most of the tedious form layout work.
· Assistant. The Assistant character answers simple help requests and is designed to help new users feel less intimidated by the product.
· Import Wizards. Many home users keep their records in products that produce spreadsheet or text format files. The Import Wizards help you load such data into Access.
· Easy queries. Access 97 has a powerful SQL based query engine, but provides home users with layers of usability features (query wizards, sortable datasheets, query filters, and the like) on top of that engine. This enables users to easily ask everyday personal questions like "What is the oldest bottle of wine in my collection?" (a sample query appears after this list).
· Macros. Home users often prefer to use macro scripts rather than program in Basic.
· Add-ins. As more copies of Access enter the home market, third parties will produce additional tools and wizards appropriate for home users.
· Export to Word. Historically, home PC users have spent more time in their word processors than in their database software. Access makes copying and merging data to Word easy.
· Publish to the Web. If you maintain a home page on the Internet you can use the new Internet data publishing features to help translate data into HTML.
· Multi-user. Access makes data available to workgroups of multiple users by providing built-in record locking. This is available in forms and table datasheets without any programming.
· Visual Basic for Applications. Access is highly programmable because its VBA language provides the ability to write custom procedures and because it provides event notifications that can be detected from code. Also, existing code from Visual Basic or Excel VBA libraries can be easily ported to Access VBA code libraries.
· Forms. IT groups can create complex entry/edit forms which provide selective access to records, validation of data, query-by-form capabilities, and spell checking.
· Reports. Corporate managers make many of their daily decisions from reports. Access lets them use graphical reports and can filter the reports using queries and parameters. Reports can also be connected to linked external data.
· SQL. Most IT programmers have been exposed to SQL while working on minicomputer or mainframe databases. They can quickly grasp the query capabilities of Access.
· Intranet capabilities. Access applications can provide users with links to Web pages on a corporate intranet through hyperlinks on form controls and in table fields.
· Interoperability. Features like Automation from Access to Excel and Word or the new Publish to the Web Wizard give users flexibility when they publish and report company data. Interoperability is covered in more depth in the following section.
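The sample query promised above is sketched here in VBA. The table and field names (Wines, WineName, Vintage) are hypothetical; the SQL string is the sort of statement the query wizards would build for you:

    Sub ShowOldestWine()
        Dim db As Database, rs As Recordset
        Set db = CurrentDb
        Set rs = db.OpenRecordset( _
            "SELECT TOP 1 WineName, Vintage FROM Wines ORDER BY Vintage;")
        If Not rs.EOF Then
            MsgBox "Oldest bottle: " & rs!WineName & " (" & rs!Vintage & ")"
        End If
        rs.Close
    End Sub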
Integrating Access with Other Applications
Microsoft Office
Access is an excellent tool for multifaceted solutions that involve integration with other Microsoft applications. Access 97 communicates better than ever with its siblings in Microsoft Office because of the following features:
· Drag-and-drop. You can drag-and-drop form data, cells from a table datasheet, and entire table and query objects into Excel worksheets and Word documents. Conversely, you can drag-and-drop Excel cells into Access to create a new table. You can also drop Access objects onto the Windows desktop to create shortcuts to databases.
· Save as Rich Text Format. You can save the output of a table datasheet, a form, or a report as a Rich Text Format (RTF) file that can be loaded into Word with the formatting preserved.
· Mail Merge Wizard. Using this wizard you can link a Word mail merge document to data in Access and retrieve the latest data from Access whenever you print your Word merge document.
· Save as an Excel worksheet. You can save the output of a table datasheet, a form, or a report as an Excel file with the formatting preserved.
· Excel AccessLinks. The AccessLinks add-in program in Excel lets you create Access forms and reports using data in Excel and export data from Excel into Access tables.
· E-mail attachments. Using the SendObject macro action or the File Send... menu selection, you can attach an Access datasheet, form, report, or module to an e-mail message as a Rich Text Format file, an Excel worksheet, or a text file (see the sketch after this list).
· Common interface elements. The new Office 97 Assistant and Command Bar features provide a common set of user interface construction tools.
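The sketch promised in the e-mail attachments item above uses the SendObject action from VBA. The report name and address are hypothetical, and sending mail this way assumes a mail client is configured on the machine:

    Sub MailSalesReport()
        ' Attach the report "rptMonthlySales" to a message as a Rich Text Format file.
        DoCmd.SendObject acSendReport, "rptMonthlySales", acFormatRTF, _
            "manager@example.com", , , "Monthly sales", "Report attached.", False
    End Sub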
Access and Visual Basic Together
With Access 2.0, a significant two-way migration of developers occurred. Many Access developers realized that the investment they had made in learning Access Basic enabled them to learn Visual Basic more easily and added another powerful product to their skill set. From the other direction, most Visual Basic programmers adopted the Jet Database Engine as their preferred file-server database technology and adopted Access to create their database structures, queries, and reports.
Thus, many Access developers became Visual Basic developers, and the reverse. This trend will only accelerate with the 97 versions of these products. The following three key areas help illustrate this point:
· Visual Basic for Applications. Both Visual Basic 5 and Access 97 utilize the same programming language engine. Program code developed in either environment can be easily ported to the other. The benefits include the following:
· You can create one common code library with procedures that work in both environments.
· Developers can be trained in one language and use it in multiple products, including Access 97, Excel 97, Project 97, PowerPoint 97, Visual Basic 5, and Word 97.
· You can quickly prototype applications destined for Visual Basic 5 in Access 97 using the Table and Form Wizards and some simple navigation code, then preserve any VBA code when moving it over to VB 5.
· Automation. The OLE communication wire between Access 97 and Visual Basic 5 runs in both directions:
· You can use Visual Basic 5 to drive Access 97 as an Automation server for editing table data or printing database reports from within a VB 5 application (a short sketch follows this list).
· You can create applications in VB 5 that are specifically designed to be OLE servers to Access 97, enhancing the capabilities of Access 97 while providing the faster performance of a compiled application.
· You can build ActiveX controls in Visual Basic or Visual C++, or buy them and use the same control and code to extend both Access 97 and VB 5. Both products are host containers for ActiveX controls (OCX files).
· Jet Database Engine. Visual Basic 5 makes even broader use of Jet through the same Data Access Objects coding language as Access 97 uses. More and more developers will create multifaceted solutions that use both Access 97 and VB 5 with the same back-end database in Jet.
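The Automation sketch mentioned above might look like the following when written in Visual Basic. It uses late binding, so no reference to the Access type library is required; the database path and report name are hypothetical:

    Sub PrintReportFromVB()
        Dim appAccess As Object
        Set appAccess = CreateObject("Access.Application")   ' start Access as a server
        appAccess.OpenCurrentDatabase "C:\Data\Sales.mdb"    ' hypothetical database
        appAccess.DoCmd.OpenReport "rptMonthlySales"         ' prints the report
        appAccess.Quit
    End Sub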
Conclusion
Microsoft uses continuous user-driven research programs to gain insight into how customers use Microsoft Access and how it could be improved. Based on this extensive research, Microsoft designed Access for Windows 95 around the following design goals:
· Make it easier for people to get their work done using a database
· Strengthen integration with Microsoft Office applications
· Provide greater flexibility to a broad range of computer users
· Make it easier for developers to create custom database solutions
The result is that Microsoft Access for Windows 95 is the easiest to use and most integrated desktop database available. It includes innovative technologies that provide all types of users with compelling reasons to make Microsoft Access a standard part of their business computing desktops.
Glossary
ActiveX Microsoft's answer to Java. ActiveX is a stripped down implementation of OLE designed to run over slow Internet links.
API Application Program Interface. The interface (calling conventions) by which an application program accesses operating system and other services. An API is defined at source code level and provides a level of abstraction between the application and the kernel (or other privileged utilities) to ensure the portability of the code.
DDE Dynamic Data Exchange. A Microsoft Windows 3 hotlink protocol that allows application programs to communicate using a client-server model. Whenever the server (or "publisher") modifies part of a document which is being shared via DDE, one or more clients ("subscribers") are informed and include the modification in the copy of the data on which they are working.
DLL Dynamically Linked Library. A library which is linked to application programs when they are loaded or run rather than as the final phase of compilation. This means that the same block of library code can be shared between several tasks rather than each task containing copies of the routines it uses.
GUI Graphical User Interface.
HTML Hyper Text Markup Language.
ISAM Indexed Sequential Access Method. File access method supporting both sequential and indexed access.
IT Information Technology.
OCX OLE custom controls. An Object Linking and Embedding (OLE) custom control allowing infinite extension of the Microsoft Access control set.
ODBC Open DataBase Connectivity. A standard for accessing different database systems. There are interfaces for Visual Basic, Visual C++, SQL and the ODBC driver pack contains drivers for the Access, Paradox, dBase, Text, Excel and Btrieve databases.
OLE Object Linking and Embedding. A distributed object system and protocol from Microsoft.
RAD Rapid Application Development
RTF Rich Text Format. An interchange format from Microsoft for the exchange of documents between Word and other applications.
f:\12000 essays\sciences (985)\Computer\Microsoft.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS
MICROSOFT HISTORY
EARLY INFLUENCES
FIRST BUSINESS VENTURE
EDUCATION ATTEMPT
THE MOTIVATIONAL SIDE OF FEAR
A JAPANESE CONNECTION
IBM INFLUENCE
SURVIVAL OF THE FITTEST
A CRUCIAL DEAL
COMPETITION ERRORS
BIRTH OF WINDOWS
MISSION STATEMENT AND ANALYSIS
INDUSTRY AND COMPETITIVE ANALYSIS
DOMINANT ECONOMIC CHARACTERISTICS
Market Differentiation
Pace of technological change
Advances to the Printed Word
DRIVING FORCES
The Internet
The Information Highway
KEY SUCCESS FACTORS
PORTER'S 5 FORCES MODEL
INDUSTRY STRATEGIES
COMPANY SITUATION ANALYSIS
SWOT ANALYSIS
Strengths
Weaknesses
Opportunities
Threats
STRATEGIC ISSUES
STRATEGY AND SITUATION
STRATEGIC FIT
FINANCIAL ANALYSIS
OEM REVENUES
WINDOWS 95 RETAIL UPGRADE
DESKTOP APPLICATIONS AND OTHER PRODUCTS
COST OF REVENUES
OPERATING EXPENSES
BALANCE SHEET
PLATFORMS PRODUCT GROUP
APPLICATIONS AND CONTENT PRODUCT GROUP
SALES AND SUPPORT GROUP
EARNINGS AND FINANCES
FINANCIAL RATIOS
ANNUAL RATIOS
COMPARISON TO INDUSTRY
STOCK PRICE COMPARISONS
DIVERSIFICATION
ANALYSIS
Microsoft Corporation
Microsoft History
Historians categorize blocks of time with the discovery of certain raw materials that humans utilized. The Bronze Age and the Iron Age were two periods in human history that proved through the discovery of artifacts that humans learned to harness these raw materials ingeniously. The Industrial Revolution of the late nineteenth century brought the discoveries of the Bronze and Iron Ages to new heights, and the advent of the locomotive, automobiles, cargo ships and airplanes were the most evident by-products of such raw materials. Use of these by-products from the earth's raw materials dramatically changed the world of business and trade. With the subsequent invention of wire communications (i.e., tapping out Morse code and speaking over telephone lines), business and trade grew exponentially. Wireless communications via the inventions of radio, television, and motion pictures contributed greatly to the advances of the Industrial Revolution.
The need to find better ways of doing business to keep the marketplace fresh and innovative has driven the human race to the brink of a new era: the Information Age. Unlike the more tangible qualities of prior ages, the Information Age offers less defined qualities. At the heart of this new age is the advent of the personal home computer. Pumping life into this otherwise inert home appliance is software that incorporates the commands necessary to access information stored within the computer's memory. The company that gave the world its first software manufacturing business was Microsoft Corporation (MSFT on the NASDAQ exchange). At the helm of this young, innovative company were William Gates and Paul Allen, a pair of former high school chums who envisioned a world of home computer technology years before such a dream became even remotely possible.
Early Influences
Their story begins at Lakeside High, a private high school in Seattle, Washington. The Mothers' Club at Lakeside decided to purchase a computer terminal for the kids with proceeds from bake sales and rummage sales. Students at Lakeside became enthralled with this new toy. True to their innate curiosity, Gates and Allen began to dabble further into the workings of the computer; Gates, for example, wrote his first computer program at the age of thirteen: a version of Tic-Tac-Toe. Because the computer terminal was so slow, one game of Tic-Tac-Toe took up most of a lunch break; if played on paper, a full 30 seconds might have been required. Despite the simplicity of the program, it spawned the creative genius in both young men to tackle more challenging programs in the years ahead.
Because the Mothers' Club was unable to afford continued use of computer time at $40 per hour, it decided to make it the students' responsibility to purchase their own computer time. Most students complied by getting jobs outside school. Gates and Allen became programmers in the summers for compensation of computer time and $5000 in cash. In his 1995 book The Road Ahead, Gates describes the mainframe computers of the early '70s as ". . . temperamental monsters that resided in climate-controlled cocoons . . . connected by phone lines to clackety teletype terminals. . . ." (11) He went on to explain that a personal home computer called the PDP-8 was actually available from Digital Equipment Corporation. According to Gates it was ". . . an $18,000 personal computer which occupied a rack two feet square and six feet high and had about as much computing capacity as a wristwatch does today . . . Despite its limitations, it inspired us to indulge in the dream that one day millions of individuals could possess their own computers." (11-12)
In the summer of 1973, Paul Allen, who knew more about computer hardware than Bill Gates, shared with Gates an article buried on page 143 of Electronics Magazine. The article described the invention of the 8008 microprocessor chip by a young company called Intel. Paul was surprised to receive the technical manual for the chip in the mail simply upon request. Immediately, he went to work analyzing its capabilities. Because it had so few transistors, the 8008 chip was very limited in its use, but Allen discovered that, despite the limitations, the chip was good for repetitive tasks and mathematical data.
First Business Venture
When Paul Allen entered college at Pullman, Washington, a town on the east side of the state, sixteen-year-old Bill Gates traveled frequently by bus to visit him. On these long trips across the state, Gates wrote a program that facilitated the reading of traffic information gathered by municipalities through a device set up at the side of certain intersections. A long rubber tube stretched across the road from one of these devices, and each time a vehicle ran over the tube a punch was made in the roll of paper within the device. People deciphered this crude data by visually inspecting the punch holes and annotating the results. Gates's program relieved humans of this tedious task, using the technology of the 8008 chip instead. With this program Gates and Allen launched their first company, Traf-O-Data. The two programmers were full of enthusiasm for the success of their new company; most communities, however, were reluctant to purchase from two kids, so their fledgling company enjoyed only marginal sales.
Education Attempt
Gates attended Harvard College in 1973 while Allen secured a job in Boston, Massachusetts as a programmer for Honeywell. In 1974 Intel announced the advent of the 8080 chip, which boasted 2,700 more transistors than its predecessor. Because of the disappointment they had experienced on the hardware side of computing through the dismal sales of Traf-O-Data, Gates and Allen focused on new opportunities on the software side of computers. With a vision of millions of computers owned by individuals, the pair banked on competition between Japanese and American companies for control of the computer hardware market. With this in mind, and with the introduction of the 8080 microprocessor chip (and the inevitable successors to that chip), Gates and Allen determined that their future lay in developing software for these computers.
The Motivational Side of Fear
On a cold New England morning, outside a newsstand in Harvard Square during one of his frequent visits to Bill Gates, Paul Allen picked up a copy of the January issue of Popular Electronics magazine. The cover photo pictured a small computer kit called the Altair 8800. It sold for a mere $397 and had 4,000 characters of memory. Panic struck Gates:
"'Oh no! It's happening without us! People are going to go write real software for this chip.' I was sure it would happen sooner rather than later, and I wanted to be involved from the beginning. The chance to get in on the first stages of the PC revolution seemed the opportunity of a lifetime, and I seized it." (Gates, 16)
Driven by the fear of someone else writing software for the Altair 8800 personal computer before his own software was complete, Gates scrambled feverishly in his Harvard College dormitory, forgoing a decent night's rest. Five weeks later, a version of BASIC became the impetus for "the world's first microcomputer software company . . . In time we named it 'Microsoft.'" (Gates, 17)
In the spring of 1975, Allen quit his job with Honeywell, and Gates decided to take an indefinite leave of absence from college (never intending to forgo a degree). Both young men planned to dive into the world of the computer software business at its very beginning stages. Allen was twenty-two years young and Gates was only nineteen. They set up operations in Albuquerque, New Mexico because the city was home to MITS, creator of the first inexpensive personal computer offered to the general public: the Altair 8800.
Microsoft provided the BASIC language because it gave computer users a way to write their own programs instead of having to rely on scarce, packaged software. Almost immediately, the MITS Altair 8800 faced strong competition from computer makers such as Apple, Commodore, and Radio Shack, which entered the personal computer market in 1977. The strategy at Microsoft was to convince computer manufacturers to buy licenses to "bundle" Microsoft software with their computers. Royalties would then be paid to Microsoft on each computer sale. Aside from the antics of early software pirates and the lack of laws preventing such activities, this strategy of selling licenses for the use of its software worked well for Microsoft.
A Japanese Connection
By 1979 half of Microsoft's business came from Japan. This was due in large part to the "sweat equity" of one man in particular: Kazuhiko (Kay) Nishi. Kay telephoned Gates in 1978 after discovering Microsoft in a newspaper article. Both Gates and Nishi were only twenty-two at the time and shared many similarities despite cultural and language differences. They met shortly after the phone call at an electronics convention in southern California. Without attorneys, they signed a 12-page contract which gave Nishi exclusive distribution rights to Microsoft's BASIC language in East Asia. Eventually, their original expectation of $15 million was realized ten-fold through sales that resulted from that contract.
Microsoft moved from Albuquerque, New Mexico to its present home in Redmond, Washington in 1979 with most of its twelve employees. According to Gates, the mission of Microsoft was "to write and supply software for most personal computers without getting directly involved in making or selling computer hardware." (44) The programming team adapted programs to each machine and was "very responsive to all the hardware manufacturers . . . we wanted choosing Microsoft software to be a no brainer . . . along the way, Microsoft BASIC became an industry standard," Gates wrote. (44)
IBM Influence
By 1980, International Business Machines (IBM) enjoyed an 80% market share of large computer hardware, but only marginal success in the smaller personal computer (PC) market. The Apple II computer appeared poised to tackle the business market, thanks in part to a popular spreadsheet program called VisiCalc. Based on Apple's success, IBM decided to enter the PC market. In the summer of 1980, two emissaries from IBM met with Gates to discuss IBM's plans for a full-market assault built with components already available off the shelf. IBM's plan was to use Intel's microprocessor chip and Microsoft's programming expertise rather than create its own software. As a result of this meeting, Microsoft hired Tim Paterson, from a Seattle, Washington firm, who became responsible for creating the Disk Operating System (DOS) for IBM-compatible computers.
Survival of the Fittest
The first IBM PCs hit the market in August of 1981 with a choice of three operating systems: Microsoft's DOS, UCSD Pascal, and CP/M-86. Gates realized that only one operating system could survive, just as only one videocassette format had survived its market (VHS beat out Betamax). Gates developed a three-part plan to come out on top of the competition:
· make Microsoft DOS the best product of the three
· help other software companies write MS-DOS based software
· ensure that MS-DOS remained inexpensive.
A Crucial Deal
With these objectives in mind, Gates offered IBM an attractive deal. Microsoft would allow IBM to use DOS (called IBM-DOS or PC-DOS to distinguish it from the nearly identical MS-DOS) for a low one-time fee covering as many PCs as IBM could sell. This deal gave IBM the incentive to push DOS rather than the other two operating systems, whose manufacturers received royalties for each PC sold with their respective operating systems installed. Hence, IBM sold the UCSD Pascal P-system for $450 and CP/M-86 for $175, while DOS was offered at only $60.
Gates's strategy worked, as he stated:
"Our goal was not to make money directly from IBM, but to profit from licensing MS-DOS to computer companies that wanted to offer machines more or less compatible with the IBM PC. IBM could use our software for free, but it did not have an exclusive license or control of future enhancements. This put Microsoft in the business of licensing a software platform to the PC industry.
"Consumers bought the IBM PC with confidence . . . each new customer . . . added to the IBM PC's strength as a potential de facto standard for the industry. . . .
". . . the availability of software and hardware add-ons sold PCs at a far greater rate than IBM had anticipated, by a factor of millions," which meant "billions of dollars for IBM." (Gates, 49-50)
Competition Errors
After three years of competitive blitzing, all competing standards for personal computers had disappeared with the exception of Apple's Apple II and Macintosh. "Hewlett Packard, DEC, Texas Instruments, and Xerox, despite their technologies, reputations, and customer bases, failed in the PC market in the early 1980s because their machines weren't compatible and didn't offer significant enough improvements over the IBM architecture." (Gates 50) Only Commodore Corporation fared well through the eighties in the PC market, due substantially to the lower cost of its models 64 and 128 and to the superb graphics of the Commodore Amiga, still used today by some commercial movie studios.
Gates defends IBM against certain revisionist historians who conclude ". . . IBM made a mistake working with Intel and Microsoft to create its PC. They argue that IBM should have kept the PC architecture proprietary, and that Intel and Microsoft somehow got the better of IBM. But the revisionists are missing the point. IBM became the central force in the PC industry precisely because it was able to harness an incredible amount of innovative talent and entrepreneurial energy and use it to promote its open architecture. IBM set the standards." (Gates, 50)
Birth of Windows
Because users of DOS had to type character-based commands into the computer from a keyboard, Gates saw the potential of losing Microsoft's leading software position if the company stayed with the MS-DOS format. Researchers at Xerox's Palo Alto Research Center in California studied human-computer interaction and found that computer users could more easily instruct the computer if they were allowed to point to commands, via a device called a "mouse," as opposed to typing commands on a QWERTY keyboard. According to Gates, "Xerox did a poor job of taking commercial advantage of this groundbreaking idea, because its machines were expensive and didn't use standard microprocessors. Getting great research to translate into products that sell is still a big problem for many companies." (53)
The process of using pictures (icons) to command a computer, rather than typed characters, is called graphical technology. The screen layer that molds graphical technology onto the character-based operating system is called a Graphical User Interface (GUI). In 1983, Microsoft announced its version of a GUI, called Windows. The Apple Lisa and Xerox Star were GUIs already available to consumers, but both, in Gates's view, ". . . were expensive, limited in capability, and built on proprietary hardware architectures." (53) This meant that other hardware companies could not license the operating systems to build compatible systems. The same was true for software companies, and this hindered the creation of new applications for the Star and Lisa GUIs by outside companies.
MISSION STATEMENT AND ANALYSIS
At Microsoft, our long held vision of a computer on every desk and in every home continues to be the core of everything we do. We are committed to the belief that software is the tool that empowers people both at work and at home. Since our company was founded in 1975, our charter has been to deliver on this vision of the power of personal computing.
As the world's leading software provider, we strive to continually produce innovative products that meet the evolving needs of our customers. Our extensive commitment to research and development is coupled with dedicated responsiveness to customer feedback. This allows us to explore future technological advancements while assuring that our customers today receive the highest quality software products.
A good mission statement attempts to answer some key questions about the company and the industry: Who are we? What business are we in? Where are we headed? In its mission statement, Microsoft tells who it is as well as what its business is. It stresses its goals and where it is headed very well. My biggest problem with this mission statement is that Microsoft is too worried about being on top and will do whatever is necessary to stay there.
INDUSTRY AND COMPETITIVE ANALYSIS
Dominant Economic Characteristics
Market Differentiation
The first popular graphical platform came to market in 1984 with Apple's Macintosh. It was an instant success, as the GUI platform of the Macintosh eliminated the need for obscure character commands. Gates worked closely with Steve Jobs, who was the leader of the Macintosh team, in order to create Microsoft's competing version of the Mac GUI, called Windows. The major difference that Microsoft held over Apple was its willingness to allow other software developers open access to the Windows format; Apple restricted its GUI to Macintosh computers only. That difference eventually helped elevate Microsoft to software industry leader, bar none.
Gates devotes pages to explaining why such a "great company" as IBM failed in its attempts to finally create its own software operating system. He apologetically cites the specific decisions that IBM made in the development of its OS/2 operating system. His explanation for the disappointing results of IBM's attempts is chiefly that graphical computing could have found mainstream success if IBM had been more cooperative with Microsoft in developing a general application of GUI software to be used with existing hardware, rather than insisting on developing a whole new application.
When Microsoft went public in 1986, Gates offered IBM 30% of MSFT stock so that IBM could share in the fortune, be it good or bad, of Microsoft. IBM declined. This was Microsoft's attempt at keeping IBM close, as IBM had been instrumental in Microsoft's success.
Despite not seeing eye to eye with IBM in the development of Windows, Gates saw the GUI application as the progressive alternative to DOS and continued to improve the existing applications. In the weeks prior to the release of Windows 3.0 in May 1990, Gates ". . . tried to reach an agreement with IBM for it to license Windows to use on its personal computers. We told IBM we thought that although OS/2 would work out over time, for the moment Windows was going to be a success and OS/2 would find its niche slowly." (62) IBM again refused to cooperate with Microsoft, insisting on total dedication to the development of OS/2, which was eventually doomed to an ignominious future. [IBM has proven conclusively through the years that it has no idea of how to create or market software. Examples are Displaywrite word processing; the PC Jr, IBM Personal Typing System, and the PS-1, all with proprietary software; OS/2, as mentioned above; and feeble attempts at networking. Now, with the purchase of Lotus, the software giant should request last rites.] According to Gates, "If IBM and Microsoft had found a way to work together, thousands of people-years (the best years of some of the best employees at both companies) would not have been wasted. If OS/2 and Windows had been compatible, graphical computing would have become mainstream years sooner." (62)
Pace of technological change
In its twentieth fiscal year (July 1 to June 30) since incorporation, Microsoft led the software industry with revenues of $5,937,000,000 as of June 30, 1995. It is the unequaled standard bearer for software manufacturers and, with its release of Windows 95, a fully graphical operating system, should remain at the top for years to come.
Despite its current position, Microsoft is still faced with new challenges as with the progression of any high-tech industry. The most recent challenges facing Microsoft are its applications to the Internet and its commitment to the development of the information super highway.
In 1989 the U.S. Government decided to cease funding its 1960s project ARPANET and to allow the project to be succeeded by its commercial equivalent, the "Internet." In its beginning stages, the Internet picked up where ARPANET left off. Its primary function was to provide electronic communications, or e-mail, solely between computer science and engineering projects. Its popularity increased as it became commercially available to PC users. To fully appreciate the significance of e-mail and the transmission of electronic data, consider the evolution of the printed language.
Advances to the Printed Word
When Johann Gutenberg introduced the printing press to Europe in 1450, the method of copying the printed word was revolutionized. Before the advent of the printing press there were an estimated 30,000 books on earth, most of them hand-written by monks. Although it took two years to complete the movable type for Gutenberg's Bible, once the type was completed, multiple copies could be made rather quickly. Almost 500 years later, Chester Carlson, frustrated by the length of time involved in preparing patent applications, set out to invent an easier way to duplicate information in small quantities. What resulted was a process he called "xerography" when he patented it in 1940. In 1959, Carlson aligned with Xerox Corporation as a means of manufacturing and distributing "Xerox" copying machines. Xerox projected sales of perhaps 3,000 units. Much to its surprise, customers placed orders for 200,000 units, and one year later nearly 50 million copies a month were being processed. By 1986, that figure had increased to 200 billion copies per month, and it has steadily increased ever since. The advent of xerography allowed small groups to enjoy the capabilities of a printing press for a fraction of the cost, and in a fraction of the time, that a conventional printer would require.
The market size for the computer industry is very large; this past year it totaled $238.7 billion. It is expected to rise considerably in the next few years.
The competitive scope of the computer industry is global, and Microsoft operates worldwide. The Japanese are very big competitors, but Microsoft is too powerful to compete with.
Ease of entry is very low; the computer industry is a costly industry to enter. To compete with the large companies you would need millions of dollars to even consider getting started. One could, however, start a small computer business focusing on one area without the cost being overly expensive; for example, if you wanted to focus on the accounting market you would not need to worry about anything else. The life of a product depends entirely on customer needs as well as on advances in technology. Microsoft comes out with new products all the time, but you don't necessarily need to buy them; sometimes a computer program can last a company for years. Because of the large capital requirements and the rapid technological changes, it is very difficult to enter the computer industry, so either backward or forward integration would also be very difficult.
Driving Forces
There are several driving forces in the computer industry.
1) Increased efficiency due to economies of scale
2) Change in the industry growth rate
3) Product innovation due to the rapid increases in technological advancements
4) The need to be the first to develop the new program
The newest driving force in the computer industry is the Internet, or information superhighway. The following sections describe both, along with the advantages they bring.
The Internet
The Internet offers even more advantages than xerographic copiers: information can be accessed and/or distributed to all interested parties (with a PC) via the electronic transmission of data. As defined by Gates, the Internet is "a group of computers connected together, using standard 'protocols' (descriptions of technologies) to exchange information." (94) Electronic messages are sent via phone lines from one computer to another and stored in the electronic "mailbox" of the other computer until the message is "downloaded" by the user.
Another advantage of the Internet is "Web browsing" on the World Wide Web (www), or simply the "Web." Server companies offer graphical pages of information to be accessed by subscribers to their services. From the "home" page of a topic, one can follow hyperlinks to further information on given topics by clicking the mouse of most PCs.
Although Gates admits that Microsoft was surprised by the commercial success of the Internet, he has begun work on software applications to make the Internet easier to access for PC owners with limited computer knowledge. Some people may confuse subscription services on the Internet, such as CompuServe, Prodigy, and America Online, with the creation of the information superhighway, but according to Gates, the Internet is simply a "precursor to the information highway." (90) Comparing the information highway with the Internet is like comparing a country lane with the Eisenhower highway system. Even that analogy does not do justice to the information highway as it will look in twenty or more years. The limitations of the Internet must first be overcome before anything resembling the actual information highway exists. One challenge that Microsoft and other companies face is to convince the phone companies and cable companies to replace the coaxial lines that serve homes and businesses with fiber optic cables. Fiber optics will provide the bandwidth necessary for the immense amount of information to be sent on the highway.
Two technologies currently in the works toward this transformation of trunk lines are DSVD and ISDN. Digital simultaneous voice data can be used with existing phone lines, but it does not provide sufficient bandwidth to handle video transmissions; hence, new lines must be laid for this application to reach full capacity. Even with current integrated services digital network technology, which incorporates a wider bandwidth but requires the laying of new lines, the clarity of full-motion picture images still leaves much to be desired. The add-in card that upgrades a PC "to support ISDN costs $500 in 1995, but the price should drop to less than $200 over the next few years. The line costs vary by location but are generally about $50 per month in the United States. I expect this will drop to less than $20, not much more than a regular phone connection." (Gates, 101)
The Information Highway
Once more and more PC owners hook up to the Internet with ISDN lines, the groundwork for further progress toward the information highway will be laid. The term "information highway" was coined by then-Senator Al Gore, "whose father sponsored the 1956 Federal Aid Highway Act" (Gates, 5) during the Eisenhower Administration. According to Gates, this terminology is flawed: it connotes the following of routes, with distance between two points, and implies traveling from one place to another, when the actual information highway will be free of such limitations. Some people also confuse the information highway with a massive government project, which Gates feels ". . . would be a massive mistake for most countries . . . ." (6) Just as Microsoft's mission in 1975 was "a computer on every desk and in every home," (Gates, 14) so Microsoft is now progressing toward ". . . 'information at your fingertips,' which extols a benefit rather than the network itself." (Gates, 6)
Key Success Factors
1) The high degree of expertise and product innovation
2) Being able to stay on the cutting edge of technology
3) Companies need to have a low number of glitches in their programs
4) A very strong customer support system (user friendly)
5) Must be able to meet customer needs
The computer industry is a strong leader in technology. To compete you must stay one step ahead of the rest. Microsoft has proven how devoted it is to computer program development by always being one step ahead of the rest. When one is dealing with the computer industry, it is very important to have knowledgeable employees working for you. The high degr
f:\12000 essays\sciences (985)\Computer\Misc Computer Essay.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nothing attracts a crowd like a crowd. Today, with home computers and
modems becoming faster and cheaper, the home front is on the brink of a new
frontier of on line information and data processing. The Internet, the ARPANET
(Advanced Research Programs Agency Network) spinoff is a channel of
uninterrupted information interchange. It allows people to connect to large
computer databases that can store valuable information on goods and services.
The Internet is quickly becoming a tool for vast data interchange for more than
twenty million Americans. New tools are making an Internet presence an easier
task. Just as gold miners set out for California in wagons to stake
their claims in the gold rush, businesses and entrepreneurs are rushing to stake
their claim on the information superhighway through Gopher sites, World-Wide Web
sites, and electronic mailing lists. This article explains how businesses and
entrepreneurs are setting up information services on the Internet that allow
users to browse through picture catalogues, specification lists, and
up-to-the-minute reports.
Ever since Sears Roebuck created the first pictorial catalogue, the idea that
merchandise could be selected and ordered in our leisure time has fascinated
us. Like any cataloguing system, references make it easy to find what the user
seeks. Since its inception, the Internet has been refining its search tools.
Being able to find products through many catalogues is what makes the Internet
shine in information retrieval. This helps consumers find merchandise that
they might otherwise never find. The World Wide Web allows users to find
information on goods and services, pictures of products, samples of music
(used by record companies), short videos showing a product or service, and
samples of programs. Although a consumer cannot order directly from the Web
site, the business will often give a voice telephone number or an order form
that the customer can print out and send through the mail.
Although Web sites have a magazine-like appeal, storing large amounts of
textual data on them is often difficult. Gopher (like "go-for") is set up like
a filing cabinet to allow the user more flexibility in retrieval. Gopher is
similar to the white/yellow pages in the way information is retrieved word for
word. Gopher sites are also a lot cheaper and easier to set up, which gives
small businesses an easy way to set up shop. Consumers can find reviews,
technical information, and other bits and pieces of information.
Each person who uses the Internet has an identification, often called a
handle (from the old shortwave radio days), that sets them apart from everyone
else. Electronic mail addresses allow information exchange from user to user.
Businesses can take advantage of this by sending current information to many
users. A user must first subscribe to the mailing list. Then the computer adds
them to the update list. Usually, companies will send out a monthly update. This
informs users of upgrades in their products (usually software), refinements
(new hardware drivers, faster code, bug fixes, etc.), new products, question
bulletins where subscribers can post questions and answers, and links
(addresses) to sites where new company information can be found.
Comments and Opinions
This article pointed out the key information that anyone who is
interested in representing their company on the Internet might find useful. It
then went on to explain the few key elements that make up the complete and
ever-expanding system. It was also a fair lead-in to the programs explained in
the next articles, on software used to create Web pages, e-mail lists, Gopher
sites, and FTP (similar to Gopher). It showed the pace at which the Internet
is growing and the use it could serve businesses seeking to expand their
outreach to users.
I have personally used these services to find businesses that sell
hard-to-find products. Through the World Wide Web I have found specialty
companies that I believe I would not otherwise have found. The article showed
essentials of Web savvy such as the availability of video and sound (music)
files. Speaking as a consumer, I can say that I have purchased at least two
compact discs after hearing the short samples released by the record
companies. The video clips are eye-catching and may influence people to buy
the companies' products.
I was disappointed in the information on Gopher. It mainly showed the
differences between Gopher and the World Wide Web instead of explaining what
Gopher is. It also made an irrelevant reference to UNIX (a text-based
operating system used on expert systems) book searches, and its HTTP (the
protocol that the World Wide Web uses) cross-referencing might mislead the
reader. Gopher is a very powerful tool that businesses with an on-line
presence and information worth reading should be aware of.
The business-related information on electronic mailing lists did
nothing other than point out a few of the groups available. It briefly touched
on intelligent agents, which are the backbone of e-mail publications. Although
it was detailed about publications, there was little theory of operation that
a business looking into this route of information distribution might find
useful. It did, however, explain the addressing system.
Overall, this article gave a decent overview of the business use of
the Internet. It pointed out the three major areas that companies are racing
to settle. It gave much useful information on the World Wide Web, which is
currently the business magnet. Reading this article is a step in the right
direction for any business seeking to have an on-line presence.
f:\12000 essays\sciences (985)\Computer\Modems.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Modems are used to connect two computers over a phone line. Modem is short for
Modulator Demodulator. It's a device that converts data from digital computer signals to
analog signals that can be sent over a phone line. This is called modulation. The analog
signals are then converted back into digital data by the receiving modem. This is called
demodulation. A modem is fed digital information, in the form of ones and zeros, from the
CPU. The modem then analyzes this information and converts it to analog signals, that can
be sent over a phone line. Another modem then receives these signals, converts them back
into digital data, and sends the data to the receiving CPU. At connection time, modems
send tones to each other to negotiate the fastest mutually supported modulation method
that will work over whatever quality line has been established for that call. There are two
main types of modems for the PC: internal and external.
Evolution of Modems
In the last 10 years, modem users have gone from data transfer rates of 300 bps to
1,200 bps to 2,400 bps to 9,600 bps to 14.4Kbps to 28.8Kbps, and on to 33.6Kbps. Now
new modem standards are emerging, reaching speeds of up to 56Kbps. Unlike the small step
to the 33.6Kbps modems being sold today, 56Kbps is a significant improvement over
28.8Kbps modems. Viewing complex graphics or downloading sound files improves
significantly with 56Kbps. The modem experts keep telling us that we are about maxed
out. For instance, when the 28.8 modems were first introduced they said that we had
reached our maximum speed, and the same thing was said about the 33.6 and now again
about the 56K, but how true is this? The experts say that the next major improvement
will have to come from the telephone companies, when they start laying down fiber-optic
cables so we can have the integrated services digital network (ISDN). What makes digital
connections better than analog is that with analog modems transmission errors are very
frequent, which can result in your modem freezing or just freaking out. These errors are
caused mainly by noise on the line due to lightning storms, sunspots, and other
fascinating electromagnetic phenomena; noise can occur anywhere on the line between your
PC and the computer you're communicating with 2,000 miles away. Even if line noise is
minimal, most modems will automatically reduce their speed to avoid introducing data
errors.
Baud vs bps
When talking about modems, transmission speed is the source of a lot of
confusion. The root of the problem is that the terms "baud" and "bits per second"
are used interchangeably. This is partly because it's easier to say "baud" than "bits
per second," though misinformation has a hand in it, too. A baud is "A change in signal
from positive to negative or vice-versa that is used as a measure of transmission speed"
and bits per second is a measure of the number of data bits (digital 0's and 1's) transmitted
each second in a communications channel. This is sometimes referred to as "bit rate."
Individual characters (letters, numbers, spaces, etc.), also referred to as bytes, are
composed of 8 bits. Technically, baud is the number of times per second that the carrier
signal shifts value, for example a 1200 bit-per-second modem actually runs at 300 baud,
but it moves 4 bits per baud (4 x 300 = 1200 bits per second).
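The arithmetic can be checked directly; this small Python snippet just restates the relationship described above (bits per second = baud rate times bits carried per signal change).

baud_rate = 300            # signal changes per second
bits_per_baud = 4          # bits encoded in each signal change
print(baud_rate * bits_per_baud)   # 1200 bits per second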
Synchronous vs. Asynchronous Data Transfer
Synchronous and asynchronous data transfer are two methods of sending data
over a phone line. In synchronous data transmission, data is sent as a bit stream, which
carries a group of characters in a single stream. In order to do this, modems gather groups
of characters into a buffer, where they are prepared to be sent as such a stream. In order
for the stream to be sent, synchronous modems must be in perfect synchronization with
each other. They accomplish this by sending special characters, called synchronization, or
syn, characters. When the clocks of the two modems are in synchronization, the data stream is
sent.
In asynchronous transmission, data is coded into a series of pulses, including a
start bit and a stop bit. A start bit is sent by the sending modem to inform the receiving
modem that a character is about to be sent. The character is then sent, followed by a stop bit
designating that the transfer of that character is complete.
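As a rough illustration of asynchronous framing as described above, the Python sketch below wraps each character's 8 data bits in a start bit and a stop bit. Real serial framing also involves parity options and electrical details that are left out here.

def frame_character(ch):
    """Return the character's 8 data bits wrapped in a start bit (0) and a stop bit (1)."""
    data_bits = format(ord(ch), "08b")
    return "0" + data_bits + "1"

def unframe(frame):
    """Strip the start and stop bits and recover the character."""
    return chr(int(frame[1:-1], 2))

print(frame_character("A"))              # 0010000011
print(unframe(frame_character("A")))     # A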
Modem Speeds
A full page of English text is about 16,000 bits. Viewing full-motion,
full-screen video requires roughly 10,000,000 bits per second, depending on data
compression.
The Past                        300 bps           (both ways)
                                1,200 bps         (both ways)
                                2,400 bps         (both ways)
                                9,600 bps         (both ways)
                                14,400 bps        (both ways)
Current Speeds                  28,800 bps        (both ways)
                                33,600 bps        (both ways)
X2 or K56Plus                   56,000 bps        (downloading)
                                33,600 bps        (uploading)
ISDN single channel             64,000 bps        (both ways)
ISDN two channels               128,000 bps       (both ways)
SDSL                            384,000 bps       (both ways)
Satellite integrated modem      400,000 bps       (downloading)
ADSL (T-1)                      1,544,000 bps     (downloading)
                                128,000 bps       (uploading)
Cable modem (T-1, Videotron)    1,600,000 bps     (both ways)
Ethernet (T-2)                  10,000,000 bps    (both ways)
Cable modem (T-2, in general)   10 to 27,000,000 bps (both ways)
FDDI (T-3)                      100,000,000 bps   (both ways)
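Using the figures above, a quick Python calculation shows how long one page of English text (about 16,000 bits) would take at a few of the listed speeds, ignoring protocol overhead and compression. The selection of speeds is just for illustration.

page_bits = 16_000
speeds_bps = {
    "300 bps modem": 300,
    "28.8K modem": 28_800,
    "ISDN two channels": 128_000,
    "ADSL download": 1_544_000,
}
for name, bps in speeds_bps.items():
    print(f"{name}: {page_bits / bps:.3f} seconds")
# e.g. the 300 bps modem needs over 53 seconds, ADSL about a hundredth of a second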
In some cases, a PC with a 28.8 Kbps modem will be faster
than one with a 33.6 Kbps or even a 56K modem, especially on sites that don't have a great deal of
graphics. That's because there are several factors that determine how long it takes to reach
and display a Web site. These include the speed of your PC, your connection to your
Internet service provider, your ISP's connection to the Internet itself, traffic on the
Internet, and the speed and current traffic conditions on the site you're visiting. A good
example would be this: say you drive a fancy sports car and I drive along in my family
minivan; you'll certainly beat me on an open stretch of road, but if we're both stuck in a
traffic jam, you'll move just as slowly as I do. In short, any modem will sometimes operate
below its rated speed. According to the vice president of a major 33.6 Kbps modem
company, you can expect a full 33.6 Kbps connection about one out of 10 tries.
X2 56K Modem
U.S. Robotics, Cardinal, Rockwell, and other manufacturers have developed
modems capable of 56K speeds over standard phone lines. U.S. Robotics' line of modems,
called X2, uses an "asymmetric" scheme. Basically, it lets you download data at up to
56 Kbps from any on-line service or Internet service provider using matching U.S.
Robotics modems. The company says AOL, Prodigy, Netcom, and others are committed
to deploying the X2 technology. The only catch is that the data you upload to the provider is
still limited to 33.6 Kbps or 28.8 Kbps. The main reason everyone has not yet leaped to
56 Kbps is that there are no set standards yet. Not all modem vendors are supporting
the same 56 Kbps specification. That means your Rockwell-based modem won't work with
a U.S. Robotics or Logicode model.
ISDN
ISDN (Integrated Services Digital Network) is a way to move more data over
existing regular phone lines. ISDN cards are like modems, but approximately 5 times
faster than regular 28.8 modems. They require special telephone lines, which cost a little
or a lot, depending on your phone company. ISDN can provide speeds of roughly 128,000
bits per second over regular phone lines. ISDN has a couple of advantages. It uses the
same pair of copper wires found in regular phone lines, so the phone company won't necessarily have
to run new wires into your house or business. A single physical ISDN line offers two
64 Kbps channels that can be used for voice and data. Unfortunately,
ISDN isn't cheap. Installation fees can run a couple hundred dollars and setup can be
confusing. ISDN also requires a special digital adapter for your PC that costs around
$200. And though you could replace your old phone line with ISDN, I wouldn't
recommend it. An ISDN line goes through a converter powered by AC current, and if your
power fails, so does your phone line.
Satellite Modems
DirecPC is an Internet access service delivered by satellite. It was created by
Hughes Network Systems Inc., an American telecommunications company. DirecPC
offers speeds of up to 400 Kbps. That's nearly 14 times faster than a standard 28.8 Kbps
modem and four times faster than ISDN (Integrated Services Digital Network). The
drawback to this system is that it's expensive, requires a relatively elaborate installation and
configuration and, in the end, doesn't necessarily speed up your access to the World Wide
Web.
The price for the 21" dish, PC card, and software is about $499 U.S. retail. Then
there is a $49.95 U.S. one-time activation fee. The monthly charges start at $9.95 U.S.,
but that is for a limited account that also requires you to pay to download data. The
"Moon Surfer" account, which costs $39.95 U.S., gives you unlimited access nights and
weekends. If you want unlimited access during the day, you'll have to pay $129 U.S. a
month for the "Sun Surfer" plan. Customers pay between $149 and $199 U.S. for
professional help, or $89 U.S. per hour plus materials if custom installation is required. If
you choose to install the dish at ground level, Hughes Network Systems has also designed
a hollow fiberglass camouflage that looks like a huge rock and can be put over the dish
to prevent it from being stolen.
In addition to these charges, you also need to be signed up with an Internet service
provider, or ISP, which costs about $20 a month. You can use any ISP
other than on-line services such as Prodigy or America On-line. The reason you need an
ISP is that DirecPC is a one-way system. The satellite sends data to your PC, but you
need to use a standard modem and a regular ISP to send data or commands to the
DirecPC network. The data you send flows at the speed of your modem, normally a 28.8
Kbps modem. The fact that the satellite is only one-way isn't as bad as it might seem. Most
users send very little data compared with what they receive. If you wish to view a Web
site, for example, you send the Web address to the system via the modem, but the
site's text and graphics rush back to you via the satellite. Since the address is
typically only a few bytes, that takes almost no time at all, even if you have a slow modem.
The data from the site itself takes far more time, especially if it has a lot of graphics.
Those who upload a lot of data, including people who need to update their own Web sites,
will get no advantage from the satellite system while they are uploading.
In addition to the dish, you get a 16-bit card that plugs into an ISA slot of a
desktop PC. The drawback to the system is that this rules out Macs, notebook PCs, and
any other machines that don't have an available slot.
You will find a noticeable difference when viewing sites with video and lots of
graphics. This could eventually be a big advantage as an increasing number of information
providers start using the Internet for full-motion video and other multimedia presentations.
But DirecPC for now doesn't offer spectacular advantages for normal Web surfing. And if
you're thinking about a long-term investment, consider that in the future there will be
other options for high-speed Net access.
ADSL / SDSL
ADSL (Asymmetric Digital Subscriber Line) is a method for moving data over
regular phone lines. An ADSL circuit is much faster than a regular phone connection, and
the wires coming into the subscriber's home are the same copper wires used for regular
phone service. An ADSL circuit must be configured to connect two specific locations. A
commonly used configuration of ADSL allows a subscriber to download data at
speeds of up to 1.544 megabits per second and to upload data at speeds of 128 kilobits
per second. ADSL is often used as an alternative to ISDN, allowing higher speeds in
cases where the connection is always to the same place. SDSL (Symmetric Digital
Subscriber Line) is a different configuration capable of 384 kilobits per second
in both directions.
Cable modems
Another type of modem is the cable modem. It uses the same black coaxial cable
that connects millions of TVs nationwide, which is also capable of carrying computer data at
the same time. It can upload and download at approximately 10 to 27 megabits
per second. A 500K file that would take 1.5 minutes to download via ISDN would
take about one second over cable.
Classification Of Modems
Modems capable of carrying data at 1,544,000 bits per
second are classified as T-1. At maximum capacity, a T-1 line can move a megabyte in less
than 10 seconds. That is still not fast enough for full-screen, full-motion video, for which
you need at least 10,000,000 bits per second. T-1 is the fastest speed commonly used to
connect networks to the Internet. Modems that are capable of carrying data at 3,152,000
bits per second are referred to as T-1C. Modems that are capable of carrying data at
6,312,000 bits per second are referred to as T-2. Modems that are capable of
carrying data at 44,736,000 bits per second are referred to as T-3. This is more than
enough to do full-screen, full-motion video. Modems that are capable of carrying data at
274,176,000 bits per second are referred to as T-4.
Ethernet
A very common method of networking computers in a LAN (local area network)
is called Ethernet. It will handle about 10,000,000 bits-per-second and can be used with
almost any kind of computer.
FDDI
FDDI, (Fiber Distributed Data Interface) is a standard for transmitting data on
optical fiber cables at a rate of around 100,000,000 bits-per-second. It's 10 times as fast as
Ethernet, and approximately twice as fast as T-3.
Most of the modems mentioned, such as T-1, T-2, T-3, etc., are not intended for home
use. These high-speed connections are used mainly by big businesses. Even among big
companies, speeds such as T-4 and FDDI are used very little; they belong more to the Army,
NASA, the Government, etc. They are highly priced, which makes them available only to
larger corporations and organizations who need to send huge amounts of data from one place
to another in little or no time at all. Apart from the price factor, when would you need to
transfer the data on a CD-ROM disk holding its full capacity (650 Mb) across the
world in 52 seconds?
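That closing figure can be checked with a line or two of Python, treating the 650 Mb CD-ROM as roughly 650 million bytes (ignoring 1024-based units and all protocol overhead) and using the 100,000,000 bps FDDI rate listed above.

cd_bytes = 650 * 1_000_000          # rough size of a full CD-ROM, in bytes
bits = cd_bytes * 8                 # 8 bits per byte
print(bits / 100_000_000)           # 52.0 seconds over an idealized FDDI link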
f:\12000 essays\sciences (985)\Computer\Morality and Ethics and Computers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Morality and Ethics and Computers
There are many different sides to the discussion on moral and ethical uses of
computers. In many situations, the morality of a particular use of a computer is up to the
individual to decide. For this reason, absolute laws about ethical computer usage are
almost, but not entirely, impossible to define.
The introduction of computers into the workplace has introduced many questions
as well: Should employers make sure the workplace is designed to minimize health risks
such as back strain and carpal tunnel syndrome for people who work with computers?
Can employers prohibit employees from sending personal memos by electronic mail to a
friend at the other side of the office? Should employers monitor employees' work on
computers? If so, should employees be warned beforehand? If warned, does that make
the practice okay? According to Kenneth Goodman, director of the Forum for Bioethics
and Philosophy at the University of Miami, who teaches courses in computer ethics,
"There's hardly a business that's not using computers."1 This makes these questions all
the more important for today's society to answer.
There are also many moral and ethical problems dealing with the use of computers
in the medical field. In one particular case, a technician trusted what he thought a
computer was telling him, and administered a deadly dose of radiation to a hospital
patient.2 In cases like these, it is difficult to decide whose fault it is. It could have been the
computer programmer's fault, but Goodman asks, "How much responsibility can you place
on a machine?"3
Many problems also occur when computers are used in education. Should
computers replace actual teachers in the classroom? In some schools, computers and
computer manuals have already started to replace teachers. I would consider this an
unethical use of computers because computers do not have the ability to think and interact
on an interpersonal basis.
Computers "dehumanize human activity"4 by taking away many jobs and making
many others "boring exercises in pushing the buttons that make the technology work." 5
Complete privacy is almost impossible in this computer age. By using a credit card
or check cashing card, entering a raffle, or subscribing to a magazine, people provide
information about themselves that can be sold to marketers and distributed to data bases
throughout the world. When people use the World-Wide Web, the sites they visit and
download things from make a record that can be traced back to the person.6 This is not
protected, as it is when books are checked out of a library. Therefore, information about
someone's personal preferences and interests can be sold to anyone. A health insurance
company could find out if a particular person had bought alcohol or cigarettes and charge
that person a higher rate because he or she is a greater health risk. Although something
like this has not been reported yet, there are no laws against it, at this point.
More and more data base companies are monitoring individuals with little
regulation. "Other forms of monitoring-such as genetic screening-could eventually be
used to discriminate against individuals not because of their past but because of statistical
expectations about their future."7 For instance, people who do not have AIDS but carry
the antibodies are being discharged from the U.S. military and also fired from some jobs.
Who knows if this kind of medical information could lead employers to make decisions of
employment based on possible future illnesses rather than on job qualifications. Is this an
ethical use of computers?
One aspect of computers that is surely immoral and unethical is computer crime,
which has been on the rise lately. There are many different types of computer crime.
Three main types of crimes are making computer viruses, making illegal copies of
software, and actually stealing computers.
Computer viruses have been around for a decade but they became infamous when
the Michelangelo virus caused a scare on March 6, 1992. According to the National
Computer Security Association in Carlisle, Pennsylvania, there are 6000 known viruses
worldwide and about 200 new ones show up every month.8 These viruses are spread
quickly and easily and can destroy all information on a computer's hard drive. Now,
people must buy additional software just to detect viruses and possibly repair infected
files.
Making illegal copies of software is also a growing problem in the computer
world. Most people find no problem in buying a computer program and giving a copy to
their friend or co-worker. Some people even make copies and sell them to others.
Software companies are starting to require computer users to type in a code before using
the software. They do this in many ways. Sometimes, they require you to use a "code
wheel" or look in a book for the code. The software companies go through this trouble to
discourage people from making illegal copies because every copy that is made is money
the company lost.
One other thing that is just starting to become a problem is actual computer theft.
With the introduction of notebook computers came a rise in computer theft. The same
qualities that make these computers perfect for business travelers (their small size and light
weight) make them very easy for thieves to steal as well. In 1994, 295,000 computers
were reported stolen with resulting losses totaling over 981 million dollars. 9 The amount
lost to theft is about twice the amount lost in all forms of computer malfunction or
breakage.
The biggest news related to computers lately seems to always be about the
Internet. The Internet began decades ago, but is just becoming popular with the general
public now that technology is advancing and becoming cheaper. There are many aspects
of the Internet that can lead people into discussions concerning morality and ethics.
Much of the discussion of the Internet has to do with freedom of speech and the
First Amendment. Most Americans probably believe that the First Amendment is moral
because it is a national law. The problems arise because different people interpret the First
Amendment in different ways. In most cases since 1776, the First Amendment has been
easily defined and understood, but every once in a while, a situation appears which blurs
the lines. The Internet has caused one of these situations.
There is information on the Internet about everything from drugs to making
bombs. The United States government is trying to decide whether they should or should
not censor material on the Internet. The government does not censor information like this
in public libraries, so why should it censor this information on the Internet? The
government censors information like this on television though, so why wouldn't it censor
this on the Internet? If the government goes strictly by the First Amendment, it would not
censor anything on the Internet because that would be a violation of free speech. It is
obvious though, that the government does not always go directly by the First Amendment,
so this leaves the topic open to discussion.
Some people argue that this information would be dangerous if it got into the
wrong hands. Much of the information in the world would be dangerous if it got into
the wrong hands. Does this mean that we should perform background checks and
psychiatric tests on everyone before we give them any information? I believe it is
unethical to withhold information from anyone. All information should be given out
freely. It is up to the individual to decide how to use the knowledge they have.
Many people complain that there is a large number of sick and demented people on
the Internet. There are a large number of sick and demented people in the "real" world
as well. In fact, the same people who are on the Internet are in the real world, too. There
is not much we can do about them except arrest the people who take their sickness and
dementia too far and break the law.
Computers can be harmful and beneficial to people in many different ways. The
ways computers are beneficial are the most obvious. Computers can entertain us, they can
save us time and energy, as well as saving us from performing boring and laborious tasks.
Computers also can be physically harmful to people. People who use computers
too much can suffer from vision loss, to varying degrees, due to staring at the screen for
extended lengths of time . They can also have problems with the muscles in their hands
from typing so often. They can acquire back problems from sitting in chairs behind desks
at computer screens, all day long.
Some people say that computers allow humans to cheat. They give us the
answers. They allow us to stop thinking. They believe it is unethical for the computers to
do the work for us. These people may be right in that some humans allow computers to
do work for them, but then if people did not make use of the new inventions and time-
savers, farmers would still be plowing with a horse and we'd still be cooking on an open
fire. Until computers exhibit actual artificial intelligence, though, we are still the ones
doing the thinking. We program the computers to do what we want them to do.
In conclusion, I believe that, in most situations involving computers, the morality
or immorality of an action is up to the individual to decide, as it would be if computers
were not involved. We have seen, though, that there are many instances in which people
have, without a doubt, acted immorally and unethically.
1 Timothy O'Conner, "Computers Creating Ethical Dilemmas," USA Today Magazine
(September 1995) 7
2 Max Frankel, "Cyberrights," The New York Times Magazine (February 12, 1995) 26
3 O'Conner 7
4 James Coates, "Unabomber Case Underscores an On-Line Evil," Chicago Tribune (April
14, 1996) 5
5 Coates 5
6 O'Conner 7
7 Tom Forester, Computers in the Human Context (Cambridge: The MIT Press,1989) 403
8 Stephen A. Booht, "Doom Virus," Popular Mechanics (June 1995) 51
9 Philip Albinus, "Have You Seen This PC?," Home Office Computing (February 1996) 17
f:\12000 essays\sciences (985)\Computer\Multimedia Presentation Programs.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
COMPUTER MULTIMEDIA
Sam Quesinberry
Computers have come a long way very fast since their start in the 1940s. In the beginning they were mainly used for keeping financial records by banks and insurance companies, and for mathematical computations by engineers and the U.S. Military.
However, exciting new applications have developed rapidly in the last few years. Two of these areas are computer graphics and sound.
Computer graphics is the ability of the computer to display, store and transmit visual information in the form of pictures. Currently there are two main uses for this new ability. One is in the creation of Movies and the other in Computer Games. Computer visual information is also increasingly being used in other computer applications, such as photographic storage, and the Internet.
Computers can also store, transmit and play back sound.
When a picture or a sound is stored on a computer, it is said to be digitized. There are two main ways of digitizing a picture. One is vector graphics, in which the information in the picture is stored as mathematical equations. Engineering drawing applications such as CAD (computer-aided design) use this method. The other method is bit-mapped graphics, in which the computer actually keeps track of every point in the picture and its description. Paint programs use this technique. Drawing programs are usually vector-based and paint programs are usually bit-mapped.
Computer sound is handled in two different ways. The sound can be described digitally and stored as an image (wave format) of the actual sound, or it can be translated into what is called MIDI format, which is chiefly used for music. For a piano, for instance, the information about which key is hit, for how long, and at what intensity is stored and retrieved. This is somewhat like the way an old player piano worked.
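A rough back-of-the-envelope calculation in Python helps show the difference between the two approaches: a wave file stores every sample of the sound, while MIDI stores only note events. The sample rate, note count, and event size below are illustrative assumptions, not measured values.

seconds = 60
sample_rate = 44_100               # CD-quality samples per second (assumed)
bytes_per_sample = 2               # 16-bit mono (assumed)
wave_bytes = seconds * sample_rate * bytes_per_sample

notes = 300                        # a minute of moderately busy piano music (assumed)
bytes_per_midi_event = 6           # rough size of a note-on/note-off pair (assumed)
midi_bytes = notes * bytes_per_midi_event

print(wave_bytes, midi_bytes)      # roughly 5.3 million bytes versus about 1,800 bytes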
Computer graphic applications were originally developed on large computers. The computer hardware and software were developed by individuals and groups working independently. These projects were very expensive and were carried on by large companies and investment groups. Applications which only a few years ago would have cost millions of dollars can now be run on a desktop computer with programs costing under $100.
It is the purpose of this paper to research and examine several areas of computer multimedia by using a typical application programs in that related area.
These areas are:
Paint Programs - Photo Finish -Zsoft
3d Rendering Programs - 3d f/x - Asymetrix
Animation Programs - Video Artist - Reveal
Morphing Programs - Video Artist - Reveal
Sound Recording Programs - MCS music rack - Logitech
Midi Recording Programs - Midisoft recording Session - Logitech
Multimedia Programs - Interactive - HSC software
Paint Programs
One of the first paint programs was SuperPaint, created by Richard Shoup at the Xerox Palo Alto Research Center. To demonstrate a paint program, Photofinish by Zsoft will be used to import and modify a photograph. Photofinish is an inexpensive paint program costing under $50.
First a photograph is scanned into the paint program using a scanner.
The photograph is cleaned up and a title is added.
3 D Rendering Programs
3D rendering programs are used in the movies to create special effects, such as those used in the movie Star Wars. These programs were developed over a period of time and kept getting more advanced. Lucasfilm was one of the first companies to develop 3D rendering programs for computers, and the effects were one of the reasons its productions became so popular. Here is an example of what a 3D rendering program can do. The name of the program that I'm using is 3d f/x by Asymetrix.
Animation Programs
One of the first companies to create animation software was Autodesk. The Disney studios were also among the first to develop animation software. A couple of years ago the Disney computer animation department had only two animators, but now there are 14. Computer animation has greatly reduced the human effort of making cartoons. A full-length Disney film used to require over 600 animators; now it can be done with approximately 125. The first full-length computer-animated movie, Toy Story, came out around 6 months ago. The program that I am using to explore animation is Reveal's Video Artist. I captured an old cartoon from a 16 mm film made in 1913 and used the program to edit and digitize it to floppy disk. The cartoon can now be viewed under Windows using Multimedia Player.
Morphing Programs
Tom Brigham, a programmer and animator at NYIT, astounded the audience at the 1982 SIGGRAPH conference. He had created a video sequence showing a woman distort and transform herself into the shape of a lynx. Thus was born a new technique called "morphing." It was destined to become a required tool for anyone producing computer graphics or special effects in the film or television industry. The morphing program that I am using to demonstrate the technique is Reveal's Morph Editor. The following segment is a clip of my dad being morphed into my sister.
Sound Recording Programs
Wave Files - Computer programs can be used to record and digitize actual sound. These applications were developed at the same time as the graphics applications. The sound is converted from some analog source such as a radio, a tape player, or a live microphone and is stored on one of the computer's mass storage devices, such as a hard disk or floppy disk. Software editors can then be used to edit the wave file. Special effects can be added, such as noise reduction and reverb. The wave editor that I'm using to explore the computer's ability to handle sound is from Logitech. I recorded a segment from an old 78 rpm record and used the editor to clean up the sound. It was a tremendous improvement over the original recording. The following is a view of the editor window with the sound file loaded.
Midi Recording Programs
Midi Files - There is another method by which computers can record sound that is nothing like traditional sound recording. An actual musical instrument can be hooked to the computer, and the computer records the actual notes struck, their duration, intensity, etc. This is an extremely efficient way to record music, known as MIDI. The files created by this process are a fraction of the size of files created by waveform recording. This method may also be used even if there is no MIDI instrument: the notes can be entered or scanned into the computer from a regular piece of sheet music, and the computer is then able to translate these entries into the required MIDI file. The program I used to examine this technique is the Midisoft Recording Session. A piece of sheet music was actually entered into the computer
one note at a time. If a synthesizer is used to play the file, the piece can be turned into an orchestral arrangement. This is a screen shot of the music loaded into the program.
Multimedia Presentation Programs
Finally, this is the class of programs which can be used to tie all the products of the foregoing programs together. Multimedia interactive programs allow the user to combine graphics, animation, sound, and interactive elements into a presentation. These presentations can be slide shows of still images accompanied by music, or sequences of animation. They can allow the user to be passive and merely watch, or permit the user to interact by answering questions or specifying when the next event is to begin. The Internet itself can be thought of as an interactive application, but for the purposes of this paper I am only looking at a computer in a stand-alone configuration. There are many programs which allow one to tie all multimedia elements together, but the one I have is Interactive by HSC Software.
The following are a few of the 100 slides we used to create a slide show of our trip to Olympic National Park. The show was linked to music on a CD-ROM, the Music of Olympia. Once the slide show ran on the computer, it was transferred to video tape by using a VGA-to-television converter.
f:\12000 essays\sciences (985)\Computer\Natural Language Processing.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
There have been high hopes for Natural Language Processing. Natural Language Processing, also known
simply as NLP, is part of the broader field of Artificial Intelligence, the effort towards making machines think.
Computers may appear intelligent as they crunch numbers and process information with blazing speed. In truth,
computers are nothing but dumb slaves who only understand on or off and are limited to exact instructions. But
since the invention of the computer, scientists have been attempting to make computers not only appear intelligent
but be intelligent. A truly intelligent computer would not be limited to rigid computer language commands, but
instead be able to process and understand the English language. This is the concept behind Natural Language
Processing.
The phases a message would go through during NLP would consist of message, syntax, semantics,
pragmatics, and intended meaning. (M. A. Fischer, 1987) Syntax is the grammatical structure. Semantics is the
literal meaning. Pragmatics is world knowledge, knowledge of the context, and a model of the sender. When
syntax, semantics, and pragmatics are applied, accurate Natural Language Processing will exist.
Alan Turing made this prediction, often cited in connection with NLP, in 1950 (Daniel Crevier, 1994, page 9):
"I believe that in about fifty years' time it will be possible to program computers .... to
make them play the imitation game so well that an average interrogator will not have more than
70 per cent chance of making the right identification after five minutes of questioning."
But in 1950, computer technology was limited. Because of these limitations, NLP programs of
that day focused on exploiting the strengths the computers did have. For example, a program called SYNTHEX
tried to determine the meaning of sentences by looking up each word in its encyclopedia. Another early approach
was Noam Chomsky's at MIT. He believed that language could be analyzed without any reference to semantics or
pragmatics, just by looking at the syntax. Neither of these techniques worked. Scientists realized that
their Artificial Intelligence programs did not think like people do, and since people are much more intelligent than
those programs, they decided to make their programs think more like a person would. So in the late 1950s,
scientists shifted from trying to exploit the capabilities of computers to trying to emulate the human brain. (Daniel
Crevier, 1994)
Ross Quillian at Carnegie Mellon wanted to try to program the associative aspects of human memory to
create better NLP programs. (Daniel Crevier, 1994) Quillian's idea was to determine the meaning of a word by the
words around it. For example, look at these sentences:
After the strike, the president sent him away.
After the strike, the umpire sent him away.
Even though these sentences are the same except for one word, they have very different meanings because of the
meaning of the word "strike". Quillian said the meaning of strike should be determined by looking at the subject.
In the first sentence, the word "president" makes the word "strike" mean labor dispute. In the second sentence, the
word "umpire" makes the word "strike" mean that a batter has swung at a baseball and missed.
In 1958, Joseph Weizenbaum had a different approach to Artificial Intelligence, which he discusses in this
quote (Daniel Crevier, 1994, page 133):
"Around 1958, I published my first paper, in the commercial magazine Datamation. I
had written a program that could play a game called "five in a row." It's like ticktacktoe, except
you need rows of five exes or noughts to win. It's also played on an unbounded board; ordinary
coordinate will do. The program used a ridiculously simple strategy with no look ahead, but it
could beat anyone who played at the same naive level. Since most people had never played the
game before, that included just about everybody. Significantly, the paper was entitled: "How to
Make a Computer Appear Intelligent" with appear emphasized. In a way, that was a forerunner
to my later ELIZA, to establish my status as a charlatan or con man. But the other side of the
coin was that I freely started it. The idea was to create the powerful illusion that the computer
was intelligent. I went to considerable trouble in the paper to explain that there wasn't much
behind the scenes, that the machine wasn't thinking. I explained the strategy well enough that
anybody could write that program, which is the same thing I did with ELIZA."
ELIZA was a program written by Joe Weizenbaum which communicated to its user while impersonating a
psychotherapist. Weizenbaum wrote the program to demonstrate the tricky alternatives to having programs look at
syntax, semantics, or pragmatics. One of ELIZA's tricks was mirroring sentences. Another trick was to pick a
sentence from earlier in the dialogue and return it attached to a leading phrase at random intervals. Also, ELIZA
would watch for a list of key words, transform them in some way, and return them attached to a leading sentence. These
tricks worked well in the context of a psychiatrist who encourages patients to talk about their problems and
answers their questions with other questions. However, these same tricks do not work well in other situations.
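The following is a minimal Python sketch of the kind of keyword-and-mirroring trick just described. It is not Weizenbaum's code; the reflection table and leading phrases are invented for illustration.

import random

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
LEADING_PHRASES = ["Why do you say that ", "Tell me more about why "]

def respond(sentence):
    """Mirror the user's sentence and attach a leading phrase, ELIZA-style."""
    words = sentence.lower().rstrip(".!?").split()
    mirrored = " ".join(REFLECTIONS.get(w, w) for w in words)
    return random.choice(LEADING_PHRASES) + mirrored + "?"

print(respond("I am unhappy with my job."))
# e.g. "Why do you say that you are unhappy with your job?"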
In 1970, William Woods, an AI researcher at Bolt, Beranek, and Newman, described an NLP method called the
Augmented Transition Network. (Daniel Crevier, 1994) The idea was to look at the case of each word: agent
(instigator of an event), instrument (stimulus or immediate physical cause of an event), and experiencer (undergoes
the effect of the action). To tell the case, Fillmore put restrictions on the cases, such as requiring an agent to be animate. For
example, in "The heat is baking the cake," the cake is inanimate and therefore the experiencer. The heat would be the
instrument. An ATN could mix syntax rules with semantic props, such as knowing a cake is inanimate. This
worked out better than any other NLP technique to date. ATNs are still used in most modern NLP systems.
Roger Schank, a Stanford researcher, wrote (Daniel Crevier, 1994, page 167):
"Our aim was to write programs that would concentrate on crucial differences in
meaning, not on issues of grammatical structure .... We used whatever grammatical rules were
necessary in our quest to extract meanings from sentences but, to our surprise, little grammar
proved to be relevant for translating sentences into a system of conceptual representations."
Schank reduced all verbs to 11 basic acts. Some of them are ATRANS (to transfer an abstract
relationship), PTRANS (to transfer the physical location of an object), PROPEL (to apply physical force to an
object), MOVE (for its owner to move a body part), MTRANS (to transfer mental information), and MBUILD (to
build new information out of old information). Schank called these basic acts semantic primitives. When his
program saw in a sentence words usually relating to the transfer of possession (such as give, buy, sell, donate, etc.)
it would search for the normal props of ATRANS: the object being transferred, its receiver and original owner, the
means of transfer, and so on. If the program didn't find these props, it would try another possible meaning of the
verb. After successfully determining the meaning of the verb, the program would make inferences associated with
the semantic primitive. For example, an ATRANS rule might be that if someone gets something they want, they
may be happy about it and may use it. (Daniel Crevier, 1994)
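As a small sketch of how a sentence might be reduced to one of Schank's semantic primitives, the Python fragment below maps transfer-of-possession verbs onto an ATRANS frame with its usual props. The verb list and slot names are invented for illustration; Schank's actual conceptual-dependency system was far richer.

ATRANS_VERBS = {"give", "buy", "sell", "donate"}

def analyze(subject, verb, obj, recipient=None):
    """Map a transfer-of-possession verb onto a crude ATRANS frame."""
    if verb in ATRANS_VERBS:
        return {"primitive": "ATRANS", "object": obj,
                "original_owner": subject, "receiver": recipient}
    return {"primitive": "unknown", "verb": verb}

print(analyze("John", "give", "a book", recipient="Mary"))
# {'primitive': 'ATRANS', 'object': 'a book', 'original_owner': 'John', 'receiver': 'Mary'}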
Schank implemented his idea of conceptual dependency in a program called MARGIE (memory, analysis,
response generation in English). MARGIE was a program that analyzed English sentences, turned them into
semantic representations, and generated inferences from them. Take for example: "John went to a restaurant. He
ordered a hamburger. It was cold when the waitress brought it. He left her a very small tip." MARGIE didn't work.
Schank and his colleagues found that "any single sentence lends itself to so many plausible inferences that it was
impossible to isolate those pertinent to the next sentence." For example, from "It was cold when the waitress
brought it" MARGIE might say "The hamburger's temperature was between 75 and 90 degrees, The waitress
brought the hamburger on a plate, She put the plate on a table, etc." The inference that cold food makes people
unhappy would be so far down the line that it wouldn't be looked at and as a result MARGIE wouldn't have
understood the story well enough to answer the question, "Why did John leave a small tip?" While MARGIE
applied syntax and semantics well, it forgot about pragmatics. To solve this problem, Schank moved to Yale and
teamed up with Professor of Psychology Robert Abelson. They realized that most of our everyday activities are
linked together in chains which they called "scripts." (Daniel Crevier, 1994)
In 1975, SAM (Script Applied Mechanism), written by Richard Cullingford, used an automobile-accident
script to make sense out of newspaper reports of such accidents. SAM built internal representations of the articles using
semantic primitives. SAM was the first working natural language processing program. SAM successfully went
from message to intended meaning because it successfully implemented the steps in between: syntax, semantics,
and pragmatics.
Despite the success of SAM, Schank said "real understanding requires the ability to establish connections
between pieces of information for which no prescribed set of rules, or scripts, exist." (Daniel Crevier, 1994, page
167) So Robert Wilensky created PAM (Plan Applier Mechanism). PAM interpreted stories by linking sentences
together through a character's goals and plans.
Here is an example of PAM (Daniel Crevier, 1994):
John wanted money. He got a gun and walked into a liquor store. He told the owner he wanted some
money. The owner gave John the money and John left.
In the process of understanding the story, PAM put itself in the shoes of the participants. From John's
point of view:
I needed to get some dough. So I got myself this gun, and I walked down to the liquor store. I told the
shopkeeper that if he didn't let me have the money then I would shoot him. So he handed it over. Then I left.
From the store owner's point of view:
I was minding the store when a man entered. He threatened me with a gun and demanded all the cash
receipts. Well, I didn't want to get hurt so I gave him the money. Then he escaped.
A new idea from MIT is to grab bits and parts of speech and ask the user for more details, so the
program can understand what it didn't understand before and understand better what it did (G. McWilliams, 1993).
In IBM's current NLP programs, instead of having rules for determining context and meaning, the
program determines its own rules from the relationships between words in its input. For example, the program
could add a new definition to the word "bad" once it realized that it is slang for "incredible." IBM also uses
statistical probability to determine the meaning of a word. IBM's NLP programs also use a sentence-charting
technique. For example, charting the sentence "The boy has left" and storing "the boy" as a noun phrase allows the
computer to see the subject of a following sentence beginning with "He" as "the boy." (G. McWilliams, 1993)
In the 1950s, Noam Chomsky believed that NLP consisted only of syntax. With MARGIE, Roger Schank
added semantics. By 1975, Robert Wilensky's PAM could handle pragmatics, too. And as Joe Weizenbaum did
with ELIZA in 1958, over 35 years later IBM is adding tricks to its NLP programs. Natural Language Processing
has had many successes - and many failures. How well can a computer understand us?
f:\12000 essays\sciences (985)\Computer\Netware Salvage Utility.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
NetWare SALVAGE Utility
One of NetWare's most useful utilities is the SALVAGE utility, which is something of a trade secret. One day a user will delete a couple of files or a complete directory (accidentally, of course), and it will be the job of the LAN administrator to save the day, because the files were the company's financial statements and they were due in a meeting yesterday. The NetWare 3.12 and 4.X SALVAGE utility is an extremely useful and sophisticated tool for recovering these files.
NetWare retains deleted files in the volume where the files originally resided. There they continue to pile up until the deleted files completely saturate this volume. When the volume becomes full with these images of the deleted files, the system begins purging, starting with the files that have been deleted for the longest period of time. The only exception to this is files or directories that have been tagged with the purge attribute. As you can imagine, these hidden deleted files can quickly eat up the space on a hard drive, and the administrator will need to keep an eye on them so that the system is not unduly slowed down by purging to make room for saved and working files. These deleted files can also be purged manually with the SALVAGE utility, which is a great way to make sure that a file you don't want others to see is completely removed from the system.
For a user or administrator to retrieve a file using SALVAGE, the Create right (the right to edit and read a directory area or file) must be assigned to the directory in which the file resides. If the directory still exists, the files are put back into the directory from which they were deleted. If the file being salvaged has the same name as a file that already exists, a prompt will be presented to rename the file being salvaged. Since NetWare keeps track of the files by date and time, several versions of the file may accumulate.
When a directory is deleted, the method for recovery is a bit different. NetWare does not keep track of the directories, only the files. These files are stored in a hidden directory called DELETED.SAV. This directory exists in every volume on a network. The supervisor must go to this directory, where the desired files can be copied to other directories to be completely recovered.
Now that you have a simple explanation of the way the system works, let's look at the actual graphical user interface (GUI) that comes up when you type SALVAGE at the network DOS prompt. The main menu is below.
As you can see, this simple menu is extremely user-friendly. Like all NetWare utilities, the only keys used are the Delete, Insert, F5, Escape and Enter. When you select the View/Recover Deleted Files option, a new menu appears prompting for the file string to locate. Like DOS, wild cards can be used or you can type the file name. The GUI is presented on the following page.
The default for the search string is "*", the all wild card, which will display all the files deleted in the chosen directory. An example of this listing is presented below, showing the files that were deleted in a particular directory. You can very simply undelete one of these files by highlighting the file (marking multiples with the F5 key) and pressing the Enter key. A message box then appears prompting you to verify the file(s) to be recovered. Selecting the YES command button will recover the file. It is as simple as that.
If you need to change to a different directory, all you have to do is select the Select Current Directory option from the main menu. This will bring up a current path display window and a network directory window in which to make changes to the path. As you look at the example below, you will see that all you need to do is highlight the Network Directories window option and press the Enter key until the path window displays the path you want. Once at the desired path, press the Escape key to go back to the main menu, select the View/Recover Deleted Files option, and do the same as before.
Well, this is all there is to recovering a file from a network using NetWare. It also is another great example of how things that are deleted from a network drive are still accessible, so if you want a very important company document to be purged, you will have to delete it from SALVAGE or mark it with the purge attribute.
f:\12000 essays\sciences (985)\Computer\Neural Networks.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Neural Networks
A neural network also known as an artificial neural network provides a unique computing architecture whose potential has only begun to be tapped. They are used to address problems that are intractable or cumbersome with traditional methods. These new computing architectures are radically different from the computers that are widely used today. ANN's are massively parallel systems that rely on dense arrangements of interconnections and surprisingly simple processors (Cr95, Ga93).
Artificial neural networks take their name from the networks of nerve cells in the brain. Although a great deal of biological detail is eliminated in these computing models, the ANN's retain enough of the structure observed in the brain to provide insight into how biological neural processing may work (He90).
Neural networks provide an effective approach for a broad spectrum of applications. Neural networks excel at problems involving patterns, which include pattern mapping, pattern completion, and pattern classification (He95). Neural networks may be applied to translate images into keywords or even translate financial data into financial predictions (Wo96).
Neural networks utilize a parallel processing structure that has large numbers of processors and many interconnections between them. These processors are much simpler than typical central processing units (He90). In a neural network, each processor is linked to many of its neighbors so that there are many more interconnections than processors. The power of the neural network lies in the tremendous number of interconnections (Za93).
ANN's are generating much interest among engineers and scientists. Artificial neural network models contribute to our understanding of biological models. They also provide a novel type of parallel processing that has powerful capabilities and potential for creative hardware implementations, meets the demand for fast computing hardware, and provides the potential for solving application problems (Wo96).
Neural networks excite our imagination and relentless desire to understand the self, and in addition, equip us with an assemblage of unique technological tools. But what has triggered the most interest in neural networks is that models similar to biological nervous systems can actually be made to do useful computations, and furthermore, the capabilities of the resulting systems provide an effective approach to previously unsolved problems (Da90).
Neural network architectures are strikingly different from traditional single-processor computers. Traditional Von Neumann machines have a single CPU that performs all of its computations in sequence (He90). A typical CPU is capable of a hundred or more basic commands, including additions, subtractions, loads, and shifts. The commands are executed one at a time, at successive steps of a time clock. In contrast, a neural network processing unit may do only one, or, at most, a few calculations. A summation function is performed on its inputs and incremental changes are made to parameters associated with interconnections. This simple structure nevertheless provides a neural network with the capabilities to classify and recognize patterns, to perform pattern mapping, and to be useful as a computing tool (Vo94).
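To make the contrast concrete, here is a minimal Python sketch of the simple kind of calculation a neural network processing unit performs as described above: a weighted sum of its inputs followed by a simple output function. The numbers are arbitrary illustrative values.

def unit_output(inputs, weights, threshold=0.0):
    """Compute a single processing unit's output: weighted sum, then a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))   # summation over incoming connections
    return 1 if total > threshold else 0                  # very simple output function

print(unit_output([1, 0, 1], [0.5, -0.3, 0.2]))   # 1, since the weighted sum 0.7 exceeds the threshold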
The processing power of a neural network is measured mainly by the number of interconnection updates per second. In contrast, Von Neumann machines are benchmarked by the number of instructions that are performed per second, in sequence, by a single processor (He90). Neural networks, during their learning phase, adjust parameters associated with the interconnections between neurons. Thus, the rate of learning is dependent on the rate of interconnection updates (Kh90).
Neural network architectures depart from typical parallel processing architectures in some basic respects. First, the processors in a neural network are massively interconnected. As a result, there are more interconnections than there are processing units (Vo94). In fact, the number of interconnections usually far exceeds the number of processing units. State-of-the-art parallel processing architectures typically have a smaller ratio of interconnections to processing units (Za93). In addition, parallel processing architectures tend to incorporate processing units that are comparable in complexity to those of Von Neumann machines (He90). Neural network architectures depart from this organization scheme by containing simpler processing units, which are designed for summation of many inputs and adjustment of interconnection parameters.
The two primary attractions that come from the computational viewpoint of neural networks are learning and knowledge representation. A lot of researchers feel that machine learning techniques will give the best hope for eventually being able to perform difficult artificial intelligence tasks (Ga93).
Most neural networks learn from examples, just like children learn to recognize dogs from examples of dogs (Wo96). Typically, a neural network is presented with a training set consisting of a group of examples from which the network can learn. These examples, known as training patterns, are represented as vectors, and can be taken from such sources as images, speech signals, sensor data, and diagnosis information (Cr95, Ga93).
The most common training scenarios utilize supervised learning, during which the network is presented with an input pattern together with the target output for that pattern. The target output usually constitutes the correct answer, or correct classification for the input pattern. In response to these paired examples, the neural network adjusts the values of its internal weights (Cr95). If training is successful, the internal parameters are then adjusted to the point where the network can produce the correct answers in response to each input pattern (Za93).
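The following is a toy Python example of supervised learning with paired input/target patterns, in the spirit of the training procedure just described. It uses a simple perceptron-style update rule to learn the logical OR of two inputs; real networks and learning rules are considerably more elaborate, and the learning rate and pass count here are arbitrary choices.

training_set = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # inputs paired with targets
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1                                     # learning rate

for _ in range(20):                            # repeated passes over the training set
    for inputs, target in training_set:
        output = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
        error = target - output                # difference between target and actual output
        weights = [w + rate * error * x for x, w in zip(inputs, weights)]
        bias += rate * error                   # internal parameters adjusted toward the target

print([1 if sum(x * w for x, w in zip(i, weights)) + bias > 0 else 0
       for i, _ in training_set])              # [0, 1, 1, 1] once training succeeds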
Because they learn by example, neural networks have the potential for building computing systems that do not need to be programmed (Wo96). This reflects a radically different approach to computing compared to traditional methods, which involve the development of computer programs. In a computer program, every step that the computer executes is specified in advance by the programmer. In contrast, neural nets begin with sample inputs and outputs, and learn to provide the correct outputs for each input (Za93).
The neural network approach does not require human identification of features. It also doesn't require human development of algorithms or programs that are specific to the classification problem at hand. All of this will suggest that time and human effort can be saved (Wo96). There are drawbacks to the neural network approach, however. The time to train the network may not be known, and the process of designing a network that successfully solves an applications problem may be involved. The potential of the approach, however, appears significantly better than past approaches (Ga93).
Neural network architectures encode information in a distributed fashion. Typically the information that is stored in a neural network is shared by many of its processing units. This type of coding is in stark contrast to traditional memory schemes, where particular pieces of information are stored in particular locations of memory. Traditional speech recognition systems, for example, contain a lookup table of template speech patterns that are compared one by one to spoken inputs. Such templates are stored in a specific location of the computer memory. Neural networks, in contrast, identify spoken syllables by using a number of processing units simultaneously. The internal representation is thus distributed across all or part of the network. Furthermore, more than one syllable or pattern may be stored at the same time by the same network (Ze93).
Neural networks have far-reaching potential as building blocks in tomorrow's computational world. Already, useful applications have been designed, built, and commercialized, and much research continues in hopes of extending this success (He95).
Neural network applications emphasize areas where they appear to offer a more appropriate approach than traditional computing has. Neural networks offer possibilities for solving problems that require pattern recognition, pattern mapping, dealing with noisy data, pattern completion, associative lookups, and systems that learn or adapt during use (Fr93, Za93). Examples of specific areas where these types of problems appear include speech synthesis and recognition, image processing and analysis, sonar and seismic signal classification, and adaptive control. In addition, neural networks can perform some knowledge processing tasks and can be used to implement associative memory (Kh90). Some optimization tasks can be addressed with neural networks. The range of potential applications is impressive.
The first highly developed application was handwritten character identification. A neural network is trained on a set of handwritten characters, such as printed letters of the alphabet. The network training set then consists of the handwritten characters as inputs together with the correct identification for each character. At the completion of training, the network identifies handwritten characters in spite of the variations (Za93).
Another impressive application study involved NETtalk, a neural network that learns to produce phonetic strings, which in turn specify pronunciation for written text. The input to the network in this case was English text in the form of successive letters that appear in sentences. The output of the network was phonetic notation for the proper sound to produce given the text input. The output was linked to a speech generator so that an observer could hear the network learn to speak. This network, trained by Sejnowski and Rosenberg, learned to pronounce English text with a high level of accuracy (Za93).
Neural network studies have also been done for adaptive control applications. A classic implementation of a neural network control system was the broom-balancing experiment, originally done by Widrow and Smith in 1963. The network learned to move a cart back and forth in such a way that a broom balanced upside-down on its handle tip on the cart remained upright (Da90). More recently, application studies were done for teaching a robotic arm how to get to its target position, and for steadying a robotic arm. Research was also done on teaching a neural network to control an autonomous vehicle using simulated, simplified vehicle control situations (Wo96).
Neural networks are expected to complement rather than replace other technologies. Tasks that are done well by traditional computer methods need not be addressed with neural networks, but the range of technologies that neural networks can complement is far-reaching (He90). For example, expert systems and rule-based knowledge-processing techniques are adequate for some applications, although neural networks have the ability to learn rules more flexibly. More sophisticated systems may be built in some cases from a combination of expert systems and neural networks (Wo96). Sensors for visual or acoustic data may be combined in a system that includes a neural network for analysis and pattern recognition. Robotics and control systems may use neural network components in the future. Simulation techniques, such as simulation languages, may be extended to include structures that allow us to simulate neural networks. Neural networks may also play a new role in the optimization of engineering designs and industrial resources (Za93).
Many design choices are involved in developing a neural network application. The first choice is the general area of application; usually this is an existing problem that appears amenable to solution with a neural network. Next, the problem must be defined specifically enough that inputs and outputs to the network can be selected. Choices for inputs and outputs involve identifying the types of patterns to go into and out of the network, and the researcher must design how those patterns are to represent the needed information. Next, internal design choices must be made, including the topology and size of the network (Kh90). The number of processing units is specified, along with the specific interconnections that the network is to have. Processing units are usually organized into distinct layers, which are either fully or partially interconnected (Vo95).
There are additional choices for the dynamic activity of the processing units. A variety of neural net paradigms are available. Each paradigm dictates how the readjustment of parameters takes place. This readjustment results in learning by the network. Next there are internal parameters that must be tuned to optimize the ANN design (Kh90). One such parameter is the learning rate from the back-error propagation paradigm. The value of this parameter influences the rate of learning by the network, and may possibly influence how successfully the network learns (Cr95). There are experiments that indicate that learning occurs more successfully if this parameter is decreased during a learning session. Some paradigms utilize more than one parameter that must be tuned. Typically, network parameters are tuned with the help of experimental results and experience on the specific applications problem under study (Kh90).
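As a rough illustration of the internal design choices described in the last two paragraphs (layer sizes, full interconnection between adjacent layers, and a learning rate that is decreased during the session), the following Python sketch trains a tiny fully connected network by back-error propagation on an invented toy problem. The layer sizes, decay factor, and data are assumptions chosen only to show the mechanics, not a reconstruction of any system discussed here.

    # Minimal sketch: a fully connected 2-input, 3-hidden-unit, 1-output
    # network trained by back-error propagation, with the learning rate
    # decreased as training proceeds. All sizes and values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.5, size=(3, 3))   # (2 inputs + bias) -> 3 hidden units
    W2 = rng.normal(scale=0.5, size=(4, 1))   # (3 hidden units + bias) -> 1 output

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])    # toy target pattern

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def with_bias(a):
        return np.hstack([a, np.ones((a.shape[0], 1))])

    learning_rate = 0.5                        # initial learning rate
    for epoch in range(10000):
        hidden = sigmoid(with_bias(X) @ W1)    # forward pass
        output = sigmoid(with_bias(hidden) @ W2)
        error = output - y
        grad_out = error * output * (1 - output)              # output-layer delta
        grad_hid = (grad_out @ W2[:3].T) * hidden * (1 - hidden)
        W2 -= learning_rate * with_bias(hidden).T @ grad_out  # adjust weights
        W1 -= learning_rate * with_bias(X).T @ grad_hid
        learning_rate *= 0.9999                # decrease the rate during the session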
Finally, the selection of training data presented to the neural network influences whether or not the network learns a particular task. Like a child, how well a network will learn depends on the examples presented. A good set of examples, which illustrate the tasks to be learned well, is necessary for the desired learning to take place. The set of training examples must also reflect the variability in the patterns that the network will encounter after training (Wo96).
Although a variety of neural network paradigms have already been established, there are many variations currently being researched. Typically these variations add more complexity to gain more capabilities (Kh90). Examples of additional structures under investigation include the incorporation of delay components, the use of sparse interconnections, and the inclusion of interaction between different interconnections. More than one neural net may be combined, with outputs of some networks becoming the inputs of others. Such combined systems sometimes provide improved performance and faster training times (Da90).
Implementations of neural networks come in many forms. The most widely used implementations of neural networks today are software simulators. These are computer programs that simulate the operation of the neural network. The speed of the simulation depends on the speed of the hardware upon which the simulation is executed. A variety of accelerator boards are available for individual computers to speed the computations (Wo96).
Simulation is key to the development and deployment of neural network technology. With a simulator, one can establish most of the design choices in a neural network system. The choice of inputs and outputs can be tested as well as the capabilities of the particular paradigm used (Wo96).
Implementations of neural networks are not limited to computer simulation, however. An implementation could be an individual calculating the changing parameters of the network using pencil and paper. Another implementation would be a collection of people, each one acting as a processing unit, using a hand-held calculator (He90). Although these implementations are not fast enough to be effective for applications, they are nevertheless methods for emulating a parallel computing structure based on neural network architectures (Za93).
One challenge to neural network applications is that they require more computational power than readily available computers have, and the tradeoffs in sizing up such a network are sometimes not apparent from a small-scale simulation. The performance of a neural network must be tested using a network the same size as that to be used in the application (Za93).
The response of an ANN may be accelerated through the use of specialized hardware. Such hardware may be designed using analog computing technology or a combination of analog and digital. Development of such specialized hardware is underway, but there are many problems yet to be solved. Such technological advances as custom logic chips and logic-enhanced memory chips are being considered for neural network implementations (Wo96).
No discussion of implementation would be complete without mention of the original neural networks: biological nervous systems. These systems provided the first implementation of neural network architectures. Both artificial and biological systems are based on parallel computing units that are heavily interconnected, and both include feature detectors, redundancy, massive parallelism, and modulation of connections (Vo94, Gr93).
However, the differences between biological systems and artificial neural networks are substantial. Artificial neural networks usually have regular interconnection topologies, based on a fully connected, layered organization. While biological interconnections do not precisely fit the fully connected, layered organization model, they nevertheless have a defined structure at the systems level, including specific areas that aggregate synapses and fibers, and a variety of other interconnections (Lo94, Gr93). Although many connections in the brain may seem random or statistical, it is likely that considerable precision exists at the cellular and ensemble levels as well as the system level. Another difference between artificial and biological systems arises from the fact that the brain organizes itself dynamically during a developmental period, and can permanently fix its wiring based on experiences during certain critical periods of development. This influence on connection topology does not occur in current ANNs (Lo94, Da90).
The future of neurocomputing can benefit greatly from biological studies. Structures found in biological systems can inspire new design architectures for ANN models (He90). Similarly, biology and cognitive science can benefit from the development of neurocomputing models. Artificial neural networks do, for example, illustrate ways of modeling characteristics that appear in the human brain (Le91). Conclusions, however, must be carefully drawn to avoid confusion between the two types of systems.
REFERENCES
[Cr95] Cross, et al., "Introduction to Neural Networks", Lancet, Vol. 346 (October 21, 1995), p. 1075.
[Da90] Dayhoff, J. E. Neural Networks: An Introduction, Van Nostrand Reinhold, New York, 1990.
[Fr93] Franklin, Hardy, "Neural Networking", Economist, Vol. 329 (October 9, 1993), p. 19.
[Ga93] Gallant, S. I. Neural Network Learning and Expert Systems, MIT Press, Massachusetts, 1993.
[Gr93] Gardner, D. The Neurobiology of Neural Networks, MIT Press, Massachusetts, 1993.
[He90] Hecht-Nielsen, R. Neurocomputing, Addison-Wesley Publishing Company, Massachusetts, 1990.
[He95] Helliar, Christine, "Neural Computing", Management Accounting, Vol. 73 (April 1, 1995), p. 30.
[Kh90] Khanna, T. Foundations of Neural Networks, Addison-Wesley Publishing Company, Massachusetts, 1990.
[Le91] Levine, D. S. Introduction to Neural & Cognitive Modeling, Lawrence Erlbaum Associates Publishers, New Jersey, 1991.
[Lo94] Loofbourrow, Tod, "When Computers Imitate the Workings of Brain", Boston Business Journal, Vol. 14 (June 10, 1994), p. 24.
[Vo94] Vogel, William, "Minimally Connective, Auto-Associative, Neural Networks", Connection Science, Vol. 6 (January 1, 1994), p. 461.
[Wo96] Internet Information.
http://www.mindspring.com/~zsol/nnintro.html
http://ourworld.compuserve.com/homepages/ITechnologies/
http://sharp.bu.edu/inns/nn.html
http://www.eeb.ele.tue.nl/neural/contents/neural_networks.html
http://www.ai.univie.ac.at/oefai/nn/
http://www.nd.com/welcome/whatisnn.htm
http://www.mindspring.com/~edge/neural.html
http://vita.mines.colorado.edu:3857/lpratt/applied-nnets.html
[Za93] Zahedi, F. Intelligent Systems for Business: Expert Systems with Neural Networks, Wadsworth Publishing Company, California, 1993.
f:\12000 essays\sciences (985)\Computer\None.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Organic Molecules Challenge Silicon's Reign as King of Semiconductors
There is a revolution fomenting in the semiconductor industry. It may take 30 years or more to reach perfection, but when it does the advance may be so great that today's computers will be little more than calculators compared to what will come after. The revolution is called molecular electronics, and its goal is to depose silicon as king of the computer chip and put carbon in its place.
The perpetrators are a few clever chemists trying to use pigments, proteins, polymers, and other organic molecules to carry out the same tasks that microscopic patterns of silicon and metal do now. For years these researchers worked in secret, mainly at their blackboards, plotting and planning. Now they are beginning to conduct small forays in the laboratory, and their few successes to date lead them to believe they are on the right track.
"We have a long way to go before carbon-based electronics replace silicon-based electronics, but we can see now that we hope to revolutionize computer design and performance," said Robert R. Birge, a professor of chemistry, Carnegie-Mellon University, Pittsburgh. "Now it's only a matter of time, hard work, and some luck before molecular electronics start having a noticeable impact."
Molecular electronics is so named because it uses molecules to act as the "wires" and "switches" of computer chips. Wires may someday be replaced by polymers that conduct electricity, such as polyacetylene and polyphenylene sulfide. Another candidate might be organometallic compounds such as porphyrins and phthalocyanines, which also conduct electricity. When crystallized, these flat molecules stack like pancakes, and the metal ions in their centers line up with one another to form a one-dimensional wire.
Many organic molecules can exist in two distinct stable states that differ in some measurable property and are interconvertible. These could be the switches of molecular electronics. For example, bacteriorhodopsin, a bacterial pigment, exists in two optical states: one state absorbs green light, the other orange. Shining green light on the green-absorbing state converts it into the orange state and vice versa. Birge and his coworkers have developed high-density memory drives using bacteriorhodopsin.
Although the idea of using organic molecules may seem far-fetched, it happens every day throughout nature. "Electron transport in photosynthesis, one of the most important energy-generating systems in nature, is a real-world example of what we're trying to do," said Phil Seiden, manager of molecular science, IBM, Yorktown Heights, N.Y.
Birge, who heads the Center for Molecular Electronics at Carnegie-Mellon, said two factors are driving this developing revolution: more speed and less space. "Semiconductor chip designers are always trying to cram more electronic components into a smaller space, mostly to make computers faster," he said. "And they've been quite good at it so far, but they are going to run into trouble quite soon."
A few years ago, for example, engineers at IBM made history when they built a memory chip with enough transistors to store a million bytes of information, the megabyte. It came as no big surprise. Nor did it when they came out with a 16-megabyte chip. Chip designers have been cramming more transistors into less space since Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor first showed how to put multitudes of electronic components on a slab of silicon.
But 16 megabytes may be near the end of the road. As bits get smaller and closer together, "crosstalk" between them tends to degrade their performance. If the components were pushed any closer, they would short circuit. Physical limits have triumphed over engineering.
That is when chemistry will have its day. Carbon, the element common to all forms of life, will become the element of computers too. "That is when we see electronics based on inorganic semiconductors, namely silicon and gallium arsenide, giving way to electronics based on organic compounds," said Scott E. Rickert, associate professor of macromolecular science, Case Western Reserve University, Cleveland, and head of the school's Polymer Microdevice Laboratory.
"As a result," added Rickert, "we could see memory chips store billions of bytes of information and computers that are thousands times faster. The science of molecular electronics could revolutionize computer design."
But even if it does not, the research will surely have a major impact on organic chemistry. "Molecular electronics presents very challenging intellectual problems in organic chemistry, and when people work on challenging problems they often come up with remarkable, interesting solutions," said Jonathan S. Lindsey, assistant professor of chemistry, Carnegie-Mellon University. "Even if the whole field falls through, we'll still have learned a remarkable amount more about organic compounds and their physical interactions than we know now. That's why I don't have any qualms about pursuing this research."
Moreover, many believe that industries will benefit regardless of whether an organic-based computer chip is ever built. For example, Lindsey is developing an automated system, as well as the chemistry to go along with it, for synthesizing complex organic compounds analogous to the systems now available for peptide and nucleotide synthesis. And Rickert is using technology he developed for molecular electronic applications to make gas sensors that are both a thousand times faster and more sensitive than conventional sensors.
For now, the molecular electronics revolution is in the formative stage, and most of the investigations are still basic more than applied. One problem with which researchers are beginning to come to grips, though, is determining the kinds of molecules needed to make the transistors and other electronic components that will go into molecular electronic devices. Some of the molecules are like bacteriorhodopsin in that their two states flip back and forth when exposed to different wavelengths of light. These molecules would be the equivalent of an optical switch in which one state is on and the other state is off. Optical switches have been difficult to make from standard semiconductors.
Bacteriorhodopsin is the light-harvesting pigment of purple bacteria living in salt marshes outside San Francisco. The compound consists of a pigment core surrounded by a protein that stabilizes the pigment. Birge has capitalized on the clear-cut distinction between the two states of bacteriorhodopsin to make readable-writable optical memory devices. Laser disks, by contrast, are read-only optical memory devices; once encoded, the data cannot be changed.
Birge has been able to form a thin film of bacteriorhodopsin on quartz plates that can then be used as optical memory disks. The film consists of a thousand one-molecule thick layers deposited one layer at a time using the Langmuir-Blodgett technique. A quartz plate is dipped into water whose surface is covered with bacteriorhodopsin. When the plate is withdrawn at a certain speed, a monolayer of rhodopsin adheres to the plate with all the molecules oriented in the same direction. Repeating this process deposits a second layer, then a third, and so on.
Information is stored by assigning 0 to the green state and 1 to the orange state. Miniature lasers of the type used in fiber-optic communications devices are used to switch between the two states.
Irradiating the disk with a green laser converts the green state to the orange state, storing a 1. Resetting the bit is accomplished by irradiating the same small area of the disk with a red laser. Data stored on the disk are read by using both lasers: the disk would be scanned with the red laser, and any bit with a value of 1 would be reset using the green laser.
This is analogous to the way in which both magnetic and electrical memories are read today, but with one important difference: "Because the two states take only five picoseconds (five trillionths of a second) to flip back and forth, information storage and retrieval are much faster than anything you could ever do magnetically or electrically," explained Birge.
In theory, each pigment molecule could store one bit of information. In practice, however, approximately 100,000 molecules are used. The laser beam has a diameter of approximately 10 molecules and penetrates through the 1,000-molecule-thick layer. Although this reduces the amount of information that can be stored on each disk, it does provide fidelity through redundancy.
"We can have half the molecules or more in a disk fall apart and there would still be enough excited by the laser at each spot to provide accurate data storage," said Birge. And even using 100,000 molecules per data bit, an old 5.25 inch floppy disk could store well over 500 megabytes of data.
One drawback to this system is that bacteriorhodopsin's two states are only stable at liquid nitrogen temperatures, -192°C. But Birge does not see this as anything more than a short-term problem. "We're now using genetic engineering to modify the protein part of the molecule so that it will stabilize the two states at room temperature," he said. "Based on outstanding work, we don't think this will be a problem."
Faster, higher-density disk storage is a laudable goal, but the big stakes are in improving on semiconductor components. Birge, for example, is developing a random access chip using the bacteriorhodopsin system. Instead of having millions of transistors wired together on a slab of silicon, there would be millions of tiny lasers pointed at a film of bacteriorhodopsin. "These RAM chips would actually be a little bigger than what we have," he said, "but they would still be 1,000 times faster because the molecular components work so much faster than ones made of semiconductor materials."
Recently, Theodore O. Poehler, director of research at the Johns Hopkins Applied Physics Laboratory, Laurel, Md., and Richard S. Potember, a senior chemist there, built a working four-byte RAM chip using a molecular charge-transfer system. Four bytes may seem crude compared to the million-byte chip built by IBM, but the first semiconductor chip, built by Texas Instruments' Kilby in 1959, was also crude compared to today's chips.
Poehler and Potember's system also uses laser light to activate the molecular switches, but the chemistry is much different from Birge's. In the Carnegie-Mellon system, light causes an electron on the bacteriorhodopsin to move into a higher energy level within the same molecule. This changes its absorption spectrum. In the Hopkins system, light causes an electron to transfer between two different molecules, one called an electron donor, the other an electron acceptor. This is known as a charge-transfer reaction, and researchers in several laboratories are designing devices using this type of molecular switch.
In their system, Poehler and Potember use compounds formed from either copper or silver (the electron donor) and tetracyanoquinodimethane (TCNQ) or various derivatives (the electron acceptor). The researchers first deposit the metal onto a substrate, which could be either a silicon or a plastic slab. Next, they deposit a solution of the organic electron acceptor onto the metal and heat it gently, causing a reaction to occur and evaporating the solvent.
In the equilibrium state between these two molecular components, an electron is transferred from copper to TCNQ, forming a positive metal ion and a negative TCNQ ion. Irradiating this complex with light from an argon laser causes the reverse reaction to occur, forming neutral metal and neutral TCNQ.
Two measurable changes accompany this reaction. One is that the laser-lit area changes color, from blue to a pale yellow if the metal is copper, or from violet if it is silver. This change is easily detected using the same or another laser. Thus, metal-TCNQ films, like those made from bacteriorhodopsin, could serve as optical memory storage devices. Poehler said that they have already built several such devices and are now testing their performance. They work at room temperature.
The other change that occurs, however, is more like those that take place in standard microelectronic switches. When an electric field is applied to the organometallic film, it becomes conducting in the irradiated area, just as a semiconductor does when an electric field is applied to it.
Erasing the data, or closing the switch, is accomplished using any low-intensity laser, including carbon dioxide, neodymium yttrium aluminum garnet, or gallium arsenide devices. The tiny amount of heat generated by the laser beam causes the metal and TCNQ to return to their equilibrium, non-conducting state. Turning off the applied voltage also returns the system to its non-conducting state.
The Hopkins researchers found they could tailor the on/off behavior of this system by changing the electron acceptor. Using relatively weak electron acceptors, such as dimethoxy-TCNQ, produced organometallic films with a very sharp on/off behavior. But if a strong electron acceptor such as tetrafluoro-TCNQ is used, the film remains conductive even when the applied field is removed. This effect can last from several minutes to several days; the stronger the electron acceptor, the longer the memory effect.
Poehler and his colleagues are now working to optimize the electrical and optical behavior of these materials. They have found, for example, that films made with copper last longer than those made of silver. In addition, they are testing various substrates and coatings to further stabilize these systems. "We know the system works," Poehler said. "Now we're trying to develop it into a system that will work in microelectronics applications."
At Case Western, Rickert is also trying to take good organic chemistry and turn it into something workable in microelectronics. He and his coworkers have found that, using Langmuir-Blodgett techniques, they can make polymer films that actually look and behave like metal foils. "The polymer molecules are arranged in a very regular, ordered array, as if they were crystalline," said Rickert.
These foils, made from polymers such as polyvinylstearate, behave much as metal oxide films do in standard semiconductor devices, but transistors made with the organic foils are 20 percent faster than their inorganic counterparts and require much less energy to make and process. Early in 1986, Rickert made a discovery about these films that could have a major impact on the chemical industry long before any other aspect of molecular electronics. "The electrical behavior of these foils is very sensitive to environmental changes such as temperature, pressure, humidity, and chemical composition," he said. "As a result, they make very good chemical sensors, better than any sensor yet developed."
He has been able to develop an integrated sensor that to date can measure parts per billion concentrations of nitrogen oxides, carbon dioxide, oxygen, and ammonia. Moreover, it can measure all four simultaneously.
Response times for the new "supersniffer," as Rickert calls the sensor, are in the millisecond range, compared to tens of seconds for standard gas sensors. Recovery times are faster too: under five seconds, compared to minutes or hours. The Case Western team is now using polymer foils as electrochemical and biochemical detectors.
In spite of such successes, molecular electronics researchers point out that molecular electronic devices (MEDs) will never totally replace those made of silicon and other inorganic semiconductors. "Molecular electronics will never make silicon technology obsolete," said Carnegie-Mellon's Birge. "The lasers we will need, for example, will probably be built from gallium arsenide crystals on silicon wafers.
"But molecular electronic devices will replace many of those now made with silicon and the combination of the two technologies should revolutionize computer design and function."
f:\12000 essays\sciences (985)\Computer\Nonverbal comm.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CHAPTER 1:
Rationale and Literature Review
Magnafix says, "Have you figured out the secret entrance to
Kahn Draxen's castle?"
Newtrik sighs deeply.
Newtrik says, "I think so, but I haven't found the stone key yet!"
Magnafix grins mischievously.
Magnafix gives a stone key to Newtrik.
Newtrik smiles happily.
Newtrik shakes hands with Magnafix.
Newtrik says, "Thanks!"
Magnafix grins broadly and says, "No problem..."
Newtrik leaves west.
Introduction
Purpose
The purpose of this thesis is to investigate the communicative phenomena to be found in those
environments known as Internet MUDs, or Multi-User Dimensions. These text-based virtual realities are
presently available to students and faculty at most learning institutions, as well as anyone with a computer
and a modem. Though the term "virtual reality" has become connected for many with visions of fancy
headgear and million dollar gloves, MUDs require no such hardware. They are, however, a form of
virtual reality, "because they construct enduring places, objects, and user identities. These objects have
characteristics that define and constrain how users can interact with them," (Holmes & Dishman, 1994,
p. 6). Having been created in their most rudimentary form nearly two decades ago, the technology that
supports MUD interaction is well developed and has spawned a new variety of communicative
environment, one that thousands if not millions of users have found fiercely compelling.
Since MUDs are generally restricted to text-based interaction (some support ANSI codes, and the
graphical MUDs are gaining popularity), one might expect that the interactions therein are characterized
by a lack of regulating feedback, dramaturgical weakness, few status cues, and social anonymity, as
Kiesler and her colleagues have suggested (Kiesler, Siegal, & McGuire, 1984). While these characteristics
may be readily attributable to the majority of interactions within experiments on computer conferencing
and electronic mail, such is not the case for MUDs, as each (there are hundreds) is a rich culture unto
itself, as will be shown. This thesis is meant to explore the modalities by which MUD users avoid the
drawbacks mentioned above, specifically, how nonverbal communication takes place in a virtual world
composed solely of words.
Background
History of network computing
The first computer network was created in the late 1960s in an effort by the Department of Defense to
link multiple command sites to one another, thus ensuring that central command could be carried on
remotely, if one or several were disabled or destroyed. Once the hardware was installed, the military
allowed educational institutions to take advantage of the research resources inherent in multiple site
networking. This interlaced network of computer connections spread quickly, and in the early 1980's, the
network was divided into MILNET, for strictly military uses, and ARPANET, which, with the advent of
satellite communications and global networking, became the Internet (Reid, 1993).
On a smaller scale, throughout the 1970's, various corporations developed their own computer networks
for intra-organizational interaction. E-mail and computer conferencing were created, useful for
information exchange, but asynchronous (i.e., messages are stored for later retrieval by other users,
rather than the synchronous co-authoring of messages) and thus less interpersonal than MUDs would
later become.
At the same time as this conferencing research was being done, another group of programmers was
involved in the creation of text-based adventure games in which a user would wander through a
textually-depicted maze, occasionally encountering programmed foes with whom to do battle. These first
single user adventure games, developed in the early 1970's, expanded the world's notion of computers
from mere super-cooled punch-card-munching behemoths to a more user-friendly conception of
computers as toys and even friends.
Inevitably, the networking technology and the game technology crossed paths. In 1979, Richard Bartle
and Roy Trubshaw developed the first MUD (called "MUD", for Multi-User Dungeon; now, the term
MUD is commonly accepted as a generic term for Multi-User Dimensions of many varieties) at Essex
University. This original game became enormously popular with the students at Essex, to whom its use
was restricted at first. As various technological barriers were toppled, access to "MUD" was granted to a
widening circle of users in the United Kingdom, which eventually prompted two results. First, several of
the "MUD" players wrote their own variations of the game. Second, the computer games magazines took
note and produced a flurry of articles about "MUD" in the early 1980's (Reid, 1993, Bartle, 1990).
These two results are related in that they brought about an exponential growth in the Multi-User
Dimension community. By 1989, there were quite a few families of MUD programming technology, each
designed with different goals in mind. Many of these technologies sought to distinguish themselves from
their brethren by adopting new acronyms (as well as new programming approaches), such as MUSH
(Multi-User Shared Hallucination), MUSE (Multi-User Simulated Environment), MOO (MUD,
Object-Oriented), DUM (Depend Upon Mud (forever)), MAGE (Multi-Actor Gaming Environment), and
MUCK (Multi User C Kernel).
At the time of this writing, there are an estimated five hundred publicly accessible MUDs (Turkle, 1995,
p. 11). There also exist an unknown number of private MUDs, and commercial "pay-for-play" MUDs.
These numbers change from week to week, as MUDs die out for various reasons quite frequently (e.g., a
MUD running on a university computer may suddenly lose the right to do so -- especially if the university
was not informed of such use). Indeed, "large MUDs can be opened from scratch by spending a few
hours with FTP," (Koster, 1996), and hence can expire shortly thereafter due to lack of interest.
However, many MUDs survive for years, as evidenced by such hugely popular MUDs as Ancient
Anguish, DragonMUD, and LambdaMOO, each of which boasts over seven thousand participants.
It must be noted, however, that even though the rate at which people come on and stay on the Net is
increasing, and shows no signs of slowing (Sellers, 1996), MUDs have remained as one of the
least-frequented portions of the Internet. Even with articles published in such mainstream publications as
Time (September 13, 1993), The Atlantic (September 1993), The Wall Street Journal (September 15,
1995), MacUser (November 1995), Technology Review (July 1994), and The Village Voice (December
21, 1993), even the most cyber-savvy of citizens has likely not experienced a MUD. There are several
reasons for this. First of all, MUDs have been rather insular, almost underground, in their marketing;
there is a single USENET newsgroup dedicated to the announcement of new MUDs
(rec.games.mud.announce). For the uninitiated, this sole advertising space is quite obscure, if not
invisible. As such, it is common for people to be introduced to MUDs simply by word of mouth, a
diffusion method that has met with limited success. Among people who have heard of MUDs, many
assume that they are simply wastes of time (indeed, MUDs can devour time like few other activities).
Another factor for new users is the fact that the graphical interface is the Internet industry standard now;
if there's not a multi-colored icon to click on, many recent Internet users will pass it by. As such, it may
turn out that the graphical MUDs currently under development will become the dominant paradigm for
real time chat and adventure games in the years to come. Finally, there is a steep learning curve involved
in becoming acquainted with one's first MUD, including such hurdles as Unix, telnet, the initial login
screen, the hundreds of available MUD commands, the local MUD culture, etc.
Previous studies of text based virtual realities:
The current body of communication research on MUDs is scarce, though growing steadily. Carlstrom's
(1992) sociolinguistic study examines the popular MUD LambdaMOO, and points out several notable
differences between MUD communication and real life communication, including issues of proxemics,
turn-taking, and the uses of silence. Lynn Cherney at Stanford University has produced a wealth of
important linguistic studies, such as her (1994) analysis of gender-based language differences as
evidenced on one MUD, and a (1995a) study of the objectification of users' virtual bodies on MUDs.
Another article (Cherney, 1995b) points out the details involved in MUD communication backchannels,
implicitly satisfying Kiesler's query, "Consider the consequences if one cannot look quizzically to indicate
if the message is confusing or ... nod one's head or murmur 'hmm' to indicate that one understands the
other person," (Kiesler, Zubrow, & Moses, 1985, p.82). Finally, Cherney's (1995b) effort examines the
modal complexity of speech events on one MUD, and suggests a possible classification system for MUD
nonverbal communication, including conventional actions, backchannels, byplay, narration, and
exposition.
Michael Holmes is another scholar who has recently contributed to the literature on MUDs. His (1994)
study of MUD environments as compared to Internet Relay Chat (and other similar "chat" utilities)
concluded that the chat services "supply a stark context for conversation", while MUDs furnish "a richer
context intended to model aspects of the physical world," (Holmes, 1994). Similarly, his (1995)
examination of deictic conversational modalities in online interactions sheds light on such curious
observed utterances as "Anyone here near Chicago?", (Holmes, 1995). Owen (1994) worked with
identity constructions spawned by the chat utilities of the world's largest commercial Internet provider,
America Online (AOL) and posits the frequent appearance of self-effacing attribution invitations in online
conversations.
As the number and extent of the uses of computer mediated communication (CMC) have grown
exponentially in the last two decades, the communication discipline has produced a body of literature
examining the interpersonal effects of such interaction. Some such studies purport that CMC is
necessarily task-oriented, impersonal, and inappropriate for interpersonal uses (see Dubrovsky, Kiesler,
& Sethna, 1991, Dubrovsky, 1985, Siegel, Dubrovsky, Kiesler, & McGuire, 1986). This effect is
brought about by a lack of media richness, and is sometimes called the "cues-filtered-out" perspective
(Culnan & Markus, 1987). In other words, restricting interlocutors to the verbal channel strips their
messages of warmth, status, and individuality, (Rice & Love, 1987). However, as Walther, Anderson,
and Park point out in their excellent (1994a) meta-analysis of published CMC studies, when provided
with unlimited time, CMC users gain familiarity with the tools at hand, and communication becomes
much more sociable, indicating that "the medium alone is not an adequate predictor of interpersonal tone,
", (Walther, 1995, p. 11). Walther even posits the existence of what he calls "hyperpersonal"
communication, "CMC which is more socially desirable than we can achieve in normal Ftf [face to face]
interaction,", (Walther, 1995, p.18). This phenomenon stems from three sources. First, CMC
interlocutors engage in an over-attribution process, attributing idealized attributes on the basis of minimal
(solely textual) cues. In fact, Chilcoat and Dewine (1985) report that conversants are more likely to rate
their partner as attractive as more cues are filtered out. (Their study compared face to face, video
conferencing, and audio conferencing, and the results were exactly the opposite of their hypotheses.)
Second, CMC provides users with an opportunity for "selective self-presentation" (Walther & Burgoon,
1992), since the verbal channel is the easiest to control. Finally, certain aspects of message formation in
CMC create hyperpersonal communication in that one has time to formulate replies and analyze
responses to one's queries, a luxury denied, or at least restricted, in face to face dyads.
A considerable number of papers and projects concerning MUDs has been produced within other
disciplines. For instance, sociologist Reid (1994) examines a MUD as a cultural construct, rather than a
technical one, and addresses issues such as power, social cohesion, and sexuality. Serpentelli (1992)
examines conversational structure and personality correlates in her psychological study of MUD behavior.
Likewise, NagaSiva (1992) treats the MUD as a psychological model, but draws on Eastern philosophy,
and discusses MUD experiences as mystical experiences. Young (1994) embraces the textuality of MUD
experience as postmodern hyperreality, a rich new hybrid of spoken and written communication.
Numerous articles have been produced within the Computer Science discipline, many of which are of a
non-technical nature, most notably Bartle (1990), whose experience as the co-creator of the first MUD
makes him uniquely qualified as a commentator, Curtis (1992), another noted innovator in the field (and
perhaps the original author of the phrase "text-based virtual reality"), and Bruckman (1993), whose
extensive work on socio-psychological phenomena in MUDs at MIT has earned her deserved respect.
Finally, Turkle's (1995) important new book examines numerous MUD- relevant topics, including
artificial intelligence and "bots" (MUD robots), multiple selves and the fluidity of identity ("parallel lives"),
and the effects of anonymity. She points out the psychological significance of role (game) playing, and
reminds the reader that the word "persona" comes from the Latin word referring to "That through which
sound comes", i.e., the actor's mask. Through MUDs and other forms of CMC, she believes that people
can learn more about all the various masks people wear, including the one worn "in real life".
Recent innovations:
While the original "MUD" began a tradition of games with monster-slaying and treasure acquisition as
their primary goals, the advent of the MOOs, MUSHes, MUSEs, and perhaps most notably, Jim Aspne's
TinyMUD in 1989, brought about a new thinking in the purpose of Multi-User Dimensions. Rather than
utilizing commands such as "wield sword" and "kill dragon", participants in these "social MUDs" use the
virtual environment as a forum for interpersonal interaction and cooperative world creation.
At the same time as these text-based virtual environments were rapidly multiplying, an arguably more
ambitious project was well underway in Japan. Known as "Habitat", it was (and is) a "graphical
many-user virtual online environment, a make-believe world that people enter using home computers...",
(Farmer, Morningstar, & Crockford, 1994, p. 3). The creators of Habitat soon discovered that a virtual
society had been spontaneously generated as a result of their efforts. One of the creators claims,
This is not speculation! During Habitat's beta test, several social institutions sprang up
spontaneously: There were marriages and divorces, a church (complete with a real-world
Greek Orthodox minister), a loose guild of thieves, an elected sheriff (to combat the thieves),
a newspaper (with a rather eccentric editor), and before long two lawyers hung up their
shingle to sort out claims. (Farmer, 1989, p. 2)
As these various MUD environments have developed, each with their own particularities of culture, a
number of categories have emerged. Social MUDs have become virtual gathering places for people to
meet new friends, converse with old ones, get help on their trigonometry homework, play "virtual
scrabble", and assist in the continuing creation of the virtual environment. Some MUDs are known for
their risque activities. On FurryMUCK, players assume the identity of various animals and have
"mudsex" with one another, a rapid exchange of sexually explicit messages.
Professional and educational MUDs have begun to appear recently with more "serious" uses in mind --
their aim is to provide a virtual spatial context (e.g., conference rooms, lecture halls, and private offices)
for the participants therein, and even the creation of various pedagogical devices within the environment.
A few MUDs have been set up as havens for virtual support groups for people with common misfortunes
or interests. The most popular variety of MUD, though, harkens back to the philosophy of the original
"MUD", involving puzzle-solving, dragon slaying, and treasure accumulation.
It is these "adventure-style" MUDs which shall be the topic of inquiry for the remainder of this thesis.
While it may be argued that the social MUDs, with interpersonal interaction as their participants' sole
goal, would be more suitable, it is precisely because of this goal that adventure MUDs have been
selected. It stands to reason that the communicative phenomena to be found on purely social MUDs may
be even more firmly entrenched than on adventure MUDs due to the wealth of additional cultural cues
which such environments spawn. Therefore, it is important to demonstrate that 1) virtual cultures
develop on adventure-style MUDs, 2) that these cultures are quite real to the participants therein, and 3)
that nonverbal communication occurs in these worlds designed with point accumulation in mind, and
created solely by words.
Adventure MUDs
While a few "pay MUDs", i.e., MUDs which charge for access, do exist (and claim to be more dynamic
and carefully programmed), the vast majority of adventure MUDs are created and maintained by
volunteers. These volunteers are often computer science majors at major universities who have access to
the hardware needed to run a MUD and make it accessible to multiple users at once. Once the hardware
is in place, a "mudlib" must be decided upon. A "mudlib" is the most basic code that makes the MUD
run, i.e., the code that defines the mechanisms by which the spatial metaphor is created, defines the
difference between living and non-living objects, and calculates the formulae involved in combat.
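As a loose illustration in Python (actual mudlibs are typically written in MUD-specific languages such as LPC, and the class names and fields below are invented), the kinds of base definitions a mudlib provides might be sketched like this:

    # Illustrative sketch only: the sorts of base objects a mudlib defines.
    class Room:
        """A node in the spatial metaphor, with exits to other rooms."""
        def __init__(self, description, exits):
            self.description = description    # the text players see
            self.exits = exits                # e.g. {"west": some_other_room}

    class Thing:
        living = False                        # non-living objects cannot fight

    class Living(Thing):
        living = True                         # living objects can enter combat
        def __init__(self, hit_points, strength):
            self.hit_points = hit_points
            self.strength = strength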
Beyond the technical distinction of which mudlib a MUD runs on, the next most distinctive feature is
probably the theme which guides the builders (i.e., the people who actually program the objects in the
MUD - every room, monster, weapon, etc) in their creation of the MUD. The first MUDs were most
commonly based on a Tolkienesque world of hobbits and giants, swords and sorcery.
Now that the MUD community has expanded, however, diverse themes can be found, such as MUDs
based on Star Trek, Star Wars, and other popular fantasy genres. Some MUDs (mostly social MUDs)
are simply set in American cities, such as BayMOO (San Francisco) and Club Miami (Miami, FL). Other
MUDs are not themed in setting, but in purpose; they exist as meeting places for people with common
interests, such as support groups for zoophiles, or discussion groups for astronomers. Still other MUDs
are set simply in a virtual representation of the administrator's home. (The WWW site
http://www.mudconnect.com contains an extensive list of current publicly available MUDs).
By far, however, the fantastical swords and sorcery adventure-style MUDs are the most popular among
MUD players. As such, they have been developed perhaps more than any other, with a rich tapestry of
literature from which to draw, and perhaps even attracting especially imaginative builders and players. It
may be speculated that an additional reason that adventure- style MUDs are so popular is that the
treasure and point gathering that takes place therein appeals to many computer enthusiasts' desire for
mastery of technique and knowledge.
Each adventure-style MUD (referred to as simply MUDs from now on, unless otherwise noted) has a
primary dichotomy, often referred to as the "mortal/immortal" dichotomy. Simply put, the "immortals"
are those participants who have access to the programming which makes the MUD run. "Mortals" do
not. Though the colorful terminology may change from MUD to MUD, this split is sure to exist. It should
be noted that this is a significant difference between adventure-style MUDs and purely social MUDs
(most often based on MOO code), in which all members enjoy some access to the programming, and
therefore the ability to create their own objects.
Every MUD participant starts out as a "mortal". This entails no access to the programming language at
all. That is, they receive all the textual descriptions of the virtual environment, but none of the underlying
code that makes the MUD run. For the mortals, the spatial metaphor is reified through this limited access.
They have no choice but to exist within the spatial metaphor and interact with the other characters and
monsters therein.
Most adventure MUDs offer their participants a range of classes, or professions, (such as fighter, thief, or
necromancer), and races (fantastical things like ogres and elves). Besides being a colorful addition to the
participant's virtual persona, these designations have various effects on the player's experience with the
MUD. Ogres may be quite strong, but poor at spell casting. Mages may have an arsenal of spells at their
disposal, but may be struck down easily when hit. These details become pertinent when one understands
the "goal" of an adventure MUD.
In the maze of rooms that makes up a typical adventure MUD, there reside various programmed
monsters to be slain and puzzles to be unraveled. Players will typically spend much of their time dashing
from room to room engaging in computer-moderated verbally described combat with these creatures.
When successful in vanquishing these foes (success is determined in a large part by programmed
attributes of the combatants, though player strategy plays a part), players may reap their bounty.
Rewards such as equipment (which may aid the character in future battles or be sold at the shop),
money (which may be used to purchase equipment), and other treasures may be found. Above all, though, the
player of the adventure MUD seeks "experience points", which determine how powerful the character
can become. When a sufficient quantity of experience points have been collected, the character may
"advance a level", thereby increasing his or her mastery of combat, spell casting, or other skills.
There are risks, of course, in such valorous activity. Every time a character enters into combat with a
foe, there exists a chance of death. The severity of players' deaths varies from MUD to MUD. On some
MUDs, characters may simply lose the treasures they have amassed during their session. On others,
significant reductions in a character's quantified skill levels may occur, while on a few MUDs, death is
quite realistic and harsh - the character is simply erased.
Death is not a random occurrence on well-tuned adventure MUDs. Each character is a quantifiable
distance from death at any given moment, often referred to as "hit points". Every time s/he is struck in
combat (which proceeds quite rapidly, text scrolling across the player's screen), that number of hit points
is reduced. When it reaches zero, the character dies.
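A minimal sketch of this mechanic, with invented damage values (real MUDs derive damage from the combatants' programmed attributes and the mudlib's combat formulae), might look like this:

    # Illustrative sketch of the hit-point mechanic: each blow reduces the
    # character's hit points, and at zero the character dies. The damage
    # range here is invented for the example.
    import random

    hit_points = 50
    while hit_points > 0:
        blow = random.randint(1, 8)           # damage from one round of combat
        hit_points -= blow
        print(f"You are struck for {blow}; {max(hit_points, 0)} hit points remain.")
    print("You die.")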
Since characters engage in combat often, and combat reduces hit points, there exists a need for healing,
so that characters do not simply get weaker with each successive battle. On adventure MUDs, these
biological needs are taken care of through the presence of pubs and restaurants from which one may buy
various cocktails and foodstuffs, all of which contribute to a character's health. This virtual biology is
extended in that characters can only eat and drink a certain amount before becoming satiated, after which
they need to wait a short time before consuming again. Some MUDs even require that each character eat
from time to time even if they do not require healing - they get hungry.
Besides food and drink (which cost gold coins), there exist healing spells which certain classes of
character may cast. This is just one of the ways that interaction between characters is spawned on
MUDs. If one character is injured and knows that a healer is connected to the MUD at the time, s/he
may seek the healer out and ask for help, perhaps even offering something in exchange. Some MUDs,
for instance, require material components for spell casting (eyes of newt, and so forth), thus providing
non-spell casters with some bargaining power.
An additional source of interaction between players is the guild system. While each character has a
"class", or profession, which determines what proficiencies they have, guilds are more like social
organizations. A guild could be based upon traditional notions of chivalry, or black magic, or the love of
chocolate, or anything else that the creators decide. Guilds generally have a private location for guild
members to congregate and interact, and perhaps a few specialized signs or signals that they use to
recognize one another. Guilds often provide an additional reason for interaction, even to those players
most interested in accumulating experience points.
Many MUDs allow characters of sufficient experience the opportunity to ascend into the ranks of the
"immortals", or those individuals with some degree of access to the actual programming that makes the
MUD run and the power to create and manipulate objects therein. For the immortals, combat skills are
completely irrelevant; they can simply erase any (non-player) foe in their path. As such, the very nature
of the environment is completely different for them.
Within the Immortal group, there are several levels of access to the programming, each with its own
colorful moniker. The hierarchy outlined below is based roughly on the author's acquaintance with two
popular MUDs, Ancient Anguish (described at length in Masterson, 1995) and Paradox II (development
of this hierarchy described in part in Masterson, 1995b). The lowest level of Immortals includes the
Builders, Wizards, or Creators. This group of individuals consists generally of those players who have
reached a certain level of expertise and experience, and have been granted limited access to MUD code.
They are generally given a directory (MUD syntax is much like the Unix operating system) in which they
can write and edit files which may create objects in the MUD. It is this group of immortals whose
responsibility it is to continue the creation and expansion of the virtual geography of the MUD. It is also
generally the largest group of immortals.
Various other groups of immortals are responsible for overseeing the activities of the wizards and the
players. A common division involves one person (often called an "arch") to determine if the areas (this
term includes the monsters and objects therein, as well) that the wizards are making are of sufficient
quality (imaginatively described and comprehensively coded) to install in the game for players to enjoy
(the "QC" or "Approval Arch"). Another arch might be responsible for ensuring that the areas all are
smoothly integrated into the milieu of the MUD, and that there are neither areas in which players will
suffer grave misfortune for little reward nor areas from which players stagger home with loads of treasure
with little risk (the "Balance Arch", or "World Arch"). Another Arch may be responsible for ensuring that
the underlying code that governs combat, character death, and interaction of objects runs smoothly (the
"Mudlib Arch"). Finally, there is usually an arch who's responsibility it is to ensure a fair and equitable
environment for the wizards to code in and the players to adventure in; in other words, and individual
responsible for the upkeep of the rules of the MUD (the "Law Arch"). Though this scheme is by no
means the only way that adventure MUDs govern themselves, it is quite common. All of the arches will
have greater access to the programming than do the wizards.
The individuals who occupy the top tier of the adventure MUD immortal hierarchy are known as the
Admins (administrators). This group of individuals is endowed with the ultimate responsibility for
maintenance and the upkeep of the MUD. They have access to every file that comprises the MUD.
Mortal concerns are outside the scope of their responsibilities.
The issue at hand
A common descriptive metaphor in the literature of nonverbal communication states that "We don't need
to be told we are at a wedding." In other words, our nonverbal communication provides essential
contextual cues, moment by moment, which help us and others to make sense of our interpersonal
situation. Just as a picture may take the place of a thousand words, so too may a gesture.
It can be seen from the preceding section that there are numerous attributes of MUDs that give rise to
interaction between participants. This interaction brings about a sense of community among participants
on a given MUD. Indeed, some people get quite passionate about their membership in the
"MUD-family", and connect to the MUD for as many as 80 hours a week, which is testimony to MUD
conversations' compelling interactivity. Given that this is the case, though, how is it that in virtual
c
f:\12000 essays\sciences (985)\Computer\NOVEL 3 12.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Novell 3.12
Introduction
Over the years, networks have proven their usefulness in schools and businesses, and they are
becoming ever more common.
Networks are getting larger, more powerful, and therefore also more complicated.
To be able to manage them, one must at least be able to install and understand them.
Novell's features
Novell is a fully 32-bit multitasking environment and can therefore only be installed on a 386 or
higher. This is necessary because Novell NetWare 3.12 supports up to 250 users, and to handle all
of this Novell must make optimal use of the 32-bit processing power.
Novell also offers good security, both for access control and for data protection.
Access control is handled by means of login rights, file rights, and so on.
For data protection, think of "mirroring" and "duplexing" across two hard disks, and UPS
monitoring (UPS = Uninterruptible Power Supply).
Network management: a remote console service, which means that you can carry out tasks from a
workstation as if you were working at the server, and MONITOR utilities with which you can observe
all activity on the network.
Novell's modular design makes it possible to load and unload tools while the server is still
running, and to add your own programs or improvements separately.
Novell now runs under UNIX, OS/2, DOS, and Macintosh and can communicate between all of these.
The main improvements are shown in the table below.
Specification                          NetWare v2.2     NetWare v3.12
Hard disks per volume                  1                32 (or 16 if mirrored)
Volumes per server                     32               64
Volumes per hard disk                  16               8
Directory entries per volume           32,000           2,097,152
Maximum volume size                    255 MB           32 TB
Maximum file size                      255 MB           4 GB
Maximum addressable disk storage       2 GB             32 TB
Maximum addressable RAM                12 MB            4 GB
Maximum volume name length             15 characters    15 characters
Maximum directory/file name length     14 characters    12 characters (DOS format)
Name space support                     DOS, Macintosh   DOS & Windows, Macintosh, UNIX, FTAM, OS/2
Disk block sizes                       4 KB             4 KB, 8 KB, 16 KB, 32 KB, 64 KB
Before installing
Before the actual installation can begin, a number of requirements must of course be met. The system
must meet the hardware requirements and be correctly assembled, the power supply must be properly
arranged, the system must contain at least 4 MB of memory and 50 MB of disk space for the Novell and
DOS system files, and a network card (and cable) must be installed.
There is a formula for estimating the required memory:
4 MB for the standard drivers and the install module
+ 2 MB for network add-ons such as printer modules and other standard modules
+ 0.008 x hard disk size
+ 1 - 4 MB of RAM as cache
The network card is usually an NE-2000 or compatible card and will usually be set to address 300 and
IRQ 3; you need to know this before installing, because Novell will ask for it.
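As a rough illustration of the formula above, here is a short Python sketch (the function name and the sample disk size are made up for illustration, and the 0.008 factor is read as 0.008 MB of RAM per MB of disk space):

def required_ram_mb(disk_mb, cache_mb=2):
    # Estimate server RAM in MB using the rule of thumb above.
    base = 4                            # standard drivers and the install module
    add_ons = 2                         # printer modules and other standard add-on modules
    disk_overhead = 0.008 * disk_mb     # roughly 8 KB of RAM per MB of disk space
    return base + add_ons + disk_overhead + cache_mb

print(required_ram_mb(500))             # a 500 MB disk with 2 MB of cache -> 12.0

So by this rule of thumb a server with a 500 MB disk would need roughly 12 MB of RAM.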
Installation
Installation can be done in three ways: from floppy disk, from CD-ROM, or from a network directory.
For installation from floppy disk or CD-ROM you first have to create three SYSTEM diskettes with the
drivers and Novell files.
Installing from the network, however, is somewhat simpler.
First you create a 10 MB partition for MS-DOS and format it with system files.
Restart the server and create a directory called SERVER with MD SERVER.
Then log in on the file server where the files are located and copy the original
SYSTEM_1, 2 and 3 disks to C:\SERVER (this can also be done from floppy disk or CD).
Run the program INSTALL from the install disk.
This will use the SYSTEM 1, 2 and 3 disks.
Next we can continue working in Novell itself by starting SERVER.EXE in the SERVER directory,
or by letting the installation put SERVER.EXE in the autoexec.bat.
We are now in Novell's own operating system.
Now we have to give the server a name. This may be between 2 and 47 characters long;
for example:
*Fileservername: TER_AA
After this we have to enter an IPX number. For the server this is always 1.
*IPX internal network number: 1
To drive the SCSI or ISA controller for CD-ROM etc., we have to load the supplied driver
with the command LOAD xxxxx.xxx (where xxxxx.xxx is the name of the driver).
In our case LOAD ISADISK.
Now we can continue with the installation. We give the command LOAD INSTALL.
After LOAD INSTALL a menu appears on the screen:
INSTALLATION OPTIONS MENU
disk options
volume options
system options
exit
We choose DISK OPTIONS and then PARTITION TABLES; now we choose CREATE
NETWARE PARTITION.
Here we usually accept the default values and press OK.
The computer will now ask whether it should create the partition, Yes/No; Yes, of course, to continue.
The partition is created without having to be formatted.
If you have more than one hard disk, you can set up mirroring and duplexing here.
These do need to be two identical hard disks of the same size, or partitions of the same size.
Back in the main menu we choose VOLUME OPTIONS.
You may create up to 64 volumes, one of which MUST be called SYS:.
This volume is for the SYSTEM, PUBLIC, LOGIN and MAIL directories.
By pressing INS we can add a volume; here we create the volume SYS:
and, if desired, more volumes.
Here the block size can also be set to 4, 8, 16, 32 or 64 KB.
A large block size is better for large database files; a smaller block size saves space if
you manage many small files.
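To make the trade-off concrete, here is a small Python sketch; it assumes, purely for illustration, that every file occupies a whole number of blocks, and the file sizes below are invented:

def allocated_bytes(file_sizes, block_size):
    # Disk space consumed when each file is rounded up to whole blocks.
    blocks = sum((size + block_size - 1) // block_size for size in file_sizes)
    return blocks * block_size

files = [700, 1500, 3000, 40_000_000]    # several small files plus one large database file
for block_size in (4 * 1024, 64 * 1024):
    used = allocated_bytes(files, block_size)
    print(block_size // 1024, "KB blocks:", used - sum(files), "bytes wasted")

With 64 KB blocks the small files waste far more space than with 4 KB blocks, while the single large database file hardly notices the difference.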
Back in the main menu we choose SYSTEM OPTIONS and then COPY SYSTEM AND PUBLIC
FILES. Novell now shows a message: Insert disk "NetWare 3.12 install diskette" in drive A.
But since we may not be installing from floppy disk here, you can specify an alternative drive or
directory with F6, after which Novell starts copying.
Novell reports "File upload completed" when it is finished.
Now we have to load the drivers for the network card, etc.
Because Novell is multitasking we do not have to leave the program; with
[Alt + ESC] you switch tasks over to the Novell prompt, where we give the command
LOAD NE2000 (or another driver), and Novell responds with:
loading NE2000.LAN
autoloading ETHERTSM.NLM
(Topology Support Module)
autoloading MSM31.X
(Media Support Module)
To link the network card to the Novell IPX module, the command
BIND IPX TO NE2000 has to be given, after which Novell again responds with a message:
Network number: 1
IPX LAN protocol bound to Novell NE2000
For the network number you usually enter 1, unless you already have another network, in which case
you number upwards.
[Alt + ESC] switches the task back to the installation menu.
We choose CREATE AUTOEXEC.NCF FILE:
fileservername TER_AA
ipx internal net 1
load NE2000 slot=6 frame=ethernet_802.3
bind IPX to NE2000 net=1
Note:
frame=ethernet_802.2 for VLM (Virtual Loadable Modules)
frame=ethernet_802.3 for IPX (Internetwork Packet Exchange) (IPXODI for multi-OS systems)
so change 802.2 to 802.3 if necessary.
We save this and now choose CREATE STARTUP.NCF FILE.
This file contains the SCSI/IDE driver, e.g.: load ISADISK
To continue working, for example to create a print server, we have to continue from a
workstation. Before we can log in from a workstation we have to create a password.
With [Alt + ESC] we switch tasks over to the Novell prompt and
type LOAD RSPX:
loading RSPX.NLM
(Remote Console SPX driver)
autoloading REMOTE.NLM
(Netware Remote Console)
enter new password for remote console: TER_AA (the password)
To make all the settings take effect and to check whether the .NCF files are correct, we shut the server
down with the command DOWN and return to DOS with EXIT.
Then we start the file server again with the commands:
C:\>CD\SERVER
C:\>SERVER
Creating a print server
The server has been started; we go to a workstation and log in on the file server TER_AA.
At F:\SYSTEM> we type PCONSOLE. A menu appears and we choose PRINT SERVER
INFORMATION and then PRINT SERVERS. No print servers are listed yet, but by pressing
[INS] we can add a print server:
New print server name: PSERV
With the cursor you select the print server PSERV, press RETURN, and go to
PRINTER CONFIGURATION.
Here, under Configured printers, you will see: Not installed 0
You press RETURN to add one, and a list of options appears:
name printer 0: HP-Laserjet_4M (the name of a printer)
type
parallel LPT1
Press [ESC] and answer Yes at SAVE?
Now we have to go back to AVAILABLE OPTIONS and choose PRINT QUEUE INFORMATION,
PRINT QUEUES. Again [INS] to create a queue.
Give the queue a clear name:
NEW PRINT QUEUE NAME: LASERJET
To assign a queue to a print server, in the main menu we go to
PRINT SERVER INFORMATION and choose PSERV as the server:
Print server configuration [Return]
Queues serviced by printer
LPT1 [Return][INS] available queues
COM1 [Return][INS] available queues
- [ESC][ESC]
- Log in as USER/GUEST
- Create a print job
F:\>PRINTCON
edit print job configurations [INS]
new name LPT1_J [Return]
Message: no form defined on server [ESC]
- Set the print queue to LPT1_Q
- Set the print banner to NO
- [ESC] Save? Yes
- Exit PRINTCON
- Now add to AUTOEXEC.NCF:
load RSPX
load pserver pserv
To proceed, a plan must first be made for the file structure on the
file server. The first and essential directories have already been created by Novell.
Most file servers look like the following.
After this the USER GROUPS and USERS have to be created.
When the server is installed, the users SUPERVISOR and GUEST and the group
EVERYONE are created automatically. By granting rights to users you can restrict or
extend their access. Usually there are people with extra rights besides the SUPERVISOR who
manage part of the network, for example as in the diagram below.
These people receive extra rights over a particular group in order to manage it.
The supervisor, in turn, can manage them.
By creating GROUPS you can add users to a group to which a certain set of rights,
programs and menus has been assigned, so that you do not have to create new menus
or set all the rights for every new user, and so that everything stays manageable.
With the program SYSCON you can configure all of this.
You create a user by pressing [INS] at USER INFORMATION and entering a
name. By then selecting this new user you can determine his or her rights.
As SUPERVISOR you will see the following menu:
In this menu you can assign rights, add the user to groups, assign login times, etc.
To create groups you choose GROUP INFORMATION in the main menu, and again with [INS]
you can add a group and give it a name.
To a group you can assign directories, together with the rights that users have in them;
according to the table below you can add these (with [INS]).
Letter   Right
S        Supervisory
R        Read
W        Write
C        Create
E        Erase
M        Modify
F        File scan
A        Access control
You can then add USERS to a GROUP or assign a GROUP to a
USER.
LOGIN SCRIPT
You can write a login script for every USER.
With a login script you can:
Map drives and attach search drives to directories (mapping is Novell's counterpart of DOS SUBST).
Display messages.
Set system variables (time, search path, etc.).
Run programs or menus.
There are three different kinds of login scripts:
System login script: sets the primary settings for all users (runs first).
User login script: sets up the environment for one user, for example a menu option or a
username for electronic mail. This script runs after the system login script.
Default login script (part of LOGIN itself): this script is executed when the
SUPERVISOR logs in for the first time. It contains the most important SEARCH MAPs and Novell
NetWare utilities.
You can edit a script under USER INFORMATION if you have sufficient rights.
Conclusion:
Novell is a VERY complicated package.
The description given here is a simple installation method to get started with.
Making the network work better than this requires much more knowledge.
Stijn Peeters
10-6-1996
f:\12000 essays\sciences (985)\Computer\Now is the time.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Now is the time to become computer literate. Now is the time to
become familiar and comfortable with the computer because in
the future we will become virtually a paperless society and many
daily activities will be linked to the computer.
Mail delivery to the home and business will be almost
entirely phased out and e-mail will replace it. Bills will come via
the computer and be paid the same way. Paychecks will be
electronically deposited to your bank account. On special
occasions such as birthdays, greeting cards will be sent from
your computer to your loved one's computer.
Shopping malls will become cyber malls and we will do our
shopping via the computer. You will be able to view on your
monitor how you would look in a certain outfit you are
considering buying. Imagine browsing the entire mall from a
comfortable chair in front of your computer. Push a button and the
entire stock of a store will be at your finger tips. When you do go
to a store to shop you will not use money. You will use either a
credit card or debit card which will automatically deduct the
amount of your purchase from your bank account.
Our homes will be run by computers. Computers will adjust
the temperature. Home appliances will be linked to the computer.
Imagine driving home from work and calling your computer and
having it start dinner for you. Have it adjust the temperature so
your home will be a comfortable temperature when you arrive.
Window coverings will be adjusted to allow the correct amount of
sunlight in. Light fixtures will automatically adjust to the right
level of light in your home.
The way business is conducted will be entirely changed.
Instead of long distance business trips, business will be
conducted via interactive tele-conferences. Documents and files
will be stored on computers' hard drives. Much of this is done
today but in the future it will expand as we become a paperless
society.
Many workers will not have to go to a place of employment.
They will work from their homes via the computer. For those who
do have to drive to work, it will become less stressful as
computers help to keep traffic congestion down. Cars will have
onboard computers to keep them aware of road conditions, traffic
backups, and which route is best to take. Onboard computers will
also replace maps and give directions from your current location
to where you are going. If you happen to get lost your computer
will get you back to the correct road.
The education system will also join the computer age. Every
student will have access to a computer. Text books will be on
disks. Students will have access to a vast amount of reference
material via the computer modem from far away universities and
other institutions. Homework will be done on the computer. Instead
of turning in papers on which you have done your homework you
will either turn in a disk or send it to your teachers by a modem.
Teachers will no longer have to spend hours grading papers.
Homework and in class work will be graded by the computer.
Tests will be taken on the computer, and as soon as you finish
you will know what your score is. At the end of the
grading period your teacher will just punch a few keys on her
computer and your report cards will print out as the computer
keeps track of all your grades for the quarter. Some classes will
be conducted by interactive teleconferences much the same
business conferences are conducted. This will give students in
small schools the same educational opportunities as those in the
larger school systems. Our leisure time will also be affected by
the expanded use of computers. In the future the home
communication system (phones, e-mail, faxes, and modems) and
tv service will be integrated into one system. If you want to read
the newspaper you will not have to travel to the driveway to pick
it up. Just flip on your tv and with the aid of your computer pull
up the paper on your screen and read. Magazines will be available
the same way. If you want to watch a movie, just turn on the
TV and you will receive a list of what is on. Order by the computer
and sit back and enjoy the movie. Video games will be available
to play on your tv the same way. People whose hobbies are
collecting things such as cards or stamps can receive the latest
information on their collections from the computer.
Find yourself putting on a few extra pounds from spending all
your time in front of your computer system? You can get exercise
programs and a computer-generated diet geared to your specific
needs from your computer.
So to be a productive person in the future you will need to be
prepared, and NOW IS THE TIME!
f:\12000 essays\sciences (985)\Computer\Optical Storage.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Optical Storage Mediums
James Ng
The most common way of storing data in a computer is magnetic. We have hard drives and floppy disks (soon making way for the CD-ROM), both of which can store some amount of data. In a disk drive, a read/write head (usually a coil of wire) passes over a spinning disk, generating an electrical current, which defines a bit as either a 1 or a 0. There are limitations to this, though: we can only make the head so small, and the tracks and sectors so close, before the drive starts to suffer from interference from nearby tracks and sectors. What other option do we have to store massive amounts of data? We can use light.
Light has its advantages. It is of a short wavelength, so we can place tracks very close together, and the size of the track we use is dependent on only one thing - the color of the light we use. An optical medium typically involves some sort of laser, for laser light does not diverge, so we can pinpoint it to a specific place on the disk. By moving the laser a little bit, we can change tracks on a disk, and this movement is very small, usually less than a hair's width. This allows one to store an immense amount of data on one disk. The light does not touch the disk surface, thereby not creating friction, which leads to wear, so the life of an average optical disk is far longer than that of a magnetic medium. Also, it is impossible to "crash" an optical disk (in the same sense as crashing a hard drive), since there is a protective layer covering the data areas, and the "head" of the drive can be quite far away from the disk surface (a few millimeters, compared to micrometers for a hard drive).
If this medium is so superior, then why is it not standard equipment? It is. Most new computers come with a CD-ROM drive. Also, it is only recently that prices have come low enough to actually make them affordable. However, as the acronym states, one cannot write to a CD-ROM disk (unless one gets a CD-Recordable disk and drive). There are products, however, that allow one to store and retrieve data on an optical medium. Some of those products are shown in table 1. However, the cost of this is quite high, so it doesn't usually make much sense for consumer use yet, unless one loves to transfer 20-megabyte pictures between friends.
One will notice on the table that there are some items labeled "MO", or magneto-optical. This is a special type of drive and disk that gets written by magnetic fields and read by lasers. The disk itself is based on magnetism that affects the reflective surface. Unlike floppy disks, erasing such a disk at room temperature requires a very strong magnetic field, much stronger than what ordinary disk erasers provide. To aid in writing to these MO disks, a high-power laser heats up part of the disk to about 150 °C (the Curie temperature), which reduces the ability of the disk to withstand magnetic fields. Thus, the disk is ready to be rewritten. The disk needs two passes to change the bits, though. The first pass "renews" the surface to what it was before it was used. The second pass writes the new data on. The magnetic field then alters the crystal structure below it, thereby creating places in which the laser beam will not reflect to the photodetector.
Another type of recordable medium is the one-shot deal. The disk is shipped from the factory with nothing on it. As you go and use it, a high-power laser turns the transparent layer below the reflective layer opaque. The normal surface becomes the islands (as on a normal CD) and the opaque surface the pits (pits on a normal CD do not reflect light back). These CDs, once recorded, cannot be re-recorded, unless saved in a special format that allows a new table of contents to be used. These CDs are the CD-Recordable and the Photo CD. The Photo CD is in a format that allows one to have a new table of contents that tells where the pictures are. It is this that distinguishes between "single-session" drives (drives that can only read photos recorded the first time the disk was used) and "multi-session" drives (which can read all the photos on a Photo CD).
To read an optical medium, a low-power laser (one that cannot write to the disk) is aimed at the disk, and data is read back by seeing whether the laser light passes to the photodetector. The photodetector returns signals telling whether or not light is bouncing back from the disk. To illustrate this process, see Figure 1.
Optical data storage is the future of storage technology. However, it will take some time before prices are low enough for the general public. Applications get bigger, data files get bigger, games get bigger, etc. The humble floppy disk, with its tiny 1.44-megabyte capacity (actually 1.40 megabytes, since disk companies like to call 1 megabyte 1,024,000 bytes, when it is actually 1,048,576 bytes, or 2^20 bytes), will be no match for the latest and greatest game requiring 2+ gigabytes of space (and such games do exist now, on 4 CD-ROMs); the hard drive will reach its capacity, while the optical drives get smaller, faster, and cheaper. The speed of optical drives today is appalling, to say the least. Also in the future will be hard drives based on optical technology, since nowadays a 5.25-inch optical disk can contain as much as 1 gigabyte of data.
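As an aside, the 1.44-versus-1.40-megabyte discrepancy mentioned above is easy to check; this short Python sketch assumes the usual 1,474,560-byte raw capacity of a 3.5-inch high-density floppy:

raw_bytes = 1_474_560                    # 80 tracks x 2 sides x 18 sectors x 512 bytes
marketing_mb = raw_bytes / 1_024_000     # the mixed "1 MB = 1,024,000 bytes" definition
binary_mb = raw_bytes / 1_048_576        # the true 2^20-byte megabyte
print(round(marketing_mb, 2), round(binary_mb, 2))   # 1.44 and about 1.41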
Optical drives, with their high bit densities, are in the near future.
Sources Used:
UMI - May 1992 BYTE Magazine
TOM - June 1992 PC Magazine (64J2528)
CD-ROMs - Grolier's multimedia
Printed - Various BYTE, ComputerCraft, MacUser and MacWorld magazines
Internet - Figure 1: http://www.byte.com/art\9502\img\411016E2.htm
           Table 1: http://www.byte.com/art\9502\img\411016Z2.htm
f:\12000 essays\sciences (985)\Computer\Outsourcing.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Contents.
1 Abstract
2 Introduction
3 Fundamentals
4 The Main Strategy
5 Successful Outsourcing
6 The Economics of Outsourcing
7 Conclusion
Outsourcing and how it can help IT Managers enhance their projects.
Abstract
With computer systems/projects and their implementations getting more complex with every day that passes, the tendering of IT responsibilities to external parties is becoming more and more attractive to the IT Managers of large organisations. The common name for this type of operation is "Outsourcing". This paper attempts to explain outsourcing, its pros and cons, and how it can help our friendly IT Manager enhance developments or implementations.
Introduction
Outsourcing can be defined as a contract service agreement in which an organisation hires out all or part of its IT responsibilities to an external company.
More and more companies are leaning towards outsourcing; it could be said that this is caused by the growing complexity of IT and the changing business needs of an organisation. As a result, an organisation may find that it is not possible to have all its IT services supplied from within its own company. Given this, an IT manager may choose to seek assistance from an external contractor/company to supply the services the organisation lacks. In addition, business competition has set the pace for an organisation to continue to strive for internal efficiency. It also needs to look for a way to transfer non-core activities or "in house" services and support activities to external specialist organisations that can deliver quality services at a lower cost.
Fundamentals
In deciding whether or not to use outsourcing, the main consideration is the price at which services can be delivered by an external contractor/company. Although price of delivery is a primary factor for outsourcing, other issues should be considered; for example, price should be measured against the overall package offered by the external contractor/company - briefly, whether it is a competitive price in relation to the services rendered, the company's skills, competency and experience, and timely delivery. The organisation also needs to consider outsourcing in light of its long term strategic directions and its information needs.
Competition is another area to be carefully considered. Competition opens up the opportunity for all potential suppliers to conduct business with the organisation. The competitive process allows organisations/IT managers to derive the best outcome. From open and effective competition, the organisation is then able to judge soundly in determining the best strategy, after it has taken into account the competition and the value-for-money principle.
IT managers can go through lengthy procedures to minimise problems with outsourcing, but still things can go wrong and intended objectives may not get achieved. To overcome such mistakes, it may be prudent to look at other companies that have undertaken outsourcing and learn from their successes and mistakes.
Listed below are some of the major issues to be considered when using outsourcing:
· An IT manager that undertakes outsourcing must be able to clearly identify its long term IT strategic directions and long term information needs.
· Organisation must be able to clearly define its business objectives.
· To avoid unnecessary friction between the organisation and the external service provider, it would be prudent to incorporate an "extraordinary events" clause into any contract entered into. This clause should cover any extraordinary changes in circumstance that should occur . This also allows a lot of flexibility between the two parties.
· The IT manager should identify all the external and internal stakeholders and the impact that the outsourcing may have on stakeholders.
· Learn from other companies, use their mistakes and successes to avoid duplication and waste of manpower.
· The IT manager should communicate regularly with anyone in the organisation who is affected by outsourcing, even if the effect is very small.
· The IT manager should make sure that the external service provider should know exactly what is expected of them e.g. the exact services required.
· The IT manager should allot adequate time and the correct resources to the problem at hand, to ensure the best possible outcome from the service arrangement.
· The IT manager should assign skilled staff to manage the external contractor and to monitor closely the external contractor's performance.
· The IT manager should monitor and assess the contractor to ensure quality of the service not just price of the delivery of services.
The Main Strategy
In an organisation, the IT infrastructure components are comprised of a number of technical and service areas. Before going through any outsourcing decision process, the organisation needs first to assess its sourcing across the entire IT infrastructure. Once this is done, the organisation can then determine the best sourcing strategy against a number of perspectives.
In order to determine the optimum sourcing strategy, an organisation needs to look at a number of perspectives or alternatives and then balance these perspectives with the benefits and risks of outsourcing. With this information, an organisation can derive a more structured methodology for a balanced view of the IT infrastructure and its components.
It can be stated that there is no one approach to outsourcing. However, in practice there are three common methods used by the practitioners. They include:
1 Outsourcing a significant proportion of the IT services and technical areas. This approach has a lower co-ordination cost and also has a greater organisational impact;
2 Assessing each IT service and technical area independently. A number of vendors are used to match the needs of each outsourcing event. This approach selects the best vendor and deal for each outsourcing arrangement. However, it involves higher internal costs and synergy problems;
3 Selecting a prime contractor. The prime contractor can select and manage all other vendors. This approach depends on the importance of the learning curve and therefore takes longer.
As part of the determination of outsourcing strategy, it is useful for the organisation to incorporate any experience derived from other organisations that have outsourced and other forms of outsourcing that the organisation has undertaken. The organisation should also perform an initial investigation on the potential vendors background. Furthermore, the organisation should examine different kinds of outsourcing forms that the vendors are able to provide.
The organisation must identify all the internal and external stakeholders and the impact that outsourcing may have on them and their objectives. The internal stakeholders include IT staff, users and management and, the external stakeholders include unions, customers, and existing suppliers (IT and non-IT).
The IT manager should also undertake a cost-benefit analysis of all internal costs and external provisions. These provisions include capital investment, ongoing expenses and the commitment of time and resources. Once a cost baseline is developed, an organisation can come up with a more objective cost analysis. It can then assess the related components of the vendor's
proposal against this cost benefit analysis before making any decision regarding the outsourcing.
Successful Outsourcing
For an IT manager to successfully outsource its IT functions, there are a number of factors that need to be addressed.
An organisation that has outsourced its IT functions to an external contractor should not abdicate responsibility for the activity it has outsourced. In other words, there is still a need for the organisation to retain overall control of the IT services being outsourced. In addition, the organisation needs to regularly monitor the external contractor to ensure that they continue to deliver quality service and perform at the required standard agreed in the contract arrangement. To be able to do this, the organisation must ensure that it maintains sufficient technically competent "in house" staff to oversee the contract service agreement.
Before an organisation outsources its IT functions, it is very important that it prepares a sound full cost estimate for all existing internal computer systems so that it can determine whether the outsourcing is cost effective. Failure to do so can be critical.
The costing issue of Outsourcing is discussed in more detail in the section headed
"The Economics of Outsourcing"
For any successful outsourcing, a good solid contract is essential. The contract should also allow for flexibility as it is difficult, in the life cycle of the contract, to predict every circumstance or cover every eventuality. Successful outsourcing should be based on partnership between the organisation and the external contractor.
Outsourcing an organisation's IT functions without proper consultation with employees can cause a lot of stress among IT staff and reduce their morale. The result may be the loss of some key technical and specialist staff from the organisation. More open and timely communication with employees can minimise this impact and uphold staff morale. The organisation can allay these fears by outlining career options and opportunities for its staff within and outside the organisation, and also by explaining the benefits of outsourcing to the affected employees.
The Economics of Outsourcing.
There are many reasons a company may choose to outsource its software development function. The next few paragraphs address the two main reasons for this action:
1 The perception that outsourcing is cheaper
2 The expertise for developing the required software product does not exist
within the company.
In the past, it was difficult to compare the cost of outsourcing a software product against the cost of in- house development, mainly because there was no functional sizing metric agreed upon prior to the start of the contract. As function points grow in popularity and gain wider acceptance as an accurate measure of software size, more firms will be better equipped to compare outsourcing firms with in-house development teams. In all sophisticated industries cost per unit (or average cost) is an important consideration, where average cost is total cost divided by total output. The same concept can be applied to software development using function points. Total development cost divided by total function points is an average cost calculation. Once average cost is determined, all prospective developers, in-house and outsource, can be compared on an equal basis.
Just as important is the ability to adequately evaluate the delivered product, considering several factors: size, quality, time to market, and so on. Using functional metrics total delivered function points can be contractually agreed upon prior to the start of the contract, assuming the company contracting the software development has clearly defined the final product. This is a dramatic change in the way software projects historically are managed. Any change in the number of total delivered function points once the project begins will impact the average cost calculation. Changes, additions, and even deletions to the software become more expensive per unit as you move through the development life cycle. Since the consumer of the custom built software wants to minimise unit cost, it is therefore in their best interest to sufficiently define requirements prior to the start of the project.
The ability to compare cycle time, or time to production, is also important. Time to production is defined as total number of function points delivered divided by elapsed calendar time. The least expensive developer also may be the one whose delivery date is the latest out. The buyer of the software must decide if quicker time to production is worth the extra expense.
The number of acceptable defects delivered per unit of size is another important evaluation metric; with higher quality comes higher development costs. But delivering software with numerous embedded defects will be expensive to maintain and will cost more in the long run.
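A minimal Python sketch of how the three metrics just described could be compared across bidders (the vendor names and all of the figures below are hypothetical):

def evaluate(bid):
    # Derive the per-function-point metrics discussed above from a single bid.
    return {
        "cost per function point": bid["total_cost"] / bid["function_points"],
        "function points per month": bid["function_points"] / bid["months_to_deliver"],
        "defects per function point": bid["expected_defects"] / bid["function_points"],
    }

bids = {
    "in-house team": {"total_cost": 600_000, "function_points": 500,
                      "months_to_deliver": 12, "expected_defects": 40},
    "outsourcer A": {"total_cost": 450_000, "function_points": 500,
                     "months_to_deliver": 15, "expected_defects": 60},
}
for name, bid in bids.items():
    print(name, evaluate(bid))

On these made-up numbers the outsourcer is cheaper per function point but slower to deliver and less reliable, which is exactly the kind of trade-off the buyer has to weigh.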
The considerations of outsourcing change dramatically when you view the relationship from the perspective of the outsourcing firm.
It is important to the outsourcing firm that the average unit cost for software development be kept to a minimum. Fixed-price contracts create an environment that pushes average costs lower for subsequent projects.
Outsourcing firms have a great incentive to maintain a software library: reuse of components in future projects. If this library is thoroughly tested, insuring that it is nearly defect free, and documented so it can be easily understood, it can be used with confidence to lower average costs over time.
Additionally, outsourcing firms have a great incentive to keep their staffs trained in the latest software languages, tools, and techniques. As more outsourcing projects are undertaken, the responsibility to keep staff knowledgeable and up-to-date transfers from the in-house development team to the outsourcing firm. The outsourcing firm assumes the risk of investing in the technical staff - if their people are not trained on the latest software technologies, they cannot remain competitive with other outsourcing firms who have staffs with state-of-the-art skills. They willingly assume the risk with the expectation that these training initiatives will lower future average costs.
Unfortunately, it is still the norm in the software development arena, and in outsourcing cases in particular, for an organisation to be ignorant of the average size and average cost of a software project. All other sophisticated industries calculate and monitor their per unit average cost. As the software industry continues to mature, not only will it be common practice to know average costs in dollars per function point, it will be required.
Conclusion
Outsourcing should not be viewed as a solution for resolving problem service areas within the organisation. If an internal service area is not performing effectively, transferring it to an external contractor could only magnify the problem. Therefore, it is important that an organisation that undertakes outsourcing be able to clearly identify its long term IT strategic directions and long term information needs. The IT manager is the prime candidate to fulfill this role. Once the organisation has understood and addressed its long term IT strategic directions, it can then go on to decide which IT service areas should be outsourced. Organisations that outsource their IT service areas should do so on the basis of a cost-benefit analysis, justified on cost effectiveness and based on a sound business decision.
References
Although many different books, references and web sites were researched, the following institute yielded the most comprehensive supply of information, which ultimately became the basis of this report.
The author would strongly recommend that any party investigating outsourcing contact the institute below.
The Outsourcing Institute
45 Rockefeller Plaza,
Suite 2000,
New York,
NY 10111
f:\12000 essays\sciences (985)\Computer\Paperless Office.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Paperless(?) Office
1. What are the advantages and disadvantages of the paperless office?
There are many advantages to having a paperless office. One advantage is that companies are able to greatly reduce the amount of paper that they use. Not only does this help the environment, it helps cut costs within the organization. Companies are also able to improve service through implementing the paperless office. This is because communication is immediate and does not get lost in a pile of papers on someone's desk. A paperless office can also save the company money. This can be seen in the example of Washington Mutual Savings Bank of Seattle. The bank automated more than one hundred different forms and estimates that it is saving upwards of one million dollars per year.
One disadvantage of having a paperless office is the issue of security. How does a company make sure that a document is seen only by the eyes it is intended for? Also, how does a company know an electronic communication is authentic? Another issue is privacy. How does a company make sure that when an electronic communication is sent, only the person it is intended for will read it? How does a company make sure private information does not make the evening news?
2. Are certain types of information more readily amenable to digital processing in a paperless office than others? If so, why; if not, why not?
It would seem that some types of information are better in paperless form, while some are not. Implementing an e-mail system can do wonders for companies. The e-mail sessions allow managers to get more information across to the employees and vice versa. This is a way to make sure everyone will have access to the same information. A paperless office is a good way to send and receive reports.
Another area that is conducive to a paperless office is found in companies that put large volumes of books and papers on CD-ROM. A single CD-ROM can hold a whole room full of books. This cuts down on the physical space a company must devote to paper storage.
3. How might book publishing change as the technology of the paperless office continues to develop? Will books become obsolete? Why or why not?
The book publishing industry will have to grow and change in relation to the changing technology. As the paperless office gains more and more popularity, one will begin to see more and more documents on CD-ROM and also on the Internet. CD-ROMs are cost-effective, paper-reducing, and easy to manufacture.
In the near future what will probably happen is that publications will be produced both on paper and in some type of electronic media. I see a sort of phasing, similar to that of the cassette tape giving way to the compact disc. For the time being most everyone has gone the way of compact discs, but there are still those who prefer cassette tapes for whatever reason. For this reason I don't think we will see the disappearance of printed books, but we will begin to see more and more on some type of electronic media.
f:\12000 essays\sciences (985)\Computer\Past Present and Future of computers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Imagine being able to do almost anything right from your own living room. You could order a pizza, watch cartoons, or play video games with people from around the entire world. All are possible today with your computer. The beginnings of the computer started off in a rather unique way. It was first used to produce intricate designs with silk, a task far too long and tedious for a human to do constantly. It's really unbelievable how computers changed from that to what they are now. Today, computers are completely astounding. The possibilities are endless. Who knows where they will take us in the years ahead. The computer is the most influential piece of equipment that has ever been invented.
The beginnings of the computer are actually kind of strange. It started in the 1800s when a man named Charles Babbage wanted to make a calculating machine. He created a machine that would calculate logarithms on a system of constant difference and record the results on a metal plate. The machine was aptly named the Difference Engine. Within ten years, the Analytical Engine was produced. This machine could perform several tasks. These tasks would be given to the machine, which could figure out values of almost any algebraic equation. Soon, a silk weaver wanted to make very intricate designs. The designs were stored on punch-cards which could be fed into the loom in order to produce the designs requested. This is an odd beginning for the most powerful invention in the world.
In the 1930s, a man named Konrad Zuse started to make his own type of computer. Out of his works, he made several good advances in the world of computing. First, he developed the binary coding system. This was a base-two system which allowed computers to read information as either a 1 or a 0. This is the same as an on or an off. The on or off functions could be created through switches. These switches were utilized with vacuum tubes. The functions could then be relayed as fast as electrons jumping between plates. This was all during the time of the Second World War, and further advancements were made in the area of cryptology. Computer advancements were needed in order for the Allied Coding Center in London to decode encrypted Nazi messages. Speed was of the essence, so scientists developed the first fully valve-driven computer. Before this, computers only had a number of valves; none were fully driven by them because of the complexity and difficulty of producing such a machine. Despite the odds, several Cambridge professors accomplished the mammoth task. Once it was built, the computer could decode the encrypted messages in enough time to be of use, and was an important factor in the end of World War II.
The war also provided advancements in the United States as well. The trajectory of artillery shells was a complex process that took a lot of time to compute in the field. A new, more powerful computer was in dire need. Working with the Moore School of Electrical Engineering, the Ballistics Research Laboratory created the Electronic Numerical Integrator and Computer. The ENIAC could compute things a thousand times faster than any machine built before it. Even though it was not completed until 1946 and was not any help during the war, it provided another launching pad for scientists and inventors of the near future. The only problem with the ENIAC was that it was a long and tedious process to program it. What was needed was a computation device that could store simple "programs" in its memory to call on later. The Electronic Discrete Variable Computer was the next in line. A young man named John von Neumann had the original plan for memory. His only problem was where and how the instructions could be stored for later use. Several ideas were pursued, but the one found most effective at the time was magnetic tape. Sets of instructions could be stored on the tapes and could be used to input the information instead of hand-feeding the machine every time. If you have ever heard of a "tape backup" for a computer, this is exactly what this is. All the information on your computer can be stored on the magnetic tape and could be recovered if your system ever crashed. It's strange that a method developed so long ago is still in use today, even though the computer today can do a lot more than simply "compute".
The computer works in a relatively simple way. It consists of five parts: input, output, memory, CPU, and arithmetic logic unit. Input is the device used by the operator of the computer to make it do what is requested. The output displays the results of the tasks created from the input. The data goes from the input to the memory, then to the arithmetic logic unit for processing, then to the output. The data can then be stored in memory if the user desires. Before the advent of the monitor, the user would have to hand-feed cards into the input and wouldn't see the results until they were displayed by the printer. Now that we have monitors, we can view the instant results of the tasks. The main component that allows the computer to do what is desired is the transistor. The transistor can either amplify or block electrical currents to produce either a 1 or a 0. Previously done by valves and vacuum tubes, the transistor allows for much faster processing of information. The microprocessor consists of a layered microchip which is on a base of silicon. It is a computer in itself and is the most integral part of the CPU in modern computers. It is a single chip which allows all that happens on a computer. Integrated circuits, microchips which are layered with their own circuitry, also provide a much more manageable memory source. The only reason magnetic tape backups are used today is because of the space which is needed in order to back up an entire computer. Memory for today's computers consists of RAM and ROM. ROM is unchangeable and stores the computer's most vital components, its operating instructions. Without this, the computer would be completely inoperable. Programs today use the instructions in the ROM to complete the tasks the program is attempting. This is why you cannot use IBM programs on a Macintosh: the ROM and operating systems are different, therefore the programming calls are different. Some powerful computers today can complete both sets of tasks because they have both sets of instructions stored in the ROM. The reason ROM is unchangeable is that people who don't know what they are doing could mess things up on their computer forever. RAM is the temporary memory that is in a computer. This is the memory that is used by programs to complete their tasks. RAM is only temporary because it requires a constant electrical charge. Once the computer is shut off, the RAM loses everything that was in it. That is why you lose work that you have done if the power goes off and you didn't save it first. If something needs to be saved, it is either saved to the hard disk within the computer or to a floppy disk. With today's networking capabilities, things can be saved on completely separate machines called "servers". Though the process of saving is the same, a server can be located five feet away or on the opposite side of the world.
With today's technology, anything is possible with the use of a computer. You could visit a website and find that special someone, or create a virus that could crash thousands of machines at a single moment in time. If you have the money, the possibilities are endless. In today's day and age, information is sacred. One of the biggest problems found with information is what is free and what isn't. There will always be people who want more information than will be allotted to them; today, these people are known as hackers. Hackers use their individual knowledge to gain access to information that is not meant for them to know. It is almost a shame that hackers have such a bad reputation. Most are teenagers who are looking to gain more information. Of course, some are dedicated to destruction and random violence, but there always will be those types of people in the world. Of course, there is personal information that is transmitted over the Internet that no one but the intended party and yourself should have access to (i.e. your credit card numbers and expiration dates), but who decides what is and what isn't personal information? This is a problem that has greatly prevented the growth of the Internet into major companies. In the future, it can only get worse. At the rate we are going, everything will be computerized and stored electronically. This means that with the know-how, anyone could access your information. If you have ever seen the movie "The Net", you know exactly what I am talking about. If all information is stored electronically, anyone with the desire can view, change, remove, or add your personal attributes. With enough effort, one could take away someone's entire identity. This may seem like a futuristic sci-fi novel, but it could be in our not so distant future.
The future of technology can only be guessed upon. I believe that the connection between computers and humans will become much closer. People will feel the need to become "one" with their machines and possibly even be physically linked with them. Information will be stored, transmitted, and viewed completely electronically. Perhaps an implant directly into the brain will be the link between humans and computers. This implant could feed you information directly off the Internet and several other sources. I personally believe that this is extremely scary. Once that link is made, there will be the desire to get even closer to the computer. A new, even more intimate link will be made. The cycle will continue until there is no line between humans and electronics. We will all be robots just reacting to instructions and following protocol. This is the most horrifying thing I can imagine. Our identities will be removed and we will all become one. I don't know, maybe I need to stop for a minute before I completely terrify myself.
f:\12000 essays\sciences (985)\Computer\Pentium Pro Microarchitecture.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A Tour of the Pentium(r) Pro Processor Microarchitecture
Introduction
One of the Pentium(r) Pro processor's primary goals was to significantly exceed the performance
of the 100MHz Pentium(r) processor while being manufactured on the same semiconductor process. Using the same process as a volume production processor practically assured that the Pentium Pro processor would be manufacturable, but it meant that Intel had to focus on an improved microarchitecture for ALL of the performance gains. This guided tour describes how multiple architectural techniques - some proven in mainframe computers, some proposed in academia and some we innovated ourselves - were carefully interwoven, modified, enhanced, tuned and implemented to produce the Pentium Pro microprocessor. This unique combination of architectural features, which Intel describes as Dynamic Execution, enabled the first Pentium Pro processor silicon to exceed the original performance goal.
Building from an already high platform
The Pentium processor set an impressive performance standard with its pipelined,
superscalar microarchitecture. The Pentium processor's pipelined implementation uses five
stages to extract high throughput from the silicon - the Pentium Pro processor moves to a
decoupled, 12-stage, superpipelined implementation, trading less work per pipestage for
more stages. The Pentium Pro processor reduced its pipestage time by 33 percent, compared
with a Pentium processor, which means the Pentium Pro processor can have a 33% higher clock
speed than a Pentium processor and still be equally easy to produce from a semiconductor
manufacturing process (i.e., transistor speed) perspective.
The Pentium processor's superscalar microarchitecture, with its ability to execute two
instructions per clock, would be difficult to exceed without a new approach.
The new approach used by the Pentium Pro processor removes the constraint of linear
instruction sequencing between the traditional "fetch" and "execute" phases, and opens up
a wide instruction window using an instruction pool. This approach allows the "execute"
phase of the Pentium Pro processor to have much more visibility into the program's
instruction stream so that better scheduling may take place. It requires the instruction
"fetch/decode" phase of the Pentium Pro processor to be much more intelligent in terms of
predicting program flow. Optimized scheduling requires the fundamental "execute" phase to
be replaced by decoupled "dispatch/execute" and "retire" phases. This allows instructions
to be started in any order but always be completed in the original program order. The
Pentium Pro processor is implemented as three independent engines coupled with an
instruction pool as shown in Figure 1 below.
What is the fundamental problem to solve?
Before starting our tour of how the Pentium Pro processor achieves its high performance, it
is important to note why this three-independent-engine approach was taken. A fundamental
fact of today's microprocessor implementations must be appreciated: most CPU cores are not
fully utilized. Consider the code fragment in Figure 2 below:
The first instruction in this example is a load of r1 that, at run time, causes a cache miss.
A traditional CPU core must wait for its bus interface unit to read this data from main
memory and return it before moving on to instruction 2. This CPU stalls while waiting for
this data and is thus being under-utilized.
While CPU speeds have increased 10-fold over the past 10 years, the speed of main memory
devices has only increased by 60 percent. This increasing memory latency, relative to the
CPU core speed, is a fundamental problem that the Pentium Pro processor set out to solve.
One approach would be to place the burden of this problem onto the chipset but a
high-performance CPU that needs very high speed, specialized, support components is not a
good solution for a volume production system.
A brute-force approach to this problem is, of course, increasing the size of the L2 cache to reduce the miss ratio. While effective, this is another expensive solution, especially considering the speed requirements of today's L2 cache SRAM components. Instead, the Pentium Pro processor is designed from an overall system implementation perspective which will allow higher performance systems to be designed with cheaper memory subsystem designs.
Pentium Pro processor takes an innovative approach
To avoid this memory latency problem the Pentium Pro processor "looks-ahead" into its instruction pool at subsequent instructions and will do useful work rather than be stalled. In the example in Figure 2, instruction 2 is not executable since it depends upon the result of instruction 1; however both instructions 3 and 4 are executable. The Pentium Pro processor speculatively executes instructions 3 and 4. We cannot commit the results of this speculative execution to permanent machine state (i.e., the programmer-visible registers) since we must maintain the original program order, so the results are instead stored back in the instruction pool awaiting in-order retirement. The core executes instructions depending upon their readiness to execute and not on their original program order (it is a true dataflow engine). This approach has the side effect that instructions are typically executed out-of-order.
The cache miss on instruction 1 will take many internal clocks, so the Pentium Pro processor core continues to look ahead for other instructions that could be speculatively executed and is typically looking 20 to 30 instructions in front of the program counter. Within this 20- to 30- instruction window there will be, on average, five branches that the fetch/decode unit must correctly predict if the dispatch/execute unit is to do useful work. The sparse register set of an Intel Architecture (IA) processor will create many false dependencies on registers so the dispatch/execute unit will rename the IA registers to enable additional forward progress. The retire unit owns the physical IA register set and results are only committed to permanent machine state when it removes completed instructions from the pool in original program order.
Dynamic Execution technology can be summarized as optimally adjusting instruction execution by predicting program flow, analysing the program's dataflow graph to choose the best order to execute the instructions, then having the ability to speculatively execute instructions in the preferred order. The Pentium Pro processor dynamically adjusts its work, as defined by the incoming instruction stream, to minimize overall execution time.
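To make the idea concrete, here is a toy Python sketch of dataflow scheduling: uops wait in a pool, are dispatched as soon as their operands are ready, and are retired strictly in program order. This is only an illustration of the principle, not the processor's actual mechanism; the instructions, dependencies, and latencies below are invented.

# Each uop: (name, indices of earlier uops it depends on, latency in clocks).
uops = [
    ("load r1",    [], 10),   # instruction 1: cache miss, long latency
    ("add r2, r1", [0], 1),   # instruction 2: needs r1, so it must wait
    ("inc r5",     [], 1),    # instruction 3: independent
    ("inc r6",     [], 1),    # instruction 4: independent
]

done_at = [None] * len(uops)          # clock at which each uop's result becomes available
clock = 0
while not all(d is not None for d in done_at):
    clock += 1
    for i, (name, deps, latency) in enumerate(uops):
        ready = all(done_at[j] is not None and done_at[j] <= clock for j in deps)
        if done_at[i] is None and ready:
            done_at[i] = clock + latency - 1      # dispatch now; result ready 'latency' clocks later
            print("clock", clock, ": dispatch", name)

retire_clock = 0
for i, (name, _, _) in enumerate(uops):           # retirement is strictly in program order
    retire_clock = max(retire_clock, done_at[i]) + 1
    print("clock", retire_clock, ": retire", name)

Running it shows the two independent instructions dispatching on clock 1 while the dependent add waits for the load to complete, yet everything still retires in the original program order.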
Overview of the stops on the tour
We have previewed how the Pentium Pro processor takes an innovative approach to overcome a key system constraint. Now let's take a closer look inside the Pentium Pro processor to understand how it implements Dynamic Execution. Figure 3 below extends the basic block diagram to include the cache and memory interfaces - these will also be stops on our tour. We shall travel down the Pentium Pro processor pipeline to understand the role of each unit:
•The FETCH/DECODE unit: An in-order unit that takes as input the user program instruction stream from the instruction cache and decodes the instructions into a series of micro-operations (uops) that represent the dataflow of that instruction stream. The program pre-fetch is itself speculative.
•The DISPATCH/EXECUTE unit: An out-of-order unit that accepts the dataflow stream, schedules execution of the uops subject to data dependencies and resource availability and temporarily stores the results of these speculative executions.
•The RETIRE unit: An in-order unit that knows how and when to commit ("retire") the temporary, speculative results to permanent architectural state.
•The BUS INTERFACE unit: A partially ordered unit responsible for connecting the three internal units to the real world. The bus interface unit communicates directly with the L2 cache supporting up to four concurrent cache accesses. The bus interface unit also controls a transaction bus, with MESI snooping protocol, to system memory.
Tour stop #1: The FETCH/DECODE unit.
Figure 4 shows a more detailed view of the fetch/decode unit:
Let's start the tour at the Instruction Cache (ICache), a nearby place for instructions to reside so that they can be looked up quickly when the CPU needs them. The Next_IP unit provides the ICache index, based on inputs from the Branch Target Buffer (BTB), trap/interrupt status, and branch-misprediction indications from the integer execution section. The 512 entry BTB uses an extension of Yeh's algorithm to provide greater than 90 percent prediction accuracy. For now, let's assume that nothing exceptional is happening, and that the BTB is correct in its predictions. (The Pentium Pro processor integrates features that allow for the rapid recovery from a mis-prediction, but more of that later.)
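Yeh-style two-level prediction keeps a short history of recent branch outcomes and uses that pattern to index a table of saturating counters. The following C sketch is a generic two-level predictor of that family, offered only to show the idea; it is not Intel's actual implementation, and the table sizes are arbitrary:

    #include <stdint.h>

    #define HISTORY_BITS 4
    #define PHT_SIZE     (1 << HISTORY_BITS)

    static uint8_t history;          /* last HISTORY_BITS branch outcomes */
    static uint8_t pht[PHT_SIZE];    /* 2-bit saturating counters         */

    /* Predict taken when the counter for the current history pattern is
       in one of its two upper states.                                    */
    static int predict(void)
    {
        return pht[history & (PHT_SIZE - 1)] >= 2;
    }

    /* After the branch resolves, train the counter and shift the actual
       outcome into the history register.                                 */
    static void update(int taken)
    {
        uint8_t *ctr = &pht[history & (PHT_SIZE - 1)];
        if (taken  && *ctr < 3) (*ctr)++;
        if (!taken && *ctr > 0) (*ctr)--;
        history = (uint8_t)((history << 1) | (taken ? 1 : 0));
    }

Because the counter is chosen by the recent outcome pattern rather than by the branch alone, repeating patterns such as loop exits every Nth iteration can be learned, which is what pushes accuracy above simple last-outcome prediction.
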
The ICache fetches the cache line corresponding to the index from the Next_IP, and the next line, and presents 16 aligned bytes to the decoder. Two lines are read because the IA instruction stream is byte-aligned, and code often branches to the middle or end of a cache line. This part of the pipeline takes three clocks, including the time to rotate the prefetched bytes so that they are justified for the instruction decoders (ID). The beginning and end of the IA instructions are marked.
Three parallel decoders accept this stream of marked bytes and proceed to find and decode the IA instructions contained therein. The decoder converts the IA instructions into triadic uops (two logical sources, one logical destination per uop). Most IA instructions are converted directly into single uops; some instructions are decoded into one to four uops; and the complex instructions require microcode (the box labeled MIS in Figure 4; this microcode is just a set of preprogrammed sequences of normal uops). Some instructions, called prefix bytes, modify the following instruction, giving the decoder extra work to do. The uops are enqueued and sent to the Register Alias Table (RAT) unit, where the logical IA-based register references are converted into Pentium Pro processor physical register references, and to the Allocator stage, which adds status information to the uops and enters them into the instruction pool. The instruction pool is implemented as an array of Content Addressable Memory called the ReOrder Buffer (ROB).
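To picture what a triadic uop carries, here is a hypothetical C structure; the field names are invented and the real internal encoding is not documented here:

    /* Hypothetical representation of a triadic uop: two logical sources,
       one logical destination, plus bookkeeping added by the Allocator.  */
    struct uop {
        int opcode;        /* operation to perform (add, load, ...)       */
        int src1, src2;    /* two logical source registers                */
        int dest;          /* one logical destination register            */
        int data_ready;    /* set when both source operands are available */
        int executed;      /* set when a result has been written back     */
        long long result;  /* speculative result awaiting retirement      */
    };

    /* An IA instruction with a memory source, such as "add eax, [ebx]",
       would typically decode into two such uops: a load into a temporary,
       then an add of that temporary into eax.                             */
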
We have now reached the end of the in-order pipe.
Tour stop #2: The DISPATCH/EXECUTE unit
The dispatch unit selects uops from the instruction pool depending upon their status. If the status indicates that a uop has all of its operands then the dispatch unit checks to see if the execution resource needed by that uop is also available. If both are true, it removes that uop and sends it to the resource where it is executed. The results of the uop are later returned to the pool. There are five ports on the Reservation Station and the multiple resources are accessed as shown in Figure 5 below:
The Pentium Pro processor can schedule at a peak rate of 5 uops per clock, one to each resource port, but a sustained rate of 3 uops per clock is typical. The activity of this scheduling process is the quintessential out-of-order process; uops are dispatched to the execution resources strictly according to dataflow constraints and resource availability, without regard to the original ordering of the program.
Note that the actual algorithm employed by this execution-scheduling process is vitally important to performance. If only one uop per resource becomes data-ready per clock cycle, then there is no choice. But if several are available, which should it choose? It could choose randomly, or first-come-first-served. Ideally it would choose whichever uop would shorten the overall dataflow graph of the program being run. Since there is no way to really know that at run-time, it approximates by using a pseudo FIFO scheduling algorithm favoring back-to-back uops.
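A toy version of that selection step might look like the C below; "oldest ready first" stands in for the pseudo-FIFO policy described above, and the structure and port model are assumptions for illustration only:

    /* Pick the oldest uop that has all of its operands and whose
       execution port is free; a simplified stand-in for the scheduler.  */
    struct sched_uop { int ready; int executed; int age; int port; };

    static int pick_next(const struct sched_uop *pool, int n,
                         const int *port_free)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!pool[i].ready || pool[i].executed) continue;
            if (!port_free[pool[i].port]) continue;   /* resource busy   */
            if (best < 0 || pool[i].age < pool[best].age)
                best = i;                             /* prefer older    */
        }
        return best;   /* -1 means nothing can dispatch this clock       */
    }
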
Note that many of the uops are branches, because many IA instructions are branches. The Branch Target Buffer will correctly predict most of these branches but it can't correctly predict them all. Consider a BTB that's correctly predicting the backward branch at the bottom of a loop: eventually that loop is going to terminate, and when it does, that branch will be mispredicted. Branch uops are tagged (in the in-order pipeline) with their fallthrough address and the destination that was predicted for them. When the branch executes, what the branch actually did is compared against what the prediction hardware said it would do. If those coincide, then the branch eventually retires, and most of the speculatively executed work behind it in the instruction pool is good.
But if they do not coincide (a branch was predicted as taken but fell through, or was predicted as not taken and it actually did take the branch) then the Jump Execution Unit (JEU) changes the status of all of the uops behind the branch to remove them from the instruction pool. In that case the proper branch destination is provided to the BTB which restarts the whole pipeline from the new target address.
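In rough C terms, the check performed when a branch resolves could be sketched as follows (hypothetical fields, not the real hardware interface):

    /* Compare the actual branch outcome with the prediction recorded at
       fetch time; on a mismatch the caller would squash every younger
       uop and restart fetch at the returned address.                     */
    struct branch_uop {
        unsigned long predicted_target;
        unsigned long fallthrough;
        int           predicted_taken;
    };

    static unsigned long resolve_branch(const struct branch_uop *b,
                                        int actually_taken,
                                        unsigned long actual_target,
                                        int *mispredicted)
    {
        unsigned long correct = actually_taken ? actual_target
                                               : b->fallthrough;
        unsigned long guessed = b->predicted_taken ? b->predicted_target
                                                   : b->fallthrough;
        *mispredicted = (correct != guessed);
        return correct;   /* address the front end should fetch next     */
    }
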
Tour stop #3: The RETIRE unit
Figure 6 shows a more detailed view of the retire unit:
The retire unit is also checking the status of uops in the instruction pool - it is looking for uops that have executed and can be removed from the pool. Once removed, the uops' original architectural target is written as per the original IA instruction. The retirement unit must not only notice which uops are complete, it must also re-impose the original program order on them. It must also do this in the face of interrupts, traps, faults, breakpoints and mispredictions.
There are two clock cycles devoted to the retirement process. The retirement unit must first read the instruction pool to find the potential candidates for retirement and determine which of these candidates are next in the original program order. Then it writes the results of this cycle's retirements to both the Instruction Pool and the RRF. The retirement unit is capable of retiring 3 uops per clock.
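A minimal sketch of in-order retirement, assuming the pool entries are visited in original program order (the structure is invented; it is not the real ROB layout):

    /* Retire up to three completed uops per clock, strictly in original
       program order: stop at the first uop that has not yet executed.   */
    #define RETIRE_WIDTH 3

    struct rob_entry { int valid; int executed; long long result; int dest; };

    static int retire_one_clock(struct rob_entry *rob, int *head, int size,
                                long long *architectural_regs)
    {
        int retired = 0;
        while (retired < RETIRE_WIDTH &&
               rob[*head].valid && rob[*head].executed) {
            architectural_regs[rob[*head].dest] = rob[*head].result; /* commit */
            rob[*head].valid = 0;
            *head = (*head + 1) % size;
            retired++;
        }
        return retired;
    }

The important property is the early exit: a finished uop sitting behind an unfinished one waits, which is exactly what keeps the committed state consistent with the original program.
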
Tour stop #4: BUS INTERFACE unit
Figure 7 shows a more detailed view of the bus interface unit:
There are two types of memory access: loads and stores. Loads only need to specify the memory address to be accessed, the width of the data being retrieved, and the destination register. Loads are encoded into a single uop. Stores need to provide a memory address, a data width, and the data to be written. Stores therefore require two uops, one to generate the address, one to generate the data. These uops are scheduled independently to maximize their concurrency, but must re-combine in the store buffer for the store to complete.
Stores are never performed speculatively, there being no transparent way to undo them. Stores are also never reordered among themselves. The Store Buffer dispatches a store only when the store has both its address and its data, and there are no older stores awaiting dispatch.
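That dispatch rule can be stated as a small predicate. The sketch below assumes a simple array-backed store buffer ordered oldest to newest, purely as an illustration:

    /* A store may leave the store buffer only when it has both its
       address and its data, and every older store has already gone out. */
    struct store { int has_addr; int has_data; int dispatched; };

    static int can_dispatch_store(const struct store *buf, int idx)
    {
        for (int older = 0; older < idx; older++)
            if (!buf[older].dispatched)
                return 0;            /* an older store is still waiting   */
        return buf[idx].has_addr && buf[idx].has_data;
    }
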
What impact will a speculative core have on the real world? Early in the Pentium Pro processor project, we studied the importance of memory access reordering. The basic conclusions were as follows:
•Stores must be constrained from passing other stores, for only a small impact on performance.
•Stores can be constrained from passing loads, for an inconsequential performance loss.
•Constraining loads from passing other loads or from passing stores creates a significant impact on performance.
So what we need is a memory subsystem architecture that allows loads to pass stores. And we need to make it possible for loads to pass loads. The Memory Order Buffer (MOB) accomplishes this task by acting like a reservation station and Re-Order Buffer, in that it holds suspended loads and stores, redispatching them when the blocking condition (dependency or resource) disappears.
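One simple way to let loads pass stores safely is to hold back only those loads that might conflict with an older store. The sketch below makes that conservative check explicit; it is a simplification of what a Memory Order Buffer does, with invented structures and exact-match address comparison instead of real overlap checks:

    /* A load may bypass older stores only when every older, not-yet-
       performed store has a known address that does not match the load.
       Otherwise the load is suspended and redispatched later.            */
    struct pending_store { int performed; int addr_known; unsigned long addr; };

    static int load_may_pass(const struct pending_store *stores, int n_older,
                             unsigned long load_addr)
    {
        for (int i = 0; i < n_older; i++) {
            if (stores[i].performed) continue;
            if (!stores[i].addr_known) return 0;       /* be conservative */
            if (stores[i].addr == load_addr) return 0; /* stale data risk */
        }
        return 1;
    }
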
Tour Summary
It is the unique combination of improved branch prediction (to offer the core many instructions), dataflow analysis (choosing the best order), and speculative execution (executing instructions in the preferred order) that enables the Pentium Pro processor to deliver its performance boost over the Pentium processor. This unique combination is called Dynamic Execution, and its impact is comparable to what "Superscalar" meant for the previous generation of Intel Architecture processors. While all your PC applications run on the Pentium Pro processor, today's powerful 32-bit applications take best advantage of Pentium Pro processor performance.
And while our architects were honing the Pentium Pro processor microarchitecture, our silicon technologists were working on an advanced manufacturing process - the 0.35 micron process. The result is that the initial Pentium Pro Processor CPU core speeds range up to 200MHz.
f:\12000 essays\sciences (985)\Computer\pg.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Joystick Port Power Glove
Here's something useful to do with those old Nintendo Power Gloves collecting dust in the closet! I will show you how you can use parts of the
Mattel/Nintendo Power Glove to make your own input devices.
Step 1. Remove the flexible resistor strips from the powerglove's fingers. To do this, you must peel the black "glove" from the grey plastic part, as shown:
Note: If you choose, you may remove the rest of the electronics on the glove and use it as it is. I chose to remove the strips and sew them onto a glove that fit my
hand better.
Step 2. Cut along the clear plastic tubes surrounding the brown flexible sensor strips, to free the strips from the grey plastic. De-solder or cut the wires connecting
the sensors to the glove.
sensor strip
Step 3. Sew the sensors onto a glove that fits your hand. Notice that the sensors bend one direction better than the other. Keep this in mind when placing them on
your glove (or whatever else you build). I used this soccer glove because it had fabric pieces sewn over the fingers. I simply cut the stitches and put the sensors
under the fabric. I later found it necessary to sew one end of the sensor to the glove to hold it in place.
The maximum resistance value of the sensors I used was 150K ohms. I connected my glove to my PC through the joystick port, using positions 0 and 1. I later
added a 19K ohm resistor in parallel with each sensor to increase the sensitivity for the PC joystick port. I have included the pin diagram of a typical PC joystick
port below. The table was referenced from The Pocket Ref, compiled by Thomas J. Glover, published by Sequoia Publishing, Inc. You connect one pole of the resistor strip
to +5 volts and the other pole to one of the coordinate positions on the joystick port.
Pin   Description
 1    +5 volts (from computer)
 2    Button 1 input
 3    Position 0, X - Coordinate
 4    Ground
 5    Ground
 6    Position 1, Y - Coordinate
 7    Button 2 input
 8    +5 volts
 9    +5 volts
10    Button 3 input
11    Position 2, X - Coordinate
12    Ground
13    Position 3, Y - Coordinate
14    Button 4 input
15    +5 volts
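As a rough check on the parallel-resistor trick described above: for two resistors in parallel, R = (R1 x R2) / (R1 + R2), so a 19K resistor across a 150K sensor gives about 16.9K at full flex, which is much closer to the range a typical joystick port reads comfortably. The same arithmetic as a tiny C sketch (the values are the ones used in this project; your sensors may differ):

    #include <stdio.h>

    /* Effective resistance of two resistors in parallel. */
    static double parallel(double r1, double r2)
    {
        return (r1 * r2) / (r1 + r2);
    }

    int main(void)
    {
        double sensor = 150e3;   /* max resistance of one flex sensor (ohms) */
        double fixed  = 19e3;    /* resistor added in parallel (ohms)        */
        printf("effective max resistance: %.1f K ohms\n",
               parallel(sensor, fixed) / 1e3);   /* prints about 16.9 */
        return 0;
    }
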
WARNING!
Do not attempt to plug anything you build into your computer unless you are ABSOLUTELY certain you know what you are doing. You can cause permanent
damage to your hardware!
The Visual Basic program and joystick driver I used to test my glove is available for ftp.
This program runs under Windows. To use it you must install the joystick driver included in the .zip. I have included the source code and make file for the program
so you can see how it works, if you have VB.
The program is simple to use. Run it, make the gesture you want to recall, press the corresponding button, and watch the recall window for results. The text on the
button will change from red to green when a gesture is stored. Adjust the fuzz factor to increase or decrease sensitivity (between 2500 and 8000 is usually good).
Position values are also displayed for the finger and thumb.
f:\12000 essays\sciences (985)\Computer\Piracy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
October 28, 1996
Ian Sum
Recently, The Toronto Star published an article entitled "RCMP
seizes BBS, piracy charges pending." The RCMP have seized all
computer components belonging to the "90 North" bulletin board
system in Montreal, Quebec. The board is accused of giving
people the opportunity to download commercial and beta software.
I feel that the RCMP should not charge people who are linked to
computer piracy, because the pirated software offers valuable
opportunities to programmers and users. Also, the revenue lost to
the large software companies is such a small amount that the
effect won't be greatly felt by them, and so it is not worth the
policing effort required to track down the pirates.
When pirates distribute the illegal software, one could say
that they are helping, rather than hurting, the software companies.
By distributing the software worldwide, they create great
advertisement for the software companies and their products.
Although the software company loses profits from that
particular version, it could generate future sales of other
versions. Also, when the pirates distribute the software, it can
be a great source of test data for the software companies. This is
an effective way to catch any undiscovered bugs in the software
program. From debugging to hacking, hackers can benefit the most.
They can study and learn from the advancements within the
programming.
So what does all this activity tell us? It tells us that
people are willing to go to great lengths to get software at a
lower cost, or possibly in exchange for other software, and that
they are succeeding in their efforts. Although more than 50% of
software income comes from companies which do not pirate, this
still poses a problem for the software industry. By fining a
single bulletin board out of the thousands in North America,
little would be accomplished. Not to mention the fact that it is
extremely difficult to prove and convict people under the Copyright
Act. In today's society, revenue from software is a relatively
small income source for corporations such as WordPerfect Corp. These
companies make their money mainly from individuals purchasing extra
manuals, reference material, supplementary hardware, and calling
product support. Software companies are conscious of the pirate
world and the changes it has made. Some companies actually want
you to take the software by using the SHAREWARE concept. In
SHAREWARE one gets a chance to use demo programs and then pay the
full purchase price if one feels it is worthwhile. It is a bit like
test driving a car before one buys. In most cases, users are
happy and end up purchasing complete software. Most software
companies are still in business, and still bringing out more
technological advancements that entice users to continually buy
newer versions. The companies, in this sense, have outsmarted and
beaten the pirates. Violation of the Copyright Act seems to
benefit software companies more than it hurts them. Their software
gets more exposure, which leads to more software revenue in the end
than revenue that is lost through piracy. The opportunity cost is
worth it in the end.
Cracking down on software piracy is a waste of society's
energy. There is more benefit for everyone the way things are at
present. Users get to view and evaluate software before they pay.
Hackers get an opportunity to view other works, learn from the
advancements, or find the errors in the beta versions. Software
companies get more exposure, which in the long run will lead to more
revenue for them.
f:\12000 essays\sciences (985)\Computer\Policies and Procedures Manual Forms Analysis and Design.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Policies and Procedure Guidelines Page 1 of 14
Section 1.1: Forms Analysis and Design Effective date: March 6, 1997
Issued by Approved by:
1.1 FORMS ANALYSIS AND DESIGN
1.1.1 WHAT IS A FORM?
A form is basically a fixed arrangement of captioned spaces designed for entering and obtaining pre-described information. A form is considered effective if it is:
· easy to complete
· easy to use
· easy to store
· easy to retrieve information quickly
· easy to dispose
1.1.2 HOW IS IT IMPORTANT?
In a business, forms and forms design are greatly needed to help the company organize the way it operates, so that the business runs smoothly and efficiently. Although the presence of well-designed forms ensures that the company will run better, make better decisions and coordinate activities more easily, these forms and design programs must be covered in the company's budget.
The company will have to make sure that its forms and designs follow one standard throughout the company and do not differ between separate sections of the company's total make-up. If, by chance, a universal form is a disadvantage rather than an advantage in a certain section of the company, the forms and policies of other companies may be looked at in order to correct the problem. When creating a form, companies may use the same standard techniques before making changes to tailor the form to the company.
Some basic techniques are making sure that the form is easy to fill in, takes minimal time to fill-in, it has a functional layout and it contains an attractive visual appearance.
After applying the basic standards of form design, the forms analysts spend countless hours making the design a unique standard for their company, while considering every section of the company, so that the form will be useful to every member of the company.
Policies and Procedure Guidelines Page 2 of 14
Section 1.2: Tools and Aids For Forms and Design Effective date: March 6, 1997
Issued by: Approved by:
1.2 TOOLS AND AIDS FOR FORMS DESIGNING
Many companies use the same basic tools to design their forms. In the past when forms were designed, many "traditional tools" were used to design forms. Some of those tools include the following:
· pencils, erasers
· rulers, triangles
· tracing paper
· lettering and symbol templates
· cutting tools
· masking tape and cellophane tape
· correction fluid
· rubber cement
Now, because of new technology and easier ways to design forms, most of these tools are obsolete. New computer hardware and software provide many tools and accessories, and companies have trained employees to design forms using these advanced tools. Software packages such as Corel Draw and Microsoft Office (which includes Word, Excel, Access and PowerPoint), along with WordPerfect, PowerBuilder, Visual Basic and many other packages, have made these tasks easier to complete. Their amazingly accurate and precise design tools provide "picture-perfect" quality.
1.2.1 Computer Hardware and Software
· Pentium Computers
Today most designers use computers especially Pentium computers because of their speed and performance.
Policies and Procedure Guidelines Page 3 of 14
Section 1.2: Tools and Aids For Forms and Design Effective date: March 6, 1997
Issued by: Approved by:
· Corel Draw
There are several different software packages that can be used to design the forms. Many companies recommend Corel Draw. It is an excellent choice to use for designing the form as you would want it on paper. There are excellent designing tools included in the Corel Package which allows you to draw lines of any size, color or shape. It also allows you to insert grids, graphics, graphs or images with different border styles and sizes.
· Microsoft Word
After designing the physical appearance of the form with style and borders, Microsoft Word will be used to fill in the form's information because of the various fonts that are available. Also, Microsoft Word's ability to change font size, and either, bold, underline or italicize wording, will be very useful in the creation of the text that will appear in the form.
· Microsoft Excel
This section of Microsoft Office can be used by the designers to design grids and graphs that might be needed to represent data in the form. Grids and tables may be inserted into the form to hold data that the applicant may need to fill. Different types of graphs such as pie charts, line graphs, column graphs and combination graphs may be needed to represent a question in the form. For example, the applicant may need to fill in what percentage he/she belongs to as compared to the rest of the field represented by the graph.
· Microsoft Access
This section of Microsoft Office can be used to design databases. The designers may want to include previously designed tables or create new tables to insert into forms. They may also want to only include portions of tables in which they can create queries so that the tables they insert includes only the information that they specified.
Policies and Procedure Guidelines Page 4 of 14
Section 1.2: Tools and Aids For Forms and Design Effective date: March 6, 1997
Issued by: Approved by:
· Printers
An Epson III Laser Jet Color Printer can be used to print the forms. The laser quality will provide the crisp and clear texture of lines and text, along with bright colors to make the form more attractive and visually appealing. Although any laser printer will provide excellent quality, color laser printers make the forms more attractive because the different colors distinguish between the different sections of the form.
· Saving Forms
All forms that are designed by the company should be backed up on the hard drives of the computers. The forms will be saved whether they were used or not, in case of changes in the form's design or in case the company wants to improve on a previously designed form. The forms will also be saved on floppy disks, in case of viruses, malfunctions in the computer, or hard drive upgrading and formatting.
Policies and Procedure Guidelines Page 5 of 14
Section 1.3: Designing Procedures Effective date: March 6, 1997
Issued by: Approved by:
1.3 DESIGNING PROCEDURES
The two major objectives of this process are:
1) collecting information, which is the form's reason for existence
2) providing a standard format for the form.
1.3.1 Facilitative Area
The forms are a very important aspect of a company because they provide the information about each employee that the employers wish to know. Since most companies use a standardized format, each form must contain its title and identify the type of form that the applicant is filling out.
It is also useful to include the name of the department, date, codes and instructions that may be necessary to complete the form.
· Identification
The title of the form will be placed at the top center of the form and in any case where the form contains more than one invoice, it should include subtitles to distinguish it from the rest of the forms. If the forms will be filed, it will be helpful to place the title in the "visible area" of the form, which would be the area visible on the form when it is in a filing cabinet or some other type of filing.
· Form Numbers
The forms will also include form numbers which will be placed in either of the lower corners on each page of the form. This will prevent the form numbers from being covered by staples and it won't interfere with the working area of the form. It will also serve as an aid in stocking the forms in small quantities.
Policies and Procedure Guidelines Page 6 of 14
Section 1.3: Designing Procedures Effective date: March 6, 1997
Issued by: Approved by:
· Page Numbers
It is also very important to ensure that all the pages of the form contain page numbers for various reasons. This will be helpful in identifying what page of the form it is and help make it easier to sort out forms, especially if they contain more than one page. The page numbers should be placed in the upper right hand corner of the page so that when the form is opened the number of the page will be easier to see when the pages are stapled in the upper left corner. (EX: Page 1 of **)
· Edition Date
The company should ensure that all the forms contain edition dates which show when the form was made. The form should also show how long they will be valid before they need to be updated again. The edition dates will be included with form numbers.
· Supersession Notice
This is simply a method of notifying users and workers in the supply room so that they will know when a new form has been created to replace the older version of the form. It is also used when a newer version of the previous form has been revised.
This notice is usually printed in the bottom margin of the form. It should let the user know if the form has been replaced and what the number of the new form is. If more than one form is used to replace a single form, then a separate notice would be more appropriate to inform affected personnel of the change.
· Expiration Dates and Approval of Forms
If a form is to be used for only a limited time, then it should contain expiration dates and limit dates. These will let the users know when and how long the form will be valid and when they should get another one.
Because many forms have to be approved by a company first before they are distributed to users, they must allow room for the company to state its approval number, signature or symbol, along with the date that the form was approved.
Policies and Procedure Guidelines Page 7 of 14
Section 1.3: Designing Procedures Effective date: March 6, 1997
Issued by: Approved by:
· Emblems and Symbols
After the forms are approved by the company, the designers must insert the company's emblem or logo on the form. This will validate the form as property of that company and act sort of like a patent so that it won't be used by any other companies.
· Comments and Suggestions
In order to leave room for improvement on the forms, there should be enough space for any comments or suggestions that the authorizing department wishes to leave when approving the form. The form will have to be approved by the department before the company's logo or seal can be placed on the form, and it will have to contain the company's logo before the form will be valid.
Policies and Procedure Guidelines Page 8 of 14
Section 1.4: Instructions Effective date: March 6, 1997
Issued by: Approved by:
1.4 INSTRUCTIONS
1.4.1 General Instructions
To ensure that the forms are easy to fill out, each form will contain instructions for completing the form and what to do with the forms after completing them. The instructions should be brief. The instructions that are located under the title of the form will be basic, general instructions that tell the applicant what to do with the form, why they are filling it out and who they should give it to when they are finished. This should be read by the user before completing the form.
1.4.2 Lengthy Instructions
In any case where the form is lengthy and requires a lot of thought to fill it out, an instruction booklet should be included with the form. These instructions are more lengthy but explain more about filling out the form. They should try to answer any questions that the applicant may have about his/her choices while completing the form. These instructions will explain clearly how to fill out the form, including what is mandatory to fill in and what sections are optional.
These instructions should be sort of like a written procedure that explains the form in a sort of summary. The font size of the wording should be carefully designed to make sure that the words are big enough and the lines should be double spaced to make sure that the instructions are clear enough to read and understand.
An acceptable reading font size is around 12pt or 14 pt size. Times New Roman, Arial or Courier are standard true type fonts that are clear and easy to read.
1.4.3 Section Instructions
There will also be instructions included in each section. These instructions will explain clearly how to fill out each section of the form. They will state whether or not the section needs to be filled out in order for the form to be considered complete.
Policies and Procedure Guidelines Page 9 of 14
Section 1.5: Addressing and Mailing Effective date: March 6, 1997
Issued by: Approved by:
1.5 ADDRESSING AND MAILING
1.5.1 Self-Routing
On the bottom of the last page of the form, or on the back of the last page, there will be a space for the address of the employer and a space for the applicant to fill in his/her address, along with extra space in case the form has to be sent to multiple routes. This will make it easier for the forms to be transferred to the employer and increase the capability of self-routing mail.
When addressing a form to a certain employer, job titles should be used instead of names, in case changes in departments occur due to promotions or lay-offs. Such changes alter the positions held by the employees who are in control of certain departments, which means different responsibilities for these people.
1.5.2 E-Mailing and Faxing
Companies that have email will be at an advantage. They will be able to email a copy of the form to the user and have them fill out the appropriate information and then email the results back to the employer
For companies that don't have email, fax machines are also useful. They can simply fax the forms to the employees or applicants. The employees can then fill it out and then fax it or bring the form to the employer in person.
1.5.3 Personal MailBoxes
In most companies, employers and employees have their own personal mailboxes. By including both the address of the employee and the employer, it is easier for employees or users to transfer forms to the employer. In the event that the employer is out on a business trip, the applicants may simply drop the forms into the employer's mailbox to meet deadlines.
Policies and Procedure Guidelines Page 10 of 14
Section 1.6: Form Layout Effective date: March 6, 1997
Issued by: Approved by:
1.6 FORM LAYOUT
· Sheet Size
The forms should be designed on 8 1/2" x 11" paper with a carbon sheet on the back, so that the person filling out the form can keep a copy for him/herself. The sections of the forms should be placed on both sides of the paper to save paper. The information on the forms should not be crammed in a way that could cause important information to be left out or make the questions harder to read due to poor spacing or small lettering.
· Margins
The form should have half inch margins on all sides so that the wording won't be too close to the end of the page. This allows the user or reader to hold the paper without covering any wording on the form.
· Spacing
The amount of horizontal and vertical spacing is determined by the amount of headings and sub-headings, size and style of text and the amount of space left for fill in answers.
· Box Format
The form will follow a box format which will increase space because the information will go to each end of the page margin. It will have exceptional horizontal and vertical spacing to enable easier reading.
· Borders and Bolding
The different sections of the form will be divided by solid black lines. The headings and sub-headings will be bolded and larger than the question text in order to improve the visual appearance of each section of the form.
Policies and Procedure Guidelines Page 11 of 14
Section 1.6: Form Layout Effective date: March 6, 1997
Issued by: Approved by:
· Shading
Shading will also be used in the sections where no information is required to make it easier for the applicant to know what sections he/she needs to fill in. This would also be used to highlight sections that need to be filled in, but not by the applicant. For example, some forms have sections that specify "for office use only" meaning that they don't have to fill out any information in that section.
· Answer Spaces
There will be spaces indicated on the right side of the section that will be aligned with one another. They will be used for filling in information that contains only numbers or a letter code. In cases where the answer to a question requires several lines, there will be more than enough space available to answer it appropriately. The information must therefore be clear and widely spaced so that it is very easy to fill out the forms.
Policies and Procedure Guidelines Page 12 of 14
Section 1.7: Breakdown of Form Arrangements Effective date: March 6, 1997
Issued by: Approved by:
1.7 BREAKDOWN OF FORM ARRANGEMENTS
The form should be set up in a way to make it easier for the applicants to fill in. The sections of the forms will be organized so that all the related parts of the form are placed one after the other to avoid reading back through the form. The form will have headings and sub-heading which define which section of the form you are filling out and help you understand what kind of information you should fill in.
1.7.1 Beginning
The personal information will be placed at the beginning of the form. This will contain things such as the applicant's name, address, phone number, and date of birth.
1.7.2 Body
This will contain the basic purpose of the form. It will have the questions that are needed to complete the form, depending on what kind of form it is. For example, if it were an application for a job, the beginning would include the items mentioned above in the beginning section. The body would contain previous education, previous employment, the position you wish to apply for, and your references.
1.7.3 Ending
This section of the form will have spaces to fill in the address of the person you wish to send it to, along with your own address. It will have several spaces in case you wish to send it to more than one person.
Policies and Procedure Guidelines Page 13 of 14
Section 1.8: Revising an Existing Form Effective date: March 6, 1997
Issued by: Approved by:
1.8 REVISING AN EXISTING FORM
There are many things to consider when revising a form:
· Previous forms will be considered obsolete.
· Previous editions of forms can be used until there are no more left; companies can use up the older forms before presenting a new one.
· Existing stocks which include the form number and edition date can be used. The now obsolete forms will be replaced by new ones, but the form numbers and edition dates will be carried over onto the new forms.
Policies and Procedure Guidelines Page 14 of 14
Section 1.9: Replacing Existing Forms with Different Numbers Effective date: March 6, 1997
Issued by: Approved by:
1.9 REPLACING EXISTING FORMS WITH DIFFERENT NUMBERS
· You first have to replace the form numbers and edition dates which are now considered to be obsolete.
· Instead of replacing the number and dates right away, you can wait until there are no more forms left and then make the changes to the new forms.
f:\12000 essays\sciences (985)\Computer\Polymorphic and Cloning Computer Viruses.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Polymorphic &
Cloning Computer
Viruses
The generation of today is growing up in a fast-growing, high-tech world which allows us to do the impossibilities of yesterday. With the help of modern telecommunications and the rapid growth of the personal computer in the average household we are able to talk to and share information with people from all sides of the globe. However, this vast amount of information transport has opened the doors for the computer "virus" of the future to flourish. As time passes on, so-called "viruses" are becoming more and more adaptive and dangerous. No longer are viruses merely a rarity among computer users and no longer are they mere nuisances. Since many people depend on the data in their computer every day to make a living, the risk of catastrophe has increased tenfold. The people who create computer viruses are now becoming much more adept at making them harder to detect and eliminate. These so-called "polymorphic" viruses are able to clone themselves and change themselves as they need to avoid detection. This form of "smart viruses" allows the virus to have a form of artificial intelligence. To understand the way a computer virus works and spreads, first one must understand some basics about computers, specifically pertaining to the way it stores data. Because of the severity of the damage that these viruses may cause, it is important to understand how anti-virus programs go about detecting them and how the virus itself adapts to meet the ever changing conditions of a computer.
In much the same way as animals, computer viruses live in complex environments. In this case, the computer acts as a form of ecosystem in which the virus functions. In order for someone to adequately understand how and why the virus adapts itself, it must first be shown how the environment is constantly changing and how the virus can interact and deal with these changes. There are many forms of computers in the world; however, for simplicity's sake, this paper will focus on the most common form of personal computers, the 80x86, better known as an IBM compatible machine. The computer itself is run by a special piece of electronics known as a microprocessor. This acts as the brains of the computer ecosystem and could be said to be at the top of the food chain. A computer's primary function is to hold and manipulate data and that is where a virus comes into play. Data itself is stored in the computer via memory. There are two general categories for all memory: random access memory (RAM) and physical memory (hard and floppy diskettes). In either of those types of memory can a virus reside. RAM is by nature temporary; every time the computer is reset the RAM is erased. Physical memory, however, is fairly permanent. A piece of information, data, file, program, or virus placed here will still be around in the event that the computer is turned off.
Within this complex environment exist computer viruses. There is no exact and concrete definition for a computer virus, but over time some commonly accepted facts have come to be associated with them. All viruses are programs or pieces of programs that reside in some form of memory. They all were created by a person with the explicit intent of being a virus. For example, a bug (or error) in a program, while perhaps dangerous, is not considered a computer virus due to the fact that it was created by accident by the programmers of the software. Therefore, viruses are not created by accident. They can, however, be contracted and passed along by accident. In fact it may be weeks until a person is even aware that their computer has a virus. All viruses try to spread themselves in some way. Some viruses simply copy clones of themselves all over the hard drive. These are referred to as cloning viruses. They can be very destructive and spread quickly and easily throughout the computer system.
To illustrate the way a standard cloning virus would adapt to its surroundings, a theoretical example will be used. One day a teacher decides to use his/her classroom Macintosh's Netscape to download some material on photosynthesis. Included in that material is a movie file which illustrates the process. However, the teacher is not aware that the movie file is infected with a computer virus. The virus is a section of binary code attached to the end of the movie file that will execute its programmed operations whenever the file is accessed. Then, the teacher plays the movie. As the movie is being played, the virus makes a clone of itself in every file inside the system folder of that computer. The teacher shuts down the computer normally, but the next day when it is booted up all of the colors are changed to black and white. The explanation is that the virus has been programmed to copy itself into all of the files that the computer accesses in a day. Thus, when the computer reboots, the Macintosh operating system looks into the system folder at a file to see how many colors to use. The virus notices the system access this file and immediately copies itself into it and changes the number of colors to two. Thus the virus has detected a change in the files that are opened on the computer and adapted itself by placing a clone of itself into the color configuration files.
Another prime way that viruses are spread throughout computers extremely rapidly is via LANs (Local Area Networks) such as the one setup at Lincoln that connects all of the classroom Macs together. A LAN is a group of computers linked together with very fast and high capacity cables. Below is an illustrated example of a network of computers:
Since all of the computers on a network are connected together already, the transportation of a virus is made even easier. When the "color" virus from the above example detects that the computer is using the network to copy files across the school, it automatically clones a copy of itself into every file that is transported across the network. When it reaches the new computer it waits until it has been shut off then turned back on again to copy itself into the color configuration files and change the display to black and white. If this computer should then log on to the network, the virus will transport again. In this manner network capable viruses can very quickly adapt and cripple an entire corporation or office building.
Due to the severity of some viruses, people have devised methods of detecting and eradicating them. The anti-viral programs will scan the entire hard drive looking for evidence that viruses may have infected it. These programs must be told very specifically what to look for on the hard drive. There are two main methods of detecting viruses on a computer. The first is to compare all of the files on the hard disk against known types of viruses. While this method is very precise, it can be rendered totally useless when dealing with a new and previously unknown virus. The other method deals with the way in which a common cloning virus adapts. All that a cloning virus really does is look at what operations the computer is executing and react and adapt to them by making more copies of itself. This is the serious flaw with cloning viruses: all the copies of itself look the same. Basically all data in a computer is stored in a byte structure format. These bytes, which are analogous to symbols, occur in specific orders and lengths. Each of the cloned viruses has the same order and length of byte structure. All that the anti-virus program has to do is scan the hard drive for byte structures that are duplicated several times and delete them. This method is an excellent way of dealing with the adaptive and reproducing format of cloning viruses. The disadvantage is that it can produce a number of false alarms, such as when a user has two copies of the same file.
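A hedged sketch of the first method described above, comparing file bytes against a known virus pattern, is shown below in C. It is a toy scanner meant only to show the essence of signature matching; real products use far more elaborate techniques:

    #include <stddef.h>
    #include <string.h>

    /* Return nonzero if a known virus byte pattern occurs anywhere in
       the given buffer: the essence of signature scanning, comparing a
       file's bytes against a catalogue of known virus byte structures.  */
    static int contains_signature(const unsigned char *data, size_t len,
                                  const unsigned char *sig, size_t sig_len)
    {
        if (sig_len == 0 || sig_len > len)
            return 0;
        for (size_t i = 0; i + sig_len <= len; i++)
            if (memcmp(data + i, sig, sig_len) == 0)
                return 1;
        return 0;
    }
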
Thereby, a simple cloning virus's main flaw is exposed. However, the (sick minded) people who create these viruses have found a way to get around this by creating a new and even more adaptive virus called the polymorphic virus. Polymorphic viruses were created with the explicit intent of being able to adapt and reproduce in ways other than simple cloning. These viruses contain a form of artificial intelligence. While this makes them by no means as smart or adaptive as a human being, it does allow them to avoid conventional means of detection. A conventional anti-virus program searching for cloned viruses will not recognize files with different byte structures as viruses. A good analogy for a polymorphic virus would be a chameleon. The chameleon is able to change its outward appearance but not the fact that it is a chameleon. A polymorphic virus's main goal is just like that of any other virus: to reproduce itself and complete some programmed task (like deleting files or changing the colors of the monitor); this fact is never changed. However, it is the way in which they reproduce that makes them different. A polymorphic virus does more to adapt than just make copies of itself into other files. In fact, it does not really even clone its physical byte structure. Instead it creates other programs with different byte structures that attempt to perform the same task. In a sense, polymorphic viruses are smart enough to evolve by writing new programs on the fly. Because they all have different byte structures, they pass undetected through conventional byte comparison anti-viral techniques. Not only are polymorphic viruses smart enough to react to their environment by adaptation, but they are able to do it in a systematic way that will prevent their future detection and allow them to take on a new life of their own.
Computer viruses are extremely dangerous programs that will adapt themselves to the ever changing environment of memory by making copies of themselves. Cloning viruses create exact copies of themselves and attach to other files on the hard drive in an attempt to survive detection. Polymorphic viruses are able to change their actual appearance in memory and copy themselves in much the same way that a chameleon can change colors to avoid a predator. It is not only the destructive nature of computer viruses that makes them so dangerous in today's society of telecommunications, but also their ability to adapt themselves to their surroundings and react in ways that allow them to proceed undetected to wreak more havoc on personal computer users across the globe.
Bibliography
Rizzello, Michael. Computer Viruses. Internet. http://business.yorku.ca
/mgts4710/rizello/viruses.htm
Solomon, Dr. Alan. A Guide to Viruses. Internet. http://dbweb.agora.stm.it/
webforum/virus/viruinfo.htm
Tippett, Peter S. Alive! Internet. http://www.bocklabs.wisc.edu/~janda/alive10.html. 1995.
"Virus (computer)," Microsoft (R) Encarta. Copyright (c) 1993 Microsoft Corporation.
Copyright (c) 1993 Funk & Wagnall's Corporation
Yetiser, Tarkan. Polymorphic Viruses. VDS Advanced Research Group. Baltimore, 1993.
f:\12000 essays\sciences (985)\Computer\Pornography on the Internet Freedom of Speech or Dangerous .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Internet Pornography: Freedom of Press or Dangerous Influence?
The topic of pornography is controversial many times because of the various definitions which each have different contexts. Is it nudity, sexual intercourse, art, or all of these? Is it magazines, videos, or pictures? For the purposes of this paper, pornography will be defined as any material that depicts erotic behavior and is intended to cause sexual excitement. With all of the arguments presented in this paper, it seems only a vague definition of this type can be applicable to all views on the subject. Pornography on the Internet has brought about difficulties pertaining to censorship. All of the arguments in this paper can be divided into one of two categories: those whose aim is to allow for an uncensored Internet, and those who wish to completely eliminate pornography from the Internet all together.
All arguments for an uncensored Internet cite the basic rights of free speech and press. While the arguments in this paper are international, almost every one of them cites the First Amendment of the United States. In many of the papers it is implied that the United States sets precedent for the rest of the world as far as laws governing the global world of the Internet. Paul F. Burton, an Information Science professor and researcher, gives many statistics showing that the presence of pornography on the Internet is not necessarily a bad thing. He gives one example that shows that "47% of the 11,000" most popular searches on the Internet are targeted at pornography. This fact shows that pornography has given the Internet approximately half of its clientele (2). Without this, the Internet would hardly be the global market that it is today. Most users on the Internet are not there merely for pornography either; it is just a part-time activity while not attending to serious matters.
At another point in his paper, Burton cites reasons why the Internet is treated differently than other forms of media. The privacy of accessibility is a factor that allows many people to explore pornography without the embarrassment of having to go to a store and buy it. Another factor is that anybody, including children of unwatchful parents, may access the material. However, Burton believes that these pornographic web sites must be treated the same way as pornographic magazines or videos.
One fear of many people is that children will happen across pornography, but as Burton writes in his paper, the odds of someone not looking for pornography and finding it are "worse than 70,000:1" (Holderness in Burton 2). Even if a child were to accidentally find an adult site, he or she would most likely see a "cover page" (See Figure 1). These cover pages, found on approximately 70% of adult sites, all have a lot of legal jargon that, summed up, says, "if you are not of age, leave." This cover page will not stop children in search of pornography because all that is required is a click on an "enter" button and one can access the site. Adult verification systems, such as Adult Check and Adult Pass, have been very effective in governing access to these sites, but with only 11% of adult sites having a verification of this nature, this system does not seem realistic. Another method of controlling access is the use of a credit card number to verify age. This method opens many doors for criminals wishing to obtain these numbers for unlawful use.
According to Yaman Akdeniz, a Ph.D. researcher at the Centre for Criminal Justice Studies at the University of Leeds, pornography is not as widespread as some governments would have us believe. Of a total of 14,000 Usenet discussion groups (places where messages are posted about specific topics), only 200 are sexually related. Furthermore, approximately half of those are related to serious sexual topics, such as abuse or rape recovery groups. Akdeniz also makes the point that "[t]he Internet is a complex, anarchic, and multi-national environment where old concepts of regulation...may not be easily applicable..." (15).
This makes a very interesting case about the general nature of the Internet. It is the first electronic media source that is entirely global, and although some countries will try and have tried to regulate it, there is no way to mesh what every country does to control the Internet. Germany made an attempt at regulating the Internet within its borders; however, the aim was not only to ban pornography but also to ban anti-Semitic newsgroups and web sites. Prodigy, a global network server, helped the German government by blocking these Web sites. When Prodigy was pressured by groups like the American Civil Liberties Union, Prodigy stopped blocking these Web sites, and there was nothing Germany could do.
This just shows the "power" that the United States holds over the Internet. Two reasons account for this "power." First, 60% of all the information comes from the U.S., and secondly, the U.S. has set up most global laws and regulations. Almost every article pertaining to the Internet freedom or censorship cites the U.S. and bases arguments on the First Amendment. With this precedent setting responsibility, one must look at what is going on in the Supreme Court with regards to the Internet.
Peter H. Lewis, a reporter for the New York Times, has been covering the Computer Decency Act since the passing of the law. The Computer Decency Act, part of the Telecommunications Act, was passed on February 8, 1996. The main purpose of this section was to halt the "flow of pornography and other objectionable material on the Internet..." (1). This section, however, was declared unconstitutional by a federal judicial panel in June 1996. This overturn caused an uproar among the anti-pornography groups in the United States. The case will be heard once again in June 1997 to determine whether First Amendment rights were being violated. Judge Stewart R. Dalzell, a member of the panel, stated that, "Just as the strength of the Internet is chaos, so the strength of our liberty depends upon the chaos and cacophony of the unfettered speech the First Amendment protects." (Lewis, Judges 1). According to Lewis' next article, no one will be prosecuted under the Internet section of this law until its constitutionality is determined. So as of right now, there is no fear of prosecution for pornographers (Lewis, Federal 1).
Maria Semineno, a writer for PCWeek, reported on free speech advocates' reactions to the overturning of the CDA. Jerry Berman, executive director for the Center for Democracy and Technology, stated that, "[i]t is very clear that Congress is not going to let this alone...." Berman made this statement alluding to the makeup of the Supreme Court and what will happen in June of 1997 when the decision is reevaluated. It is argued that the Supreme Court is much divided on the subject of free speech, and therefore, the decision in 1997 will depend upon the panel presiding. When the decision is made it will leave one side of the debate triumphant, and the other fighting for its beliefs.
Those who believe that pornography should be wiped off the Internet entirely cite many different reasons. One highly recognized group, the Family Research Council, has determined that pornography on the Internet is harmful to all individuals and concludes that the only way to stop this is to ban pornography, in all its forms, on the Internet. The FRC categorizes pornography as follows:
...images of soft-core nudity, hard-core sex acts, anal sex, bestiality &dominion, sado- masochism (including actual torture and mutilation, usually of women, for sexual pleasure), scatological acts (defecating and urinating, usually on women, for sexual pleasure), fetishes, and child pornography. Additionally there is textual pornography including detailed stories of rape, mutilation, torture of women, sexual abuse of children, graphic incest, etc ("Computer" 1)
In addition to categorizing pornography, the FRC goes on to address questions pertaining to Internet pornography. One question asked is, "IS THE ON-LINE COMMUNITY AGAINST PROPOSALS FOR "DECENCY" ON THE INTERNET?" The answer provided was no: of the 20 million people on the Internet (an out-dated figure), only 2 percent opposed censorship. However, no citation for this figure was provided ("Computer" 2).
The FRC article then goes on to discuss the main arguments against banning pornography. The article poses the question of possible loss of works of art because of banning. It goes on to cite the "official" definition of obscenity by the Supreme Court, that any object having artistic, educational, or moral value shall not be censored. The article next discusses "technological fixes," such as SurfWatch and NetNanny, that could possibly control pornography from within the home. It gives three points against this method: children can use other computers, children know more about computers than most parents, and people who distribute pornography have no legal reasons not to target children with pornography ("Computer" 2). Cathleen A. Cleaver, head of legal studies for the FRC, backed the Communications Decency Act. When it was overturned, she stated her concerns that not only were the broader sections of the law overturned, "...but also the part that made it illegal to transmit pornography directly to specific children." (Lewis 2). With this section omitted, pornographers may lure children without fearing any repercussions from the law. Although this is the FRC's main concern, they are still fighting for a total ban of pornography on the Internet.
Dr. Victor B. Cline, a psychotherapist specializing in sexual addiction, argues that massive exposure, such as with the Internet, will cause irreparable damage to society. Cline states that pornography, for all intents and purposes, should be treated as a drug. Cline has treated 350 people with this sexual illness and reports that, "[o]nce involved in pornographic materials, they kept coming back for more." Cline suggests that with the availability of pornography reaching these great proportions, we can expect to see an increase in sexual deviance and sexual illness (Cline 4).
Cline next explains the steps an addict goes through on the way to becoming a sexual deviant. First, the person becomes addicted. Second, there is an "escalation" of the addiction in which the person becomes engulfed in pornography, even to the point of preferring masturbation to pornography over actual sexual contact. Third, there is a process of "desensitization" which allows acceptance of horrific sexual acts as the norm. Fourth is the "acting out sexually" phase, in which a person no longer achieves satisfaction from the pornography and, in turn, acts upon fantasies, usually based on pornography. Cline's main concern is that with this type of material so easy to obtain, more cases of sexual addiction will occur, not only among adults but also among children, who will be able to "start out" at an early age (Cline 4-5). This fear is substantiated by the number of easily accessible pornographic sites.
With all sides of this issue having their separate reasons to either keep or ban pornography, each makes its case with facts. Pornography is an issue on which it is difficult to take a side. People for pornography, or against censorship, state that there is no real reason for a ban. They argue that parental controls and other attempts to keep pornography from children are reason enough to allow pornography to remain. However, anti-pornography groups argue that not enough is being done to keep pornography from children, and furthermore, that pornography affects adults just as much as it does children. The issue of pornography is just as controversial as the abortion debate and very similar in many respects. Both sides have strong feelings about what definitions are used, what is morally correct, and the effects of pornography. There is no clear-cut answer; however, it is now up to the U.S. government to make a decision and set a precedent for the rest of the world.
Works Cited
Akdeniz, Yaman. "The Regulation of Pornography and Child Pornography on the Internet." Online. World Wide Web. http://137.205.240.103:80/elj/jilt/biogs/akdeniz.htm. 4 March 1997.
Burton, Paul F. "Content on the Internet: Free or Fettered?" (20 Feb. 1996). Online. World Wide Web. http://www.dis.strath.as.uk/people/paul/CIL96.html. 21 Feb. 1997.
"Computer Pornography Questions and Answers." (8 Nov. 1996). Online. World Wide Web. http://www.pff.org:80/townhall/FRC/infocus/if95k4pn.html. 8 Mar. 1997.
Lewis, Peter H. "Judge Turn Back Law Intended to Regulate Internet Decency." (13 June 1996). Online. America Online. The New York Times Archive. 12 March 1997.
---. "Federal Judge Block Enforcement of CDA." (16 Feb. 1996). Online. America Online. The New York Times Archive. 12 March 1997.
Semineno, Maria. "Free speech advocates: CDA fight might not end with Supreme Court." Online. World Wide Web. http://www.pcweek.com:80/news/0310/14ecda.html. 23 Feb. 1997.
f:\12000 essays\sciences (985)\Computer\POSItouch.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Convention and Group Sales
Sunday, April 06, 1997
POSitouch
The POSitouch system was conceived in 1982 by Ted and Bill Fuller, owners of the
Gregg's Restaurant chain. They were looking to increase the efficiency of their restaurants
through the use of computer technology. During their search they found systems, but none
met their total needs. That is why the Fullers created their own company, Restaurant Data
Concepts (RDC). RDC keeps developing better and more efficient equipment to be used in the food
service industry.
ADVANTAGES
1.) Timely information, and speeds operations.
2.) Tighter labor controls.
3.) No need to hire or pay a bookkeeper.
4.) Calculates food costs and menu mix.
5.) Tighter controls over orders taken. Cuts down on free meals waiters give out.
6.) Can order (via modem) and keep track of inventory.
7.) Built-in modem allows technical support via modem, and on-line access to reports available at any time, even historical reports.
8.) Sales trend analysis.
9.) Credit card authorization with draft capture.
10.) Easy to customize, to meet the needs of many different types of operations.
11.) Increased speed means increased turnover.
DISADVANTAGES
1.) People will become dependent on technology, so when it fails they will probably not be trained or prepared to be without it.
2.) Takes time to train people to work efficiently on POSitouch.
3.) POSitouch is expensive for the small business owner. The smallest system that they have installed cost under $10,000.
Overall, I feel that POSitouch is well worth the initial expense. It should be looked at as
an investment, saving time and money in all areas needing tight controls. This management tool
has been shown to cut labor and food costs in many food service establishments, not to mention
the speed of the system, which could easily increase turnover. There is one important point that
should be recognized by restaurants planning to utilize this system: be prepared for technology
to fail. If it fails, the managers and staff should be capable of staying open without the
POSitouch system. The thing that I like most about this system is that you can truly tell that it
was developed by people in the food service industry, due to its completeness.
f:\12000 essays\sciences (985)\Computer\Private Cable TV.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The times are a-changing...
How France, Germany and Sweden introduced private,
cable and satellite TV - a comparison over the past
10 years.
1. INTRODUCTION
Why have we chosen this subject?
Before starting to write about TV in Sweden, Germany and France, we
wanted to compare French, German and Swedish media. But given the
breadth of such an analysis, we decided to focus on the evolution
of TV broadcasting during the last 10 years.
The technical revolution that has taken place in this area since 1980
needs to be understood in order to follow and forecast what will
happen in the future, when multinational companies can take a look
at pan-European broadcasting. In this paper we try to take stock of
these changes. Furthermore, as we come from different countries and
now live in another one, we found it interesting to compare the
TV-broadcasting systems of the three countries (France, Germany and
Sweden).
While we were searching for data, we discovered the gap that exists
in cable coverage between France and the two other countries. What
are the main reasons for this delay? Are they political, financial or
cultural? We will try to answer these questions in our paper. But we
will first define the different technical terms that we
are going to focus on. Then we will describe the birth of private
channels, their regulations, laws and financing in the different
countries.
2. BASICS
In our paper you will find the following technical terms:
- terrestrial broadcasting: this is the basic technology used to
broadcast radio and TV. It is the use of radio frequencies that can
be received by a simple antenna. The problem with terrestrial
broadcasting is that you only have a few (up to a maximum of 7)
possible frequencies and that you need expensive transmitters every
100-150 km to cover an area.
Programmes which are broadcast terrestrially are e.g.: Swedish TV 1,
2 and 4; German ARD, ZDF, 3. Programme and some private channels in
urban areas; French TF 1, France 2 and France 3.
- cable TV: the reason why you only have a few frequencies with
terrestrial broadcasting is that terrestrial broadcasting is
subject to physical constraints (bandwidth), whereas broadcasting in
a cable is shielded/protected from outside influences. So you can
have more channels in the same bandwidth. For example: a cable
might carry 7 programmes picked up by antenna from terrestrial
transmitters plus an additional 25 satellite channels (maximum 30-35
different channels in one cable). Instead of connecting to an
antenna, cable households connect their TV sets to the cable network.
- satellite broadcasting: a satellite is a transmitter that is
positioned in space about 40,000 km from Earth. The advantage of
this technology is that it covers a wide area with only one
transmitter. Modern direct broadcasting satellites (DBS, e.g. Astra)
can be received by small (approx. 30 cm) and cheap (approx. 2,000 SKR)
"satellite dishes". To connect a TV set to the "dish" you also need
a device that converts the received satellite signals to signals
that can be used by a standard TV set.
In the beginning (the 80s) this technology needed huge and expensive
dishes and was only used to transmit signals to cable networks.
Newer technology is often cheaper than connecting a house to a
cable network. In east Germany the German PTT (Telekom) is competing
with its cable network against the cheap satellite dishes.
Most transponder signals on DBS Astra are booked by British (NBC
Super, MTV...) and German (RTL, SAT-1...) broadcasters. Satellites
can also be used for telephone connections, TV broadcasting or radio
broadcasting.
3. TV-BROADCASTING IN FRANCE
3.1 HISTORY
TO BE FILLED WITH THE BEGINNING (PUBLIC TV 1930S - 1984)
The first broadcasting tests happened in the late 30s, as in
Germany. It was only in 1945, after the Second World War, that an
Ordinance formalized the state monopoly of broadcasting, which was
assigned to Radiodiffusion de France. Radiodiffusion de France
then incorporated television in 1959 and became RTF (Radiodiffusion-
Television de France). Established as a public company accountable
to the Ministry of Information, RTF became an "Office" (ORTF), still
supervised by the government. The events that happened in France in
May 1968 then helped push the government to liberalize the medium.
The Ministry of Information was therefore abolished and in 1974
an Act divided the ORTF into seven different public companies which
formed the public broadcasting service: TF1, Antenne 2, FR3, Radio
France, TDF, SFP, INA.
Private channels emerged in France with Canal Plus, the encrypted pay
channel, in 1984. This terrestrial channel is owned by Havas. Canal
Plus has to broadcast a daily unencrypted programme lasting from 45
minutes to 6 hours; the average is three and a half hours per day.
1985 saw the birth of two new private channels, France 5 and TV6,
which were forbidden to broadcast the year after. Finally, in 1987,
they regained the right to broadcast under the respective names La
Cinq and M6. At this time there already existed five public
channels: TF1 (privatized since 1987), A2 (renamed France 2, a
generalist channel), FR3 (today called France 3, a national and
regional channel), TV 5 Europe (a European channel launched in 1983
that transmits by satellite programmes broadcast in French-speaking
countries) and RFO (which transmits radio and TV programmes to French
overseas territories and possessions). In May 1992, ARTE-La Sept,
the Franco-German channel, started to broadcast on the French and
German cable networks. Then, when the private French channel La Cinq
stopped broadcasting, ARTE was allowed to broadcast from 19h to 1h
in the morning on this available frequency. On 13 December 1994 a
new public channel appeared, "La Cinquieme", also called the
"channel of knowledge" (la chaîne du savoir), which broadcasts
on the same frequency as ARTE until 19h.
To summarise, today the French TV-broadcasters are:
public: France 2, France 3, Arte, La Cinquieme, RFO, TV 5
private: TF 1, M6, Canal+ (pay-tv)
3.2 CABLE/SATELLITE TV
Cable channels were launched in France in 1984, when 2% of the
households were cabled. This initiative came from Minister Mauroy,
who presented cable as "a massive, consistent and orderly solution
to satisfy multiple communication needs". In fact this cable plan
met the opposition of several parties. It represented very high
costs, and the state organization (DGT), assigned overall control of
the implementation of the new technology, antagonized the
manufacturers of cable equipment, who proved unable to produce what
was required within the agreed price and time. In 1986, the cable
plan was definitively abandoned. Around 10 private companies are now
responsible for promoting cable, for instance la Compagnie
Générale de Videocommunication, la Lyonnaise Communication,
Eurocable ...
There are 25 local channels and 13 French channels are broadcast;
cable now reaches 25.3% of French households, and the fee varies
from 115 SKR to 400 SKR depending on the number of channels you wish
to receive.
It costs a company a lot of money to roll out cable in France,
as it requires the use of expensive material such as optical fibre.
Because of this cost, the cable network is now installed for
collective housing instead of individual homes. Furthermore, this
installation can only be carried out at the will of the commune;
otherwise the cable company cannot obtain authorisation. The
commercial department of the cable company therefore has to convince
these communities.
France owns two direct-broadcasting satellites, TDF 1 and TDF 2, and
one telecommunications satellite, TELECOM 2A. Most of the programmes
distributed by satellite are in fact the ones you can get through
the cable.
3.3 LAWS AND REGULATIONS
The C.S.A. (Conseil Supérieur de l'Audiovisuel) is the authority
responsible for broadcasting regulation in France. It is composed
of 9 designated members:
- three chosen by the President of the Republic
- three chosen by the President of the Senate
- three chosen by the President of the National Assembly
As we can see, this institution is highly politicised.
It ensures respect for pluralist expression of ideas, for the French
language and culture, for free competition, and for the quality and
diversity of programmes. It also manages the allocation of
frequencies. It can intervene in the public as well as in the
private sector. It grants authorisations to operate cable networks
and satellite and terrestrial television; M6 and Canal Plus, for
instance, are allowed to broadcast for 10 years, after which they
have to renegotiate their broadcasting authorisation. Authorisations
for cable TV last 20 years and can be granted to companies or
"régies" on the proposal of locally elected officials. Furthermore,
French and foreign channels which want to broadcast on the cable
network need to sign a convention with the CSA. The implementation
of the network is then the responsibility of the commune.
The CSA also enforces rules, such as those on advertising. The
permitted advertising time is 12 minutes per hour. TF1, for
instance, exceeded this allowance by 81 seconds on one occasion and
by 94 seconds on another, and was therefore obliged to pay
2,800,000 FFr (4,000,000 SEK), which equals 16,000 FFr per second
(23,000 SEK).
It also regulates political intervention on the public channels
and enforces the "rule of three thirds". This regulation states
that in a political programme a channel should allot 1/3 of the time
to the government, 1/3 to the majority and 1/3 to the opposition.
3.4 FINANCING
4. TV-BROADCASTING IN GERMANY
4.1 HISTORY
The first TV experiments in Germany were made in the 1930s to
broadcast e.g. the Olympic Games. After World War II the forerunner
of the first German TV station, ARD, began broadcasting under allied
control in 1949 in northern Germany and Northrhine-Westfalia under
the responsibility of the NWDR-Laenderanstalt. The ARD is a
broadcaster with only organizing functions for the "Laender"-based
production facilities (Laenderanstalten, e.g. NDR, WDR...). Every
part of the programme that is broadcast under the label ARD is
produced under the responsibility of a state-based station. The
second German broadcaster, ZDF, is different from ARD. The ZDF
produces TV on its own, but the station is indirectly controlled by
a conference of the states. There are also several regional "third"
channels bound to the culture of one or more states which are only
broadcast within those states and are produced by the
"Laenderanstalten".
Private TV programmes were introduced in 1984. You will find more
about the introduction on the following page. There were 15 Germany-
based TV broadcasters in 1994.
To summarise, today the Germany-based TV-broadcasters are:
public: ARD, ZDF, Arte (with F), 3-Sat (with AU + CH), DW-TV (foreign service)
private (general interest): RTL, Sat 1, Pro7
private (special interest): Kabel 1, Vox, Viva, RTL 2, DSF, n-tv
private (pay TV): Premiere
Definitions on the next page!
4.2 CABLE/SATELLITE TV
The German PTT was one of the first PTTs in Europe to develop
standards for cabling private households. But in the late 70s the
Social Democrats (SPD) blocked the PTT because the Bonn government
was afraid that cable technology would lead to private TV. After
the change of government in 1982, the new conservative government
(CDU) and the minister for post and telecommunication, Schwarz-
Schilling, invested in the new cable technology.
The first private TV broadcasters (SAT-1 and RTLplus) got their
license for a cable trial project in Ludwigshafen in 1984. After
the start of the Ludwigshafen project (estimated to last 3 years),
the states with a conservative majority allowed the PTT to
broadcast the trial programmes from the trial projects in their
regular cable networks. This was the beginning of private TV in
Germany, and a trial project became regular service within a few
months. After a decision from the highest court in 1986, commercial
TV was legal. The Social Democrats (SPD) changed their policy
against private TV in the late 80s and gave licenses to a few of the
most important private broadcasters in states with an SPD majority.
Now Koeln (Cologne) in the state of Northrhine-Westfalia (SPD) is
one of the most important places for German media (RTL, Viva-TV,
Vox), alongside the traditional "media capitals" Hamburg and
Muenchen.
After unification in 1990 the PTT Telekom invested in cable networks
in the former GDR. But in 1994 only 14 percent of all east-German
households were connected to a cable network, and even terrestrial
broadcasting still has not reached the "western" standard. For
eastern Germany satellite TV is therefore very important. For this
reason the German public broadcasters ARD and ZDF decided in 1992 to
broadcast via the ASTRA satellite to reach the eastern population.
In 1993 the PTT signed a contract with the Luxemburg-based ASTRA
enterprise to become an associate member of this commercial
organization. Since 1995 the Telekom has been a private company, and
there are plans to provide technology for digital and pay TV in the
future.
17% of all east-German households and 11% of all west-German
households have a satellite dish (1993). More than 90% of the German
satellite dishes are pointed at the Astra satellite. 48% of western
and 14% of eastern households are connected to a cable network.
In some urban areas free terrestrial frequencies are licensed to a
few private channels (RTL, Sat 1, Pro 7).
Local TV is very new in Germany; the first license was given by the
states Berlin and Brandenburg to "1A-Brandenburg" in 1993 for the
towns Potsdam and Berlin. There are also some state-financed
open-channel projects in several cable networks.
4.3 LAWS AND REGULATIONS
Among the three countries we compare, Germany is the only country
with a federal system. Media in general are subject to rules and
laws made by the several decentralized state governments within the
Federal Republic of Germany. The public broadcasters, too, are ruled
by the several states (Laender), and the private channels get their
licenses from the states.
The reason for the decentralized broadcasting system in Germany is
the German "Grundgesetz", the Basic Law, which guarantees the
"cultural sovereignty" of the states. This Basic Law protects the
media from possible political interests a central (Bonn or Berlin
based) government might have.
Even the fees for the public broadcasters are fixed by decisions of
a conference of the federal states. The only exception now is the
Deutsche Welle (DW-TV), a broadcaster for foreign countries which is
used as an "ambassador" for German culture and is under special
government regulation.
In the 80s all German states drafted private-media laws. Now every
state has the legal possibility to give licenses to commercial TV
stations. The supervisory body for licenses in each state is called
the "Landesmedienanstalt". Because of the decentralised German
system, all laws and regulations concerning commercial broadcasters
are connected to the "cultural sovereignty" of the states. To avoid
a private broadcaster having to license its programme in each of the
16 German states, all states signed a contract (Staatsvertrag). This
contract guarantees e.g. that each state will accept the license
given by a Landesmedienanstalt in a single German state. The
contract also fixes regulations about ownership and programme
content, and gives each "Landesmedienanstalt" the possibility to
challenge decisions made in another state.
Each Landesmedienanstalt is also responsible for deciding which
programmes are allowed to be broadcast in the PTT cable network in
its state (normally: 1. stations licensed within the state, 2.
stations licensed in other states, 3. foreign stations).
Another important assignment of the Landesmedienanstalt is to
monitor the German media ownership regulations. There are specific
ownership quotas which have to be controlled. The strongest
regulation is that no one is allowed to hold more than 50% of a
broadcaster.
Another important mechanism is the classification of a channel:
channels are classified as "special interest" (only one topic, e.g.
sport, movies), "general interest" (with information/news) or
"pay TV".
The most important German media investors are Bertelsmann (RTL,
Premiere) and the Kirch Group (Sat 1, Kabel 1, Pro 7). Both groups
are accused of violating the ownership and monopoly law, which will
be renewed within this year.
Because of the relatively liberal licensing law, in 1994 more than
10 new entrepreneurs announced that they would apply for a German
TV license (e.g. Disney).
5. SWEDEN
5.1 HISTORY
Unlike Germany and France, which started with experimental TV
broadcasting in the late 30s, Sweden launched its first channel in
1956. But as in France and Germany, the state had a monopoly on
broadcasting. The first Swedish channel was Channel 1; the second
channel (TV 2) was launched in 1969. Since 1987 the two public
television channels have been organized in such a way that TV 1 is
based on programme production in Stockholm and TV 2 on production
in ten TV districts in the provinces.
The first two private Swedish channels were introduced in Sweden in
1987 by satellite and cable. TV 3 and Filmnet (pay TV) are Swedish-
owned but were not allowed and licensed to broadcast on terrestrial
frequencies, so they transmit via satellite and cable. In 1989 the
third satellite broadcaster, the Nordic Channel, was launched and
two more pay-TV channels, TV 1000 and SF-Succé, were introduced to
the market. TV 1000 and Succé merged two years later. The first
private channel licensed to transmit terrestrially within Sweden was
TV 4 in 1991.
To summarise, today the Swedish TV-broadcasters are :
public: TV 1 private : TV 3
TV 2 TV 4
TV 5
Nordic (pay-tv)
TV 1000 (pay-tv)
5.2 CABLE AND SAT
The construction of cable networks began in 1984. This undertaking
was supposed to create 3,000 jobs per year for 7 years and was a
means to protect the telephone monopoly. Now Sweden is among the
European countries with the most cable subscribers (B, NL, CH). Up
to 50% of all households in Sweden have access to cable and 7% own
a satellite dish.
As in France, the cable networks gave local stations a chance.
Advertising is not allowed for these local stations, so they lack
money and often broadcast only a few hours a day. Local TV is
provided in about 30 towns and can be seen by 16% of all Swedes
(1993).
Satellite installation was born in the middle of the 1970s through
an agreement among the five Nordic countries to launch NORDSAT.
This satellite would reinforce the cooperation between these
countries and also help to promote Nordic culture. In fact this
project died and Tele-X was launched by Sweden and Norway, then
Finland joined the project. Nowadays 60% of Swedish households have
access to the satellite channels.
5.3 LAWS AND REGULATIONS
-cable transmission legislation 1992
In Sweden, the Radio Act and the Enabling Agreement between the
broadcasting companies and the State set broadcasting policy. The
State exercises no control over programmes prior to broadcasting.
However, a Broadcasting Council is empowered to raise objections to
specific programmes.
The Cable Law
-The two Swedish public channels are financed by a license fee.
6. CONCLUSION
In the times of public TV the few possible frequencies for
terrestrial broadcasting were used by the very few public channels
in each country. These channels were under control of the state and
not connected to the financial interests of owners or investors.
With the beginning of the 80s the invention of cable TV made
broadcasting of up to 30 channels possible. Our governments had to
face the demand for TV licenses and also had to invest in cable
infrastructure. In the late 80s new direct broadcasting satellites
gave the same number of channels to households in less developed
regions.
One thing we found out and can now state as a major fact is that
there is hardly any cable infrastructure in France and only a few
commercial channels (compared to the 57 million inhabitants). The
market seems to be influenced by the failure of the state to provide
cable access. For reasons we cannot evaluate from Sweden in a few
weeks, the "sleeping beauty" France managed not to develop a
cable network.
But we can compare the facts for all three countries and conclude:
-dual system in all 3 countries (public and private tv since mid
80s)
-TV is important in all countries: 97% (see chart)
-pay tv is introduced in all countries
7. QUESTIONS TO THE CLASS
-maybe there is no demand for cable in France?
-will the public channels survive?
-we only evaluated quantity and historical information and facts-
what about quality?
f:\12000 essays\sciences (985)\Computer\Procedures Parameters & SubPrograms.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Procedures, Parameters & Sub-Programs
In any modern programming language, procedures play a vital role in the construction of any new software. These days, procedures are used instead of the old constructs of GOTO and GOSUB, which have since become obsolete.
Procedures provide a number of important features for the modern software engineer:-
Programs are easier to write. Procedures save a large amount of time during software development as the programmer only needs to code a procedure once, but can use it a number of times. Procedures are especially useful in recursive algorithms where the same piece of code has to be executed over and over again. The use of procedures allows a large and complex program to be broken up into a number of much smaller parts, each accomplished by a procedure. Procedures also provide a form of abstraction as all the programmer has to do is know how to call a procedure and what it does, not how it accomplishes the task.
Programs are easier to read. Procedures help to make programs shorter, and thus easier to read, by replacing long sequences of statements with one simple procedure call. By choosing good procedure names, even the names of the procedures help to document the program and make it easier to understand.
Programs are easier to modify. When repeated actions are replaced by one procedure call, it becomes much easier to modify the code at a later stage, and also correct any errors. By building up the program in a modular fashion via procedures it becomes much easier to update and replace sections of the program at a later date, if all the code for the specific section is in a particular module.
Programs take less time to compile. Replacing a sequence of statements with one simple procedure call usually reduces the compilation time of the program, so long as the program contains more than one reference to the procedure!
Object programs require less memory. Procedures reduce the memory consumption of the program in two ways. Firstly they reduce code duplication as the code only needs to be stored once, but the procedure can be called many times. Secondly, procedures allow more efficient storage of data, because storage for a procedure's variables is allocated when the procedure is called and deallocated when it returns.
We can divide procedures into two groups:-
Function procedures are procedures which compute a single value and whose calls appear in expressions.
For example, the procedure ABS is a function procedure: when given a number x, ABS computes the absolute value of x; a call of ABS appears in an expression, representing the value that ABS computes.
Proper procedures are procedures whose calls are statements.
For example, the procedure INC is a proper procedure. A call of INC is a statement; executing INC changes the value stored in a variable.
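To make the distinction concrete, here is a minimal illustrative sketch in Modula-2 (the module name, variable names and use of the classic InOut library module are my own assumptions for illustration; library names vary between compilers). ABS is called inside an expression, while INC is called as a statement on its own:

    MODULE CallKinds;
    FROM InOut IMPORT WriteInt, WriteLn;  (* assumed I/O library module *)

    VAR n, magnitude: INTEGER;

    BEGIN
      n := -7;
      magnitude := ABS(n) + 1;  (* function procedure: the call ABS(n) appears in an expression *)
      INC(n);                   (* proper procedure: the call is itself a statement; n becomes -6 *)
      WriteInt(magnitude, 4);   (* prints 8 *)
      WriteLn
    END CallKinds.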
Procedures have only one real disadvantage: Executing a procedure requires extra time because of the extra work that must be done both when the procedure is called, and when it returns.
Most of the time however, the advantages of using procedures heavily outweigh this minor disadvantage.
Most procedures depend on data that varies from one call to the next, and for this reason, Modula-2 allows a procedure heading to include a list of identifiers that represent variables or expressions to supply when calling the procedure. The programmer can use these identifiers, known as formal parameters, in the body of the procedure in the same fashion as ordinary variables.
A call of a procedure with parameters must include a list of actual parameters. The number of actual parameters must be the same as the number of formal parameters. Correspondence between the actual and formal parameters is by position, so the first actual parameter corresponds to the first formal parameter, the second actual parameter corresponds to the second formal parameter, and so on. The type of each actual parameter must match the type of the corresponding formal parameter.
Modula-2 provides two kinds of formal parameters:-
Variable parameters. In a procedure heading, if the reserved word VAR precedes a formal parameter, then it is a variable parameter. Any changes made to a variable parameter within the body of the procedure also affect the corresponding actual parameter in the main body of the program.
Value parameters. If the reserved word VAR does not precede a formal parameter then it is a value parameter. If a formal parameter is a value parameter, the corresponding actual parameter is protected from change, no matter what changes are made to the corresponding parameter in the procedure.
To sum up, variable parameters allow information to flow both into and out of a procedure, whereas value parameters are one-way and only allow information into a procedure.
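As a rough sketch of how the two kinds of parameter behave (again purely illustrative and not drawn from the text; the module, procedure and variable names are invented, and the InOut library is assumed), the procedure below takes one variable parameter and one value parameter. Only the change to the variable parameter survives the call, and the actual parameters correspond to the formal parameters by position:

    MODULE ParamKinds;
    FROM InOut IMPORT WriteInt, WriteLn;  (* assumed I/O library; names vary by compiler *)

    VAR a, b: INTEGER;

    PROCEDURE Demo(VAR x: INTEGER; y: INTEGER);
    BEGIN
      x := x + 1;  (* variable parameter: this change reaches the caller's actual parameter *)
      y := y + 1   (* value parameter: this change stays local to the procedure *)
    END Demo;

    BEGIN
      a := 10;
      b := 20;
      Demo(a, b);      (* actual parameters match formals by position: a -> x, b -> y *)
      WriteInt(a, 4);  (* prints 11 *)
      WriteInt(b, 4);  (* prints 20, unchanged *)
      WriteLn
    END ParamKinds.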
Most Modula-2 systems allow a program to "call" a program module as if it were a procedure. We call a module used in this way a subprogram module or just a subprogram. The commands for calling another program are not part of Modula-2 itself, but are provided by a procedure in a library module. The command used in most Modula-2 systems for calling a subprogram is CALL, and a number of parameters are usually passed along with this procedure so as to allow the two programs to communicate with each other; however, there is no way to supply arguments to the subprogram itself. The parameters passed only indicate things like whether the subprogram executed correctly and did not terminate early because of an error.
The primary reason for using subprograms is to reduce the amount of memory required to execute a program. If a program is too large to fit into memory, the programmer can often identify one or more modules, that need not exist simultaneously. The main module can then call these modules as subprograms when needed. Once a subprogram has completed execution, it returns control to the main program, which can then call another sub program. All subprograms share the same area of memory, and because only one is resident at a time, the memory requirements of the overall program are greatly reduced.
f:\12000 essays\sciences (985)\Computer\PROJECT MANAGEMENT IN COMPUTER.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Overview: The purpose of this document is to identify a proposal which will allow Midnight Auto Supply to be upgraded with a state-of-the-art computer system. Midnight Auto Supply currently has great potential for capturing a large percentage of the auto part supply business in the St. Louis metropolitan area, and believes the business will benefit from automating its operation. Midnight Auto Supply is in need of a computer system that will handle all of its current daily management activities in a faster and more efficient manner. Presently, Midnight Auto Supply is operating out of three locations; the main store and headquarters is located in Manchester, Missouri, and the other two locations are in St. Charles, Missouri and Afton, Missouri. After implementing this new computer system, Midnight Auto Supply will have the capability to successfully compete with or surpass all of its current competitors in the St. Louis metropolitan area. My company, ABC Software Solution, proposes that an A1 system be developed which will allow all three stores to become fully automated. The A1 system will grant Midnight Auto Supply the ability to keep all of its daily records and activities in real-time mode, along with ensuring that all daily management activities are produced in a more effective and efficient manner. The A1 system will also accomplish the following tasks:
1. Keep an accurate status of inventory, by part number.
2. Determine the location of all parts (Manchester, St. Charles and Afton locations).
3. Maintain a list and cross-reference by manufacturer name, part class and part number.
4. Generate purchase orders.
5. Perform all payroll functions, which includes issuing paychecks and preparing address labels for mailing checks.
6. Maintain all accounts payable information.
7. Maintain all accounts receivable data.
Problem: Midnight Auto Supply is a growing company in the auto supply industry which consists of three stores located in the St. Louis metropolitan area. Midnight Auto Supply currently does all of its reports, purchases, accounts receivable, accounts payable and payroll functions manually. Midnight Auto Supply is constantly having problems keeping track of its incoming parts. Midnight Auto Supply's accounts receivable and accounts payable department is having a very difficult time keeping accurate records manually and expeditiously. Midnight Auto Supply currently has no idea what its inventory is composed of at each of its three store locations. Midnight Auto Supply is a small company with a limited budget that is growing fast, and it is eager to rectify its existing problems through the use of automation.
Solution: After reading and analyzing the project information sheet, ABC Software Solution recommends the following: using the Access Database Management System to build the database (see Overview), all three stores will be linked via the Internet. The database will be backed up every fifteen minutes. Each store will have a dedicated T1 telephone line connected to the client/server. The Access Database Management package will be used to allow for real-time processing, which will automatically update the database. All payroll functions will be done at the headquarters store in Manchester. Accounts payable and accounts receivable transactions will also be prepared at headquarters. Each store will have its own personal computer, which will serve as a client/server and workstation, along with two hand-held scanners connected to the personal computer and the workstation. The hand-held scanners will be used to scan in inventory at each store, and all purchase and return transactions will be scanned into the computer. ABC Software Solution recommends setting up and using a web page via the Internet, so that prospective customers can place orders 24 hours a day, 7 days a week. The World Wide Web (WWW) will also give Midnight Auto Supply the capability to contact its suppliers via the Internet to place orders. ABC Software Solution will install in each store one NEC 9624 Pentium Pro as its client/server, with a 200 MHz processor, a 15" Multisync XV15+ monitor, a 4.0 gigabyte hard drive, a NEC standard keyboard with hot keys to access the database immediately, a Hewlett Packard Deskjet 820Cse printer, and one workstation with hot keys. Our company chose the Access database software to build the database for all support (see Overview) because of its user friendliness, stability, and its ability to interface with most software. Access is extremely flexible, and our analysts can build a customized database to handle your company's daily management requirements.
SYSTEM CAPABILITIES
{Each location will have the following hardware and software}
MASS STORAGE DEVICES
HARD DRIVE:
· NAME: NEC READY 9625
· INTEL PENTIUM PRO PROCESSOR (200 MHz)
· 32 MB OF RAM EXPANDABLE TO 128
· 4.0 GIGABYTE HARD DRIVE
· 12X MULTISPIN CD-ROM READER
· 56.6/14.4 KBPS VOICE/DATA/FAX/MODEM (BOCA)
· MPEG FULL MOTION DIGITAL VIDEO
· 512 KB PIPELINE BURST CACHE
· 2 ISA, 2 PCI, 1 PCI/ISA EXPANSION SLOTS
MONITOR
· NEC 15" MULTI-SYNC XV15+
· 640 X 480: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
· 800 X 600: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
· 1024 X 768: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
KEYBOARD
· NEC WINDOWS 95 104-KEY ENHANCED KEYBOARD WITH FUNCTION KEYS
PRINTER
· HEWLETT PACKARD DESK JET 820Cse PROFESSIONAL SERIES PRINTER
· PRINTS 6.5 PPM BLACK, 4 PPM COLOR
· PRINTS ALL TYPES OF FORMS, LABELS, CHECKS AND LETTERHEAD
SOFTWARE
· MICROSOFT WINDOWS 97
· ACCESS (for building the customized database)
· NETSCAPE (Internet software)
SYSTEM:
· WILL DETERMINE LOCATION OF ALL PARTS FOR ALL THREE STORES
· WILL MAINTAIN ALL PAYROLL FUNCTIONS
· WILL MAINTAIN ALL ACCOUNT PAYABLE DATA
· WILL MAINTAIN ALL ACCOUNT RECEIVABLE DATA
· WILL MAINTAIN A LIST OF ALL MANUFACTURERS BY NAME, PART CLASSIFICATION AND PART NUMBER
· WILL KEEP A CURRENT STATUS OF PURCHASE ORDERS
· WILL KEEP A CURRENT STATUS OF ALL INVENTORY BY PART NUMBER
Company Information: ABC Software Solution will send a consultant/analyst to your headquarters store to observe and interview the employees at Midnight Auto Supply. The data obtained from the observation/interview will allow us to design the best product for your company. After the observation/interview, the consultant/analyst will return to ABC headquarters and participate in a round-table meeting to discuss the findings and the design phase. The critical information is listed in the Requirement Document (RD). Our staff will assist the employees at Midnight Auto Supply in writing a full and complete Requirement Document.
THE PHASE PLAN
(SEE GANTT CHART FOR BEGINNING AND ENDING DATES OF EACH PHASE)
DEFINITION: With the signature of Mr. Valve or the company's designated representative, ABC Software Solution will help write the RD (see Solution). An analyst will assist your staff in writing the RD to ensure that all details have been covered.
ANALYSIS: During this phase, a Functional Specification (FS) will be established, which will consist of a cost analysis, a milestone schedule, and milestone deliverables, all with fully written, detailed descriptions. Our analyst will assist your staff in writing the FS. A signature on the FS will be required prior to the start of the Design phase. The prototype will begin at this phase.
DESIGN: A test database will be designed, which will include menus and screens. A demonstration and sample data will be provided to the Midnight Auto Supply staff in order to ensure that all specifications have been met. After your company's staff have closely analyzed and reviewed the demonstration and sample data, a signature of approval will be required from Mr. Valve or the company's designated representative. This phase will be frozen once the signature has been authorized, and any changes after this stage can result in a substantial increase in the firm fixed price and may change the scheduled completion date.
Acceptance Plan Test (APT): The APT will require a signature from
Mr. Valve or the company's designated representative, which will approve all the above phases (Definition, Analysis, Design). Once all modules have been completed for integration and testing, there will be a final system review. If all phases are accepted, a signature from Mr. Valve or the company's designated representative will be required for approval.
PROGRAMMING: The programming modules will be designed as follows:
Mod A is for menu & screens Mod B is for payroll system
Mod C is for accounting system Mod D is for purchase order system
Mod E is for the supply system Mod F is for inventory
Mod G is for printing checks Mod H is the interface for Mods A-G
SYSTEM TEST: The Programmer, Test Engineers and the Project Manager will test the system for quality assurance. At this phase all the integrated systems will be verified to be working together properly. If any unforeseen problems exist, any and all adjustments will be made.
ACCEPTANCE: All terms from the APT will be implemented. The employees and staff at Midnight Auto Supply will be given a demonstration of the complete system. If any unforeseen problems occur, additional programming changes will be applied at this time. A signature from Mr. Valve or the company's designated representative will be needed at this time.
DELIVERABLES
Computer: The first delivery will be at the headquarters store in Manchester. We will install the 15" NEC Multisync XV15+ monitor, the Hewlett Packard printer, the NEC standard keyboard and the hand-held scanner. The operating system (WINDOWS 97), along with the prototype software, Netscape (web page) and all of the custom-written software (ACCESS), will be installed during the operation and installation phase, starting the second week of March 1998 and finishing during the first two weeks of April 1998.
Hardware: The Pentium Pro processor comes with 32 MB of RAM, expandable to
128 MB. The expansion of RAM will be sufficient for the next 4 to 5 years. The A1 has a 4.0 gigabyte hard drive and includes a 56.6 data/fax modem from BOCA. At the current time the 4.0 gigabyte drive is the latest and the largest on the market. The 56.6 data/fax modem is currently the fastest on the market; it will take your company well into the future. The 12X Multispin CD-ROM reader is also the fastest and the best in quality on the market today.
MONITOR: NEC 15" Multisync XV15+ with a .28mm dot pitch.
KEY BOARD: NEC Standard with function keys.
PRINTER: Hewlett Packard Deskjet 820 Cse Pro Series with 800 dpi.
SOFTWARE: The software will be written following the signature approving the Analysis Phase (1st week of Aug 1998).
WARRANTIES: All NEC 9624 systems are backed by a one-year limited warranty, which includes one year of on-site service and 90 days of software support. Warranty Plus: an optional extended warranty includes three years of on-site service at a cost of $260.00 per PC (monitor, keyboard, PC server, scanner & printer); additional
items will cost $25.00 for the scanner and $160.00 for the workstation. The total cost for the additional warranty is $1,300.00.
Documentation and Training: A full set of documentation along with the user manuals will be provided during the operation and installation phase. The operation and installation is scheduled to start on March 1, 1998. During the training phase, our project manager will train each store for one week. During the last week we will have an analyst at each store to finalize everything.
OPERATION: Midnight Auto Supply's automated system (A1) will be installed at the headquarters store first. Once that store is implemented, the St. Charles and Afton stores will be completed within fourteen working days. During the training, a full set of documentation and user manuals will be issued at each site. The project manager will conduct a training session at each store. Each session will take five working days to complete and will cover how to properly start up the A1 system, back up the database, restore/recover data and shut down. The training session will also cover use of the scanner devices.
PROPOSED SCOPE FOR MIDNIGHT AUTO SUPPLY
ABC Software Solutions is writing this proposal to Midnight Auto Supply to show the benefits of using our A1 automated system. The A1 system will provide the means for Midnight Auto Supply to track its daily activities and manage all of its inventory. This state-of-the-art system will make payroll much simpler and will automate federal, state and 401(k) activities. The A1 system will automate the headquarters, Afton and St. Charles stores by handling all accounts receivable, accounts payable and purchase orders. The A1 system will also list all manufacturer prices and parts, and rank manufacturers from the least to the most costly part. After reading the project information sheet, we at ABC Software Solution understand your needs and feel that the A1 system will be what your company needs.
STAFF
Project Manager: Wilbert E. Brownlow
Programmer: Personnel will be assigned upon signature. The levels are as
follows:
SYSTEM FLOW CHART FOR MIDNIGHT AUTO SUPPLY
Overview: The purpose of this document is to identify a proposal, which will allow Midnight Auto Supply to become upgraded with a state of the art computer system. Midnight Auto Supply currently have great potentials for capturing a large percentage of the auto part supply business in the St. Louis Metropolitan area, and believe the business will benefit from automating its operation. Midnight Auto Supply is in need of a computer system that will handle all of its current daily management activities, in a faster and more efficient manner. Presently, Midnight Auto Supply is operating out of three locations, the main store and headquarters is located in Manchester, Missouri; the other two locations are in St. Charles, Missouri and Afton, Missouri. After implementing this new computer system, Midnight Auto Supply will have the capability to successfully compete or surpass all of its current competitors in the St. Louis metropolitan area. My company, the ABC Software Solution, propose that an A1 system be developed, which will allow all three stores to become fully automated. The A1 system will grant Midnight Auto Supply the ability to keep all their daily records and activities in real time mode; along with ensuring that all daily management activities are produced in a more effective and efficient manner. The A1 system will also accomplish the following tasks:
1. Keep an accurate status of inventory, by part number.
2. Determine the location of all parts (Manchester, St. Charles and Afton locations).
3. Maintain a list and cross reference by manufacture names, part class and part number.
4. Generate purchase orders.
5. Perform all payroll functions, which includes issuing paychecks and preparing address labels for mailing checks.
6. Maintain all account payable information.
7. Maintain all account receivable data.
Problem: Midnight Auto Supply is a growing company in the auto supply industry, which consist of three stores located in the St. Louis metropolitan area. Midnight Auto Supply currently does all of its reports, purchases, accounts receivable, accounts payable and payroll functions manually. Midnight Auto Supply is constantly having problems in keeping track of its incoming parts. Midnight Auto Supply's accounts receivable and accounts payable department is having a very difficult time keeping accurate records manually and expeditiously. Midnight Auto Supply currently has no idea what its inventory is composed of at each of its three store locations. Midnight Auto Supply is a small company with a limited budget, and is growing fast, they are eager to rectify their existing problems through the use of automation.
Solution: After reading and analyzing the project information sheet, ABC Software Solution recommends the following: Using the Access Database Management System to build the database (see Overview), all three stores will be linked via the Internet. The database will be backup ever fifteen minutes. Each store will have a dedicated T1 telephone line connected to the client/server. The software Access Database Management package will be used to allow for real time processing, which will automatically update the database. All payroll functions, will be done at the headquarters store in Manchester include processing.rters location. Account payable and account receivable transactions will be prepared at the headquarters . Each store will have its own personal computer, which will serve as a client/server and workstation, along with two hand held scanners connected to the personal computer and the workstation. The hand held scanners will be used to scan in inventory at each store, and all purchase and return transaction will be scanned into the computer. ABC Software Solutions recommend setting up and using a web page via the Internet, so that prospective customers can place orders 24 hours a day, 7 days a week. The World Wide Web (WWW) will also allow Midnight Auto Supply to have the capability to contact its supplier via the Internet to place orders. ABC Software Solution will install in each store, a one NEC 9624 Pentium Plus as its client server, with a 200 MHz processor, a 15' Multisync XV15+ monitor, a 4.0 gigabyte hard drive, a NEC standard keyboard with hot keys to access the database immediately, a Hewlett Packard Deskjet 820Cse printer and one workstation with hot keys. Our company chose the Access Database software to build the database for all support (see overview), because of its user friendliness, stability, and its compatibility to interface with most software. Access is extremely flexible and our analyst can build a customized database to handle your company's daily management requirement.
SYSTEM CAPABILITIES
{Each location will have the following software}
MASS STORAGE DEVICES
HARD DRIVE:
· NAME: NEC READY 9625
· INTEL PENTIUM PRO PROCESSOR (200 MHz)
· 32 MB OF RAM EXPANABLE TO 128
· 4.0 GIGABYTE HARD DRIVE
· 12V MULTISPIN CD-ROM READER
· 56.6/14.4 KBPS VOICE/DATA/FAX/MODEM (BOCA)
· MPEG FULL MOTION DIGITAL VIDEO
· 512 PIPELINE BURST
· 2 ISA, 2PCI 1PCI/ISA EXPANSION SLOTS
MONITOR
· NEC 15' MULTI-SYNC XV15+
· 640 X 480: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
· 800 X 600: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
· 1024 X 768: 256 STANDARD, 64K STANDARD, 16.8M WITH 2MB
KEYBOARD
· NEC WINDOW 95 104-KEY ENHANCED KEYBOARD WITH FUNCTION KEYS
PRINTER
· HEWLETT PACKARD DESK JET 820Cse PROFESSIONAL SERIES PRINTER
· PRINTS 6.5 PPM BLACK, 4 PPM COLOR
· PRINTS ALL TYPES OF FORMS, LABELS, CHECKS AND LETTERHEAD
SOFTWARE
· MICROSOFT WINWDOWS 97
· ACCESS (for building the customized database)
· NETSCAPE (Internet software)
SYSTEM:
· WILL DETERMINE LOCATION OF ALL PARTS FOR ALL THREE STORES
· WILL MAINTAIN ALL PAYROLL FUNCTIONS
· WILL MAINTAIN ALL ACCOUNT PAYABLE DATA
· WILL MAINTAIN ALL ACCOUNT RECEIVABLE DATA
· WILL MAINTAIN A LIST OF ALL MANUFACTURES BY NAME, PART CLASSIFICATION AND PART NUMBER
· WILL KEEP A CURRENT STATUS OF PURCHASE ORDERS
· WILL KEEP A CURRENT STATUS OF ALL INVENTORY BY PART NUMBER
Company Information: ABC Software Solution will send a consultant/analyst to your headquarters store and observe/interview the employees at Midnight Auto Supply. The data obtained from the observation/interview will allow us to design the best product for your company. After the observation/interview, the consultant/analyst will convene back at ABC headquarters and participated in a round table meeting to discuss the outcome and design phase. The critical information is listed in the Requirement Document (RD). Our staff will assist the employees at Midnight Auto Supply in writing a full and complete Requirement Document.
THE PHASE PLAN
(SEE GANTT CHART FOR BEGINNING AND ENDING DATES OF EACH PHASE)
DEFINITION: With the signature of Mr. Valve or the company's designated representative, ABC Software Solution will help write the RD, (see Solution). An analyst will assist your staff in writing the RD to ensure that all details have been covered.
ANALYSIS: During this phase, a Functional Specification (FS) will be established, which will consist of a cost analysis, a milestone schedule, a milestone deliverables, all with a fully written detail description. Our analyst will assist your staff in writing the (FS). A signature on the FS will be required prior to the start of the Design phase. The prototype will begin at this phase.
DESIGN: A test database will be designed, which will include menus and screens. A demonstration and sample data will be provided to the Midnight Auto Supply staff, in order to ensure that all specifications have been met. After your company staff have closely analyzed and reviewed the demonstration and sample data, a signature of approval will be required by Mr. Valve or the company's designated representative. This phase will be frozen once the signature has been authorized and any changes after this stage can result in a substantial increase in the firm fixed price and may change the schedule of completion date.
Acceptance Plan Test (APT): The APT will require a signature from
Mr. Valve or the company's designated representative, which will approve all the above phases (Definition, Analysis, Design). Once all modules have been completed for integration and testing, there will be a final system review. If all phases are accepted, a signature from Mr. Valve or the company's designated representative will be required for approval.
PROGRAMMING: The programming modules will be designed as follows:
Mod A is for menu & screens Mod B is for payroll system
Mod C is for accounting system Mod D is for purchase order system
Mod E is for the supply system Mod F is for inventory
Mod G is printing checks Mod H will interface of Mod A-G
SYSTEM TEST: The Programmer, Test Engineers and the Project Manager will test the system for quality assurance. At this phase all the integrated systems will be working together properly. If any unforeseen problems exist any and all adjustment will be made.
ACCEPTANCE: All terms from the APT will be implemented. The employees and staff at Midnight Auto Supply will be given a demonstration of the complete system. If any unforeseen problems occur, additional programming changes will be applied at this time. A signature from Mr. Valve or the designated company's representative will be needed at this time.
DELIVERABLES
Computer: The first delivery will be at the Headquarters store in Manchester. We will install the 15' NEC Multisync XV15+ Monitor, the Hewlett Packard printer, the NEC standard keyboard and the hand held scanner. The operation system (WINDOWS 97) , along with the prototype software, Netscape (web page) as well as all of the custom written software (ACCESS) will be installed at the operation and installation phase during the second week of March 1998 and will be completed during the first two weeks of April 1998.
Hardware: The Pentium Pro Process with 32 MB of RAM expandable to
128 MB. The expansion of RAM will be sufficient for the next 4 to 5 years. The A1 has a 4.0 gigabyte hard drive which includes a 56.6 data/fax modem from BOCA. At the current time the 4.0 gigabyte is the latest and the largest on the market. The 56.6 data/fax modem is currently the fastest on the market, it will take your company well into the future. The 12V Multispin CD ROM Reader is also the fastest and the best in quality on the market today.
MONITOR: NEC 15' Multisync xv15+ with a .28mm dot pitch.
KEY BOARD: NEC Standard with function keys.
PRINTER: Hewlett Packard Deskjet 820 Cse Pro Series with 800 dpi.
SOFTWARE: The software will be written with the signature of the Analysis Phase (1st week of Aug 1998).
WARRANTIES: All NEC 9624 is backed by one year limited warranty which includes one year on-site service and 90 days software support. Warranty Plus: Optional extended warranty, includes three years on-site service at a cost of $260.00 per PC (monitor, keyboard, PC server, scanner & printer) additional
item will cost $25.00 for scanner, the work station will cost $160.00. The total cost for the additional warranty is $1300.00.
Documentation and Training: A full set of documentation along with the user manuals will be provided during the operation and installation phase. The operation and installation is schedule to start on March 1, 1998. During the training phase, our project manager will train each store for 1 week. The last week we will have a analyst at each store to make all finalization.
OPERATION: Midnight Auto Supply's automated system (A1) will be installed at the headquarters store first. Once it is implemented, the St. Charles and Afton stores will be completed within fourteen working days. During training, a full set of documentation and user manuals will be issued at each site. The project manager will conduct a training session at each store. Each session will take five working days and will cover how to properly start up the A1 system, back up the database, restore and recover data, and shut down the system. The training session will also cover use of the scanner devices.
PROPOSED SCOPE FOR MIDNIGHT AUTO SUPPLY
ABC Software Solutions is submitting this proposal to Midnight Auto Supply to show the benefits of using our A1 automated system. The A1 system will give Midnight Auto Supply the means to track its daily activities and manage all of its inventory. This state-of-the-art system will make payroll much simpler and will automate all federal, state and 401(k) processing. The A1 system will automate the headquarters, Afton and St. Charles stores, handling all accounts receivable, accounts payable and purchase orders. The A1 system will also list all manufacturer prices and parts, ranking manufacturers from the least to the most costly for each part. After reading the project information sheet, we at ABC Software Solutions understand your needs and believe that the A1 system is what your company needs.
STAFF
Project Manager: Wilbert E. Brownlow
Programmer: Personnel will be assigned upon signature. The levels are as
follows:
SYSTEM FLOW CHART FOR MIDNIGHT AUTO SUPPLY
Overview: The purpose of this document is to present a proposal that will allow Midnight Auto Supply to upgrade to a state-of-the-art computer system. Midnight Auto Supply currently has great potential for capturing a large percentage of the auto parts supply business in the St. Louis metropolitan area, and we believe the business will benefit from automating its operation. Midnight Auto Supply needs a computer system that will handle all of its current daily management activities in a faster and more efficient manner. Presently, Midnight Auto Supply operates out of three locations: the main store and headquarters in Manchester, Missouri, and two other stores in St. Charles, Missouri and Afton, Missouri. After implementing this new computer system, Midnight Auto Supply will have the capability to compete with or surpass all of its current competitors in the St. Louis metropolitan area. Our company, ABC Software Solutions, proposes that an A1 system be developed that will allow all three stores to become fully automated. The A1 system will give Midnight Auto Supply the ability to keep all of its daily records and activities in real time, while ensuring that all daily management activities are produced in a more effective and efficient manner. The A1 system will also accomplish the following tasks:
1. Keep an accurate status of inventory, by part number.
2. Determine the location of all parts (Manchester, St. Charles and Afton locations).
3. Maintain a list and cross-reference by manufacturer name, part class and part number.
4. Generate purchase orders.
5. Perform all payroll functions, which includes issuing paychecks and preparing address labels for mailing checks.
6. Maintain all accounts payable information.
7. Maintain all accounts receivable data.
Problem: Midnight Auto Supply is a growing company in the auto supply industry, consisting of three stores located in the St. Louis metropolitan area. Midnight Auto Supply currently does all of its reports, purchases, accounts receivable, accounts payable and payroll functions manually. Midnight Auto Supply is constantly having problems keeping track of its incoming parts. Its accounts receivable and accounts payable department has a very difficult time keeping accurate records manually and expeditiously. Midnight Auto Supply currently has no clear picture of what its inventory is at each of its three store locations. Midnight Auto Supply is a small company with a limited budget, but it is growing fast and is eager to rectify its existing problems through automation.
Solution: After reading and analyzing the project information sheet, ABC Software Solutions recommends the following: using the Access database management system to build the database (see Overview), all three stores will be linked via the Internet. The database will be backed up every fifteen minutes. Each store will have a dedicated T1 telephone line connected to the client/server. The Access database management package will be used to allow real-time processing, which will automatically update the database. All payroll functions will be done at the headquarters store in Manchester include proce
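The solution above amounts to a relational database keyed by part number, with stock tracked per store and a manufacturer cross-reference ordered by cost. The short sketch below illustrates that shape of data and the kind of query involved; the proposal specifies Microsoft Access, so Python's built-in sqlite3 is used here purely as a stand-in, and every table name, column and sample value is a hypothetical illustration rather than part of the actual A1 design.

# Minimal sketch of the inventory tables and part-number lookup the A1
# proposal describes. sqlite3 is a stand-in for Access; all names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE parts (
    part_number   TEXT PRIMARY KEY,
    part_class    TEXT,
    manufacturer  TEXT,
    unit_cost     REAL
);
CREATE TABLE stock (
    part_number   TEXT REFERENCES parts(part_number),
    store         TEXT,           -- Manchester, St. Charles, or Afton
    quantity      INTEGER
);
""")

cur.executemany("INSERT INTO parts VALUES (?, ?, ?, ?)", [
    ("BRK-100", "brakes", "AcmeParts", 24.50),
    ("BRK-101", "brakes", "BudgetAuto", 19.95),
])
cur.executemany("INSERT INTO stock VALUES (?, ?, ?)", [
    ("BRK-100", "Manchester", 12),
    ("BRK-100", "Afton", 3),
    ("BRK-101", "St. Charles", 7),
])

# Tasks 1-3 from the overview: inventory status by part number, its location,
# and a manufacturer cross-reference ordered from least to most costly.
for row in cur.execute("""
    SELECT p.part_class, p.manufacturer, p.part_number, p.unit_cost,
           s.store, s.quantity
    FROM parts p JOIN stock s ON s.part_number = p.part_number
    ORDER BY p.part_class, p.unit_cost
"""):
    print(row)

A purchase order generator or payroll run would follow the same pattern: a query over tables like these feeding a printed report.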
f:\12000 essays\sciences (985)\Computer\Propaganda in the Online Free Speech Campaign.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Propaganda in the Online Free Speech Campaign
Propaganda and Mass Communication
July 1, 1996
In February 1996, President Bill Clinton signed into law the Telecommunications Act of 1996, the first revision of our country's communications laws in 62 years. This historic event has been greeted with primarily positive responses by most people and companies. Most of the Telecommunications Act sets out to transform the television, telephone, and related industries by lowering regulatory barriers and creating law that corresponds to current and emerging technology. One part of the Telecommunications Act, however, is designed to create regulatory barriers within computer networks, and this has not been greeted with favorable commentary. This one part is called the Communications Decency Act (CDA), and it has been challenged in court from the moment it was passed into law. Many of the opponents of the CDA have taken their messages to the Internet in order to gain support for their cause, and a small number of these organizations claim this fight as their only cause. Some of these organizations are broad-based civil liberties groups, some fight for freedom of speech based on the First Amendment, and other groups favor the relaxation of laws governing the use of encrypted data on computers. All of these groups, however, speak out for free speech on the Internet, and all of these groups have utilized the Internet to spread propaganda to further this common cause of online free speech and opposition to the CDA.
Context in which the propaganda occurs
Five years ago, most people had never heard of the Internet, but today the Internet is a term familiar to most people even if they are not exactly sure what the Internet is. Along with the concept of the Internet, it is widely known that pornography and other adult-related materials seem to be readily available on the Internet, and most people see this as a problem. Indeed, it does not take long for even a novice Internet user to search out adult materials such as photographs, short movies, text-based stories and live discussions, chat rooms, sexual aid advertisements, sound files, and even live nude video. The completely novel and sudden appearance of the widely accessible Internet, combined with the previously existing issues associated with adult materials, has caused a great debate around the world about what should be done. The major concern is that children will gain access to materials that should be reserved only for adults. Additionally, there is concern that the Internet is being used for illegal activities such as child pornography. In response to the concerns of many people, the government enacted the Communications Decency Act, which attempts to curtail these problems by defining what speech is unacceptable online and setting guidelines for fines and prosecution of people or businesses found guilty of breaking this law. While the goal of keeping children from gaining access to pornography is a noble one that few would challenge, the problem is that the CDA has opened a can of worms for the computer world. Proponents of the CDA claim that the CDA is necessary because the Internet is so huge that the government is needed to help curb the interaction of adult materials and children. Opponents of the CDA claim that the wording of the CDA is so vague that, for example, an online discussion of abortion would be illegal under the new law, and our First Amendment rights would therefore be pulled out from under us. Opponents also argue that Internet censorship should be done at home by parents, not by the government, and that things such as child pornography are illegal anyway, so there is no need to re-state this in a new law. At this point, the battle lines have been drawn and, like everything else in society, everyone is headed into the courtroom to debate it out. While this happens, the propagandists have set up shop on the Internet.
In terms of a debate about the first amendment and the restriction of free speech, this current battle is nothing new. The debate over free speech has been going on for as long as people have been around, and in America many great court cases have been fought over free speech. The Internet's new and adolescent status does not exclude it from problems. Just as all other forms of mass communication have been tested in the realms of free speech and propaganda, so will the Internet.
Identity of the propagandists
There are scores of online groups that work to promote free speech on the Internet, but there are a few who stand out because of the scope of their activities, their large presence on the Internet, and their apparently large numbers of supporters. The Electronic Frontier Foundation (EFF) is today one of the most visible online players in the fight against the CDA, but was established only in 1990 as a non-profit organization, before the Internet started to gain its status as a daily part of our lives. Mitchell D. Kapor, founder of Lotus Development Corporation, along with his colleague John Perry Barlow, established the EFF to "address social and legal issues arising from the impact on society of the increasingly pervasive use of computers as a means of communication and information distribution." In addition, the EFF also notes that it "will support litigation in the public interest to preserve, protect and extend First Amendment rights within the realm of computing and telecommunications technology." Also in the press release that announced the formation of the EFF, Kapor said, "It is becoming increasingly obvious that the rate of technology advancement in communications is far outpacing the establishment of appropriate cultural, legal and political frameworks to handle the issues that are arising." Clearly, the EFF is very up-front and open about its belief that the American legal system is currently not equipped to handle the daily reliance on and use of computers in society, and that the EFF will assist in handling problems at the intersection of litigation and computing.
Initial funding of the EFF was provided in part by a private contribution from Steve Wozniak, the co-founder of Apple Computer, and since then contributions have come from industry giants such as AT&T, Microsoft, Netscape Communications, Apple Computer, IBM, Ziff-Davis Publishing, Sun Microsystems, and the Newspaper Association of America. It is likely that these companies see the need for assistance when the computer world collides with the world of law, and also see the EFF as one way for the rights of the computer industry and its customers to be upheld.
A second player in the area of online free speech protection is the Center for Democracy and Technology (CDT). The CDT, founded in 1994, is less up-front about its history and funding, but states that its mission is to "develop public policies that preserve and advance democratic values and constitutional civil liberties on the Internet and other interactive communications media." Like the EFF, the CDT is located in Washington, DC, and is a non-profit group funded by, according to the 1996 annual report, "individuals, foundations, and a broad cross section of the computer and communications industry."
A third major player in the online free speech movement is The Citizens Internet Empowerment Coalition (CIEC, pronounced "seek"). This is the group who filed the original lawsuit against the US Department of Justice and Attorney General Janet Reno to overturn the CDA based on, in part, the use of the word "indecent". The plaintiffs in this lawsuit are a very diverse group, and include many who are also cited as contributors to the EFF. Some of these plaintiffs include the American Booksellers Association, the Freedom to Read Foundation, Apple Computer, Microsoft, America Online, the Society of Professional Journalists, and Wired magazine. In their appeal to gain new members, CIEC states that they are, "a coalition of Internet users, businesses, non-profit organizations and civil liberties advocates formed to challenge the constitutionality of the Communications Decency Act because they believe it violates their free speech rights and condemns the Internet to a future of burdensome censorship and government intrusion." Like the CDT, CIEC does not directly state what organizations support their cause or how much money is changing hands, but based on the companies supporting the lawsuit filed by the CIEC, it is almost certain that the same computer and publishing related companies are paying for CIEC's existence. Finally, unlike other groups which are activists for several causes, CIEC has the one and only mission of challenging the CDA and does not claim to have any other purpose.
Ideology and purpose behind the campaign
There are several interrelated reasons motivating the online free speech movement. The most visible, and therefore one of the most obvious, reasons for the online presence of the free speech movement is to sign up new supporters. Current Internet technology is ideal for gathering information from people without inconveniencing them. While exploring the Internet in the privacy of one's own home, it takes only seconds to type in a name, address, and other information so that it can be sent to the headquarters of an organization. When compared to the traditional process of walking into a traditional storefront, talking with a human, and then writing out your membership information on paper, this new electronic method is superior. A person can become an online free speech supporter at 2 a.m. while sitting at home in his or her underwear, eating leftovers, without having to worry about talking to a pushy recruiter. Because of this ease of gathering information, it is possible for an organization to quickly recruit large numbers of members. Also, in terms of the demographics of the members, the mere fact that they are signing up online generates a certain, desirable demographic group of people. Even though computers are becoming easier to use every day, the majority of Internet users are educated and tend to have higher incomes than the average. At the head of CIEC's page where new members are encouraged to sign up, there is a large banner proclaiming, "Over 47,000 Individual Internet Users Have Joined as of June 17, 1996!". This particular technique of announcing the number of new recruits is popular among various online organizations who recruit new members because it lets the user know that he is not alone. The user will see the large number and know that he or she will be part of a large group of supporters and therefore feel safe about signing up with the cause.
Once an individual gets "in the door" of an online free speech website, he or she is encouraged to become a member or supporter, but why are the supporters needed? I believe that when presented in a legal setting, these large membership lists can be used to demonstrate that numerous people do exist who are in favor of the online free speech campaign. Just as people vote for laws or politicians, membership lists demonstrate that people have "voted" for this cause. While a membership list is not quite as powerful as an election, it does show that real "everyday" people support this cause. When the online free speech campaign takes the CDA case to the Supreme Court, it will be armed with long lists of people who support what these organizations are trying to do, and the knowledge of all of the supporters could be just enough to tilt the judges' decision in the right direction.
Another purpose behind the online free speech campaigns is to attract more businesses to the effort. When, for example, a software company that advertises on the Net proclaims to be a supporter of the movement, then the movement gets free advertising. When the names of computer companies such as Microsoft and Apple are mentioned in the introductory and sign-up information, other companies might feel the urge to join because of the "me too" effect, in which the smaller companies look up to the bigger companies and might tend to adopt the policies of the giants. For example, if YYZ Software knows that Microsoft is supporting the free speech online movement, YYZ might feel important if it supports the cause too. While the number of company owners or managers browsing a site will be much smaller than the number of individual people looking at the same site, this idea of throwing around the names of famous companies is an attempt to attract at least some supporters. Even though only a small number of supporters could be gained through this channel, it is still a channel, and therefore important no matter how small. Also, if this method happens to bring a large company into the group, then the organization could gain great financial support. While it is likely that all the Netscapes and IBMs of the world are already aware of the online free speech movement, new companies and new fortunes are made frequently in the fast-moving world of the computer industry, so an unknown company today could be a key player tomorrow. It is, therefore, important for the online free speech movement to be constantly recruiting new companies, because the need for large financial backers never ends, and you never know when a mom and pop operation today will be the next Microsoft tomorrow.
Another motivation behind the campaign is the protection of businesses and their interests. For example, a new online magazine for scientists in the biomedical field is being formed, and the company behind the venture, Current Science, is investing between $7.5 and $9 million in the project (Rothstein). With money like this at risk, it is obvious that freedom of speech must be secured in order for ventures like this to work.
Finally, the ultimate goal for all groups is the repeal of the CDA, but the deletion of the CDA does not mean the end of free speech problems on the Internet, so these groups will always exist in some form or another. Just as there is an ongoing debate about which books are appropriate for whom, there will always be a debate about which Internet content is appropriate for whom. Add to this the global aspect of the Internet, and the scope and complexity of the issue can be envisioned.
Target audience
The clever, or perhaps just convenient, aspect of online free speech propaganda is that the propaganda is located at the very same spot that the debate is about. In other words, if you want to promote free speech, go to where the speech is taking place - the Internet. By promoting propaganda online about online free speech, you are directly targeting the audience you want to target. People who do not utilize the Internet will be less interested than those who do, so it makes sense to locate your campaign on the Internet, where the people there will naturally be more concerned about computer censorship issues. An added bonus of the Internet is its relatively low cost compared to traditional media outlets such as print or radio, so not only are these groups promoting their causes almost directly to the people they want to reach, they are doing it at a very low cost compared with more traditional methods.
On the other hand, these online free speech organizations have little, if any, propaganda outside of the Internet, so they are therefore not reaching the maximum number of possible people. While they all maintain traditional offices, phone numbers, postal mailing addresses, and fax numbers, they are virtually unknown by the populace outside of the Internet. While purchasing print or television advertisements might not be as direct and monetarily efficient as utilizing the Internet to promote propaganda, those traditional methods would help get the word out to the largest number of people. Just as all other forms of mass media have been utilized for the spread of propaganda, so will the Internet.
Media utilization techniques
This section is by far the most interesting because it deals primarily with the actual examples and techniques of propaganda used by the online free speech movement. While the propaganda of these groups is primarily limited to the electronic realm of the Internet, it is important to remember that the Internet is itself a multimedia tool. Unlike a newspaper, for example, the Internet can convey words, pictures, sound, and moving video. As an added dimension, these forms can vary in unlimited colors, intensities, qualities and quantities so that the viewer does not always know what to expect. The important propagandistic idea of utilizing all available channels to maximize the effect of propaganda is certainly in use here.
My first involvement with the online free speech movement, and the reason why I decided to investigate this topic, was the Blue Ribbon Campaign. Almost a year ago, I began to notice the occurrence of the same blue ribbon icon on many different Internet web locations and homepages. These icons are similar to the red AIDS awareness ribbon in terms of their appearance and function, and the actual size of the icon in most locations is typically only about 8 mm high by 25 mm wide. Of course this size depends on several computer-specific variables, but the point is that the Blue Ribbon Campaign icon is small so that it appears quickly without taking much transfer time. The people behind the Blue Ribbon icon knew that if they created a large, space- and time-hogging image, people would become frustrated with the lethargic image and fail to gain respect for it. Instead, the icon is tiny and unobtrusive so that its appearance on a web page is not bothersome.
The idea of using a blue ribbon is smart because of the association with the AIDS red ribbon campaign. While people have different opinions about homosexuality, most people, if not all, agree that AIDS must be stopped. Using this logic, it makes sense to utilize this almost universal appeal of the red ribbon through the creation of a blue ribbon. Additionally, the red ribbon icon is very well established and is widely recognized, so once again, the adoption of a similar blue ribbon icon is smart.
The genius of the Internet's world wide web is the use of hyperlinks or hypertext. Hypertext is the system of allowing the reader to click on something and be instantly transported to another location that relates to what he or she clicked on. Every time a Blue Ribbon Campaign icon exists on the world wide web, it contains the Internet homepage address of the Electronic Frontier Foundation, one of the key players in the online free speech movement. Therefore, by clicking on the Blue Ribbon icon, the reader is instantly transferred to EFF's homepage. When compared again to the AIDS red ribbon movement, the advantages of the Internet system are obvious. When one sees a person wearing an AIDS red ribbon, he or she cannot automatically and instantaneously receive information about AIDS. The person would have to ask the red ribbon wearer for a phone number or address where AIDS information could be found. With the Blue Ribbon Campaign, however, the information is instant, and it fits right in with today's fast-moving society. A person can see the Blue Ribbon icon and can immediately see what it means. There is no time for the person to lose interest due to making a phone call or waiting for a postal letter to be delivered.
Therefore on a daily basis I was seeing the Blue Ribbon Campaign icons, and several times I clicked on those icons in order to gain more information about this symbol that kept popping up all over the place. If, on a particular day, I was not in the mood to learn about the EFF, I could easily go back to what I was doing before I clicked on the blue ribbon icon. However, since the icon kept appearing at various web sites, there were times when I did feel like exploring this interesting phenomenon further, and because the blue ribbon icon was easy to run across, it was easy for me to enter the EFF and see what they had to offer.
The EFF's homepages do contain a brief history of the organization, but there is no information about the actual origin of the Blue Ribbon Campaign. According to electronic mail I received from Dennis Derryberry at the EFF after querying about the origin of the Blue Ribbon Campaign:
The Blue Ribbon Campaign does not belong to any specific group; it is shared by all groups and individuals who value and support free speech online. I believe the idea originally was sparked by a woman who has been helping us with membership functions, but amid all the expansion of the campaign, we kind of forgot where it really came from. I guess that's just the spirit of a campaign for the benefit of the many. (Derryberry)
Even if the Blue Ribbon Campaign does not belong to any one group, it was originated by the EFF and all of the blue ribbon icons point back to the EFF.
One of the first options presented when one first sees the EFF's opening page is to join the EFF, the Blue Ribbon Campaign, or both. Joining the Blue Ribbon Campaign is simple, and basically involves just giving them a small amount of personal information and then copying one of several blue ribbon icons to be used on your web site. There are many, many different blue ribbons available, of all different sizes and compositions, but they all revolve around the basic blue ribbon idea. If a user is not fully pleased with the online selection of available icons, there is an option to receive information about many others that are available. Finally, it is also possible to create your own blue ribbon icon and allow the EFF to give it away to be used for the same cause. This entire emphasis on the graphic image of the campaign is a smart move because people's interest is aroused by images more than words. If the words "Blue Ribbon Campaign" were seen everywhere, the impact would be less dramatic than the colored image of the blue ribbon that accompanies these words.
Even though the doorway to the EFF is graphic based, the bulk of the EFF's web site contains document after document of textual information that all relates to the CDA and freedom of speech. Also located here is the entire text of the Telecommunications Act of 1996, including all text of the CDA. Internet users who click on the blue ribbon icon will be taken directly to the part of the EFF's website that deals with the Blue Ribbon Campaign. Because the Blue Ribbon Campaign is not the only cause the EFF supports, there is of course much more to the EFF's website than just this. Some of the sections of the EFF's homepage are:
The Blue Ribbon Campaign section on the EFF's homepage is set apart from the other areas by use of the traditional blue ribbon icon. This section begins with a link to the newest information about the CDA, and then goes on to list links to several things including introductory information about the campaign, federal, state, and local information, an archive of past information, examples of Internet sites that could be banned under the CDA, activism information, and finally a "Skeptical?" link to a page that tries to convince skeptics about believing the EFF's cause.
About EFF is the first thing that new visitors to the site will want to read. This contains a brief history of the organization and answers most of the questions people might have. This area also goes into the beliefs and motivations behind the EFF.
Action Alerts is a list of current events that the EFF is currently monitoring. For example, one of the most recent action alerts deals with the latest decision on the CDA. This section also encourages people to take action in the Blue Ribbon Campaign and provides a list of various ways to help. At the top of the list there is a disclaimer about civil disobedience being "at least nominally illegal". Some of the suggested activities include supporting a 28th amendment to the U.S. Constitution to extend First Amendment rights to the Internet, attending rallies, wearing T-shirts that promote free speech online, putting a real blue ribbon pin on your backpack if you are a student, and so on. This section also contains a list of previous examples of protest and demonstration of CDA opposition, to show that people have actually gone out to stand up for the things that are promoted on this site.
Guide to the Internet is a document that helps acquaint novices with the Internet in general, and does not contain any EFF or free speech related specific material. While this seems pretty innocent, its purpose here is a bit deeper. If more people can become more familiar with the Internet, then more people will use the Internet and therefore hopefully become interested in online free speech.
Archive index is an essential tool on the EFF website because of the large number of different documents available here. This is a searchable index that aids users in finding specific information contained in the EFF pages. For example, if you wanted to see whether the word "pornography" occurred in the CDA, you could search for it.
Newsletter is a section that contains the current and past newsletters of the EFF. These newsletters are updates about things the EFF is currently involved with. I think that although much of the information contained in these newsletters is redundant in that it can be found elsewhere on the site, there are two reasons for this. First, the newsletter format is one that everyone is familiar with. If a person is new to the EFF site and sees the "newsletter" section, he or she will automatically have a general idea how information will be presented in this format, and it will therefore be easier and more welcoming to read than other types of information. Secondly, the newsletter is important because it is repeated information. One key aspect of propaganda is repetition, so the duplication of certain information in the newsletter accomplishes that.
Calendar is a listing of future events and dates that are important to the EFF. Many of the listings here are protest rallies and scheduled speeches that look good when many people attend. This provides a consolidated listing of dates that is easy to access, without having to search all over the site. Also, the information here is available for download so that it can be put into a person's personal time management software on his or her own computer. This gives the EFF an indirect link to remind you where to go and when.
Job openings provides information about applying for a job with the EFF.
Merchandise lets members and nonmembers purchase T-shirts and metal Blue Ribbon Campaign pins to help spread the word.
Awards gives a list of the 19 awards won by the EFF for various things such as "Best of the Web" and "Top 250 Lycos Sites". The display of these awards legitimizes the organization and shows to others that many people are visiting this site.
Staff Homepages at first seems somewhat boring, but this section is actually a list of the staff, in rank order, and a short description of what each person does at the EFF. Clicking on the person's name takes you to their homepage. This display of information once again reinforces the idea of white propaganda that the EFF uses.
Miscellaneous contains a sponsors list, other publications of interest, and EFF related images, sounds, and animations.
A second example of online free speech propaganda on the Internet is a homepage promoting the lawsuit filed by The Citizens Internet Empowerment Coalition (CIEC, "seek") against the U.S. Department of Justice and Attorney General Janet Reno. This page is designed to look like a 1700's handbill or poster and to arouse emotions of patriotism and fighting for one's country. It would be difficult for an American to view this document and not be reminded of how we fought for our freedom from the English. Icons of patriots shouting out loud, cannons and American flags, and pictorial representations of the Constitution all arouse emotions of fighting for what is right. This page also contains a 4-minute audio clip that is available for download. The audio is Judith Krug of the American Library Association speaking about the censorship of libraries. The reader has only to click on the icon, and the audio is transferred to his or her computer and plays as it is transmitted. Aside from these audio and visual messages, this site is similar to the EFF's in that it contains lots of information and links to related anti-CDA sites.
Another website that utilizes propaganda is operated by the Center for Democracy and Technology (CDT). This site is one of many that utilizes an animated "Free Speech" icon that displays fireworks exploding in the air. Like other examples, this too is very patriotic. Also like other sites, the CDT displays various Internet awards they have won, as well as the number of people they have signed up who support the lawsuit against the CDA.
Counter propaganda
While there are groups and people who favor the CDA, there is very little propaganda promoting these beliefs. Part of the reason for this is that the whole debate over the CDA seems to be a very nonpartisan issue in terms of Republicans and Democrats. If this had been a partisan issue, there would certainly be propaganda on both sides. The main reason that little counter propaganda exists is that the CDA is the law, so people who are for it have already been appeased to a certain extent. The anti-CDA groups are protesting and using propaganda because the CDA is the law, and they want it changed. As with many things in life, it is more common to hear complaints from people who are not satisfied than from people who are ple
f:\12000 essays\sciences (985)\Computer\Protecting A Computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
About two hundred years ago, the word "computer" started to appear in the dictionary. At that time, few people knew what a computer was. Today, however, most people not only know what a computer is, but also understand how to use one.
Computers have therefore become more and more popular and important to our society. We use computers everywhere, and they are very useful and helpful in our lives. The speed and accuracy of computers make people feel confident in them and willing to rely on them. As a result, much important information and data are stored on computers, such as your diary, the financial records of an oil company or secret military intelligence. Because so much important information can be found in computer memory, people may ask a question: can we make sure that the information in the computer is safe and that nobody can steal it from the computer's memory?
Physical hazards are one cause of data destruction in a computer. For example, spilling coffee onto a personal computer can endanger its hard disk. In addition, the human caretakers of a computer system can cause as much harm as any physical hazard. For example, a cashier in a bank can transfer money from a customer's account to his own account. Nonetheless, the most dangerous thieves are not those who work with computers every day, but the youthful amateurs who experiment at night --- the hackers.
The term "hacker" may have originated at M.I.T. as students' jargon for classmates who labored nights in the computer lab. In the beginning, hackers were not dangerous at all; they just stole computer time from the university. In the early 1980s, however, hackers became a group of criminals who steal information from other people's computers.
To guard against hackers and other criminals, people need to set up a good security system to protect the data in the computer. The most important thing is that we cannot allow those hackers and criminals to enter our computers. This means we need to design a lock to lock up all our data, or to use identification to verify the identity of anyone seeking access to our computers.
The most common method of locking up data is a password system. Passwords are a multi-user computer system's usual first line of defense against hackers. We can use a combination of alphabetic and numeric characters to form our own password. The longer the password, the more possibilities a hacker's password-guessing program must work through. However, a very long password is difficult to remember, so people tend to write it down, which immediately makes it a security risk. Furthermore, a high-speed password-guessing program can discover a short or common password easily. Therefore, a password system alone is not enough to protect a computer's data and memory.
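The claim that a longer password forces a guessing program to work through more possibilities is easy to make concrete. The rough sketch below counts the alphanumeric keyspace at several lengths and divides by an assumed guessing rate; the one-million-guesses-per-second figure is an arbitrary assumption chosen for illustration, not a measured attack speed.

# Why password length matters: the number of possible alphanumeric passwords
# grows exponentially with length. The guess rate is a hypothetical figure.
ALPHANUMERIC = 26 + 26 + 10          # lower case, upper case, digits
GUESSES_PER_SECOND = 1_000_000       # assumed attacker speed, for illustration

for length in (4, 6, 8, 10):
    keyspace = ALPHANUMERIC ** length
    seconds = keyspace / GUESSES_PER_SECOND
    print(f"length {length:2d}: {keyspace:.3e} combinations, "
          f"~{seconds / 86400:.1f} days to exhaust")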
Besides a password system, a computer company may consider the security of its information centre. In the past, people used locks and keys to limit access to secure areas. However, keys can be stolen or copied easily. Card-keys were therefore designed to prevent this situation. Three types of card-key are commonly used by banks, computer centers and government departments. Each of these card-keys can carry an identifying number or password that is encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. One of the three card-keys is called the watermark magnetic card; it was inspired by the watermarks on paper currency. The card's magnetic strip holds a 12-digit number code that cannot be copied, and it can store about two thousand bits. The other two cards can store thousands of times as much data. They are optical memory cards (OMCs) and smart cards, and both are commonly used in computer security systems.
However, password systems and card-keys alone are not enough to protect the memory in the computer. A computer system also needs a restricting program to verify the identity of its users. Generally, identity can be established by something a person knows, such as a password, or something a person has, such as a card-key. However, people often forget their passwords or lose their keys. A third method must therefore be used: something a person is --- a physical trait of the human being.
We can use a new technology called biometric devices to identify the person who wants to use your computer. Biometric devices are instruments that perform mathematical analyses of biological characteristics. For example, voices, fingerprints and the geometry of the hand can be used for identification. Nowadays, many computer centers, bank vaults, military installations and other sensitive areas have considered using biometric security systems, because the rate of mistaken acceptance of outsiders and of rejection of authorized insiders is extremely low.
The individuality of the vocal signature is the basis of one kind of biometric security system. The main point of this system is voice verification. The voice verifier described here is a developmental system at American Telephone and Telegraph. All a person needs to do is repeat a particular phrase several times. The computer samples, digitizes and stores what was said, then builds a voice signature that makes allowances for an individual's characteristic variations. The theory of voice verification is very simple: it uses the characteristics of a voice, its acoustic strengths. To isolate personal characteristics within these fluctuations, the computer breaks the sound into its component frequencies and analyzes how they are distributed. Someone who wants to steal information from your computer would need to have the same voice as you, which is practically impossible.
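The passage above says the computer breaks the sound into its component frequencies and analyzes how they are distributed. The sketch below shows one simple way such a frequency "signature" could be computed and compared; it is not the AT&T verifier described in the essay, and the band count, distance measure and threshold are arbitrary assumptions made for illustration.

# One simple way to "break the sound into its component frequencies and
# analyze how they are distributed". A sketch only; parameters are arbitrary.
import numpy as np

def voice_signature(samples: np.ndarray, bands: int = 32) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(samples))       # magnitude of each frequency
    chunks = np.array_split(spectrum, bands)      # group into coarse bands
    sig = np.array([c.mean() for c in chunks])
    return sig / (np.linalg.norm(sig) + 1e-12)    # normalise away loudness

def same_speaker(a: np.ndarray, b: np.ndarray, threshold: float = 0.25) -> bool:
    return np.linalg.norm(voice_signature(a) - voice_signature(b)) < threshold

# Toy demo with synthetic "voices" built from different frequency mixes.
t = np.linspace(0, 1, 8000)
alice = np.sin(2 * np.pi * 180 * t) + 0.5 * np.sin(2 * np.pi * 700 * t)
bob   = np.sin(2 * np.pi * 120 * t) + 0.8 * np.sin(2 * np.pi * 2000 * t)
print(same_speaker(alice, alice + 0.05 * np.random.randn(t.size)))  # likely True
print(same_speaker(alice, bob))                                     # likely False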
Besides using voices for identification, we can use fingerprints to verify a person's identity, because no two fingerprints are exactly alike. In a fingerprint verification system, the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint and is picked up by an optical scanner. The scanner transmits the information to the computer for analysis. Security experts can then verify the identity of that person from this information.
Finally, the last biometric security system is based on the geometry of the hand. In that system, the computer uses a sophisticated scanning device to record the measurements of each person's hand. With an overhead light shining down on the hand, a sensor underneath the plate scans the fingers through glass slots, recording light intensity from the fingertips to the webbing where the fingers join the palm. After passing the computer's check, a person can use the computer or retrieve data from it.
Although many security systems have been invented, they are useless if people continue to think that stealing information is not a serious crime. Therefore, people need to pay more attention to computer crime and fight against hackers, rather than relying only on computer security systems to protect the computer.
Why do we need to protect our computers?
It is a question that few people would have asked in earlier times. Today, however, everyone knows the importance and usefulness of a computer security system. Computers have become more and more important and helpful: you can store a large amount of information or data on a small memory chip of a personal computer. The hard disk of a computer system is like a bank; it contains a lot of costly material, such as your diary, the financial records of a trading company or secret military information. A computer security system, then, is like hiring security guards to protect the bank: it can be used to prevent the outflow of information from the national defense industry, or of the personal diary on your computer.
Nevertheless, there is a price that one might expect to pay for the tools of security: equipment ranging from locks on doors to computerized gate-keepers that stand watch against hackers, and special software that prevents employees from stealing data from the company's computers. The bill can range from hundreds of dollars to many millions, depending on the degree of assurance sought.
Although creating a computer security system costs a lot of money, it is worth doing, because the data in a computer can easily be erased or destroyed by many kinds of hazards. For example, a power supply problem or a fire can destroy all the data in a computer company. In 1987, in a computer centre inside the Pentagon, the US military's sprawling headquarters near Washington, DC, a 300-watt light bulb was left burning inside a vault where computer tapes were stored. After a time, the bulb had generated so much heat that the ceiling began to smolder. When the door was opened, air rushing into the room brought the fire to life. Before the flames could be extinguished, they had spread and consumed three computer systems worth a total of $6.3 million.
Besides these accidental hazards, humans are a great cause of the outflow of data from computers. Two kinds of people can get into a security system and steal the data from it. One kind is the trusted employees who are meant to be let into the computer system, such as programmers, operators or managers. The other kind is the youthful amateurs who experiment at night ---- the hackers.
Consider first the trusted workers. They are the group who can most easily become criminals, directly or indirectly. They may steal the information in the system and sell it to someone else for a great profit. On the other hand, they may be bribed by someone who wants to steal the data, because it may cost a criminal far less in time and money to bribe a disloyal employee than to crack the security system.
Besides disloyal workers, hackers are also very dangerous. The term "hacker" originated at M.I.T. as students' jargon for classmates who labored nights in the computer lab. In the beginning, hackers were not dangerous at all; they just stole hints for tests at the university. In the early 1980s, however, hackers became a group of criminals who steal information from commercial companies or government departments.
What can we use to protect the computer?
We have discussed the reasons for using a computer security system. But what kinds of tools can we use to protect the computer? The most common one is a password system. Passwords are usually a multi-user computer system's first line of defense against intrusion. A password may be any combination of alphabetic and numeric characters, up to a maximum length set by the particular system; most systems can accommodate passwords of up to 40 characters. However, a long password can easily be forgotten, so people may write it down, which immediately creates a security risk. Some people use their first name or another significant word. With a dictionary of 2,000 common names, for instance, an experienced hacker can crack such a password within ten minutes.
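The ten-minute figure quoted above is ordinary arithmetic: 2,000 candidate names at even a few guesses per second are exhausted well inside ten minutes. The sketch below shows what such a dictionary attack amounts to; the tiny word list and the target password are invented purely for illustration.

# Sketch of the dictionary attack the essay alludes to: if the password is one
# of a few thousand common names, trying each candidate succeeds very quickly.
# The word list and the secret below are made up for illustration.
common_names = ["michael", "jennifer", "david", "sarah", "robert"]  # imagine ~2,000 entries

def dictionary_attack(check_password, candidates):
    for attempt, word in enumerate(candidates, start=1):
        if check_password(word):
            return word, attempt
    return None, len(candidates)

secret = "sarah"  # a user who chose a first name as a password
found, attempts = dictionary_attack(lambda guess: guess == secret, common_names)
print(f"recovered {found!r} after {attempts} guesses")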
Besides the password system, card-keys are also commonly used. Each kind of card-key can carry an identifying number or password that is encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. Three types of card are usually used: the magnetic watermark card, the optical memory card and the smart card.
However, both of these tools can easily be discovered or stolen by other people: passwords are often forgotten by their users, and card-keys can be copied or stolen. Therefore, we need a higher level of computer security system. Biometric devices offer safer protection for the computer, because they can reduce the probability of mistakenly accepting an outsider to an extremely low level. Biometric devices are instruments that perform mathematical analyses of biological characteristics. However, the time required to pass the system should not be too long, and it should not inconvenience the user; for example, a system that required people to remove their shoes and socks for footprint verification would be impractical.
The individuality of the vocal signature is the basis of one kind of biometric security system. Such systems are still in the experimental stage, but reliable computer systems for voice verification would be useful for both on-site and remote user identification. The voice verifier described here is a developmental system at American Telephone and Telegraph. Enrollment would require the user to repeat a particular phrase several times. The computer would sample, digitize and store each reading of the phrase and then, from the data, build a voice signature that would make allowances for an individual's characteristic variations.
Another biometric device measures the act of writing. The device includes a biometric pen and a sensor pad. The pen converts a signature into a set of three electrical signals through one pressure sensor and two acceleration sensors. The pressure sensor detects changes in the writer's downward pressure on the pen point, while the two acceleration sensors measure the vertical and horizontal movement of the pen.
The third device scans the pattern in the eye. It uses an infrared beam that scans the retina in a circular path. A detector in the eyepiece of the device measures the intensity of the light as it is reflected from different points. Because blood vessels do not absorb and reflect the same quantities of infrared as the surrounding tissue, the eyepiece sensor records the vessels as an intricate dark pattern against a lighter background. The device samples light intensity at 320 points around the path of the scan, producing a digital profile of the vessel pattern. Enrollment can take as little as 30 seconds, and verification can be even faster. Therefore, a legitimate user can pass the system quickly, while the system rejects impostors accurately.
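Given the 320-point intensity profile the essay describes, verification reduces to comparing a fresh scan against the enrolled profile. The sketch below uses a simple normalised correlation for that comparison; the threshold, the random stand-in profiles and the noise level are all assumptions made for illustration, not details of any real retinal scanner.

# Hypothetical comparison of two 320-point retinal intensity profiles using
# normalised correlation; the 0.9 threshold is an arbitrary choice.
import numpy as np

POINTS = 320

def matches(enrolled: np.ndarray, scan: np.ndarray, threshold: float = 0.9) -> bool:
    e = (enrolled - enrolled.mean()) / enrolled.std()
    s = (scan - scan.mean()) / scan.std()
    return float(np.dot(e, s) / POINTS) > threshold   # normalised correlation

rng = np.random.default_rng(0)
enrolled = rng.random(POINTS)                               # stand-in stored profile
same_eye = enrolled + 0.05 * rng.standard_normal(POINTS)    # repeat scan, slight noise
other_eye = rng.random(POINTS)                              # a different person

print(matches(enrolled, same_eye))    # expected: True
print(matches(enrolled, other_eye))   # expected: False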
The last device that we want to discuss maps the intricacies of a fingerprint. In this verification system, the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint and is picked up by an optical scanner. The scanner transmits the information to the computer for analysis.
Although scientists have invented many kinds of computer security systems, no combination of technologies promises unbreakable security. Experts in the field agree that someone with sufficient resources can crack almost any computer defense. Therefore, the most important thing is the conduct of people. If everyone in this world behaved well, there would be no need for complicated security systems to protect the computer.
f:\12000 essays\sciences (985)\Computer\Quality Issues in Systems Development.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The period between the 1970s and 1980s was a time of great advancement in computer hardware technology, which took an industry still in its infancy to a level of great sophistication and ultimately revolutionised the information storage and processing needs of every other industry and of the entire world. However, it was also during this period that the shortcomings of implementing such technology became apparent. A significant number of development projects failed, with disastrous consequences of not only an economic but also a social nature. Seemingly, although hardware technology was readily available and ever improving, what was inhibiting the industry were the methods of implementing large systems. Consequently, all kinds of limited approaches materialized that avoided the costs and risks inherent in big-systems developments.
Times have changed, and with them our understanding of and experience in how best to develop large systems. Today's large systems yield greater benefits for less cost than those of previous decades. Large systems provide better, more timely information, the ability to integrate and correlate internal and external information, and the ability to facilitate streamlined business processes. Unfortunately, not every system that information workers develop is well implemented; this means that a computer system originally intended to make a company more efficient, productive and cost-effective may in the end do the exact opposite - namely, waste time, money and valuable manpower.
So even with all the lessons learned from the 70's and 80's, the vastly superior methodologies and knowledge of the 90's are still proving to be fallible, as the following examples suggest.
System Development Failures
In Britain in 1993, the London Ambulance Service was forced to abandon its emergency dispatch system after it performed disastrously on delivery, causing delays in answering calls. An independent inquiry ordered by British government agencies found that the ambulance service had accepted a suspiciously low bid from a small and inexperienced supplier. The inquiry report, released in February 1993, determined that the system was far too small to cope with the data load. For an emergency service, such a system error not only causes the loss of money but, more essentially, fails to dispatch ambulances correctly and promptly when critical situations arise. The implications of such a failure, both social and economic, are obvious. Since the failure, the ambulance service has reverted to a paper-based system that will remain in place for the foreseeable future.
Another failure was the collapse of the Taurus trading system of the London Stock Exchange. Taurus would have replaced the shuffling of six sorts of paper among three places over two weeks - which is how transactions in shares are settled in London - with a computerized system able to settle trades in three days. The five-year Taurus development effort, which sources estimated cost hundreds of millions of dollars, was termed a disaster, and the project was abandoned in March 1993. Exchange officials have acknowledged that the failure put the future of the Exchange in danger.
Why did they fail?
What went wrong with these systems? The real failure in the case of the London Stock Exchange was managerial, both at the exchange and among member firms. The exchange's bosses gave the project managers too much rope, allowing them to fiddle with specifications and bring in too many outside consultants and computer firms. Its new board, despite heavyweight and diverse membership, proved too remote from the project. Member firms that spent years griping about Taurus's cost and delays did not communicate their doubts concerning the project. The Bank of England, a strong Taurus supporter, failed to ask enough questions, despite having had to rescue the exchange's earlier attempt to computerize settlement of the gilts market. According to Meredith, an expert in project management issues, many system development catastrophes begin with the selection of a low bidder to do a project, even though most procurement rules state that cost should be only one of several selection criteria. Software failures also occur because the companies involved did not do a risk assessment prior to starting a project. In addition, many companies do not study the problems experienced in earlier software development projects, so they cannot apply that experience when implementing new projects.
Another source of problems is the failure to measure the quality of output during the development process. Information workers have not yet fully understood the relationship that exists between information and development. Information should be viewed as one of the essential know-how resources, and its value and necessity for development can be argued. The various areas where information is needed for development can be classified, as can the information systems and infrastructures available or required to provide for those different needs. There are a number of reasons why information has not yet played a significant role in development. One reason is that planners, developers and governments do not yet acknowledge the role of information as a basic resource. Another is that the quality of existing information services is such that they cannot yet make an effective contribution to information provision for development.
Avoiding development failure
Companies blame their unfinished system projects on factors such as poor technology, excessive budgets, and lack of employee interest. Yet all these factors can be avoided. All that is needed to develop and implement successful systems is a strong corporate commitment and a basic formula which has proven effective time after time. By following the guidelines below, systems workers can install and implement a successful, efficient system quickly and with minimal disruption to the workplace.
Understand your workplace - Every company must fully understand its existing environment in order to successfully change it.
Define a vision for the future - This objective view will help the company develop a clear vision of the future.
Share the vision - In order for the system to be successful, all those who are involved in its development must fully buy into the process and end product. This will also help further define specific goals and expectations.
Organize a steering committee - This committee, which must be headed by the executive who is most affected by the success or failure of the project, has to be committed and involved throughout all stages.
Develop a plan - The project plan should represent the path to the vision and finely detail the major stages of the project, while still allowing room for refinement along the way.
Select a team of users - A sampling of company employees is important to help create, and then test, the system. In the laboratory systems failure case, this means both the vendor and the laboratory should identify what users know and what they need to know to get the best out of the LIS. They must also develop a formal training plan before selecting a system.
Create a prototype - Before investing major dollars in building the system, consider investing in the development of a prototype or mock system which physically represents the end product. This is similar in concept to an architect's model, which allows one to actually touch and feel the end product before it is created.
Have the users actually develop the system - It is the end users who will directly benefit from the system, so why not let them have a hand in developing it? In the "DME is DBA" case, the Open Software Foundation's (OSF) Distributed Management Environment failed because the OSF tried to go from theory to a perfect product without the real-world trial and error that is so critical to technology development.
Build the solution - With a model in place, building the solution is relatively easy for the programmer. Users continue to play an important role at this stage, ensuring smooth communication and accurate user requirements.
Implement the system - Testing the system, training and learning new procedures can now begin. Because the majority of the time up until now has been spent planning and organizing, implementation should be smooth, natural and, most importantly, quick.
The Role of SAA and ACS in the Assurance of Quality
The Standards Association of Australia was established in 1922 as the Australian Commonwealth Engineering Standards Association. Its original focus was on engineering; it subsequently expanded to include manufacturing standards, product specifications, quality assurance and consumer-related standards. The role the SAA plays is in quality certification. According to the SAA, a standard is a published document which sets out the minimum requirements necessary to ensure that a material, product or method will do the job it is intended to do. For systems development, both the Standards Association of Australia and the Australian Computer Society provide guidelines and standards for developing a system, controlling its quality and preventing failure. They also ensure that systems developed to these standards are compatible worldwide.
When software development projects fail, they usually fail in a big way. For large development projects the cost is typically astronomical, both in dollars spent and in human resources consumed, and some failures have far-reaching implications that adversely affect a whole society. Too often, mistakes made in developing one project are perpetuated in subsequent ones. As with the error that occurred in the London Stock Exchange system, what they should have done was find out how the system allowed the error to happen, fix it, and then learn from it in order to build better information systems in the future.
Bibliography:
1. Black, George. "Fail-safe Advice." Software Magazine, March 1993.
2. Anonymous. "All fall down." The Economist, 20 March 1993.
3. Layland, Robin. "DME is DBA (Dead Before Arrival)." Data Communications, February 1994.
4. El Raheb, Selim. "There's No Excuse for Failure." Canadian Manager, September 1992.
5. Geyer, Stanley J., M.D. "Laboratory Systems Failure: The Enemy May Be Us." Computers in Healthcare, September 1993.
6. Australian Standard: Software Quality Management System. Standards Australia.
f:\12000 essays\sciences (985)\Computer\Questions of Ethics in Computer Systems and their Future.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1) Identify and discuss security issues and considerations evident for Information Systems and computerization in the brokerage industry. (Think about how the Internet has already influenced trading.)
"The technology is getting ahead of regulators" claims David Weissman, director of money
and technology at Forrester Research Inc., in Cambridge, Mass.
If one is to believe the quote above, it sounds very ominous for the regulators and the government to attempt to bring this medium under any kind of regulation. But what is it that the government agencies are truly looking to regulate? If you accept the argument that this medium, the Internet, is truly a public access network, then the control they would like to extend over it would make it the most regulated public access system in history. What I believe is being attempted here is regulation through censorship. Since it is almost impossible to censor the phone networks without actually eavesdropping on your calls, they have decided to regulate and censor your written word. The danger in this is that what you write as an opinion may be construed by some government regulator as a violation of a regulatory act. The flip side is that if you did the same thing through another medium, such as the phone system, nothing would ever come of it. The bigger question here is how much government people want in their lives. The Internet was presented to the public as the next great technology of this century. It is without a doubt as big as, if not bigger than, any other public means of communication that has come before it. With that in mind, I think the government is trying to extract its pound of flesh for what it believes is missed revenue that could be collected in the form of taxes.
"There are probably insiders touting stocks on the Internet either anonymously or under
assumed names," said Mary Schapiro, president of the National Association of Securities
Dealers, which oversees the NASDAQ market. The argument that they are both (the government
and NASDAQ) currently running with is the "protection of the investor". When one looks at
NASDAQ's complaint it is fairly superficial, for them it is clearly a loss of income for their
trading enviorment, for the government it is a loss of taxes that could be derived from those
trades. Are we to believe that both of these agencies only have the best intentions at heart for
those investors that might be duped. These issues have been around for along time through the
use of other mediums like the phone system, direct mail marketing, "cold calling" from "boiler
plate" houses, and even unscrupulous brokers who work in legimate brokerage houses. People
today are still the victims of these types of scams through the use of the older technologies. So
how is it that since the older scams are still being used is one to believe that they will have
anymore success tackling the complex nature of the Internet and the myriad of scams that could
generate from it. The success rate of convictions from past indiscretions is low at best, one only
has to look at the mountain of arrests for "insider trading", that the government launched during
the late 1970's through the middle 1980's to realize for all the hype of cleaning up Wall Street not
a whole lot ever came from the scourging. What it seems to me is Ms. Shapiro would be better
suited to try and align her NASDAQ forum with the Internet technology to take advantage of the
technology rather than trying to use the government to bully people into being afraid to use the
technology. Her second quote of "there is a tremendous amount of hype," comes off as nothing
but sour grapes and a big opportunity to use her position to knock the Internet. If she honestly
believes she's done everything to insure her customer base that her system of doing business is
any bit as less likely to fall victim to insider trading and traders touting of stocks beyond what
they should be touted as, she is sadly mistaken. The average investor is going to use every
opportunity presented to them if they think it will give them the advantage in investing. Just look
at places like Harry's at Hanover Square, a popular bar in the Wall Street area where depending
on the afternoon one would only need walk around the bar to hear the broker types hyping their
own stocks to each other and just about anyone in sight. Are they ready to regulate this very
common practice done for the last 30 years, or how about the broker's who spend weekends on
golf courses singing the praises of their stock to customers and brokers alike, who then come in
on Monday and trade off the weekends knowledge or what they heard at the bar. How do they
regulate this kind of "insider trading" activity, they have no way to help or protect the person
who is not privy to these kind of conversations or dealings. The availability of the Internet to
trade on this information to a larger market base I believe does even the playing for a lot of
people who do invest. I don't believe that those who would use the Internet for financial
information are that wild in their approach of their investing to fall for the super-hype of
someone they don't know. For those do they would have fallen for it through any media out their
because their approach is to win at all costs regardless whether it is legal or not.
In closing, the argument presented by NASDAQ and the government is a weak one at best. I don't believe any government agency should be pressured into regulating a medium because of private industry's displeasure with that medium. Regulations passed on the demand of private industry usually lead to more problems than before; one only has to look at the great S&L failures that occurred after the government stepped in to help the S&L industry out. We will never know the true cost of all the losses in the S&L failures that flowed from government agencies answering the call with regulations meant to prop up a dying industry. The American people and the government should take notice: what the government tried to do in regulating banking in the 1980s could very well become the debacle of the late 1990s if it tries to regulate the Internet to save some parts of the Wall Street industry. Maybe the Internet will sound the death knell for parts of a lot of industries, but I believe it is only the start of many great things to come for everyone who takes advantage of it.
2) Provide what regulations and guidelines, if any, you feel need to be implemented for this
situation.
Based on the preceding question, I believe any regulations passed to help the Wall Street industry would create situations even more serious than the S&L failures or the "insider trader" failures of the 1980s. There is always a fine line between regulation for the good of the consumer and regulation designed to protect an industry. I believe there are enough regulations out there to protect the Wall Street industry as it presently exists; to have to conjure up regulations for every medium that could possibly come down the road, to protect every industry or private citizen's environment, is just too much government in everyone's face. Not only will the federal government want its piece of the action, but cash-strapped states like New York will also be looking for theirs too. I will discuss this more in question #3.
3) Discuss ethics and surveillance concepts that pertain to this situation.
I would like to discuss the ethics problems from both sides of the equation. From the standpoint of the trader, the ethics problems are fairly obvious. Traders have to do their jobs within the present guidelines that regulate the profession. They are not to trade on information that has been obtained illegally through whatever means. This includes what I mentioned in Question #1 about information obtained through means the general public is not privy to: all those bar dates and golf dates where information about stocks is bantered about like idle gossip at a garden club party. Technically this is "insider trading", but how does the government intend to alleviate this problem through any kind of surveillance? It can't, any more than it could alleviate the problem over the phone network short of tapping people's phones and monitoring their conversations. At what point does monitoring for wrongdoing cross over into infringement of Constitutional rights? The danger here is obvious: for every regulation the government perceives as needed, the American citizen gives up a little more of their right to privacy and free speech. For the trader, this comes in the form of whatever he says or writes about a particular stock; he has to worry that it will be construed as classified information somehow derived from an illegal source. On the public's side, the worry is that whatever they traded cannot be made to look as though they received some kind of special information that helped them trade successfully and earn a profit.
For both sides, the questions of ethics in trading can only be answered by those who are involved. The majority of the industry does do everything above board, and I believe there are enough regulations and enough surveillance out there already to keep a fairly tight lid on all of those who choose to be involved. Nothing is ever 100 percent, but what is being done to police the industry is enough of a deterrent against being persuaded to do the "insider trading" thing. You will always have those who will break the law in pursuit of dollars, and some will even break the law for the thrill of getting over on the system, but for the vast majority this is not the way they invest, and they should not be penalized by overzealous government regulators and an industry looking to extract dollars out of a technology. You will never be able to stop the criminal types who will use the Internet for criminal advantage, any more than you can stop all street crime. You cannot regulate the Internet to prevent crime any better than you can regulate all people out of doing criminal things; there is a small minority that will always find the easy or criminal way around everything. To regulate the Internet in an attempt to protect the public is just another form of censorship. The government would be riding a very fine line between this concept of protection and the right of the individual to express an opinion. If I publish on my Internet page that I made a great buy of stock this week, that is my opinion and only my opinion. Should the government, or any private group, come along and attempt to sue me or censor me in some form or fashion just for my opinion? Should I worry that someone reading my page decides to act on what I wrote? If they do, I would have to say they are rather foolish to invest their money on my opinion alone. By the same token, I would never react and invest on someone's say-so without first thoroughly checking out all the facts. Do people go out and kill because they see a violent movie? I don't think so! Then why would the government say it needs to protect the public's interest by possibly watching my home page or anyone else's out there? Do they listen to your phone calls, read your mail, read your E-mail, tell you what books to read or what movies to see? Then why would I want them surfing the Internet under the guise of public protection? I'm an adult and would like to be treated as such; I can make correct decisions not only about how my money is spent but where. If I found something on the Internet that I felt was truly criminal, I would alert people to watch out for whatever was out there. You would be surprised how well Internet people police the net and warn their friends and others.
Don't buy into the government hype of public protection: for all the mediums I listed above, the scams related to Wall Street are still happening big time, and those technologies are already regulated for our protection! It is not regulation they are after, it is ownership of the Internet, and with the help of big business, which sits there and cries foul, they may very well achieve it. Talk about two groups in need of finding some ethics; big business and the government are sorely lacking. This is the first major technology that has leveled the playing field for even the littlest user. Don't buy the hype; wherever you can, try to keep the regulators out, by voting, writing your congressman, or whatever it takes legally. We are intelligent enough to make our own decisions!!
4) The year is 2016. Describe how information technology and computerization have impacted individuals and society in the past 20 years.
Let's look at it from an everyday perspective. First, you'll be gently awakened by an alarm that you set by voice the night before, playing whatever you want to hear; again, that decision was made the night before. You'll enter a kitchen where, on voice command, you order your cup of coffee and whatever breakfast you want, because your computer-run appliances will be able to do this for you. Next you go to your printer and get a copy of the newspaper you want to read, because you will have programmed it to extract information from five or six different sources, and it will be waiting for you. If you're feeling really lazy, you could have your computer read it to you in a smooth digitized voice that you've selected. Then you finish up in your computerized bathroom, which not only knows how hot you like your shower but also dispenses the right amount of toothpaste onto your toothbrush. After dressing from your computerized closet, which selected all your clothes for the week, you'll enter your computerized car, which is entirely voice activated. There is also a satellite guidance system for the times you might get lost, but you've already programmed the car to know how to get you to work. Work will be only a three-day-a-week affair, with the other two days spent working out of your home. Your office will be totally voice activated; you'll run all of the programs you need for the day by voice.
You'll conference-call other office sites, but in complete full-motion video. The next step will be 3D holograms, but that hasn't quite come to market yet. You'll instruct your computer by voice to take any E-mail you need to send, and it will be sent in real time. The rest of the office can also forward you any phone calls or messages, because the central computer will know your whereabouts in the office at any time as you pass through any door. When your day is over, you'll leave instructions for your computer to watch certain events throughout the night, and if need be you can be reached at home. You'll be paid in credits to the credit cards of your choice; there will no longer be money exchanged. To help protect against fraud on your cards, when you spend money you'll use your thumbprint as you would your signature now. At night you'll come home to a far less stressful environment, because the computer appliances in your house have taken away a lot of the mundane jobs you used to do. You'll be able to enjoy high-definition TV and receive some 500 channels. After checking with your voice-activated home computer to see if there are any phone messages or E-mail, you'll retire to bed, of course in your climate-controlled home that knows what settings you like in which parts of the house. Oh, yes: you won't even have to tell your voice-activated computer not to run the computerized sprinkler system for your lawn, because it will have realized from the weather report that it is going to rain.
f:\12000 essays\sciences (985)\Computer\Radar A Silent Eye in the Sky.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Radar: A Silent Eye in the Sky
Daniel Brosk
Period Two
Today's society relies heavily on an invention taken for granted: radar. Just about everybody uses radar, whether they realize it or not. Tens of thousands of lives rely on the precision and speed of radar to guide their plane through the skies unscathed. Others just use it when they turn on the morning news to check the weather forecast.
While radar seems to be an important part of our everyday lives, it has not been around for long. It was not put into service until 1935, on the eve of World War II. The British and the Americans both worked on radar, but they did not work together to build a single system; each developed its own system at the same time. In 1935, the first radar systems were installed in Great Britain, called the Early Warning Detection system. In 1940, Great Britain and the United States installed radar aboard fighter planes, giving them an advantage in plane-to-plane combat as well as air-to-ground attacks.
Radar works on a relatively simple theory. It's one that everybody has experienced in their lifetime. Radar works much like an echo. In an echo, a sound is sent out in all directions. When the sound waves find an object, such as a cliff face, they will bounce back to the source of the echo. If you count the number of seconds from when the sound was made to when the sound was heard, you can figure out the distance the sound had to travel. The formula is:
D = (S / 2) x 1100
(Half of the total time, in seconds, multiplied by 1100 feet per second equals the distance from the origin to the reflection point.)
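To make the arithmetic concrete, here is a small sketch (in Python) of the echo calculation above. The 1100 feet-per-second figure comes from the formula itself; the function name and the sample numbers are just for illustration.

SPEED_OF_SOUND_FT_PER_S = 1100  # approximate speed of sound used in the formula above

def echo_distance_feet(round_trip_seconds):
    # Half of the round-trip time, multiplied by the speed of sound,
    # gives the one-way distance to the reflecting object (in feet).
    return (round_trip_seconds / 2) * SPEED_OF_SOUND_FT_PER_S

# Example: an echo heard 4 seconds after the shout puts the cliff about 2200 feet away.
print(echo_distance_feet(4))  # 2200.0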
Of course, radar is a much more complicated system than just somebody shouting and listening for the echo. In fact, modern radar listens not only for an echo, but where the echo comes from, what direction the object is moving, its speed, and its distance. There are two types of modern radar: continuous wave radar, and pulse radar.
Pulse radar works like an echo. The transmitter sends out short bursts of radio waves. It then shuts off, and the receiver listens for the echoes. Echoes from pulse radar can tell the distance and direction of the object creating the echo. This is the most common form of radar, and it is the one that is used the most in airports around the world today.
Continuous wave radar works on a different principle, the Doppler effect. When a radio wave of a set frequency hits a moving object, the frequency of the reflected wave changes according to how the object is moving. If the object is moving toward the Doppler radar station, it will reflect back a higher frequency wave; if it is moving away, the frequency of the reflected wave will be lower. From the change in frequency, the speed of the target can be calculated. This is the type of radar that is used to track storms, and the type used by policemen in radar guns.
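As a rough illustration of the calculation hinted at above, the sketch below recovers a target's speed from the Doppler shift. The two-way relation used here (the shift is roughly 2 x speed x transmit frequency / speed of light) and the sample numbers are assumptions for illustration, not figures from this essay.

SPEED_OF_LIGHT_M_PER_S = 3.0e8

def target_speed_m_per_s(transmit_freq_hz, frequency_shift_hz):
    # For a wave reflected off a moving object, delta_f is about 2 * v * f0 / c,
    # so the speed toward (+) or away from (-) the radar is delta_f * c / (2 * f0).
    return frequency_shift_hz * SPEED_OF_LIGHT_M_PER_S / (2 * transmit_freq_hz)

# Example: a 24 GHz radar gun measuring a 2000 Hz shift implies a car
# closing at about 12.5 meters per second (roughly 28 mph).
print(target_speed_m_per_s(24e9, 2000))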
These are the basics of radar. But, there is a lot of machinery and computer technology involved in making an accurate picture of what is in the sky, on the sea, or on the road. Most radar systems are a combination of seven components (See Appendix A). Each component is a critical part of the radar system.
The oscillator creates the actual radio waves, which it then sends to the modulator.
The modulator is a part of the timing system of a radar system. The modulator turns on and off the transmitter, creating the pulse radar effect. It tells the transmitter to send out a pulse, then wait for four milliseconds.
The transmitter amplifies the low-power waves from the oscillator into high-power waves. These high-power waves usually last for one-millionth of a second.
The antenna broadcasts the radar signals and then listens for the echoes.
The duplexer is a device that permits the antenna to be both a sending device, and a receiving device. It routes the signal from the transmitter to the antenna, and then routes the echoes from the objects to the receiver.
The receiver amplifies the weak signals reflected back to the antenna. It also filters out background noise that the antenna picks up, sending only the correct frequencies to the signal processor.
The signal processor takes the signals from the receivers, and removes signals from stationary objects, such as trees, skyscrapers, or mountains. Today, this is mostly done by computers.
And last, but not least, we come to the display screen. For many years, this was a modified TV tube with an electroluminescent coating, which lit up when hit by electrons and retained the glow for a few seconds. This is what created the "blips" on the radar screen, which flashed about every ten seconds and then faded. In newer systems, the signal processor and the display screen are combined into a single computer. With the power of today's computers, this information is transmitted around the world, to other airports, to the government, and to TV stations, where weather broadcasts are made.
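The timing figures mentioned above (a pulse lasting about one-millionth of a second, followed by a wait of roughly four milliseconds) also set how far away an echo can usefully come from. The sketch below works that out, assuming the radio waves travel at the speed of light; reading the four-millisecond wait as the listening window, and the function names themselves, are illustrative assumptions rather than specifications.

SPEED_OF_LIGHT_M_PER_S = 3.0e8

def range_from_echo_delay(delay_seconds):
    # The echo's delay is a round trip, so the one-way distance is half of c * t.
    return SPEED_OF_LIGHT_M_PER_S * delay_seconds / 2

def max_unambiguous_range(listen_window_seconds):
    # The farthest target whose echo still returns before the next pulse goes out.
    return range_from_echo_delay(listen_window_seconds)

# With a four-millisecond wait between pulses, echoes can return from as far as ~600 km.
print(max_unambiguous_range(0.004))  # 600000.0 meters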
Today, radar systems are standard around the country. The United States has the most sophisticated radar system, both on the ground and in the sky. On the ground, we track planes, weather, ships, and many Intercontinental Ballistic Missiles. From space, we use satellites with radar to map the globe, spy on foreign countries, and track over the oceans. In each instance, radar plays a key role in our day-to-day lives.
Bibliography
Hitzeroth, Deborah. Radar: The Silent Detector, 96 pp., ills., Lucent Books, 1990.
Page, Irving H. "RADAR," The New Book of Popular Science, pgs. 246-253, Grolier Inc. 1994.
f:\12000 essays\sciences (985)\Computer\regulating the internet whos in charge.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
James H. Huff
English 111G
Fall 1996
The internet was started by the military in the late sixties, and has since grown into an incredibly large and complex web, which will no doubt affect all of us in the years to come. The press has recently taken it upon itself to educate the public about the dark side of this web, a network which should be viewed as a tremendous resource of information and entertainment. Instead, due to this negative image, more and more people are shying away from the internet, afraid of what they may find there. We must find a way to regulate what is there, protect ourselves from what cannot be regulated, and educate the general populace on how to use this tremendous tool.
''The reality exists that governance of global networks offers major challenges to the user, providers, and policy makers to define their boundaries and their system of government'' (Harasim, p. 84)
The internet is a group of networks, linked together, which is capable of transmitting vast amounts of information from one network to another. The internet knows no boundaries and is not located in any single country. The potential the internet has to shape our world in the future is inconceivable. But with all its potential, the internet is surrounded by questions about its usage.
The internet was named the global village by McLuhan and Fiore in 1968, but recently it has been more properly renamed the global metropolis. Robert Fortner defines the internet as a place where people from all different cultures and backgrounds come together to share ideas and information.
" Communication in a metropolis also reflects the ethnic, racial, and sexual inequalities that exist generally in the society. '' (Fortner, p25)
When a person enters a global metropolis to engage in communication, they do not know whom they will interact with, nor do they know what information they may come across. This brings an important question to mind: if this is a community, a global metropolis, should it not be governed to protect the members of the community? More importantly, can a community that knows no boundaries and belongs to no country be regulated? And who can, or should, regulate it?
With the vast amounts of information transmitted from network to network, and with some information remaining at sites only temporarily or disappearing within seconds, how can anyone regulate it? In a meeting of the Senate Select Committee on Community Standards in Australia, iiNet, an Australian internet provider, presented figures on how much information passes through its server daily.
''Our own network sees over 200,000 items of email between individuals every day of the year, and this is increasing. In USENet news, the 'discussion areas', iiNet sees 150Mb of typed data every day, over 100,000 pages. This includes people chatting idly, informational postings, questions, answers and anything else that the committee can imagine people wishing to talk about.'' (Senate Committee).
This is an example of one server, and the information that passes through it originates from all over the world. The point is that this one provider cannot possibly review everything that passes through its server.
Should the internet be regulated? We know that it cannot be, and never will be, perfectly regulated, and therefore the user will always need to be aware that he is entering a global community and may find some information offensive.
For example, one of the hottest issues in the news is the transmission of pornography over the internet. Individuals and companies do upload and download pornography, ranging from pictures of nude men and women to child pornography.
Many schools have adopted the idea of bringing computers into the classrooms.
"In the classroom, where youngsters are being introduced to the machines as early as kindergarten, they astound-and often outpace-their teachers with their computer skills." (Golden, 219) Educating students about computer literacy is important for the upcoming generation. Computer literacy will become just as important for people to understand as reading, writing and arithmetic are.
With this increased ability at such a young age comes the ability to access the net, and the places on the net that we as parents don't want our children going, much the same as the ability to walk enables them to go places they don't belong.
The United States has laws which regulate pornography with a clear understanding of the First Amendment's allowance for freedom of speech. There is a difference between obscenity, which is not protected by the First Amendment, and indecency, which is!
The way the U.S. determines what is obscene is by using the Miller three-part test. The test is listed here:
1. Would the average person, applying contemporary community standards, find that the work, taken as a whole, appeals to the prurient interest?
2. Does the work depict or describe, in a patently offensive way, sexual conduct specifically defined by the applicable state law?
3. Does the work, taken as a whole, lack serious literary, artistic, political, or scientific value?
As one might imagine, it is complex enough trying to decide what is obscene and what isn't using this test. All three answers must be ''yes'' in order to deem something obscene. Every state has different pornography laws based on this test, because every state has different community standards. Yet we are dealing with a global metropolis, in which many people with different national standards exist.
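As a rough sketch of how the three prongs combine (the function name and inputs here are hypothetical, and real obscenity determinations are made by courts, not software), material is deemed obscene only when every prong is answered yes:

def miller_test(appeals_to_prurient_interest, patently_offensive_under_state_law, lacks_serious_value):
    # All three prongs must be "yes" for material to be deemed obscene.
    return (appeals_to_prurient_interest
            and patently_offensive_under_state_law
            and lacks_serious_value)

# A work with serious literary, artistic, political or scientific value fails the
# third prong, so it is not obscene even if the first two answers are yes.
print(miller_test(True, True, False))  # False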
''National laws are just that, national in orientation and application.'' (Harasim, p. 923)
If we are proposing regulating the internet to make it illegal to distribute and receive obscene material we need to find a law that the world could agree on. If the world accepted Miller's test of how to determine obscene material, what would be the standard needed in order to answer the first question?
These are the questions facing the government, providers, and users. Many users say regulating the internet is foolish and futile. A new act introduced in the Senate, the Exon/Gorton Communications Decency Act, would give the government authority over what can and cannot be sent over the internet, and many users are lobbying voters to write their senators and ask them not to vote for it, invoking the First Amendment.
Is anyone regulating the net? The answer is yes; the providers and some universities are trying to regulate some things. Daniel C. Robbins, the author, artist, and producer of the bondage, domination, submission, sadism, and masochism web page, was told by an administrator at Brown University that he would have to shut down his page because of its content. The web page contained stories ranging from married couples tying each other up to non-consensual rape, torture, and murder, as well as pictures and an interactive virtual reality dungeon. (Robbins)
America Online (AOL) has also pulled people's posts because of their content. The reasoning is that these people have violated the Terms of Service agreement they accept when they sign onto AOL. The Terms of Service agreement for AOL states that members must refrain from using vulgarity and insulting language, and from talking explicitly about sex. Immediately people cry censorship and plead their First Amendment rights! But in both cases, First Amendment rights did not apply. AOL is a private provider; it has a right to decide whom it lets onto its service and breaks no laws by not allowing members complete freedom of speech. The university likewise has the right to say what is received or sent on its server.
The government has started to take a stronger position regarding the internet. Officials have investigated a few incidents concerning child pornography, and have begun to investigate other obscene material being sent over the net. Child pornography is defined as pictures, or any visual form, that show minors under the age of 18 in a sexual way. The material does not need to be legally obscene in terms of the test stated above to be deemed child pornography. All child pornography is illegal and does not enjoy First Amendment protection. Written material about children engaged in sexual acts does not count as child pornography, because the material has to use real minors; drawings also do not count. It is easier to regulate against child pornography because, in the U.S., merely possessing it is illegal.
Whereas a person cannot be prosecuted for having other obscene material in his home, if child pornography is found the person will be prosecuted. If one uploads child pornography, or obscene material for that matter, they can be charged with transporting obscene material across state lines for distribution, which is a crime. Officials, especially when it comes to child pornography, are starting to take as strong a stand as they can.
The only reason the government has been able to respond the way it has is that it has been able to prosecute people in the U.S., mainly for downloading rather than uploading child pornography, because the U.S. law is so strong. This has made some users concerned about whether they are involved in illegal activity. The authors of Cyberspace and the Law have made a flow chart to demonstrate what should and should not lead a user into legal problems. It points out something even more ominous than pornography: electronic fraud.
"Computer crime can be enormously profitable." (Logsdon, 162)
"The opportunities for creative fraud are vastly greater than they used to be."(Baig, Business week, Nov. 14, `94)
Computer embezzlement can be very profitable, with literally hundreds of thousands of dollars right at the embezzler's fingertips. Many computer embezzlers are not caught, and if they are, it is usually only by chance. Also, those who embezzle and are caught usually "escape prosecution because the institutions they rob prefer to avoid the unfavorable publicity of a public trial." (Logsdon, 164-5)
The temptation is great for many who are computer geniuses. "The average lifted in an embezzlement involving computers is $430,000-and it is not uncommon for the total to go considerably higher." (Logsdon, 163)
This leads to the question of trust and privacy. New technologies are being developed to help protect citizens from fraud and give them a sense of privacy, but in the meantime consumers must remember the old adage: "If it sounds too good to be true, it is!"
There are still many flaws that need to be worked out with the new computer revolution. As someone had written in a usenet group on the Internet: "The ultimate authority of a claim to my identity is me and my credibility."(Internet source #1)
It is still up to the individual whether or not to believe what has been said and by whom it was said.
Can the net be regulated? What is it that we want the internet to be for us and our society?
Is it safe to allow our children to play with a system that adults do not fully understand and are not sure how to control? These are not easy questions to answer. As the net grows, governments will most certainly become more involved, and regulation will most certainly follow. Most importantly, we as adults, parents and educators must find ways to teach our children how to use this powerful tool constructively.
Granted, that is not easy in today's fast-paced, two-income, latch-key-kid society, but it is imperative that we find a way. Maybe the answer is to take an hour of television time and devote it to computer literacy. (Then, while we're at it, let's take another hour and read a book!) If that's not possible, there are ways to block out certain sites, much the same as the V-chip used on televisions. These are readily available, many at no cost on the internet. This allows us, as users, to regulate what enters our homes.
References
1. Harasim, Linda. Global Networks. Massachusetts Institute of Technology, 1993.
2. Fortner, Robert. International Communication. Wadsworth, Inc., Belmont, Calif., 1993.
3. Senate Select Committee on Community Standards.
4. Robbins, Daniel. Documents on bondage web page.
5. Cavazos, Edward, and Morin, Gavino. Cyberspace and the Law. Massachusetts Institute of Technology, 1994.
6. Turner Research Committee.
7. Berniker, Mark. "Internet begins to cut into TV viewing." Broadcast and Cable, Vol. 125, Nov. 6, p. 113.
8. Kanamine, Linda. "Gamblers stake out the Net." USA Today, Nov. 1, cover story.
f:\12000 essays\sciences (985)\Computer\Response to AOL contraversy essay.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The article "America Online, while you can" by Bob Woods is all about the hoopla concerning the fact that America Online, or AOL, has not been able to accommodate its vast amount of customers. This is due to AOL's new flat rate, which substituted their original hourly deal. Many AOL users experience busy signals when trying to log on. When and if they do get on AOL, the service runs extremely slow because of the overload of users. Woods threatens that AOL will lose many of their customers if they don't improve their resources. Other companies should beef-up their advertising and try to cash in by targeting the unsatisfied AOL users.
In this day and age of internet use, people in any given location can choose from at least fifteen national companies, such as Sprintlink, CompuServe, Ameritech, Erols and so on. Using these services is less expensive than America Online: for unlimited use they average around $10 to $15 a month, as opposed to AOL's hefty $19.95. AOLers are paying for the appealing menus, graphics and services AOL uses to steer its customers to the internet. These same features can be found anywhere else on the net with the aid of any search tool, such as Infoseek, Yahoo, Microsoft Network or WebCrawler. These sites are no harder to use, and they provide plenty of helpful menus and information.
In Woods's article, he states that he lives in Chicago, and AOL has several different access numbers to try if one is busy. He writes that he has often tried to log on using all of the available numbers and still been unsuccessful. This is a problem for him because he depends on AOL to "do the daily grind of (his) job as a reporter and PM managing editor." If I were not satisfied with the performance of my internet provider, which happens to be Sprintlink, I would not complain to the company; I would take my money elsewhere, especially if my job depended on using the internet. With all of the other options available, the wasted time and inevitable frustration of using AOL could be eliminated. I live in Richmond, Va., which is a fairly big city, and have not once been logged off or gotten a busy signal using Sprintlink, even though I have only one access line available with my provider as opposed to AOL's multiple lines. I agree with Woods that people will, in most circumstances, get better internet service and customer service with a local, smaller or more specialized company.
I think it is safe to say that America Online has done too little too late. In the internet business, or any commercial mega-corporation, I believe that you shouldn't advertise and try to attract more clients than you are prepared to handle. AOL most definitely should have put more thought into the response its extensive advertising campaigns were sure to bring. I think that eventually people will realize that many other options exist, break away from AOL and find other providers. CompuServe apparently thought so too, placing an ad during the Super Bowl stating "We have the best internet service, call 1-800-NOT-BUSY." America Online users have recently banded together and filed a class action suit over all this. I don't see that as necessary, because they could easily find a smaller, localized company that would be more than happy to help out with today's demand for internet service. I do not understand why the unsatisfied AOL customers have not already taken their business elsewhere. Well, I can't make decisions for other people, but this should not have been such a big deal.
Throughout my life, I have found that if something is not working out for you, it is better to evaluate your other options and find something more advantageous to you than to complain to the source and ask them to do the changing. Basically, what I am saying is if you have a problem, fix it yourself and don't whine or cry to everyone else about your misfortunes. It would save a lot of time, trouble and controversy.
f:\12000 essays\sciences (985)\Computer\review of online newspapers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Review of on-line publications
Because of the prevalence of the internet in today's society, many thousands of papers now publish an on-line edition. It is through this medium that they hope to make inroads in the communications market. Many see it as a necessary step because of the readership lost to the internet, broadcast journalism and radio. In my review I will examine an on-line edition of a newspaper from each of the continents and comment on some of the technical aspects each employs. I will also discuss the tools many of the papers are using, tools that not only make them a hybrid of broadcast, radio and print journalism but also establish them as the only medium practicing interactive reporting.
Representing the continent of Asia and the city of Hong Kong is the South China Morning Post. It is one of only a few English-language newspapers in the territory. The Post has an air of journalistic freedom the other newspapers do not seem to have; one of the lead articles outlined this concern and dispelled the rumor that China would censor the paper when the city is handed over in July.
The Morning Post is a very up-to-date paper that features a continually updated breaking-news sidebar, a very useful and inviting feature which enables it to keep up with, and often scoop, the broadcast media. The newspaper also has a technology section which caters to the on-line user, and it uses JavaScript to make it feel more like an interactive medium.
Of the papers available from India, the Times of India is one of the finest. It features a quite extensive archive, a metropolis section which features two cities a day, an easily accessible reprint section for syndication of articles, and a career-opportunities-in-India section aimed at the overseas applicant. It has some drawbacks, though: the world section, although very extensive, is more of an overview of the continent and the region than of the entire world, and the paper has an uninviting look that makes it a little less reader-friendly than some of the other on-line newspapers.
Business Day is the best offering of the dailies for South Africa. Its page takes a little longer to load, but that is due to a very dominant graphic which clearly outlines all of the major markets of the world. That major-market graphic is the most impressive element of this on-line newspaper. Coming in second is Business Day's entertainment section, which has well-written and intelligent reviews of cinema, books, theater, wine and food. As a whole the paper has some major drawbacks: there are no pictures, essentially no world news, and the front page contained a spelling error.
The Times of London is the signature paper of Europe. It is not only very easy to use but also very insightful and timely. It has a crossword puzzle, a first among all of the newspapers that I reviewed for this assignment. It has a true world news section with comprehensive coverage of the world. It has a very slick look that is complemented by many color and black-and-white pictures. Many of the stories incorporate graphics, which lends the site a very contemporary web design. The most compelling aspect of the paper was the feature "Personal Times," a customized newspaper built from the reader's general interests. The feature was one of many which distinguished the Times from any other paper I reviewed. Overall the Times was a very exciting newspaper and one which is very insightful into its readers' needs.
An extremely modern-looking design dominates the Christchurch Press, one of the three on-line papers from New Zealand. It has the distinction of being the only paper I could find that had a photo gallery. The gallery was a very welcome find amongst the many papers on the internet that have all but abandoned photojournalism for more graphics and text. Surprisingly, it does not have a world news section. It has some unusual fare for an on-line newspaper, such as a motoring section, a teen section, a TV guide and a computer tools section. It also distinguishes itself from other on-line newspapers by offering sound files, which demonstrate how on-line news and broadcast will probably merge into a gray area in the future.
The Vancouver Sun is my installment for the North American continent. It is a very efficient newspaper that maximizes all the space it uses and loads very rapidly. It has distinct departments such as a net guide, a personal finance section and a trivia page. It is similar to other western papers in that it has an extensive, very comprehensive world news section. In the world section is a scrolling synopsis of the day's news that is very attractive to the on-line reader. Overall the Vancouver Sun represents some of the finest North America has to offer in the form of the on-line newspaper.
In my search of the newspapers of South America I did not come across one that was printed in English or gave the option of choosing English. I chose the Gazeta of Brazil to review because it seemed to be the most modern of the South American newspapers. Although I could not read the features or the news articles, it seemed to have an extensive listing of all categories as well as a large world news section.
f:\12000 essays\sciences (985)\Computer\ROBOTICS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The image usually thought of by the word robot is that of a mechanical being, somewhat human in shape. Common in science fiction, robots are generally depicted as working in the service of people, but often escaping the control of the people and doing them harm.
The word robot comes from the Czech writer Karel Capek's 1921 play "R.U.R." (which stands for "Rossum's Universal Robots"), in which mechanical beings made to be slaves for humanity rebel and kill their creators. From this, the fictional image of robots is sometimes troubling, expressing the fears that people may have of a robotized world over which they cannot keep control. The history of real robots is rarely as dramatic, but where developments in robotics may lead is beyond our imagination.
Robots exist today. They are used in a relatively small number of factories located in highly industrialized countries such as the United States, Germany, and Japan. Robots are also being used for scientific research, in military programs, and as educational tools, and they are being developed to aid people who have lost the use of their limbs. These devices, however, are for the most part quite different from the androids, or humanlike robots, and other robots of fiction. They rarely take human form, they perform only a limited number of set tasks, and they do not have minds of their own. In fact, it is often hard to distinguish between devices called robots and other modern automated systems.
Although the term robot did not come into use until the 20th century, the idea of mechanical beings is much older. Ancient myths and tales talked about walking statues and other marvels in human and animal form. Such objects were products of the imagination and nothing more, but some of the mechanized figures also mentioned in early writings could well have been made. Such figures, called automatons, have long been popular.
For several centuries, automatons were as close as people came to constructing true robots. European church towers provide fascinating examples of clockwork figures from medieval times, and automatons were also devised in China. By the 18th century, a number of extremely clever automatons became famous for a while. Swiss craftsman Pierre Jacquet-Droz, for example, built mechanical dolls that could draw a simple figure or play music on a miniature organ. Clockwork figures of this sort are rarely made any longer, but many of the so-called robots built today for promotional or other purposes are still basically automatons. They may include technological advances such as radio control, but for the most part they can only perform a set routine of entertaining but otherwise useless actions.
Modern robots used in workplaces arose more directly from the Industrial Revolution and the systems for mass production to which it led. As factories developed, more and more machine tools were built that could perform some simple, precise routine over and over again on an assembly line. The trend toward increasing automation of production processes proceeded through the development of machines that were more versatile and needed less tending. One basic principle involved in this development was what is known as feedback, in which part of a machine's output is used as input to the machine as well, so that it can make appropriate adjustments to changing operating conditions.
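To make the feedback principle concrete, here is a minimal, illustrative sketch (not drawn from any particular machine described above) in which a device repeatedly corrects its own output by comparing a measurement against a target; the set point, gain and temperature values are made up.

    def feedback_correction(target, measured, gain=0.5):
        # Part of the machine's output (the measurement) is fed back in,
        # and the difference from the target becomes the next adjustment.
        return gain * (target - measured)

    # Example: a heater creeping toward a 200-degree set point.
    temperature = 180.0
    for _ in range(5):
        temperature += feedback_correction(200.0, temperature)
        print(round(temperature, 1))   # 190.0, 195.0, 197.5, 198.8, 199.4

The point of the sketch is only that the adjustment shrinks as the output approaches the target, which is what lets such machines adapt to changing operating conditions without constant tending.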
The most important 20th-century development, for automation and for robots in particular, was the invention of the computer. When the transistor made tiny computers possible, they could be put in individual machine tools. Modern industrial robots arose from this linking of computer with machine. By means of a computer, a correctly designed machine tool can be programmed to perform more than one kind of task. If it is given a complex manipulator arm, its abilities can be enormously increased. The first such robot was designed by Victor Scheinman, a researcher at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology in Cambridge, Mass. It was followed in the mid-1970s by the production of so-called programmable universal manipulators for assembly (PUMAs) by General Motors and then by other manufacturers in the United States.
The nation that has used this new field most successfully, however, is Japan. It has done so by making robot manipulators without trying to duplicate all of the motions of which the human arm and hand are capable. The robots are also easily reprogrammed and this makes them more adaptable to changing tasks on an assembly line. The majority of the industrial robots in use in the world today are found in Japan.
Except for firms that were designed from the start around robots, such as several of those in Japan, industrial robots are still only slowly being placed in production lines. Most of the robots in large automobile and airplane factories are used for welding, spray-painting, and other operations where humans would require expensive ventilating systems. The problem of workers being replaced by industrial robots is only part of the issue of automation as a whole, and individual robots on an assembly line are often regarded by workers in the familiar way that they think of their car.
Current work on industrial robots is devoted to increasing their sensitivity to the work environment. Computer-linked television cameras serve as eyes, and pressure-sensitive skins are being developed for manipulator grippers. Many other kinds of sensors can also be placed on robots.
Robots are also used in many ways in scientific research, particularly in the handling of radioactive or other hazardous materials. Many other highly automated systems are also often considered as robots. These include the probes that have landed on and tested the soils of the moon, Venus, and Mars, and the pilotless planes and guided missiles of the military.
None of these robots look like the androids of fiction. Although it would be possible to construct a robot that was humanlike, true androids are still only a distant possibility. For example, even the apparently simple act of walking on two legs is very hard for computer-controlled mechanical systems to duplicate. In fact, the most stable walker yet made is a six-legged system. A true android would also have to house or be linked to the computer-equivalent of a human brain. Despite some claims made for the future development of artificial intelligence, computers are likely to remain calculating machines without the ability to think or create for a long time.
Research into developing mobile, autonomous robots is of great value. It advances robotics, aids the comparative study of mechanical and biological systems, and can be used for such purposes as devising robot aids for the handicapped.
As for the thinking androids of the possible future, the well-known science-fiction writer Isaac Asimov has already laid down rules for their behavior. Asimov's first law is that robots may not harm humans either through action or inaction. The second is that they must obey humans except when the commands conflict with the first law. The third is that robots must protect themselves except, again, when this comes into conflict with the first law. Future androids might have their own opinions about these laws, but these issues must wait their time.
f:\12000 essays\sciences (985)\Computer\Save the Internet!.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Did you know that 83.5% of the images available on the Internet are pornographic (Kershaw)? Did you know that pornography on the Internet is readily available to curious little children who happen to stumble upon it?
Today the Internet, which became popular only a few years ago, is unequivocally one of the most revolutionary innovations in the computer world. The information superhighway has changed people's lives dramatically and has created many exciting new opportunities as well as markets to be exploited. But, unfortunately, the Internet has also created a haven for the depravity of pornography and hate literature. This calls for immediate action, and the only solution offered so far is censorship. The Internet must be censored to the utmost.
Many people complain that censorship violates the First Amendment and suppresses freedom of speech, but there is a point where freedom of speech becomes corrupt; it only creates an excuse for vile pornographers to poison our nation, and our children in particular.
Pornography is regarded by most people as immoral and downright filthy. It denies human dignity and often stimulates the user to violent acts (Beahm 295); pornography and violence are therefore correlated. It trivializes human beauty and converts it into commercialized slime (Beahm 295). Moreover, the consumption of pornography can lead to a detrimental addiction, and the consumer can become a slave to it (Beahm 297). In short, pornography is a very addictive drug, with a potency equal to or greater than that of hard-core drugs like heroin and cocaine. Can you imagine a ten-year-old innocently surfing the Internet, suddenly bumping into a pornographic site depicting explicit images of naked women, and becoming addicted to it? The damage is long-term, and in time we would have a nation of perverts. Galbraith says, "The U.S. constitution does not forbid the protection of children from a pornographer's freedom of speech. That must be inferred through the First Amendment." These are our children, and we have the right to protect them. The fact that pornography is mentally damaging is further aggravated by its availability to all Internet users, which is a major problem as well.
The ridiculously easy accessibility of all types of pornography to anyone who logs onto the Internet has raised major concern from both the government and the public. The Internet, the biggest interactive library that has ever existed, has no owner, president, chief operating officer or pope (Montoya). "Inevitably, being an uncontrolled system, means that the Internet will be subjected to subversive applications of some unscrupulous users." (Kershaw) Internet users can publish pornography and hate literature, and that information is literally made available to millions of Internet users worldwide (Kershaw).
A five-year-old can easily obtain pornography on the Internet by just typing the word "sex" into a search engine; literally hundreds of thousands of listings will appear on-screen, each leading to a smut page. This kind of easy accessibility has people calling for censorship (Kershaw).
"Most popular images available were of hardcore scenes featuring such acts as paedophilia, defection, bestiality and bondage." (Kershaw) According to Chidley, "In 1994, more than 450,000 pornographic images and text files were available to the Internet users around the world; that information had been accessed more than 6 million times." (58) This shocking figure is further agitated by the fact that pornography would be very harmful to the young unsuspecting child who happens to stumble on it while roaming about cyberspace (Kershaw). Remember, our children is our most important resource in the future; we have to refrain them from negative influences so that they could be good citizens of tomorrow.
"Regulating the Internet might be the only way to protect Internet users including our children from accessing obscene pages." (Montoya) Singapore has taken an encouraging step to establish a "neighborhood police post" on the Internet to monitor and receive complaints of criminal activity-including the distribution of pornography (Chidley 58). They have also implemented proxy servers to partially filter our pornographic sites such as "Playboy" and "Penthouse" from access. An anonymous author quotes, "When such material is discovered, access providers could be alerted, and required to deny entry to the sites concerned." (Only) This is an ideal approach to censorship and should be exercised in every country. Parents at home can also be more responsible over what information is retrieved by their young ones by installing programs like SurfWatch that will block pornography from access (Quitter 45). In addition to this problem, child pornography also prevails over the Internet.
Another distressing issue about the Internet is the presence of child pornography: "Digitally scanned images of ... naked boys and girls-populate cyberspace." (Chidley 58) Innocent-looking little boys and girls are forced to undress, and their pictures are published on the Internet. How degrading of us as human beings! Furthermore, possession of child pornography is an offense, and the "police are concerned that a shadowy pedophiles' ring, offering child pornography and information on where and how to indulge in their fetish, is operating on an international scale." (Chidley 58) By censoring the Internet, you will not only keep the public safe from the wickedness of pornography but also help enforce the law. Pornography is not the only problem on the Internet; there are many others, some of which I will describe next.
Another issue that concerns me is that publications such as bomb-making manuals are easily available online (Kershaw 2). According to Kershaw, "...the wrong people can now get their hands on this information without having to leave the secrecy of their home." (2) This easy availability of such material promotes terrorism; the information used to make the bomb found in Centennial Park in Atlanta during the Olympics is available on the Internet. That bomb created great chaos, though fortunately the casualties were few. However, not all terrorist attempts have been so limited: innocent people, including children, were killed in the Oklahoma City bombing and in the Tokyo subway attack. Moreover, many curious children have lost their fingers and even their lives by experimenting with bomb making. This must stop immediately! Another non-pornographic problem with the Internet is the availability of hate literature.
The Internet has also been a place where people express their hatred and anger toward other people. Kershaw says, "...newsgroups on the Internet contain messages which could incite violence against members of various racial, ethnic or religious groups or messages which deny the Holocaust." This sort of information advocates racism and other forms of discrimination. In many countries the problem of racism is almost unheard of today, but it will resurface if we let racist minorities influence the public. Racism would then tear our nation apart and trigger conflicts over trivial matters. Kershaw also says that groups such as the American neo-Nazis are not uncommon, and many people worry that the Net gives these types of groups a meeting place and a source of empowerment (2). Kershaw also stresses that one particularly disturbing message found on the Net one week after the Oklahoma City bombing read, "I want to make bombs and kill evil Zionist people in the government. Teach me. Give me text files." The Internet is meant to be a medium that promotes healthy qualities, not a place of hate and evil. "There is a difference between free speech and teaching others how to kill." (Kershaw)
Overall, the Internet has many useful applications; it is educational and a fresh source of entertainment when television gets too boring. However, we should not become complacent and ignore the deleterious face of the Internet. We must not rest until the Internet is completely free of pornography and other unhealthy elements. Otherwise, the Internet will slowly but surely end up as a sleazy slum operated and dominated by notorious gangs and secret societies. Although it now seems difficult to censor the Internet, we should attempt our very best to do so in order to keep our children away from its dark side; our children remain our highest priority. Let's attack this problem at its source by censoring the Internet, as that is the only rational solution available today. We do not want our world to be ravaged by the present state of the Internet!
WORKS CITED
Beahm, George. War of Words: The Censorship Debate. Kansas City: Andrews and McMeel, 1993.
Chidley, Joe. "Red-Light District." Maclean's 22 May 1995.
Galbraith, John Kenneth. "The Page That Formerly Occupied This Site Has Been Taken Down in Disgust!" http://user.holli.com/~kathh/anti.htm
Kershaw, Dave. "Censorship and the Internet." http://cmns-web.comm.sfu.ca/cmns353/96-1/dkershaw 2 Apr. 1996.
Montoya, Drake. "The Internet and Censorship." http://esoptron.umd.edu/FUSFOLDER/dmontoya.html 1995
"Only disconnect." The Economist 1 July 1995.
Quittner, Joshua. "How Parents Can Filter Out the Naughty Bits." Time 13 July 1995.
f:\12000 essays\sciences (985)\Computer\Secret Addiction.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Secret Addiction
Addictions are present in almost all of us. Whether it's a chemical dependence, or simply talking on the phone, they can start to control your life before you can even realize what is happening. Just a couple of years ago, I had a problem peeling myself away from a certain activity. As odd as it may sound, my own computer dominated my life.
I remember the situation quite clearly. On a typical day, I would return home from school at around 3:00 or so. You see, most kids would set their backpacks down in their bedrooms, head to the kitchen and grab a snack. I, on the other hand, was different.
The second I walked through the door, I would immediately throw my backpack on the floor, quickly open the refrigerator and grab whatever food item was in sight, and would then proceed to dart up the stairs to the computer chair.
Upon start-up of the computer, a warm and pleasant feeling would vibrate through my entire body, straight down my spine. It almost felt as if I was in some sort of heaven. Every keystroke of the keyboard sent a refreshing burst of pleasure in each of my finger tips. The glowing monitor emitted delightful rays that pleased and calmed my eyes. Oh yes, it was great to be home.
Of course, this does not even compare to the long and never-ending hours I would spend on this machine. Although my bedtime was supposed to be around 10 or 11 in the evening, I would manage to stay up on this computer until sometimes as late as 3 in the morning, and this was on school nights as well.
I never really realized how serious this was, until one day my best friend Trevor called me up on the phone. He told me about a hot new movie that was out, and I found myself making an excuse as to why I could not make it. But of course, the real reason was because I had work to do on the computer. However, the only work I had to do was to play that incredible new video game that was just released.
It wasn't until that moment that I finally woke up from my trance and discovered something that completely blew me away -- I really did not have a life. Every time I went on the computer after that occurrence, a feeling of guilt swept through my body.
What was it about this machine that forced me to stay on it for so long? After long hours of thought on the matter, I came to a somewhat logical conclusion: it had the power to hypnotize me.
After these and other events, I was able to limit the number of hours I spent on it. I found myself doing more activities with my friends and family, eating a regular diet, and even sleeping. To this day, I am still in disbelief at how many hours I really did spend on that thing, but one thing is for sure: my addiction has vanished.
f:\12000 essays\sciences (985)\Computer\Smart Cars.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Answer A :
The TravTek navigation system is installed in 100 Oldsmobile Toronados; the
visual part of the system is a computer monitor. Through detailed colour maps it
leads the driver through the town. The map changes all the time, because a computer,
connected to a navigation satellite and fitted with a magnetic compass, calculates
the fastest or easiest way to your destination. When yellow circles appear in a
particular place on the screen, it means that there is a traffic jam there, or that
there has been an accident on the spot. The computer receives this information from
the Traffic Management Centre, and it quickly points out an alternative route.
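The text does not say which algorithm TravTek actually uses to compute the fastest route; the sketch below is only a generic shortest-path illustration, with a made-up road network and travel times, in which a reported jam simply inflates the cost of the affected segment.

    import heapq

    # Made-up road network: travel times in minutes between points.
    roads = {
        "Home":     {"Main St": 4, "Ring Rd": 6},
        "Main St":  {"Downtown": 5},
        "Ring Rd":  {"Downtown": 3},
        "Downtown": {},
    }
    roads["Main St"]["Downtown"] += 15   # a traffic jam reported on this segment

    def fastest_route(start, goal):
        # Standard shortest-path search: always expand the cheapest route so far.
        queue = [(0, start, [start])]
        visited = set()
        while queue:
            minutes, place, path = heapq.heappop(queue)
            if place == goal:
                return minutes, path
            if place in visited:
                continue
            visited.add(place)
            for nxt, cost in roads[place].items():
                heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
        return None

    print(fastest_route("Home", "Downtown"))   # routes around the jam via Ring Rd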
b:
The driver interacts with the system through the so-called "touch screen". 7000
businesses in the area are already listed in the computer, and you can point out your
destination by searching through a number of menus until you find it, or simply by
typing the name of the street. When the place you want to go is registered, you push
the "make destination" button and the computer programs a route; a second later the
route appears on the screen, while a voice explains it to you through the loudspeaker.
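As a purely hypothetical illustration of looking up one of the listed destinations by name or street (the real TravTek menus are not described in this level of detail, and the entries below are invented), a lookup might work like this:

    BUSINESSES = [   # hypothetical entries standing in for the ~7000 listed businesses
        {"name": "Orlando Grand Hotel", "street": "Orange Ave"},
        {"name": "Bee Line Diner",      "street": "Bee Line Expressway"},
    ]

    def find_destination(query):
        # Match the typed text against business names and street names.
        q = query.lower()
        return [b for b in BUSINESSES
                if q in b["name"].lower() or q in b["street"].lower()]

    print(find_destination("bee line"))   # the entry on the Bee Line Expressway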
c:
The TravTek guides the driver through the traffic. The computer always knows
where you are, and the navigation system makes it impossible to get lost in the
traffic, unless you really want to and deliberately make the wrong turns. It also
guides you past traffic jams and problems that might crop up around an accident. In a
town where you have never been, you will quickly be able to find your way to hotels,
restaurants, sports arenas, shops and much more, just by looking through the various
menus of the TravTek.
d:
The text clearly prefers the accuracy of the computer to the insecurity and
misunderstandings that occur between two people. The passage from line 54 and
down clearly shows this point of view (quote): "...a guy on the gas station who, asked
for directions, drawls: "Bee Line Expressway? Ummmm. I think you go up here about
four miles and take a right, or maybe a left..."" The guy at the gas station is described
as an incompetent fool who actually has no idea where he is himself, and his
directions, insecure as they already are, will probably also be very hard to remember
because of the "Ummmm", "I think", "maybe" and "or"...
Answer B:
Japanese drivers can now find their way almost blindly if they equip their cars
with a digital map that shows the position of the car. Based on the positions of
satellites, the position of the car is calculated by a small computer in the receiver.
The receiving set in the car is attached to a screen on the dashboard. The screen can
show a map of all of Japan. The maps are delivered on four laserdiscs, each covering
a part of Japan. All road maps are in colour, and they do not show only the network
of roads; restaurants and hotels are plotted in as well. A small shining dot shows the
car's position on the map.
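How the computed position becomes a dot on the dashboard screen is not spelled out in the text; the sketch below is only an illustration of the simplest possible approach, a linear mapping from latitude and longitude to pixel coordinates, with made-up map bounds and screen size.

    # Made-up map window (degrees) and screen resolution (pixels).
    MAP = {"lat_min": 33.0, "lat_max": 36.0, "lon_min": 135.0, "lon_max": 140.0}
    SCREEN_W, SCREEN_H = 640, 480

    def to_screen(lat, lon):
        # Scale the geographic position linearly into the screen rectangle;
        # latitude grows northward, so the vertical axis is flipped.
        x = (lon - MAP["lon_min"]) / (MAP["lon_max"] - MAP["lon_min"]) * SCREEN_W
        y = (MAP["lat_max"] - lat) / (MAP["lat_max"] - MAP["lat_min"]) * SCREEN_H
        return int(x), int(y)

    print(to_screen(34.7, 137.4))   # pixel coordinates of the car's shining dot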
Answer C: Essay
Smart-car Technology In Denmark
The smart-car system has been developed in the USA and Japan. The system makes it
almost impossible to get lost when you are travelling by car. One big question for
countries all over the rest of the world is: will this kind of technology match our
needs too? Are we able to put it to use? It sounds great, but will it give enough
advantages compared to the price, and compared to other possibilities for solving our
traffic problems? This question will of course also arise in Denmark.
And what can this technology offer to improve our traffic situation? In
America and Japan it is made to take care of:
Problems which appear when people have to find their way.
Traffic jams, by leading the cars onto alternative routes.
Problems which appear in connection with accidents.
These three problems are very big in such large countries. In the big cities, with
populations of several millions, it's very easy to get lost, even if you have lived there
for a long time. The city itself and the complicated network of roads change all the
time; new buildings sprout up every day. A system that can keep up with this
development is clearly an advantage. But what about Denmark?
The road network in Denmark cannot be compared to the American freeways
at all. Even I would be able to find my way from town to town, because there are
usually not so many possibilities.
We do not have giant cities; Copenhagen is the largest, and I admit that it can be
big enough, especially to Jutlanders like myself...
But the public transportation network in Denmark is very adequate. No
matter where in the country, or in the cities for that matter, you are going, you will be
able to find a bus or a train to exactly that place. If we used the great advantage we
have in this, it would also take care of a lot of other problems. The main problem
with driving your own car is parking: no matter how many new car parks are built, a
parking spot is never to be found. Build in a device that could look up parking spots,
and some people might see an advantage in it. Basically, though, I still see public
transportation as a much better solution.
Concerning accidents and traffic jams, I see no problems on an American scale
either. There are accidents and traffic jams on the Danish roads too, but the queues
that come with them are usually not so long that a detour pays off.
In Denmark we also try to create other ways to guide the drivers. An example
of this is changeable signs. On the express- and highways, for example the ones
leading to Aalborg, there are signs connected to the Traffic Information Centre; they
show how much waiting time there is if you choose to use the tunnel or the bridge,
which roads are cut off, in which car parks there are vacant spots for parking... and
so on - sort of a common TravTek. Another disadvantage that comes with the
TravTek, and that is not mentioned in the text, is that the satellite receiver and the
computer need a lot of space in the car. I have seen one, and it occupies most of the
trunk of an ordinary car.
Bottom line: I consider the smart car a great development for nations such as
the USA; they will have great advantages from it. But for smaller countries like
Denmark, I find that other things offer a better solution.
f:\12000 essays\sciences (985)\Computer\SMART HOUSE.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Some people think that it is difficult to find a relationship between
home and computer. Usually people think that computers are used only in
companies and offices. This is a misconception now that we have the SMART
HOUSE. The complete SMART HOUSE System has been available since early
1993. In a SMART HOUSE, people build a relationship between computer and
home. The SMART HOUSE is a home management system that allows home owners
to easily manage their daily lives by providing for a lifestyle that
brings together security, energy management, entertainment,
communications, and lighting features. The SMART HOUSE system is designed
to be installed in a new house, and it can also be installed in a home
undergoing reconstruction where the walls have been completely exposed.
The SMART HOUSE Consortium is investigating a number of different options
to more easily install the SMART HOUSE system in an existing home.
Moreover, the SMART HOUSE system has been packaged to satisfy any home
buyer's needs and budget. The system appeals to a broad segment of new
home buyers because of the diverse features and benefits it offers. These
segments include professionals, baby boomers in the move-up market, empty
nesters, the young middle class, two-income families, the aging, and all
who are energy conscious and technologically astute. Therefore, the SMART
HOUSE system is well suited to installation in new homes.
Firstly, more savings can be gained because the SMART HOUSE System
offers several energy management options that have the potential to
reduce a home owner's utility bill by 30% or more per year, depending on
the options installed. For example, a smart house can turn lights on and
off automatically, which helps save on the electric bill. Moreover, the
heating and air conditioning can be controlled more efficiently by a
computer, saving tremendously on the cost of maintaining a consistent
temperature within a large house. The exact level of savings will vary by
house due to local utility rate structures, size of home, insulation,
lifestyle, etc.
Secondly, the system is easy to operate. Home owners can control
their SMART HOUSE System using a menu-driven control panel, touch-tone
phone, personal computer, remote control or programmable wall switch. All
SMART HOUSE controls are designed to be simple and easy to use. Because
smart houses promote independence, they can help people with disabilities
maintain an active life. A smart house system can make everyday tasks
easier by automating them. Lights and appliances can be turned on
automatically without the user having to do it manually. For people with
short-term memory problems, a smart house can remind them to turn off the
stove or even turn the stove off by itself.
The SMART HOUSE System is initially programmed by a trained
technician who configures the system using electronic tools designed to
guide the technician through the necessary steps of system programming.
These tools use a menu-driven format to prompt the technician for the
appropriate inputs to customize the system to meet a specific buyer's
needs.
Then the home owner can create house modes: preprogrammed settings
that allow home owners to activate a sequence of events with a single
action. House modes can be named to represent general activity patterns
common to most homes -- Awake, Asleep, Unoccupied, Vacation, etc. All can
be programmed and changed to meet a home owner's needs. An example is an
AWAKE mode, which can be programmed in the morning to do such things as
turn up the heat, turn up the water heater, change the security system
settings, turn on the lights, start the coffee and turn on the TV.
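As a purely hypothetical sketch of what such a house mode might look like internally (the actual SMART HOUSE programming format is not described here, and every device name and setting below is invented), a mode can be thought of as a named list of actions executed in order:

    # Hypothetical house-mode table; all entries are illustrative only.
    HOUSE_MODES = {
        "AWAKE": [
            ("thermostat",   "set temperature", 21),
            ("water_heater", "turn up",         None),
            ("security",     "set mode",        "day"),
            ("lights",       "turn on",         "kitchen"),
            ("coffee_maker", "turn on",         None),
            ("tv",           "turn on",         None),
        ],
    }

    def activate(mode_name):
        # A single command runs the whole preprogrammed sequence of events.
        for device, command, setting in HOUSE_MODES[mode_name]:
            detail = f" ({setting})" if setting is not None else ""
            print(f"{device}: {command}{detail}")

    activate("AWAKE")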
Thirdly, in a power outage home owners will not be able to use their
system, which is the case with all electrical products, simply because
electrical power is required for the SMART HOUSE system to operate.
However, the system controller will re-boot itself when the power comes
back on, and the system's programming will be maintained. If the system
fails, home owners will still be able to operate their home's products
and appliances manually. The SMART HOUSE System is specifically designed
so that if the system fails, the house still provides, at a minimum, all
of the functionality provided by a conventionally wired home. For
example, outlets will revert to what is called "local control" so that
they still provide power to anything plugged into them.
In conclusion, the SMART HOUSE System will be the new trend in home
construction in the coming decades. It will bring computers and people
closer together. It is likely to be supported by people who believe in
environmental protection, because it can reduce utility waste and save
money. It also saves time through a centralized system that can be
controlled easily.
f:\12000 essays\sciences (985)\Computer\Society and the role that computers play in USA.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The microeconomic picture of the U.S. has changed immensely since 1973, and the trends
are proving to be consistently downward for the nation's high school graduates and high
school drop-outs. "Of all the reasons given for the wage squeeze - international
competition, technology, deregulation, the decline of unions and defense cuts - technology
is probably the most critical. It has favored the educated and the skilled," says M. B.
Zuckerman, editor-in-chief of U.S. News & World Report (7/31/95). Since 1973, wages
adjusted for inflation have declined by about a quarter for high school dropouts, by a sixth
for high school graduates, and by about 7% for those with some college education. Only
the wages of college graduates are up.
Of the fastest growing technical jobs, software engineering tops the list. Carnegie Mellon
University reports, "recruitment of its software engineering students is up this year by over
20%." All engineering jobs are paying well, proving that highly skilled labor is what
employers want! "There is clear evidence that the supply of workers in the [unskilled labor]
categories already exceeds the demand for their services," says L. Mishel, Research Director
of Welfare Reform Network.
In view of these facts, I wonder if these trends are good or bad for society. "The danger of
the information age is that while in the short run it may be cheaper to replace workers with
technology, in the long run it is potentially self-destructive because there will not be enough
purchasing power to grow the economy," says M. B. Zuckerman. My feeling is that the trend
from unskilled labor to highly technical, skilled labor is a good one! But, political action
must be taken to ensure that this societal evolution is beneficial to all of us. "Back in 1970,
a high school diploma could still be a ticket to the middle income bracket, a nice car in the
driveway and a house in the suburbs. Today all it gets is a clunker parked on the street, and
a dingy apartment in a low rent building," says Time Magazine (Jan 30, 1995 issue).
However, in 1970, our government provided our children with a free education, allowing
the vast majority of our population to earn a high school diploma. This means that anyone,
regardless of family income, could be educated to a level that would allow them a
comfortable place in the middle class. Even restrictions upon child labor hours kept
children in school, since they are not allowed to work full time while under the age of 18.
This government policy was conducive to our economic markets, and allowed our country
to prosper from 1950 through 1970. Now, our own prosperity has moved us into a highly
technical world, that requires highly skilled labor. The natural answer to this problem, is
that the U.S. Government's education policy must keep pace with the demands of the
highly technical job market. If a middle class income of 1970 required a high school
diploma, and the middle class income of 1990 requires a college diploma, then it should be
as easy for the children of the 90's to get a college diploma, as it was for the children of the
70's to get a high school diploma. This brings me to the issue of our country's political
process, in a technologically advanced world.
Voting & Poisoned Political Process in The U.S.
The advance of mass communication is natural in a technologically advanced society. In
our country's short history, we have seen the development of the printing press, the radio,
the television, and now the Internet; all of these, able to reach millions of people. Equally
natural is the poisoning and corruption of these media to benefit a few.
From the 1950's until today, television has been the preferred media. Because it captures
the minds of most Americans, it is the preferred method of persuasion by political figures,
multinational corporate advertising, and the upper 2% of the elite, who have an interest in
controlling public opinion. Newspapers and radio experienced this same history, but are
now somewhat obsolete in the science of changing public opinion. Though I do not
suspect television to become completely obsolete within the next 20 years, I do see the
Internet being used by the same political figures, multinational corporations, and upper 2%
elite, for the same purposes. At this time, in the Internet's young history, it is largely
unregulated, and can be accessed and changed by any person with a computer and a
modem; no license required, and no need for millions of dollars of equipment. But, in
reviewing our history, we find that newspaper, radio and television were once unregulated
too. It is easy to see why government has such an interest in regulating the Internet these
days. Though public opinion supports regulating sexual material on the Internet, it is just
the first step in total regulation, as experienced by every other popular mass media in our
history. This is why it is imperative to educate people about the Internet, and make it be
known that any regulation of it is destructive to us, not constructive! I have been a daily
user of the Internet for 5 years (and a daily user of BBS communications for 9 years), which
makes me a senior among us. I have seen the moves to regulate this type of
communication, and have always openly opposed it.
My feelings about technology, the Internet, and political process are simple. In light of the
history of mass communication, there is nothing we can do to protect any media from the
"sound byte" or any other form of commercial poisoning. But, our country's public
opinion doesn't have to fall into a nose-dive of lies and corruption, because of it! The first
experience I had in a course on Critical Thinking came when I entered college. As many
good things as I have learned in college, I found this course to be most valuable to my basic
education. I was angry that I hadn't had access to the power of critical thought over my
twelve years of basic education. Simple forms of critical thinking can be taught as early as
kindergarten. It isn't hard to teach a young person to understand the patterns of
persuasion, and be able to defend themselves against them. Television doesn't have to be a
weapon against us, used to sway our opinions to conform to people who care about their
own prosperity, not ours. With the power of a critical thinking education, we can stop
being motivated by the sound byte and, instead we can laugh at it as a cheap attempt to
persuade us.
In conclusion, I feel that the advance of technology is a good trend for our society;
however, it must be in conjunction with advance in education so that society is able to
master and understand technology. We can be the masters of technology, and not let it be
the masters of us.
f:\12000 essays\sciences (985)\Computer\Software and Highschool.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SOFTWARE AND HIGH SCHOOL
The beginning of the 1990's is marked by the era of
computers. Everywhere we look, we see computers. They have become an essential part of our everyday life. If the world's computer
systems were turned off even for a short amount of time,
unimaginable disasters would occur. We can surely say that
today's world is heading into the future with the tremendous
influence of computers. These machines are very important players
in the game, the key to the success however is proper software
(computer programs). It is the software that enables computers to
perform certain tasks. Educational systems in developed
countries realize the importance of computers in the future world
and therefore, emphasize their use in schools and secondary
institutions. The proper choice of software is very important
especially for beginners. Their first encounter with the computer
should be exciting and fun. It should stimulate their interest in
the computing field.
First and foremost is the fact that computer software
is a very important educational tool. Students in high schools
experience computers for the first time through games and other
entertaining software. These help develop a young person's mental pathways for logic, reflexes and the ability to make quick and concrete decisions [Lipcomb, 66]. The next step requires them to
think more seriously about the machines. Secondary students learn
the first steps in computer programming by creating simple
programs. Here, the assistance of useful software is necessary.
The computer software has many applications in the real world and
is found virtually everywhere.
The new generation of very fast computers introduces us
to a new type of software. Multimedia is a kind of computer program
that not only delivers written data for the user, but also
provides visual support of the topic. By exploring the influence of multimedia upon high school students, I have concluded that the usage of multimedia has significantly increased students' interest in particular topics (supported by the multimedia). In order to get these positive results, every child has to have a chance to use the technology on a daily basis [jacsuz@].
Mathematics is one of the scientific fields that has
employed the full potential of computer power for complicated problem
solving. By using the computer, students learn to solve difficult
problems even before they acquire tough mathematical vocabulary.
The Geometer's Sketchpad, a kind of math software, is used in
many Canadian high schools as a powerful math tutor.
Students can pull and manipulate geometric figures and at the
same time give them specific attributes. The next best feature of
the software is a drawing document. It allows for easy drawing of
perfect ellipses, rectangles and lines. Overall, the marks of students who have used such helpful software in a particular subject have significantly increased [mhurych@].
Computers have been used commercially for well over 50 years; however, their significance in modern society has never been so
high. People rely on computers in every aspect of their lives.
Medicine, engineering and other highly specialized fields of
science use computers in their work. Computer education is very
important. It builds the basis for future generations which will
be more dependent on computers than we are today.
The usage of computers depends mainly on the software. It is
software that navigates computers through series of commands to a
desired goal. Computer programs used in high schools must
motivate students to study. The degree of difficulty of the
computer software has to increase with the age of the user. Games
are introduced first as icebreakers between children and
machines. Later, more difficult software is used. Overall, I think that computer software is a very important tool in high school education (Drake, 1987).
f:\12000 essays\sciences (985)\Computer\Software Piracy and its Effects.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Software Piracy and its Effects
Identification and Description of the Issue
Copyright laws are perhaps the laws breached most often by individuals on a daily basis. This is because one might not be informed about these laws, or because not much is done to enforce them. Also, some countries of the world have no copyright laws.
Software piracy is a breach of copyright law, as one copies data contained on a medium onto another medium without the consent of the owner of the software. When one buys software, one does not buy the software's content, and therefore it isn't one's property. Instead, one buys a license to use the software in accordance with the licensing agreement.
Software companies invest a lot of time and money in creating software, and a company relies upon the sales of its software for its survival. If illegal copies are made of the software, the company earns no money and could therefore be forced into bankruptcy. Software piracy can be compared to robbery, as one is stealing the goods of someone else and using them without paying.
Up to 13 billion dollars are lost to software piracy yearly, and in order to recover these costs the companies are forced to raise the prices of their products.
Brand names are the property of their respective companies, and those companies have the right to protect their property.
Understanding of the IT background of the Issue
Software is contained on disk or on a CD-ROM. Pirate copies of software on disk can easily be made by copying from one disk to another. For a CD-ROM, one needs a CD-ROM burner, or one copies the contents onto a large hard disk and then onto floppy disks.
There are some underground bulletin boards (BBSs) that contain pirated software. A user who logs on to one of these BBSs can download full versions of pirated software, provided one can give something in return.
On the Internet there are binary newsgroups such as alt.binaries.warez, WWW pages and FTP sites that also contain pirated software. On the newsgroups, the files are sent upon request from anonymous users. As a result, people who have access to the Internet can retrieve these software programs free of charge. The person posting the pirated software could be from a country that has no copyright laws.
These methods of software piracy are hard to stop because they take place on the Internet and between individuals from different countries.
Buying one legitimate copy of a software package and then offering it over an internal network, so that it can be accessed by more than one individual at the same time on different computers, is another form of software piracy.
Analysis of the Impact of the Issue
A software program is a service just like any other service; the difference is that this service comes on a medium from which one can make copies. Software could be judged as being expensive, but someone who wants it yet doesn't want to pay for it, or can't afford to, could be judged as being a thief.
Office 97 from Microsoft required 3 years to develop, and Microsoft invested millions of dollars. Microsoft relies upon legitimate sales of this product for income, and on that basis decides on future versions of the product. If people don't pay for the program but make pirate copies, Microsoft doesn't earn a cent and could therefore be forced not to make future versions of the product. This would mean that the computer industry's growth would be halted and that one could not expect newer software technology, as companies will not have the incentive to bring out better products if they don't get anything out of it.
Unfortunately, many people aren't able to see this, as many pirate copies are made and, more importantly, also used. Society doesn't see software pirates as thieves or criminals; that could be because many people make illegal copies themselves in order not to pay.
In economic terms, this means that if a company can't make money it will eventually have to go bankrupt, which would mean that people lose their jobs.
Solution to Problems arising from the Issue
Software piracy will have to be fought on two different levels: the first is the political level and the second is the technological level.
On the political level, one will have to pass tougher legislation against software pirates.
The government should also make agreements with other governments on software piracy, encouraging them to be tough on software pirates and to try to combat them. Appropriate action should be taken against countries that fail to do so.
On the technological level, one should consider putting "harder" copy protection on the software product. One such method could be that, once installation of the program has been completed, one has to register the product and receive a code, with each code being different according to different variables.
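A small sketch of that registration idea, assuming a hypothetical scheme rather than any real vendor's: the vendor derives an unlock code from buyer-specific variables, and the installer recomputes the code to verify it. A production scheme would need real cryptography and careful key handling.

# Hypothetical registration-code sketch (illustrative only).
import hashlib

VENDOR_SECRET = "vendor-secret"   # known only to the vendor (assumption)

def make_code(buyer_name, serial):
    # Derive a short code from buyer-specific variables plus the vendor secret.
    digest = hashlib.sha256(f"{VENDOR_SECRET}:{buyer_name}:{serial}".encode()).hexdigest()
    return digest[:12].upper()

def verify_code(buyer_name, serial, code):
    # The installer recomputes the code and compares.
    return make_code(buyer_name, serial) == code

code = make_code("Jane Smith", "OFFICE97-0001")           # issued by the vendor
print(verify_code("Jane Smith", "OFFICE97-0001", code))   # True
print(verify_code("John Doe", "OFFICE97-0001", code))     # False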
Some software programs have copy protection on them, but all of it is cracked very quickly by hackers, and it is therefore unlikely that any new copy protection will remain uncracked. Too much copy protection could also drive away legitimate consumers.
Until now, politicians haven't really looked into the problem of software piracy and copyright very thoroughly, as they think that there are bigger problems to solve. Once tough legislation is passed and people are made aware that software piracy is a crime, one could see a fall in software piracy. Dealing with other countries involves a lot of bureaucracy, but also requires a committed government.
Anything done on the political level could take years to show any effect, but it has the greatest chance of solving the problem in the long run.
On the technological side, one could solve the problem only in the short term, but implementation would be fast.
Sources
Internet Page WWW.pcworld.com/News December 96
Business Software Alliance Information Sheet on Software Piracy
Computer Ethics, Tom Forester & Perry Morrison, Chapter 3 "Software Theft", pages 51-72
CNN Computer Connection December 96 - January 97
PC Magazine entire 96 Volume
Reuters InfoWorld, Vol.19, No.6
Reuters 6 Feb. 97
Media Daily, Jan 30, 1997 Article on FBI crackdown on Software Pirates
f:\12000 essays\sciences (985)\Computer\Software Piracy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Almost every day, it seems, software companies keep pumping out brand new software that kills the day before's, in that it is more sophisticated and more in tune with the needs of today's superusers, office users, and home users. Yet, at the same time, the software theft industry is growing at an even faster rate, costing software companies billions of dollars a year. The piece of shit government can put as many copyright laws in a book as they feel, but soon it will be virtually impossible to stop.
Although computer illiteracy may still lurk by the thousands, computer intelligence lurks by the millions and even billions. We are going to bypass any laws you throw at us. There is no stopping it. America has gotta wake up: no matter what kind of warning you put out, or whatever other restrictions you try to enforce, there will always be another way. No matter what kind of encryption, there will always be someone out there, whether it be me or the next guy, whose intelligence is greater than those who make the software.
According to the federal government, which by the way has no real control over America since they can't even control themselves, software is protected from the moment of its creation. As soon as that software hits the store it is protected by the United States federal government. Yet, thousands of software titles have been put out there, and the government hasn't protected a fucking thing from happening. What a joke; how can we let such morons run this nation. The law in the USA states that a person who buys the software may (I) copy it to a single computer and (II) make a copy for "archival purposes". This also holds true in Canada, with the exception of the user only being able to make a backup copy, instead of the USA law which allows for both archival and backup. In actuality, the government cannot babysit everyone who buys software. How are they gonna know when John Doe buys a copy of Duke Nukem 3D and wants to install it on Jane Smith's computer so they can get some network games going on. Yea right, they have control. People who do get caught have a chance of being fined up to 1 million dollars? Jesus, all I did was install a program. Ahh... the beauty of America.
In a nutshell, the probability of someone stopping it from happening is slim to none. You take it away from the Internet, we will go to mail, and so begins the cycle. Once you are so caught up in trying to stop one thing, we will go back and mess you up again. God bless America!
f:\12000 essays\sciences (985)\Computer\Solving Problem Creatively Over the Net.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Dishonest Me
Since I got my Internet privileges three months ago, I have learned and encountered many weird and wonderful things. I have met the ugly side of the Internet and learnt something called "if you overspend your time limit, the phone bill is gonna be very ugly." Perhaps the most interesting moment I have had on the Internet was when I discovered homepage making. I made a homepage after learning the HTML language from a web site. I wanted my homepage to be bold and simple, but most of all animation-free. As a surfer myself, I know how it feels to enter a homepage that is full of high-resolution graphics and animation: the animation has to be reloaded again and again. Within 2 hours I managed to make myself a homepage. I also know that to make an impressive homepage, one must have a high counter number so that people will revisit the homepage. I can't use any "sensual" word to attract people because it's against Geocities's rules. So I did a very nasty thing. I cheated: I put an extra counter digit in my homepage beside my original counter number, so each time the page reloads it looks like this ----> 0101. While the only person who visited my homepage was myself, the counter number showed 101.
MIRC The Solution
When my PC suffered a data crash, I lost all my data. I lost all my e-mail addresses and, most importantly, my browser. The computer technician managed to repair my PC, but he gave me an old version of Netscape. I had trouble using it in Win '95, so I downloaded a later version of Netscape. The download stalled when it reached 52%, and I would have had to start it over if I wanted to use Netscape. Instead I used mIRC to download the program, because mIRC comes with a neat feature that allows me to resume downloading where I left off. As a result, I got to continue my download at byte 17564 from a friend.
I'm The Biggest Leech
The rule of warez is: to download you must upload. The warez people even wrote a script to ban people who don't upload when they download. Uploading any program of mine to anyone would take forever, since every file I have is at least 6 MB long. So what I did was gather all my saved files and compress them; they summed up to 1.5 MB, and I named the archive Ultra.zip. Then I sent it to the warez people. I found out that if you send the same file a second time, the script recognizes it as a different file and immediately adds credit to my downloading account. As a result I got 9 MB of credit within minutes when I had actually sent 1.5 MB.
f:\12000 essays\sciences (985)\Computer\Speeding Up Windows 95.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SPEEDING UP WINDOWS 95
Windows 95, with certain minor alterations and software upgrades, can operate at a faster, more efficient
speed. With this Windows 95 tutorial, all the things you do now will be easier and faster, and what you always
wanted to know is now here for you to learn. This tutorial will provide you with insightful instructional and
informative tips about free programs such as TweakUI, and day to day maintenance OS needs. First, it is very
important that you run Windows 95 with at least a high-end 486 (Pentium recommended), 8 megs of RAM (adding more RAM will increase overall performance), and at least 1 meg of video memory. Most of the following tips included here are for speeding up application processes, while others are simply rewrites or bug fixes.
One advantage Windows 95 has over its competitors is the user interface feature that comes built in with the
operating system. User interface is a program within Windows 95 that allows customization of certain interface
settings based on personal preference. About a year ago Microsoft released a small program called TweakUI that
actually adds more flexibility and functionality to the already current user-friendly interface. TweakUI is actually a
rewrite (bug fix) program that edits certain data files from the Windows 95 registry. With TweakUI running on your
machine you can disable the following options which in turn will speed up your access time: windows animation,
reboot start up, GUI interface, and last log on settings. TweakUI also adds a few nifty extras such as: smooth scroll,
mouse enhancement, instant CD-ROM data load, and much more. Surprisingly enough TweakUI is offered free of
charge to any WWW user and can be found at: http://www.microsoft.com or http://www.tucows.com. TweakUI is a
definite must for any Windows 95 user looking to benefit the most from their home computer.
No one can argue that Windows 95 is the cleanest and most efficiently set up OS around. In fact, Windows 95 is
by far the messiest OS to ever hit the market this decade. When compared to operating systems such as MacOS,
OS2Warp, and Windows NT, Windows 95 finishes in dead last. This is due mainly to the fact that when installing or
uninstalling a program in the Windows 95 environment, the program manager scatters files all over different parts of
the file system (fixed disk directory). These scattered bits of files are often called leftovers, which, if left on your drive, cause extreme slowdowns when your CPU is at work. Usually leftovers can be found in your c:/windows, c:/windows/system, or c:/windows/temp. Leftover files carry suffixes such as txt, old, log, and tmp. Deletion of leftover files makes for faster access times and more hard disk space available.
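As a cautious illustration of that cleanup (a sketch only; the folders and suffixes follow the description above, .txt is deliberately left out because text files are often wanted, and nothing is deleted automatically), the following Python sketch just lists candidate leftovers so they can be reviewed before deleting anything by hand:

# List likely leftover files by suffix in the usual Windows 95 folders.
import os

LEFTOVER_SUFFIXES = (".tmp", ".old", ".log")
FOLDERS = [r"C:\Windows\Temp", r"C:\Windows", r"C:\Windows\System"]

def find_leftovers(folders=FOLDERS, suffixes=LEFTOVER_SUFFIXES):
    hits = []
    for folder in folders:
        if not os.path.isdir(folder):
            continue                                  # skip folders that don't exist
        for name in os.listdir(folder):
            path = os.path.join(folder, name)
            if os.path.isfile(path) and name.lower().endswith(suffixes):
                hits.append(path)
    return hits

for path in find_leftovers():
    print(path)   # inspect the list, then delete by hand if appropriate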
We've already seen several simple but effective ways to increase performance in the Windows 95
environment, but of all the most important is, disk defragmentation. Disk fragmentation is the breaking up of different
access files all relative to certain programs installed on your fixed disk drive. Think of your fixed disk drive as a big
completed jigsaw puzzle which, if moved, will break apart into several sub-puzzles. The same holds true for your
fixed disk. When a program is installed it takes up the amount of disk space it needs to function correctly (usually the
last available part of your drive). On the contrary, when a program is uninstalled it creates a space or hole on your
fixed disk relative to where the program was before. Taking the same concept and applying it in terms of the jigsaw
puzzle, we can clearly see what our fixed drive would physically look like. This is where disk defragmentation comes
into play. It moves the rest of the currently installed programs on your drive from their current position to the
position where the space is. Speed comes into play due to the fact that, if your drive has never been defragmented,
your CPU probably has to search in different areas of your physical drive for certain start up files. Disk
Defragmentation comes with every version of Windows 95 and can usually be found by clicking the taskbar and
highlighting the following: programs/accessories/system tools/disk defragmenter. Overall defragmentation increases
performance by about 30 percent and makes for a more neatly set up system.
As discussed earlier, the addition of extra RAM, a faster processor, and a good video card makes for a great conventional way of boosting your level of performance; unfortunately, the expense is never pretty to hear. If
you currently have the minimum required setup (high-end 486, 8 megs of ram, 1 meg of video memory), you should
see some good effective results from this tutorial. However, if your system falls short of the minimum requirements, I
would definitely recommend a hardware upgrade or the purchase of a newer more up to date machine.
f:\12000 essays\sciences (985)\Computer\Spiderweb.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#----------------------------------PLEASE NOTE---------------------------------#
#This file is the author's own work and represents their interpretation of the #
#song. You may only use this file for private study, scholarship, or research. #
#----------------#
song: Spiderwebs
by : No Doubt
Riff 1
e---------------------------
B---------------------------
G---------------------------
D---------------------------
A---6--6--5--5--3--3--1--0--
E---------------------------
Riff 2
e-----------------------------------
B-----------------------------6~~--- every other time riff 2
G----------------------------------- is played, hit the high F
D------8-------8-------8------------ as an artificial harmonic
A-----------------------------------
E--6-6---6---6---6---6---6--6-------
Riff 3
e---------------------------------------
B---------------------------------------
G---------------------------------------
D------8-------8-----------------5--8---
A--------------------------5--8---------
E--6-6---6---6---6---6--8---------------
Intro (reggae beat)
Bb F Gm Eb | Bb F Gm (riff 1)
vi viii x vi vi viii iii
Play Riff 2 twice
Then into VERSE
Riff 2 x 3
Riff 3
TAB FOR CHORDS
Bb F Gm Eb Gm Bb F
vi viii x vi iii i i
e----6-------------6---3---1---1-----
B----6---10---11---8---3---3---1-----
G----7---10---12---8---3---3---2-----
D----8---10---12---8---5---3---3-----
A----8----8---10---6---5---1---3-----
E----6-----------------3-------1-----
PRECHORUS
(8th notes)
Eb F Bb Gm | Eb F slide f up....
vi viii vi iii vi viii
CHORUS x 2
Bb F Gm Riff 1
i i iii
VERSE
Riff 2 x 3
Riff 3
PRECHORUS
CHORUS x 2
CHORUS x 2 - with choppy offbeats
BRIDGE
Gm Eb
iii vi
CHORUS x 2
CHORUS x 2 - with choppy offbeats
CHORUS x 6
(last time play riff 1 really slowly)
(throughout the last choruses, play harmonics along the low frets of
the E strings, a natural flange)
OUTRO - with reggae beat of intro
Bb F Gm Eb
vi i iii vi
Lyrics
You think that we connect
That the chemistry's correct
Your words walk right through my ears
Presuming I like what I hear
And now I'm stuck in the web
You're spinning
You've got me for your prey...
Sorry I'm not home right now
I'm walking into spiderwebs
So leave a message
And I'll call you back
A likely story, but leave a message
And I'll call you back
You take advantage of what's mine
You're taking up my time
Don't have the courage inside me
To tell you please let me be
Communication, telephonic invasion
I'm planning my escape...
CHORUS
And It's all your fault
I screen my phone calls
No matter who calls
I gotta screen my phone calls
Now it's gone too deep
You wake me in my sleep
My dreams become nightmares
'Cause you're ringing in my ears
CHORUS
f:\12000 essays\sciences (985)\Computer\Starting a Business on the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Mrs. -----, I understand that some students that have already graduated from College are
having a bit of trouble getting their new businesses started. I know of a tool that will
be extremely helpful and is already available to them; the Internet. Up until a few years
ago, when a student graduated they were basically thrown out into the real world with just
their education and their wits. Most of the time this wasn't good enough because after
three or four years of college, the prospective entrepreneur had either forgotten too much of what they were supposed to learn, or they just didn't have the finances. Then, by the time they saved sufficient money, they had again forgotten too much. I believe I have found the
answer. On the Internet your students will be able to find literally thousands of links to
help them with their future enterprises. In almost every city all across North America, no
matter where these students move to, they are able to link up and find everything they
need. They can find links like "Creative Ideas", a place they can go and retrieve ideas,
innovations, inventions, patents and licensing. Once they come up with their own products,
they can find free expert advice on how to market their products. There are easily
accessible links to experts, analysts, consultants and business leaders to guide their way
to starting up their own business, careers and lives. These experts can help push the
beginners in the right direction in every field of business, including every way to
generate start up revenue from better management of personal finances to diving into the
stock market. When the beginner has sufficient funds to actually open their own company,
they can't just expect the customers to come to them, they have to go out and attract them.
This is where the Internet becomes most useful, in advertising. On the Internet, in every
major consumer area in the world, there are dozens of ways to advertise. The easiest and
cheapest way, is to join groups such as "Entrepreneur Weekly". These groups offer weekly
newsletters sent all over the world to major and minor businesses informing them about new
companies on the market. It includes everything about your business from what you
make/sell and where to find you, to what you're worth. These groups also advertise to the
general public. The major portion of the advertising is done over the Internet, but this
is good because that is their target market. By now, hopefully their business is doing
well, sales are up and money is flowing in. How do they keep track of all their funds
without paying for an expensive accountant? Back to the Internet. They can find lots of
expert advice on where they should reinvest their money. Including how many and how
qualified of staff to hire, what technical equipment to buy and even what insurance to
purchase. This is where a lot of companies get into trouble, during expansion. Too many
entrepreneurs try to leap right into the highly competitive mid-size company world. On the
Internet, experts give their secrets on how to let their companies natural growth force its
way in. This way they are more financially stable for the rough road ahead. The Internet
isn't always going to give you the answers you are looking for, but it will always lead you
in the right direction. That is why I hope you will accept my proposal and make the students of today aware of this invaluable business tool.
f:\12000 essays\sciences (985)\Computer\Surfing the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chances are, anyone who is reading this paper has at one time, at least, surfed the net
once. Don't worry if you haven't, I will explain everything you need to know about the
Internet and the World Wide Web, including how it started, its growth, and the purpose it serves in today's society.
The Internet was born about 20 years ago as a U.S. Defense Department network called the ARPAnet. The ARPAnet was an experimental network designed to support military research.
It was research about how to build networks that could withstand partial outages (like bomb
attacks) and still be able to function.
From that point on, Internet developers were responding to the market pressures, and began
building or developing software for every conceivable type of computer. Internet uses
started out with big companies, then to the government, to the universities and so on.
The World Wide Web or WWW, is an information service that is on the Internet. The WWW is
based on a technology called hypertext, and was developed for physicists so they could send
and retrieve information more easily. The WWW basically is a tool for exploring or surfing
the Internet. The WWW is an attempt to organize the Internet so you can find information
more easily, moving from document to document.
Why do I need to know this?
Well, now that I've gotten through all the techno-babble, let's get down to it. If you know how to
utilize the Net, in just five minutes you could trade information and comments with
millions of people all over the world, get a fast answer to any question imaginable on a
scientific, computing, technical, business, investment, or any other subject. You could
join over 11,000 electronic conferences, anytime, on any subject, and you could be broadcasting your views, questions, and information to millions of other participants. There has never been anything like it in the history of the world, and in this English class we've covered a lot of history. At a growing rate of about 20% per month the Internet is only getting bigger
and if people don't start utilizing its resources they could be roadkill on this
Information Superhighway. Hey, I'll bet in the middle of that last sentence another
computer just got on-line to the Net.
There are three major features of the Internet: on-line discussion groups, Universal Electronic Mail, and files and software. There are about 11,000 on-line discussion groups called
Newsgroups, on most any topic you can imagine. If you are on the Net, you can participate
in any of these discussions in any of these newsgroups.
The next thing is Universal Electronic Mail or E-mail. E-mail is the biggest and cheapest
system on the Net and is also one of its biggest attractions. Since all commercial on-line
services have something called "gateways" for sending and receiving electronic mail
messages on the Internet, you're able to send and receive messages or files to anyone else
who is on-line, anywhere in the world and in seconds.
The third feature I mentioned was files and software. This in my opinion is the most
impressive one. All the thousands of individual computer facilities connected to the
Internet are also vast storage repositories for hundreds of thousands of software programs,
information text files, video and sound clips, and other computer based resources. And
they're all accessible in minutes from any personal computer on-line to the Internet.
So I can do all this stuff on the Internet; why should I take notice?
Because of its sheer size, volume of messages, and incredible monthly growth. From the latest statistics I was able to get, there are currently 30 million people who use the
Internet worldwide. To try and put that number into perspective, that's over five times the
size of CompuServe, America On-line, Prodigy, and all other on-line commercial information
services combined. Or if you're not familiar with those services, it's more than the
combined populations of New York City, London, and Moscow. Just a few years ago, the Internet was the exclusive domain of a small band of computer science students, university researchers, government defense contractors, and computer nerds, all of whom had free or cheap access through their universities or research labs. Because of the widespread
free use, many people who used the Internet as students have demanded and received
connections to the Internet from their employers as they got jobs in the outside world.
Because of that, use of the Internet has exploded. The Internet is rapidly achieving a state of
critical mass, attracting interest from huge numbers of personal computer users from non
technical backgrounds. All these new Internet users are rapidly transforming the nerd
orientated culture of the network and opening up the Internet to new and exciting
possibilities.
"I'm not sure threat is exactly the right word, but if you ignore the Internet, you do so
at your own peril, the Internet is going to force a new way of doing business on some
people." says Norman DeCarteret, senior systems analyst at Advantis. (A company that links
other companies to the Internet. "Internet becomes the road more traveled as E-mail users
discover no usage fee." Steve Stecklow, Wall Street Journal (9/2/93).
Here are some good things about the Net and why you should be using it. People in all kinds
of businesses and industries are sharing a wide spectrum of educational, business, and
personal interest on the Net. Most, probably share a high enthusiasm for the Internet and
want to send and receive e-mail messages. But also, one-to-one communication by newsgroup or electronic mail is different from, and better than, conventional letter writing or voice phone conversations, in that the people you communicate with seem more accessible. You also have instant access to such a large, varied, and intelligent base of individuals on the Net, which gives you the power to get good information. When you ask a question on the Internet, you
stand an excellent chance at getting at least one intelligent answer from someone who has
gone through the same experience. Whether it's advice on a paper you have to write, how to research a certain topic, or something of a personal interest, there's always someone on the Internet willing to share their experience. Profit: this is something I thought I would throw in for
all those entrepreneurs out there. A rapidly increasing number of companies and
entrepreneurs are using the Internet to market and sell their products and services. When
it's done in an informative way, and in good taste, and in the on-line areas designated for
advertising orientated messages, most Internet users like to see announcements of new
products and services. A growing number of companies are generating substantial sales of their products online. But hey, the Internet isn't just for academics, business, and professional
use. It could also be really fun! There are over 11,000 special interest on-line
conferencing areas, called newsgroups, on the Internet. Many of these groups feature large,
active, and sometimes raucous discussions on the widest imaginable range of interests,
hobbies, and activities. Anything from antique cars, new business opportunities and
personal investing to politics, gun control, sex, and The Simpsons. Participating in these groups can be great fun.
Of course, like most other things, the Internet isn't all good and glory. You could say that the Internet is like the Wild West of the late 1800's. It's lawless, individualistic, brutal, and chaotic. And like any new frontier, the Internet is not without its problems.
If you decide you want to connect to the Internet, there are a few things you should know.
The Internet can be pretty raw. That is, if you get a raw connection to the Internet, it
lags behind modern personal computer interface technology by about 15 years. Without a good
Windows or Macintosh based graphical software interface, also called a Web browser, to use
all the features of the Internet you would need to know UNIX, a terse computer operating
system command language that's a throwback to time sharing computer systems of the 1970's.
For Internet access, I would recommend going with an Internet service provider. The Internet has many powerful
capabilities and an almost infinite range of information and communication power, all of
which can never be adequately covered in any one paper or book. All the information in this
paper came from hard copy sources to show you don't have to get on the Net to find out
about the Net.
Work Cited :
Cagnon, Eric. What's on the Internet :
Berkeley : Peach Pit Press. 1995
Krol, Edward. The Whole Internet : User's Guide and Catalog.
Sebastopol : O'Reilly & Associates, Inc. 1992
Internet World Magazine. On Internet 94.
Westport : Mecklermedia Ltd. 1994
Newby, Gregory B. Directory of Directories on the Internet :
Westport : Mecklemedia Ltd. 1994
Carmen, John. "The New Wave of the Internet."
Wall Street Journal : 9/2/93
Michael LaCroix
Eng 101
Dr. Sonnchein
4/10/96
f:\12000 essays\sciences (985)\Computer\Technology Advances.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
My report is on computer games and the advancements in technology. I am very interested in this field because of the rapid change in our society that pretty much requires a person to own a computer.
Whenever there is work, there must be pleasure, thus resulting in computer games. In the beginning there were games like "Pong", single-pixel tennis. On each end of the screen there were two bars, and the object was to hit a square pixel back and forth in an attempt to score.
These types of games were good, but as technology advanced, graphics and sound were in demand. From the ATARI came NINTENDO (I am skipping a few minute advances in technology, like the ODYSSEY). Then Nintendo, which dominated the market at the time, soon had competition from SEGA. Both of these systems were 16-bit. These machines still weren't enough to satisfy consumers for long, so the most significant change yet came out: the change from cartridges to CDs.
I believe the first system to use CD technology was the 3DO. The 3DO was now the item on every child's mind. The 3DO featured stunning 3D graphics as well as the quality sound you receive from audio CDs. The only reason this machine did not dominate the market was its price tag, a whopping $300. A lot to pay for your child's (or husband's) entertainment.
The only problem I find with systems like the SEGA, NINTENDO, and 3DO is the lack of variety. When PCs became sensible in the home there was really no comparison except in the price: $2,000 for a PC or $300 for a 3DO; the difference is quite clear. I hope that this essay has been informative.
f:\12000 essays\sciences (985)\Computer\Technology effects modern America.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How Technology Affects Modern America
The microeconomics picture of the U.S. has changed immensely since 1973, and the trends are proving to be consistently downward for the nation's high school graduates and high school drop-outs. "Of all the reasons given for the wage squeeze - international
competition, technology, deregulation, the decline of unions and defense cuts - technology is probably the most critical. It has favored the educated and the skilled," says M. B. Zuckerman, editor-in-chief of U.S. News & World Report (7/31/95). Since 1973, wages adjusted for inflation have declined by about a quarter for high school dropouts, by a sixth for high school graduates, and by about 7% for those with some college education. Only the wages of college graduates are up.
Of the fastest growing technical jobs, software engineering tops the list. Carnegie Mellon
University reports, "recruitment of its software engineering students is up this year by over 20%." All engineering jobs are paying well, proving that highly skilled labor is what
employers want! "There is clear evidence that the supply of workers in the [unskilled labor] categories already exceeds the demand for their services," says L. Mishel, Research Director of Welfare Reform Network.
In view of these facts, I wonder if these trends are good or bad for society. "The danger of the information age is that while in the short run it may be cheaper to replace workers with technology, in the long run it is potentially self-destructive because there will not be enough purchasing power to grow the economy," says M. B. Zuckerman. My feeling is that the trend from unskilled labor to highly technical, skilled labor is a good one! But, political action must be taken to ensure that this societal evolution is beneficial to all of us. "Back in 1970, a high school diploma could still be a ticket to the middle income bracket, a nice car in the driveway and a house in the suburbs. Today all it gets is a clunker parked on the street, and a dingy apartment in a low rent building," says Time Magazine (Jan 30, 1995 issue).
However, in 1970, our government provided our children with a free education, allowing
the vast majority of our population to earn a high school diploma. This means that anyone, regardless of family income, could be educated to a level that would allow them a
comfortable place in the middle class. Even restrictions upon child labor hours kept
children in school, since they are not allowed to work full time while under the age of 18.
This government policy was conducive to our economic markets, and allowed our country to prosper from 1950 through 1970. Now, our own prosperity has moved us into a highly technical world, that requires highly skilled labor. The natural answer to this problem, is that the U.S. Government's education policy must keep pace with the demands of the highly technical job market. If a middle class income of 1970 required a high school diploma, and the middle class income of 1990 requires a college diploma, then it should be as easy for the children of the 90's to get a college diploma, as it was for the children of the 70's to get a high school diploma. This brings me to the issue of our country's political process, in a technologically advanced world.
Voting & Poisoned Political Process in The U.S.
The advance of mass communication is natural in a technologically advanced society. In
our country's short history, we have seen the development of the printing press, the radio,
the television, and now the Internet; all of these, able to reach millions of people. Equally
natural is the poisoning and corruption of these media to benefit a few.
From the 1950's until today, television has been the preferred media. Because it captures
the minds of most Americans, it is the preferred method of persuasion by political figures,
multinational corporate advertising, and the upper 2% of the elite, who have an interest in
controlling public opinion. Newspapers and radio experienced this same history, but are
now somewhat obsolete in the science of changing public opinion. Though I do not
suspect television to become completely obsolete within the next 20 years, I do see the
Internet being used by the same political figures, multinational corporations, and upper 2% elite, for the same purposes. At this time, in the Internet's young history, it is largely
unregulated, and can be accessed and changed by any person with a computer and a
modem; no license required, and no need for millions of dollars of equipment. But, in
reviewing our history, we find that newspaper, radio and television were once unregulated
too. It is easy to see why government has such an interest in regulating the Internet these
days. Though public opinion supports regulating sexual material on the Internet, it is just
the first step in total regulation, as experienced by every other popular mass media in our
history. This is why it is imperative to educate people about the Internet, and make it be
known that any regulation of it is destructive to us, not constructive! I have been a daily
user of the Internet for 5 years (and a daily user of BBS communications for 9 years), which makes me a senior among us. I have seen the moves to regulate this type of
communication, and have always openly opposed it.
My feelings about technology, the Internet, and political process are simple. In light of the history of mass communication, there is nothing we can do to protect any media from the "sound byte" or any other form of commercial poisoning. But, our country's public
opinion doesn't have to fall into a nose-dive of lies and corruption, because of it! The first experience I had in a course on Critical Thinking came when I entered college. As many good things as I have learned in college, I found this course to be most valuable to my basic education. I was angry that I hadn't had access to the power of critical thought over my twelve years of basic education. Simple forms of critical thinking can be taught as early as kindergarten. It isn't hard to teach a young person to understand the patterns of persuasion, and be able to defend themselves against them. Television doesn't have to be a weapon against us, used to sway our opinions to conform to people who care about their own prosperity, not ours. With the power of a critical thinking education, we can stop being motivated by the sound byte and, instead we can laugh at it as a cheap attempt to persuade us.
I feel that the advance of technology is a good trend for our society;
however, it must be in conjunction with advance in education so that society is able to
master and understand technology. We can be the masters of technology, and not let it be
the masters of us.
Bibliography
Where have the good jobs gone?, By: Mortimer B. Zuckerman
U.S. News & World Report, volume 119, pg. 68 (July 31, 1995)
Wealth: Static Wages, Except for the Rich, By: John Rothchild
Time Magazine, volume 145, pg. 60 (January 30, 1995)
Welfare Reform, By: Lawrence Mishel
http://epn.org/epi/epwelf.html (Feb 22, 1994)
20 Hot Job Tracks, By: K.T. Beddingfield, R. M. Bennefield, J. Chetwynd,
T. M. Ito, K. Pollack & A. R. Wright
U.S. News & World Report, volume 119, pg. 98
f:\12000 essays\sciences (985)\Computer\Technology in our Society.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No doubt, technology is increasingly important in the modern world. It is amazing how fast
technology has been developed. Nearly every major advance was invented in the last century.
These inventions are always planned for a positive result; however, the negative effects often do not become apparent until after the event. These effects will be dealt with in the following paragraphs, with reference to related materials.
The text, "Whose Life is it Anyway?", by Brian Clark, has clearly illustrated that with the
development of medical technology, people can now have a better quality of life. Moreover,
many lives which normally would not survive without the advance in medical treatment can now
be artificially prolonged. The central character, Ken Harrison, who becomes a quadriplegic
after a car accident, has met this situation. Nevertheless, it is cruel to ask him to face
this life if he does not desire to. He can no longer sculpt, run, move, kiss or have any
form of sexual fulfillment. Obviously, his normal life has drifted away. The tendency to
sustain people's lives, just because the technology is available, is intolerable under
certain circumstances. It is the individual patient who must make a decision about whether
to keep himself alive. "What is the point of prolonging a person's biological life if it is
obtained at the cost of a serious assault on that person's liberty?" There is probably no
simple answer for this question. Any patient's decision should be respected, not based on
the fact of all available technologies. This medical technology has the potential for both
good and bad results. However, it is very important in today's society.
"Insurance in the Genes" is a piece of valuable material which explores another area in the
technological field. Nowadays, genetic engineering essentially plays an important role.
Genetic testing can predict a person's biological use-by date, forecasting everything from
heart attacks to breast cancer. People can therefore have a basic concept of their health
situation and prevent what is going to happen if technology allows them to know this
beforehand. "Up until now, only 50 genetic tests have been developed to detect diseases. But
within a decade, there will be tests for 5000 diseases." It is a remarkable increase. In the
near future, hopefully, genetic testing will be employed to reveal potential health risks.
It is a positive effect of technology in the modern world.
Another useful source on the effects of technology in our world is the documentary. On 23
April 1996, SBS broadcast a film entitled "Weapon: A Battle for Humanity". It portrayed
landmines and laser weapons as devils. Evidently, mines do not just shatter individual
lives; they also shatter whole communities. In World War II, mines were used as defensive
weapons. However, they do not kill only soldiers, but also farmers in their fields, children
at play and women collecting food. People complained about their existence in the past and
still do now.
Laser weapons have also been abused in the military field, with plans to deploy them in war.
It has been recognized that, under certain conditions, laser weapons can cause loss of sight,
and no medical science today can actually give sight back.
Weapons should only be objects of defense. However, because of the advance of technology,
they have become more and more powerful. Scientists clearly know that misusing weapons will
result in deaths, yet they are still working towards more powerful weapons which can result
in even more deaths. Why is this? Weapons lead to homelessness, disasters, sacrifices and
death. This study of the development of landmines and laser weapons shows that technology
can be used for destructive and immoral reasons. It is shocking to learn that the USA, a
peaceful nation and a member of the United Nations, spent more than two-thirds of its
research and development funding on military projects in the 1980s.
My personal experience has given me a lot of insight into this issue. In today's
society, communication and transport are significant features, and over the last decade the
technology behind them has developed rapidly. People who want to go to other countries
can travel by airplane, and people who want to communicate with friends overseas can use the
telephone, fax or Internet. Not only in Australia, but also in developing countries, the
Internet has become more and more common. With the use of the Internet, I can now travel all
over the world without stepping out of my door. Most importantly, it saves a large amount of
money, and having Internet access is important to me. The Internet has taken communication a
step further: information is accessible to anyone who owns this form of technology. It
opens up a new international community, which is positive and should lead to a peaceful
modern world.
So in the world today, technology is perhaps the most important driving force of our
society: it creates dilemmas concerning life and death, changes nature through genetic
engineering, produces immoral weapons, and offers the instant advantages of the Internet.
f:\12000 essays\sciences (985)\Computer\Telecommunication.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Telecommunication
1. Introduction
Computer and telephone networks have a gigantic impact on today's society. From letting you call John in Calgary to letting you make a withdrawal at your friendly ATM, they control the flow of information. But today's complicated and expensive networks did not start out big and complicated; they began as a wire and two terminals back in 1844. From these simple networks to the communication giants of today, we will look at the evolution of the network and the basis on which it functions.
2. The Beginnings
2.1. Dot Dot Dot Dash Dash Dash Dot Dot Dot
The network is defined as a system of lines or structures that cross. In telecommunications this is a connection of peripherals so that they can exchange information. The first such exchange of information was on May 24, 1844 when Samuel Morse sent the famous message "What hath God wrought" from the US Capitol in Washington D.C. across a 37 mile wire to Baltimore using the telegraph. The telegraph is basically an electromagnet connected to a battery via a switch. When the switch is down, the current flows from the battery through the key, down the wire, and into the sounder at the other end of the line. By itself the telegraph could express only two states, on or off. This limitation was overcome by using the duration of the connection to distinguish a dot from a dash, the dot being short and the dash long. From these combinations of dots and dashes the Morse code was formed. The code included all the letters of the English alphabet, all the numbers and several punctuation marks. A variation on the telegraph was a receiving module that Morse had invented. The module consisted of a mechanically operated pencil and a roll of paper. When a message was received, the pencil would draw the corresponding dashes and dots on the paper to be deciphered later. Many inventors, including Alexander Bell and Thomas Edison, sought to improve the telegraph. Edison devised a deciphering machine: when receiving Morse code, it would print the corresponding letters on a roll of paper, eliminating the need to decode the message by hand.
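To make the dot-and-dash idea concrete, here is a minimal sketch of a Morse encoder in Python. The table covers only a handful of letters for illustration; it is not the full code.

MORSE = {
    'S': '...', 'O': '---', 'E': '.', 'T': '-', 'A': '.-', 'N': '-.',
}

def encode(text):
    # Each letter becomes a group of dots and dashes; a space separates letters.
    return ' '.join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(encode("SOS"))   # prints: ... --- ...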
2.2. Mr. Watson, Come Here!
The first successful telephone was invented by Alexander Graham Bell. He and Elisha Gray fought against time to invent and patent the telephone. They both filed for patents on the same day, February 14, 1876, but Bell arrived a few hours ahead of Gray, thus getting the patent on the telephone. The patent issued to Bell was number 174,465, and is considered the most valuable patent ever issued. Bell quickly tried to sell his invention to Western Union, but they declined and hired Elisha Gray and Thomas Edison to invent a better telephone. A telephone battle began between Western Union and Bell. Soon after, Bell filed suit against Western Union and won, since he possessed the basic rights and patents to the telephone. As a settlement, Western Union handed over its whole telephone network to Bell, giving him a monopoly in the telephone market. During his experiments to create a functional telephone, Bell pursued two separate designs for the telephone transmitter. The first used a membrane attached to a metal rod. The metal rod was submerged in a cup of mild acid. As the user spoke into the transmitter, the membrane vibrated, which in turn moved the rod up and down in the acid. This motion of the rod in the acid caused variations in the electrical resistance between the rod and the cup of acid. One of the greatest drawbacks to this model was that the cup of acid would have to be constantly refilled. The second of Bell's prototypes was the induction telephone transmitter. It used the principle of magnetic induction to change sound into electricity. The membrane was attached to a metal rod which was surrounded by a coil of wire. The movement of the rod in the coil produced a weak electric current. An advantage was that, in theory, it could be used both as a transmitter and a receiver. But since the current produced was so weak, it was unsuccessful as a transmitter. Most modern telephones still use a variation of Bell's design. The first practical transmitter was invented by Thomas Edison while he was working for Western Union. During his experiments Edison noticed that certain carbon compounds change their electrical resistance when subjected to varying pressure. So he sandwiched a carbon button between a metal membrane and a metal support. The motion of the membrane changed the pressure on the carbon button, varying the flow of electricity through the microphone. When the Bell vs. Western Union lawsuit was settled, the rights to this transmitter were also taken over by Bell.
2.3. Please Wait, I'll Connect You.
The first network of telephones consisted of switchboards. When a customer wanted to place a call, he would turn a crank on his telephone terminal at home. This would produce a current through the line, and a light at the switchboard would light up. The caller would tell the operator where he wanted to call, and she would connect him by inserting a plug into the jack corresponding to the desired phone. In earlier years Bell had found that he could use the ground as the return part of the circuit, but this left the telephone very susceptible to interference from anything electrical. So in the mid-1880s he realized that he would have to change the telephone networks from one-wire to two-wire circuits. In 1889 Almon Brown Strowger invented the telephone dial, which eliminated the need for telephone operators.
2.4. The Free Press Reported That President Carter.......
French inventor Emile Baudot created the first efficient printing telegraph. The printing telegraph was the first to use a typewriter-like keyboard and allowed eight users to share the same line. More importantly, his machines did not use Morse code. Baudot's five-level code sent five pulses for each character transmitted. The machines did the encoding and decoding, eliminating the need for operators. After some improvements by Donald Murray, the rights to the machine were sold to Western Union and Western Electric. The machine was named the teletypewriter and was also known by its nickname, TTY. A service called telex was offered by Western Union. It allowed subscribers to exchange typed messages with one another.
3. From The Carterfone to the 14,400
3.1. I'll Patch Her Up On The Carterfone, Captain.
The first practical computers used punched cards as a method of storing data. These punched cards held 80 characters each. They dated back to the mechanical vote-counting machine invented by Herman Hollerith in 1890. But this type of computer was hard and expensive to operate: it was slow, and the punched cards could easily be lost or destroyed. One of the first VDTs (Video Display Terminals) was the Lear-Siegler ADM-3A. It could display 24 lines of 80 characters each (a remarkable feat of technology).
One of the regulations that AT&T passed was that no other company's equipment could be physically connected to any of its lines or equipment. This meant that unless AT&T had made a peripheral, it could not legally be connected to the telephone jack. In 1966 a small Texas company called Carterfone invented a simple device that could get around these regulations. The Carterfone allowed a company's radio to be connected to the telephone system. The top portion of the Carterfone consisted of molded plastic. When a radio user needed to use the telephone, the radio operator at the base station placed the receiver in the Carterfone and dialed the number. This allowed the user to call through the radio. AT&T challenged the legality of connecting the Carterfone to the phone lines and lost the battle. In 1975 the FCC passed the Part 68 rules. These were specifications that, if met, would allow third-party companies to sell and hook up their equipment to the telephone network. This turned the telephone industry upside down and challenged AT&T's monopoly in the telephone business.
3.2. So Gentlemen, 'A' Will Be 65
With more and more electronic communication and the invention of VDTs, the shortcomings of the Baudot code were realized. So in 1966, several telecommunications companies devised a replacement for the Baudot code. The result was the American Standard Code for Information Interchange, or ASCII. ASCII uses 7 bits of code, allowing it to represent 128 characters without a shift code. The code defined 96 printable characters (A through Z in upper- and lowercase, numbers from 0 to 9, and various punctuation marks) and several control characters such as carriage return, line feed, backspace, etc. ASCII also included an error checking mechanism. An extra bit, called the parity bit, is added to each character. In even parity mode, the parity bit is set to one when the data bits contain an odd number of ones and to zero when they contain an even number, so that the total number of ones is always even. IBM invented its own code, which used 8 bits, giving 256 possible characters. The code was called EBCDIC, for Extended Binary Coded Decimal Interchange Code, and was not sequential. Extended ASCII was designed so that PCs could attain compatibility with the IBM machines; its upper 128 characters include pictures such as lines, hearts and scientific symbols. In 1969 guidelines were set for the construction of serial ports. The RS-232C standard was established to define a way to move data over a communications link. The RS-232C is commonly used to transmit ASCII code but can also transmit Baudot and EBCDIC data. The standard normally uses a 25-pin D-shell connector with a male plug on the DTE (Data Terminal Equipment) and a female plug on the DCE (Data Communications Equipment).
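As a small illustration of the parity scheme just described, the following Python sketch computes the even-parity bit for a 7-bit ASCII character; the character chosen is arbitrary.

def even_parity_bit(ch):
    bits = ord(ch) & 0x7F           # keep only the 7 ASCII data bits
    ones = bin(bits).count('1')     # how many one-bits the character contains
    return ones % 2                 # 1 if the data has an odd number of ones

# 'A' is 65, binary 1000001, which has two ones, so the parity bit is 0
print(even_parity_bit('A'))         # prints: 0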
3.3. Hello Joshua, Would You Like To Play A Game...
In the 1950s a need arose to connect computer terminals across ordinary telephone lines. This need was fulfilled by AT&T's Bell 103 modem. A modem (modulator/demodulator) is used to convert the on-off digital pulses of computer data into on-off analog tones that can be transmitted over a normal telephone circuit. The Bell 103 operated at a speed of 300 bits per second, which at that time was more than ample for the slow printing terminals of the day. The Bell 103 used two pairs of tones to represent the on-off states of the RS-232C data line: one pair for the modem placing the call and the other pair for the modem answering it. The calling modem sends data by switching between 1070 and 1270 hertz, and the answering modem by switching between 2025 and 2225 hertz. The principle on which the Bell 103 operated is still in use today.
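The frequency-shift idea can be sketched in a few lines of Python. The two calling-side frequencies are the ones named above; the sample rate, bit rate and framing details are illustrative assumptions, not part of the Bell 103 description.

import math

ORIGINATE = {0: 1070.0, 1: 1270.0}   # calling modem: space and mark tones in hertz
SAMPLE_RATE = 8000                   # assumed sample rate for this sketch
BIT_RATE = 300

def fsk_samples(bits):
    # Each bit simply selects one of the two tones for the duration of the bit.
    samples_per_bit = SAMPLE_RATE // BIT_RATE
    out = []
    for bit in bits:
        freq = ORIGINATE[bit]
        for n in range(samples_per_bit):
            out.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return out

print(len(fsk_samples([1, 0, 1])))   # 3 bits at roughly 26 samples each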
During the sixties and seventies the concept of mainframe networks arose. A mainframe consisted of a very powerful computer to which thousands of terminals were connected. The mainframe worked on a timesharing process: many users on terminals could each use a limited amount of the host computer's resources, letting many parties access the host at the same time. This type of network, however, was very expensive, and since each timesharing user received only a small share of the host's total computing power, the terminals felt slow and sluggish.
In the late seventies the personal computer was introduced to the public. A personal computer consisted of a monitor, a keyboard, a CPU (Central Processing Unit), and various other connectors and memory chips. The good things about PCs were that they did not have to share their CPU and that their operating costs were much less than those of their predecessors. With a software package, the computers could emulate terminals and be connected to the mainframe network.
Bell Laboratories came up with the 212A unit, which operated at a speed of 1200 bits per second. This unit, however, was very susceptible to noise interference.
3.4. Hey Bell! I Can Hang Myself Up!
After the breakup of the AT&T empire that controlled the modem industry, many other companies started to create new designs of modems. Hayes Microcomputer Products, took the lead in the PC modem business. Hayes pioneered the use of microprocessor chips inside the modem itself. The Hayes Smartmodem, introduced in 1981, used a Zilog Z-8 CPU chip to control the modem circuitry and to provide automatic dialing and answering. The Hayes unit could take the phone off the hook, wait for the dialtone, and dial a telephone number all by itself. The Hayes Smartmodems sometimes had more powerful CPUs than the computers that they were connected to. The next advancement was the invention of the 2400 bits per second modem. The specifications came from the CCITT, an industry standard setting organization composed of hundreds of companies world wide. The new standard was designated as V.22bis and is still in use today. Other CCITT standards that followed were the V.32 (9600 bps), the V.32bis (14400 bps), the V42 (error control), and the V42bis (data compression). Virtually all modems today conform to these standards.
The next big computer invention was the fax modem. It uses on-off data transmission just as an ordinary modem does, but for the purpose of sending a black and white image. Each on-off signal represents a black or white area of the image. The image is sent as a set of zeros and ones and is then reconstructed on the receiving end.
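One simple way to turn a black-and-white scan line into numbers is run-length encoding, which counts how many identical pixels appear in a row. Real fax machines combine run lengths with further compression; the Python sketch below shows only the counting step.

def run_lengths(pixels):
    # Collapse each run of identical pixels into a (value, length) pair.
    runs, count = [], 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((pixels[-1], count))
    return runs

line = [0, 0, 0, 1, 1, 0, 0, 0, 0]       # 0 = white pixel, 1 = black pixel
print(run_lengths(line))                  # prints: [(0, 3), (1, 2), (0, 4)]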
4. LANs
4.1. I Donnwanna File-Share!
Network Operating Systems (OS) are actually a group of programs that give computers and peripherals the ability to accept requests for service across a network and give other computers the ability to correctly use those services. Servers share their hard disks, attached peripherals such as printers and optical drives, and communication devices. They inspect requests for proper authorization, check for conflicts and errors and then perform the requested service. There is a multitude of different types of servers. File servers are equipped with large hard drives that are used to share files and information, as well as whole applications. The file-server software allows shared access to specific segments of the data files under controlled conditions. Print servers accept print jobs sent by anyone on the network. These servers are equipped with spooling software (saving data to disk until the printer is ready to accept it) that is vital in situations where many requests can pour in at the same time. Network Operating Systems package requests from the keyboard and from applications in a succession of data envelopes for transmission across the network. For example, Novell's NetWare will package a directory request in an IPX (Internetwork Packet Exchange) packet, and the LAN adapter will then package the IPX request into an Ethernet frame. In each step, addressing information and error-control data are added to the packet.
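The envelope-within-an-envelope idea can be illustrated with a toy Python sketch. The field layouts below are invented for illustration only and are not the real IPX or Ethernet header formats.

def ipx_packet(payload, src, dst):
    # Wrap the request in an inner "envelope" with simplified addressing fields.
    return f'IPX src={src} dst={dst} len={len(payload)} | {payload}'

def ethernet_frame(packet, src_mac, dst_mac):
    # The LAN adapter adds its own outer envelope plus error-control data.
    return f'ETH {src_mac}->{dst_mac} | {packet} | CRC'

request = 'read directory SYS:PUBLIC'
print(ethernet_frame(ipx_packet(request, 'node-A', 'server-1'),
                     '00:00:A1:11:11:11', '00:00:A1:22:22:22'))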
4.2. Eight Go In One Comes Out
The Network Interface Card, or LAN adapter, is the interface between the computer and the network cabling. Within the computer it is responsible for moving data between the RAM (Random Access Memory) and the card itself. Externally it controls the flow of data in and out of the network cabling system. Since computers are typically faster than the network, the LAN adapter must also function as a buffer between the two. It is also responsible for changing the form of the data from a wide parallel stream coming in eight bits at a time to a narrow stream moving one bit at a time in and out of the network port. To handle these tasks the LAN adapters are equipped with a microprocessor and 8-64K of RAM. Some of the cards include sockets for ROM chips called boot ROMs. These chips allow computers without hard drives to boot operating systems from the file server.
4.3. Take Your Turn!
Ethernet and Token Ring network adapters use similar systems of electrical signaling over the network cable. These signals are conceptually similar to the Baudot and Morse codes. A technique called Manchester encoding uses voltage pulses ranging from -15 V to +15 V in order to transmit the zeros and ones. The network cable has only one drawback: it can only carry signals from one network card at a time. So each LAN architecture needs a media-access control (MAC) scheme in order to make the network cards take turns transmitting into the cable. Ethernet cards listen to the traffic on the cable and transmit only if there is a break in the traffic, when the channel is quiet. This technique is called Carrier-Sense Multiple Access with Collision Detection (CSMA/CD). With collision detection, if two cards start transmitting at the same time, they see the collision, stop, and resume some time later. Token Ring networks use a much more complex process called token passing. Token Ring cards wait for permission to transmit into the cable that forms an electrical loop. The cards use their serial numbers to find the master interface card. This card starts a message called a token. When a card with information to send receives the token, it sends the data across the network. After the addressed interface card receives the information and returns it to the originating card, the token is given back to the master to be passed on to the next card. The ARCnet network uses a system very similar to that of Token Ring. Instead of using a token, the master card keeps a table of all active cards and polls each one in turn, giving permission to transmit.
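The listen-then-back-off rule of CSMA/CD can be sketched as a toy Python function. The exponential back-off range used here mirrors the general Ethernet approach, but the details are simplified assumptions rather than the full specification.

import random

def try_to_send(channel_busy, collision_detected, attempt):
    # Carrier sense: do not transmit while another card is using the cable.
    if channel_busy:
        return 'wait: carrier sensed on the cable'
    # Collision detection: stop and retry after a random number of slot times.
    if collision_detected:
        backoff = random.randint(0, 2 ** attempt - 1)
        return f'collision: back off {backoff} slot times, then retry'
    return 'frame transmitted'

print(try_to_send(channel_busy=False, collision_detected=False, attempt=1))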
4.4. Tied In A Knot
Various types of cabling are used to connect the LAN adapters to the servers. Unshielded twisted pair wires offer rather slow speed, are very inexpensive, are small, and can only span very short distances. These cables use the RJ-45 connector. Coaxial cable offers fast speed, is rather expensive, has a medium sized diameter, and can span medium distances. Coaxial cable uses BNC connectors. The shielded twisted pair cable offers fast speed, is more expensive than the coaxial cable, has a large diameter, and can only span short distances. These cables use the IBM data connector. The fiber optic cable is the fastest possible type of data transfer, costs astronomical amounts of money, has a tiny diameter, and can span very long distances. This cable uses the ST fiber optic connector. Wiring hubs are used as central points for the cables from the network interface cards.
4.5. Loves Me, Loves Me Not, Server Based, Peer To Peer...
There are two general types of LANs. Server-based networks rely on one major server to store data, offer access to peripherals, handle the printing and accomplish all the work associated with network management. Server-based networks have a high start-up cost, but offer high security as well as ease of operation. These networks become more economical as more computers are added. In peer-to-peer networks the network responsibilities are divided among many computers. Some act as file servers, others as print servers, some as CD-ROM servers, tape drive servers, etc. The start-up cost of these networks is much lower, but when more computers are added to the network, some of the servers may not be able to handle the extra activity.
5. Links Between LANs
5.1. She Just Won't Send Sysop!
Most networks have a very short information transfer range. But in an ever-shrinking world the need for links between LANs has never been higher. This section will explain the components and information needed to link LANs. When an electric current travels over a long distance, its signal weakens and it becomes susceptible to electromagnetic interference. To combat the length problem a component has been devised: the repeater, a little box inserted into the cable whose primary function is to amplify the weakening pulse and send it on its way. Bridges are used to analyze the station address of each Ethernet packet and determine the destination of the message. Routers strip the outer Ethernet framing from a data packet in order to get at the data inside. This data is sent to other routers in other parts of the world and then repackaged by those routers. The removal of the excess packet layers by the routers decreases the time required to transfer the data. If networks use the same addressing protocol, bridges can be used to link them; however, if they use different addressing protocols, only routers may be used. MANs (Metropolitan Area Networks) are in use and under development today. These use routers that are connected, preferably via fiber optic cable, to create one large network.
5.2. Pluto Calling Earth!
Networks spanning more than about 1000 m typically rely on digital telephone lines for data transfer. These networks are called Circuit Switched Digital Networks. Circuit Switched Digital Networks utilize a switching matrix at the central office of a telephone company that connects local calls to long distance services. Telephone companies now offer dial-up circuits with signaling rates of 56, 64, and 384 kilobits per second as well as 1.544 megabits per second. Another type of LAN-to-LAN connection is the packet switching network. These are services that a network router calls up on a digital line. They consist of a group of packet switches connected via intraswitch trunks (usually fiber optic) that relay addressed packets of information between them. Once a packet reaches the destination packet switch, it is sent via another digital connection to the receiving router.
f:\12000 essays\sciences (985)\Computer\Telecommunications.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Telecommunications
The transmission of words, sounds, images, or data in the form of electronic or electromagnetic signals or impulses. Transmission media include the telephone (using wire or optical cable), radio, television, microwave, and satellite. Data communication, the fastest growing field of telecommunication, is the process of transmitting data in digital form by wire or radio.
Digital data can be generated directly in a 1/0 binary code by a computer or can be produced from a voice or visual signal by a process called encoding. A data communications network is created by interconnecting a large number of information sources so that data can flow freely among them. The data may consist of a specific item of information, a group of such items, or computer instructions. Examples include a news item, a bank transaction, a mailing address, a letter, a book, a mailing list, a bank statement, or a computer program.
The devices used can be computers, terminals (devices that transmit and receive information), and peripheral equipment such as printers (see Computer; Office Systems). The transmission line used can be a normal or a specially purchased telephone line called a leased, or private, line (see Telephone). It can also take the form of a microwave or a communications-satellite linkage, or some combination of any of these various systems.
Hardware and Software
Each telecommunications device uses hardware, which connects a device to the transmission line; and software, which makes it possible for a device to transmit information through the line.
Hardware
Hardware usually consists of a transmitter and a cable interface, or, if the telephone is used as a transmission line, a modulator/demodulator, or modem.
A transmitter prepares information for transmission by converting it from a form that the device uses (such as a clustered or parallel arrangement of electronic bits of information) to a form that the transmission line uses (such as, usually, a serial arrangement of electronic bits). Most transmitters are an integral element of the sending device.
A cable interface, as the name indicates, connects a device to a cable. It converts the transmitted signals from the form required by the device to the form required by the cable. Most cable interfaces are also an integral element of the sending device.
A modem converts digital signals between the modulated form required by the telephone line and the demodulated form that the device itself requires. Modems transmit data through a telephone line at various speeds, which are measured in bits per second (bps) or as signals per second (baud). Modems can be either integral or external units. An external unit must be connected by cable to the sending device. Most modems can dial a telephone number or answer a telephone automatically.
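Bits per second and baud are related but not identical: a modem that packs several bits into each signalling event moves more bits per second than its baud rate. The figures in this small Python example are illustrative.

baud = 2400              # signalling events (symbols) per second
bits_per_symbol = 4      # bits carried by each symbol in this example
bps = baud * bits_per_symbol
print(bps)               # prints: 9600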
Software
Among the different kinds of software are file-transfer, host, and network programs. File-transfer software is used to transmit a data file from one device to another. Host software identifies a host computer as such and controls the flow of data among devices connected to it. Network software allows devices in a computer network to transmit information to one another.
Applications
Three major categories of telecommunication applications can be discussed here: host-terminal, file-transfer, and computer-network communications.
Host-Terminal
In these types of communications, one computer-the host computer-is connected to one or more terminals. Each terminal transmits data to or receives data from the host computer. For example, many airlines have terminals that are located at the desks of ticket agents and connected to a central, host computer. These terminals obtain flight information from the host computer, which may be located hundreds of kilometers away from the agent's site.
The first terminals to be designed could transmit data only to or from such host computers. Many terminals, however, can now perform other functions such as editing and formatting data on the terminal screen or even running some computer programs. Manufacturers label terminals as "dumb," "smart," or "intelligent" according to their varying capabilities. These terms are not strictly defined, however, and the same terminal might be labeled as dumb, smart, or intelligent depending upon who is doing the labeling and for what purposes.
File-Transfer
In file-transfer communications, two devices are connected: either two computers, two terminals, or a computer and a terminal. One device then transmits an entire data or program file to the other device. For example, a person who works at home might connect a home computer to an office computer and then transmit a document stored on a diskette to the office computer.
An outgrowth of file transfer is electronic mail. For example, an employee might write a document such as a letter, memorandum, or report on a computer and then send the document to another employee's computer.
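A present-day equivalent of that exchange can be sketched with Python's standard smtplib module. The mail server name and the addresses are placeholders, and error handling is omitted.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['Subject'] = 'Quarterly report'
msg['From'] = 'alice@example.com'          # placeholder sender
msg['To'] = 'bob@example.com'              # placeholder recipient
msg.set_content('Here is the memorandum we discussed.')

with smtplib.SMTP('mail.example.com') as server:   # placeholder mail host
    server.send_message(msg)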
Computer-Network
In computer-network communications, a group of devices is interconnected so that the devices can communicate and share resources. For example, the branch-office computers of a company might be interconnected so that they can route information to one another quickly. A company's computers might also be interconnected so that they can all share the same hard disk.
The three kinds of computer networks are local area networks (LAN), private branch exchange (PBX) networks, and wide-area networks (WAN). LANs interconnect devices with a group of cables; the devices communicate at a high speed and must be in close proximity. A PBX network interconnects devices with a telephone switching system; in this kind of network, the devices must again be in close proximity. In wide-area networks, on the other hand, the devices can be at great distances from one another; such networks usually interconnect devices by means of telephone.
Telecommunication Services
Public telecommunication services are a relatively recent development in telecommunications. The four kinds of services are network, information-retrieval, electronic-mail, and bulletin-board services.
Network
A public network service leases time on a WAN, thereby providing terminals in other cities with access to a host computer. Examples of such services include Telenet, Tymnet, Uninet, and Datapac. These services sell the computing power of the host computer to users who cannot or do not wish to invest in the purchase of such equipment.
Information-Retrieval
An information-retrieval service leases time on a host computer to customers whose terminals are used to retrieve data from the host. An example of this is CompuServe, whose host computer is accessed by means of the public telephone system. This and other such services provide general-purpose information on news, weather, sports, finances, and shopping.
Other information-retrieval services may be more specialized. For example, Dow Jones News Retrieval Services provide general-purpose information on financial news and quotations, corporate-earning estimates, company disclosures, weekly economic survey updates, and Wall Street Journal highlights. Newsnet provides information from about 200 newsletters in 30 different industries; Dialog Information Services, BRS Bibliographic Retrieval Services, and Orbit Information Retrieval Services provide library information; and Westlaw provides legal information to its users. See Database.
Electronic-Mail
By means of electronic mail, terminals transmit documents such as letters, reports, and telexes to other computers or terminals. To gain access to these services, most terminals use a public network. Source Mail (available through The Source) and EMAIL (available through CompuServe) enable terminals to transmit documents to a host computer. The documents can then be retrieved by other terminals. MCI Mail Service and the U.S. Postal ECOM Service (also available through The Source) let terminals transmit documents to a computer in another city. The service then prints the documents and delivers them as hard copy. ITT Timetran, RCA Global Communications, and Western Union Easylink let terminals send telexes to other cities.
Bulletin-Board
By means of a bulletin board, terminals are able to facilitate exchanges and other transactions. Many bulletin boards do not charge a fee for their services. Users of these services simply exchange information on hobbies, buy and sell goods and services, and exchange computer programs.
Ongoing Developments
Certain telecommunication methods have become standard in the telecommunications industry as a whole, because if two devices use different standards they are unable to communicate properly. Standards are developed in two ways: (1) the method is so widely used that it comes to dominate; (2) the method is published by a standard-setting organization. The most important organization in this respect is the International Telecommunication Union, a specialized agency of the United Nations, and one of its operational entities, the International Telegraph and Telephone Consultative Committee (CCITT). Other organizations in the area of standards are the American National Standards Institute, the Institute of Electrical and Electronics Engineers, and the Electronic Industries Association. One of the goals of these organizations is the full realization of the integrated services digital network (ISDN), which is projected to be capable of transmitting both voice and nonvoice data around the world in digital form, through a variety of media and at very high speeds.
Other developments in the industry are aimed at increasing the speed at which data can be transmitted. Improvements are being made continually in modems and in the communications networks. Some public data networks support transmission of 56,000 bits per second (bps), and modems for home use (see Microcomputer) are capable of as much as 28,800 bps.
Introduction
When a handful of American scientists installed the first node of a new computer network in the late 60's, they could not possibly have known what phenomenon they had launched. They had been set the challenging task of developing and realising a completely new communication system, one that would be fully damage-resistant, or at least remain functional even if an essential part of it lay in ruins, in case a third world war started. The scientists did what they had been asked to do. By 1972 there were thirty-seven nodes already installed and ARPANET (Advanced Research Projects Agency NET), as the system of computer nodes was named, was working (Sterling 1993). Since those "ancient times", during which the network was used only for national academic and military purposes (Sterling 1993), much of the character of the network has changed. Its users today work in both commercial and non-commercial fields, not just in academic and governmental institutions. Nor is the network only national: it has expanded to many countries around the world, becoming international, and in that way it got its name. People call it the Internet.
The popularity of this new phenomenon is rising rapidly, almost beyond belief. In January 1994 there were an estimated 2 million computers linked to the Internet. However, this is nothing compared to the figures from last year's statistics. At the end of 1995, 10 million computers with 40-50 million users were assumed to be connected to the network-of-networks. If it goes on like this, most personal computers will be wired to the network by the end of this century (Internet Society 1996).
The Internet is phenomenal in many ways. One of them is that it connects people from different nations and cultures. The network enables them to communicate, exchange opinions and gain information from one another. As each country has its own national language, in order to communicate and make themselves understood in this multilingual environment, the huge number of Internet users needs to share a knowledge of one particular language, a language that would function as a lingua franca. On the Internet, for various reasons, that lingua franca is English.
Because of the large number of countries into which the Internet has spread, and the considerable
variety of languages they bring with them, English, given its status as a global language, has become a necessary communication medium on the Internet. What is more, the position of English as the language of the network is strengthened by the explosive growth of the computer web, as great numbers of new users connect to it every day.
Internet, in computer science, an open interconnection of networks that enables connected computers to communicate directly. There is a global, public Internet and many smaller-scale, controlled-access internets, known as enterprise internets. In early 1995 more than 50,000 networks and 5 million computers were connected via the Internet, with a computer growth rate of about 9 percent per month.
Services
The public Internet supports thousands of operational and experimental services. Electronic mail (e-mail) allows a message to be sent from one computer to one or more other computers. Internet e-mail standards have become the means of interconnecting most of the world's e-mail systems. E-mail can also be used to create collaborative groups through the use of special e-mail accounts called reflectors, or exploders. Users with a common interest join a mailing list, or alias, and this account automatically distributes mail to all its members.
The World Wide Web allows users to create and use point-and-click hypermedia presentations. These documents are linked across the Internet to form a vast repository of information that can be browsed easily.
Gopher allows users to create and use computer file directories. This service is linked across the Internet to allow other users to browse files.
File Transfer Protocol (FTP) allows users to transfer computer files easily between host computers. This is still the primary use of the Internet, especially for software distribution, and many public distribution sites exist.
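For a flavour of what such a transfer looks like in practice, here is a minimal sketch using Python's standard ftplib module. The host name and file name are placeholders, and anonymous login is assumed.

from ftplib import FTP

with FTP('ftp.example.com') as ftp:        # placeholder public distribution site
    ftp.login()                            # anonymous login
    with open('README.txt', 'wb') as out:
        ftp.retrbinary('RETR README.txt', out.write)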
The Usenet service allows users to distribute news messages automatically among thousands of structured newsgroups. Telnet allows users to log in to another computer from a remote location. Simple Network Management Protocol (SNMP) allows almost any Internet object to be remotely monitored and controlled.
Connection
Internets are constructed using many kinds of electronic transport media, including optical fiber, telephone lines, satellite systems, and local area networks. They can connect almost any kind of computer or operating system, and they are self-aware of their capabilities. An internet is usually implemented using international standards collectively called Transmission Control Protocol/Internet Protocol (TCP/IP). The protocols are implemented in software running on the connected computer. Most computers connected to the internet are called hosts. Computers that route data, or data packets, to other computers are called routers. Networks and computers that are part of the global Internet possess unique registered addresses and obtain access from Internet service providers.
There are four ways to connect to the public Internet: by host, network, terminal, or gateway access. Host access is usually done either with local area networks or with the use of telephone lines and modems combined with Internet software on a personal computer. Host access allows the attached computer to fully interact with any other attached computer-limited only by the bandwidth of the connection and the capability of the computer.
Network access is similar to host access, but it is usually done via a leased telephone line that connects to a local or wide area network. All the attached computers can become Internet hosts.
Terminal access is usually done via telephone lines and modems combined with terminal-emulation software on a personal computer. It allows interaction with another computer that is an Internet host.
Gateway access is similar to terminal access but is provided via on-line or similar proprietary services, or other networks such as Bitnet, Fidonets, or UUCP nets that allow users minimally to exchange e-mail with the Internet.
Development
The Internet technology was developed principally by American computer scientist Vinton Cerf in 1973 as part of a United States Department of Defense Advanced Research Projects Agency (DARPA) project managed by American engineer Robert Kahn. In 1984 the development of the technology and the running of the network were turned over to the private sector and to government research and scientific agencies for further development.
Since its inception, the Internet has continued to grow rapidly. In early 1995, access was available in 180 countries and there were more than 30 million users. It is expected that 100 million computers will be connected via the public Internet by 2000, and even more via enterprise internets. The technology and the Internet have supported global collaboration among people and organizations, information sharing, network innovations, and rapid business transactions. The development of the World Wide Web is fueling the introduction of new business tools and uses that may lead to billions of dollars worth of business transactions on the Internet in the future.
In the Internet nowadays, the majority of computers are from the commercial sphere (Vrabec
1996). In fact, the commercialisation of the network, which has been taking place during the last
three or four years, has caused the recent boom of the network, of the WWW service in particular
(Vrabec 1996). It all started in the network's homeland in 1986, when ARPANET was gradually
replaced by a newer and technologically better built network called NSFNET. This network was
more open to private and commercial organisations (Vrabec 1996) which, realising the potential of
the possible commercial use of the Internet, started to connect themselves to the network.
There are several ways in which commercial organisations can benefit from their connection to
the English-speaking Internet. Internet users are supposed to be able to speak and understand
English, and most of them actually do. With the rapidly rising number of users, the network is a
potential world market (Vrabec 1996) and English will be its important tool. The status of English
as a world language, or rather the large number of people who are able to process and use
information in English, already enables commercial organisations to present themselves, their work
and their products on the Internet. Thanks to English and the Internet, companies can correspond
with their partners abroad, respond to any question or give advice on any problem that their
international customers may have with their products, almost immediately (Vrabec 1996).
Considering the fact that many of the biggest, economically strongest and most influential organisations
are from the USA or other native English-speaking countries, this commercialisation has greatly reinforced the use of English on the Internet.
BIBLIOGRAPHY:
Cepek, Ales and Vrabec, Vladimir 1995 Internet :-) CZ, Praha, Grada
Demel, Jiri 1995 Internet pro zacatecniky, Praha, NEKLAN
Falk, Bennett 1994 Internet Roadmap, translated by David Krasensky, Praha, Computer Press
Jenkins, Simon 1995 "The Triumph of English", The Times, May 1995
Phillipson, Robert 1992 Linguistic Imperialism, Oxford, Oxford University Press
Schmidt, Jan 1996 "Carka, hacek a WWW", Computer Echo, Vol. 3/6
(also available at http://omicron.felk.cvut.cz/~comecho/ce/journal.html)
Sterling, Bruce 1993 "A Short History of the Internet", The Magazine of Fantasy and Science Fiction, Feb. 1993
Vrabec, Vladimir 1996 "Komerce na Internetu", LanCom, Vol. 4/3
f:\12000 essays\sciences (985)\Computer\Telecommuting.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Telecommuting is a very interesting and complex subject. The pros and cons of this concept are numerous and both sides have excellent arguments. In the research I've done I feel I have to argue both sides to maintain a sense of perspective. I had mixed feelings about telecommuting before I started this research and I find that this is something many others have in common with me.
The reasons for and against telecommuting can be complex or simple depending on which viewpoint you take. From a manager's viewpoint, telecommuting is a very dangerous undertaking that requires a high readiness level on the employee's part. Allowing an employee with a low (R1 or R2) readiness level to telecommute is not likely to produce a positive result. When an employee has a high readiness level and a definite desire to attempt working in the home, for some reason or another, many factors should be considered. What kind of schedule does the employee feel constitutes telecommuting? Generally speaking, telecommuting is defined as spending at least one day out of a five-day work week working in the home. Is one day at home enough for the employee? Or too little? How does the employer decide how many days to allow? Does the employee's job lend itself well to telecommuting? Some jobs, obviously, can't be accomplished using a telecommuting format. Does the employee have a good track record for working unsupervised? This relates back to readiness levels. An employee who isn't performing at a high readiness level should not even be considered as a candidate for telecommuting. All of these questions and many more must be answered on a case by case basis.
This particular venture into creative scheduling has its ups and downs as well from an employee's point of view. It can be quite a bed of roses for both employee and employer. A lot of nice smells and pretty sights, but watch out for the thorns. In several studies I reviewed I noticed that the telecommuting population loses many of the basics of the social contacts associated with the office environment. Judging the correct amount of time that an employee should spend working at home in relation to working at the office can have a significant impact on both performance and satisfaction. It's usually hard for someone to completely cut themselves off from their work environment and still perform well. The sense of being out of touch with the others in the work force can be mitigated by the use of e-mail, teleconferencing, and the ever faithful telephone. These devices, in a best case scenario, can completely substitute for face to face interaction. That's a strong statement and I would like to explain a few conditions. The best case scenario assumes an individual is at a very high readiness level and has very little perceived need for social interaction with the other office employees. In a worst case scenario an employee can lose touch with the pulse of the office, lose motivation, and their readiness level could drop. This type of scenario is likely to get out of hand if the employee is never in the office to receive the appropriate feedback.
It sounds as if I'm not really impressed with telecommuting, but that's not true. Let's look at a few of the really solid benefits for the employer. The employer can offer telecommuting as an option for prospective employees to improve recruitment. It could also be offered to current employees to keep them around. Saving one employee could save the company a large amount of money. "Most employers don't keep accurate records of the costs of losing good employees and finding and retraining replacements, but there have been estimates ranging from $30,000 to over $100,000 to replace a professional." The ever-present crunch for space could drive a company to reduce the amount of office space it requires; telecommuting makes the employee provide his own office space. It's been shown that telecommuting does increase productivity, with typical increases in the 15 to 25 percent range. These gains may come from the significantly smaller amount of time a person spends at the company water cooler. A company can improve customer service by making use of telecommuters. It would cost much less to have a few people answering phones at home at 3 o'clock in the morning than to run a skeleton crew in a heated or air-conditioned, lighted office building.
So what's in it for the employee? That depends mostly on which particular employee we are referring to. Telecommuting allows someone with a physical handicap who could not actually commute to the workplace to still function as a valuable employee. It would allow someone who has small children and feels a great need to be home for them to still work and have a career. The distance an employee must travel daily to work is a factor that can introduce great amounts of frustration and expense into their lives; telecommuting can alleviate this stress. Job satisfaction can be enhanced by allowing greater freedom and bestowing greater responsibility. Employees should be aware of some of the pitfalls of telecommuting as well as the benefits. It is estimated that telecommuters earn less overall than office workers.
As a general rule, professional and clerical telecommuters earn approximately 91% of the wages of their office-working counterparts.
All of these considerations must factor into a decision by a company to implement a telecommuting program. Many factors must be taken into account and clear organizational goals must be stated. It is vitally important for the management to support the program and for a great degree of trust to exist between employer and employee. Implementation of a pilot program can take years and involve many aspects of the company as a whole.
On the whole, I am impressed with the possibilities that telecommuting presents and daunted by the problems that can crop up. I feel that a well thought out, carefully planned, and conscientiously applied program can benefit most companies in most situations. I don't feel that telecommuting is for every company but it could certainly benefit many.
f:\12000 essays\sciences (985)\Computer\Telephones.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TELEPHONES
About 100 years ago, Alexander Graham Bell invented the telephone by accident with his assistant Mr. Watson. Over the years, the modern version of the telephone has come to make the one that Bell invented look like a piece of junk. Developments in tone dialing, call tracing, music on hold, and electronic ringers have greatly changed the telephone.
This marvelous invention allows us to communicate with the entire globe 24 hours a day just by punching in a simple telephone number. It is the most used piece of electronic apparatus in the world, and probably one of the easiest pieces of electronics to use as well. All you have to do is pick up the receiver, listen for the tone, and then dial a number using either the tone or the pulse dial.
Telephones can be separated into two main categories: the tone (touch-tone) telephones and the older rotary-dial (pulse) telephones. These can be further divided into categories such as business line (multi-line) or home line (single line). There are also many other types of phones: those that hang on the wall, sit on a desk, and so on.
THE HANDSET
No matter what kind of telephone you own, there has to be some device that allows you to talk and listen. This device is called the handset. The handset is usually made out of plastic, and inside it are two main components: the transmitter and the receiver.
THE TRANSMITTER
It is the job of the transmitter to turn the air pressure created by your sound waves into electrical signals so they can be sent to the other telephone. The waves hit a thin membrane called the diaphragm, which is physically connected to a reservoir of carbon granules. When the pressure hits the diaphragm, it shakes up the carbon granules, which are compressed and released depending on what force is exerted. At two points on the outer shell of the carbon reservoir are two electrical connections from the talk battery. By applying voltage across them, a varying current is produced and passed along the line to the waiting telephone. At the other end the current is transformed back into speech.
THE RECEIVER
The receiver turns the ever-varying current back into speech. A permanently magnetized soft iron core is covered in many turns of very fine wire, through which the electrical current is applied. The current attracts and repels an iron diaphragm, and the vibrations of the diaphragm create pressure changes that the ear translates into intelligible speech.
TELEPHONE NETWORKS
If you have ever opened up a phone (do not try this at home, you might screw it up) you will probably see a PC (printed circuit) board. The board contains the needed electronics for the phone to work properly. In older models of a working telephone, this board may look like an electronic box. This board is called the telephone network.
The telephone network's function is to provide all the necessary components and termination points (screw-on or push-on terminals). The components and the termination points connect and match the impedance of the handset (transmitter and receiver) to a two-wire telephone circuit.
Every component in the telephone has to be connected to the PC board. Usually, the board is the most reliable component inside the phone. All the delicate components are securely sealed by a metal enclosure. The PC board is a very fragile object and can be broken easily. If you look closely, you can see wires poking out of the board. The wires are soldered to the terminal legs. If you break one of those wires, man are you dead!
TELEPHONE HOOK SWITCH
Every time you finish talking over a line, you need to disconnect. The simplest way is to set the handset down. When resting in its cradle, the handset presses on a spring-loaded operating arm, which is connected to a number of switch contacts. When this happens, the phone disconnects.
THE PHONE RINGER
Once a call has been dialed through, the telephone of the person being called must be given some kind of signal to let him or her know about the incoming call. This is when the telephone rings. This type of signal is generated using an alternating current of somewhere between 90 and 220 V with a frequency of 30 Hz.
But what if you have 5 or 6 phones connected on a party line? How can you signal just one telephone to ring? The answer is frequency selection. Older telephones had different capacitor and ringer coil impedance values, and it was these small differences that made each bell respond to only one frequency.
For example, if you have 5 telephones on one party line. If a call came through for line 1, the central board would send 10 Hz (this is a guess) signal to the party line. Line 1 would ring and all the others would remain quiet. If the call was for line 5, the central board would send a 50 Hz (this is also a guess) signal so that only line 5 would ring.
The phone rings by applying voltage where needed, a resonant circuit is made. The xx Hz signal would make a magnetic field around a device called the hammer. The hammer is attracted and then repelled by the constant changing of the magnetic field. If two gongs were placed on either side of the hammer, the hammer would strike each gong successively.
Other phones use a one-gong system. It works like the two-gong system but is more compact, and because of its compact size this ringer is well suited to small wall phones and the like.
THE ROTARY DIAL (PULSE)
A rotary dial creates equally spaced make-and-break pulses according to how far the plastic dialing plate is rotated. The dialing plate has regularly spaced finger holes and a metal tab called the finger stop, which makes dialing the number you want as easy as 1-2-3. Each hole in the wheel represents one digit, 1 through 9 plus 0.
Small gears and a governor that controls the speed of the finger wheel's return open and close the internal switches at a steady rate (about ten pulses per second in a typical dial). The number of pulses created is determined by how far the finger wheel has travelled before being stopped by the finger stop. If you dial the number 5, the internal switches open and close 5 times as the wheel returns.
While you dial, a second set of switches remains closed and stays that way for the entire time you are dialing. The purpose of this second set of switches is to short out the telephone receiver for the whole dialing period; if this switch were not there, you would hear loud and frustrating clicks in the receiver.
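As a rough illustration (added for this report, not from the book), the pulse train for a dialed digit can be sketched in a few lines of Python. The rule that 0 is sent as ten pulses and the rate of roughly ten pulses per second are standard for rotary dials; the exact make and break times below are simplified assumptions.

# Toy pulse-dial simulator: each digit becomes a train of make/break pulses.
BREAK_TIME = 0.06    # seconds the loop is opened per pulse (assumed)
MAKE_TIME = 0.04     # seconds the loop is closed per pulse (assumed)

def pulses_for_digit(digit):
    """Digits 1 through 9 send that many pulses; 0 is sent as ten pulses."""
    return 10 if digit == 0 else digit

def dial(number):
    for ch in number:
        n = pulses_for_digit(int(ch))
        print("digit", ch, "->", n, "pulses,",
              round(n * (BREAK_TIME + MAKE_TIME), 2), "seconds of pulsing")

dial("505")    # the 5s produce five open/close cycles each, the 0 produces ten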
TONE DIALING
There is also an alternative to the pulse dial: the tone dial. Phones that use tone dialing contain circuitry that places audio tones on the phone line, and the central board translates these tones back into digits.
Putting an audio signal on the telephone line as a dialing method is called DTMF (dual-tone multi-frequency) dialing. It is called this because the tone dial makes a combination of two tones at once, one from a group of low frequencies and one from a group of high frequencies. When a button on the dial pad is pushed, the vertical (row) tone and the horizontal (column) tone are combined into a single signal. It is this newly made tone that is sent to the central board and then translated back into the number.
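The frequency pairs used by tone dialing are standardized, so the idea can be shown concretely. The short Python sketch below (added for illustration, not part of the original essay) maps each keypad button to its low (row) and high (column) frequency and sums the two sine waves to form the dual tone.

import math

# Standard DTMF keypad: each key pairs one row frequency with one column frequency (Hz).
ROWS = [697, 770, 852, 941]
COLS = [1209, 1336, 1477]
KEYPAD = ["123", "456", "789", "*0#"]

def dtmf_pair(key):
    for r, row in enumerate(KEYPAD):
        if key in row:
            return ROWS[r], COLS[row.index(key)]
    raise ValueError("unknown key")

def dtmf_sample(key, t):
    """Instantaneous value of the dual tone: the sum of the two sine waves."""
    low, high = dtmf_pair(key)
    return 0.5 * math.sin(2 * math.pi * low * t) + 0.5 * math.sin(2 * math.pi * high * t)

print(dtmf_pair("5"))                      # (770, 1336)
print(round(dtmf_sample("5", 0.001), 3))   # one sample of the combined tone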
TELEPHONE CORDS
Older telephone cords ended in fork-shaped metal lugs crimped onto the wires with a tool called a crimper. When installed, these lugs were screwed into the terminal box on the wall. This was a real pain, because to fix the phone you had to open the box and then undo all the screws, a process that could take a long time.
To make this job a lot easier, coiled cords and modular lines were invented. To remove the handset or the telephone, all you have to do is unplug the modular connector from its socket, and that is it. Modular cords can be bought in nearly any electronics store.
There are three kinds of cords. The first is the fully modular cord, which has small modular clips on both ends. The second is the kind described above, the spade-lug cord. The third is the 1/4 modular cord, which has a modular connector on one end and the old-fashioned spade-lug end on the other; these 1/4 cords are not very common.
BIBLIOGRAPHY
BOOK: THE TALKING TELEPHONE
AUTHOR: STEVE SOKOLOWSKI
PUBLISHER: TAB BOOKS NOV. 1991
f:\12000 essays\sciences (985)\Computer\TELNET.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TELNET
PURPOSE OF THIS REPORT
Before gophers, hypertext, and sophisticated web browsers, telnet was the primary means by which computer users connected their machines with other computers around the world. Telnet is a plain ASCII terminal emulation protocol that is still used to access a variety of information sources, most notably libraries and local BBSs. This report will trace the history and usage of this still widely used protocol and explain where and how it manages to fit in today.
HISTORY AND FUTURE OF TELNET
"Telnet" is the accepted name of the Internet protocol and the command name on UNIX systems for a type of terminal emulation program which allows users to log into remote computer networks, whether the network being targeted for login is physically in the next room or halfway around the globe. A common program feature is the ability to emulate several diverse types of terminals--ANSI, TTY, vt52, and more. In the early days of networking some ten to fifteen years ago, the "internet" more or less consisted of telnet, FTP (file transfer protocol), crude email programs, and news reading. Telnet made library catalogs, online services, bulletin boards, databases and other network services available to casual computer users, although not with the friendly graphic user interfaces one sees today.
Each of the early Internet functions could be invoked from the UNIX prompt; however, each of them used a different client program with its own unique problems. Internet software has since greatly matured, with modern web browsers (e.g., Netscape and Internet Explorer) easily handling the WWW protocol (HTTP) along with the protocols for FTP, gopher, news, and email. Only the telnet protocol to this day requires the use of an external program.
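As a small illustration of what that external program does, the sketch below uses Python's standard telnetlib module (present in Python versions up to 3.12, after which it was removed) to open a plain-ASCII telnet session and read the login banner. The host name is a placeholder, not a real catalog address.

import telnetlib

HOST = "library.example.org"    # placeholder address, not a real library catalog

# Open a telnet session on the standard port, read up to the login prompt, and close.
with telnetlib.Telnet(HOST, 23, timeout=10) as session:
    banner = session.read_until(b"login:", timeout=5)
    print(banner.decode("ascii", errors="replace"))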
Due to problems with printing and saving, and the primitive look and feel of telnet connections, a movement is underway to transform information resources from telnet-accessible sites into full-fledged web sites. However, it is estimated that it will still take several years before quality web interfaces exist for all of the resources currently available only via telnet. Therefore, knowing the underlying command structure of terminal emulation programs like telnet is likely to remain necessary for the networking professional for some time to come.
ADVANTAGES AND DISADVANTAGES OF TELNET
The chief advantage of the telnet protocol today lies in the fact that many services and most library catalogs on the Internet remain accessible only via a telnet connection. Since telnet is a terminal application, many see it as a mere holdover from the days of mainframe computers and minicomputers, although the recent interest in $500 Internet terminals may foretell a resurgence in this style of computing. Disadvantages include the aforementioned problems telnet tends to have with printing and saving files, and its primitive look and feel when compared to more modern web browsers.
OTHER APPROACHES
The functionality of the telnet protocol may be compared with the UNIX "rlogin" command, an older remote command that still has some utility today. Rlogin is a protocol invoked by users with accounts on two different UNIX machines, allowing connections for certain specified users without a password. This requires setting up a ".rhosts" or "/etc/hosts.equiv" file and may involve some security risks, so caution is advised.
Using telnet instead of the rlogin command will accomplish the same results, but the use of the rlogin command will have the effect of saving keystrokes, particularly if it is used in conjunction with an alias.
CONCLUSION
Some argue that the future of the Internet lies in sophisticated web browsers like Netscape and Internet Explorer, or tools such as Gopher, that "save" end users from having to deal with the command line prompt and the peculiar details of commands like telnet. While that may be the case, the tendency remains for programmers to develop new software by building on the old. Therefore, knowing the underlying command structure of older protocols like telnet and rlogin is likely to remain an essential skill for the networking professional in the foreseeable future.
f:\12000 essays\sciences (985)\Computer\The Antitrust Case Against Microsoft.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Anti-Trust Case Against Microsoft
Since 1990, a battle has raged in United States courts between the United States
government and the Microsoft Corporation out of Redmond, Washington, headed by Bill
Gates. What is at stake is money. The federal government maintains that Microsoft's
monopolistic practices are harmful to United States citizens, creating higher prices and
potentially downgrading software quality, and should therefore be stopped, while
Microsoft and its supporters claim that they are not breaking any laws, and are just doing
good business.
Microsoft's antitrust problems began in the early months of 1990 (Check 1), when the Federal Trade Commission began investigating the company for possible violations of the Sherman and Clayton Antitrust Acts (Maldoom 1), which are designed to stop the formation of monopolies. The investigation continued for the next three years without resolution, until Novell, maker of DR-DOS, a competitor of Microsoft's MS-DOS, filed a complaint with the Competition Directorate of the European Commission in June of 1993. (Maldoom 1) This stalled the investigation even further, until finally, in August of 1993 (Check 1), the Federal Trade Commission decided to hand the case over to the Department of Justice. The Department of Justice moved quickly, with Anne K. Bingaman, head of the Antitrust Division of the DOJ, leading the way. (Check 1) The case was finally ended on July 15, 1994, with Microsoft signing a consent settlement. (Check 1)
The settlement focused on Microsoft's selling practices with computer manufacturers. Up until then, Microsoft would sell MS-DOS and its other operating systems to original equipment manufacturers (OEMs) at a 60% discount if the OEM agreed to pay a royalty to Microsoft for every single computer it sold (Check 2), regardless of whether it had a Microsoft operating system installed on it or not. After the settlement, Microsoft would be forced to sell its operating systems according to the number of computers shipped with a Microsoft operating system installed, and not for computers that ran other operating systems. (Check 2)
Another practice that the Justice Department accused Microsoft of was specifying a minimum number of operating systems that the retailer had to buy, thus eliminating any chance for another operating-system vendor to get its system installed until the retailer had sold all of the Microsoft operating systems it had purchased. (Maldoom 2)
In addition to specifying a minimum number of operating systems that a vendor had to buy, Microsoft would also sign contracts with vendors for long periods of time, such as two or three years. For a new operating system to gain popularity, it would have to do so quickly, in order to show potential buyers that it was worth something. By signing long-term contracts, Microsoft eliminated the chance for a new operating system to quickly gain the popularity it needed. (Maldoom 2)
Probably the second most controversial issue, besides the per processor agreement,
was Microsoft's practice of tying. Tying was a practice in which Microsoft would use their
leverage in one market area, such as graphical user interfaces, to gain leverage in another
market, such as operating systems, where they may have competition.(Maldoom 2) In the
preceding example, Microsoft would use their graphical user interface, Windows, to sell
their operating system, DOS, by offering discounts to manufacturers that purchased both
MS-DOS and Windows, and threatening to not sell Windows to companies who did not
also purchase DOS.
In the end, Microsoft decided to suck it up and sign the settlement agreement. In
signing the agreement, Microsoft did not actually have to admit to any of the alleged
charges, but were able to escape any type of formal punishment such as fines and the like.
The settlement that Microsoft agreed to prohibits it, for the next six and a half years, from:
-Charging for its operating system on the basis of computers shipped rather than on copies of MS-DOS shipped;
-Imposing minimum quantity commitments on manufacturers;
-Signing contracts for terms greater than one year;
-Tying the sale of MS-DOS to the sale of other Microsoft products. (Maldoom 1)
Although these penalties look to put an end to all of Microsoft's evil practices, some
people think that they are not harsh enough and that Microsoft should have been split up
to put a stop to any chance of them forming a true monopoly of the operating system
market and of the entire software market.
On one side of the issue, there are the people who feel that Microsoft should be
left alone, at least for the time being. I am one of these people, feeling that Microsoft does
more good than bad, thus not necessitating their breakup. I feel this way for many reasons,
and until Microsoft does something terribly wrong or illegal, my opinion will stay this way.
First and foremost, Microsoft sets standards for the rest of the industry to follow.
Jesse Berst, editorial director of Windows Watcher newsletter out of Redmond,
Washington, and the executive director of the Windows Solutions Conference, says it best
with this statement: "To use a railroad analogy, Microsoft builds the tracks on which the
rest of the industry ships its products." ("Why Microsoft (Mostly) Shouldn't Be Stopped."
4) With Microsoft creating the standards for the rest of the computer industry, they are
able to create better standards and build them much faster than if an outside organization
or committee were to create them. With these standards set, other companies are able to
create their applications and other products that much faster, and better, and thus the
customers receive that much better of a product.
Take for instance the current effort to develop the Digital Video Disc (DVD)
standard. DVDs are compact discs capable of storing 4,900 megabytes of information, as opposed to the 650 megabytes that can be stored on a CD-ROM disc now.
For this reason, DVD's have enormous possibilities in both the computer industry and in
the movie industry. For about the last year, companies such as Sony, Mitsubishi, and other
prominent electronics manufacturers have been trying to decide on a set of standards for
the DVD format. Unfortunately, these standards meetings have gone nowhere, and
subsequently, many of the companies have broken off in different directions, trying to
develop their own standards. In the end, there won't be one definite standard but instead many standards, all of them very different from one another. Consumers will be forced to decide which standard to choose, and if they pick the wrong one, they could be stuck down the road with a DVD player that is worthless. Had only one company set the standard, much as Microsoft has in the software business, there wouldn't be this confusion, and consumers could sit back and relax, knowing that the DVD format is secure and won't be changed.
Another conclusion that many anti-Microsoft people around the world jump to is that the moment a company such as Microsoft becomes very successful, there must be something wrong; it must be doing something illegal or immoral to have become this immense. This is not the case.
Contrary to popular belief, Microsoft has not gained its enormous popularity through
monopolistic and illegal measures, but instead through superior products. I feel that
people do have brains, and therefore have the capacity to make rational decisions based on
what they think is right. If people didn't like the Microsoft operating systems, there are about a hundred other choices of operating system, any of which could replace Microsoft if people wanted it to. But they don't; for the most part, people want Microsoft operating systems. For this reason, I don't accept the claim that Microsoft has gained its popularity through illegal measures. Microsoft simply created products that people liked, and people bought them.
On the other side of the issue, are the people who believe that Microsoft is indeed
operating in a monopolistic manner and therefore, the government should intervene and
split Microsoft up. Those who believe Microsoft should indeed be split up fall into two groups. One believes it should be split into two separate companies: one dealing with operating systems and the other dealing strictly with applications. The other group believes the government should split Microsoft into three divisions: one company to create operating systems, one company to create office applications, and one company to create applications for the home. All of these people agree that Microsoft should be split up, any way possible.
The first thing that proponents of splitting Microsoft up argue is that although Microsoft has created all kinds of standards for the computer software industry, in today's world we don't necessarily need standards. Competing technologies can coexist in today's
society, without the need for standards set by an external body or by a lone company such
as Microsoft. A good analogy for this position is given in the paper, "A Case Against
Microsoft: Myth Number 4." In this article, the author states that people who think we need such standards give the example of the home video cassette industry of the late
1970's. He says that these people point out that in the battle between the VHS and Beta
video formats, VHS won not because it was a superior product, but because it was more
successfully marketed. He then goes on to point out that buying an operating system for a
computer is nothing at all like purchasing a VCR, because the operating system of a
computer defines that computer's personality, whereas a VCR's only function is to play
movies, and both VHS and Beta do the job equally.
Also, with the development of camcorders, there has been the introduction of many new videotape formats that are all in use at once. The VHS-C, S-VHS, and 8mm formats all coexist in the camcorder market, showing that maybe in
our society today, we are not in need of one standard. Maybe we can get along just as well
with more than one standard. Along the same lines, there are quite a few other industries
that can get along without one standard. Take for instance the automobile industry. If you
accepted the idea that one standard was best for everyone involved, then you would never
be tempted to purchase a BMW, Lexus, Infiniti, Saab or Porsche automobile, due to the
fact that these cars all have less than one percent market share in the automobile industry
and therefore will never be standards.
Probably the biggest proponent of government intervention into the Microsoft
issue is Netscape Communications, based out of Mountain View, California. Netscape has
filed lawsuits accusing Microsoft of tying again.("Netscape's Complaint against MicroSoft." 2) This time, Microsoft is bundling its World Wide Web browser, Internet Explorer 3.0, into its operating system, Windows 95. Netscape is the maker of Netscape Navigator, currently the most widely used Internet browser on the market, which is now facing fierce competition from Microsoft's Internet Explorer. Netscape says that in
addition to bundling the browser, Microsoft was offering Windows at a discount to
original equipment manufacturers (OEM's),("Netscape's Complaint against MicroSoft." 2)
to feature Internet Explorer on the desktop of the computers that they shipped, thus
eliminating any competition for space on the desktop by rival companies such as Netscape.
If the OEM wants to give the consumer a fair and even choice of browsers by placing
competitors' browser icons in a comparable place on the desktop, Netscape has been
informed that the OEM must pay $3 more for Windows 95 than an OEM that takes the
Windows bundle as is and agrees to make the competitors' browsers far less accessible and
useful to customers.("Netscape's Complaint against MicroSoft." 2) Another accusation
that Netscape is making against Microsoft is that they are doing the same type of things
with the large internet service providers of the nation. They are offering the large internet
providers of the nation, such as Netcom and AT&T, space on the Windows 95 desktop, in
return for the internet provider's consent that they will not offer Netscape Navigator, or
any other competing internet software to their customers.("Netscape's Complaint against
MicroSoft." 3)
Netscape is becoming ever more concerned with Microsoft's practices, because for now they are going untouched by the government, and it looks as if it will stay that way for quite some time. Netscape is very much worried as it watches the number of users switching to Microsoft's browser grow and the number of Navigator users slip.
Besides all of the accusations of monopolistic actions that Netscape has leveled at them, Microsoft does seem to have one advantage when it comes to the browser wars. Their new browser, version 3.0, matches Netscape's feature for feature, with one added plus: it is free, and Microsoft says it always will be. So is their Internet server, Internet Information Server, whereas Netscape charges $50 and $1,500 for its browser and its web server, respectively.("Netscape's Complaint against MicroSoft." 3)
With all the information that has been presented for both sides of the issue, you are
probably left in a daze, not knowing what to think. Is Microsoft good? Or is Microsoft
bad? Well, the answer is a little bit of both. Even though the Justice Department found
that Microsoft might be practicing some techniques that are less than ethical, they did not
find that Microsoft was breaking any anti-trust laws, nor did Microsoft actually admit to
the accusations when it signed the agreement. If anything, signing the agreement was more of an apology than a full-fledged admission of guilt. Other people might disagree with me, and there may be plenty of allegations floating around from different companies, but the fact of the matter is plain and simple: Microsoft has not been formally charged and found guilty of any illegal practice pertaining to being a monopoly.
I believe that the government should stay out of the affairs of the economy rather than get tangled up in a mess and end up deadlocked, as the FTC did in 1990. And even if the government did get involved, given the extremely fast-paced nature of the computer industry and the extremely slow nature of government, there might not be any resolution for quite a while.
f:\12000 essays\sciences (985)\Computer\The basics of a hard drive.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I'm sure by now you have used a computer, be it to play games or to write a paper. But do you know how a computer works and runs all the programs you want it to? Well, if not, I will tell you. To begin with, I will explain a little about the history of computers. About 50 years ago, or maybe a little longer, someone came up with the thought that all the boring stuff like math could be automated so humans would not have to do it all; hence the computer (as to exactly who, I could not tell you). That person then began to work with his idea and figured out that if he could turn a machine on and off at a specified time, for a specified time, he could in a way alter what it could do. To turn it on and off he came up with a very interesting method: he used a sheet that looked almost like a scantron sheet, but with holes, and those holes were used to turn it on and off. The holes represented 1s and the no-holes 0s; the 1s turned it on and the 0s turned it off. With this knowledge he began to make little programs that could solve math problems. I guess he must have gotten bored with the math or something, because he came up with a way to let him play tic-tac-toe against the computer, which, by the way, was the first game ever to be created on a computer. Now there is one more thing you have to know about this computer: it was half the size of West High School's gym, and it was thought that when it became economical for people to own their own computer, it would still fill a decent-sized room. Could you imagine a computer filling up your entire living room? Where would you put your TV? But with the invention of keyboards and ever-smaller electronics, the size of the computer has been reduced enormously; every year they keep getting smaller and smaller, and it is estimated that nearly 85 to 90 percent of American homes now have at least one computer.
Now that I have bored you to death with the history of computers, here's the fun stuff.
Programs that let you play games and surf the net aren't just ideas put in a nifty little box and sold. They are ideas put on paper and then translated into a really, really huge math problem that the computer can understand (after all, the computer was invented to do math problems) by people called programmers. From there the computer further translates the math problems into 1s and 0s, which in turn translate into the image you see on the computer screen. And all of this is stored on a little thing called a hard drive. Now before I go too far into depth on this topic, imagine a city block with exactly 1000 houses on it, where every house can only store so much: when one house fills up, the house next to the last one starts to fill, and, being the nice people they are, they let the computer pull anything it needs out of the houses to use and, when it's done with the stuff, put it back in the same house. The process described above is roughly what makes up a hard drive.
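The "houses" picture corresponds roughly to the fixed-size blocks (sectors) on a real disk. Here is a minimal toy model in Python, written for this report rather than taken from any actual file system, showing data spilling from one full block into the next and being read back out of the same blocks; the block size and block count are made-up.

# Toy disk: 1000 "houses" (blocks), each holding at most 16 bytes.
BLOCK_SIZE = 16
disk = [b""] * 1000

def write_file(start_block, data):
    """Fill blocks one after another; return the list of blocks used."""
    used = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = start_block + offset // BLOCK_SIZE
        disk[block] = data[offset:offset + BLOCK_SIZE]
        used.append(block)
    return used

def read_file(blocks):
    """Pull the pieces back out of the same houses and join them together."""
    return b"".join(disk[b] for b in blocks)

blocks = write_file(0, b"A program is just a long string of 1s and 0s.")
print(blocks)              # [0, 1, 2]
print(read_file(blocks))   # the original data, reassembled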
f:\12000 essays\sciences (985)\Computer\The Central Processing Unit.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Central Processing Unit
Microprocessors, also called central processing units (CPUs), are frequently described as the "brains" of a
computer, because they act as the central control for the processing of data in personal computers (PCs) and
other computers. Chipsets perform logic functions in computers based on Intel processors. Motherboards combine
Intel microprocessors and chipsets to form the basic subsystem of a PC. Because the processor is involved in every one of
your computer's functions, it takes a fast processor to make a fast PC. These processors are all made of transistors.
The first transistor was created in 1947 by a team of scientists at Bell Laboratories in New Jersey. Ever since 1947,
transistors have shrunk dramatically in size, enabling more and more of them to be placed on each single chip.
The transistor was not the only thing that had to be developed before a true CPU could be produced. There also had to be some type of surface on which to assemble the transistors. The first chip made of semiconductive material, or silicon, was invented in 1958 by Jack Kilby of Texas Instruments. Now we have the major elements needed to produce a CPU. In 1968 a company by the name of Intel was formed, and it began to produce CPUs shortly thereafter. Gordon Moore, one of the founders of Intel, predicted that the number of transistors placed on each CPU would double every 18 months or so. This sounds almost impossible, but it has been a very accurate estimate of the evolution of CPUs. Intel introduced its first processor, the 4004, in November of 1971. This first processor had a clock speed
of 108 kilohertz and 2,300 transistors. It was used mainly for simple arithmetic manipulation such as in a calculator.
Ever since this first processor was introduced, the market has soared to unbelievable highs. The first processor common in personal computers was the 8088. This processor was introduced in June of 1978 and could be purchased in three different clock speeds, starting at 5 MHz and going up to 10 MHz; this CPU had 29,000 transistors. Then came the 80286 and 80386 processors. The 386 was the first processor to be introduced in DX, SX, and SL versions. Next came the 80486 processors, of which there were even more choices. The first 486 processor had 1,200,000 transistors and the latest have 1.4 million. Their clock speeds varied anywhere from 16 MHz on the first ones to 100 MHz on the most recent 486 processors, some of which are still in use in homes all around the country. Next came the Pentium processor in March 1993, running at clock speeds of 60 and 66 MHz. These first Pentium processors had 3.1 million transistors and a 32-bit data path. Pentium processors now range anywhere from 90 MHz to 200 MHz and are the most widely used processors today. Intel is currently producing two new Pentium processors with MMX technology; these two processors, running at 166 and 200 MHz, are made to accelerate graphics and multimedia software packages. Currently the newest processor to be introduced is a 400 MHz processor, also made by Intel. This new processor illustrates the performance potential of the new P6 architecture. It contains 7.5 million transistors and also includes the new MMX technology.
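Moore's prediction is easy to check against the transistor counts quoted above. The short Python sketch below was added for illustration and is not part of the original essay; it assumes the newest 7.5-million-transistor chip dates from roughly 1997, since the essay gives no year for it.

# Rough check of the "doubling every 18 months or so" prediction against the
# counts quoted above: 2,300 (1971), 29,000 (1978), 3.1 million (1993).
START_YEAR = 1971
START_TRANSISTORS = 2300    # Intel 4004

def projected(year, months_per_doubling):
    doublings = (year - START_YEAR) * 12.0 / months_per_doubling
    return START_TRANSISTORS * 2 ** doublings

for year in (1978, 1993, 1997):
    print(year,
          "18-month rule:", format(projected(year, 18), ",.0f"),
          " 24-month rule:", format(projected(year, 24), ",.0f"))
# The 24-month column stays within a small factor of the quoted counts, while a
# strict 18-month doubling overshoots them; hence the "or so" in the prediction.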
f:\12000 essays\sciences (985)\Computer\The Changing Role of the Database Administrator.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
March 1996
Technology Changes Role of Database Administrator
The database administrator (DBA) is responsible for managing and coordinating all database activities. The DBA's job description includes database design, user coordination, backup, recovery, overall performance, and database security. The database administrator plays a crucial role in managing data for the employer. In the past the DBA job has required sharp technical skills along with management ability. (Shelly, Cashman, Waggoner 1992). However, the arrival on the scene of the relational database along with the rapidly changing technology has modified the database administrator's role. This has required organizations to vary the way of handling database management.
(Mullins 1995)
Traditional database design and data access were complicated. The database administrator's job was to oversee any and all database-oriented tasks. This included database design and implementation, installation, upgrades, SQL analysis, and advice for application developers. The DBA was also responsible for backup and recovery, which required many complex utility programs that run in a specified order. This was a time-consuming, energy-draining task. (Fosdick 1995)
Databases are currently in the process of integration. Standardizing data, once done predominately by large corporations, is now filtering down to medium-size and small companies. The meshing of the old and new database causes administrators to maintain two or three database products on a single network. (Wong 1995)
Relational database management systems incorporate complex features and components to help with logic procedures. This requires organizations to expand the traditional approach to database management and administration. The modern database management systems not only share data, they implement the sharing of common data elements and code elements. (Mullins 1995)
Currently, the more sought-after relational database products are incorporating more and more complex features and components to simplify procedural logic. Due to the complexity of today's relational databases, corporations are changing the established way of dealing with database management personnel. Traditionally, as new features were added to the database, more and more responsibility fell on the DBA. With the emergence of the relational database management system (RDBMS), we are now beginning to see a change in the database administrator's role. (Mullins 1995)
The design of data access routines in relational database demands extra participation from programmers. The database administrator simply checks the system's optimization choice, because technology is responsible for building access paths to the data. Program design and standard query language (SQL) tools have become essential requirements for the database administrator to do this job. However, this technology requires additional supervision and many DBAs are not competent in SQL analysis and performance monitoring. The database administrator had to learn to master the skills of application logic and programming techniques. (Mullins 1995)
The database administrator's job description and responsibilities have changed with technology. The DBA is greatly concerned with database quality, maintenance, and availability. If the relational database fails to perform, the database administrator will be held accountable for the failure.
The role of the database administrator is expanding to include too many responsibilities for a single person. This has led to the DBA's job being split into two separate titles: a traditional DBA along with a procedural DBA.
The traditional database administrator is responsible for organizing and managing data objects. However, with new technology, the DBA is not always responsible for debugging, utilities, or programming in C, COBOL, or SQL. (Mullins 1995) These tasks go to object-builder programming personnel who are familiar with object-oriented programming languages. When the database administrator is not qualified in SQL, the job is referred to object builders well versed in C, COBOL, and SQL. (Sipolt 1995) The traditional database administrator's strength is in creating the physical design of the database. The procedural database administrator is an expert in accessing data. Procedural DBAs are responsible for procedural logic support, application code reviews, access path review and analysis, SQL rewrites, debugging, and analysis to assure optimal execution. (Mullins 1995)
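To make "access path review and analysis" concrete, here is a small hedged sketch in Python using the sqlite3 module that ships with the language; SQLite is used only because it is readily available, not because it is the RDBMS any of the cited authors discuss, and the table and index names are invented for the example.

import sqlite3

# Toy access-path review: check whether the optimizer will use an index or scan the table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, dept TEXT, name TEXT)")
db.execute("CREATE INDEX idx_employee_dept ON employee (dept)")

query = "SELECT name FROM employee WHERE dept = ?"
for row in db.execute("EXPLAIN QUERY PLAN " + query, ("SALES",)):
    print(row)    # the plan shows a search using idx_employee_dept rather than a full scan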
Along with the changing job description, administrators are facing increased demands from the corporations for which they work. Database administrators are responsible for staff cost control, hardware, software, and are becoming increasingly responsible for the work quality and response time of their staff. (Riggsbee 1995)
The job modifications are not the only change in this industry. Database administrators received a substantial increase in their wages in 1995. The average earnings for a DBA are now $52,572 according to the 1995 survey source. However, salaries differ according to the specific region of the country in which one resides. The mid-level database administrator in San Francisco earns $55,000 to $65,000, substantially more than our survey states. However, Salt Lake City database administrator's salary fell between $30,000 to $35,000. Another area of salaries on the rise is the health care profession. Previously lower end on the pay scale, hospital pay is on the rise and currently mid-scale in the market. (Mael 1995)
Companies no longer feel responsible for additional training or long-term retention of an employee. The trend is currently opting for a new employee, rather than hiring from within the company. Companies are willing to compensate new blood for their knowledge, rather than invest time and effort in training. This cold hard fact is true from the top management down to data entry. Therefore, it is vital to individual database personnel to make sure they are receiving the proper training to prepare them for our rapidly changing technology world. (Mael 1995)
The database administrator's role has become ambiguous. Therefore, the job description has been separated into two fields. The traditional database administrator is responsible for managing and organizing data. He is no longer responsible for programming in C, COBOL, or SQL. Traditional database administration personnel create the physical data design. The task of the procedural database administrator encompasses logic support, coding review, and programming in SQL, C, or COBOL. The procedural database administrator's expertise is in data access.
Our world of rapidly changing technology has placed greater demands on database administration personnel. The relational database has demanded splitting database administration into two separate specialties. This change should result in the traditional database administrator maintaining a managerial capacity, with responsibilities in the physical design of the database. The procedural database administrator works in the more technical aspects of building the relational database; his expertise in procedural logic support and in data access path review and analysis supports superior performance of the relational database.
Works Cited
Fosdick, Howard. "Managing Distributed Database Servers", Database Programming & Design, Dec. 1995, p. 533-537.
Mael, Susan. "Want to Earn Big Money? West or Become CIO", Datamation, Oct. 1, 1995, p. 45-49.
Mullins, Craig S. "The Procedural DBA", Database Programming & Design, Dec. 1995, p. 40-47.
Riggsbee, Max. "Database Support: Can It Be Measured?", Database Programming & Design, July 1995, p. 32-37.
Shelly, Cashman, Waggoner. Complete Computer Concepts and Programming in Microsoft Basic. Massachusetts: Boyd & Frazer Publishing Company, 1992.
Sipolt, Michael J. "An Object Lesson in Management (Excerpt from 'The Object-Oriented Enterprise')", Datamation, July 1, 1995, p. 51-54.
Wong, William. "Database Integration", Network VAR, Nov. 1995, p. 31-37.
f:\12000 essays\sciences (985)\Computer\The Communications Decency Act.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The U.S. Government should not attempt to place restrictions on the internet. The Internet
does not belong to the United States and it is not our responsibility to save the world, so why
are we attempting to regulate something that belongs to the world? The
Telecommunications Reform Act has done exactly that, put regulations on the Internet.
Edward Cavazos quotes William Gibson as saying, "As described in Neuromancer, Cyberspace
was a consensual hallucination that felt and looked like a physical space but actually was a
computer-generated construct representing abstract data." (1) When Gibson coined that
phrase he had no idea that it would become the household word that it is today. "Cyberspace
now represents a vast array of computer systems accessible from remote physical locations."
(Cavazos 2)
The Internet has grown explosively over the last few years. "The Internet's growth since its
beginnings in 1981. At that time, the number of host systems was 213 machines. At the
time of this writing, twelve years later, the number has jumped to 1,313,000 systems
connecting directly to the Internet." (Cavazos 10)
"Privacy plays a unique role in American law." (Cavazos 13) Privacy is not explicitly
provided for in the Constitution, yet most of the Internet users remain anonymous. Cavazos
says, "Computers and digital communication technologies present a serious challenge to
legislators and judges who try to meet the demands of economic and social change while
protecting this most basic and fundamental personal freedom." Networks and the Internet
make it easy for anyone with the proper equipment to look at information based around the
world instantly and remain anonymous. "The right to conduct at least some forms of speech
activity anonymously has been upheld by the U.S. Supreme Court." (Cavazos 15) In
cyberspace it is extremely uncommon for someone to use their given name to conduct
themselves, but rather they use pseudonyms or "Handles". (Cavazos 14) Not only is it not
illegal to use handles on most systems, but the sysop (System Operator) does not have to
allow anyone access to his data files on who is the person behind the handle. Some sysops
make the information public, or give the option to the user, or don't collect the information at
all.
The Internet brings forth many new concerns regarding crime and computers. With movies
like Wargames, and more recently Hackers, becoming popular, computer crime is being
blown out of proportion. "The word Hacker conjures up a vivid image in the popular
media." (Cavazos 105) There are many types of computer crime that fall under the umbrella
of "Hacking". Cavazos says, "In 1986 Congress passed a comprehensive federal law
outlawing many of the activities commonly referred to as 'hacking.'" (107) Breaking into a
computer system without the proper access being given, traditional hacking, is illegal;
hacking to obtain financial information is illegal; hacking into any department or agency of
the United States is illegal; and passing passwords out with the intent for others to use them
to hack into a system without authorization is also illegal.
"One of the more troubling crimes committed in cyberspace is the illicit trafficking in credit
card numbers and other account information." (Cavazos 109) Many people on the Internet
use their credit cards to purchase things on-line; this is a dangerous practice because anyone with your card number can do the same thing with your card. Millions of dollars' worth of goods and services a year are stolen through credit card fraud. However illegal it is, many offenders are not caught. With the use of anonymous names and restricted access to providers' data on
users, it becomes harder to catch the criminals on-line.
The "[Wire Fraud Act] makes it illegal for anyone to use any wire, radio, or television
communication in interstate or foreign commerce to further a scheme to defraud people of
money or goods." (Cavazos 110) This is interpreted to include telephone communications,
therefore computer communication as well. There is much fraud on the Internet today, and
the fraud will continue until a feasible way to enforce the Wire Fraud Act comes about.
Cavazos continues, "unauthorized duplication, distribution, and use of someone else's
intellectual property is subject to civil and criminal penalties under the U.S. Copyright Act."
(111) This "intellectual property" is defined to include computer software. (Cavazos 111)
Software piracy is very widespread and rampant, and was even before the Internet became
popular.
The spread of Computer Viruses has been advanced by the popularity of the Internet. "A
virus program is the result of someone developing a mischievous program that replicates
itself, much like the living organism for which it is named." (Cavazos 114) Cyberspace
allows for the rapid transfer and downloading of software all over the world, and this includes viruses. If a file has been infected before you download it, you risk infecting your own system. If you then pass any software, in any medium, to any other user, you run the risk of spreading the virus, just as if you had taken in a person sick with the bubonic plague. "Whatever the
mechanism, there can be no doubt that virus software can be readily found in cyberspace."
(Cavazos 115)
The Electronic Communications Privacy Act was enacted to protect the rights of the on-line
users within the bounds of the United States. "Today the Electronic Communications
Privacy Act (ECPA) makes it illegal to intercept or disclose private communications and
provides victims of such conduct a right to sue anyone violating its mandate." (Cavazos 17)
There are exceptions to this law; if you are a party of the communication you can release it
to the public, your provider can use the intercepted communication in the normal course of
employment, your provider can intercept mail for the authorities if ordered by a court, if the
communication is public, and your provider can intercept communications to record the fact
of the communication or to protect you from abuse of the system. If you are not careful as a
criminal, then you will get caught, and the number of careful criminals is increasing.
Says Cavazos, "a person or entity providing an electronic communication service to the
public shall not knowingly divulge to anyone the contents of a communication while in
electronic storage on that service." (21) The sysop is not allowed to read your e-mail,
destroy your e-mail before you read it, nor is he allowed to make public your e-mail unless
that e-mail has already been made public.
"Many systems monitor every keystroke entered by a user. Such keystroke monitoring may
very well constitute an interception for the purposes of the ECPA." (18) If the U.S. Government is going to continue to place restrictions on the Internet, then soon we will have
to do away with free speech and communications. Says Kirsten Macdissi, "Ultimately,
control will probably have to come from the user or a provider close to the individual user..."
(1995, p.1). Monitoring individual users is still not the answer; to cut down on fraud and
other law violations a new system must be devised to monitor the Internet that does not
violate the right to privacy and does not prevent adults from having a right to free speech.
The Constitution reads, "Amendment 1. Congress shall make no law respecting an
establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of
speech, or of the press; or the right of the people peaceably to assemble, and to petition the
Government for a redress of grievances." (1776). If you read the Communications Decency
Act of the Telecommunications Reform Act, there are now seven words that cannot be used on the Internet. This is a direct abridgement of the right to choose what we say. Yet the providers, who have the right to edit what is submitted, choose to let many things slide. The responsibility
should lie with the provider, not the U.S. Government. Says Macdissi, "As an access
provider, Mr. Dale Botkin can see who is connected, but not what they are doing." (1995).
Yet almost all providers keep a running record of what files are communicated through their
servers.
Macdissi quoted Mr. Dale Botkin, president of Probe Technology, an Internet access
provider, as saying, "'There is a grass roots organization called Safe Surf,' Botkin said.
'What they've done is come up with a way for people putting up information on the internet
to flag it as okay or not okay for kids.'" (1995). The system is idiot-proof. If the information provider flags his web page as appropriate for children, the Safe Surf program will connect to the site. If the information provider chooses not to flag his web page, or flags it as inappropriate for children, the Safe Surf program will not connect to the site. If this, or
something similar, were mandated for the Internet the Communications Decency Act would
be unnecessary. Says Eric Stone, an Internet user, "[The C.D.A.] attempts to place more
restrictive constraints on the conversation in cyberspace than currently exist in the Senate
Cafeteria..." (1996).
The liability is still with the end-user: the American, or foreigner, who sits in front of the computer every day to conduct business, chat with friends, or learn about something he didn't know before. For us to take liability away from the end-user, we must lay the liability
on either the providers or on the system operators. Cavazos says, "the Constitution only
provides this protection where the government is infringing on your rights." (1994).
When the providers and system operators censor the users it is called editorial discretion.
When the Government does it, it is infringement of privacy. So why are we still trying to let
the Government into our personal and private lives? The support of the C.D.A. among the unknowledgeable and the conservative right makes it a very popular law. The left and the knowledgeable are in the minority, so our power to change this law is not great. The Government also won't listen to us, because we pose less of a threat to re-election than the majority does. This law will have to be recognized for what it is, a blatant violation of the First Amendment right to free speech, by the average citizen before the C.D.A. will be changed.
Works Cited
Cavazos, E. (1994) Cyberspace and the law: Your rights and duties in the on-line world.
Boston: MIT Press
Macdissi, K. (1995) Enforcement is the problem with regulation of the Internet.
Midlands Business Journal
Stone, E. (1996) A Cyberspace independence declaration. Unpublished Essay,
Heretic@csulb.com (E-Mail address)
f:\12000 essays\sciences (985)\Computer\the computer modem.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
First of all, I would like to start with an introduction. I chose this topic because I thought it would be interesting to learn about how a modem works in a computer. With a modem we are able to access the Internet and BBSs, or Bulletin Board Systems.
The modem is one of the smartest computer hardware tools ever created. Modem is an abbreviation of Modulator-Demodulator, and the idea is fairly simple to explain: through the telephone lines we are able to send messages between a single computer or a group of computers. The originating computer sends a coded message to the host computer, which decodes it, and with that we have the power to access the Internet, talk to other people through terminal programs, and retrieve files from other computers. The first patented computer modem was made by Hayes in the early eighties, and modems developed rapidly from there: the first modem speed was 300 baud, then 600 baud, then 1200, and so on. The fastest modem made today is a 56k model, which is very fast, though not as fast as ISDN or The Wave (offered through Rogers cable), or as advanced as a satellite modem. Most people now have 14.4 or 28.8 modems (the figures are in bits per second, though "baud" is often used loosely as slang for this rate); the reason for the popularity of the 14.4 and 28.8 is that they are cheap, fairly recent, and haven't gone out of date yet. There are two types of modem, internal and external: an internal modem plugs into a 16-bit slot inside your computer, while an external modem connects through either a serial (mouse) port or a parallel (printer) port. Most people like external modems because they don't take up extra space inside the computer (according to PC Computing). Prices for modems range from $100 (28.8k) to $500 (software-upgradable 56k). Facsimile machines also have a form of modem in them, usually a 2400-baud modem, to decode the message. So imagine a world without the modem for a second: no fax, no Internet, no direct computer communications whatsoever.
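To put those speeds in perspective, here is a small back-of-the-envelope calculation in Python, added for illustration. It assumes roughly ten line bits per byte of data (a common rule of thumb once start and stop bits are counted) and ignores compression.

# Approximate time to move a one-megabyte file at various modem speeds.
FILE_BYTES = 1_000_000
LINE_BITS_PER_BYTE = 10    # 8 data bits plus start/stop framing (rule of thumb)

for name, bits_per_second in [("300 baud", 300), ("14.4k", 14_400),
                              ("28.8k", 28_800), ("56k", 56_000)]:
    seconds = FILE_BYTES * LINE_BITS_PER_BYTE / bits_per_second
    print(name, "-> about", round(seconds / 60, 1), "minutes")
# Over nine hours at 300 baud shrinks to roughly three minutes at 56k.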
The three major modem manufacturers are Hayes (maker of the original modem), US Robotics, and Microsoft.
In conclusion, life today would be very hard without modems. Some businesses would cease to exist due to poor communications between offices, and without modems we wouldn't have videoconferencing, e-mail, and the other tools we have come to rely on over the past 15 years, not to mention the phone companies' business from all of those extra phone lines put in because normal "voice" lines are tied up due to modem use. I believe that the modem is a very important and interesting tool of communication, and the Internet is wonderful for knowledge, due to the fact that it is where I got almost all of my information for this report. Thank you for reading my independent study, and I hope you learned something from it.
f:\12000 essays\sciences (985)\Computer\The Computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers
Computers are electronic devices that can receive a set of instructions, or program, and then carry out that program by performing calculations on numerical data or by compiling and correlating other forms of information. The world before this technology could scarcely have imagined the making of computers. Different types and sizes of computers find uses throughout our world in the handling of data, from secret governmental files and banking transactions to private household accounts. Computers have opened up a new world in manufacturing through the development of automation, and they have made modern communication systems possible. They are great tools in almost every field of research and applied technology, from constructing models of the universe to producing tomorrow's weather reports, and their use has in itself opened up new areas of development. Database services and computer networks make available a great variety of information sources. The same new designs also raise questions about privacy and restricted information sources, and computer crime has become a very important risk that society must face if it is to enjoy the benefits of modern technology.
Two main types of computers are in use today, analog and digital, although the term computer is often used to mean only the digital type. Everything that a digital computer does is based on one operation: the ability to determine whether a switch, or gate, is open or closed. That is, the computer can recognize only two states in any of its microscopic circuits: on or off, high voltage or low voltage, or, in the case of numbers, 0 or 1. The speed at which the computer performs this simple act, however, is what makes it a marvel of modern technology. Computer speeds are measured in megahertz, or millions of cycles per second. A computer with a "clock speed" of 10 MHz, a fairly representative speed for a microcomputer, is capable of executing 10 million discrete operations each second. Business microcomputers can perform 15 to 40 million operations per second, and supercomputers used in research and defense applications attain speeds of billions of cycles per second. Digital computer speed and calculating power are further enhanced by the amount of data handled during each cycle. If a computer checks only one switch at a time, that switch can represent only two commands or numbers: ON would symbolize one operation or number, and OFF would symbolize another. By checking groups of switches linked as a unit, however, the computer increases the number of operations it can recognize at each cycle; a short illustration follows after the historical notes below.
The first adding machine, a precursor of the digital computer, was devised in 1642 by the French philosopher Blaise Pascal. This device employed a series of ten-toothed wheels, each tooth representing a digit from 0 to 9. The wheels were connected so that numbers could be added to each other by advancing the wheels by a correct number of teeth. In the 1670s the German philosopher and mathematician Gottfried Wilhelm von Leibniz improved on this machine by devising one that could also multiply. The French inventor Joseph Marie Jacquard, in designing an automatic loom, used thin, perforated wooden boards to control the weaving of complicated designs. Analog computers began to be built at the start of the 20th century. Early models calculated by means of rotating shafts and gears. Numerical approximations of equations too difficult to solve in any other way were evaluated with such machines.
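The point above about checking groups of switches as a unit can be made concrete with a couple of lines of Python (added for illustration): n on/off switches taken together can represent 2 to the power n distinct values, which is why wider data paths let a computer recognize more commands or larger numbers per cycle.

# Each switch (bit) doubles the number of distinct states a group can represent.
for bits in (1, 4, 8, 16, 32):
    print(bits, "switches ->", 2 ** bits, "possible values")
# 8 switches give 256 values, which is why a byte can hold any number from 0 to 255.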
During both world wars, mechanical and, later, electrical analog computing systems were used as torpedo course predictors in submarines and as bombsight controllers in aircraft. Another system was designed to predict spring floods in the Mississippi River Basin. In the 1940s, Howard Aiken, a Harvard University mathematician, created what is usually considered the first digital computer. This machine was constructed from mechanical adding machine parts. The instruction sequence to be used to solve a problem was fed into the machine on a roll of punched paper tape, rather than being stored in the computer. In 1945, however, a computer with program storage was built, based on the concepts of the Hungarian-American mathematician John von Neumann. The instructions were stored within a so-called memory, freeing the computer from the speed limitations of the paper tape reader during execution and permitting problems to be solved without rewiring the computer. The rapidly advancing field of electronics led to construction of the first general-purpose all-electronic computer in 1946 at the University of Pennsylvania by the American engineer John Presper Eckert, Jr. and the American physicist John William Mauchly. (Another American physicist, John Vincent Atanasoff, later successfully claimed that certain basic techniques he had developed were used in this computer.) Called ENIAC, for Electronic Numerical Integrator And Computer, the device contained 18,000 vacuum tubes and had a speed of several hundred multiplications per minute. Its program was wired into the processor and had to be manually altered.
The use of the transistor in computers in the late 1950s marked the advent of smaller, faster, and more versatile logical elements than were possible with vacuum-tube machines. Because transistors use much less power and have a much longer life, this development alone was responsible for the improved machines called second-generation computers. Components became smaller, as did intercomponent spacings, and the system became much less expensive to build. Different types of peripheral devices (disk drives, printers, communications networks, and so on) handle and store data differently from the way the computer handles and stores it. Internal operating systems, usually stored in ROM memory, were developed primarily to coordinate and translate data flows from dissimilar sources, such as disk drives or co-processors (processing chips that perform simultaneous but different operations from the central unit). An operating system is a master control program, permanently stored in memory, that interprets user commands requesting various kinds of services, such as display, print, or copy a data file; list all files in a directory; or execute a particular program. A program is a sequence of instructions that tells the hardware of a computer what operations to perform on data. Programs can be built into the hardware itself, or they may exist independently in a form known as software. In some specialized, or "dedicated," computers the operating instructions are embedded in their circuitry; common examples are the microcomputers found in calculators, wristwatches, automobile engines, and microwave ovens. A general-purpose computer, on the other hand, contains some built-in programs (in ROM) or instructions in a chip, but it depends on external programs to perform useful tasks. Once a computer has been programmed, it can do only as much or as little as the software controlling it at any given moment enables it to do. Software in widespread use includes a wide range of applications programs, that is, instructions to the computer on how to perform various tasks.
f:\12000 essays\sciences (985)\Computer\The Computer2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Computer
This report is about the impact that the personal computer has made on the community during the past 10 years. It includes detailed information about the personal computer and the way it has worked its way into many people's everyday lives. It includes information about the Internet and how it has turned from just a hobby into an obsession in many people's lives. It includes detailed information about its history, especially the time in which it was first developed. There is information about future possibilities for the computer, and about whether it could be the future or destroy it. There is a description of how it is developed and an in-depth look at how it works.
A personal computer is a machine that lets you do just about everything you could think of. You can do some basic word processing and spreadsheets as well as 'surf the Internet'. You can play the latest computer games by yourself as well as against someone from the other side of the world. It can store databases which could contain information kept by the police for easier record-keeping, or you could just use it for your own family history. The basic structure of a computer is a keyboard, a monitor, and a case which holds all the components that make a computer run, such as a hard drive, a motherboard, and a video card. There are many other additions you can make to this, such as a modem, a joystick, and a mouse.
The electronic computer was developed around the year 1945 by the Americans, partly to help decode enemy secret codes during the Second World War. At this time computers were huge and only used by governments, because they were as big as a room. This was because the main components they used were vacuum valves, which made the computer enormous. They also had no way to store anything in memory, so they couldn't really be classed as true computers. A practical way to store a file was introduced around the year 1954.
The computer did not have a big impact on the community until the mid-1980s, after Commodore released a range of computers including the Commodore 64 and another Commodore computer, the VIC-20, which was released in 1982. When Intel saw the Commodore 64's success it released its brand new 386 processor in 1985. Though the 386 was easily the better and faster processor, the Commodore 64 seemed to be the computer getting all the attention because of its lower price, so it appealed to a much wider group of people. The 386 was only in the price range of the very rich and of large organisations. The effect of the Commodore 64 was enormous because it seemed to turn people away from throwing away their money on arcade games such as Pac-Man and Pong when they could be playing them in the convenience of their own homes without leaving a dent in the change pocket. This marked the fall of the arcade and the rise of the computer. Arcade companies such as Namco were forced to make computer games from then on if they were to make any money.
The most significant event to help the rise of the home computer was the invention of the transistor. Before the transistor, information could only travel through a vacuum valve. Then the transistor came along and, because of its size, it reduced the size of a computer enormously. With the transistor being smaller they could fit more into a small space, and with more transistors there was more activity within them, which in turn made them faster. Another worthy event was in the year 1954, when the first writable disk was invented. It was a great achievement because instead of just being able to work out sums and display them, computers were able to store some of the current information for future reference. This way the computer didn't have to do so much work, and that made them even quicker at doing sums and cracking codes.
f:\12000 essays\sciences (985)\Computer\The Dependability of the Web.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Dependability of the Web
by
Nathan Redman
A new age is upon us - the computer age. More specifically, the internet. There's no doubt that the internet has changed how fast we communicate and get information, and the way we do it. The popularity of the net is rising at a considerable rate. Traffic, the number of people on the web, is four times what it was a year ago, and every thirty seconds someone new logs on to the net to experience for themselves what everyone is talking about. Even Bill Gates, founder of Microsoft, a company that really looks into the future in order to stay ahead of the competition, said the internet is one part of the computer business that he really underestimated. As one of the richest men in the world, he probably doesn't make too many mistakes. Nobody could have predicted that the internet would expand at this rate, and that's probably the reason why troubles are arising about the dependability of the web.
As usage soars (some estimate there will be over eighty million users by the end of 1997), could the internet overload? Even though no one predicted the popularity of the net, some are quick to foresee the downfall or doomsday of this fad - if you can call it a fad. The demand continues to rise and is now so great that technological improvements are continually needed to handle the burden created by all of the people using the net.
There are many things that can lighten the load that's been slowing down the internet. First, it needs much better organization, because with over seventy-five million web pages, and the number rising fast, being able to pinpoint the information you are trying to find is very difficult. It's like finding a needle in a haystack. When you use a search engine to find information on a certain topic, the search can come back with thousands upon thousands of pages or sites, sometimes over fifty thousand. Now you have the never-ending task of looking through each site to find what you want. Plus, with links on most pages, people can end up getting lost very quickly and not be able to find their way back. Search engines should develop what I call the filter-down effect. In the filter-down effect a broad topic is chosen and then sub-topics are chosen continuously until the list of sites has narrowed the search to respectable sites and the information needed (a small sketch of the idea follows below). Having better organization would keep some of the worthless sites around cyberspace from ending up first in the search engine's results.
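The filter-down effect described above can be sketched in a few lines of Python. This is only an illustration of the idea; the page data, the topic tags, and the filter_down helper are all made up for the example.

    # Hypothetical sketch of the "filter-down effect": start from a broad
    # topic, then repeatedly choose a sub-topic to narrow a large list of
    # pages down to a manageable one.  The data below is invented.

    pages = [
        {"title": "Hubble images",       "topics": ["science", "astronomy", "telescopes"]},
        {"title": "Backyard telescopes", "topics": ["science", "astronomy", "amateur"]},
        {"title": "Cell biology notes",  "topics": ["science", "biology"]},
        {"title": "Football scores",     "topics": ["sports"]},
    ]

    def filter_down(pages, chosen_topics):
        """Keep only the pages tagged with every topic chosen so far."""
        results = pages
        for topic in chosen_topics:
            results = [p for p in results if topic in p["topics"]]
        return results

    # A broad search, then two narrowing steps.
    print(len(filter_down(pages, ["science"])))                        # 3 pages
    print(len(filter_down(pages, ["science", "astronomy"])))           # 2 pages
    print([p["title"] for p in filter_down(pages, ["science", "astronomy", "amateur"])])

Each extra sub-topic cuts the result list down further, which is exactly the narrowing the paragraph proposes.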
A second way to lighten the load of the internet is by improving the time it takes to load a page. The speed of loading a page depends greatly on the type of equipment, software, how much memory you have, your connection speed, plus a lot of other factors, but web page designers should use more text and less graphics, animation, and video, because text takes a lot less time to load. According to an October 1996 survey by AT&T, sixty-seven percent of sites were "off the net" for at least one hour per day because of overload. The web wasn't designed to handle graphics, animation, audio, and video. It was first developed for E-mail (electronic mail) and transferring text files, but web page designers want their pages to look the best so that people, or in the case of businesses, potential customers, visit their site and are impressed by it, so they come back in the future and tell others about the site. After all, the best way to have people visit your web site is by word of mouth, because it is very hard for people to find the site unless they happen to know the exact address. Sometimes, though, popularity can kill a site. For instance, when the weather got bad in Minnesota one weekend, the National Weather Service web site got overloaded with people wanting to read and see the current weather forecasts. The result was the server going down from the overload, leaving nobody the ability to visit that site. With more businesses seeing the dollar signs that the web could produce, they compete for advertising on the web because it is much cheaper than advertising on television or in the newspaper, and it can reach people all the way around the world. Designers for these pages can't forget that surfers aren't willing to wait very long for pages to load, so simplifying pages can make the web a lot faster.
Another way to make things faster is to make sure that servers can handle the load applied to them. Internet providers want to make money just like all other businesses, so they try fitting as many customers onto one server as they can. Putting fewer people on each server would create faster service. Also, popular businesses or sites should have big enough capacities to handle the number of people that visit. Slow servers will lose a lot of business. Internet providers and businesses should look at future capacities and not just at current loads.
Alongside the talk of doomsday, more and more fears about logging into cyberspace are beginning to receive attention. As mentioned above, speed is a major concern. Beyond what is recommended, technological improvements need to be developed. For example, bigger pipelines (lines carrying computer data) like fiber optics and satellite transmission are receiving high ratings from people, but like all good things in life, putting bigger pipelines in the ground takes a lot of time and money. If the government or private industries are willing to lay the foundation for putting in faster lines, it will change the world just as the railroads did in the 1800s.
Another major fear of people on the superhighway of information is security. Hackers (people who get into data on computers they aren't supposed to) can break into a lot of private information on the information superhighway. In reality it isn't any different than credit cards and valuables being stolen in the real world. There are currently cybercops surfing the web looking for illegal happenings on the information superhighway. Patrolling the web is only one way to help put a stop to hackers. Encryption and better security software need to be developed, along with other computer technology, to help control hackers or cybercriminals.
There's no denying the fact that the internet is very powerful in today's world. It combines text, audio, animation, video and graphics all in one, and at the click of a button you can receive entertainment and news, communicate with people around the world, do your banking, reserve tickets, buy, sell and trade merchandise or collectibles, or even order dinner for the night. These are just a few things that can be accomplished on the net. People aren't only attracted to what the internet has to offer now but to what will be available in the near future. Some day computers will replace our television, radio, answering machine, and telephone. Technology is developing so rapidly that things not even imaginable will be developed to make our lives easier but more confusing. After all, no one predicted where the net is today and how fast it would develop.
f:\12000 essays\sciences (985)\Computer\The Enviroment is going to hell.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This is the litany: Our resources are running out. The air is bad, the water worse. The planet's species are dying off - more exactly, we're killing them - at the staggering rate of 100,000 per year, a figure that works out to almost 2,000 species per week, 300 per day, 10 per hour, another dead species every 6 minutes. We're trashing the planet, washing away the topsoil, paving over our farmlands, systematically deforesting our wildernesses, decimating the biota, and ultimately killing ourselves.
The world is getting progressively poorer, and it's all because of population, or more precisely, overpopulation. There's a finite store of resources on our pale blue dot, spaceship Earth, our small and fragile tiny planet, and we're fast approaching its ultimate carrying capacity. The limits to growth are finally upon us, and we're living on borrowed time. The laws of population growth are inexorable. Unless we act decisively, the final result is written in stone: mass poverty, famine, starvation, and death. Time is short, and we have to act now.
That's the standard and canonical litany. It's been drilled into our heads so often and so forcefully that to hear it yet once more is ... well, it's almost reassuring. It's comforting, oddly consoling - at least we're face to face with the enemies: consumption, population, mindless growth. And we know the solution: cut back, contract, make do with less. "Live simply so that others may simply live."
There's just one problem with The Litany, just one slight little wee imperfection: every item in that dim and dreary recitation, each and every last claim, is false. Incorrect. At variance with the truth. Not the way it is, folks.
f:\12000 essays\sciences (985)\Computer\The evelution of the microprossesor.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The evolution of the microprocessor.
The microprocessor has changed a lot over the years. As Michael W. Davidson notes (http://micro.magnet.fsu.edu/chipshot.html), microprocessor technology is progressing so rapidly that even experts in the field are having trouble keeping up with current advances. As more competition develops in this $150 billion a year business, the power and speed of the microprocessor are expanding at an almost explosive rate. The changes have been most evident over the last decade. The microprocessor has changed the way computers work by making them faster. The microprocessor is often called the brain of the CPU (the central processing unit), and without the microprocessor the computer is more or less useless. Motorola and Intel have made most of the microprocessors of the last decade. Over the years there has been a constant battle over cutting-edge technology. In the 80's Motorola won the battle, but now in the 90's it looks as if Intel has won the war.
The microprocessor 68000 is the original microprocessor in this line (Encarta 95). It was introduced by Motorola in 1979. The 68000 had two distinctive qualities: 24-bit physical addressing and a 16-bit data bus. The original Apple Macintosh, released in 1984, had an 8-MHz 68000 at its core. It was also found in the Macintosh Plus, the original Macintosh SE, the Apple LaserWriter IISC, and Hewlett-Packard's LaserJet printer family. The 68000 was very efficient for its time; for example, it could address 16 megabytes of memory, 16 times the memory of the Intel 8088, which was found in the IBM PC (the arithmetic is sketched below). Also, the 68000 has a linear addressing architecture, which was better than the 8088's segmented memory architecture because it made writing large applications more straightforward.
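The memory figures quoted above follow directly from the width of the addresses. As a quick illustrative check (not from the original text): 24 address bits select 2^24 distinct byte locations, while the 8088's 20-bit addressing reaches 2^20 bytes.

    # Illustrative arithmetic only: address bits determine addressable memory.
    addressable_68000 = 2 ** 24          # 16,777,216 bytes = 16 MB
    addressable_8088  = 2 ** 20          # 1,048,576 bytes  = 1 MB

    print(addressable_68000 // (1024 * 1024))      # 16 (megabytes)
    print(addressable_8088 // (1024 * 1024))       # 1  (megabyte)
    print(addressable_68000 // addressable_8088)   # 16 times as much memory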
The 68020 was introduced by Motorola in the mid-80's (Encarta 95). The 68020 is about two times as powerful as the 68000. The 68020 has 32-bit addressing and a 32-bit data bus and is available in various speeds, such as 16 MHz, 20 MHz, 25 MHz, and 33 MHz. The microprocessor 68020 is found in the original Macintosh II and in the LaserWriter IINT, both of which are from Apple.
The 68030 microprocessor was introduced by Motorola about a year after the 68020 was released (Encarta 95). The 68030 has 32-bit addressing and a 32-bit data bus just like its predecessor, but it has paged memory management built into it, removing the need for additional chips to provide that function. A 16-MHz version was used in the Macintosh IIx, IIcx, and SE/30. A 25-MHz model was used in the Mac IIci and the NeXT computer. The 68030 is produced in various versions, including 20-MHz, 33-MHz, 40-MHz, and 50-MHz parts.
The microprocessor 68040 was introduced by Motorola (Encarta 95). The 68040 has 32-bit addressing and a 32-bit data bus just like the previous two microprocessors. But unlike them, this one runs at 25 MHz and includes a built-in floating-point unit and memory management units, which include 4-KB instruction and data caches. This eliminates the need for additional chips to provide these functions. Also, the 68040 is capable of parallel instruction execution by means of multiple independent instruction pipelines, multiple internal buses, and separate caches for both data and instructions.
The microprocessor 68881 was made by Motorola for use with both the 68000 and the 68020 (Encarta 95). Math coprocessors, if supported by the application software, speed up any function that is math-based. The 68881 does this by providing an additional set of instructions for high-performance floating-point arithmetic, a set of floating-point data registers, and 22 built-in constants including pi and powers of 10. The 68881 conforms to the ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. When making the Macintosh II, Apple noticed that when they added a 68881, the performance of the interface, and thus the apparent performance of the machine, improved dramatically. Apple then decided to add it as standard equipment.
The microprocessor 80286, also called the 286, was introduced by Intel in 1982 (Encarta 95). The 286 was included in the IBM PC/AT and compatible computers in 1984. The 286 has 16-bit registers, transfers information over the data bus 16 bits at a time, and uses 24 bits to address memory locations. The 286 is able to operate in two modes: real mode (which is compatible with MS-DOS and has the same limits as the 8086 and 8088 chips) and protected mode (which increases the microprocessor's functionality). Real mode limits the amount of memory the microprocessor can address to one megabyte; in protected mode, however, the addressing reach is increased and the chip is capable of accessing up to 16 megabytes of memory directly. Also, a 286 microprocessor in protected mode protects the operating system from misbehaved applications that could normally halt (or "crash") a system with a non-protected microprocessor such as the 80286 in real mode or just the plain old 8088.
The microprocessor 80386DX, also called the 386 or the 386DX, was introduced by Intel in 1985 (Encarta 95). The 386 was used in IBM and compatible microcomputers such as the PS/2 Model 80. The 386 is a full 32-bit microprocessor, meaning that it has 32-bit registers, it can transfer information over its data bus 32 bits at a time, and it can use 32 bits in addressing memory. Like the earlier 80286, the 386 operates in two modes, again real (which is compatible with MS-DOS and has the same limits as the 8086 and 8088 chips) and protected (which increases the microprocessor's functionality and protects the operating system from halting because of an inadvertent application error). Real mode limits the amount of memory the microprocessor can address to one megabyte; in protected mode, however, the total amount of memory that the 386 can address directly is 4 gigabytes, that is, roughly 4 billion bytes (see the sketch below for the arithmetic). The 80386DX also has a virtual mode, which allows the operating system to effectively divide the 80386DX into several 8086 microprocessors, each having its own 1-megabyte space, allowing each "8086" to run its own program.
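The address limits quoted in the last two paragraphs come straight from the number of address bits each mode uses. A small illustrative calculation (assuming 20-bit real-mode addresses, 24-bit 286 protected-mode addresses, and 32-bit 386 protected-mode addresses, as described above):

    # Illustrative arithmetic only: addressable memory per mode.
    def addressable_bytes(address_bits):
        return 2 ** address_bits

    for name, bits in [("real mode (8086/8088)", 20),
                       ("286 protected mode", 24),
                       ("386 protected mode", 32)]:
        size = addressable_bytes(bits)
        print(f"{name}: {size:,} bytes = {size // 2**20:,} MB")

    # real mode (8086/8088): 1,048,576 bytes = 1 MB
    # 286 protected mode: 16,777,216 bytes = 16 MB
    # 386 protected mode: 4,294,967,296 bytes = 4,096 MB (4 GB)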
The microprocessor 80386SX, also called the 386SX, was introduced by Intel in 1988 as a low-cost alternative to the 80386DX (Encarta 95). The 80386SX is in essence an 80386DX processor limited by a 16-bit data bus. The 16-bit design allows 80386SX systems to be configured from less expensive AT-class parts, ensuring a much lower complete system price. The 80386SX offers enhanced performance over the 80286 and access to software designed for the 80386DX. The 80386SX also offers 80386DX comforts such as multitasking and virtual 8086 mode.
The microprocessor 80387SX, also called the 387SX, is a math, or floating-point, coprocessor from Intel for use with the 80386SX family of microprocessors (Encarta 95). The 387SX is available in a 16-MHz version only. The 80387SX, if supported by the application software, can dramatically improve system performance by offering arithmetic, trigonometric, exponential, and logarithmic instructions for the application to use, instructions not offered in the 80386SX instruction set. The 80387SX performs operations for sine, cosine, tangent, arctangent, and logarithm calculations. If used, these additional instructions are carried out by the 80387SX, freeing the 80386SX to perform other tasks. The 80387SX is capable of working with 32- and 64-bit integers; 32-, 64-, and 80-bit floating-point numbers; and 18-digit BCD (binary coded decimal) operands; it conforms to the ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. The 80387SX operates independently of the 80386SX's mode, and it performs as expected regardless of whether the 80386SX is running in real, protected, or virtual 8086 mode.
The microprocessor i486, also called the 80486 or the 486, was introduced in 1989 by Intel (Encarta 95). Like its 80386 predecessor, the 486 is a full 32-bit processor with 32-bit registers, a 32-bit data bus, and 32-bit addressing. It includes several enhancements, however, including a built-in cache controller, the built-in equivalent of an 80387 floating-point coprocessor, and provisions for multiprocessing. In addition, the 486 uses a "pipeline" execution scheme that breaks instructions into multiple stages, resulting in much higher performance for many common data and integer math operations.
In conclusion, it is evident from the above that microprocessors are developing by leaps and bounds, and it would not be surprising if, by the time this report hits the teacher's desk or by the time you read it, the next superchip has been developed (Encarta 95).
f:\12000 essays\sciences (985)\Computer\THe Evolution of the PC and Microsoft.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Kasey Anderson
2/21/97
Computer Tech.
ESSAY
The Evolution of the PC
Xerox, Apple, IBM, and Compaq all played major roles in the development of the Personal Computer, or "PC," and the success of Microsoft. Though it may seem so, the computer industry did not just pop up overnight. It took many years of dedication, hard work, and most importantly, thievery to turn the personal computer from a machine the size of a Buick, used only by zit-faced "nerds," to the very machine I am typing this report on.
Xerox started everything off by creating the first personal computer, the Alto, in 1973. However, Xerox did not release the computer because they did not think that was the direction the industry was going. This was the first of many mistakes Xerox would make in the next two decades. So, in 1975, Ed Roberts built the Altair 8800, which is largely regarded as the first PC. However, the Altair really served no real purpose. This left computer-lovers still yearning for the "perfect" PC... actually, it didn't have to be perfect, most "nerds" just wanted their computer to do SOMETHING.
The burning need for a PC was met in 1977, when Apple, a company formed by Steve Jobs and Steve Wozniak, released its Apple II. Now the nerds were satisfied, but that wasn't enough. In order to catapult the PC into a big-time product, Apple needed to make it marketable to the average Joe. This was made possible by VisiCalc, the home spreadsheet. The Apple II was now a true-blue product.
In order to compete with Apple's success, IBM needed something to set its product apart from the others. So they developed a process called "open architecture." Open architecture meant buying all the components separately, piecing them together, and then slapping the IBM name on it. It was quite effective. Now all IBM needed was software. Enter Bill Gates.
Gates, along with buddy Paul Allen, had started a software company called Microsoft. Gates was one of two major contenders for IBM. The other was a man named Gary Kildall. IBM came to Kildall first, but he turned them away (He has yet to stop kicking himself) and so they turned to Big Bad Bill Gates and Microsoft.
Microsoft would continue supplying IBM with software until IBM insisted Microsoft develop Q/DOS, which was compatible only with IBM equipment. Microsoft was also engineering Windows, their own separate software, but IBM wanted Q/DOS.
By this time, PC clones were popping up all over. The most effective clone was the Compaq. Compaq introduced the first BIOS (Basic Input-Output System) chip. They spearheaded a clone market that not only used DOS, but later Windows as well, beginning the incredible success of Microsoft.
With all of these clones, Apple was in dire need of something new and spectacular. So when Steve Jobs got invited to Xerox to check out some new systems (big mistake), he began drooling profusely. There he saw the GUI (graphical user interface), and immediately fell in love. So, naturally, Xerox invited him back a second time (BBBBIIIIGGGG mistake) and he was allowed to bring his team of engineers. Apple did the obvious and stole the GUI from Xerox. After his own computer, the LISA, flopped, Jobs latched on to the project of one of his engineers. In 1984, the Apple Macintosh was born. Jobs, not wanting to burden his employees with accolades, accepted all of the credit.
Even with the coveted GUI, Apple still needed a good application. And who do you call when you need software? Big Bad Bill Gates. Microsoft designed "desktop publishing" for Apple. However, at the same time, Gates was peeking over Jobs's shoulder to get some "hints" to help along with the Windows production.
About the same time, IBM had Microsoft design OS/2 for them so they could close the market for clones by closing their architecture. This was the last straw for Microsoft. They designed OS/2 and then split with IBM to concentrate fully on Windows. The first few versions of Windows were only mediocre, but Windows 3.0 was the answer to what everyone wanted. However, it did not have its own operating system, something that Windows 95 does. 3.0 sold 30 million copies in its first year, propelling Microsoft to success.
So, neither the PC industry nor Microsoft was built overnight. Each owes a lot to several different people and companies. Isn't it amazing that so much has developed in just twenty-three years? Here's something even more amazing. Remember the Alto? Guess what it had... a GUI, a mouse, a networking system, everything. So maybe we haven't come all that far.
f:\12000 essays\sciences (985)\Computer\The Future of Computer Crime in America .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Sociology Research Paper
Sociology per. #2
10/8/96
The Future of Computer Crime
in
America
Sociology Topics:
Future Society
Social Change
Social and Environmental Issues
Deviant Behavior
Crime/Corrections
Name: Brandon Robinson Period: # 2
The proliferation of home computers, and of home computers equipped with modems, has brought about a major transformation in the way American society communicates, interacts, and receives information. All of these changes have been popularized by the media and by the greatly increased personal and private-sector use of the Internet. These factors, plus the fact that more and more business and government institutions are jumping to make use of these services, have put a much wider range of information at the fingertips of those often select and few individuals who know how to access, understand and use these information sources. Often today this information is of a very sensitive and private nature, on anything from IRS tax returns to top secret NASA payload launch information. On top of that, many times the individuals accessing these information sources are doing so by illegal means and are often motivated by deviant intentions. It is said that at any given time the average American has his name on an active file in over 550 computer information databases, of which nearly 90% are online, and that the figure of 550 databases comes nowhere close to how many times your personal information is listed in some database in an inactive file. The "Average American" could simply sit in his/her home doing nearly nothing all day long and still have his/her name go through over 1,000 computers a day.
All of these vast information files hold the crucial ones and zeros of data that make up your life as you and all others know it. All of these data bits are in the hands of hundreds of thousands of people, with little or NO central control or regulatory agency to oversee the safe handling of your precious little ones and zeros of information. As it would seem, George Orwell was a little late with his title of "1984". "BIG BROTHER" is INDEED WATCHING US ALL, and as it would seem our BIG BROTHER is a lot bigger than Mr. Orwell could have ever imagined. And our BIG BROTHER is EVERYWHERE! The hundreds of thousands of people that do have this information make up our modern BIG BROTHER, from government institutions to private advertising companies; these people are all the "trusted" ones who use our information every day for legal and useful purposes. But what about the others who use their skills and knowledge to gain their "own" personal and illegal access to these vast depositories of information? These individuals, popularized and demonized by the media, are often referred to as "Hackers", or "one who obtains unauthorized if not illegal access to computer data systems and or networks", or by the media definition "maladjusted losers forming high-tech street gangs that are dangerous to society" (Chicago Tribune, 1989). Whichever definition fits best, they are indeed becoming a very serious issue and worry to some in our ever and constantly changing American techno-society. Because of the serious dereliction by our elected representatives, who have valiantly once again failed to keep up with the ever changing times, there are few if any major, clear and easy to understand "CONSTITUTIONAL" laws (witness the recent 3 to 1 overturning of the not only controversial but deemed unconstitutional law called the Communications Decency Act) as to the
governing of the vastly wild and uncharted realms of cyberspace. The flagrant and serious, if not slightly laughable, attempts of our technologically illiterate and ignorant mass of elected officials send a clear S.O.S. message to the future generations of America: not only LOCK your PHYSICAL DOORS but also LOCK and double-LOCK all of your COMPUTER DOORS as well. In order for this society to evolve efficiently with our ever-changing rate of technology, we as the masses are going to have to keep abreast of the current events that are lurking out in the depths of cyberspace - before we, as a result of our inability to adapt and our arrogance and ignorance, all become products of our own technological over-indulgence, and before the tragic collision of our own self-manufactured technological self-destruction brings the break-down of our society, in every tangible aspect, as we know it today.
I believe that in the future we are headed towards, you will see our society divided into two major parts: 1) those who possess the knowledge, and the capability to obtain the knowledge/information, i.e. the "LITERATE", and 2) those who don't possess the skills necessary to obtain that crucial knowledge/information, i.e. the "ROAD KILL". Because in the future, the power structure will not be decided by who has the most guns or missiles or weapons; the power structure will be made up of little tiny ones and zeros, bits of data, giving power to whoever possesses the knowledge and the power to manipulate who has that knowledge. The "rich" and "elitist" will be the knowledge possessors and givers, and the "poor" will be those with the lack of knowledge. Knowledge will bring power and wealth, and the lack of it will bring..... well, the lack of power, wealth and knowledge.
Brandon Robinson
10/8/96 864 words
Sources
1.Thesis by Gordon R. Meyer "The Social Organization of the Computer Underground"
2.2600 Magazine The Hacker Quarterly
3.The Codex Magazine Monthly Security and Technical Update.
4.Secrets of a Super Hacker by the Knightmare
5.Personal Knowledge, Brandon Robinson
f:\12000 essays\sciences (985)\Computer\The future of the internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Future Of The Internet
In today's world of computers, the internet has become part of one's regular vocabulary. The internet is everywhere: in the news, in the newspaper, in magazines, and entire books are written on it regularly. Its growth rate is incredible, increasing by about 10% every month (Dunkin 180). This rapid growth rate could either help the system or destroy it.
The possibilities are endless on what can be done on the internet. People can tap into libraries, tap into weather satellites, download computer programs, talk to other people with related interests, and send electronic mail all across the world (Elmer-Dewitt 62). It is used by thousands of different kinds of people and organizations, like the military, businesses, colleges and universities, and common people with no specific purpose to even use it (Dunkin 180). Phillip Elmer-Dewitt stated it perfectly, "It is a place for everyone."
The rapid growth of the internet has many positive aspects to it. The new technology that is developing with this rapid growth will help keep computers up to date with what is being developed on the internet. With these technological advances, systems will be faster, more powerful, and capable of doing more complicated tasks. As more people with different interests, thoughts, and ideas get involved with the internet, there will be more information available (Elmer-Dewitt 64). As the number of internet users increases, the prices will gradually decrease on internet software and organizations (Peterson 358). The best quality about the size of the internet is it is so big that it cannot be destroyed (Elmer-Dewitt 62).
There are many problems with the constant growth of the internet. Its largest weakness is that it is not owned or controlled by anyone (Elmer-Dewitt 63). There is no base plan for the future of the internet (Dunkin 180). As it grows in size, there is less control of the system. Many groups are fighting for censorship, but that is impossible with the size of the internet (Peterson 358). With more sites and pages being added to the internet, information is becoming harder to find, and it is getting more difficult to find your way around. There are also problems just like "any heavily traveled highway, including vandalism, break-ins, and traffic jams. It's like an amusement park that is so successful that there are long waits for the most popular rides" (Elmer-Dewitt 63).
Right now, no one knows what direction the future of the internet will take. The future of the internet will be determined by whether the growth is just a trend or if it will keep growing and technology will keep up with it.
Works Cited
Dunkin, Amy. "Ready To Cruise The Internet." Business Week 28 Mar. 1994: 180-181.
Elmer-Dewitt, Philip. "First Nation In Cyberspace." Time 6 Dec. 1993: 62-64.
Peterson, I. "Guiding The Growth Of The Info Highway." Science News 4 Jun. 1994: 357-358.
f:\12000 essays\sciences (985)\Computer\The History of Computers and the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The History of the Internet and the WWW
1. The History of the World Wide Web-
The internet started out as an information resource for the government so that they could talk to each other. They called it "The Indestructible Network" because so many computers were linked together that if one server went down, no one would notice. This report will mainly focus on the history of the World Wide Web (WWW) because it is the fastest growing resource on the internet. The internet consists of different protocols such as the WWW, Gopher (like the WWW but text based), FTP (File Transfer Protocol), and Telnet (which allows you to connect to different BBSs). There are many more smaller ones, but they are innumerable. BBS is an abbreviation for Bulletin Board Service. A BBS is a computer that you can either dial into or access from the Internet. BBSs are normally text based.
2. The Creator of the WWW-
A graduate of Oxford University, England, Tim is now with the Laboratory for Computer Science (LCS) at the Massachusetts Institute of Technology (MIT). He directs the W3 Consortium, an open forum of companies and organizations with the mission to realize the full potential of the Web.
With a background of system design in real-time communications and text processing software development, in 1989 he invented the World Wide Web, an internet-based hypermedia initiative for global information sharing, while working at CERN, the European Particle Physics Laboratory. He spent two years with Plessey Telecommunications Ltd, a major UK telecom equipment manufacturer, working on distributed transaction systems, message relays, and bar code technology.
In 1978 Tim left Plessey to join D.G. Nash Ltd, where he wrote, among other things, typesetting software for intelligent printers, a multitasking operating system, and a generic macro expander.
A year and a half spent as an independent consultant included a six month stint as consultant software engineer at CERN, the European Particle Physics Laboratory in Geneva, Switzerland. Whilst there, he wrote, for his own private use, his first program for storing information, including using random associations. Named "Enquire", and never published, this program formed the conceptual basis for the future development of the World Wide Web. I could go on and on forever telling you about this person, but my report is not about him.
From 1981 until 1984, Tim was a founding Director of Image Computer Systems Ltd, with technical design responsibility. In 1984, he took up a fellowship at CERN, to work on distributed real-time systems for
scientific data acquisition and system control.
In 1989, he proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier "Enquire" work, it was designed
to allow people to work together by combining their knowledge in a web
of hypertext documents. He wrote the first World Wide Web server and
the first client, a wysiwyg hypertext browser/editor which ran in the
NeXTStep environment. This work was started in October 1990, and the
program "WorldWideWeb" first made available within CERN in December,
and on the Internet at large in the summer of 1991.
Through 1991 and 1993, Tim continued working on the design of the Web, coordinating feedback from users across the Internet. His initial
specifications of URIs, HTTP and HTML were refined and discussed in
larger circles as the Web technology spread.
In 1994, Tim joined the Laboratory for Computer Science (LCS) at the Massachusetts Institute of Technology (MIT) as Director of the W3 Consortium, which coordinates W3 development worldwide, with teams at MIT and at INRIA in France. The consortium takes as its goal to realize the full potential of the web, ensuring its stability through rapid evolution and revolutionary transformations of its usage.
In 1995, Tim Berners-Lee received the Kilby Foundation's "Young
Innovator of the Year" Award for his invention of the World Wide Web,
and was corecipient of the ACM Software Systems Award. He has been
named as the recipient of the 1996 ACM Kobayashi award, and
corecipient of the 1996 Computers and Communication (C&C) award.
He has honorary degrees from the Parsons School of Design, New York (D.F.A., 1996) and Southampton University (D.Sc., 1996), and is a Distinguished Fellow of the British Computer Society. This has just been about Tim, but here is the real history of the WWW.
3. History of the WWW dates -
"Information Management: A Proposal" written by Tim BL and circulated for comments at CERN (TBL). Paper "HyperText and CERN" produced as background (text or WriteNow format). Project proposal reformulated with encouragement from CN and ECP divisional management. Robert Cailliau (ECP) is co-author. The name World-Wide Web was decided because the name tells you what the reasorce does. HyperText is the language that users who want homepages on the internet use to write them. (See a sample of this on last page). In November of 1990 Initial WorldWideWeb program developed on the NeXT (TBL) . This was a wysiwyg browser/editor with direct inline creation of links. This made the WWW easier to use and navigate without having to type long numbers. Technical Student Nicola Pellow (CN) joins and starts work on the line-mode browser. Bernd Pollermann (CN) helps get interface to CERNVM "FIND" index running. TBL gives a colloquium on hypertext in general. When this happend the WWW really started sprouting because this new browsers made the WWW easier to navigate.
4. History of the World Wide Web dates 1991-1993
In 1991 a line-mode browser (www) was released to a limited audience on "priam" vax, rs6000, and sun4 machines. On the 17th of May a general release of WWW software was made available on CERN servers. This allowed people to start their own internet providers, such as America Online and South Carolina SuperNet. On the 12th of June a seminar was held for the WWW that allowed people to come in and see this new software in progress. I would like to skip ahead to the present day because more interesting things are happening now.
5. Present Day World Wide Web and Internet resources -
The World Wide Web today is the most popular resource on the internet. Facts show that the internet has an average of 45 million users a day, with one more joining every eight seconds. The internet transmits at a maximum speed of 100 Mb per second. The present day internet is fast and reliable, and it is also very popular. The internet started out as just a few computers linked together, and now look what we have. The internet will live on forever, and so will the WWW, although I believe that the WWW will be replaced by something new in the next 10 years.
f:\12000 essays\sciences (985)\Computer\The History of Computers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The History of Computers
A computer is a machine built to do routine calculations with speed, reliability, and ease, greatly simplifying processes that would otherwise be much longer and more drawn out.
Since their introduction in the 1940's, computers have become an important part of the world. Besides the systems found in offices and homes, microcomputers are now used in everyday locations such as automobiles, aircraft, telephones, and kitchen appliances.
Computers are used for education as well, as stated by Rourke Guides in his book, Computers: Computers are used in schools for scoring examination papers, and grades are sometimes recorded and kept on computers (Guides 7).
"The original idea of a computer came from Blaise Pascal, who invented the first digital calculating machine in 1642. It performed only additions of numbers entered by dials and was intended to help Pascal's father, who was a tax collector" (Buchsbaum 13).
However, in 1671, Gottfried Wilhelm von Leibniz invented a computer that could not only add but multiply. Multiplication was quite a step to be taken by a computer because until then, the only thing a computer could do was add. The computer multiplied by successive adding and shifting (Guides 45), an idea sketched below.
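The phrase "successive adding and shifting" can be illustrated with a short sketch. This is only a modern illustration of the idea, not a model of Leibniz's actual mechanism: each decimal digit of one factor tells the machine how many times to add a progressively shifted copy of the other factor.

    # Illustrative sketch of multiplication by successive adding and shifting.
    def multiply_by_adding_and_shifting(a, b):
        """Multiply two non-negative integers using repeated addition of
        decimally shifted copies of a, one shift per digit of b."""
        total = 0
        shifted = a
        while b > 0:
            digit = b % 10               # lowest remaining digit of b
            for _ in range(digit):       # add the shifted copy 'digit' times
                total += shifted
            shifted *= 10                # decimal "shift" of the multiplicand
            b //= 10                     # move on to the next digit of b
        return total

    print(multiply_by_adding_and_shifting(1671, 42))   # 70182, same as 1671 * 42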
Perhaps the first actual computer was made by Charles Babbage. He explains himself rather well with the following quote:
"One evening I was sitting in the rooms of the Analytical Society at Cambridge with a table full of logarithms lying open before me. Another member coming into the room, and seeing me half asleep called out, 'Well Babbage, what are you dreaming about?', to which I replied, 'I am thinking that all these tables might be calculated by machinery'"(Evans 41).
"The first general purpose computer was invented in 1871 by Charles Babbage, just before he died"(Evans 41). It was still a prototype of course, but it was a beginning.
Around this time, there was little or no interest in the development of computers. People feared, due to the lack of their knowledge, that computers would take over everything and run their lives (Buchsbaum 9).
If only these 19th century skeptics, who were ignorant of the usefulness of computers, had known the many benefits they were missing out on, they would have more readily funded individuals such as Charles Babbage.
As Glossbrenner states in The Complete Handbook of Personal Computers, computers are great information resources and great conversationalists: the keyboard is the computer's mouth, the processor its brain, and the monitor its eyes, and just like a person it can communicate with you (Glossbrenner 18). People did not comprehend this early on, and didn't take computers as seriously as they should have.
In conclusion, throughout the years, people should have been more interested and involved in computers. Today, nearly everything is centered around them with their high speed capabilities getting even better with every day. They will continue to grow and become more advanced forever.
f:\12000 essays\sciences (985)\Computer\The History of Intel Corporation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
History of The Intel Corporation
The Intel Corporation is the largest manufacturer of computer devices in the world. In this research paper I will discuss where, when, and how Intel was founded, the immediate effects that Intel made on the market, their marketing strategies, their competition, and finally, what Intel plans to do in the future.
Intel didn't just start out of thin air; it was created after Bob Noyce and Gordon Moore first founded Fairchild Semiconductor with six other colleagues. Fairchild Semiconductor was going pretty well for about ten years when Bob and Gordon decided to resign because they were tired of not being able to do things the way they wanted to; they proceeded to establish a new integrated circuit electronics company. Gordon suggested that semiconductor memory looked promising enough to risk starting a new company. Intel was born.
Intel made quite an impact on the industry soon after it was founded. Sales revenues jumped enormously through Intel's international expansion into many places, including Europe and the Philippines, in the early 70's. From 1969 to 1970 Intel's revenues went up by almost four million dollars! Today, Intel is one of the biggest companies, pulling in billions and billions of dollars each year.
Intel has had many factors over the years that have allowed it to monopolize the computer industry, thus resulting in small competition. First of all, Intel is almost 25 years ahead of its competitors. Therefore, most other companies are just starting out and have little or no effect on Intel's sales. Another reason is obviously Intel's reputation. They have built up such a standard of excellence that when someone hears the word Intel they think high quality.
Intel's popularity, reputation, and revenues are a direct result of their marketing strategies. Again, one of the most important factors that has made Intel so successful is the reputation that has been built up since they started. The Intel Inside program, which was launched in May of 1991, was a promotional campaign that placed the Intel Inside logo on all computers containing the new 486 processor. Clever and effective advertising has also increased Intel's popularity. One of the most popular commercials advertising the Pentium processor shows a fly-through inside a computer, then scans down to show the Intel logo on the processor.
Intel definitely has a very bright future ahead of them. By continually creating faster and more advanced processors and other computer components, they stay one step ahead of the competition, which makes them a leader.
f:\12000 essays\sciences (985)\Computer\The impact of AI on Warfare.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Impact of AI on Warfare.
It is well known that throughout history man's favourite pastime has been to make war. It has always been recognised that the opponent with the better weapons usually came out victorious. Nowadays, there is an increasing dependency, by the more developed nations, on what are called smart weapons and on the development of these weapons. The social impact of AI on warfare is something which needs to be considered very carefully, for it raises many ethical and moral issues and arguments. The use of smart weapons raises many questions about the price paid to develop these weapons; money which could be used to solve most of the world's social problems such as poverty, hunger, etc. Another issue is the safety involved in the use of these weapons. Can we really make a weapon that does everything on its own without human help, and are these weapons a threat to civilians? The main goal of this essay is to discuss whether it is justifiable to use AI in warfare and to what extent.
The old-time dream of making war bloodless by science is finally becoming a reality. The strongest man will not win, but the one with the best machines will. Modernising the weapons used in war has been an issue since the beginning. Nowadays, the military has spent billions of dollars perfecting stealth technology to allow planes to slip past enemy lines undetected. The technology involved in a complicated system such as these fighter planes is immense. The older planes are packed with high-tech gear such as microprocessors, laser guiding devices, electromagnetic jammers and infrared sensors. With newer planes, the airforce is experimenting with a virtual reality helmet that projects a cartoon-like image of the battlefield for the pilot, with flashing symbols for enemy planes. What is more, if a pilot passes out for various reasons such as the "G" force from a tight turn, then a computer system can automatically take over while the pilot is disabled. A recent example of the use of AI in warfare is the Gulf War. In operation Desert Storm, many weapons such as 'smart' bombs were used. These were highly complex systems which used superior guidance capabilities, but they did not contain any expert systems or neural networks.
The development of weapons which use highly complex systems has drastically reduced the number of human casualties in wartime. The bloodshed is minimised because of the accuracy of the computer systems used. This has been an advantage that has brought a lot of praise to the development of such sophisticated (not to mention expensive) weapons. More and more taxpayer's money is invested into research and development of weapons that may never be used. This is because the weapons are mostly for deterrent uses only and no country really wants to use them because of the power which they hold. The problem with using sophisticated computer systems in warfare is that the technology being used may fall into the wrong hands. But who is to say what are the wrong hands? Most people tend to think that if the technology is on their side, then it can not be misused. This has been proven to be false when in the Gulf War a whole battalion of British armoured vehicles were accidentally annihilated by an allied American stealth fighter which contained complex computer systems which were thought to be faultless.
The major problem with the use of highly sophisticated weapons is the cost of development. The best solution to this problem has been found to be the fitting of old B-52s with modern technology, which is almost as good and gets the job done, all at a minute fraction of the price. The other problem arising from the issue is control over the development and employment of such weapons. The solution to this problem would be international control over the development and use of weapons by independent organisations such as the United Nations. Also, associations can be formed in order to group all scientists who are involved in the development of the weapons in order to keep track of them. The use of extremely high-tech weapons should be reserved for cases where it is absolutely necessary. Although governments are eager to try out equipment on which they have spent millions and sometimes billions of taxpayer's money, the use of AI is showing proof that it is serving its ultimate purpose: to slowly move men farther and farther from the killing fields.
f:\12000 essays\sciences (985)\Computer\The Interestingnet .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
With only 1000 or so networks in the mid 1980's, the Internet has brought tremendous technological change to society over the past few years. In 1994, more than twenty-five million people gained access to the Internet (Grolier). The Internet users are mainly from the United States of America and Europe, but other countries around the world will be connected soon as improvements to communication lines are made.
The Internet originated in the United States Defense Department's ARPAnet (Advanced Research Projects Agency network, produced by the Pentagon) project in 1969 (Krol). Military planners sought to design a computer networking system that could withstand an attack such as a nuclear war. In the 1980's, the National Science Foundation built five supercomputer centers to give several universities academic access to high-powered computers formerly available only to the United States military (Krol). The National Science Foundation then built its own network chaining more universities together. Later, the network connections were being used for purposes unrelated to the National Science Foundation's idea, such as the universities sending electronic mail (today it is understood as Email). The United States government then helped push the evolution of the Internet, calling the project the Information Super Highway (Grolier).
In the early 1990's the trend boomed. Businesses soon connected to the Internet, and started using the Internet as a way of saving money through advertising products and electronic mailing (Abbot). Communications between different companies also arose due to the convenience of the Internet. Owners of personal computers soon became eager to connect to the Internet. Through a modem or Ethernet adapter (computer hardware devices that allow a physical connection to the Internet), home computers can now be connected to the Internet (Grolier).
New Internet services have evolved since the National Science Foundation's basic idea back in the 1980's (Krol). The majority of home users subscribe to services such as Netscape, Prodigy, America Online, and CompuServe. These services are connected to the Internet and provide user-friendly access to it for a reasonable monthly fee. These services also connect to the World Wide Web, a service that is defined as global international networking (Abbot). The Web makes all of the systems from other countries work together with compatibility, allowing the Internet to be internationally user-friendly. The United States stock market has greatly benefited from the sudden interest in and popularity of the Internet. Stockholders with shares of Internet-related companies have noticed a skyrocket in prices over a short period of time.
The Internet holds an endless amount of information. From Chia Pets, to vacation sites, to the anatomy of a bullfrog, the Internet covers information on and about anything. For example, I was very interested in the sport of broomball when I played my first game at Iowa State's hockey rink. Not knowing much about the newly experienced sport, my interest grew to find out more about it. Using my computer, I typed "Broomball" into Netscape. To my surprise, forty sites that contained the word "Broomball" popped up, and I was able to find out much more about the sport. One of the sites that I visited happened to be down in Australia, another up in Canada! From there, I now know that broomball leagues can be found all over Canada, and that broomball was first invented in 1981.
Millions of college students' lives have been affected by the Internet. To college students, the Internet is a twenty-four hour library that can be accessed through various computer labs across campuses. To others, it is a way of electronically sending in homework, or sending a letter to a friend who is enrolled at a different college. It is also an exciting, growing spot to visit when boredom casts over. From obtaining information to Emailing, uses of the Internet can be endless for students.
With my personal computer set up with Netscape service along with a thirty-dollar Ethernet card, I am able to browse the Internet in my dorm room. I often Email friends at the University of Northern Iowa, my cousin in Chicago, and friends back in my hometown of Dubuque. This is quite handy because I quickly found out that the cost of phone calls can be ridiculous, and the wait for a computer to free up in the labs can be quite frustrating. In a few of my computer science classes, Project Vincent is a system used with the Internet during class. In class, we use it to gain entry into different programs and software. I also use it weekly to submit my Computer Science 227 programming homework, which is handy because I do not have to leave my room in order to do homework. In my English class, we often head over to a computer center and discuss previous readings through networking. Here, we can join each other in group discussion while individually logged onto computers at the same time. From my point of view, the Internet has drastically changed my life since my arrival at college.
The lines once constructed for nuclear protection have now proven to be a source of useful information and a means of mass communication (Krol). The Internet aids education and makes the amount of resources endless. In the future, more and more colleges, high schools, and grade schools will be connecting to the Internet. Those who are currently connected will definitely stay connected. From my point of view, the Internet will continually be the exciting road toward information and communication in the years to come.
Works Cited:
Abbot, Tony. On Internet 94: An International Guide to Journals, Newsletters, Texts,
Discussion Lists, and Other Resources on the Internet. 1994.
Krol, Edward. The Whole Internet. 1992.
The 1995 Grolier Multimedia Encyclopedia. "Internet." 1995.
f:\12000 essays\sciences (985)\Computer\The Internet 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Internet is a worldwide connection of thousands of computer networks. All of them speak the same language, TCP/IP, the standard protocol. The Internet allows people with access to these networks to share information and knowledge. Resources available on the Internet are chat groups, e-mail, newsgroups, file transfers, and the World Wide Web. The Internet has no centralized authority and it is uncensored. The Internet belongs to
everyone and to no one.
The Internet is structured in a hierarchy. At the top, each country has at least one
public backbone network. Backbone networks are made of high-speed lines that connect to other backbones. There are thousands of service providers and networks that connect home or college users to the backbone networks. Today, there are more than fifty thousand networks in more than one hundred countries worldwide. However, it all
started with one network.
In the early 1960's the Cold War was escalating and the United States Government was faced with a problem. How could the country communicate after a nuclear war? The Pentagon's Advanced Research Projects Agency, ARPA, had a solution. They would create a non-centralized network that linked from city to city, and base to base. The network was designed to function when parts of it were destroyed. The network could not have a center because it would be a primary target for enemies. In 1969, ARPANET was created, named after its original Pentagon sponsor. There were four supercomputer stations, called nodes, on this high speed network.
ARPANET grew during the 1970's as more and more supercomputer stations were added. The users of ARPANET had turned the high-speed network into an electronic post office. Scientists and researchers used ARPANET to collaborate on projects and to trade notes. Eventually, people used ARPANET for leisure activities such as chatting. Soon
after, the mailing list was developed. Mailing lists were discussion groups of people who
would send their messages via e-mail to a group address, and also receive messages. This
could be done twenty-four hours a day.
As ARPANET became larger, a more sophisticated and standard protocol was needed. The protocol would have to link users from other small networks to ARPANET, the main network. The standard protocol invented in 1977 was called TCP/IP. Because of TCP/IP, connecting to ARPANET by any other network was made possible. In 1983, the military portion of ARPANET broke off and formed MILNET. The same year, TCP/IP was made a standard and it was being used by everyone. It linked all parts of the branching complex networks, which soon came to be called the Internet.
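As a rough, modern-day sketch of what "speaking TCP/IP" means in practice (an illustration added to this copy, not from the essay), the few lines of Python below use the operating system's TCP/IP support to let two programs on one machine exchange a message; the local port number 9000 is an arbitrary choice.

    import socket
    import threading

    # set up a listening TCP socket on a hypothetical local port
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9000))
    srv.listen(1)

    def echo_once():
        conn, _ = srv.accept()              # wait for one connection
        data = conn.recv(1024)              # read the client's message
        conn.sendall(b"echo: " + data)      # send a reply back
        conn.close()

    threading.Thread(target=echo_once, daemon=True).start()

    # a second "computer" connects over TCP/IP and exchanges a message
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 9000))
    cli.sendall(b"Hello over TCP/IP")
    print(cli.recv(1024).decode())          # -> echo: Hello over TCP/IP
    cli.close()
    srv.close()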
In 1985, the National Science Foundation (NSF) began a program to establish Internet access centered on its five powerful supercomputer stations across the United States. They created a backbone called NSFNET to connect college campuses via regional networks to its supercomputer centers. ARPANET officially expired in 1989. Most of the networks were absorbed by NSFNET. The others became parts of smaller networks. The Defense Communications Agency shut down ARPANET because its functions had been taken over by NSFNET. Amazingly, when ARPANET was turned off in June of 1990, no one except the network staff noticed.
In the early 1990's the Internet experienced explosive growth. It was estimated that the
number of computers connected to the Internet was doubling every year. It was also
estimated that at this rapid rate of growth, everyone would have an e-mail address by the
year 2020. The main cause of this growth was the creation of the World Wide Web. The World Wide Web was created at CERN, a physics laboratory in Geneva,
Switzerland. The Web's development was based on a protocol for transmitting web pages over the Internet, called the Hypertext Transfer Protocol, or HTTP. It is an interactive system for
the dissemination and retrieval of information through web pages. The pages may consist
of text, pictures, sound, music, voice, animations, and video. Web pages can link to other
web pages by hypertext links. When there is hypertext on a page, the user can simply click
on the link and be taken to the new page. Previously, the Internet was black and white,
text, and files. The web added color. Web pages can provide entertainment, information, or commercial advertisement. The World Wide Web is the fastest growing Internet resource.
The Internet has dramatically changed from its original purpose. It was
formed by the United States government for exclusive use of government officials and the
military to communicate after a nuclear war. Today, the Internet is used globally for a
variety of purposes. People can send their friends an electronic "hello." They can
download a recipe for a new type of lasagna. They can argue about politics on-line, and
even shop and bank electronically in their homes. The number of people signing on-line is
still increasing and the end is not in sight. As we approach the 21st century, we are
experiencing a great transformation due to the Internet and the World Wide Web. We are
breaking through the restrictions of the printed page and the boundaries of nations and
cultures.
You may not be aware of it, but the World Wide Web is currently transforming the world as we know it. You've probably heard a lot about the Internet and the World Wide Web, but you may not know what these terms mean and may be intimidated by this rapidly advancing field of science. If there is one aspect of this field that is advancing faster than any other, it is the ease with which this technology can be learned.
The Internet, by definition, is a "network of networks." That is, it is a world-wide network that links many smaller networks. The World Wide Web is a new subdivision of the Internet. The World Wide Web consists of computers (servers) all over the world that store information in a textual as well as a multimedia format. Each of these servers has a specific Internet address which allows users to easily locate information. Files stored on a server can be accessed in two ways. The first is simply by clicking on a link in a Web document (better known as a Web page) that points to the address of another document. The second way to locate a particular Web page is by typing the Uniform Resource Locator (URL) of the page into your browser (the software interface used to navigate the World Wide Web). The URL of a page is the string of characters that appears in the Location: box at the top of your screen. Every Web page has a unique URL, which begins with the letters "http://" that identify it as a Web page. This is the equivalent of the Internet address and tells the computer where to find the particular page you are looking for.
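As a rough illustration (added to this copy, not part of the original essay), the pieces of a URL described above can be pulled apart with a few lines of Python; the address used here is a made-up example.

    from urllib.parse import urlparse

    # a hypothetical address, used only to show the parts of a URL
    url = "http://www.example.edu/admissions/apply.html"
    parts = urlparse(url)

    print(parts.scheme)   # 'http'  -> identifies the resource as a Web page
    print(parts.netloc)   # 'www.example.edu'        -> the server to contact
    print(parts.path)     # '/admissions/apply.html' -> the file on that server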
The greatest advantage of producing information in HTML format is that files may be linked to one another via hyperlinks (or links) within the documents. Links usually appear in a different color than the rest of the text on a Web page and are often underlined. Navigating the Web is as simple as clicking a mouse button. Clicking the mouse on a link tells the computer to go to another Internet location and display a specific file. Also, most Web browsers allow easy navigation of the Web by utilizing "Back" and "Forward" buttons that can trace your path around the Web. Links within Web pages aren't limited to just other Web pages. They can include any type of file at all. Some of the more common types of files found on the Web are graphics files, sound files, and files containing movie clips. These files can be run by different helper applications that the Web browser associates with files of that type.
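To make the hyperlink idea concrete, here is a small sketch (an illustration added to this copy, not from the essay) that uses Python's built-in html.parser to list the link targets found in a fragment of HTML; the page fragment and its address are invented.

    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collects the destination of every <a href="..."> link on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

    page = '<p>See the <a href="http://www.example.org/next.html">next page</a>.</p>'
    collector = LinkCollector()
    collector.feed(page)
    print(collector.links)   # ['http://www.example.org/next.html']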
As a student, the Web can provide you with an enormous source of information pertaining to any area of academic interest. This can be especially useful when information is needed to write a term paper. Students can use one of the many Search Engines on the Web to locate information on virtually any topic, just by typing the topic that they wish to find information on.
Another application many students find the World Wide Web to be useful for is career planning. There are hundreds of Web sites that contain information about job openings in every field all over the United States as well as abroad. Job openings can be found listed either by profession or by geographical location, so students don't have to waste time looking through job listings that don't pertain to their area of interest or location of preference. And if students fail to find job openings they are interested in, they can post their resumes to employment service Web sites, which try to match employers with those seeking employment.
The Web can also be a useful place for high school students applying to college or college graduates who wish to delay their job hunt by going to graduate school. Many colleges and universities around the world are getting on the Internet to provide their students with access to the enormous amount of information available on it. This allows students the opportunity to browse Web servers at different colleges where they can find information useful in selecting the institution most appropriate to their academic needs.
While the World Wide Web can provide information crucial to your academic and professional career, the information contained on it is not limited to such serious matters. The Web can also provide some entertaining diversions from academics. You can spend hours on the Internet and it only feels like a couple of minutes. A recent topic I have personally been looking into is three-dimensional chat rooms. In this type of chat room you virtually walk around, approach other people, and attempt to have a conversation with them. Unfortunately, not everyone is as responsive as you would like them to be. As an avid user of the Internet, I highly recommend that everyone look into "Worlds Chat".
As the 21st century approaches, it seems inevitable that computer and telecommunications technology will radically transform our world in the years to come. The Internet and the World Wide Web, in particular, appear to be the protocol that will lead us into the Information Age. The social and political implications of this new technology are astounding. Never before has such an enormous amount of information been available to a limitless number of people. Already, issues of censorship and free speech have come to take center stage, as the world scrambles to deal with the power of modern technology.
The World Wide Web has already affected our educational, political, and commercial sectors, and it now seems poised to affect every other aspect of human life. The day when every home has a computer is not far off. In order to keep up with the technology of the future, you need to catch up with the technology of the present. The easiest way to do this is to simply wander around the World Wide Web. It's as easy as clicking a mouse. So sit back and explore the World Wide Web at your own pace, and don't let yourself get left behind when the next technological breakthrough comes along.
f:\12000 essays\sciences (985)\Computer\The Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Imagine talking about the latest elections with someone three thousand miles away without receiving a tremendous phone bill. Or sending a letter to a friend or relative and having it arrive one second later. How would it feel to know that any source of information is at your fingertips at the press of a button? All of these and more are possible with a system of connected networks, known as the Internet, that sends information from place to place at light speed. This is a trend word of the nineties, yet its background spans all the way back to the sixties. The history of the Internet is a full one, even though it has only been around for about 30 years. It has grown to be the greatest collection of networks in the world, and its origins go back to 1962.
In 1962 the original idea for this great network of computers sprang from a question: "How could U.S. authorities successfully communicate after a nuclear war?" The answer came from the Rand Corporation, America's foremost Cold War think-tank: why not create a network of computers without one central, authoritative unit (Sterling 1)? The Rand Corporation, working alongside the U.S. Advanced Research Projects Agency (ARPA), devised a plan. The network itself would be assumed to be unreliable at all times, so that no single part of it would ever be depended upon too heavily or become too powerful. Each computer on the network, or node, would have its own authority to originate, pass, and receive messages. The name given to this network was the ARPANET.
To fully understand the ARPANET, an understanding of how a network works is needed. A network is a group of computers connected by a permanent cable or a temporary phone line. The sole purpose of a network is to be able to communicate and send information electronically. The plan for the ARPANET was to have the messages themselves divided into packets, each packet separately addressed so that it could wind its way through the network on an individual basis. If one node was gone, it would not matter; the message would find a way through another node.
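A toy sketch in Python (added for illustration; this is not ARPANET code) of the packet idea just described: the message is cut into small, individually addressed pieces that can travel separately, arrive in any order, and still be reassembled at the destination.

    import random

    def to_packets(message, destination, size=8):
        # cut the message into small pieces, each carrying its own address
        pieces = [message[i:i + size] for i in range(0, len(message), size)]
        return [{"dest": destination, "seq": n, "data": p}
                for n, p in enumerate(pieces)]

    def reassemble(packets):
        # the destination puts the pieces back in order by sequence number
        ordered = sorted(packets, key=lambda pkt: pkt["seq"])
        return "".join(pkt["data"] for pkt in ordered)

    packets = to_packets("How could authorities communicate after an attack?", "node-4")
    random.shuffle(packets)        # packets may take different routes
    print(reassemble(packets))     # the original message reappears intact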
The idea was kicked around by MIT, UCLA, and RAND during the sixties. After the British set up a test network of this type, ARPA decided to fund a larger project in the USA. The first university to receive a node, called an Interface Message Processor, was UCLA around Labor Day, marking September 1, 1969 as the birth date of the Internet as we know it today (Cerf 1). The next was the Stanford Research Institute (SRI), then UC Santa Barbara (UCSB), and finally the University of Utah (Cerf 1).
The original computers used to connect to the ARPANET were considered supercomputers of the time. The Scientific Data Systems (SDS) Sigma 7 was the original computer at UCLA (Cerf 1). The computers connected to each other at a speed of about 400,000 bits per second, or 400 kbps, over a dedicated line, which was fast at the time. Originally they connected using a protocol called the Network Control Protocol, or NCP, but as time passed and the technology advanced, NCP was superseded by the protocol used by most Internet users today, TCP/IP (Sterling 2). TCP, or Transmission Control Protocol, converts the message into streams of packets at the source, then reassembles them back into messages at the destination. IP, or Internet Protocol, handles the addressing, seeing to it that packets are routed across multiple nodes and even across multiple networks with multiple standards, not only ARPA's. This protocol came into use around 1977 (Zakon 5).
In 1969 there existed 4 nodes, in 1971 there were 15, and in 1972 there were 37 nodes. This exponential growth has continued; even today, in 1996, there are about 5.3 million nodes connected to the Internet (Zakon 14). The number of people, however, can only be estimated, because the number of people connected to any one network varies. The amount of content on the Internet is estimated at about 12,000,000 web pages. As the numbers grew and grew, the military finally dropped out in 1983 and formed MILNET. The ARPANET also took on a new name in 1989; it became known as the Internet.
The ARPANET was not the only network of this time. Companies had their own Local Area Networks, or LANs, and Ethernets. A LAN usually has one main server and several computers connected to that server, such as the computer lab at Prep. The server usually has a large hard drive and possibly shares a printer. The computers connected to the server generally have a microprocessor and maybe a small hard drive. All the important software is shared from the server. An Ethernet, on the other hand, is similar to a LAN, but the connecting cable is larger and enables computers on the network to be up to 1000 ft. away. An Ethernet is also faster than a regular LAN; its base speed is 10 Mbps. To put this in perspective, that is more than 300 times faster than a regular modem running at 28.8 kbps. Each of these types of networks connected to the Internet through its own dedicated node.
There is no government regulating the Internet, it is anarchy in its greatest form. The Internet's "anarchy" may seem strange, but it makes a certain deep and basic sense. It's rather like the "anarchy" of the English language. Nobody rents or owns English. As an English-speaking person, it's up to you to learn how to speak English properly and use it however you want. Though many people earn their living from using, exploiting, and teaching English, "English" as an institution is public property. Much the same goes for the Internet. Would the English language be improved if there was an English Language Co.? There'd probably be far fewer new words in English, and fewer new ideas. People on the Internet feel the same way about their institution. It's an institution that resists institutionalization. The Internet belongs to everyone and no one (Sterling 4).
Our government and many others are attempting to regulate material on the Internet. The Telecommunications Act that passed about a year ago, which included the Communications Decency Act (CDA), put a few rules not on the Internet itself but on the people who own computers connected to the Internet; child pornography, for example, is illegal to post on any website anywhere. This Act was ruled unconstitutional by the Supreme Court. Other governments have tried to put limitations on the Internet and some have even succeeded. China requires users and ISPs to register with the police. Germany cut off access to some newsgroups carried on CompuServe; this ban was lifted due to protest. Saudi Arabia confines Internet access to universities and hospitals. Singapore requires sites with political and religious content to register with the state. New Zealand classifies computer disks as "publications" that can be censored and seized (Zakon 14). On November 1 the New York State Senate passed a bill which, barring a constitutional challenge, makes speech that is "harmful to minors" punishable as a felony. Ann Beeson, chief cyberlitigator for the American Civil Liberties Union (ACLU), said, "The law will show how nonsensical state regulation of the Internet is. It will affect online users not just in New York, but throughout the world. In addition to violating the First Amendment, the law violates the commerce clause because it regulates the actions of the online community even wholly outside the state of New York." This trend is not limited to New York. In 1995 and '96, 11 states passed laws that somehow censor speech on the Internet. They restrict everything from soliciting minors for online sex (North Carolina) to prohibiting college professors from using university-sponsored Internet resources to view sexually explicit material (Virginia). The ACLU has been the Internet's biggest defender in cases such as the CDA.
With over 2 million servers connected to the Internet, there is always something to do online. In fact, this is a major problem for some people. They spend so much time in cyberspace that they forget how to interact with other people, and their social skills deteriorate. A person like this is known as a net addict. A common question asked is, "What is on the Internet that is so addicting?" One possible answer is that online, a person can gain a false sense of reality. A person can be anyone they want to be online. This attraction alone is enough for some to give up reality altogether. This statement can be debated, but if the choice had to be made between being an idealized person and being the regular person, which would be chosen more often?
One of the many attractions of the Internet is electronic mail (E-mail), faster by several orders of magnitude than the U.S. mail, which is known by Internet regulars as "snail-mail." Internet mail is like a fax: it is electronic text written and then sent from the computer over the phone line to the Internet Service Provider (ISP). The ISP then routes the mail to its destination. One piece of e-mail may pass through over 1,000 computers, bouncing off each one before it reaches its destination. This process takes place in a matter of seconds, depending on your letter's length and whether you have a file attached. New forms of e-mail, such as voice mail and video mail, are being developed; both already exist and require special hardware and software. They also take longer to send and receive.
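For illustration only (added to this copy), the hand-off described above can be sketched with Python's standard smtplib module; the mail server name and both addresses are made up, so this would only actually run against a real ISP's mail server.

    import smtplib

    # a complete message, headers included, ready to hand to the ISP
    message = """From: student@example.net
    To: friend@example.org
    Subject: Hello over the Internet

    This letter should arrive in seconds, not days.
    """

    server = smtplib.SMTP("mail.example.net")   # connect to the ISP's (hypothetical) mail server
    server.sendmail("student@example.net",      # envelope sender
                    ["friend@example.org"],     # envelope recipient(s)
                    message)                    # the ISP routes it toward its destination
    server.quit()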
One of the first features of the ARPANET, and then the Internet, was discussion groups. These discussion groups, or "newsgroups" as they are more commonly known, are a world of their own. This world of news, debate, and argument is generally known as USENET. The Internet and USENET are quite different. USENET is rather like an enormous billowing crowd of gossipy, news-hungry people, wandering in and through the Internet on their way to various private backyard barbecues (Sterling 4). At any given time there are over 28,000 separate newsgroups on USENET, and the discussions generate about 7 million words of typed commentary every single day (Sterling 4). All USENET newsgroups are organized by hierarchies and given prefix names such as: alt (alternative), rec (recreation), comp (computers), misc (miscellaneous), and soc (society). These were the top five newsgroup hierarchies in 1996 (Georgia 206). USENET is the focus of most of the censorship debate because this is where much of the pornography is viewed. It is uncontrollable because a newsgroup can be created at any time without regulation or supervision. Only 7.6% of all newsgroups deal with adult-oriented material; it may be a small number, yet it has been blown out of proportion by the media and the like.
The main use of the Internet is using a browser such as Netscape or Internet Explorer to view web pages. The trendy word for this is "surfing," or for some people with slow connections, "crawling." To view a web page the user types in the desired address and then, magically, it appears on screen; this is the description most users give when asked to explain the Internet. Underneath that, there are complex commands telling the computer what to send and receive, what data is given out, and who is denied or accepted. The process begins with typing in the address, the usual http://www... and so on. The http stands for HyperText Transfer Protocol, which tells the computer which protocol to use over the World Wide Web (www). When the user enters an address such as http://www.microsoft.com, the browser sends the request over the www to find Microsoft's web server. The .com section specifies that this is a commercial site; other suffixes include .edu (education), .mil (military), .gov (government), and .net (network). When users access Microsoft's site they can explore Microsoft's computer by clicking on hyperlinks, which are links to other pages. Specific pages normally are labeled .htm or .html; these extensions stand for HyperText Markup Language, the language in which most web pages are written. All these elements combined are what most people consider the Internet.
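Behind the "magic" described above, the browser is simply asking a named server for a page over HTTP and reading back the HTML it returns. A minimal sketch in Python (an added illustration, using a placeholder address rather than a real site of interest):

    from urllib.request import urlopen

    # ask the (placeholder) server for its home page over HTTP
    with urlopen("http://www.example.com/") as response:
        html = response.read().decode("utf-8", errors="replace")

    print(html[:200])   # the first few lines of the page's HTML source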
The Internet is so vast that a person could spend 24 hours a day, 7 days a week, 365 days a year online and never see all of it. The amount of information on the Internet is over several trillion (tera) bytes. To put this into perspective, that is over 600,000 floppy disks. With all that information, it is easy to lose track of your target and waste time. Sometimes there are multiple tasks to be done, but once an interesting site is found, one hyperlink leads to another, one hour turns into three, and the rest of the world is put on hold. Other times a blank screen can sit there and nothing comes to mind to visit or learn about. The Internet is great if a person has 3 or 4 hours to kill. One tip for limiting time online is to download a timer that disconnects once a time limit has been passed. These programs usually know what day it is and allow only so much time online per day. Self-discipline is another method: train yourself to get up and leave. The consequence of being online for long periods of time is a large access bill from the ISP.
Day by day the Internet grows. Some people are predicting a crash because of the excessive traffic online and the limited capabilities of the servers being visited. AOL did crash for 15 hours several months ago, and the question was raised: "Can our servers handle the traffic?" The answer, though, lies in the future. As the Internet progresses, so does technology. Every 5 months newer computers are released, and the computers released 5 months earlier go out of date. The technological forecast calls for the Virtual Reality Modeling Language (VRML) in the near future. This enables the user to explore in 3D. Imagine walking through the Sistine Chapel while sitting in an office in Spokane. Many ask, "What does the future of the Internet hold?" Only time will tell.
f:\12000 essays\sciences (985)\Computer\The MouseComputer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Mouse
The computer mouse is a common pointing device, popularized by
its inclusion as standard equipment with the Apple Macintosh. With
the rise in popularity of graphical user interfaces in MS-DOS, UNIX,
and OS/2, use of mice is growing throughout the personal computer
and workstation worlds. The basic features of a mouse are a casing
with a flat bottom, designed to be gripped by one hand; one or more
buttons on the top; a multidirectional detection device (usually a ball)
on the bottom; and a cable connecting the mouse to the computer. By
moving the mouse on a surface (such as a desk), the user controls an
on-screen cursor. A mouse is a relative pointing device because there are
no defined limits to the mouse's movement and because its placement on
a surface does not map directly to a specific screen location. To select
items or choose commands on the screen, the user presses one of the
mouse's buttons, producing a "mouse click."
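As a small added illustration (not part of the original piece), the few lines of Python below use the standard tkinter toolkit to show a "mouse click" reaching a program: moving the mouse positions the on-screen cursor, and pressing the left button reports where the click landed.

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200)
    canvas.pack()

    def on_click(event):
        # the toolkit reports where the cursor was when the button was pressed
        print(f"mouse click at window position ({event.x}, {event.y})")

    canvas.bind("<Button-1>", on_click)   # left mouse button
    root.mainloop()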
f:\12000 essays\sciences (985)\Computer\The Necessity of Computer Security.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Necessity Of Computer Security
When the first electronic computers emerged from university and military laboratories in
the late 1940s and early 1950s, visionaries proclaimed them the harbingers of a second industrial
revolution that would transform business, government and industry. But few laymen, even if they
were aware of the machines, could see the connection. Experts, too, were sceptical. Not only were computers huge, expensive, one-of-a-kind devices designed for performing abstruse scientific and military calculations, such as cracking codes and calculating missile trajectories; they were also extremely difficult to handle.
Now, it is clear that computers are not only here to stay, but they have a profound effect
on society as well. As John McCarthy, Professor of Computer Science at Stanford University,
speculated in 1966: "The computer gives signs of becoming the contemporary counterpart of the
steam engine that brought on the industrial revolution - one that is still gathering momentum and
whose true nature had yet to be seen."
Today's applications of computers are vast. They range from running ordinary household appliances such as televisions and microwaves, to serving as tools in the workplace through word processing, spreadsheets, and graphics software, to running monumental tasks such as being the heart and soul of the nation's tax processing department and managing the project timetables of
the Space Shuttle. It is obvious that the computer is now and always will be inexorably linked to
our lives, and we have no choice but to accept this technology and learn how to harness its total
potential.
With any progressing technology, an unauthorized application can almost always be found for it. A computer can be, and has been, used for theft and fraud - for example, as a database and manager of illegal activities such as drug trafficking and pornography. However, we must not just consider the harmful applications of the computer, but also take into account the good that it has done.
When society embraced computer technology, we had to treat it as an extension of what we already had at hand. This means that some problems that existed before the computer era may also arise now, in forms where computers are an accessory to a crime.
One of the problems that society has faced ever since the dawn of civilization is privacy.
The issue of privacy on the Internet has raised many arguments for and against it. The issue of privacy has gotten to the point where the government of the United States has proposed a bill promoting a single chip to encrypt all private material on the Internet.
Why is privacy so important? Hiding confidential material from intruders does not necessarily mean that what we keep secret is illegal. Since ancient times, people have trusted couriers to carry their messages. We seal our messages in an envelope when sending mail through the postal service. Using computers and encryption programs to transfer electronic messages securely is no different from sending a letter the old-fashioned way. This paper will examine the
modern methods of encrypting messages and analyse why Phil Zimmerman created an extremely
powerful civilian encipherment program, called the PGP, for "Pretty Good Privacy." In
particular, by focusing on cryptography, which was originally intended for military use, this paper
will examine just how easy it is to conclude why giving civilians a military-grade encrypting
program such as the PGP may be dangerous to national security. Therefore, as with any type of new technology, this paper will argue that the application of cryptography for civilian purposes is not just a right, but also a necessity.
Increasingly in today's era of computer technology, not only banks but also businesses and
government agencies are turning to encryption. Computer security experts consider it the best and most practical way to protect computer data from unauthorized disclosure when transmitted and even when stored on a disk, tape, or the magnetic strip of a credit card.
Two encryption systems have led the way in the modern era. One is the single-key
system, in which data is both encrypted and decrypted with the same key, a sequence of eight
numbers, each between 0 and 127. The other is a 2-key system; in this approach to cryptography,
a pair of mathematically complementary keys, each containing as many as 200 digits, is used for encryption and decryption. In contrast with ciphers of earlier generations, where security
depended in part on concealing the algorithm, confidentiality of a computer encrypted message
hinges solely on the secrecy of the keys. Each system is thought to encrypt a message so
inscrutably that the step-by-step mathematical algorithms can be made public without
compromising security.
The single key system, named the Data Encryption Standard - DES for short - was
designed in 1977 as the official method for protecting unclassified computer data in agencies of
the American Federal government. Its evolution began in 1973 when the US National Bureau of
Standards, responding to public concern about the confidentiality of computerized information
outside military and diplomatic channels, invited the submission of data-encryption techniques as
the first step towards an encryption scheme intended for public use.
The method selected by the bureau as the DES was developed by IBM researchers.
During encryption, the DES algorithm divides a message into blocks of eight characters, then
enciphers them one after another. Under control of the key, the letters and numbers of each block
are scrambled no fewer than 16 times, resulting in eight characters of ciphertext.
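To make the description concrete, here is a toy single-key cipher in Python (added for illustration; it is NOT DES and offers no real security). Like the scheme described above, it splits the text into eight-character blocks and scrambles each block sixteen times under an eight-number key, and the very same key undoes the scrambling.

    KEY = bytes([57, 12, 101, 8, 33, 90, 4, 76])   # eight numbers between 0 and 127

    def scramble(block):
        for _ in range(16):                                   # 16 rounds
            block = bytes(b ^ k for b, k in zip(block, KEY))  # mix in the key
            block = block[1:] + block[:1]                     # shuffle positions
        return block

    def unscramble(block):
        for _ in range(16):
            block = block[-1:] + block[:-1]                   # undo the shuffle
            block = bytes(b ^ k for b, k in zip(block, KEY))  # remove the key
        return block

    text = "ATTACK AT DAWN  ".encode("ascii")                 # padded to 8-byte blocks
    cipher = b"".join(scramble(text[i:i+8]) for i in range(0, len(text), 8))
    plain = b"".join(unscramble(cipher[i:i+8]) for i in range(0, len(cipher), 8))
    print(cipher.hex())                                       # unreadable ciphertext
    print(plain.decode("ascii"))                              # -> ATTACK AT DAWN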
As good as the DES is, obsolescence will almost certainly overtake it. The life span of
encryption systems tends to be short; the older and more widely used a cipher is, the higher the
potential payoff if it is cracked, and the greater the likelihood that someone has succeeded.
An entirely different approach to encryption, called the 2-key or public-key system,
simplifies the problem of key distribution and management. This approach to cryptography
eliminates the need for subscribers to share keys that must be kept confidential. In a public-key
system, each subscriber has a pair of keys. One of them is the so-called public key, which is freely
available to anyone who wishes to communicate with its owner. The other is a secret key, known
only to its owner. Though either key can be used to encipher or to decipher data encrypted with
its mate, in most instances, the public key is employed for encoding, and the private key for
decoding. Thus, anyone can send a secret message to anyone else by using the addressee's public
key to encrypt its contents. But only the recipient of the message can make sense of it, since only
that person has the private key.
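The two-key idea can be illustrated with a toy example (added here; the primes are tiny and insecure, nothing like the 200-digit keys described above): anyone may encipher with the public pair of numbers, but only the holder of the private exponent can decipher.

    p, q = 61, 53
    n = p * q                       # 3233, shared by both keys
    phi = (p - 1) * (q - 1)         # 3120
    e = 17                          # public exponent: the public key is (e, n)
    d = pow(e, -1, phi)             # 2753, private exponent: the private key is (d, n)

    def encrypt(number, public):            # anyone may use the public key
        e, n = public
        return pow(number, e, n)

    def decrypt(number, private):           # only the owner holds the private key
        d, n = private
        return pow(number, d, n)

    secret = 65                              # a message encoded as a number smaller than n
    ciphertext = encrypt(secret, (e, n))
    print(ciphertext)                        # 2790, meaningless without the private key
    print(decrypt(ciphertext, (d, n)))       # 65 again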
One public-key cryptosystem is called PGP, for Pretty Good Privacy. Designed by Phil Zimmerman, this program is freely distributed so that the public can be sure that whatever communications they pass are practically unbreakable.
PGP generates a public and private key for the user using the RSA technique. The data is
then encrypted and decrypted with the IDEA algorithm - which is similar to the DES, but the
work factor to decode the encrypted message by brute force is much higher than what the DES
could provide. The reason RSA is used only on the keys is that RSA takes a very long time to encrypt an entire document, whereas using RSA on just the keys takes a mere fraction of the time.
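A toy sketch (added for illustration; real PGP uses RSA and IDEA, not this) of the hybrid arrangement just described: a fast single-key cipher protects the long document, while the slow two-key cipher protects only the short session key.

    import os

    # toy public/private pair (same tiny, insecure numbers as the earlier RSA sketch)
    e, d, n = 17, 2753, 3233

    def xor_cipher(data, key):                       # fast stand-in for the bulk cipher
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    document = b"Meet me at the usual place at noon. " * 100   # a long message

    session_key = os.urandom(2)                      # short, random single key
    locked_doc = xor_cipher(document, session_key)   # cheap even for big files
    locked_key = [pow(b, e, n) for b in session_key] # expensive, but only two numbers

    # the recipient reverses the two steps with the private key
    recovered_key = bytes(pow(c, d, n) for c in locked_key)
    print(xor_cipher(locked_doc, recovered_key) == document)   # True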
At this time, Zimmerman is being charged by the US government for his effort in
developing the PGP. The government considers encryption as a weapon, and they have
established regulations controlling or prohibiting the export of munitions. Since the PGP is a
powerful encryption program, it is considered and can be used as a powerful weapon and may be
a threat to national security.
On the Internet, it is clear that many people all over the world are against the US
government's effort to limit the PGP's encryption capabilities, and their reason is that the ban
infringes on the people's right to privacy.
The PGP must not be treated only as a weapon, for it has analogies that have nothing to do with wartime. One of them is authentication. The two-key cryptosystem is designed with authentication in mind: using someone's public key to encrypt enables only the owner of the private key to decrypt the same message. In the real world, we use our own signature to prove our identity when signing cheques or contracts. There exist retina scanners that check the blood vessels in our eyes, as well as fingerprint analysis devices. These use our physical characteristics to prove our identity. A digital signature generated by a public key cryptosystem is much harder to counterfeit because of the mathematics of factoring - which is an advantage over conventional methods of proving our identity.
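A toy illustration of the digital-signature idea (added here, again with tiny, insecure numbers and a crude stand-in for a real hash function): the private key signs, anyone can check the signature with the public key, and altering the message makes the check fail.

    e, d, n = 17, 2753, 3233                         # toy public/private pair

    def digest(message):
        return sum(message.encode("ascii")) % n      # stand-in for a real hash

    def sign(message, private_d):
        return pow(digest(message), private_d, n)    # only the key's owner can do this

    def verify(message, signature, public_e):
        return pow(signature, public_e, n) == digest(message)

    contract = "I agree to pay $100."
    signature = sign(contract, d)
    print(verify(contract, signature, e))                   # True: genuine
    print(verify("I agree to pay $1000.", signature, e))    # False: altered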
Another analogy the PGP has with the real world is the need for security. Banks and
corporations employ a trusted courier - in the form of an armoured truck or a guard - to transfer
sensitive documents or valuables. However, this is expensive for civilian purposes, and the PGP
provides the same or better security when securing civilian information.
While many argue that limiting the PGP's abilities is against the people's right to privacy,
the PGP must also be seen as a necessity as we enter the Information Age. There is currently
little or no practical and inexpensive way to secure digital information for civilians, and the PGP is
an answer to this problem.
Computer privacy must not be treated differently from any other method of keeping documents private. Rather, we must consider the computer a tool and use it as an extension of society's evolution. Clearly, the techniques we employ for computer privacy, such as encryption, secure transfers and authentication, closely mirror past, non-criminal efforts at privacy.
The government is putting more pressure against the distribution of PGP outside of the United States. One of their main reasons is that, since it is freely distributed, it can be modified in such a way that even the vast computational resources of the US government cannot break a PGP-secured message. The government could also reason that the PGP can provide criminal organizations a means of secure communication and storage of records of their activities, and thus make law enforcement's job of tracking criminals down and proving them guilty much harder.
Also, we must never forget one of our basic human rights - one that many laid down their lives for - freedom. We have the freedom to do anything we wish that is within the law. The
government is now attempting to pass a bill promoting a single algorithm to encrypt and decrypt
all data that belongs to its citizens. A multitude of people around the world are opposed to this
concept, arguing that it is against their freedom and their privacy.
f:\12000 essays\sciences (985)\Computer\The Office of Tomorrow.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Office of Today
In an increasing number of companies, traditional office space is giving way to community areas and empty chairs as employees work from home, from their cars or from virtually anywhere. Advanced technologies and progressive HR strategies make these alternative offices possible.
Imagine it's 2 o'clock on a Wednesday afternoon. Inside the dining room of one of many offices nationwide, Joe Smith, manager of HR, is downing a sandwich and soda while wading through phone and E-mail messages. In front of him, a computer equipped with a fax-modem is plugged into a special port on the dining table. The contents of his briefcase are spread on the table. As he sifts through a stack of paperwork and types responses into the computer, he periodically picks up a cordless phone and places a call to a colleague or associate. As he talks, he sometimes wanders across the room.
To be sure, this isn't your ordinary corporate environment. Smith doesn't have a permanent desk or workspace, nor his own telephone. When he enters the ad agency's building, he checks out a portable Macintosh computer and a cordless phone and heads off to whatever nook or cranny he chooses. It might be the company library, or a common area under a bright window. It could even be the dining room or Student Union, which houses punching bags, televisions and a pool table. Wherever he goes, a network forwards mail and phone pages to him and a computer routes calls, faxes and E-mail messages to his assigned extension. He simply logs onto the firm's computer system and accesses his security-protected files.
He is not tethered to a specific work area nor forced to function in any predefined way. Joe Smith spends mornings, and even sometimes an entire day, connected from home via sophisticated voicemail and E-mail systems, as well as a pager. His work is process and task-oriented. As long as he gets everything done, that's what counts. Ultimately, his productivity is greater and his job-satisfaction level is higher. And for somebody trying to get in touch with him, it's easy. Nobody can tell that Joe might be in his car or sitting at home reading a stack of resumes in his pajamas. The call gets forwarded to him wherever he's working.
You've just entered the vast frontier of the virtual office-a universe in which leading-edge technology and new concepts redefine work and job functions by enabling employees to work from virtually anywhere. The concept allows a growing number of companies to change their workplaces in ways never considered just a few years ago. They're scrapping assigned desks and conventional office space to create a bold new world where employees telecommute, function on a mobile basis or use satellite offices or communal work areas that are free of assigned spaces and personal knickknacks.
IBM, AT&T, Travelers Corporation, Pacific Bell, Panasonic, Apple Computer and J.C. Penney are among the firms embracing the virtual-office concept. But they're just a few. The percentage of U.S. companies that have work-at-home programs alone has more than doubled in the past five years, from 7% in 1988 to 18% today. In fact, New York-based Link Resources, which tracks telecommuting and virtual-office trends, has found that 7.6 million Americans now telecommute-a figure that's expected to swell to 25 million by the year 2000. And if you add mobile workers-those who use their cars, client offices, hotels and satellite work areas to get the job done-there are an estimated 1 million more virtual workers.
Both companies and employees are discovering the benefits of virtual arrangements. Businesses that successfully incorporate them are able to slash real-estate costs and adhere to stringent air-quality regulations by curtailing traffic and commuters. They're also finding that by being flexible, they're more responsive to customers, while retaining key personnel who otherwise might be lost to a cross-country move or a newborn baby. And employees who successfully embrace the concept are better able to manage their work and personal lives. Left for the most part to work on their own terms, they're often happier, as well as more creative and productive.
Of course, the basic idea of working away from the office is nothing new. But today, high-speed notebook computers, lightning-fast data modems, telephone lines that provide advanced data-transmission capabilities, portable printers and wireless communication are starting a quiet revolution. As a society, we're transforming the way we work and what's possible. It's creating tremendous opportunities, but it also is generating a great deal of stress and difficulty. There are tremendous organizational changes required to make it work. As markets have changed-as companies have downsized, streamlined and restructured-many have been forced to explore new ways to support the work effort. The virtual office, or alternative office, is one of the most effective strategies for dealing with these changes.
Of course, the effect of alternative officing on the HR function is great. HR must change the way it hires, evaluates employees and terminates them. It must train an existing work force to fit into a new corporate model. There are issues involving benefits, compensation and liability. And, perhaps most importantly, there's the enormous challenge of holding the corporate culture together-even if employees no longer spend time socializing over the watercooler or in face-to-face meetings. When a company makes a commitment to adopt a virtual-office environment-whether it's shared work-space or basic telecommuting-it takes time for people to acclimate and adjust. If HR can't meet the challenge, and employees don't buy in, then the program is destined to fail.
Virtual offices break down traditional office walls. Step inside one and you quickly see how different an environment the concept has created. Gone are the cubicles in which employees used to work. In their place are informal work carrels and open areas where any employee-whether it's the CEO or an administrative assistant-can set up shop. Teams may assemble and disperse at any given spot, and meetings and conferences happen informally wherever it's convenient. Only a handful of maintenance workers, phone operators and food-services personnel, whose flexibility is limited by their particular jobs, retain any appearance of a private workspace.
Equally significant is the fact that at any given hour of any day, as many as one-third of the salaried work force aren't in the office. Some are likely working at a client's site, others at home or in a hotel room on the road. The feeling is that employees of virtual offices are self-starters. The work environment is designed around the concept that one's best thinking isn't necessarily done at a desk or in an office. Sometimes, it's done in a conference room with several people. Other times it's done on a ski slope or while driving to a client's office. Founders of the concept wanted to eliminate the boundaries about where people are supposed to think. They wanted to create an environment that was stimulating and rich in resources. Employees decide on their own where they will work each day, and are judged on work produced rather than on hours put in at the office.
One company that has jumped headfirst into the virtual-office concept is the Midwest division of Armonk, New York-based International Business Machines. The regional business launched a virtual-office work model in the spring of 1993 and expects 2,500 of its 4,000 employees-salaried staff from sales, marketing, technical and customer service, including managers-to be mobile by the beginning of 1995. Its road workers, equipped with IBM ThinkPad computers, fax-modems, E-mail, cellular phones and a combination of proprietary and off-the-shelf software, use their cars, client offices and homes as work stations. When they do need to come into an office-usually once or twice a week-they log onto a computer that automatically routes calls and faxes to the desk at which they choose to sit.
So far, the program has allowed Big Blue's Midwest division to reduce real-estate space by nearly 55%, while increasing the ratio of employees to workstations from 4-to-1 to almost 10-to-1. More importantly, it has allowed the company to harness technology that allows employees to better serve customers and has raised the job-satisfaction level of workers. A recent survey indicated that 83% of the region's mobile work force wouldn't want to return to a traditional office environment.
IBM maintains links with the mobile work force in a variety of ways. All employees access their E-mail and voicemail daily; important messages and policy updates are broadcast regularly into the mailboxes of thousands of workers. When the need for teleconferencing arises, it can put hundreds of employees on the line simultaneously. Typically, the organization's mobile workers link from cars, home offices, hotels, even airplanes.
Virtual workers are only a phone call away. To be certain, telephony has become a powerful driver in the virtual-office boom. Satellites and high-tech telephone systems, such as ISDN phone lines, allow companies to zap data from one location to another at light speed. Organizations link to their work force and hold virtual meetings using tools such as video-conferencing. Firms grab a strategic edge in the marketplace by providing workers with powerful tools to access information.
Consider Gemini Consulting, a Morristown, New Jersey-based firm that has 1,600 employees spread throughout the United States and beyond. A sophisticated E-mail system allows employees anywhere to access a central bulletin board and data base via a toll-free phone number. Using Macintosh Powerbook computers and modems, they tap into electronic versions of The Associated Press, Reuters and The Wall Street Journal, and obtain late-breaking news and information on clients, key subjects, even executives within client companies. And that's just the beginning. Many of the firm's consultants have Internet addresses, and HR soon will begin training its officeless work force via CD-ROM. It will mail disks to workers, who will learn on their own schedule using machines the firm provides. The bottom line of this technology? Gemini can eliminate the high cost of flying consultants into a central location for training.
Today, the technology exists to break the chains of traditional thought and the typical way of doing things. It's possible to process information and knowledge in dramatically different ways than in the past. That can mean that instead of one individual or a group handling a project from start to finish, teams can process bits and pieces. They can assemble and disassemble quickly and efficiently.
Some companies, such as San Francisco-based Pacific Bell, have discovered that providing telecommuters with satellite offices can further facilitate efficiency. The telecommunications giant currently has nearly 2,000 managers splitting time between home and any of the company's offices spread throughout California. Those who travel regularly or prefer not to work at home also can drop into dozens of satellite facilities that each are equipped with a handful of workstations. At these centers, they can access exclusive data bases, check E-mail and make phone calls.
Other firms have pushed the telecommuting concept even further. One of them is Great Plains Software, a Fargo, North Dakota-based company that produces and markets PC-based accounting programs. Despite its remote location, the company retains top talent by being flexible and innovative. Some of its high-level managers live and work in such places as Montana and New Jersey. Even its local employees may work at home a few days a week.
Lynne Stockstad's situation at Great Plains demonstrates how a program that allows for flexible work sites can benefit both employer and worker. The competitive-research specialist had spent two years at Great Plains when her husband decided to attend chiropractic college in Davenport, Iowa. At most firms, that would have prompted Stockstad to resign-something that also would have cost the company an essential employee. Instead, Stockstad and Great Plains devised a system that would allow her to telecommute from Iowa and come to Fargo only for meetings when absolutely necessary. Using phone, E-mail, voicemail and fax, she and her work team soon found they were able to link together, and complete work just as efficiently as before. Today, with her husband a recent graduate, Stockstad has moved back to Fargo and has received a promotion.
Great Plains uses similar technology in other innovative ways to build a competitive advantage. For example, it has developed a virtual hiring process. Managers who are spread across the country conduct independent interviews with candidates, and then feed their responses into the company's computer. Later, the hiring team holds a meeting, usually via phone or videoconferencing, to render a verdict. Only then does the firm fly the candidate to Fargo for the final interview.
HR must lay the foundation to support a mobile work force. Just as a cafeteria offers a variety of foods to suit individual taste and preferences, the workplace of the future is evolving toward a model for which alternative work options likely will become the norm. One person may find that telecommuting four days a week is great; another may find that he or she functions better in the office. The common denominator for the organization is: How can we create an environment in which people are able to produce to their maximum capabilities?
Creating such a model and making it work is no easy task, however. Such a shift in resources requires a fundamental change in thinking. And it usually falls squarely on HR's shoulders to oversee the program and hold the organization together during trying times. When a company decides to participate in an alternative officing program, people need to adapt and adjust to the new arrangements. Workers are used to doing things a certain way. Suddenly, their world is being turned upside down.
One of the biggest problems is laying the foundation to support such a system. Often, it's necessary to tweak benefits and compensation, create new job descriptions and methods of evaluation and find innovative ways to communicate. Sometimes, because companies are liable for their workers while they're "on the clock," HR must send inspectors to home offices to ensure they're safe.
When Great Plains Software started its telecommuting program in the late 1980s, it established loose guidelines for employees who wanted to be involved in the program. They pretty much implemented policies on an unscientific basis. Over time, the company has evolved to a far more stringent system of determining who qualifies and how the job is defined.
For example, as with most other companies that embrace the virtual-office concept, Great Plains stipulates that only salaried employees can work in virtual offices because of the lack of a structured time schedule and the potential for working more than eight hours a day. Those employees who want to telecommute must first express how the decision will benefit the company, the department and themselves. Only those who can convince a hiring manager that they meet all three criteria move on to the next stage.
Potential telecommuters then must define how they'll be accountable and responsible in the new working model.
Finally, once performance standards and guidelines have been created, Great Plains presents two disclaimers to those going virtual. If their performance falls below certain predetermined standards, management will review the situation to determine whether it's working. And if the position changes significantly and it no longer makes sense to telecommute, management will have to reevaluate.
Other companies have adopted similar checks and balances. They are training HR advisers to make accommodations for the individual, but to not make accommodations for the person's job responsibilities.
IBM provides counseling from behavioral scientists and offers ongoing assistance to those having trouble adapting to the new work model. By closely monitoring preestablished sales and productivity benchmarks, managers quickly can determine if there's a problem. So far, only approximately 10% to 15% of its mobile work force has required counseling, and only a handful of employees have had to be reassigned.
Virtual workers need guidance from HR. Not everyone is suited to working in a virtual-office environment. Not only must workers who go mobile or work at home learn to use the technology effectively, but they also must adjust their workstyle and lifestyle. The more you get connected, the harder it is to disconnect. At some point, the boundaries between work and personal life blur. Without a good deal of discipline, the situation can create a lot of stress.
Managers often fear that employees will not get enough work done if they can't see them. Most veterans of the virtual office, however, maintain that the exact opposite is true. All too often, employees wind up fielding phone calls in the evening or stacking an extra hour or two on top of an eight-hour day. Not surprisingly, that can create an array of problems, including burnout, errors and marital conflict.
IBM learned early on that it has to teach employees to remain in control of the technology and not let it overrun their lives. One of the ways it achieves the goal is to provide its mobile work force with two-line telephones. That way, employees can recognize calls from work, switch the ringer off at the end of the workday and let the voicemail system pick up calls.
Another potential problem with which virtual employees must deal is handling all the distractions that can occur at home. As a result, many firms provide workers with specific guidelines for handling work at home. It is expected that those who work at home will arrange child care or elder care. And although management recognizes there are times when a babysitter falls through or a problem occurs, if someone's surrounded by noisy children, it creates an impression that the individual isn't working or is distracted.
Still, most say that problems aren't common. The majority of workers adjust and become highly productive in an alternative office environment. The most important thing for a company to do is lay out guidelines and suggestions that help workers adapt.
At many firms, including IBM, HR now is providing booklets that cover a range of topics, including time management and family issues. Many companies also send out regular mailings that not only provide tips and work strategies but also keep employees informed of company events and keep them ingrained in the corporate culture.
This type of correspondence also helps alleviate workers' fears of isolation. IBM goes one step further by providing voluntary outings, such as to the Indianapolis 500, for its mobile work force. Even without these events, virtual workers' isolation fears often are unproven. The level of interaction in a virtual office actually can be heightened and intensified. Because workers aren't in the same place every day, they may be exposed to a wider range of people and situations. And that can open their eyes and minds to new ideas and concepts.
However, dismantling the traditional office structure can present other HR challenges. One of the most serious can be dealing with issues of identity and status. Workers who've toiled for years to earn a corner office suddenly can find themselves thrown into a universal work pod. Likewise, photographs and other personal items often must disappear as workspace is shared. But solutions do exist. For instance, when IBM went mobile, top executives led by example. They immediately cleared out their desks and began plugging in at common work pods.
Not surprisingly, one of the most difficult elements in creating a virtual office is dealing with this human side of the equation. The human factor can send shock waves reverberating through even the most sober organization.
This challenge requires HR to become an active business partner. That means working with other departments, such as real estate, finance and information technology. It means creating the tools to make a virtual office work. In some cases, that may require HR to completely rewrite a benefits package to include a $500- or $1,000-a-month stipend for those working at home. That way, the company saves money on real-estate and relocation costs, while the employee receives an incentive that can be used to furnish a home office.
Management also must change the way supervisors evaluate their workers. Managers easily can fall into the trap of thinking that only face-to-face interaction is meaningful and may pass over mobile workers for promotions. Great Plains has gone to great lengths to ensure that its performance-evaluation system functions in a virtual environment. The company asks its managers to conduct informal reviews quarterly with telecommuting employees, and formal reviews every six months. By increasing the interaction and discussion, the company has eliminated much of the anxiety for employees and their managers alike, while providing a better gauge of performance. In the final analysis, the system no longer measures good citizenship and attendance, but how much work people actually get done and how well they do it.
Still, many experts point out that too much reliance on voicemail and E-mail can present problems. Although instantaneous messaging is convenient and efficient, it can overload virtual workers with too much information and not enough substance. Without some human interaction it's impossible to build relationships and a sense of trust within an organization. Handled well, though, sending workers offsite can still boost productivity while saving costs.
Those who have embraced the virtual office say that it's a concept that works. At Pacific Bell, which began experimenting with telecommuting during the 1984 Summer Olympics in Los Angeles, employees routinely have reported 100% increases in productivity. Equally important: employees say the arrangement accommodates family and flexibility concerns, and they enjoy working for the company more than ever before.
Although the final results aren't yet in, IBM's mobile work force reports a 10% boost in morale and appears to be processing more work, more efficiently. What's more, its customers have so far reported highly favorable results. People are happier and more productive because they can have breakfast with their family before they go off to client meetings. They can go home and watch their child's soccer game and then do work in the evening. They no longer are bound by a nine-to-five schedule. The only criterion is that they deliver results.
Society is on the frontier of a fundamental change in the way the workplace is viewed and how work is handled. In the future, it will become increasingly difficult for traditional companies to compete against those embracing the virtual office. Companies that embrace the concept are sending out a loud message. They're making it clear that they're interested in their employees' welfare, that they're seeking a competitive edge, and that they aren't afraid to rethink their work force for changing conditions. Those are the ingredients for future success.
f:\12000 essays\sciences (985)\Computer\The Origins of the Computer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Roman Empire, founded by Augustus Caesar in 27 B.C. and lasting in Western
Europe for 500 years, reorganized world politics and economics. Almost the entirety of the
civilized world became a single centralized state. In place of Greek democracy, piety, and
independence came Roman authoritarianism and practicality. Vast prosperity resulted. Europe and
the Mediterranean bloomed with trading cities ten times the size of their predecessors with public
amenities previously unheard of: courts, theaters, circuses, and public baths. And these were now
large permanent masonry buildings as were the habitations, tall apartment houses covering whole
city blocks.
This architectural revolution brought about by the Romans required two innovations: the
invention of a new building method called concrete vaulting and the organization of labor and
capital on a large scale so that huge projects could be executed quickly after the plans of a single
master architect.
Roman concrete was a fluid mixture of lime and small stones poured into the hollow
centers of walls faced with brick or stone and over curved wooden molds, or forms, to span
spaces as vaults. The Mediterranean is an active volcanic region, and a spongy, light, tightly
adhering stone called pozzolana was used to produce a concrete that was both light and extremely
strong.
The Romans had developed pozzolana concrete about 100 B.C. but at first used it only for terrace walls and foundations. It apparently was the emperor Nero who first used the material on a grand scale to rebuild a region of the city of Rome around his palace, the expansive Domus Aurea, after the great fire of AD 64, which he was said to have set. Here broad streets, regular blocks
of masonry apartment houses, and continuous colonnaded porticoes were erected according to a
single plan and partially at state expense. The Domus Aurea itself was a labyrinth of concrete
vaulted rooms, many in complex geometric forms. An extensive garden with a lake and forest
spread around it.
The architect Severus seems to have been in charge of this great project. Emperors and
emperors' architects succeeding Nero and Severus continued and expanded their work of
rebuilding and regularizing Rome. Vespasian (emperor AD 69-79) began the Colosseum, of which I have a bad model. Built by prisoners from the Jewish wars, the 50,000-seat Colosseum is one of the most interesting architectural feats of Rome. At its opening in A.D. 80 the Colosseum was flooded by diverting the Tiber river about 10 kilometers to reenact a naval battle with over
3,000 participants. Domitian (81-96) rebuilt the Palatine Hill as a huge palace of vaulted concrete
designed by his architect Rabirius. Trajan (98-117) erected the expansive forum that bears his
name (designed by his architect Apollodorus) and a huge public bath. Hadrian (117-138) who
served as his own architect, built the Pantheon as well as a villa the size of a small city for himself
at Tivoli. Later Caracalla (211-217) and Diocletian (284-305) erected two mammoth baths that
bear their names, and Maxentius (306-312) built a huge vaulted basilica, now called the Basilica
of Constantine.
The Baths of Caracalla have long been accepted as a summation of Roman culture and
engineering. It is a vast building, 360 by 702 feet (110 by 214 meters), set in 50 acres (20
hectares) of gardens. It was one of a dozen establishments of similar size in ancient Rome devoted
to recreation and bathing. There were a 60- by 120-foot (18- by 36-meter) swimming pool, hot
and cold baths, gymnasia, a library, and game rooms. These rooms were of various geometric
shapes. The walls were thick, with recesses, corridors, and staircases cut into them. The building
was entirely constructed of concrete with barrel, groined, and domical vaults spanning as far as 60
feet (18 meters) in many places. Inside, all the walls were covered with thin slabs of colored
marble or with painted stucco. The decorative forms of this coating were derived from Greek architecture.
The rebuilding of Rome set a pattern copied all over the empire. Nearby, the ruins of
Ostia, Rome's port (principally constructed in the 2nd and 3rd centuries AD), reflect that model.
Farther away it reappears at Trier in northwestern Germany, at Autun in central France, at
Antioch in Syria, and at Timgad and Leptis Magna in North Africa. When political disintegration
and barbarian invasions disrupted the western part of the Roman Empire in the 4th century AD,
new cities were founded and built in concrete during short construction campaigns: Ravenna, the
capital of the Western Empire from 492-539, and Constantinople in Turkey, where the seat of the
empire was moved by Constantine in 330 and which continued thereafter to be the capital of the
Eastern, or Byzantine, Empire.
Christian Rome. One important thing had changed by the time of the founding of Ravenna
and Constantinople; after 313 this was the Christian Roman Empire. The principal challenge to
the imperial architects was now the construction of churches. These churches were large vaulted
enclosures of interior space, unlike the temples of the Greeks and the pagan Romans that were
mere statue-chambers set in open precincts. The earliest imperial churches in Rome, like the first
church of St. Peter's erected by Constantine from 333, were vast barns with wooden roofs
supported on lines of columns. They resembled basilicas, which had carried on the Hellenistic
style of columnar architecture. Roman concrete vaulted construction was used in certain cases,
for example, in the tomb church in Rome of Constantine's daughter, Santa Costanza, of about
350. In the church of San Vitale in Ravenna, erected in 526-547, this was expanded to the scale of
a middle-sized church. Here a domed octagon 60 feet (18 meters) across is surrounded by a
corridor, or aisle, and balcony 30 feet (9 meters) deep. On each side a semicircular projection
from the central space pushes outward to blend these spaces together.
f:\12000 essays\sciences (985)\Computer\The Power On Self Test.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Power On Self Test
When the system is powered on, the BIOS will perform diagnostics and initialize system components, including the video system.
(This is evident when the screen first flickers before the video card header is displayed.)
This is commonly referred to as the POST (Power-On Self Test).
Afterwards, the computer proceeds to its final boot-up stage by calling the operating system.
Just before that, the user may interrupt to have access to SETUP.
To allow the user to alter the CMOS settings, the BIOS provides a little program, SETUP.
Usually, setup can be entered by pressing a special key combination (DEL, ESC, CTRL-ESC, or CTRL-ALT-ESC)
at boot time (Some BIOSes allow you to enter setup at any time by pressing CTRL-ALT-ESC).
The AMI BIOS setup is usually entered by pressing the DEL key after resetting (CTRL-ALT-DEL) or powering up the computer.
You can bypass the extended CMOS settings by holding the key down during boot-up. This is really helpful,
especially if you bend the CMOS settings right out of shape and the computer won't boot properly
anymore. This is also a handy tip for people who play with the older AMI BIOSes with the XCMOS setup.
It allows changes directly to the chip registers with very little technical explanation.
A Typical BIOS POST Sequence
Most BIOS POST sequences occur in four stages:
1. Display some basic information about the video card like its brand, video BIOS version and video memory available.
2. Display the BIOS version and copyright notice in the upper middle of the screen. You will see a large sequence of numbers at the bottom of the screen. This sequence is the BIOS identification string.
3. Display memory count. You will also hear tick sounds if you have enabled it (see Memory Test Tick Sound section).
4. Once the POST has succeeded and the BIOS is ready to call the operating system (DOS, OS/2, NT, WIN95, etc.), you will see a basic table of the system's configuration:
· Main Processor: The type of CPU identified by the BIOS. Usually Cx386DX, Cx486DX, etc.
· Numeric Processor: Present if you have an FPU, or None otherwise. If you have an FPU and the BIOS does not recognize it, see the Numeric Processor Test section in Advanced CMOS Setup.
· Floppy Drive A: The drive A type. See section Floppy drive A in Standard CMOS Setup to alter this setting.
· Floppy Drive B: Idem.
· Display Type: See section Primary display in Standard CMOS Setup.
· AMI or Award BIOS Date: The revision date of your BIOS. Useful to mention when you have compatibility problems with adaptor cards (notably fancy ones).
· Base Memory Size: The number of KB of base memory. Usually 640.
· Ext. Memory Size: The number of KB of extended memory.
In the majority of cases, the summation of base memory and extended memory does not equal the total system memory.
For instance in a 4096 KB (4MB) system, you will have 640KB of base memory and 3072KB of extended memory, a total of 3712KB.
The missing 384KB is reserved by the BIOS, mainly as shadow memory (see Advanced CMOS Setup).
· Hard Disk C: Type: The master HDD number. See Hard disk C: type section in Standard CMOS Setup.
· Hard Disk D: Type: The slave HDD number. See Hard disk D: type section in Standard CMOS Setup.
· Serial Port(s): The hex numbers of your COM ports. 3F8 and 2F8 for COM1 and COM2.
· Parallel Port(s): The hex numbers of your LPT ports. 378 for LPT1.
· Other information: Right under the table, BIOS usually displays the size of cache memory.
Common sizes are 64KB, 128KB or 256KB. See External Cache Memory section in Advanced CMOS Setup.
AMI BIOS POST Errors
During the POST routines, which are performed each time the system is powered on, errors may occur.
Non-fatal errors are those which, in most cases, allow the system to continue the boot up process.
The error messages normally appear on the screen.
Fatal errors are those which will not allow the system to continue the boot-up procedure.
If a fatal error occurs, you should consult with your system manufacturer or dealer for possible repairs.
These errors are usually communicated through a series of audible beeps.
The numbers on the fatal error list correspond to the number of beeps for the corresponding error.
All errors listed, with the exception of #8, are fatal errors. All errors found by the BIOS will be forwarded to the I/O port 80h.
· 1 beep: DRAM refresh failure. The memory refresh circuitry on the motherboard is faulty.
· 2 beeps: Parity Circuit failure. A parity error was detected in the base memory (first 64k Block) of the system.
· 3 beeps: Base 64K RAM failure. A memory failure occurred within the first 64k of memory.
· 4 beeps: System Timer failure. Timer #1 on the system board has failed to function properly.
· 5 beeps: Processor failure. The CPU on the system board has generated an error.
· 6 beeps: Keyboard Controller 8042-Gate A20 error. The keyboard controller (8042) contains the gate A20 switch which allows the computer to operate in virtual mode.
This error message means that the BIOS is not able to switch the CPU into protected mode.
· 7 beeps: Virtual Mode (processor) Exception error. The CPU on the motherboard has generated an Interrupt Failure exception interrupt.
· 8 beeps: Display Memory R/W Test failure. The system video adapter is either missing or its memory is faulty (Read/Write error). This is not a fatal error.
· 9 beeps: ROM-BIOS Checksum failure. The ROM checksum value does not match the value encoded in the BIOS. This is a good indication that the BIOS ROMs went bad.
· 10 beeps: CMOS Shutdown Register Read/Write error. The shutdown register for the CMOS memory has failed.
· 11 beeps: Cache Error / External Cache Bad. The external cache is faulty.
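As an illustration of how these beep codes might be used, here is a small, hypothetical C sketch (not AMI code) that maps a beep count heard at power-on to the corresponding description from the list above:

    /* Sketch: mapping AMI fatal-error beep counts to descriptions,
       taken from the list above. Illustrative only. */
    #include <stdio.h>

    static const char *beep_msg[] = {
        NULL,                                           /* 0  - unused  */
        "DRAM refresh failure",                         /* 1  beep      */
        "Parity circuit failure",                       /* 2  beeps     */
        "Base 64K RAM failure",                         /* 3  beeps     */
        "System timer failure",                         /* 4  beeps     */
        "Processor failure",                            /* 5  beeps     */
        "Keyboard controller / Gate A20 error",         /* 6  beeps     */
        "Virtual mode exception error",                 /* 7  beeps     */
        "Display memory R/W test failure (non-fatal)",  /* 8  beeps     */
        "ROM BIOS checksum failure",                    /* 9  beeps     */
        "CMOS shutdown register error",                 /* 10 beeps     */
        "Cache error / external cache bad"              /* 11 beeps     */
    };

    int main(void)
    {
        int beeps = 6;   /* example: six beeps heard at power-on */
        if (beeps >= 1 && beeps <= 11)
            printf("%d beep(s): %s\n", beeps, beep_msg[beeps]);
        return 0;
    }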
Other AMI BIOS POST Codes
· 2 short beeps: POST failed. This is caused by a failure of one of the hardware testing procedures.
· 1 long & 2 short beeps: Video failure. This is caused by one of two possible hardware faults. 1) Video BIOS ROM failure, checksum error encountered. 2) The video adapter installed has a horizontal retrace failure.
· 1 long & 3 short beeps: Video failure. This is caused by one of three possible hardware problems. 1) The video DAC has failed. 2) the monitor detection process has failed. 3) The video RAM has failed.
· 1 long beep: POST successful. This indicates that all hardware tests were completed without encountering errors.
If you have access to a POST card reader (Jameco, etc.), you can watch the system perform each test by the value that's displayed.
If/when the system hangs (if there's a problem), the last value displayed will give you a good idea of where and what went wrong, or what's bad on the system board. Of course, having a description of those codes would be helpful,
and different BIOSes have different meanings for the codes. (Could someone point out FTP sites where we could have access to a complete list of error codes for different versions of AMI and Award BIOSes?)
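To make the port-80h mechanism more concrete, the following is a hedged sketch of how a program might emit a checkpoint value to that port; it assumes a Linux/x86 environment with the glibc ioperm() and outb() calls and root privileges, whereas the real BIOS writes these codes long before any operating system is loaded, and the checkpoint value used here is invented:

    /* Sketch only: emitting a POST-style code to I/O port 80h, which a
       plug-in POST card then displays. Requires root on Linux/x86. */
    #include <stdio.h>
    #include <sys/io.h>                   /* ioperm(), outb() - glibc, x86 */

    int main(void)
    {
        unsigned char post_code = 0x31;   /* hypothetical checkpoint value */

        if (ioperm(0x80, 1, 1) != 0) {    /* request access to port 80h    */
            perror("ioperm");
            return 1;
        }
        outb(post_code, 0x80);            /* a POST card would show "31"   */
        printf("Wrote POST code 0x%02X to port 80h\n", post_code);
        return 0;
    }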
BIOS Error Messages
This is a short list of the most frequent on-screen BIOS error messages. Your system may show them in a different manner. When you see any of these, you are in trouble - Doh! (Does anyone have any additions or corrections?)
· "8042 Gate - A20 Error": Gate A20 on the keyboard controller (8042) is not working.
· "Address Line Short!": Error in the address decoding circuitry.
· "Cache Memory Bad, Do Not Enable Cache!": Cache memory is defective.
· "CH-2 Timer Error": There is an error in timer 2. Several systems have two timers.
· "CMOS Battery State Low" : The battery power is getting low. It would be a good idea to replace the battery.
· "CMOS Checksum Failure" : After CMOS RAM values are saved, a checksum value is generated for error checking. The previous value is different from the current value.
· "CMOS System Options Not Set": The values stored in CMOS RAM are either corrupt or nonexistent.
· "CMOS Display Type Mismatch": The video type in CMOS RAM is not the one detected by the BIOS.
· "CMOS Memory Size Mismatch": The physical amount of memory on the motherboard is different than the amount in CMOS RAM.
· "CMOS Time and Date Not Set": Self evident.
· "Diskette Boot Failure": The boot disk in floppy drive A: is corrupted (virus?). Is an operating system present?
· "Display Switch Not Proper": A video switch on the motherboard must be set to either color or monochrome.
· "DMA Error": Error in the DMA (Direct Memory Access) controller.
· "DMA #1 Error": Error in the first DMA channel.
· "DMA #2 Error": Error in the second DMA channel.
· "FDD Controller Failure": The BIOS cannot communicate with the floppy disk drive controller.
· "HDD Controller Failure": The BIOS cannot communicate with the hard disk drive controller.
· "INTR #1 Error": Interrupt channel 1 failed POST.
· "INTR #2 Error": Interrupt channel 2 failed POST.
· "Keyboard Error": There is a timing problem with the keyboard.
· "KB/Interface Error": There is an error in the keyboard connector.
· "Parity Error ????": Parity error in system memory at an unknown address.
· "Memory Parity Error at xxxxx": Memory failed at the xxxxx address.
· "I/O Card Parity Error at xxxxx": An expansion card failed at the xxxxx address.
· "DMA Bus Time-out": A device has used the bus signal for more than allocated time (around 8 microseconds).
If you encounter any POST error, there is a good chance that it is a hardware-related problem.
You should at least verify that adaptor cards and other removable components (SIMMs, DRAMs, etc.) are properly seated before calling for help. One common attribute of human nature is to rely on others before investigating the problem yourself.
f:\12000 essays\sciences (985)\Computer\The Telephone System.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The telephone is one of the most creative and prized inventions in the world. It has advanced from its humble beginnings to the wireless communication technology of today and the future. The inhabitants of the earth have long communicated over a distance, which was once done by shouting from one hilltop or tower to another. The word "telephone" originated from a combination of two Greek words: "tele," meaning far off, and "phone," meaning voice or sound, and became the term for "far-speaking."
A basic telephone usually contains a transmitter, which transfers the caller's voice, and a receiver, which amplifies sound from an incoming call. There are two common kinds of transmitters: the carbon transmitter and the electret transmitter. The carbon transmitter uses carbon granules between metal plates called electrodes, one of which consists of a thin diaphragm that moves under pressure from sound waves and transmits that pressure to the carbon granules. The electrodes conduct electricity flowing through the carbon, and the sound waves hitting the diaphragm cause the electrical resistance of the carbon to vary. The electret transmitter is composed of a thin disk of metal-coated plastic held above a thicker, hollow metal disk. The plastic disk is electrically charged and creates an electric field. The sound waves from the caller's voice cause the plastic disk to vibrate, changing the distance between the disks, thus changing the intensity of
the electric field. These variations are translated into an electric current which travels across the telephone lines. The receiver of a telephone is composed of a flat ring of magnetic material. Underneath this magnetic ring is a coil of wire through which the electric current flows. Here, the current and the magnetic field from the magnet cause a diaphragm between the two to vibrate, replicating the sounds that were transformed into electricity.
The telephone is also composed of an alerter and a dial. The alerter, usually known as the ringer, alerts a person to a telephone call; it is triggered by a special frequency of electricity sent when the telephone number is dialed. The dial is the region on the phone where numbers are pushed or dialed. There are two types of dialing systems: the rotary dial and the Touch-Tone. The rotary dial is a movable circular plate with the numbers one to nine, and zero. The Touch-Tone system uses buttons that are pushed, unlike the rotary dial, which sends pulses.
The telephone was said to be invented by many people. However, the first to achieve this success, although by accident, was Alexander Graham Bell. He and his associate were planning to conduct an experiment, when Mr. Bell spilt acid on himself in another room, and his associate clearly heard the first telephone message: "Mr. Watson, come here; I want you." Although Alexander Graham Bell had invented the telephone, his case had to be defended in court more than 600 times for this to be proven.
After the invention of the telephone, many other great technological advances were made, which boosted the telephone into a worldwide affair. The first great advance was the invention of automatic switching. Next, long distance telephone calls were established in small steps. For example, from city to city, across a country, and across
the ocean. Following this came undersea cables and satellites, which made it possible to link points halfway around the earth, sounding as if from next door. Finally, by adding three-digit area codes, all phone calls, whether to next door or around the world, could be dialed directly by the caller.
The first company to establish a telephone industry was the Bell Telephone Company, founded in 1877 by Alexander Graham Bell. This lasted for some time; however, independent telephone companies were started in many cities and small towns. By 1908, many customers were being served by a new company called AT&T, which eventually bought out the Bell Company. Since it was costly to have wires run to a household, many residential customers often shared lines, which is called a party line. Although these lines were cheaper for the customers, they were a nuisance because only one person could use the phone at a time, and other households could listen in on the calls. Finally, the price of local calls was relatively low, while long-distance calls were relatively expensive when compared to the local telephone bill.
Today, approximately 95% of the households across North America have telephones, which is creating a huge opportunity for companies that provide local and long-distance service. Although prices for calls are slowly decreasing, the competition between companies is increasing. This can be seen from advertisements on television and in the newspaper. And not only is this competition going to continue, it will increase as new technology is developed.
What is in store for the future? No one knows. However, some of the latest futuristic ideas that will soon be upon us are television screens that accompany the telephone, so that the caller can see the person he or she is having a conversation with. Also, having all of the copper wire replaced with fiber optics will greatly increase the telephone's capabilities. This will give us the advantage of sending very large pieces of information over the phone line. The only thing that we do know about the telephone is that it has come a long way since its invention by Alexander Graham Bell, a man who will always be remembered.
f:\12000 essays\sciences (985)\Computer\The Unkindest Cut Censorship Online.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
There is a section of the American populace that is slowly slithering into the spotlight after nearly two decades of clandestine existence. Armed with their odd netspeak, mice, glowing monitors, and immediate access to a world of information, both serious and amateur Hackers alike have at last come out of the computer lab and into mainstream pop culture. Since I despise pleading ignorance about anything, I chose to read Mr. McDonald's article because of its minutiae concerning the future of the more amusing aspect of computing: the game. This article is relevant because whether we like it or not, the PC (personal computer) is only going to grow in popularity and use, and the best weapon against the abuse of this new gee-whiz technology is to be educated about it.
It is simply amazing how far gaming has come in the past decade. We have gone from stick figures on a blank screen to interactive movies. The PC is the newest way to play because it has the capability to process and display much more complex games than anything by Nintendo or Sega. Some problems with this, however, are the enormous cost of a decent system and software and the technology that moves at lightning speed. The computer you buy tomorrow will not be able to handle any of the new software two years from now. Owners must not only keep up with the new trends but must also be well aware of what their own system can sustain so that they do not overload it and cause it to crash. This article focuses on interactive video, which is a relatively new field in the gaming industry. The games that have been on the market have not lived up to the bombardment of advertising gamers have been subjected to. The video itself is often choppy and blurry, it rarely enhances the plot of the game, and it has yet to be truly interactive. This is because it is not part of a movie's nature to mingle with the audience. New software consumers should be aware of this before shelling out $60-$80 for an over-hyped game.
This article offers the titles of the few good interactive games that have hit the shelves this year as well as a list of ones to avoid. It also describes several of the video cards (special flat chips that can be inserted into the back of your machine to help it process data) that you would have to purchase to play these games. It does a wonderful job of informing the readers about the games and hardware in terms that even a new gamer (a newbie) would be able to grasp. Often, many computing magazines will use Hacker lingo (netspeak) so frequently that the meaning and fact are lost. The article suggests that avoiding the whole genre for a few years until the industry polishes its product is the best move. From the experiences I have had with computer games of all kinds, I would have to agree.
f:\12000 essays\sciences (985)\Computer\Truth and Lies about Computer Viruses.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Truth and Lies About the Computer Virus
Walk into any computer store today and there will be at least twenty or thirty anti-virus programs on the shelves. From the looks of it, computer viruses have gotten out of hand, and so has the business of stopping them. The computer user must cut through the media hype of apocalyptic viruses and shareware programs and discover the real facts.
Before we even start the journey of exploring the computer virus, we must first eliminate all the "fluff." The computer user needs to understand how information about viruses reaches the public. Someone creates the virus and then infects at least one computer. The virus crashes or ruins the infected computer. An anti-virus company obtains a copy of the virus and studies it. The anti-virus company makes an "unbiased" decision about the virus and then discloses its findings to the public. The problem with the current system is that there are no checks and balances. If the anti-virus company wants to make viruses seem worse, all it has to do is distort the truth. There is no organization that certifies whether or not a virus is real. Even more potentially harmful is that the anti-virus companies could write viruses in order to sell their programs.
Software companies have and do distort the truth about viruses.
"Antivirus firms tend to count even the most insignificant variations of viruses for advertising purposes. When the Marijuana virus first appeared, for example, it contained the word "legalise," but a miscreant later modified it to read "legalize." Any program which detects the original virus can detect the version with one letter changed -- but antivirus companies often count them as "two" viruses. These obscure differentiations quickly add up." http://www.kumite.com/myths/myth005.htm
Incidentally, the Marijuana virus is also called the "Stoned" virus, thereby making it yet another entry on the list of viruses that companies protect your computer against.
I went to the McAfee Anti-virus Web site looking for information on the Marijuana virus but was unable to obtain that information. I was, however, able to get a copy of the top ten viruses from their site. One specific virus is called Junkie:
"Junkie is a multi-partite, memory resident, encrypting virus. Junkie specifically targets .COM files, the DOS boot sector on floppy diskettes and the Master Boot Record (MBR). When initial infection is in the form of a file infecting virus, Junkie infects the MBR or floppy boot sector, disables VSafe (an anti-virus terminate-and-stay-resident program (TSR), which is included with MS-DOS 6.X) and loads itself at Side 0, Cylinder 0, Sectors 4 and 5. The virus does not become memory resident, or infect files at this time. Later when the system is booted from the system hard disk, the Junkie virus becomes memory resident at the top of system memory below the 640K DOS boundary, moving interrupt 12's returns. Once memory resident, Junkie begins infecting .COM files as they are executed, and corrupts .COM files. The Junkie virus infects diskette boot sectors as they are accessed. The virus will write a copy of itself to the last track of the diskette, and then alter the boot sector to point to this code. On high density 5.25 inch diskettes, the viral code will be located on Cylinder 79, Side 1, Sectors 8 and 9."
Junkie's description is that of a basic stealth/Trojan virus of a kind which has been in existence for 10 years. They also listed Anti-exe as one of the top ten viruses but did not acknowledge the fact that it has three aliases. It's no wonder that the general public is confused about computer viruses!
I decided to investigate the whole mis- or disinformation issue a little further. I went to the Data Fellows Web site to see what the distributors of F-prot had to say about viruses. It is no surprise that I found them trying to sell software with the typical scare tactics:
"Quite recently, we read in the newspapers how CIA and NSA (National Security Agency) managed to break into the EU Commission's systems and access confidential information about the GATT negotiations. The stolen information was then exploited in the negotiations. The EU Commission denies the allegation, but that is a common practice in matters involving information security breaches. At the beginning of June, the news in Great Britain told the public about an incident where British and American banks had paid 400 million pounds in ransom to keep the criminals who had broken into their systems from publicizing the systems' weaknesses [London Times, 3.6.1996]. The sums involved are simply enormous, especially since all these millions of pounds bought nothing more than silence. According to London Times, the banks' representatives said that the money had been paid because "publicity about such attacks could damage consumer confidence in the security of their systems". Criminal hackers are probably encouraged by the fact that, in most cases, their victims are not at all eager to report the incidents to the police. And that is not all; assuming that the information reported by London Times is correct, they may even get paid a "fee" for breaking in... a computer is broken into in Internet every 20 seconds... Whatever the truth about these incidents may be, the fact remains that current information systems are quite vulnerable to penetration from outside. As Internet becomes more popular and spreads ever wider, criminals can break into an increasing number of systems easily and without a real risk of being caught."
Then the next paragraph stated:
"Even at their initial stages, Data Fellows Ltd's F-Secure products meet many of these demands. It is the goal of our continuing product development to eventually address all such information security needs." In other words nothing is safe unless you buy their products.
Now that we have cleared the smoke on viruses, we know that there are only roughly 500 basic viruses. These viruses are tweaked, renamed, and recycled.
So, what is a virus? First of all, we must be aware that there is no universally accepted naming practice or discovery method for viruses. Therefore all virus information is subjective and subject to interpretation and constant dispute.
To define a virus we must ask an expert. According to Fred Cohen a computer virus is a computer program that can infect other computer programs by modifying them in such a way as to include a (possibly evolved) copy of itself. This does not mean that a virus has to cause damage because a virus may be written to gather data and obtain hidden files in your system.
Now that you are aware of the hoaxes and misinformation about viruses, you will be better equipped to deal with viral information. The next time you hear of a killer virus, just remember what you have learned. You know that all viruses have the same roots.
f:\12000 essays\sciences (985)\Computer\V chip.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What is a V-chip? This term has become a buzz word for any discussion involving
telecommunications regulation and television ratings, but not too many reports define the
new technology in its fullest form. A basic definition of the V-chip is a microprocessor that can decipher information sent in the vertical blanking interval of the NTSC signal,
purposefully for the control of violent or controversial subject matter. Yet, the span of
the new chip is much greater than any working definition can encompass. A discussion of the
V-chip must include a consideration of the technical and ethical issues, in addition to
examining the constitutionality of any law that might concern standards set by the US
government. Yet in the space provided for this essay, the focus will be the technical
aspects and costs of the new chip. It is impossible to generally assume that the V-chip
will solve the violence problem of broadcast television or that adding this little device
to every set will be a first amendment infringement. We can, however, find clues through
examining the cold facts of broadcast television and the impact of a mandatory regulation
on that free broadcast. One definition of the V-chip comes from Al Marquis of Zilog Technology: "Utilizing the EIA's Recommended Practice for Line 21 Data Service (EIA-608) specification, these chips decode EDS (Extended Data Services) program ratings, compare these ratings to viewer standards, and can be programmed to take a variety of actions, including complete blanking of programs." Neither the FCC nor Capitol Hill has set any standards for
V-chip technology; this has allowed many different companies to construct chips that are
similar yet not identical, and possibly not compatible. Each chip has advantages and disadvantages for the ratings system soon to be developed. For example, some units use onscreen programming, as VCRs and the Zilog product do, while others are considering
set top options. Also, different companies are using different methods of parental control
over the chip.
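As a purely hypothetical sketch of the comparison step Marquis describes (the numeric rating scale, names and thresholds below are invented for illustration and are not taken from the EIA-608 specification), the blocking decision boils down to comparing a decoded program rating against the viewer's stored limit:

    /* Hypothetical sketch of the V-chip comparison step: a rating decoded
       from the vertical blanking interval is checked against the viewer's
       stored limits and the picture is blanked if they are exceeded. */
    #include <stdio.h>

    typedef struct {
        int violence;   /* decoded program rating; higher = more restricted */
        int language;
    } program_rating_t;

    static int parental_limit_violence = 2;   /* set by the viewer in setup */
    static int parental_limit_language = 3;

    static int should_blank(program_rating_t r)
    {
        return r.violence > parental_limit_violence ||
               r.language > parental_limit_language;
    }

    int main(void)
    {
        program_rating_t decoded = { 4, 1 };  /* as if decoded from line 21 */
        if (should_blank(decoded))
            printf("Program exceeds viewer standards: blanking video\n");
        else
            printf("Program allowed\n");
        return 0;
    }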
Another problem that these new devices may incur when included in every television is one of signal space. The NTSC signal includes extra information space known as the subcarrier and vertical
blanking interval. As explained in the quotation from Mr. Marquis, the V-chips will use a
certain section of this space to send simple rating numbers and points that will be compared
to the personality settings in the chip. Many new technologies are being developed for
smart-TV or data broadcast on this part of the NTSC signal. Basically the V-chip will
severely limit the bandwidth for high performance transmission of data on the NTSC signal.
There is also a cost to this new technology, which will be passed on to consumers.
Estimates are that each chip will cost six dollars wholesale and must be designed into the
television's logic. The V-chip could easily push the price of televisions up by twenty five
or more dollars during the first years of production. The much simpler solution of set top
boxes allows control for those who need it and allows those consumers who don't to save money and use new data technology. Another cost will most certainly be levied on television advertisers for the upgrade of transmitting equipment. Whether the V-chip encoding signal is added upstream of the transmitter or directly into uplink units and other equipment intended for broadcast, this cost will have to be compensated for in
advertising sales and prices. The V-chip regulation may also require another staff employee
at most stations to effectively rate locally aired programs and events. All three of these
questions have been addressed in minute detail. Most debate has focused upon the new rating
system and its implementation. Though equally important, this doesn't deal with the ground
floor concerns for the television producing and broadcasting industries. Now as members of
the industry we must hold our breath until either the fed knocks the wind from free
broadcast with mandatory ratings' devices, or allows the natural regulation to continue.
f:\12000 essays\sciences (985)\Computer\video card.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
People are living in a three-dimensional space. They know what is up, down, left, right, close and far. They know when something is getting closer or moving away. However, traditional personal computers could only make use of two-dimensional space because of the relatively low technology level of video cards in the past. As new technology has been introduced to the video card industry in recent years, the video card can now render 3D graphics. Most PC computer games nowadays are in three dimensions. In addition, some web sites also apply the use of three-dimensional space. This means that they are no longer flat homepages, but instead virtual worlds. With that added dimension, they all look more realistic and attractive. Nevertheless, 3D does not exist in most business programs today, but it can be forecast that it is not far away.
Many new kinds of video cards have been introduced to the market recently. In the past, the video card could only deliver two dimensional graphics which were only in low resolution. Now, however, high resolution three dimensional graphics technology has emerged. This paper will discuss why the video card nowadays can process high resolution three dimensional graphics, while the video card in the past could only process low resolution two dimensional graphics. The explanation will be based on some recently developed video cards such as the Matrox Millennium. This paper will also discuss how a 3D graphic displays on a 2D monitor. Lastly, the video card, the Matrox Millennium, will also be discussed.
Basic principles
In order to understand the recent development of the video card, let's take a look at how a video card works.
The video card is a circuit, which is responsible for processing the special video data from the central processing unit (CPU) into a format that the visual display unit (VDU) or monitor can understand, to form a picture on the screen. The Video Chipset, the Video Memory ( Video RAM ) and the Digital Analog Converter ( RAM DAC ) are the major parts of a video card.
After the special video data leaves the CPU, it has to pass through four major steps inside the video card before it reaches the VDU finally. First, the special video data will transfer from the CPU to the Video Chipset, which is the part responsible for processing the special video data, through the bus. Secondly, the data will transfer from the Video Chipset to the Video Memory which stores the image displayed on a bitmap display. Then, the data will transfer to the RAM DAC which is responsible for reading the image and converting the image from digital data to analog data. It should be noted that every data transfer inside the computer system is digital. Lastly, the analog data will transfer from the RAM DAC to the VDU through a cable connected between them outside the computer system.
The performance of a video card is mainly dependent upon its speed, the amount and quality of the Video Memory, the Video Chipset and the RAM DAC.
The faster the speed, the higher the picture quality and resolution the video card can deliver. This is due to the fact that the picture on the VDU has to change continuously, and this change must be made as fast as possible in order to display a high quality and realistic image. In the process of transferring data from the CPU to the Video Chipset, the speed is mainly dependent upon the type and speed of the bus, the mainboard and its chipset.
The amount of the Video Memory is also responsible for the color and screen resolution. The higher the amount of the Video Memory, the higher the color depth the video card can render. On the other hand, the type of the Video RAM is another factor that affects the speed of the video card.
The Video Chipset is the brain of a video card. It is similar to the CPU on the motherboard. However, unlike the CPU, which can be fitted to different motherboards, certain Video Chipsets can only be fitted to certain video cards. The Video Chipset is responsible for processing the special video data received from the CPU. Thus, it determines all the performance aspects of the video card.
The RAM DAC is the part responsible for the refresh rates of the monitor. The quality of the RAM DAC and its maximum pixel frequency, which is measured in MHz, are the factors affecting the refresh rates. In fact, a 220 MHz RAM DAC is not necessarily, but most likely, better than a 135 MHz one.
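A rough rule-of-thumb calculation shows why the RAM DAC's maximum pixel frequency matters; the ~1.32 blanking-overhead factor below is an assumed approximation used only for illustration:

    /* Rough sketch: estimating the pixel clock a RAM DAC must supply for a
       given display mode. The 1.32 blanking factor is an approximation. */
    #include <stdio.h>

    int main(void)
    {
        double h = 1280, v = 1024, refresh_hz = 85;
        double blanking_overhead = 1.32;

        double pixel_clock_mhz = h * v * refresh_hz * blanking_overhead / 1e6;
        printf("Approximate pixel clock needed: %.0f MHz\n", pixel_clock_mhz);
        /* ~147 MHz: within reach of a 220 MHz RAM DAC, beyond a 135 MHz one */
        return 0;
    }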
Recent developments
Traditionally, the personal computer could only deliver two dimensional pictures. However, as people want to increase their living standards, they want the picture on their personal computer to be more realistic and attractive. Thus, the display of three dimensional pictures on the personal computer has been developed. The rendering of a 3D image requires the computer to update the screen of the VDU at least 15 times per second as the user navigates through it, and each of the objects has to go through a transformation in depth space, known as the z-axis, in addition to the coordinates of the x-y plane. Nevertheless, the video card in the past was not "powerful" enough to render three dimensional graphics. The introduction of new kinds of video cards in recent years has solved this problem, and they are able to render 3D graphics now.
In the past, the video card could only deliver two dimensional graphics because the technology at that time limited what it could do. One of the problems was that the speed of the transfer of data from the CPU to the Video Chipset was relatively low, but that is actually not a problem of the video card itself; it is associated with the type of the CPU, the bus and the motherboard in the computer system. On the other hand, the biggest problem was actually the quality of the Video RAM. The Video RAM is the part of a video card which is situated between two very busy devices, the Video Chipset and the RAM DAC, and the Video RAM has to serve both of them all the time. Whenever the screen has to change, the Video Chipset has to change the content of the Video Memory. At the same time, the RAM DAC has to read the data from the Video Memory continuously. This means that when the Video Memory is reading data from the Video Chipset, the RAM DAC has to wait aside. Whenever the video card has to render three dimensional graphics, the screen has to change at least 15 times per second, which means that more data has to be transferred from the Video Chipset to the Video Memory, and the data has to be read faster by the RAM DAC. However, the Video Memory at that time did not have the technology to achieve this kind of process. Thus, the video card in the past was not able to deliver three dimensional graphics.
In recent years, video card manufacturers have developed new technology to solve the problem of the poor Video Memory. They have found three different ways to deal with this problem, which involve using a higher quality of Video Memory, increasing the video memory bus size, and increasing the clock speed of the video card.
1 ) Dual ported Video RAM
The major step is to make the Video RAM dual ported. This means that when data is transferred from the Video Chipset to the Video Memory via one port, the RAM DAC can read the data from the Video Memory through an independent second port. Thus, these two processes can occur at the same time, and the Video Chipset and the RAM DAC need not wait for each other anymore. This kind of RAM is called VRAM. Of course, the technology applied is not just doubling the ports on the RAM; it is actually very complicated. Thus, VRAM is more expensive than normal RAM.
The invention of VRAM offers a higher refresh rate and higher color depth for the graphics on the monitor. A high refresh rate means that the RAM DAC sends a complete picture to the monitor more frequently, so the RAM DAC has to read data from the Video Memory more often. When a video card of the past, without VRAM, wanted to achieve this high refresh rate, it had to lower video performance because the Video Memory could not handle this kind of heavy workload. To maintain a high refresh rate and high video performance at the same time, VRAM has to be used, since this kind of RAM can serve the Video Chipset and the RAM DAC simultaneously. Thus, the video card need not reduce video performance when a higher refresh rate is used. On the other hand, to achieve a high color depth, the Video Memory has to read more data from the Video Chipset each time, and thus more data will be sent to the RAM DAC; this process surely takes longer. At an 8-bit color resolution (256 colors), a 1024 x 768 screen needs 786432 bytes of data to be read by the RAM DAC from the Video Memory. For the same screen, a 24-bit color resolution (16777216 colors) needs 2359296 bytes of data to be read by the RAM DAC. For similar reasons, if a video card of the past wanted to achieve this kind of high color depth, it had to lower the refresh rate. This problem can also be solved by the use of VRAM. In short, a new video card with VRAM can provide a high refresh rate and high color depth at the same time. Thus, the rendering of three dimensional graphics is now possible.
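The frame-buffer arithmetic quoted above can be checked with a few lines of C (illustration only):

    /* Sketch of the numbers above: bytes the RAM DAC must read from the
       Video Memory for one full screen at two color depths. */
    #include <stdio.h>

    int main(void)
    {
        long width = 1024, height = 768;

        long bytes_8bit  = width * height * 1;  /* 256 colors      ->  786432 bytes */
        long bytes_24bit = width * height * 3;  /* 16777216 colors -> 2359296 bytes */

        printf("1024 x 768 @ 8-bit : %ld bytes per frame\n", bytes_8bit);
        printf("1024 x 768 @ 24-bit: %ld bytes per frame\n", bytes_24bit);
        return 0;
    }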
WRAM is used in the Matrox card instead of VRAM. WRAM was developed by the Matrox company. Like VRAM, it is dual ported. However, WRAM has a smarter design than VRAM, so it is faster. Ironically, WRAM is even cheaper than VRAM.
Lastly, there are many different types of the Video RAM such as DRAM (Dynamic RAM), EDO DRAM (Extended Data Out DRAM), SDRAM (Synchronous DRAM), SGRAM (Synchronous Graphics RAM), MDRAM (Multibank DRAM), and RDRAM (RAMBUS DRAM). Unlike the VRAM and WRAM, they are all single ported and so are slower. The DRAM is the slowest one amongst all of them.
2 ) Increase video memory bus size
Three years ago, the release of the 32 bit video card amazed people all over the world. However, the 64 bit video card is being introduced nowadays, which has a 64 bit video memory bus inside it. In addition, the 128 bit video card is also available. The video memory bus is a path which links the Video Chipset, the Video RAM and the RAM DAC together. With a 64 bit video memory bus, 8 bytes of data can be transferred in one clock cycle, while only 4 bytes can be transferred with a 32 bit video memory bus. Thus, the amount of data transferred is doubled with the use of a 64 bit video card. It is important to notice that a 1 MB Video RAM usually has only a 32 bit data bus. Thus, a 64 bit video card should always work with at least 2 MB of Video RAM; otherwise, the 64 bit video card will not be able to use its 64 bit data path. All in all, with the use of a 64 bit video card, more data can be transferred at one time. Thus, it can shorten the time to transfer data from the Video Chipset to the Video RAM or from the Video RAM to the RAM DAC. This means that a higher color resolution graphic can be rendered.
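A small sketch makes the difference in bus width concrete; the 60 MHz memory clock below is an assumed figure chosen only for illustration:

    /* Sketch: peak video-memory transfer rate as a function of bus width,
       as described above. The 60 MHz clock is an assumed example value. */
    #include <stdio.h>

    int main(void)
    {
        double clock_mhz = 60.0;            /* assumed memory clock          */
        double mb32 = 4 * clock_mhz;        /* 32-bit bus: 4 bytes per clock */
        double mb64 = 8 * clock_mhz;        /* 64-bit bus: 8 bytes per clock */

        printf("32-bit bus: %.0f MB/s peak\n", mb32);
        printf("64-bit bus: %.0f MB/s peak\n", mb64);
        return 0;
    }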
3 ) Increase the clock speed
The third way is the most obvious one, which is simply to increase the clock speed of the Video Chipset and the Video RAM. Of course, the technology to increase the clock speed is very complicated. The fastest Video Chipset so far is the ET 6000 chipset, which can run at 100 MHz, while the fastest video memory is SGRAM, which can run at clock speeds up to 125 MHz. SGRAM is a special graphics version of SDRAM (synchronous DRAM).
It is not just the job of the video card to achieve high resolution three dimensional graphics. The video card has to work with a good computer system. Recall that the speed of the transfer of data from the CPU to the Video Chipset is mainly dependent upon the bus type, the mainboard and its chipset. Thus, a good computer system to perform good graphics should have a PCI bus which runs at 33 MHz, a Pentium processor with MMX technology, and a good mainboard such as one based on the Intel 430 HX chipset, which will affect the PCI performance.
3D graphics on 2D monitor
Although the video card can render 3D graphics now, the monitor that the graphic displays on is still flat two dimensions. Thus, the three dimensional graphic has to be mapped to the 2D screen. This is done using perspective algorithms. This means that if an object is farther away, it will appear smaller; if it is closer, it will appear larger.
To display 3D animations, an object is first represented as a set of vertices in a three dimensional coordinate system with x, y and z axes. The vertices of the object are then stored in the Video RAM. Afterwards, the object has to be rendered. Rendering is the process of calculating the color and position information that makes the user believe there is a 3D graphic on a flat 2D screen. To make the calculation more efficient, the vertices of the object are segmented into triangles. Rendering also fills in all of the points on the surface of the object, which were previously saved only as a set of vertices. In this way, an object with a 3D effect is able to be displayed on a flat 2D monitor.
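A minimal sketch of this perspective mapping, with an arbitrary focal length and screen offsets chosen for illustration, might look like this in C:

    /* Minimal sketch of perspective projection: a 3D vertex (x, y, z) is
       mapped to the 2D screen by dividing by its depth, so distant points
       land closer to the centre and appear smaller. */
    #include <stdio.h>

    typedef struct { double x, y, z; } vec3;

    static void project(vec3 v, double focal, double *sx, double *sy)
    {
        *sx = (v.x * focal) / v.z + 320.0;   /* centre of a 640 x 480 screen */
        *sy = (v.y * focal) / v.z + 240.0;
    }

    int main(void)
    {
        vec3 near_pt = { 1.0, 1.0, 2.0 };
        vec3 far_pt  = { 1.0, 1.0, 8.0 };
        double sx, sy;

        project(near_pt, 256.0, &sx, &sy);
        printf("near vertex -> (%.1f, %.1f)\n", sx, sy);  /* far from centre */

        project(far_pt, 256.0, &sx, &sy);
        printf("far vertex  -> (%.1f, %.1f)\n", sx, sy);  /* near the centre */
        return 0;
    }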
A new video card - Matrox Millennium
Lastly, let's discuss some new features of a new video card - the Matrox Millennium. The Matrox Millennium is a 64-bit video card. It can work with 2 MB, 4 MB or even 8 MB of video RAM. The video RAM is the WRAM authorized by the Matrox company. It also has a powerful 220 MHz RAMDAC. Actually, it is the fastest video card available on the market now. However, because of its extremely high speed, the graphics quality is relatively lower when compared to other video cards.
The following is a summary of the new 3D features of the Matrox Millennium:
Texture mapping :
This applies bitmapped texture images, which are stored in memory, to objects on the screen so as to add realism.
Bilinear and trilinear filtering :
They smooth textures in a scene to lessen the blocky effect. With MIP (multum in parvo) mapping, an application provides different resolutions of an object as it moves closer or farther on the screen.
Perspective correction :
This rotates the texture bitmaps to give a better sense of convergence. Thus, when the video card renders a continuous moving object such as a meadow, it is able to maintain a realistic look as it recedes from the viewer.
Anti - aliasing :
This diminishes the "stair step" effect since the computer generated image has a finite discrete resolution.
Alpha blending :
This allows one object to show through another to create a transparent look.
Atmospheric effects :
This usually makes use of alpha blending. The effects include fog and lighting cues.
Flat shading :
This is a technique where a whole triangle is a single color. Thus, it can create a blocky effect.
Gouraud shading :
This is a more advanced method than flat shading. It improves the overall appearance of the graphics and allows curves to appear more rounded.
Z-buffering :
This technique is one of the most important features for rendering 3D graphics. It controls how objects overlay one another in the third dimension. It is particularly important when filled polygons are included in the drawing. With Z buffering off, objects are drawn in the order in which they are transmitted to the display. With Z buffering on, objects are drawn from the back to the front.
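A minimal sketch of the Z-buffer test (illustrative only; real hardware applies it per pixel while rasterizing whole triangles) might look like this:

    /* Sketch of Z-buffering: a pixel is written only if it is nearer to the
       viewer than whatever was already drawn at that position.
       Here, smaller z means closer, which is one common convention. */
    #include <float.h>
    #include <stdio.h>

    #define W 4
    #define H 4

    static float    zbuf[H][W];
    static unsigned color[H][W];

    static void clear_zbuffer(void)
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                zbuf[y][x] = FLT_MAX;          /* nothing drawn yet          */
    }

    static void plot(int x, int y, float z, unsigned c)
    {
        if (z < zbuf[y][x]) {                  /* nearer than what is there? */
            zbuf[y][x]  = z;
            color[y][x] = c;
        }
    }

    int main(void)
    {
        clear_zbuffer();
        plot(1, 1, 5.0f, 0xFF0000);            /* far red pixel              */
        plot(1, 1, 2.0f, 0x0000FF);            /* nearer blue pixel wins     */
        printf("pixel (1,1) color = %06X\n", color[1][1]);
        return 0;
    }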
The Matrox Millennium can also play back movies using the Moving Picture Experts Group (MPEG) standard, in which the movie data is compressed into a special format that the card decodes during playback. With its chroma-key feature, the video card also supports "blue-screen" video effects, so that two unrelated images can easily be pasted together. Moreover, with the image-scaling feature, it can map a video onto any desired window or screen size.
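The chroma-key idea can be sketched in a few lines of Python: wherever a foreground pixel is close enough to the key colour, the background pixel shows through instead. The key colour, tolerance and sample pixels below are invented for the example and say nothing about how the Millennium implements the feature in hardware.

KEY = (0, 0, 255)          # pure blue
TOLERANCE = 60             # how close to the key colour counts as "blue"

def is_key(pixel):
    return all(abs(c - k) <= TOLERANCE for c, k in zip(pixel, KEY))

def composite(foreground, background):
    """Paste the foreground over the background, keying out blue pixels."""
    return [
        [bg if is_key(fg) else fg for fg, bg in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(foreground, background)
    ]

fg = [[(0, 0, 250), (200, 30, 30)]]      # one blue pixel, one red pixel
bg = [[(10, 200, 10), (10, 200, 10)]]    # green background
print(composite(fg, bg))                 # the blue pixel is replaced by green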
References
Magazine
· PC Magazine - December 3, 1996, Vol. 15, NO. 21
Internet
· http://www.dimension3d.com
· http://wfn-shop.princeton.edu/cgi-bin/foldoc
· http://www-sld.slac.stanford.edu/HELP/@DUCSIDA:IDAHELP/DSP/INTERACTIVE/
· http://www.ozemail.com.au/~slennox/hardware/video.htm#memory
· http://www.imaginative.com/VResources/vr_artic/marcb_ar/3dcards/3dcards.html
· http://www.atitech.com
· http://www.matrox.com
· http://www.diamondmm.com
· http://www.tseng.com
· http://www.s3.com
f:\12000 essays\sciences (985)\Computer\Video On Demand.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
OVERVIEW OF
VIDEO ON DEMAND
SYSTEMS
Joseph Newcomer
SCOPE
INTRODUCTION
THE INITIATIVE FOR WORLDWIDE MULTIMEDIA TELECONFERENCING AND VIDEO SERVER STANDARDS
NEW BUSINESS IMPERATIVES
STARTING WITH STANDARDS
TWO STANDARDS, ONE GOAL
STANDARDS FIRST
SUMMARY
CONTENT PREPARATION:
REQUIREMENTS:
CODECs/Compression
Object Oriented Database Management Systems
Encoding Verification
SUMMARY
VIDEO SERVER
REQUIREMENTS
LIMITATIONS
PRODUCTS
DISTRIBUTION NETWORK:
LAN TYPES
PROTOCOLS
WAN TYPES
CLIENT INTERFACES
RETRIEVAL INTERFACE
VIEWER REQUIREMENTS
PRODUCTS
HARDWARE MINIMUMS
SUMMARY
DEFINITIONS
A
C
D
E
F
G
H
I
J
L
M
N
O
P
S
T
BIBLIOGRAPHY:
MULTIMEDIA:
WEB Sites:
Hard Copy References:
WANS/GOV:
WEB Sites:
Hard Copy References:
ODBMS:
WEB Sites:
Hard Copy References:
MPEG:
WEB Sites:
Hard Copy References:
LANS:
WEB Sites:
TOPICS FOR FUTURE MEETINGS:
THE ATM ADAPTION LAYER
ATM STANDARDS
ISDN-B
BROADBAND WAN IMPLEMENTATION
VIDEO CONFERENCING
ODBMS
VIDEO ENCODING/DECODING STANDARDS
SCOPE
Video on demand has evolved into a major implementation problem for network integrators. Clients want the ability to retrieve and view stored video files asynchronously, at near broadcast quality, on a local host. Some of the problems integrators face in achieving this goal include video content preparation, server storage, network throughput, latency, client interfaces, quality of service, and cost. This paper addresses the design considerations for a private video on demand implementation.
INTRODUCTION
The Initiative for Worldwide Multimedia Teleconferencing and Video Server Standards
The market for multipoint multimedia teleconferencing and video server equipment is poised for explosive growth. The technology for this necessary and much-anticipated business tool has been in development for years. By the turn of the century, teleconferences that include any combination of video, audio, data, and graphics will be standard business practice.
Compliance with teleconferencing standards will create compatible solutions from competing manufacturers, feeding the market with a variety of products that work together as smoothly as standard telephone products do today. Specifically, with the adoption of International Telecommunication Union (ITU) recommendations T.120, H.320 and H.261, multimedia teleconferencing equipment manufacturers, developers, and service providers will have a basic established connectivity protocol upon which they can build products, applications, and services that will change the face of business communications.
New Business Imperatives
Video on Demand systems are starting to be required by commercial, industrial, governmental and military organizations to retrieve past information in order to prepare for and anticipate future events. This preparation and anticipation can be crucial to the survival of these organizations because of the key role of the individuals or groups being monitored. It is this monitoring and collection of data that allows these organizations to make informed decisions and to take appropriate action in response to current events.
Multipoint multimedia teleconferencing and video servers offer the required solution. As defined here, it involves a user-specified mix of traditional voice, motion video, and still-image information in the same session. The images can be documents, spreadsheets, simple hand-written drawings, highly-detailed color schematics, photographs or video clips. Participants can access the same image at the same time, including any changes or comments on that image that are entered by other participants. Video servers allow users to view stored video files of specific events, conferences, news clips and important information in near realtime.
The benefits are obvious. Instead of a text interpretation of a video clip, all interested parties can access the information. Little is left to verbal interpretation since all users have access to the original video. In the case of video clips, a person's actions, verbal tones, mannerisms and reactions to events around them can be viewed and interpreted. Increased productivity, reduced cost, and reduced travel time are the primary benefits, while proprietary technology and solutions are cited as the primary inhibitors of using video on demand products and services.
Starting with Standards
While multimedia teleconferencing and video servers promise to revolutionize vital everyday corporate tasks such as project management, training, and communication between geographically-dispersed teams, it is clear that standards-based solutions are a prerequisite for volume deployment. Standards ensure that end-users are not tied to any one supplier's proprietary technology. They also optimize capital investment in new technologies and prevent the creation of de facto communication islands, where products manufactured by different suppliers do not interoperate with each other or do not communicate over the same type of networks.
When adopted and adhered to by equipment suppliers and service providers alike, standards represent the most effective and rational market-making mechanism available. ISDN, fax, X.25, and GSM are a few obvious examples of standards-based technologies. Without internationally-accepted standards and the corresponding ability to interoperate, the services based on these technologies would almost certainly languish as simple curiosities.
Interoperability is particularly important in multipoint operation, where more than two sites communicate. A proprietary solution might suffice if two end users want to communicate only with each other; however, this limited type of communication is rare in today's business world. In typical business communications, multiple sites, multiple networks, and multiple users have communications equipment from multiple manufacturers, requiring the support of industry standards to be able to work together. This interoperability is also critically important when a video server may be transmitting data across a WAN to multiple users, in multiple sites.
Perhaps the most important effect of standards is that they protect the end users' investments. A customer purchasing a standards-based system can rely on not only the current interoperability of his equipment but also the prospect of future upgrades. In the end, standards foster the growth of the market by encouraging consumer purchases. They also encourage multiple manufacturers and service providers to develop competing and complementary solutions and services.
Two Standards, One Goal
Fortunately, standards for multimedia teleconferencing are at hand. Working within the United Nations-sanctioned ITU's Telecommunications Standardization Sector, two goals have been achieved: the T.120 audiographics standards and the H.320 videotelephony standards. T.120, H.320 and H.261 are "umbrella" standards that encompass the major aspects of the multimedia communications standards set. The T.120 series governs the audiographic portion of the H.320 series and operates either within H.320 or by itself.
Ratification of the core T.120 series of standards is complete. These recommendations specify how to use a set of infrastructure protocols to efficiently and reliably distribute files and graphical information in a multipoint multimedia meeting. The T.120 series consists of two major components. The first addresses interoperability at the application level, and includes T.126 and T.127. The second component includes three infrastructure components: T.122/T.125, T.124, and T.123.
The H.320 standards were ratified in 1990, but work continues to encompass connectivity across LAN-WAN gateways. The existing H.320 umbrella covers several general types of standards that govern video, audio, control, and system components. With many businesses using LANs to connect their PCs, the pressure is on to add videoconferencing to those networks. Since the H.320 standards currently address interoperability of video conferencing equipment across digital WANs, it is a logical and necessary step to expand the standards to address LAN connectivity issues. As the work to expand H.320 continues, it remains the accepted standard.
Both the T.120 and the H.320 series of standards will be improved upon and extended to cover networks and provide new functionality. This work will maintain interoperability with the existing standards.
Standards First
Standards as complex and universal as the H.320 and T.120 series need a coordination point for the interim steps a proposal takes on its way to becoming a standard. The IMTC is an international group of more than 60 industry-leading companies working to complement the efforts of the ITU-T with an emphasis on assisting the industry to bring standards-based products successfully to the market. Its goals include promoting open standards, educating the end user and the industry on the value of standards compliance and applications of new technologies, and providing a forum for the discussion and development of new standards. The IMTC is approved as an ITU-T liaison, and interfaces with the ITU-T by participating in standards discussion and development, feeding information and findings into the appropriate ITU-T Study Groups.
The Standards First initiative encourages multimedia equipment manufacturers to start with compliance to at least the H.320, T.120 and H.261 standards described above. Further standards compliance is recommended but optional, and manufacturers will still have the ability to differentiate their products with proprietary features, creating Standards Plus products. Compliance with the minimum H.320/T.120 standards will ensure a basic level of connectivity across equipment from all participating manufacturers.
Summary
Standards have played an important part in the establishment and growth of several consumer and telecommunications markets. By creating a basic commonality, they ensure compatibility among products from different manufacturers, thereby encouraging companies to produce varying solutions and end users to purchase products without fear of obsolescence or incompatibility.
The work of both the IMTC and the ITU-T represents an orchestrated effort to promote a basic connectivity protocol that will encourage the growth of the multimedia telecommunications market. The Standards First initiative, which has been accepted by several industry-leading companies, requires a minimum of H.320, H.261 and T.120 compliance to establish that basic connectivity. Manufacturers are then able to build on the basic compliance by adding features to their products, creating Standards Plus equipment. By ensuring interoperability among equipment from competing manufacturers, developers, and service providers, Standards First ensures that a customer's initial investment is protected and future system upgrades are possible.
Content Preparation:
The first step in a VOD system is the entry of video information. The possible sources of video information in a large-scale (Government) VOD system include recorded and live video, scanned images, and EO, IR and SAR collected imagery. Recorded video is the primary concern of this paper. Since latency and jitter do not affect imagery data types, they will be noted but not expanded upon. Live video is the primary concern of video conferencing, but its requirements overlap with those of recorded (VOD) video.
REQUIREMENTS:
Recorded video must be digitized and compressed as early as possible in the VOD architecture to minimize the system storage requirements. The Moving Picture Experts Group of the ISO developed the MPEG-1 and MPEG-2 standards for video compression. With MPEG-1, a 50-to-1 compression ratio is typical. MPEG-1 can encode images at up to 4K x 4K x 60 frames/sec. MPEG-2 was optimized for digital compression of TV and supports rates up to 16K x 16K x 30 frames/sec, but 1920 x 1080 x 30 frames/sec is considered broadcast quality (MPEG-2, Hewlett-Packard pub. 5963-7511E). MPEG-2 offers a more efficient means to code interlaced video signals such as those which originate from electronic cameras. (Chadd Frogg 8/95)
CODECs/Compression
CODECs encode and decode video into digital format. The CODEC must be configured to encode the information at the desired end resolution: if the end user requires broadcast-quality video, the CODEC must support that level of quality. The CODEC should also be compatible with the desired data throughput rate of the content preparation element (this can, of course, be overcome with sufficient buffering). Several CODECs output information in a form which is directly compatible with distribution hardware; some are designed to output information as DS3, ATM OC3, or Fibre Channel. The Pacific Bell "Cinema of the Future" project utilizes an HDTV CODEC: the analog HDTV signal is digitized and compressed to a DS3 rate (44.7 Mbps) by Alcatel's 1741 CODEC. The CODEC applies a Discrete Cosine Transform (DCT) hybrid compression algorithm with compensation for video motion. Though the precise algorithm performed by the 1741 is proprietary, the following is an overview of the process: pixel groups called blocks are translated into frequency information using the DCT (similar to a Fourier transform). Next, a quantization step drops the least significant bits of information. These coefficients are then entropy-encoded into variable-bit-length codes. This digital information, now about 1/50 of its original size, can be passed on to an output mechanism (a hardware or software driver). This is of course just a quick overview; the process for encoding information has been well documented by the ISO.
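As a rough sketch of the block-transform steps just described (and not of Alcatel's proprietary algorithm), the following Python fragment runs an 8x8 block through a DCT, a single coarse quantization step, and an inverse DCT. Real encoders use per-frequency quantization tables and entropy coding afterwards; the block values and the single step size here are illustrative only.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block.T, norm="ortho").T, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs.T, norm="ortho").T, norm="ortho")

block = np.arange(64, dtype=float).reshape(8, 8)   # a smooth toy 8x8 block

q = 16.0                                            # quantization step size
coeffs = dct2(block)
quantized = np.round(coeffs / q)                    # most values become 0
reconstructed = idct2(quantized * q)

print("nonzero coefficients:", int(np.count_nonzero(quantized)), "of 64")
print("max reconstruction error:", float(np.abs(block - reconstructed).max()))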
Object Oriented Database Management Systems
In order to set up a searchable database of these MPEG objects, several companies are introducing Object-Oriented Database Management Systems (ODBMS). These systems can be coupled with either the media server element or the content preparation element of the VOD system. It would be ideal if all ODBMSs spoke the same language so that information could be exchanged between databases. A common query language would be advantageous, but established standards such as SQL do not adequately address video objects. Illustra has added object-oriented extensions onto ANSI SQL; these extensions are then used to create "DataBlades" which provide image handling and manipulation capabilities. Since this architecture uses SQL, it is more likely that third-party front-end authoring software will be compatible with Illustra. (Interoperability 10/95)
Encoding Verification
If the VOD server is seen as a central library of video files, with multiple users archiving files and other users retrieving files, the requirement for format standards is evident. There is then also a requirement to verify that these format standards are being met. This verification usually falls upon the content preparation element of a VOD system. The natural metaphor is that of a publisher ensuring that a book is legible and free of grammatical errors before releasing it to the public. (This paper would probably be caught by such a publisher.) Auditing compressed video information is not as straightforward. A particular video stream can flow through an MPEG-2 encoder without incident while a second stream will bog down the system (possibly inducing errors); rapidly changing backgrounds, like sports coverage, can cause problems. The MPEG-2 standard is complex, and it takes more than an astute systems engineer to ensure that the encoder designers have not interpreted the MPEG standard differently from the decoder designers. Hewlett-Packard suggests that the industry needs to consider testability as a primary requirement of VOD systems. One way to resolve encoding concerns could be to create standardized tests that carefully verify the implementation of the MPEG standard. Bit error rate testers can test transport layers, and traditional data analysis tools can also be used to build new test tools for MPEG. It should be no surprise that testability is the last area of standardization for the VOD marketplace.
Summary
Preparing video information for VOD archiving has reached the point where developers are able to concentrate on accelerating the compression phase. The compression techniques themselves are relatively well documented; the industry is now addressing how to implement them faster: hardware vs. software, digitizing cameras vs. DSP cards. Most experts agree that even though today's workstations have the processing power to perform MPEG compression, it is usually more efficient to perform as much processing as possible in hardware (such as dedicated video cards). This is not always the case in multimedia applications where the end product, due to bandwidth limitations, is not really broadcast quality. The image quality the user expects is also a major consideration in selecting a content preparation element. If the user cannot take advantage of a high-resolution 2K x 2K image, or if the bandwidth of the distribution network is limited, then a high-resolution MPEG-2 CODEC might not be justified. If the CODEC implements the "spatial scalability" capability of MPEG-2, the encoder provides the video in a two-part format: low-resolution decoders can extract a basic video signal, and with additional processing, more capable decoders can produce a high-resolution picture.
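The two-layer idea behind spatial scalability can be illustrated with a toy Python sketch: the base layer is a downsampled picture a simple decoder can display on its own, and the enhancement layer carries the residual that a more capable decoder adds back to recover the full-resolution picture. The 2x2 averaging and the random test frame are illustrative assumptions, not the MPEG-2 algorithm itself.

import numpy as np

def downsample(img):
    """Average each 2x2 block into one base-layer pixel."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour expansion back to full resolution."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

full = np.random.rand(8, 8)                # stand-in for a full-resolution frame
base_layer = downsample(full)              # what a low-resolution decoder shows
enhancement = full - upsample(base_layer)  # residual for high-resolution decoders

restored = upsample(base_layer) + enhancement
print("exact reconstruction:", np.allclose(restored, full))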
Video Server
Requirements
Once the content is uploaded to the video server in the content preparation phase and registered appropriately in the database, it becomes available to the end user. In order for this data to be available and viewable by the end user, the server should have at least a RAID 5 SCSI controller, 4 GB hard drives at 7200 RPM, and a high-speed network interface. The server should support MPEG-2 compression at 4.0 Mbps, delivering approximately 28 hours of video on demand (or about 96 hours with MPEG-1 compression of 30-fps, 640-by-480 pixel video), which equates to a minimum of 50 GB of hard disk space. The server should employ RAM to buffer the data being read from the disk drives to ensure a smoother transfer of the video to the end user; a minimum of 256 MB is recommended. The server should be able to handle MPEG-2 and MPEG-1 in NTSC, PAL or SECAM video formats and be able to meet broadcast and cable requirements for on-air program applications and video caching.
Compression Method *    Storage per 30-sec clip (MB)    Storage per 60-sec clip (MB)    Total a 52 GB HDD holds
MPEG-1 @ 1.2 Mbps       36                              72                              96.3 hours
MPEG-2 @ 4 Mbps         120                             240                             28.8 hours
* Assuming the standard compression ratio for each method.
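The figures in the table follow from straightforward unit conversion; the short Python sketch below reproduces them for a 52 GB drive at the two bit rates (the function name and rounding are our own, not part of any product specification).

def hours_on_disk(disk_gb, bitrate_mbps):
    disk_bits = disk_gb * 1e9 * 8          # gigabytes -> bits
    seconds = disk_bits / (bitrate_mbps * 1e6)
    return seconds / 3600

for label, rate in (("MPEG-1 @ 1.2 Mbps", 1.2), ("MPEG-2 @ 4 Mbps", 4.0)):
    print(f"{label}: {hours_on_disk(52, rate):.1f} hours on a 52 GB drive")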
Limitations
There are several major limitations that must be addressed in order to understand why the above requirements are imposed.
1) Storage--There appears to currently be a storage limitation on video servers because of retrieval and transmission time associated with video. Multiple servers will be needed to store and retrieve from large archives of video information. These servers should be distributed remotely to maximize local retrieval and viewing while minimizing WAN traffic.
2) Data stream--in order to view video information with a minimum of latency and without jitter the data stream needs to be constant and uninterrupted (with the exception of some buffering as necessary). There are several forms of buffering:
a) Media stream storage on hard disk.
b) cached at the transmit buffer
c) network transit latency and buffers may be viewed as another buffer.
d) the receive end may buffer a sufficient amount of the media stream to maintain a continuous stream for display and suitable synchronization with the transmit end.
3) Concurrent users--The video server should be limited to 100 concurrent users in order to ensure that each user is able to access the requested data as expeditiously as possible.
4) Network bandwidth size--The network bandwidth needs to be directly proportional to the number of simultaneous video streams. The bandwidth of the system is effectively limited by the bandwidth and transmission capabilities originating at the server.
5) Latency--Although hard to determine precisely, latency should be no more than 2 seconds for a video file retrieved locally and no more than 10 seconds for a video file retrieved over the WAN from a remote site.
6) ODBMS
Products
Several products that are currently being marketed as video servers are:
1) The Network Connection, M2V Video Server:
a) 120 simultaneous 1.2 Mbps MPEG-1 video streams
b) 112GB, RAID 5 storage.
c) In excess of 200 Hours MPEG-1, and 60 Hours MPEG-2.
d) Supports JPEG, M-JPEG, DVI, AVS, AVI, Wavelet, Indeo and other video formats.
e) Supports Ethernet, Token Ring, FDDI and ATM.
2) Micropolis Corp, AV Server:
a) 16 MPEG-2 video decoder boards with 4 channels per card, giving 64 channels at 6 Mbps per channel.
b) 252GB, Raid storage.
c) In excess of 120 hours MPEG-2
d) Supports only MPEG-2
3) Sun Microsystems, Media Center 1000E Video Server:
a) 63GB, RAID4 storage.
b) In excess of 32 Hours MPEG-2, and 81 Hours MPEG-1
c) Supports MPEG-1 and MPEG-2
d) Supports ATM and Fast-Ethernet
Distribution Network:
Video on Demand (VOD) requires predictability and continuity of traffic flow to ensure real-time delivery of information. MPEG-1 and MPEG-2 (as described above) require an effective bandwidth of 1.5 - 4 Mbits/sec. Multiplying this "media stream" bandwidth requirement by the number of clients gives a rough estimate of the required bandwidth of the distribution network. The Common Imagery Ground/Surface System (CIGSS) Handbook suggests the following steps to size and specify the LAN technology used for image dissemination systems:
1. Approximate the system usage profile by estimating the amounts of image, video and text handling that will be required.
2. Convert the amount of images, video and text to be processed into average effective data rates. Raw data transferred directly to an archive (our video server) and near real-time processed imagery should be estimated separately. The bandwidth requirements can be combined later if needed.
3. Adjust the calculated rate for growth. The growth factor should be at least 50%.
4. Add a fraction (about 0.3 to 0.4) of the peak capacity to the growth-adjusted rate for interprocessor communications (a sketch of these steps follows this list).
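The sketch below works through these sizing steps with illustrative numbers (20 concurrent MPEG-2 viewers at 4 Mbps each); treating the growth-adjusted aggregate as the "peak capacity" in step 4 is an assumption made for the example.

def required_lan_bandwidth_mbps(clients, stream_mbps, growth=0.5, ipc_fraction=0.35):
    base = clients * stream_mbps              # steps 1-2: aggregate stream traffic
    grown = base * (1 + growth)               # step 3: at least 50% growth margin
    return grown + ipc_fraction * grown       # step 4: interprocessor-traffic allowance

# e.g. 20 concurrent MPEG-2 viewers at 4 Mbps each
print(required_lan_bandwidth_mbps(20, 4.0), "Mbps")   # 162 Mbps -> roughly OC-3 territory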
Updating heritage networks to this new bandwidth requirement can incur substantial costs. The cost of implementing a high-speed network varies depending on the network architecture.
LAN Types
Several LAN architectures are being used in "trial" VOD systems. ATM, FDDI, token ring and even variations of the Ethernet standard can provide the required 10-100 Mb/sec bandwidth.
A version of Ethernet called switched Ethernet can provide up to 10 Mbps to all clients; since this is a switched architecture, the full 10 Mbps can be available to each client. This architecture provides the quickest, most cost-effective method of upgrading legacy systems, since it does not require replacement of existing 10BaseT wiring. 100VG-AnyLAN, a voice-grade variant of Ethernet, can also be implemented in a VOD system; this architecture, however, requires some cable upgrades from CAT 3 to CAT 5. 100VG Ethernet is expected to top out at 100 Mbps, and no further upgrades are foreseen.
Token ring networks have been implemented in a few VOD trial systems. FDDI can be set up to provide 100 Mbps and, because of the token-ring architecture, the network can allocate bandwidth for each client. A simulated system described in the September '95 edition of Multimedia Systems would be capable of handling 60 simultaneous MPEG-1 video streams; the video server (a 486DX), not the 100-Mbit/sec token ring, limited the system size. This is of course a small system, and due to the "shared" nature of a token-ring FDDI architecture, it should not be implemented for larger (1000+) systems.
ATM provides the highest bandwidth and is probably the most expensive network solution. ATM provides the proper class of service for video on demand applications. ATM connections running at OC3 rates (155 Mbps) are currently priced at approximately $300-$500. ATM is not a "shared" topology: bandwidth is not dependent on the number of users, and in fact, as the number of users on an ATM net increases, the aggregate bandwidth of the ATM network increases. ATM can have hundreds of services operating simultaneously, such as voice, video, LAN and ISDN, and these services can all be guaranteed and assured that they won't interfere with each other. The LAN marketplace is currently providing 155 Mbps products, and some of the ATM Forum leaders (such as FORE Systems) are also providing 622 Mbps (OC12) network interface cards (NICs). The problem is that ATM is a relatively new protocol. Several companies have come together to form the ATM Forum to help standardize the architecture. For most network application software the cell-based ATM layer is not an appropriate interface, so the ATM Adaptation Layer (AAL) was designed to bridge the gap between the ATM layer and the application requirements. The Forum's efforts have been very successful at the lower ATM adaptation layers, but some interoperability issues still exist: the American ATM Forum has standardized on AAL 5 to map MPEG-2 for transport, while the European ETSI has chosen AAL 2. These inconsistencies affect the transport of multimedia only through ATM LANs.
Protocols
There are several transport protocols that can be implemented for audio-video applications: TCP, UDP, SONET, the TCP/IP Resource Reservation Protocol (RSVP) and IPX/SPX. Due to the effective data rate necessary to support VOD, protocols that minimize client/server interaction are preferable, except in cases where an over-abundance of network bandwidth exists. In ATM nets supporting mostly non-VOD applications, retransmission of lost or corrupt packets will not be possible; for example, if cells are lost, the Fore Systems AVA real-time display software uses pixel tiles from a previous frame. In a typical VOD system without error correction, QOS is directly proportional to the network/LAN BER (bit error rate). VOD systems which provide error correction as part of the network protocol have to be designed to allow for the latency created by their error-correcting protocols. (DSS currently implements interleaving, Reed-Solomon and Viterbi decoding.) QOS trade-offs can be quantified and analyzed (see "QOS control in GRAMS for ATM LAN", IEEE Journal of Selected Areas in Communications, by Joseph Hui).
Networking, DBMS and server companies have been adapting upper-layer protocols to VOD processes. Oracle Media Net utilizes a "sliding window" protocol; sliding window protocols are a well-established methodology for ensuring transmission over lossy data links. MediaNet monitors the response between client and server, lengthens the response-checking time to the point of error and then backs off (a process that theoretically diminishes disruptive latencies). Novell developed the Novell Embedded Systems Technology (NEST) and NetWare to run over IPX/SPX protocols. The Novell implementation provides prioritization for video users; flow control from the client to the server does not yet exist. (Interoperability, 10/95)
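The following Python sketch illustrates only the general sliding-window idea, not Oracle's proprietary protocol; the window size, sequence numbering and loss-free link are simplifying assumptions made for the example.

def sliding_window_send(packets, window=4):
    base = 0          # oldest unacknowledged packet
    next_seq = 0      # next packet to transmit
    while base < len(packets):
        # transmit while the window is not full
        while next_seq < len(packets) and next_seq - base < window:
            print(f"send packet {next_seq}")
            next_seq += 1
        # in this toy model the receiver acknowledges the oldest packet;
        # a real link would lose packets and trigger retransmission instead
        print(f"ack   packet {base}")
        base += 1

sliding_window_send(list(range(6)), window=3)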
WAN Types
Distributing VOD information outside the LAN requires either a very high bandwidth WAN with guaranteed availability, or substantial buffering and latency allowances at the client, in order to ensure and maintain a constant display of data. When many people think of information distribution over a WAN, sourced by many different servers to many isolated users, the Internet naturally comes to mind. The Internet was used by the National Information Infrastructure (NII) workshop as a model for the delivery of video services. This commercial organization's conference, in addition to supporting HDTV and DSS, is interested in providing VOD services to "all Americans". The Internet was seen as a good first attempt at distributing information: it is inexpensive, requires no gatekeepers, provides search utilities and has several proven human-machine interfaces (HMIs). Unfortunately, the Internet is also bandwidth limited, provides insufficient traffic control, security and directories, and has no guaranteed delivery functions. The Internet may not be the solution to the VOD distribution problem, but it will expedite the development of an open-architecture commercial VOD WAN.
Commercial enterprises have been considering hybrid fiber/coaxial cable as one possible solution. This implementation, also referred to as "fiber to the curb", requires a partial upgrade to existing telephone distribution infrastructure. Signals are transmitted over fiber to a neighborhood distribution (gateway) point, and are then either converted to RF and sent to the user (the home) via coax, or converted to a lower-data-rate network interface and sent on to the home. The RF implementation requires a "set-top box" for decoding the RF; the latter could be a PC implementation. ISDN-B, the broadband version of ISDN, will probably evolve as the leading WAN technology. Narrowband ISDN is already an accepted method of providing the higher serial data rates necessary for minimal-quality multimedia applications, like teleconferencing. True motion-picture-quality VOD implementations will require the Mbps data rates that should be provided by ISDN-B.
The DOD has also been interested in the distribution of video and imagery across WANs. The Defense Airborne Reconnaissance Office (DARO) has developed the Common Imagery
f:\12000 essays\sciences (985)\Computer\videogame.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
VIDEOGAME: The High Tech Threat to Our Younger Generation
Anyone who has ever walked through a shopping mall on a weekend knows how popular videogame arcades have become with our young people. They are becoming a force in the lives of millions of kids all across America. Parents and teachers grow more concerned and worried when they see their kids devoted to videogames, and their concern is well founded, because videogames greatly influence the mental and learning processes of the younger generation. Many parents believe that their children learn values more from the mass media than from their homes. Generally speaking, the video and computer game industry has been a growing concern to religious groups, responsible politicians and bewildered parents because of the disturbing content and substandard themes in some of its games. Videogame technology must be recognised for its role and influence on the younger generation because, for better or worse, it clearly affects their academic and social life.
Indeed, the statistics on the videogame industry are alarming. It is a multi-million dollar business that grew at 40 per cent a year from 1987 to 1993 (Palmeri 102). Tetzeli, in his article "Videogames: Serious Fun", compares the $6.5-billion-a-year videogame business to the Hollywood film industry (110). He goes on to point out that two Japan-based conglomerates have put about 64 million videogame machines in US households in total, and that they also produce and license all the software for their machines (110). Palmeri estimates that producing and marketing a full-featured videogame can cost up to $10 million (102). Because of this cost, producers try to recover their investments and earn as much profit as they can. To achieve their goals, they feature more blood, gore and human dismemberment in their games to appeal to the younger generation, because violence sells. According to Palmeri, the game Mortal Kombat has sold a record 5 million copies at about $65 apiece (102).
The advanced technology in upcoming videogame machines even allows players to interact with screen images in ways never before possible. Analysts in this field say that it is only a prelude to the emerging worldwide network popularly known as the electronic information highway ( ).
Two of Japan's formidable corporate giants, Sega of America Inc. and Nintendo of America Inc., are the real force behind this growing phenomenon. "The worldwide home-videogame market which they dominate is worth around $20 billion, of which about two-thirds represents the games themselves and one third the machines they are played on.... Their empires are based on a manufacturing and distribution system built around cartridges and dedicated machines" (Massacre 71). Their battle for share of this massive multi-billion dollar worldwide market and their expensive advertising battles have attracted public attention (Hulme 20). For instance, a ten-million-dollar marketing budget and the publicity fuelled a national debate on videogame violence, which obviously helped Mortal Kombat finish the year as the top-selling videogame; the game was expected to bring in $150 million in revenue by the end of 1994 (20).
Who else is to blame other than the technology itself for the public outcry against the violence and sex in videogames? Some computer experts say that with a modem, most videogames could be made accessible, just like making airline reservations and holding library books. It is that easy. Evans, a concerned mother of two boys, complains, "You see, those mothers know when their kids go to the mall or some place like that, they won't be able to buy cigarettes, or alcohol or pornographic magazines." Evans continues, "But kids can walk into any movie rental store and pick up one of these violent video games, nobody will say no. The parents feel like they lost control with these games" (Browning 691).
Alain Jehlen says, "Our children are spending countless hours with these machines, their eyes glued to the screen, their fingers madly pressing buttons to meet contrived on-screen challenges" (74).
Nowadays, the technology has become even more advanced and is able to produce CD-quality stereo sound and three-dimensional images without any jerky movements. The result is more like a short action film than the stick figures of the computer programs of the mid-eighties (Source). For now, overall sales of 16-bit hardware systems have slowed dramatically as videogame players wait for the next generation of 32-bit and 64-bit systems due next year from Nintendo, Sega and Sony (Fitzgerald **).
For now, the main issue is violence. Richard Brandt, in his essay titled "VIDEOGAMES: Is All That Gore Really Child's Play?", paints some graphic pictures of the hard-core violence ( ). While Nintendo, the trend- and price-setter of the industry, acknowledges that two-thirds of its consumers are under 15, it released a game called Street Fighter II, which features a character gnawing on an opponent's head, with all the usual violent formula. In its other massive hit, Mortal Kombat, each time a fighter lands a good punch or kick the victim emits bright red animated blood. Players can win a fatality bonus: the winner might knock an opponent down and rip out the still-pulsating heart by hand, or tear off the opponent's head and hold it up victoriously with the spinal cord dangling from its neck (Brandt 38). Brandt also observed a 19-year-old playing Mortal Kombat in a San Francisco arcade who enjoyed the fact that "it doesn't look fake. It's a lot more real with all the blood and stuff" ( ). What a taste! What is the purpose of videogame producers other than making money by exploiting animal desire and pure violence in the minds of our younger generation? (38). In fact, there appears to be a growing demand for more violent sports games as well. Bing Gordon, a senior vice president of Electronic Arts Inc., revealed that when he recently tested a new hockey game with 25-year-olds, they demanded, "Where is the blood?" (38).
Tetzeli points out that up until now videogames have been only a boys' game; so far, girls have not been a factor. Most of the titles target only male audiences and are based on boys' themes such as street fighting, car racing, and football. But now Sega of America's CEO, Kalinske, has turned his attention to the women's segment and set apart a team of well-known female marketers and game makers to produce games suited to feminine taste. In that way they are strategically targeting an entry into our living rooms like that of the television (116).
"It is a cultural disaster," lamented a successful producer of NOVA, Mr. Jehlen, a popular science documentary and an accomplished writer, "but it does not have to be a negative force. While most of the games of today's market show shooting and kicking and not of much thought, the videogame format has tremendous potential"(74). He believes that a video game can be a mental exercise machine(Jehlen 74).
Since the videogame industry is a relatively fast-emerging one, scientists have performed relatively little research in this area. While some condemn it outright, others endorse it conditionally; definite answers are not yet available, despite a few thorough articles on the subject, as it is a fairly new field of research. Researchers claim that videogames can help develop necessary problem-solving abilities, pattern recognition, resource management, logistics, mapping, memory, quick thinking, and reasoned judgement ( ). Learning when to fight and when to run actually helps in real-life situations (207). Brody honestly believes that videogames can give children a sense of mastery. For them, success becomes like an addiction, and each time they play, the games nourish them with constant doses of the small successes they deserve, helping them become "confident citizens" (53).
Peter Huber, a senior fellow of the Manhattan Institute, surprised many people by testifying that his 6-year-old daughter learned basic musical notation by shooting ducks on an electronic keyboard linked through MIDI to software running on another computer. Huber testifies, "Television - even Sesame Street - holds no interest for her at all. What I see in her experience is the face of learning transformed, almost beyond recognition." This proud father further admonishes, "Don't let your children (or maybe grandchildren) miss the train" (182).
Referring to the findings of some doctors, Sheff points out that playing games has the power to soothe pain for two reasons. First, it has been shown clinically that the more players interact with a game with undivided attention, the more all kinds of pain and everything else fade from awareness. Second, the player's highly excited state of mind generates a steady flow of a "feel-good" chemical called endorphin into the bloodstream. Endorphin is known as a natural pain suppressant and produces a sense of euphoria. Playing games like Nintendo can create a sort of high, like that of jogging (204).
In an interview, the chairperson of the Information Systems and Computer Science department and his staff, who have a working knowledge of interactive multimedia and computer-generated games, acknowledged that the modern information superhighway is clearly a link to the world, that it is here to stay, and that it cannot be ignored. Dr. Ellis Brett recalls that a decade ago we lived in a world of isolation with TV and mainframe computers, but now we live in the "digital age", an age that brings virtually everything onto a cartridge or a CD. He concludes, "A teacher, a desk, and a book are not adequate any more."
This point brings us back to the original thesis: that videogame technology must be recognised for its role and influence on the younger generation because, for better or worse, it clearly affects their academic and social life.
The problem here, however, is not with educational or entertainment games but with the violent and substandard ones. Many parents see their children learning values from mass media and video parlours rather than from schools, churches or homes. According to Browning, two disturbing facts have emerged from the congressional debate over violence in the media: first, most parents feel that they are engaged in a battle with computer technology; second, some parents apparently feel they are losing that battle (691).
In the April 1994 issue of Marketing Age, Kate Fitzgerald reports that growing public outcry during 1993 forced Nintendo to remove the most violent scenes from Mortal Kombat's home version. But Sega, and America's own competitor Atari, refused to reduce the violent content. As a result, Nintendo fell from No. 1 to No. 2 and lost market share and millions of dollars. Unmoved by the public fury, Sega took the No. 1 position in the industry. Hayao Nakayama, president and CEO of Sega Enterprises, says: "Unfortunately, Nintendo is going down." Nintendo's genuine efforts backfired; instead of deterring blood lust, they drove more than 1 million action-hungry teenagers to its rival Sega, which offered the pure hard-core violence of the original arcade game. This year (1994) Nintendo has learned its hard lessons, acknowledged its past "mistakes" and come to terms with its moral responsibilities, and it is expected to pick up significantly, but whether it can regain its old No. 1 position remains questionable (Fitzgerald 3).
One recent development, as part of a new industry policy urged by Congress and by arm-twisting tactics, is that both companies have "voluntarily" added on-package messages warning that some content may not be suitable for players under 17 - this despite evidence that such warnings sometimes increase sales of violent videogames (Fitzgerald 3).
It is a good sign, and a relief, that in the past few months the ever-growing violence of videogames has swept over even Congress. Herbert H. Kohl, D-Wis., the chairman of the Senate Judiciary Subcommittee on Juvenile Justice, and Joseph I. Lieberman, D-Conn., the chairman of the Senate Government Affairs Subcommittee on Regulation and Government Information, have looked beyond television to violence in videogames (Browning 691).
During a recent (1993) congressional hearing, Senator Herbert Kohl announced that if the videogame industry does not monitor its content, Congress will. Kohl and other senators are co-sponsoring a bill that gives the videogame industry a one-year ultimatum to create a set of standards that would likely include industry-wide ratings. The latest development, on March 4, 1994, is that because of pressure from the public, other interest groups and Congress, the game makers voluntarily came forward to announce the creation of an Industry Rating Council and embraced self-regulation ( ).
The New York Times of 15 June 1994 reports that the industry's principal trade group recently made two important announcements. The good news is that the computer games industry will develop a rating system to voluntarily label the amount of sex and violence in the roughly 2,000 new games that reach the market each year. The bad news, according to the trade group's Mr. Wasch, is that the 5,000 computer games already in stores would not be rated (Rating 36).
Bob Garfield, an expert on videogame software, in his regular column in Advertising Age angrily calls the hideous manipulation of children's psyches a disgrace, and further charges the industry with aiming to "be heard by exploiting kids' distress" (21). During an on-line interview with Garfield through the Ad Age Bulletin Board Service (BBS) on Prodigy at EFPB35A, he responded by reiterating the same idea with more statistical data. To some extent he sounds reasonable and his comments are logical, but where his criticism of the US-based Atari and other similar producers is concerned, he is more business-oriented and patriotic. He seems to be concerned more with the financial point of view than with the cultural and moral aspects.
The question remains: who judges the culprit? Should it be the culprits themselves or a responsible government agency? Leaving it to the industry is a sheer mockery of the censor board and the justice system as a whole. There should be equality under the law: cinema, videogames and any other media in this matter should be treated equally. The videogame industry is not only controlled by two large Japanese corporations, Nintendo and Sega, but it also severely affects the very fabric of the younger generation and of society as a whole, economically. The government should take the moral responsibility to curb these illicit effects on its future citizens by establishing a uniform rating code across all 50 states, like the censorship applied to cinema and other popular media.
For example, in the Hawaiian islands, stores that sell fake guns, combat videogames and other war toys may be forced to post warnings so that the general public is informed about playthings that can increase "anger and violence" in children. The state of Hawaii has passed a bill which would require stores to place signs on shelves stating: "Warning. Think before you buy. This is a war toy. Playing with it increases anger and violence in children. Is this what you really want for your child?" (WAR TOYS). This may not be enough by itself to control videogames with violent content, but the warning at least gives parents a chance to pause for a moment before they decide to buy anything for their offspring.
A voluntary rating system, or any other form of self-regulation, will only widen the loopholes of the existing system. Bringing this multi-billion dollar industry under the existing film rating system, or something similar, would greatly reduce the risk of violence and ultimately help prevent young people from turning to violent solutions for all their problems. It would also help foster a violence-free lifestyle and encourage the younger generation to spend their quality time on their studies and with their parents. Any other arrangement will, at best, merely delay the process of controlling the violent themes and content of the many thousands of videogames yet to be produced or released.
f:\12000 essays\sciences (985)\Computer\Virtual Reality 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Virtual Reality - What it is and How it Works
Imagine being able to point into the sky and fly. Or
perhaps walk through space and connect molecules together.
These are some of the dreams that have come with the
invention of virtual reality. With the introduction of
computers, numerous applications have been enhanced or
created. The newest technology that is being tapped is that
of artificial reality, or "virtual reality" (VR). When
Morton Heilig first got a patent for his "Sensorama
Simulator" in 1962, he had no idea that 30 years later
people would still be trying to simulate reality and that
they would be doing it so effectively. Jaron Lanier first
coined the phrase "virtual reality" around 1989, and it has
stuck ever since. Unfortunately, this catchy name has
caused people to dream up incredible uses for this
technology including using it as a sort of drug. This became
evident when, among other people, Timothy Leary became
interested in VR. This has also worried some of the
researchers who are trying to create very real applications
for medical, space, physical, chemical, and entertainment
uses among other things.
In order to create this alternate reality, however, you
need to find ways to create the illusion of reality with a
piece of machinery known as the computer. This is done with
several computer-user interfaces used to simulate the
senses. Among these, are stereoscopic glasses to make the
simulated world look real, a 3D auditory display to give
depth to sound, sensor lined gloves to simulate tactile
feedback, and head-trackers to follow the orientation of the
head. Since the technology is fairly young, these
interfaces have not been perfected, making for a somewhat
cartoonish simulated reality.
Stereoscopic vision is probably the most important
feature of VR because in real life, people rely mainly on
vision to get places and do things. The eyes are
approximately 6.5 centimeters apart, and allow you to have a
full-colour, three-dimensional view of the world.
Stereoscopy, in itself, is not a very new idea, but the new
twist is trying to generate completely new images in real-
time. In 1838, Sir Charles Wheatstone invented the first
stereoscope with the same basic principle being used in
today's head-mounted displays. Presenting different views
to each eye gives the illusion of three dimensions. The
glasses that are used today work by using what is called an
"electronic shutter". The lenses of the glasses interleaveÔe inflating air bladders in a glove,
arrays of tiny pins moved by shape memory wires, and even
fingertip piezoelectric vibrotactile actuators. The latter
method uses tiny crystals that vibrate when an electric
current stimulates them. This design has not really taken
off however, but the other two methods are being more
actively researched. According to a report called "Tactile
Sensing in Humans and Robots," distortions inside the skins
cause mechanosensitive nerve terminals to respond with
electrical impulses. Each impulse is approximately 50 to
100mV in magnitude and 1 ms in duration. However, the
frequency of the impulses (up to a maximum of 500/s) depends
on the strength of the stimulus. Virtual reality is also
being applied to exploration simulations. Such things as virtual wind
tunnels have been in development for a couple years and
could save money and energy for aerospace companies.
Medical researchers have been using VR techniques to
synthesize diagnostic images of a patient's body to do
"predictive" modeling of radiation treatment using images
created by ultrasound, magnetic resonance imaging, and X-
ray. A radiation therapist in a virtual world could view
and expose a tumour at any angle and then model specific
doses and configurations of radiation beams to aim at the
tumour more effectively. Since radiation destroys human
tissue easily, there is no allowance for error.
Also, doctors could use "virtual cadavers" to practice
rare operations which are tough to perform. This is an
excellent use because one could perform the operation over
and over without the worry of hurting any human life.
However, this sort of practice may have its limitations
because of the fact that it is only a virtual world. As
well, at this time, the computer-user interfaces are not
well enough developed and it is estimated that it will take
5 to 10 years to develop this technology.
In Japan, a company called Matsushita Electric Works Ltd.
is using VR to sell their products. They employ a VPL
Research head-mounted display linked to a high-powered
computer to help prospective customers design their own
kitchens. Being able to see what your kitchen will look
like before you actually refurnish could help you save from
costly mistakes in the future.
The entertainment industry stands to gain a lot from VR.
f:\12000 essays\sciences (985)\Computer\Virtual Reality.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Joe Blige
Virtual Reality
Virtual reality, while still extremely new, has recently become the topic of many
opposing viewpoints. It has caught the eye of the general public for several reasons.
Perhaps, this is mainly because of all the possibilities which virtual reality creates.
Note that the possibilities are not pre-determined as either good or bad, mainly because
there are many different opinions to the future of this developing technology. However,
despite the controversy this new technology has aroused, society should not remain
skeptical. Virtual reality has the potential, if used correctly, to become a great
technological advancement that will aid society in many ways.
In the past, virtual reality has been nothing more than a small step beyond video
games.
However, it is now apparent that this technology can be used for more practical purposes.
These purposes include national defense, surgical procedures and various other applications.
Society has not fully acknowledged the benefits of virtual reality as of yet because it is
still under development. The reason for virtual reality remaining in its development for so
long is mainly due to its complexity. The hardware that has developed so far is unable to
make the large calculations required by a virtual reality based machine. However, as
apparent in recent years, technology is advancing at an extreme rate. This is another
reason why society's hopes for virtual reality should remain, and have remained, unwavering.
In Orenstein's story, she gives the perspective of the average citizen who is obviously
uncertain about the uses and/or effects that virtual reality will have upon society. The
show she attended was quick to point out the practicality of virtual reality; however, it
still left much to be desired. It seems that Orenstein was disgruntled when she came to an
exhibit and the topic of cyber-sex was raised. Perhaps it wasn't just that it came up but
more like how it came up. The idea of a man and woman being in a virtual world and a man
fondling the woman's breasts was probably, although very much possible, not a great first
impression. It gave Orenstein the opportunity to explore the evils that virtual reality
makes possible.
After a while, Orenstein realizes that just like the computing age has hackers, the
virtual
age will have its own high-tech delinquents.
You can't prevent technology from being abused. There will be those who
use VR rudely, stupidly, dangerously--just as they do the telephone or
computer. Like the telephone and the modem, its popular rise will also
eliminate the need for certain fundamental kinds of human contact, even
as it enhances our ability to communicate. (Orenstein 258)
Here she is quick to point out that because virtual reality is such a new technology it is
extremely possible for hackers to have their way with it. Perhaps she also points out that
in order for society to accept this new technology they will have to accept its risks as
well.
In the government's perspective use of virtual reality it is easy to see how this
technology
proves useful. Supposing that the United States got into a war, by using virtual reality
pilots instead of real pilots the number of casualties would obviously be less. Pilots
would fly their aircraft from a remote location via video and audio equipment in the form of
virtual reality. As technology increases over the next several years it will become easier
and easier for the pilots to fly planes from a remote location.
However, despite all the lives this may save there is a down side. The down side being
that perhaps this will stimulate the government to react more easily in a violent way.
Without any loss of lives the only thing the government has to lose by attacking are the
cost of planes. Keeping this idea in mind, it is very likely that the US will spend less
time negotiating and more time fighting. This is most definitely a negative side effect of
virtual reality because it will weaken the relationship that the US has with other
countries.
Integrating virtual reality with society is where the majority of problems occur. It
is
clearly apparent that because this technology is so new society is unsure how it will fit
in. This is also a good example of why people's opinions are so varied. Some people see
virtual reality as just another tool which will aid society in several ways. Others see it
as dominating society all together and affecting everyone's lives everyday. It obviously
has the potential to be both and it is easy to see why people are so hesitant to decide.
Perhaps another reason for society's lack of optimism is the fear of somehow being
removed from actual reality. Although quite ironic, for a long time society has had a fear
that technology will someday take control of people's lives. Perhaps the fear is of
technology becoming so advanced that people will no longer be able to tell whether they are
in virtual or actual reality. It is clear that technology has definitely affected society
in recent years. However, it is quite difficult to predict the role of technology in the
future. The potential for technology is certainly there; it just needs to be focused in the
right direction.
Technology most definitely has the ability to run out of control. The idea alone of
man creating technology and having it run out of control is something society has been
fascinated with for many years. Books and movies depicting technology overwhelming society
have been created with much of this idea in mind. Perhaps it is possible that virtual
reality will be the technology which man is unable to control and which will take over all of
society. If this were the case, society and the people within it would become uncertain
whether they were in virtual or actual reality. It must be pointed out, however, that due to
the cautious nature of society in general, it is very unlikely that anything like this will
ever actually occur. If society is intelligent enough to invent such a technology, it should
be able to determine and control its consequences.
Orenstein brings up a good point when she says, "This time, we have the chance to enter
the debate about the direction of a revolutionary technology, before that debate has been
decided for us" (258). Oftentimes in the past, society as a whole has been subject to
decisions made by the creators of new technology. In this quote, however, Orenstein
points out that with this technology people should not only try but make it a priority to
get involved. She, as many others do, sees this technology as having a huge amount of
potential. Without the direction and influence of society upon virtual reality, it could go
to waste or, even worse, turn into society's enemy of sorts.
Towards the end of the story she tries to depict how virtual reality will have an impact
upon society whether it likes it or not.
As I rode down the freeway, I found myself going a little faster than usual,
edging my curves a little sharper, coming a little closer than was really
comfortable to the truck merging in the lane ahead of me. Maybe I was just
tired. It had been a long night. But maybe it just doesn't take the mind that
long to grab onto the new and make it real. Even when you don't want it to.
She depicts that no matter how aware society is of virtual reality, the human brain
still has instincts that cannot be controlled. That is one of the drawbacks of virtual
reality: no one is sure what to expect. Just as with any other technology, the only
way to find out the results of virtual reality is to test its limits.
Knowing that virtual reality has the ability to affect so many people in such a large
number of ways, there needs to be some kind of limitation. This brings up another key
controversy as to who should be in control of limiting this virtual world. If the
government is in control, it could easily be abused and mishandled. However, if society as a
whole is left to contemplate its uses, the effects could be either good or bad.
Although society knows a lot about virtual reality, there is still so much that it doesn't
know. Perhaps in the coming years, new technology will come out and people will learn more
about this virtual world. However, until that time, the questions will remain numerous and
the answers doubtful, yet the possibilities are unlimited.
f:\12000 essays\sciences (985)\Computer\Virtual Reality1.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Today, virtual reality allows people to study artificial worlds through simulation and computer graphics. Computers have changed the way we perform flight training, conduct scientific research, and do business. Flight simulators have drastically reduced the time and money required to learn to fly large jets. One of the most interesting capabilities of virtual reality is the ability to practice certain medical procedures. Computers are helping many doctors perform complicated operations more simply. Computers have changed the way we look at health problems, and they have made many once-intractable health problems far easier to address in today's society.
We have only begun to realize the extreme wastefulness of burning expensive fuel in aircraft in order to learn something in an hour that could be taught in ten minutes in a simulator. Simulators have come a long way since 1929, when Ed Link first built what was soon to be known as the pilot maker, or more affectionately, the blue box. Students often find themselves sitting at the end of a runway waiting for takeoff clearance on a busy day, with the engine turning and burning expensive gas. This is not a very effective way for students to spend money. Most students do not have access to expensive flight simulators. Most have to travel hundreds of miles to take advantage of these amazing simulators. Flight simulators are much better than an airplane for the simple reason that in a simulator the learning environment is much safer. Students are able to avoid the overriding need to keep the airplane flying and out of harm's way. In a simulator a student is constantly busy, practicing what he is supposed to be learning, and once he's flown a given maneuver, he is able to go back and do it over again, without wasting time or fuel.
Years ago doctors used X-rays to see the insides of humans. X-rays were most helpful in finding broken bones, and these machines were an incredible breakthrough in their day. Today, plain X-ray images are increasingly supplemented by computer-aided volumetric images of internal organs, often referred to as cross-sectional images of the body's interior.
In the past, scars were often left behind after major surgeries. We have avoided leaving these nasty scars through fiber optics. If a patient needs surgery on an injured knee, the doctor cuts two small holes in the side of the patient's knee and glides the tiny light, camera, and operating tools inside. The doctor is able to monitor what he is doing on a color monitor screen.
Virtual reality also allows leeway for doctors' mistakes. With virtual reality a student is able to try several different operations more than once, and if the attempts fail, no patient is injured. Before virtual reality, students were often required to operate on animals. Because of virtual reality we are able to save money along with animals' lives.
We have come a long way in virtual reality since World War II. We have been able to save time, money and many lives, in both medical and flight training. The human race has many new and exciting advancements coming because of virtual reality. I hope that one day this new advancement will not be used in war tactics, rather only be useful for practical purposes.
f:\12000 essays\sciences (985)\Computer\Virus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Are "Good" Computer Viruses Still a Bad Idea?
Vesselin Bontchev
Research Associate
Virus Test Center
University of Hamburg
Vogt-Koelln-Str. 30, 22527 Hamburg, Germany
bontchev@fbihh.informatik.uni-hamburg.de [Editor's note: Vesselin's current email address is
bontchev@complex.is]
During the past six years, computer viruses have caused an incalculable amount of damage - mostly due to loss of time
and resources. For most users, the term "computer virus" is a synonym for the worst nightmares that can happen on
their system. Yet some well-known researchers keep insisting that it is possible to use the replication mechanism of
viral programs for some useful and beneficial purposes.
This paper is an attempt to summarize why exactly the general public perceives computer viruses as something
inherently bad. It also considers several of the proposed models of "beneficial" viruses and points out the
problems in them. A set of conditions is listed, to which every virus that claims to be beneficial must conform. Finally,
a realistic model using replication techniques for beneficial purposes is proposed, and directions are given in which
this technique can be improved further.
The paper also demonstrates that the main reason for the conflict between those supporting the idea of a "beneficial
virus" and those opposing it is that the two sides assume different definitions of what a computer virus is.
1. What Is a Computer Virus?
The general public usually associates the term "computer virus" with a small, nasty program which aims to destroy the
information on their machines. As usual, the general public's understanding of the term is incorrect. There are many
kinds of destructive or otherwise malicious computer programs, and computer viruses are only one of them. Such
programs include backdoors, logic bombs, trojan horses and so on [Bontchev94]. Furthermore, many computer
viruses are not intentionally destructive - they simply display a message, play a tune, or even do nothing noticeable at
all. The important thing, however, is that even those not intentionally destructive viruses are not harmless - they
cause a lot of damage in terms of the time, money and resources spent to remove them - because they are generally
unwanted and the user wishes to get rid of them.
A much more precise and scientific definition of the term "computer virus" has been proposed by Dr. Fred Cohen in
his paper [Cohen84]. This definition is mathematical - it defines the computer virus as a sequence of symbols on the
tape of a Turing Machine. The definition is rather difficult to express exactly in a human language, but an approximate
interpretation is that a computer virus is a "program that is able to infect other programs by modifying them to include
a possibly evolved copy of itself".
Unfortunately, there are several problems with this definition. One of them is that it does not mention the possibility
of a virus infecting a program without modifying it - by inserting itself in the execution path. Some typical examples
are the boot sector viruses and the companion viruses [Bontchev94]. However, this is a flaw only of the
human-language expression of the definition - the mathematical expression defines the terms "program" and "modify"
in a way that clearly includes the kinds of viruses mentioned above.
A second problem with the above definition is its lack of recursiveness. That is, it does not specify that after infecting
a program, a virus should be able to replicate further, using the infected program as a host.
Another, much more serious problem with Dr. Cohen's definition is that it is too broad to be useful for practical
purposes. In fact, his definition classifies as "computer viruses" even such cases as a compiler which is compiling its
own source, a file manager which is used to copy itself, and even the program DISKCOPY when it is on a diskette
containing the operating system - because it can be used to produce an exact copy of the programs on this diskette.
In order to understand the reason for the above problem, we should pay attention to the goal for which Dr. Cohen's
definition was developed. His goal was to prove several interesting theorems about the computational
aspects of computer viruses [Cohen89]. In order to do this, he had to develop a mathematical (formal) model of the
computer virus. For this purpose, one needs a mathematical model of the computer. One of the most commonly used
models is the Turing Machine (TM). Indeed, there are a few others (e.g., Markov chains, the Post Machine, etc.),
but they are not as convenient as the TM and all of them are proven to be equivalent to it.
Unfortunately, in the environment of the TM model, we cannot speak about "programs" which modify "other
programs" - simply because a TM has only one, single program - the contents of the tape of that TM. That's why
Cohen's model of a computer virus considers the history of the states of the tape of the TM. If a sequence of symbols
on this tape appears at a later moment somewhere else on the tape, then this sequence of symbols is said to be a
computer virus for this particular TM. It is important to note that a computer virus should always be considered
relative to some given computing environment - a particular TM. It can be proven ([Cohen89]) that for any particular
TM there exists a sequence of symbols which is a virus for that particular TM.
Finally, technical computer experts usually use definitions of the term "computer virus" which are less precise
than Dr. Cohen's model, while at the same time being much more useful for practical purposes and still much
more accurate than the general public's vague understanding of the term. One of the best such definitions is ([Seborg]):
"We define a computer 'virus' as a self-replicating program that can 'infect' other programs by modifying
them or their environment such that a call to an 'infected' program implies a call to a possibly evolved, and in
most cases, functionally similar copy of the 'virus'."
The important thing to note is that a computer virus is a program that is able to replicate by itself. The definition does
not specify explicitly that it is a malicious program. Also, a program that does not replicate is not a virus, regardless of
whether it is malicious or not. Therefore maliciousness is neither a necessary nor a sufficient property for a
program to be a computer virus.
Nevertheless, in the past ten years a huge number of intentionally or unintentionally destructive computer viruses
have caused an incalculable amount of damage - mostly due to the loss of time, money, and resources needed to eradicate them
- because in all cases they have been unwanted. Some damage has also been caused by a direct loss of valuable
information due to an intentionally destructive payload of some viruses, but this loss is relatively minor when
compared to the main one. Lastly, a third, indirect kind of damage is caused to society - many users are forced to
spend money on buying and time on installing and using several kinds of anti-virus protection.
Does all this mean that computer viruses can be only harmful? Intuitively, computer viruses are just a kind of
technology. As with any other kind of technology, they are ethically neutral - they are neither "bad" nor "good" - it is
the purposes that people use them for that can be "bad" or "good". So far they have been used mostly for bad purposes.
It is therefore natural to ask the question whether it is possible to use this kind of technology for good purposes.
Indeed, several people have asked this question - with Dr. Cohen being one of the most active proponents of the idea
[Cohen91]. Some less qualified people have even attempted to implement the idea, but have failed miserably (see
section 3). It is natural to ask - why? Let's consider the reasons why the idea of a "good" virus is usually rejected by the
general public. In order to do this, we shall consider why people think that a computer virus is always harmful and
cannot be used for beneficial purposes.
2. Why Are Computer Viruses Perceived as Harmful?
About a year ago, we asked the participants of the electronic forum Virus-L/comp.virus, which is dedicated to
discussions about computer viruses, to list all the reasons they could think of why they perceive the idea of a
"beneficial" virus as a bad one. What follows is a systematized and generalized list of those reasons.
2.1. Technical Reasons
This section lists the arguments against the "beneficial virus" idea that have a technical character. They are usually
the most objective ones.
2.1.1. Lack of Control
Once a computer virus is released, the person who released it has no control over how it will spread. It jumps
from machine to machine, using the unpredictable patterns of software sharing among the users. Clearly, it can easily
reach systems on which it is not wanted or on which it would be incompatible with the environment and would cause
unintentional damage. It is not possible for the virus writer to predict on which systems the virus will run, and
therefore it is impossible to test the virus on all those systems for compatibility. Furthermore, during its spread, a
computer virus could even reach a system that did not exist when the virus was created - and therefore it was
impossible to test the virus for compatibility with this system.
The above is not always true - that is, it is possible to test the virus for compatibility on a reasonably large number of
systems that are supposed to run it. However, it is the damaging potential of a program that is spreading out of control
which scares the users.
2.1.2. Recognition Difficulty
Currently a lot of computer viruses already exist which are either intentionally destructive or otherwise harmful.
There are a lot of anti-virus programs designed to detect and stop them. All those harmful viruses are not going to
disappear overnight. Therefore, if one develops a class of beneficial viruses and people actually begin to use them,
then the anti-virus programs will have to be able to distinguish between the "good" and the "bad" viruses - in
order to let the former in and keep the latter out.
Unfortunately, in general it is theoretically impossible even to distinguish between a virus and a non-viral program
([Cohen89]). There is no reason to think that distinguishing between "good" and "bad" viruses will be much easier.
While it might be possible to distinguish between them using virus-specific anti-virus software (e.g., scanners), we
should not forget that many people are relying on generic anti-virus defenses, for instance based on integrity checking.
Such systems are designed to detect modifications, not specific viruses, and therefore will be triggered by the
"beneficial" virus too, thus causing an unwanted alert. Experience shows that the cost of such a false positive is the
same as that of a real infection with a malicious virus - because the users waste a lot of time and resources looking for a
non-existent problem.
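To make the false-positive problem concrete, here is a minimal Python sketch of how a generic, hash-based integrity checker typically works; the file and baseline names are illustrative assumptions, not a description of any particular product. It records a cryptographic hash of each monitored file and later flags any file whose hash has changed - it has no way of telling whether the change came from a "good" or a "bad" virus.

    # Minimal sketch of a generic, hash-based integrity checker. It records a
    # SHA-256 hash of each monitored file and later reports any change. It has
    # no notion of "good" or "bad" modifications, so a "beneficial" virus that
    # attaches itself to a program raises exactly the same alarm as a malicious one.
    import hashlib
    import json
    from pathlib import Path

    BASELINE = Path("integrity_baseline.json")  # hypothetical baseline location


    def file_hash(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()


    def record_baseline(files: list[Path]) -> None:
        BASELINE.write_text(json.dumps({str(f): file_hash(f) for f in files}))


    def check(files: list[Path]) -> list[str]:
        baseline = json.loads(BASELINE.read_text())
        return [f"{f}: contents changed since baseline"
                for f in files
                if baseline.get(str(f)) != file_hash(f)]

A user running such a checker would spend the same effort investigating an alert caused by a "beneficial" infector as one caused by a destructive virus, which is exactly the cost described above.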
2.1.3. Resource Wasting
A computer virus eats up disk space, CPU time, and memory resources during its replication; it is a
self-replicating resource eater. One typical example is the Internet Worm, accidentally released by a
Cornell student. It was not designed to be intentionally destructive, but in the process of its replication, its
multiple copies used so many resources that they practically brought down a large portion of the Internet.
Even when a computer virus uses a limited amount of resources, it is considered a bad thing by the owner of the
machine on which it runs, if it happens without authorization.
2.1.4. Bug Containment
A computer virus can easily escape the controlled environment, and this makes it very difficult to test such programs
properly. And indeed, experience shows that almost all computer viruses released so far suffer from significant bugs,
which either prevent them from working in some environments, or even cause unintentional damage in those
environments.
Of course, any program can (and usually does) contain bugs. This is especially true for large and complex software
systems. However, a computer virus is not just a normal buggy program. It is a self-spreading buggy program which is
out of control. Even if the author of the virus discovers a bug at a later time, there is the almost intractable problem
of recalling all existing copies of the virus and replacing them with fixed new versions.
2.1.5. Compatibility Problems
A computer virus that can attach itself to any of the user's programs would disable the several programs on the market
that perform a checksum on themselves at runtime and refuse to run if modified. In a sense, the virus will perform a
denial-of-service attack and thus cause damage.
Another problem arises from some attempts to solve the "lack of control" problem by creating a virus that asks for
permission before infecting. Unfortunately, this causes an interruption of the task currently being executed until the
user provides the proper response. Besides being annoying for the user, it can sometimes even be dangerous.
Consider the following example.
It is possible that a computer is used to control some kind of life-critical equipment in a hospital. Suppose that such a
computer gets infected by a "beneficial" computer virus which asks for permission before infecting any particular
program. Then it is perfectly possible that a situation arises in which a particular program has to be executed for the first
time after the virus has appeared on the computer, and this program has to urgently perform some task which is
critical for the life of a patient. If at that time the virus interrupts the process with the request for permission to infect
this program, then the resulting delay (especially if there is no operator around to authorize or deny the request) could
easily result in the death of the patient.
2.1.6. Effectiveness
It is argued that any task that could be performed by a "beneficial" virus could also be performed by a non-replicating
program. Since some risks follow from the capability of self-replication, it would therefore be much better to use a
non-replicating program instead of a computer virus.
2.2. Ethical and Legal Reasons
The following section lists the arguments against the "beneficial virus" idea that are of an ethical or legal kind. Since
neither ethics nor legal systems are universal across human society, those arguments are likely to carry different
weight in different countries. Nevertheless, they have to be taken into account.
2.2.1. Unauthorized Data Modification
It is usually considered unethical to modify other people's data without their authorization. In many countries this is
also illegal. Therefore, a virus which performs such actions will be considered unethical and/or illegal, regardless of
any positive outcome it could bring to the infected machines. Sometimes this problem is perceived by the users as "the
virus writer claims to know better than I do what software I should run on my machine".
2.2.2. Copyright and Ownership Problems
In many cases, modifying a particular program could mean that copyright, ownership, or at least technical support
rights for this program are voided.
We have witnessed such an example at the VTC-Hamburg. One of the users who called us for help with a computer
virus was a sight-impaired lawyer, who was using special Windows software to display the documents he was working
on in a large font on the screen - so that he could read them. His system was infected by a relatively non-damaging
virus. However, when the producer of the software learned that the machine was infected, they refused any technical
support to the user until the infection was removed and their software reinstalled from clean originals.
2.2.3. Possible Misuse
An attacker could use a "good" virus as a means of transportation to penetrate a system. For instance, a person with
malicious intent could get a copy of a "good" virus and modify it to include something malicious. Admittedly, an
attacker could trojanize any program, but a "good" virus will provide the attacker with means to transport his
malicious code to a virtually unlimited population of computer systems. The potential to be easily modified to carry
malicious code is one of the things that makes a virus "bad".
2.2.4. Responsibility
Declaring some viruses "good" and "beneficial" would just provide an excuse for the crowd of irresponsible virus
writers to justify their activities and to claim that they are actually doing some kind of "research". In fact, this is
already happening - the people mentioned above often quote Dr. Fred Cohen's ideas for beneficial viruses as an
excuse for what they are doing - often without even bothering to understand what Dr. Cohen is talking about.
2.3. Psychological Reasons
The arguments listed in this section are of a psychological kind. They are usually the result of some kind of
misunderstanding and should be considered an obstacle that has to be "worked around".
2.3.1. Trust Problems
Users like to think that they have full control over what is happening in their machines. The computer is a very
sophisticated device. Most computer users do not understand very well how it works and what is happening inside.
The lack of knowledge and uncertainty creates fear. Only the feeling that the reactions of the machine will always be
known, controlled, and predictable can help the users overcome this fear.
However, a computer virus steals control of the computer from the user. The virus activity ruins the trust that the
user has in his/her machine, because it causes the user to lose the belief that s/he can control this machine. This
may be a source of permanent frustration.
2.3.2. Negative Common Meaning
For most people, the term "computer virus" is already loaded with negative meaning. The media has already widely
established the belief that a computer virus is a synonym for a malicious program. In fact, many people apply the word
"virus" to malicious programs that are unable to replicate - like trojan horses, or even bugs in perfectly legitimate
software. People will never accept a program that is labelled a computer virus, even if it claims to do something
useful.
3. Some Bad Examples of "Beneficial" Viruses
Regardless of all the objections listed in the previous section, several people have asked themselves whether a
computer virus could be used for something useful, instead of only for destructive purposes.
And several people have tried to answer this question positively. Some of them have even implemented their ideas in
practice and have been experimenting with them in the real world - unfortunately, without success. In this section we
shall present some of the unsuccessful attempts to create a beneficial virus, and explain why they have been
unsuccessful.
3.1. The "Anti-Virus" Virus
Some computer viruses are designed to work not only in a "virgin" environment of infectable programs, but also on
systems that include anti-virus software and even other computer viruses. In order to survive successfully in such
environments, those viruses contain mechanisms to disable and/or remove the said anti-virus programs and
"competitor" viruses. Examples of such viruses in the IBM PC environment are Den_Zuko (removes the Brain virus
and replaces it with itself), Yankee_Doodle (the newer versions are able to locate the older ones and "upgrade" the
infected files by removing the older version of the virus and replacing it with the newer one), Neuroquila (disables
several anti-virus programs), and several other viruses.
Several people have had the idea to develop the above behaviour further and to create an "anti-virus" virus - a virus
which would be able to locate other (presumably malicious) computer viruses and remove them. Such a
self-replicating anti-virus program would have the benefit of spreading very fast and updating itself automatically.
Several viruses have been created as implementations of the above idea. Some of them locate a few known viruses
and remove them from the infected files, others attach themselves to clean files and issue an error message if
another piece of code becomes attached after the virus (assuming that it has to be an unwanted virus), and so on.
However, all such pieces of "self-replicating anti-virus software" have been rejected by the users, who have considered
the "anti-virus" viruses just as malicious and unwanted as any other real computer virus. In order to understand why, it
is enough to realize that the "anti-virus" viruses match several of the rules that state why a replicating program is
considered malicious and/or unwanted. Here is a list of them for this particular idea.
First, this idea violates the Control condition. Once the "anti-virus" virus is released, its author has no means to
control it.
Second, it violates the Recognition condition. A virus that attaches itself to executable files will definitely trigger the
anti-virus programs based on monitoring or integrity checking. There is no way for those programs to decide whether
they have been triggered by a "beneficial" virus or not.
Third, it violates the Resource Wasting condition. Adding an almost identical piece of code to every executable file on
the system is definitely a waste - the same purpose can be achieved with a single copy of the code and a single file,
containing the necessary data.
Fourth, it violates the Bug Containment condition. There is no easy way to locate and update or remove all instances
of the virus.
Fifth, it causes several compatibility problems, especially for the self-checking programs, thus violating the
Compatibility condition.
Sixth, it is not as effective as a non-viral program, thus violating the Effectiveness condition. A virus-specific
anti-virus program has to carry thousands of scan strings for the existing malicious viruses - it would be very
ineffective to attach a copy of it to every executable file. Even a generic anti-virus (i.e., one based on monitoring or
integrity checking) would be more effective if it exists only as a single copy and is executed under the control of the
user.
Seventh, such a virus modifies other people's programs without their authorization, thus violating the Unauthorized
Modification condition. In some cases such viruses ask the user for permission before "protecting" a file by infecting
it. However, even in those cases they cause unwanted interruptions, which, as we already demonstrated, in some
situations can be fatal.
Eighth, by modifying other programs such viruses violate the Copyright condition.
Ninth, at least with the current implementations of "anti-virus" viruses, it is trivial to modify them to carry destructive
code - thus violating the Misuse condition.
Tenth, such viruses are already widely used as examples by virus writers when they are trying to defend their
irresponsible actions and to disguise them as legitimate research - thus the idea violates the Responsibility condition
too.
As we can see from the above, the idea of a beneficial anti-virus virus is "bad" according to almost any of the criteria
listed by the users.
3.2. The "File Compressor" Virus
This is one of the oldest ideas for "beneficial" viruses. It is first mentioned in Dr. Cohen's original work [Cohen84].
The idea consists of creating a self-replicating program which will compress the files it infects before attaching itself
to them. Such a program is particularly easy to implement as a shell script for Unix, but it is perfectly doable for the
PC too. And it has already been done - there is a family of MS-DOS viruses, called Cruncher, which appends itself to
executable files, then compresses the infected file using Lempel-Ziv-Huffman compression, and then prepends a
small decompressor which decompresses the file in memory at runtime.
Regardless of the supposed benefits, this idea also fails the test of the criteria listed in the previous section. Here is
why.
First, the idea violates the Control condition. Once the virus is released, its author has no means to control its
spread. In the particular implementation of Cruncher, the virus writer has attempted to introduce some kind of control.
The virus asks the user for permission before installing itself in memory, causing unwanted interruptions. It is also
possible to tell the virus to install itself without asking any questions - by means of setting an environment
variable. However, there is no means to tell the virus not to install itself and not to ask any questions - which should
be the default action.
Second, the idea violates the Recognition condition. Several virus scanners detect and recognize Cruncher by name,
the process of infecting an executable triggers most monitoring programs, and the infected files are, of course,
modified, which triggers most integrity checkers.
Third, the idea violates the Resource condition. A copy of the decompressor is present in every infected file, which is
obviously unnecessary.
Fourth, the idea violates the Bug Containment condition. If bugs are found in the virus, the author has no simple
means to distribute the fix and to upgrade all existing copies of the virus.
Fifth, the idea violates the Compatibility condition. There are many files which stop working after being compressed.
Examples include programs that perform a self-check at runtime, self-modifying programs, programs with internal
overlay structure, Windows executables, and so on. Admittedly, those programs stop working even after being
compressed with a stand-alone (i.e., non-viral) compression program. However, it is much more difficult to compress
them by accident when using such a program - quite unlike the case when the user is running a compression virus.
Sixth, the idea violates the Effectiveness condition. It is perfectly possible to use a stand-alone, non-viral program to
compress the executable files and prepend a short decompressor to them. This has the added advantage that the code
for the compressor does not have to reside in every compressed file, and thus we don't have to worry about its size or
speed - because it has to be executed only once. True, the decompressor code still has to be present in each compressed
file, and many programs will still refuse to work after being compressed. The solution is to use compression not at the
file level, but at the disk level. And indeed, compressed file systems are available for many operating environments
(DOS, Novell, OS/2, Unix) and they are much more effective than a file-level compressor that spreads like a virus (a
rough sketch of such a stand-alone approach is given after this list).
Seventh, the idea still violates the Copyright condition. It could be argued that it doesn't violate the Data Modification
condition, because the user is asked to authorize the infection. We shall accept this, with the remark mentioned above -
that it still causes unwanted interruptions. It is also not trivial to modify the virus in order to make it malicious,
so we'll assume that the Misuse condition is not violated either - although no serious attempts are made to ensure that
the integrity of the virus has not been compromised.
Eighth, the idea violates the Responsibility condition. This particular virus - Cruncher - was written by the same
person who has released many other viruses - far from "beneficial" ones - and Cruncher is clearly used as an attempt to
excuse virus writing and to masquerade it as legitimate "research".
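As a rough illustration of the non-viral alternative mentioned in the Sixth point above, the following Python sketch compresses a user-supplied list of files with zlib and records their original sizes in a manifest so they can be restored later. The file naming scheme and manifest location are illustrative assumptions. The point is that it runs once, under the user's control, as a single copy, and attaches nothing to the files it processes - which is precisely what distinguishes it from Cruncher's approach.

    # Sketch of a stand-alone, non-viral compression tool run under the user's
    # control. A single copy of this code serves the whole system; nothing is
    # attached to the files it processes. Names and paths are assumptions.
    import json
    import zlib
    from pathlib import Path

    MANIFEST = Path("compressed_manifest.json")


    def compress_files(paths: list[Path]) -> None:
        manifest = {}
        for p in paths:
            data = p.read_bytes()
            Path(str(p) + ".z").write_bytes(zlib.compress(data, 9))
            manifest[str(p)] = len(data)  # remember the original size
        MANIFEST.write_text(json.dumps(manifest, indent=2))


    def restore_file(path: Path) -> None:
        compressed = Path(str(path) + ".z").read_bytes()
        path.write_bytes(zlib.decompress(compressed))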
3.3. The "Disk Encryptor" Virus
This virus has been published by Mark Ludwig - author of two books and a newsletter on virus writing, and of several
real viruses, variants of many of which are spreading in the real world, causing real damage.
The idea is to write a boot sector virus, which encrypts the disks it infects with a strong encryption algorithm (IDEA in
this particular case) and a user-supplied password, thus ensuring the privacy of the user's data. Unfortunately, this idea
is just as flawed as the previous ones.
First, it violates the Control condition. True, the virus author has attempted to introduce some means of control. The
virus is supposed to ask the user for permission before installing itself in memory and before infecting a disk.
However, this still causes unwanted interruptions and reportedly in some cases doesn't work properly - that is, the
virus installs itself even if the user has told it not to.
Second, it violates the Recognition condition. Several virus-specific scanners recognize this virus either by name or as
a variant of Stealth_Boot, which it actually is. Due to the fact that it is a boot sector infector, it is unlikely to trigger
the monitoring programs. However, the modification that it causes to the hard disk when infecting it will trigger most
integrity checkers. Those that have the capability to automatically restore the boot sector, thus removing any possibly
present virus, will cause the encrypted disk to become inaccessible and therefore cause serious damage.
Third, the idea violates the Compatibility condition. A boot sector virus that is permanently resident in memory
usually causes problems to Windows
f:\12000 essays\sciences (985)\Computer\Viruses.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It is morning. You awaken to the sweet smell of flowers and the sound of birds chirping. You turn on your new IBM-compatible computer only to find that every bit and byte of information has been erased. A computer virus has struck.
Yes, these small bits of computer code have slowly overtaken the world of computing. A computer virus is a small program that attaches itself to disks and computer systems with instructions to do something abnormal. Sometimes the effects of a computer virus can be harmless. Sometimes the effects of a computer virus can be disastrous. But whichever way you look at it, they still cause problems. There are many kinds of computer viruses. Three of the most common are the time bomb, the logic bomb and the Trojan horse. The time bomb is a virus triggered by the computer's clock reaching a certain date and time (often Friday the thirteenth). The logic bomb is a virus triggered by a certain value appearing in a certain part of the computer's memory, either relevant to the virus's purposes or at random. The Trojan horse is an innocent-seeming program deliberately infected with a virus and circulated publicly. There is a cure for these viruses, though. These "cures" are called vaccines. A vaccine is a program that watches for typical things viruses do, halts them, and warns the computer operator.
"Put a kid with the chicken pox together with a bunch of healthy kids and not all of them will get sick." But that is not the case with computer viruses. You see when a computer virus passes on a virus it never fails unless the computer is protected with a vaccine. A typical computer virus spreads faster than the chicken pox too. Now as I said before when a computer virus attempts to infect another computer the attack is not always successful. However that does not mean the infected computer stops trying. An infected computer will pass on the virus every chance it gets. Computer viruses are spread by two methods Floppy disks and modems. A modem is a phone link connected to a bulletin board service (B.B.S.). A B.B.S. is a lot like what it sounds, a bulletin board. If a person calls you and you're not home he leaves a message so that the next time you use the B.B.S. you can see the message. However sometimes a person can leave a virus in a B.B.S. or an unsuspecting computer user whose computer is infected the next time you hook up to the B.B.S. you may get infected. Once a virus reaches a B.B.S. it is virtually unstoppable unless the corporation controlling the B.B.S. uses a vaccine to flush out the virus. So far most virus attacks have been made on large computer networks and apple computers. That doesn't mean that single users or I B M owners are completely safe either. In 1989 there were two million five thousand outbreaks of viruses.
Most computer viruses originate from Bulgaria, a country in Europe. As a matter of fact, the most deadly computer viruses originate from Bulgaria. One virus, called the Dark Avenger, was created in Bulgaria, then sent to the United States of America, where it started destroying military secrets. The military knew that it had to have been designed by a lone individual, because if the Bulgarian government had made it, it could just turn around like a boomerang and attack them. In Bulgaria there is no real law against computer crime. You could do something with a computer that could get you the death penalty here and get off with a slap on the wrist there.
One of the most famous viruses of all time was the Michelangelo virus. This virus was created by a madman who wanted everybody to remember the famous painter. It was a time bomb virus set to go off on the artist's birthday, March sixth. This virus affected more computers than any other virus. When this virus exploded it erased every bit of information on the infected computer. The average price for the Michelangelo virus vaccine is about $160.
To sum up my whole report, I think Clifford Stoll said it best when he said, "a safe computer is one that isn't connected to the outside world."
f:\12000 essays\sciences (985)\Computer\VR 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An Insight Into Virtual Reality
Virtual reality is the creation of a highly interactive, computer-based multimedia environment in which the user becomes a participant with the computer in a "virtually real" world.
We are living in an era characterized by 3D virtual systems created by computer graphics. In the concept called Virtual Reality (VR), the virtual reality engineer is combining computer, video, image-processing, and sensor technologies so that a human can enter into and react with spaces generated by computer graphics.
In 1969-70, an MIT scientist went to the University of Utah, where he began to work with vector-generated graphics. He built a see-through helmet that used television screens and half-silvered mirrors, so that the environment was visible through the TV displays. It was not yet designed to provide a surrounding environment. It was not until the mid '80s that virtual reality systems became more defined. The AMES contract, started in 1985, produced the first glove in February 1986. The glove is made of thin Lycra and is fitted with 15 sensors that monitor finger flexion, extension, hand position and orientation, and it is connected to a computer through fiber optic cables. Sensor inputs enable the computer to generate an on-screen image of the hand that follows the operator's hand movements. The glove also has miniature vibrators in the finger tips to provide feedback to the operator from grasped virtual objects. Therefore, driven by the proper software, the system allows the operator to interact by grabbing and moving a virtual object within a simulated room, while experiencing the "feel" of the object.
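As a rough sketch of how glove readings of this sort might drive an on-screen hand, the following Python fragment maps raw flex-sensor values to joint angles. The sensor value range and calibration constants are illustrative assumptions, not the specification of the glove described above.

    # Illustrative mapping from raw data-glove sensor readings to joint angles
    # for an on-screen hand model. Value ranges and angles are assumed.
    NUM_SENSORS = 15            # the glove described above carries 15 sensors
    RAW_MIN, RAW_MAX = 0, 1023  # assumed range of each raw sensor reading
    ANGLE_MAX = 90.0            # assumed joint angle (degrees) at full flexion


    def sensor_to_angle(raw: int) -> float:
        """Linearly map one raw sensor reading to a joint angle in degrees."""
        raw = max(RAW_MIN, min(RAW_MAX, raw))
        return (raw - RAW_MIN) / (RAW_MAX - RAW_MIN) * ANGLE_MAX


    def update_hand_model(raw_readings: list[int]) -> list[float]:
        """Convert one frame of glove readings into joint angles for rendering."""
        if len(raw_readings) != NUM_SENSORS:
            raise ValueError("expected one reading per sensor")
        return [sensor_to_angle(r) for r in raw_readings]

In a full system these angles would be fed to the rendering software each frame so that the drawn hand tracks the operator's real hand.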
The virtual reality line includes the Datasuit and the Eyephone. The Datasuit is an instrumented full-body garment that enables full-body interaction with a computer constructed virtual world. In one use, this product is worn by film actors to give realistic movement to animated characters in computer generated special effects. The Eyephone is a head mounted stereo display that shows a computer made virtual world in full color and 3D.
The Eyephone technology is based on an experimental Virtual Interface Environment Workstation (VIEW) design. VIEW is a head-mounted stereoscopic display system with two 3.9 inch television screens, one for each eye. The display can be a computer generated scene or a real environment sent by remote video cameras. Sound effects delivered to the headset increase the realism.
It was intended to use the glove and software for such ideas as a surgical simulation, or "3D virtual surgery" for medical students. In the summer of 1991, US trainee surgeons were able to practice leg operations without having to cut anything solid. NASA Scientists have developed a three-dimensional computer simulation of a human leg which surgeons can operate on by entering the computer world of virtual reality. Surgeons use the glove and Eyephone technology to create the illusion that they are operating on a leg.
Other virtual reality systems such as the Autodesk and the CAVE have also come up with techniques to penetrate a virtual world. The Autodesk uses a simple monitor and is the most basic visual example of virtual reality. One example of where this could be used is while exercising: the Autodesk system may be connected to an exercise bike, and you can then look around a graphic world as you pedal through it. If you pedal fast enough, your bike takes off and flies.
The CAVE is a new virtual reality interface that engulfs the individual into a room whose walls, ceiling, and floor surround the viewer with virtual space. The illusion is so powerful you won't be able to tell what's real and what's not.
Computer engineers seem fascinated by virtual reality because you can not only program a world, but in a sense, inhabit it.
Mythic space surrounds the cyborg, embracing him/her with images that seem real but are not. The sole purpose of cyberspace virtual reality technology is to trick the human senses, to help people believe and uphold an illusion.
Virtual reality engineers are space makers, to a certain degree they create space for people to play around in. A space maker sets up a world for an audience to act directly within, and not just so the audience can imagine they are experiencing a reality, but so they can experience it directly. "The film maker says, 'Look, I'll show you.' The space maker says, 'Here, I'll help you discover.' However, what will the space maker help us discover?"
"Are virtual reality systems going to serve as supplements to our lives, or will individuals so miserable in their daily existence find an obsessive refuge in a preferred cyberspace? What is going to be included, deleted, reformed, and revised? Will virtual reality systems be used as a means of breaking down cultural, racial, and gender barriers between individuals and thus nurture human values?"
During this century, responsive technologies are moving even closer to us, becoming the standard interface through which we gain much of our experience. The ultimate result of living in a cybernetic world may create an artificial global city. Instead of a global village, virtual reality may create a global city, the distinction being that the city contains enough people for groups to form affiliations, in which individuals from different cultures meet together in the same space of virtual reality. The city might be laid out according to a three dimensional environment that dictates the way people living in different countries may come to communicate and understand other cultures. A special camera, possibly consisting of many video cameras, would capture and transmit every view of the remote locations. Viewers would receive instant feedback as they turn their heads. Any number of people could be looking through the same camera system. Although the example described here will probably take many years to develop, its early evolution has been under way for some time, with the steady march of technology moving from accessing information toward providing experience. As well, it is probably still childish to imagine the adoption of virtual reality systems on a massive scale because the starting price to own one costs about $300,000.
Virtual reality is now available in games and movies. An example of a virtual reality game is Escape From Castle Wolfenstein. In it, you are looking through the eyes of an escaped POW from a Nazi death camp. You must walk around in a maze of dungeons where you will eventually fight Hitler. One example of a virtual reality movie is Stephen King's The Lawnmower Man. It is about a mentally handicapped man who uses virtual reality as a means of overcoming his handicap and becoming smarter. He eventually goes mad from his quest for power and enters a computer. From there he is able to control most of the world's computers. The movie ends with us wondering if he will succeed in world domination.
From all of this we have learned that virtual reality is already playing an important part in our world. Eventually, it will let us be able to date, live in other parts of the world without leaving the comfort of our own living room, and more. Even though we are quickly becoming a product of the world of virtual reality, we must not lose touch with the world of reality. For reality is the most important part of our lives.
f:\12000 essays\sciences (985)\Computer\VR.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Virtual Reality - What it is and How it Works
Imagine being able to point into the sky and fly. Or
perhaps walk through space and connect molecules together.
These are some of the dreams that have come with the
invention of virtual reality. With the introduction of
computers, numerous applications have been enhanced or
created. The newest technology that is being tapped is that
of artificial reality, or "virtual reality" (VR). When
Morton Heilig first got a patent for his "Sensorama
Simulator" in 1962, he had no idea that 30 years later
people would still be trying to simulate reality and that
they would be doing it so effectively. Jaron Lanier first
coined the phrase "virtual reality" around 1989, and it has
stuck ever since. Unfortunately, this catchy name has
caused people to dream up incredible uses for this
technology including using it as a sort of drug. This became
evident when, among other people, Timothy Leary became
interested in VR. This has also worried some of the
researchers who are trying to create very real applications
for medical, space, physical, chemical, and entertainment
uses among other things.
In order to create this alternate reality, however, you
need to find ways to create the illusion of reality with a
piece of machinery known as the computer. This is done with
several computer-user interfaces used to simulate the
senses. Among these are stereoscopic glasses to make the
simulated world look real, a 3D auditory display to give
depth to sound, sensor lined gloves to simulate tactile
feedback, and head-trackers to follow the orientation of the
head. Since the technology is fairly young, these
interfaces have not been perfected, making for a somewhat
cartoonish simulated reality.
Stereoscopic vision is probably the most important
feature of VR because in real life, people rely mainly on
vision to get places and do things. The eyes are
approximately 6.5 centimeters apart, and allow you to have a
full-colour, three-dimensional view of the world.
Stereoscopy, in itself, is not a very new idea, but the new
twist is trying to generate completely new images in real-
time. In 1838, Sir Charles Wheatstone invented the first
stereoscope with the same basic principle being used in
today's head-mounted displays. Presenting different views
to each eye gives the illusion of three dimensions. The
glasses that are used today work by using what is called an
"electronic shutter". The lenses of the glasses interleave?
f:\12000 essays\sciences (985)\Computer\Was the Grand Prix Benificial for Melbourne.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Issues Part -B-
Was the Grand Prix, promoted as "The Great Race" and held at
Albert Park, beneficial for Melbourne, or was it just a huge
waste of taxpayers' money? The race, which was televised to
650 million people in 130 different countries, is expected to
pump $50 million into the Victorian economy every year and
boost tourism enormously.
I, along with the owners of seventy-two percent of hotels, motels,
restaurants and other entertainment complexes, agree that
Albert Park hosting the Grand Prix will have a positive impact
on business. In fact, it pumped $10-15 million into local
business. This meant these businesses put on more part-time
staff, who gained valuable work experience, and there was also
a flow-on effect to suppliers of these industries. Fifty-nine
percent of interstate visitors and forty-five percent of
overseas visitors over a two-year period would not have come
to Adelaide if not for the race. Albert Park getting the Grand
Prix created between 1,000 and 1,500 new jobs. The Grand Prix
will promote Victoria on an international scale, with the
international press, television and media carrying out
worldwide coverage of this event. This could convince people
to come and visit Melbourne and would also be a major tourism
boost.
Approximately $23.8 million has been spent overhauling the
park and upgrading the lakeside track. Better fences and
barricades were built to help protect spectators in case of a
crash, and the track is said to be the safest and finest in
the world, creating a benchmark for Albert Park. Temporary
seating catered for 150,000 people, and attendance over the
four days was approximately 400,000. Some 9,000 part-time jobs
and 1,000 full-time jobs were created over the weekend.
The "greenies" are still trying to stop the race at Albert
Park. First it was "Save The Park" and now it's "Stop The
Grand Prix." At first they protested about the cutting down
of hundreds of trees to make way for the track. But this has
been overcome by the replanting of 5000 new trees which would
cover 16 football ovals. This is almost double the amount of
trees that were there previously. They don't care about the
huge impact that the race had on Melbourne, instead they
unsuccessfully protest against it and by doing so it has cost
the Victorian taxpayers $1.3 million. But the track has
already been built and the first race held, so there is no
chance of it being removed and the park could never be
transformed back to its original state. Although there was
approximately 5,000 tons of rubbish, it has all been cleaned
up and in the process, a number of people have gained
temporary employment.
Some residents of Albert Park disagree with the idea of the
Grand Prix. They say it spoils the "Parks Effect" and that the
fumes will kill all the plant and animal life that was there
previously. They say their houses will be engulfed in fumes
and that it would not be very safe for their young children.
They do not feel safe with their houses so close to the
track. But on the other hand, because their houses are so
close to the track, the value of their homes will rise.
Because the race was held so recently, it is hard to judge how
big an impact it had on the economy. The same time next year
would probably be a better time to judge the impact. But
already we can see the benefits: Albert Park is now known on
an international scale, many new jobs have been created, and
local and big businesses have also benefited due to tourism.
So it is quite obvious that the race overall was a success,
with no thanks to the protesters.
f:\12000 essays\sciences (985)\Computer\Welcome to the Internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Good evening. I would like to welcome you all here this evening to our symposium on "Then & Now: the Evolution of a Society."
Things have changed dramatically since the 1920s.
This panel of researchers that you see before you is here to talk to you about the events and issues from the past 75 years that have led up to society as we know it today.
Starting from left to right, I would like to introduce this panel to you.
First we have Lori, who is an expert in the political arena; she will touch on how politics itself has not really changed, but how politicians get their message across has.
Next to her we have Yolanda, who is an expert on African American leaders and the profound effect they have had in the last 75 years.
Next on the panel is Peggy; she is here to talk about the role of women in society and the part they have played in the formation of the world as we know it today.
We would not all be here today if education hadn't played a role in the evolution of society, and here to talk to you about the evolution of education is Jaime.
Later on this evening I will be speaking about how technology has dramatically impacted the world as we know it; my name is Matthew.
And finally, our last researcher is Christine; she will discuss the influence of music on this evolution of society from then until now.
I will bring each of the speakers up to make a brief statement of their research, and after they have finished we will open the floor for questions.
Again, welcome here this evening.
I would now like to bring up our first expert, Lori.
f:\12000 essays\sciences (985)\Computer\What is ISDN.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What is ISDN?
ISDN, which stands for integrated services digital network, is a system of digitizing phone
networks which has been in the works for over a decade. This system allows audio, video,
and text data to be transmitted simultaneously across the world using end-to-end digital
connectivity.
The original telephone system used analog signals to transmit a signal across telephone
wires. The voice was carried by modulating an electric current with a waveform from a
microphone. The receiving end would then vibrate a speaker coil for the sound to travel back
to the ear through the air. Most telephones today still use this method. Computers, however,
are digital machines. All information stored on them is represented by bits, each representing a
zero or a one. Multiple bits are used to represent characters, which then can represent
words, numbers, programs, etc. The analog signals are just varying voltages sent across the
wires over time. Digital signals are represented and transmitted by pulses with a limited
number of discrete voltage levels. [Hopkins]
The modem was certainly a big breakthrough in computer technology. It allowed computers to
communicate with each other by converting their digital communications into an analog
format to travel through the public phone network. However, there is a limit to the amount
of information that a common analog telephone line can hold. Currently, it is about 28.8
kbit/s. [Hopkins] ISDN allows multiple digital channels to be operated simultaneously
through the same regular phone jack in a home or office. The change comes about when the
telephone company's switches are upgraded to handle digital calls. Therefore, the same
wiring can be used, but a different signal is transmitted across the line. [Hopkins]
Previously, it was necessary to have a phone line for each device you wished to use
simultaneously. For example, one line each for the phone, fax, computer, and live video
conference. Transferring a file to someone while talking on the phone, and seeing their
live picture on a video screen would require several expensive phone lines. [Griffiths]
Using multiplexing (a method of combining separate data signals together on one channel
such that they may be decoded again at the destination), it is possible to combine many
different digital data sources and have the information routed to the proper destination.
Since the line is digital, it is easier to keep the noise and interference out while
combining these signals. [Griffiths] ISDN technically refers to a specific set of services
provided through a limited and standardized set of interfaces. This architecture provides a
number of integrated services currently provided by separate networks.
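As a rough illustration of the multiplexing idea (not the actual I.430 framing, and with made-up channel contents), the following sketch interleaves bytes from two B-channel sources and a D-channel source into one stream and separates them again at the far end:

    # Illustrative sketch only: byte-interleaved multiplexing of two B channels
    # and a D channel onto one stream, then demultiplexing at the far end.
    # The channel names come from the ISDN discussion above; the framing here
    # is simplified and does not follow the real I.430 frame layout.

    def multiplex(b1, b2, d):
        """Interleave equal-length byte sequences B1, B2, D into one stream."""
        stream = []
        for x, y, z in zip(b1, b2, d):
            stream.extend([x, y, z])
        return stream

    def demultiplex(stream):
        """Recover the three channels from the interleaved stream."""
        return stream[0::3], stream[1::3], stream[2::3]

    b1 = [0x11, 0x12, 0x13]          # pretend voice samples
    b2 = [0x21, 0x22, 0x23]          # pretend file-transfer bytes
    d  = [0x01, 0x02, 0x03]          # pretend signalling bytes

    combined = multiplex(b1, b2, d)
    assert demultiplex(combined) == (b1, b2, d)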
ISDN adds capabilities not found in standard phone service. The main feature is that instead
of the phone company sending a ring voltage signal to ring the bell in your phone, it sends
a digital package that tells who is calling (if available), what type of call it is
(data/voice), and what number was dialed (if multiple numbers are used for a single line).
ISDN phone equipment is then capable of making intelligent decisions on how to answer the
call. In the case of a data call, baud rate and protocol information is also sent, making
the connection instantaneous. [Griffiths]
ISDN Concepts:
With ISDN, voice and data are carried by bearer channels (B channels) occupying a bandwidth
of 64 kbit/s each. A delta channel (D channel) handles signalling at 16 kbit/s or 64
kbit/s. H channels are provided for user information at higher bit rates. [Stallings] There
are two types of ISDN service: Basic Rate ISDN (BRI) and Primary Rate ISDN (PRI).
BRI: consists of two 64 kbit/s B channels and one 16 kbit/s D channel for a total of 144
kbit/s. The basic service is intended to meet the needs of most individual users.
PRI: intended for users with greater capacity requirements. Typically the channel structure
is 23 B channels plus one 64 kbit/s D channel for a total of 1.544 Mbit/s. H channels can
also be implemented: H0=384 kbit/s, H11=1536 kbit/s, H12=1920 kbit/s. [Stallings]
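The totals quoted above follow directly from the channel structures. A minimal sketch of the arithmetic; the 8 kbit/s of T1 framing overhead is my own assumption, added to reach the stated 1.544 Mbit/s figure:

    # Sketch of the channel arithmetic behind the BRI and PRI totals above.
    B = 64    # bearer channel, kbit/s
    D16 = 16  # BRI delta channel, kbit/s
    D64 = 64  # PRI delta channel, kbit/s

    bri_total = 2 * B + D16          # 144 kbit/s, as stated for BRI
    pri_payload = 23 * B + D64       # 1536 kbit/s of B + D capacity
    pri_total = pri_payload + 8      # 1544 kbit/s; the extra 8 kbit/s is T1
                                     # framing overhead (an assumption added
                                     # here, not stated in the text)

    print(bri_total, pri_total)      # 144 1544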
In this paper, I will concentrate on defining the specifics of Basic Rate ISDN for local
loop transmission. I will provide an in-depth view of ISDN as it relates to layers 1 to 3
of the seven layer OSI model. I will also provide the specification for communication at
the S/T customer interface.
Basic Rate ISDN:
Basic Rate Interface (BRI) - The BRI is the fundamental building block of an ISDN network.
It is composed of a single 16 kbit/s "D-channel" which is used for call setup and control
and two 64 kbit/s "B-channels". The B-channels can be used to carry voice and both circuit
mode and packet mode data traffic. The D-channel may also be used to carry X.25 packet
traffic if the network supports that option. [Griffiths]
Basic Rate Interface D Channel - In the analog world, a telephone call is controlled
in-band. Tones and voltages are sent across lines for signalling conditions. ISDN does
away with this. The D channel becomes the vehicle for signalling. This signalling is
called common channel since a separate channel for signalling is used by two or more bearer
channels. [Hopkins]
User - Network protocols define how users interact with ISDN networks. Between the user
equipment and network equipment is a set of defined interfaces. The U interface is between
the central office and the customer premise. This interface carries information on the
twisted pair of wires between the customer and the central office. At the S/T interface
located at the customer location, two pairs of wires (one for transmitting, one for
receiving) are used. The intermediate device between the U and the S/T interface is known
as an NT1. The NT1 is a hybrid that converts from four wire to two wire and also
transforms the 2B+D signal into a different bit stream format. [Griffiths]
ISDN and the OSI Model - The OSI (Open Systems Interconnect) seven layer protocol was
developed to promote interoperability in the data world. ISDN, which followed OSI, was
designed to be a network technology inhabiting the lower three layers of the OSI model.
Consequently, an OSI end system that implements an OSI seven layer stack can contain ISDN at
the lower layers. Also, services such as TCP/IP (Transmission Control Protocol/Internet Protocol)
can use the ISDN network. [Griffiths]
Layer 1 of User-Network Interface:
Layer 1 protocols provide the details that describe how the signals (electrical or optical)
are encoded onto the physical medium. These protocols describe how the user data and
signalling bits are transformed into line signals, then back again into user data bits.
The ISDN layer 1 protocol supports the functions outlined below. [ITU-T, I.430]
B Channel Transmission
D Channel Transmission
D Channel Access Procedure
B Channel Transmission - Layer 1 must support for each direction of transmission, two
independent 64 kbit/s B channels. The B channels contain user data which is switched by the
network to provide the end-to-end transmission service. There is no error correction
provided by the network on these channels. [ITU-T, I.430]
D Channel Transmission - Layer 1 must support for each direction of transmission, a 16
kbit/s channel for the signalling information. In some networks user packet data may also
be supported on the D channel. [ITU-T, I.430]
D Channel Access Procedure - This procedure ensures that in the case of two or more
terminals, on a point to multipoint configuration, attempting to access the D Channel
simultaneously, one terminal will always successfully complete the transmission of
information. [ITU-T, I.430]
Binary Organization of Layer 1 frame - The structures of Layer 1 frames across the interface
are different in each direction of transmission. Both structures are shown in figure 1
below. [Griffiths]
A frame is 48 bits long and lasts 250 µs. The bit rate is therefore 192 kbit/s and each bit
is approximately 5.2 µs long. Figure 1 also shows that there is a 2-bit offset between
transmit and receive frames. This is the delay between frame start at the receiver of a
terminal and the frame start of the transmitted signal. [Griffiths] Figure 1 also
illustrates that the line coding used is AMI (Alternate Mark Inversion); a logical 1 is
transmitted as zero volts and a logical 0 as a positive or negative pulse. Note that this
convention is the inverse of that used on line transmission systems. The nominal pulse
amplitude is 750mV. [Griffiths] A frame contains several L bits. These are balance bits to
prevent a build up of DC on the line. For the direction TE to NT, where each B-channel may
come from a different terminal, each terminal's output contains an L bit to form a balanced
block. [ITU-T, I.430] Examining the frame in the NT to TE direction, the first bits of the
frame are the F/L pair, which is used in the frame alignment procedure. The start of a new
frame is signalled by the F/L pair violating the AMI rules. Once a violation has occurred
there must be a second violation to restore correct polarity before the next frame. This
takes place with the first mark after the F/L pair. The FA bit ensures this second
violation occurs should there not be a mark in the B1, B2, D, E, or A channels. The E
channel is an echo channel in which D-channel bits arriving at the NT are echoed back to
the TEs. There is a 10 bit offset between the D channel leaving a terminal, traveling to
the NT and being echoed back in the E channel. [ITU-T, I.430] The A bit is used in the
activation procedure to indicate to the terminals that the system is in synchronization.
Next is a byte of the B2 channel, a bit of the E channel and a bit of the D channel,
followed by an M bit. This is used for multiframing. The M bit identifies some FA bits
which can be stolen to provide a management channel. [ITU-T, I.430] The B1, B2, D, and E
channels are then repeated along with the S bit which is a spare bit. [ITU-T, I.430]
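As a small illustration of the pseudo-ternary line coding just described (ignoring the F/L balancing bits and the deliberate code violations used for frame alignment), a logical 1 maps to zero volts and successive logical 0s map to pulses of alternating polarity:

    # Sketch of the AMI line coding described above: a binary 1 is sent as
    # zero volts and successive binary 0s are sent as pulses of alternating
    # polarity.  The 0.75 V amplitude is the nominal value quoted in the text.

    def ami_encode(bits, amplitude=0.75):
        levels = []
        polarity = +1                      # polarity of the next 0 pulse
        for b in bits:
            if b == 1:
                levels.append(0.0)         # binary 1 -> no pulse
            else:
                levels.append(polarity * amplitude)
                polarity = -polarity       # alternate the mark polarity
        return levels

    print(ami_encode([1, 0, 0, 1, 0, 1, 0]))
    # [0.0, 0.75, -0.75, 0.0, 0.75, 0.0, -0.75]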
Layer 1 D Channel Contention Procedure - This procedure ensures that, even in the case of
two or more terminals attempting to access the D channel simultaneously, one terminal will
always successfully complete the transmission of information by first gaining control of the
D channel and then retransmitting its information. The procedure relies on the fact that
the information to be transmitted consists of layer 2 frames delimited by flags consisting
of the binary pattern 01111110. Layer 2 applies a zero bit insertion algorithm to prevent
flag imitation by a layer 2 frame. The interframe time fill consists of binary 1s which are
represented by zero volts. The zero volt line signal is generated by the TE transmitter
going high impedance. This means a binary 0 from a parallel terminal will overwrite a
binary 1. Detection of collision is done by the terminal monitoring the E channel (D
channel echoed from the NT). [ITU-T, I.430]
To access the D channel a terminal looks for the interframe time fill by counting the
number of consecutive binary 1s in the D channel. Should a binary 0 be received the count
is reset. When the number of consecutive 1s reaches a predetermined value (which is
greater than the number of consecutive 1s possible in a frame because of the zero bit
insertion algorithm) the counter is reset and the terminal may access the D channel. When
a terminal has just completed transmitting a frame the value of the count needed to be
reached before another frame may be transmitted is incremented by 1. This gives other
terminals a chance to access the channel. Hence an access and priority mechanism is
established. [ITU-T, I.430] There is still the possibility of collision between two
terminals of the same priority. This is detected and resolved by each terminal comparing
its last transmitted bit with the next E bit. If they are the same the terminal continues
to transmit. If, however, they are different the terminal detecting the difference ceases
transmission immediately and returns to the D channel monitoring state leaving the other
terminal to continue transmission. [ITU-T, I.430]
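The access and collision rules above can be sketched as a small amount of per-terminal state. The threshold of 8, raised by 1 after a successful transmission, follows the priority mechanism described in the text, but the value is illustrative and the separate priority classes of I.430 are omitted:

    # Illustrative sketch of the D-channel access and collision-detection logic.

    class DChannelAccess:
        def __init__(self, threshold=8):
            self.threshold = threshold
            self.ones = 0              # consecutive 1s seen on the E channel

        def observe_echo(self, e_bit):
            """Count consecutive 1s (interframe time fill) on the E channel."""
            self.ones = self.ones + 1 if e_bit == 1 else 0
            return self.ones >= self.threshold   # True -> may start transmitting

        def after_frame_sent(self):
            """Lower this terminal's priority so other terminals get a chance."""
            self.threshold += 1
            self.ones = 0

    def collision(sent_bit, echoed_bit):
        """A terminal stops transmitting as soon as the echo differs from what
        it sent, i.e. another terminal forced a 0 onto the bus."""
        return sent_bit != echoed_bit

    bus = DChannelAccess()
    for _ in range(8):
        may_send = bus.observe_echo(1)
    # may_send is now True: eight consecutive 1s were seen on the E channel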
Layer 1 Activation/Deactivation Procedure - This procedure permits activation of the
interface from both the terminal and network side, but deactivation only from the network
side. This is because of the multi-terminal capability of the interface. Activation and
deactivation information is conveyed across the interface by the use of line signals called
'Info signals'. [ITU-T, I.430]
Info 0 is the absence of any line signal; this is the idle state with neither terminals nor
the NT working. [ITU-T, I.430] Info 1 is flags transmitted from a terminal to the NT to
request activation. Note this signal is not synchronized to the network. [ITU-T, I.430]
Info 2 is transmitted from the NT to the TEs to request their activation or to indicate
that the NT has activated as a response to receiving an Info 1. An Info 2 consists of
Layer 1 frames with a high density of binary zeros in the data channels which permits fast
synchronization of the terminals. [ITU-T, I.430] Info 3 and Info 4 are frames containing
operational data transmitted from the TE and NT respectively. [ITU-T, I.430] The principal
activation sequence is commenced when a terminal transmits an Info 1. The NT activates the
local transmission system which indicates to the exchange that the customer is activating.
The NT1 responds to the terminals with an Info 2 to which the TEs synchronize. The TEs
respond with an Info 3 containing operational data and the NT is then in a position to send
Info 4 frames. Note that all terminals activate in parallel; it is not possible to have
just one terminal activated in a multi-terminal configuration. The network activates the
bus by the exchange activating the local network transmission system. Deactivation occurs
when the exchange deactivates the local network transmission system. [ITU-T, I.430]
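A compressed sketch of the terminal side of this activation handshake, with the Info signals modelled as plain labels; the informal state names and the omission of timers and deactivation are my own simplifications, not the F-states of I.430:

    def te_next_state(state, received):
        """Given the TE's state and the Info signal heard from the NT,
        return (new_state, info_signal_to_transmit)."""
        if state == "idle":
            # The TE wants service: send Info 1 and wait for the NT.
            return ("awaiting_sync", "INFO 1")
        if state == "awaiting_sync" and received == "INFO 2":
            # NT has answered; synchronize and send operational frames.
            return ("synchronized", "INFO 3")
        if state == "synchronized" and received == "INFO 4":
            return ("active", "INFO 3")        # keep sending operational frames
        return (state, None)

    state, tx = te_next_state("idle", None)      # -> awaiting_sync, send INFO 1
    state, tx = te_next_state(state, "INFO 2")   # -> synchronized, send INFO 3
    state, tx = te_next_state(state, "INFO 4")   # -> active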
Layer 2 of User-Network Interface:
The Layer 2 recommendation describes the high level data link control (HDLC) procedures commonly
referred to as the Link Access Procedure for a D channel or LAP D. The objective of Layer
2 is to provide a secure, error-free connection between two endpoints connected by a
physical medium. Layer 3 call control information is carried in the information elements
of Layer 2 frames and it must be delivered in sequence and without error. Layer 2 also has
the responsibility for detecting and retransmitting lost frames.
LAP D was based originally on LAP B of the X.25 Layer 2 recommendation. However, certain
features of LAP D give it significant advantages. The most striking difference is the
possibility of frame multiplexing by having separate addresses at Layer 2 allowing many LAPs
to exist on the same physical connection. It is this feature that allows up to eight
terminals to share the signalling channel in the passive bus arrangement. [ITU-T, Q.920]
Each Layer 2 connection is a separate LAP and the termination points for the LAPs are
within the terminals at one end and at the periphery of the exchange at the other. Layer 2
operates as a series of frame exchanges between the two communicating, or peer entities.
The frames consist of a sequence of eight bit elements and the elements in the sequence
define their meaning as shown in Figure 2 below. [ITU-T, Q.920]
A fixed pattern called a flag is used to indicate both the beginning and end of a frame.
Two octets are needed for the Layer 2 address and carry a service access point identifier (SAPI),
a terminal endpoint identifier (TEI) and a command/response bit. The control field is one or two
octets depending on the frame type and carries information that identifies the frame and
the Layer 2 sequence numbers used for link control. The information element is only
present in frames that carry Layer 3 information and the Frame Check Sequence (FCS) is used
for error detection. A detailed breakdown of the individual elements is given in Figures 3
and 4 below. [ITU-T, Q.920] What cannot be shown in the diagrams is the procedure to avoid
imitation of the flag by the data octets. This is achieved by examining the serial stream
between flags and inserting an extra 0 after any run of five 1 bits. The receiving Layer 2
entity discards a 0 bit if it is preceded by five 1's. [ITU-T, Q.920]
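The zero bit insertion procedure is easy to show in a few lines. This sketch works on a list of bits between flags; real LAP D hardware does the same thing serially:

    # Sketch of zero bit insertion (bit stuffing): the transmitter inserts a 0
    # after any run of five consecutive 1s between flags, and the receiver
    # removes a 0 that follows five 1s, so the flag pattern 01111110 can never
    # appear inside a frame.

    def stuff(bits):
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                out.append(0)              # inserted zero
                run = 0
        return out

    def unstuff(bits):
        out, run, i = [], 0, 0
        while i < len(bits):
            b = bits[i]
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                i += 1                     # skip the inserted zero
                run = 0
            i += 1
        return out

    data = [0, 1, 1, 1, 1, 1, 1, 0, 1]     # six 1s in a row
    assert unstuff(stuff(data)) == data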
Layer 2 Addressing - Layer 2 multiplexing is achieved by employing a separate Layer 2
address for each LAP in the system. To carry the LAP identity the address is two octets
long and identifies the intended receiver of a command frame and the transmitter of a
response frame. The address has only local significance and is known only to the two
end-points using the LAP. No use can be made of the address by the network for routing
purposes and no information about its value will be held outside the Layer 2 entity. [ITU-T,
Q.921]
The Layer 2 address is constructed as shown in Figure 3. The Service Access Point Identifier
(SAPI) is used to identify the service intended for the signalling frame. An extension of
the use of the D channel is to use it for access to a packet service as well as for
signalling. Consider the case of digital telephones sharing a passive bus with packet
terminals. The two terminal types will be accessing different services and possibly
different networks. It is possible to identify the service being invoked by using a
different SAPI for each service. This gives the network the option of handling the
signalling associated with different services in separate modules. In a multi-network ISDN
it allows Layer 2 routing to the appropriate network. The value of the SAPI is fixed for a
given service. [ITU-T, Q.921] The Terminal Endpoint Identifier (TEI) takes a range of
values that are associated with terminals on the customer's line. In the simplest case
each terminal will have a single unique TEI value. The combination of TEI and SAPI
identify the LAP and provide a unique Layer 2 address. A terminal will use its Layer 2
address in all transmitted frames and only frames received carrying the correct address
will be processed. [ITU-T, Q.921] In practice a frame originating from telephony call
control has a SAPI that identifies the frame as 'telephony' and all telephone equipment
examines this frame. Only the terminal whose TEI agrees with that carried by the frame will
pass it to the Layer 2 and Layer 3 entities for processing. There is also a SAPI
identified in standards for user data packet communication. [ITU-T, Q.921] Since it is
important that no two TEIs are the same, the network has a special TEI management entity
which allocates TEIs on request and ensures their correct use. The values that TEIs can
take fall into the ranges:
0-63 Non-Automatic Assignment TEIs
64-126 Automatic Assignment TEIs
127 Global TEI [ITU-T, Q.921]
Non-Automatic TEIs are selected by the user; their allocation is the responsibility of the
user. Automatic TEIs are selected by the network; their allocation is the responsibility
of the network. The global TEI is permanently allocated and is referred to as the
broadcast TEI. [ITU-T, Q.921] Terminals which use TEIs in the range of 0-63 need not
negotiate with the network before establishing a Layer 2 connection. Terminals which use
TEIs in the range 64-126 cannot establish a Layer 2 connection until they have requested a
TEI from the network. In this case it is the responsibility of the network not to allocate
the same TEI more than once at any given time. The global TEI is used to broadcast
information to all terminals within a given SAPI; for example a broadcast message to all
telephones, offering an incoming telephone call. [ITU-T, Q.921]
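To make the address layout concrete, the following sketch packs and unpacks the SAPI, command/response bit, and TEI using the usual LAP D bit positions (SAPI in the six high bits of the first octet, TEI in the seven high bits of the second, plus the two address extension bits); treat it as an illustration rather than a restatement of Q.921:

    def encode_address(sapi, cr, tei):
        octet1 = (sapi << 2) | (cr << 1) | 0   # EA bit 0: address continues
        octet2 = (tei << 1) | 1                # EA bit 1: last address octet
        return bytes([octet1, octet2])

    def decode_address(octets):
        sapi = octets[0] >> 2
        cr   = (octets[0] >> 1) & 1
        tei  = octets[1] >> 1
        return sapi, cr, tei

    # SAPI 0 (call control signalling) to the broadcast TEI 127
    addr = encode_address(sapi=0, cr=0, tei=127)
    assert decode_address(addr) == (0, 0, 127)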
Layer 2 Operation - The function of Layer 2 is to deliver Layer 3 frames, across a Layer 1
interface, error free and in sequence. It is necessary for a Layer 2 entity to interface
both Layer 1 and Layer 3. To highlight the operation of Layer 2 we will consider the
operation of a terminal as it attempts to signal with the network. [ITU-T, Q.921]
It is the action to establish a call that causes protocol exchange between terminal and
network. If there has been no previous communication it is necessary to activate the
interface in a controlled way. A request for service from the customer results in Layer 3
requesting a service from Layer 2. Layer 2 cannot offer a service unless Layer 1 is
available and so a request is made to Layer 1. Layer 1 then initiates its start-up
procedure and the physical link becomes available for Layer 2 frames. Before Layer 2 is
ready to offer its services to Layer 3 it must initiate the Layer 2 start-up procedure
known as 'establishing a LAP'. [ITU-T, Q.921] LAP establishment is achieved by the exchange
of Layer 2 frames between the Layer 2 handler in the terminal and the corresponding Layer 2
handler in the network. The purpose of this exchange is to align the state variables that
will be used to ensure the correct sequencing of information frames. Before the LAP has
been established the only frames that may be transmitted are unnumbered frames. The
establishment procedure requires one end-point to transmit a Set Asynchronous Balanced Mode
Extended (SABME) and the far end to acknowledge it with an Unnumbered Acknowledgment (UA).
[ITU-T, Q.921] Once the LAP is established Layer 2 is able to carry the Layer 3 information
and is said to be in the 'multiple frame established state'. In this state Layer 2 operates
its frame protection mechanisms. Figure 5 below shows a normal Layer 2 frame exchange.
[ITU-T, Q.921]
Once established the LAP operates an acknowledged service in which every information frame
must be responded to by the peer entity. The most basic response is the Receiver Ready
(RR) response frame. Figure 5 shows the LAP establishment and the subsequent I frame RR
exchanges. The number of I frames allowed to be outstanding without an acknowledgment is
defined as the window size and can vary between 1 and 127. For telephony signalling
applications the window size is 1 and after transmitting an I frame the Layer 2 entity will
await a response from the corresponding peer entity before attempting to transmit the next
I frame. Providing there are no errors all that would be observed on the bus would be the
exchange of I frames and RR responses. However Layer 2 is able to maintain the correct
flow of information in the face of many different error types. [ITU-T, Q.921]
Layer 2 Error Control - It is unlikely that a frame will disappear completely but it is
possible for frames to be corrupted by noise at Layer 1. Corrupted frames will be received
with invalid Frame Check Sequence (FCS) values and consequently discarded. [ITU-T, Q.920]
The frame check sequence is generated by dividing the bit sequence starting at the address
up to (but not including) the start of the frame check sequence by the generator polynomial
x^16 + x^12 + x^5 + 1. In practical terms this is done by a shift register as shown in figure
6. All registers are preset to 1 initially. At the end of the protected bits the shift
register contains the remainder from the division. The 1's complement of the remainder is
the FCS. At the receiver the same process is gone through, but this time the FCS is
included in the division process. In the absence of transmission errors the remainder
should always be 0001 1101 0000 1111. [ITU-T, Q.920]
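A minimal sketch of that generation and checking process, with the shift register modelled as a 16-bit integer and bits taken in transmission order (octet handling and the real frame layout are omitted):

    POLY = 0x1021                                 # x^16 + x^12 + x^5 + 1

    def crc_register(bits, init=0xFFFF):
        reg = init                                # all registers preset to 1
        for b in bits:
            feedback = b ^ ((reg >> 15) & 1)
            reg = (reg << 1) & 0xFFFF
            if feedback:
                reg ^= POLY
        return reg

    def make_fcs(message_bits):
        return crc_register(message_bits) ^ 0xFFFF   # ones complement of remainder

    message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1]   # arbitrary bits
    fcs = make_fcs(message)
    fcs_bits = [(fcs >> i) & 1 for i in range(15, -1, -1)]

    # The receiver runs the same division over message + FCS; with no errors
    # the register ends at the fixed remainder 0001 1101 0000 1111 (0x1D0F).
    assert crc_register(message + fcs_bits) == 0x1D0F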
The method for recovering from a lost frame is based on the expiration of a timer. A timer
is started every time a command frame is transmitted and is stopped when the appropriate
response is received. This single timer is thus able to protect both the command and
response as the loss of either will cause it to expire. [ITU-T, Q.920] When the timer
expires it is not possible to tell which of the two frames has been lost and the action
taken is the same in both cases. Upon the timer expiring, Layer 2 transmits a command with
the poll bit set. This frame forces the peer to transmit a response that indicates the
value held by the state variables. It is possible to tell from the value carried by the
response frame whether or not the original frame was received. If the first frame was
received, the solicited response frame will be the same as the lost response frame and is
an acceptable acknowledgment. If however the original frame was lost, the solicited
response will not be an appropriate acknowledgment and the Layer 2 entity will know that a
retransmission is required.
It is possible for the same frame to be lost more than once, and Layer 2 will retransmit the
frame three times. If after three transmissions of the frame the correct response has not
been received, Layer 2 will assume that the connection has failed and will attempt to
re-establish the LAP. [ITU-T, Q.921]
Another possible protocol error is the arrival of an I frame with an invalid send sequence
number N(S). This error is more likely to occur when the LAP is operating with a window
size greater than one. If, for example, the third frame in the sequence of four is lost
the receiving Layer 2 entity will know that a frame has been lost from the discontinuity in
the sequence numbers. The Layer 2 must not acknowledge the fourth frame as this will imply
acknowledgment of the lost third frame. The corrective action is to send a Reject (REJ)
frame with the receive sequence number N(R) equal to N(S) + 1 where N(S) is the send
variable of the last correctly received I frame, in this case I frame 2. This does two
things; first it acknowledges all the outstanding I frames up to and including the second I
frame, and secondly it causes the sending end to retransmit all outstanding I frames
starting with the lost third frame. [ITU-T, Q.920] The receipt of a frame with an out of
sequence, or invalid, N(R) does not indicate a frame loss and cannot be corrected by
retransmissions. It is necessary in this case to re-establish the LAP to realign the state
variables at each end of the link. [ITU-T, Q.920] The Receiver Not Ready (RNR) frame is
used to inhibit the peer Layer 2 from transmitting I frames. The reasons for wanting to do
this are not detailed in the specification but it is possible to imagine a situation where
Layer 3 is only one of many functions to be serviced by a microprocessor and a job of
higher priority requires that no Layer 3 processing is performed. [ITU-T, Q.920] Another
frame specified in Layer 2 is the FRaMe Reject frame (FRMR). This frame may be received by
a Layer 2 entity but may not be transmitted. It is included in the recommendation to
preserve alignment between LAP D and LAP B. After the detection of a frame reject
condition the data link is reset. [ITU-T, Q.920]
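The receive-side sequence check behind the REJ procedure can be sketched as follows; V(R) is the receiver's expected send sequence number, and the modulo-128 numbering of the extended (SABME) mode is assumed:

    MOD = 128

    def on_i_frame(v_r, n_s):
        """Return (new V(R), response) for an incoming I frame carrying the
        send sequence number N(S)."""
        if n_s == v_r:
            # In sequence: advance V(R) and acknowledge with Receiver Ready.
            return (v_r + 1) % MOD, ("RR", (v_r + 1) % MOD)
        # Out of sequence: do not advance V(R); ask for retransmission from V(R).
        return v_r, ("REJ", v_r)

    v_r = 2
    v_r, resp = on_i_frame(v_r, 2)    # frame 2 arrives    -> ('RR', 3)
    v_r, resp = on_i_frame(v_r, 4)    # frame 3 was lost   -> ('REJ', 3)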
Disconnecting the LAP - After Layer 3 has released the call it informs Layer 2 that it no
longer requires a service. Layer 2 then performs its own disconnection procedures so that
ultimately Layer 1 can disconnect and the transmission systems associated with the local
line and the customer's bus can be deactivated. [ITU-T, Q.921]
Layer 2 disconnection is achieved when the frames disconnect (DISC) and UA are exchanged
between peers. At this point the LAP can no longer support the exchange of I frames and
supervisory frames. [ITU-T, Q.921] The last frame type to be considered is the Disconnect
Mode (DM) frame. This frame is an unnumbered acknowledgment and may be used in the same
way as a UA frame. It is used as a response to a SABME if the Layer 2 entity is unable to
establish the LAP, and a response to a DISC if the Layer 2 entity has already disconnected
the LAP. [ITU-T, Q.921]
TEI Allocation - Because each terminal must operate using a unique TEI, procedures have been
defined in a Layer 2 management entity to control their use. The TEI manager has the
ability to allocate, remove, check, and verify TEIs that are in use on the customer's bus.
As the management entity is a separate service point all messages associated with TEI
management are transmitted with a management SAPI. [ITU-T, Q.921]
TEI management procedures must operate regardless of the Layer 2 state and so the
unnumbered information frame (UI) is used for all management messages. The UI frames have
no Layer 2 response and protection of the frame content is achieved by multiple
transmissions of the frame.
In order to communicate with terminals which have not yet been allocated TEIs a global TEI
is used. All management frames are transmitted on a broadcast TEI which is associated with
a LAP that is always available. All terminals can transmit and receive on the broadcast TEI
as well as their own unique TEI. All terminals on the customer's line will process all
management frames. To ensure that only one terminal acts upon a frame a unique reference
number is passed between the terminal and the network. This reference number is contained
within an element in the UI frame and is either a number randomly generated by the terminal
or the TEI of the terminal, depending on the exact situation. Figure 7 below shows the
frame exchange required for a terminal to be allocated a TEI and establish its data link
connection. [ITU-T, Q.921]
Layer 3 of User-Network Interface:
This layer effects the establishment and control of connections. It is carried in Layer 2
frames as can be seen in figure 8. [ITU-T, Q.930]
The first octet contains a protocol discriminator which gives the D channel the capability
of simultaneously supporting additional communications protocols in the future. The bits
shown in figure 8 are the standard for user-network call control messages. [ITU-T, Q.930]
The call reference value in the third octet is used to identify the call with which a
particular message is associated. Thus a call can be identified independently of the
communications channel on which it is supported.
The message type coded in the fourth octet describes the intention of the message (e.g. a
SETUP message to request call establishment). These are listed in Table 1 at the end of
this paper. A number of other information elements may be included following the message
type code in the fourth octet. The exact contents of a message are dependent on the message
type. [ITU-T, Q.931]
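As an illustration of the octet layout just described, this sketch pulls apart the first four octets of a Layer 3 message, assuming a one-octet call reference as in the text; the message type values shown are a small subset added here for the example, not the full Q.931 table:

    MESSAGE_TYPES = {0x05: "SETUP", 0x02: "CALL PROCEEDING",
                     0x07: "CONNECT", 0x45: "DISCONNECT"}

    def parse_header(octets):
        protocol_discriminator = octets[0]      # 0x08 marks Q.931 call control
        call_ref_length = octets[1] & 0x0F      # number of call reference octets (1 here)
        call_reference = octets[2]              # single octet, as in the text
        message_type = octets[3]
        return (protocol_discriminator, call_ref_length, call_reference,
                MESSAGE_TYPES.get(message_type, "unknown"))

    print(parse_header(bytes([0x08, 0x01, 0x2A, 0x05])))   # (8, 1, 42, 'SETUP')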
The message sequence for call establishment is shown in figure 9. In order to make an
outgoing call request, a user must send all of the necessary call information to the
network. Furthermore, the user must specify the particular bearer service required for the
call (i.e. Speech, 64 kbit/s unrestricted, or 3.1 kHz Audio) and any terminal
compatibility information which must be checked at the destination. [ITU-T, Q.931]
The initial outgoing call request may be made in an en bloc or overlap manner. Figure 9
illustrates the call establishment procedures. If overlap sending is used then the SETUP
message must contain the bearer service request but the facility requests and called party
number information may be segmented and conveyed in a sequence of INFORMATION messages as
shown. Furthermore if a speech bearer service is requested and no call information is
contained in the SETUP message, then the network will return in-band dial tone to the user
until the first INFORMATION message has been received. [ITU-T, Q.931] Following the receipt
of sufficient information for call establishment, the network returns a CALL PROCEEDING
f:\12000 essays\sciences (985)\Computer\What really is a hacker.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Dan Parks
Julie Jackson - Instructor
CIS 101
11-18-96
What really is a hacker?
There is a common misconception among the general public about what constitutes a hacker and what hacking is. Hacking is defined as "gaining illegal entry into a computer system, with the intent to alter, steal, or destroy data." The validity of this definition is still being debated, but most individuals would describe hacking as gaining access to information which should be free to all. Hackers generally follow some basic principles, and hold these principles as their "ethical code." There are also a few basic "hacker rules" that are usually followed by all in this unique group.
The principles that hackers abide by are characteristic of most people who consider themselves to be hackers. The first, which is universally agreed upon, is that access to computers should be free and unlimited. This is not meant to be an invasion-of-privacy issue, but rather free use of all computers and what they have to offer. They also believe that anyone should be able to use all of a computer's resources, with no restrictions as to what may be accessed or viewed. This belief is controversial: not only could it infringe upon people's right to privacy, but it could give up trade secrets as well. Hackers also share a deep mistrust of authority; some consider authority to be a constricting force. Not all hackers believe in this ethic, but generally authority represents something that would keep people from being able to have full access and/or free information.
Along with the "ethical code" of hackers there are a few basic "hacking rules" that are followed, sometimes even more closely than their own code. Keep a low profile; no one ever suspects the quiet guy in the corner. If suspected, keep a lower profile. If accused, simply ignore. If caught, plead the 5th.
Hackers consider a computer to be a tool, and to limit its accessibility is wrong. Hacking would cease if there were no barriers as to what information could be accessed freely. Limiting the information that may be attained hampers the ability to be curious and creative. These people do not want to destroy; rather, they want to have access to new technology, software, or information. Their creations are considered an art form, and are looked upon much as an artist views a painting.
References Consulted
Internet. http://www.ling.umu.se/~phred/hackfaq.txt
Internet. http://www.jargon.com/~backdoor
Internet. http://www.cyberfractal.com/~andes.html
f:\12000 essays\sciences (985)\Computer\What Should And Shouldnt Computers Be Allowed To Run .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers have always scared people, not just because they can be confusing and hard to operate, but also because of how they affect people's everyday lives. What jobs should highly advanced computers be able to run? This question can involve ethics, privacy, security, and many other topics.
What jobs can and can't we leave to the computer? As computers grow more and more advanced, not to mention complicated, so grows the number of jobs that can be filled by computers. But can we leave a job such as doctor to a highly advanced computer system? There are many moral issues involved in that. What would happen if the doctor made a mistake? Could you sue the computer? What about the computer programmer? One error in the program could mean death for a patient. One job that I'm sure many people would give to a computer if they had the chance would be a lawyer. This would eliminate the problem that occurs when someone with money is in trouble: they buy the best lawyer money can buy, while the person without any money cannot afford the great lawyers the other guy has. With this system, one single lawyer program could be provided to everyone so that the process of dispensing justice is much more fair. What about a judge and jury? Could a computer replace them? Is it right for a computer to pronounce sentence on an individual?
Because computers don't have any kind of actual thought or will, some jobs would be perfect for computers. Security would be a good job for a computer to handle. People like their privacy and don't want to be watched over by someone all the time. If computers could tell that a crime is happening without a human to point it out, it might be all right to install these systems everywhere to detect crimes taking place without interfering with anyone's privacy. I'm not talking about "Big Brother" from 1984, but something that would be fair to everyone.
There is also the problem of changing jobs due to advancements in computer technology. There will be the same number of jobs available, but not at the same levels. More education will be needed for these new jobs. Computers might take away quite a few jobs from people doing manual labor on an assembly line, but at the same time, if something breaks down, there will have to be someone to come in and fix it. This is the effect computers will have as they become more and more advanced. The only problem with this is that some people may be unwilling to change. It would be hard for someone who has worked in manual labor all their life to suddenly become a computer technician. That is one of the costs we will have to live with, though, if there are to be advancements. But what about even further into the future? Will computers by that time be so advanced that they can fix themselves and "evolve" on their own? Certainly then there would be job scarcity due to these technological advancements.
f:\12000 essays\sciences (985)\Computer\Why ARJ.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer studies.
WHY_ARJ.DOC Jan 1997
This document describes the benefits of ARJ. ARJ is now a trend
setter in archivers with other archivers following suit.
You can find reviews of ARJ in the following magazine articles:
Computer Personlich, June 12, 1991, Leader of the Pack, Bernd
Wiebelt and Matthias Fichtner. In this German magazine, ARJ 2.0 was
named Test Sieger (Test Winner) over six other archivers including
PKZIP and LHA. Compression, speed, documentation, and features were
compared.
PC Sources, July 1991, Forum, Barry Brenesal, "A new challenger, ARJ
2.0, not only offers the speed of PKZIP, but also has the best
compression rate of the bunch."
Computer Shopper, September 1991, Shells, Bells, and Files:
Compressors for All Cases, Craig Menefee. "ARJ ... is extremely fast
and produces excellent compression; it ... has a rich set of options.
... This is a mature technology, and any of these programs will do a
fine and reliable job."
PC Magazine, October 15, 1991, Squeeze Play, Barry Simon. "Jung has
combined that foundation with academic research to produce an
impressive product. ... If your main criterion is compressed size,
ARJ will be one of your two main contenders, along with LHA."
SHAREWARE Magazine, Nov-Dec 1991, Fall Releases, Joseph Speaks. "Don't
tell the creators of ARJ that PKZIP is the standard for data
compression. They probably already know. But that hasn't stopped
them from creating a data compression utility that makes everyone -
even the folks at PKWare - sit up and take notice. ... but compression
statistics don't tell the whole story. The case for using ARJ is
strengthened by new features it debuts."
BOARDWATCH Magazine, December 1991, ARCHIVE/COMPRESSION UTILITIES.
"This year's analysis rendered a surprise winner. Robert K. Jung's
ARJ Version 2.22 is a relatively new compression utility that offers
surprising performance. The program emerged on the scene within the
past year and the 2.22 version was released in October 1991. It rated
number one on .EXE and database files and number two behind LHarc
Version 2.13 in our directory of 221 short text files."
INFO'PC, October 1992, Compression de données: 6 utilitaires du
domaine public, Thierry Platon. In this article, the French magazine
awarded ARJ 2.20, the Certificat de Qualification Labo-tests InfoPC.
PC Magazine, March 16, 1993, PKZIP Now Faster, More Efficient,
Barry Simon. "One of the more interesting features is the ability to
have a .ZIP file span multiple floppy disks, but this feature is not
nearly as well implemented as in ARJ."
ARJ FEATURES:
1) Registered users receive technical support from a full-time
software author with over FIFTEEN years of experience in
technical support and software programming. And YES, ARJ is a
full-time endeavor for our software company. ARJ and REARJ have
proven to be two of the most reliable archiver products. We
test our BETA test releases with the help of thousands of users.
2) ARJ provides excellent size compression and practical speed
compared to the other products currently available on the PC.
ARJ is particularly strong compressing databases, uncompressed
graphics files, and large documents. One user reported that in
compressing a 25 megabyte MUMPS medical database, ARJ produced
a compressed file of size 0.17 megabytes while LHA 2.13 and
PKZIP 1.10 produced a compressed file of 17 plus megabytes.
3) Of the leading archivers, only ARJ provides the capability of
archiving files to multiple volume archives no matter what the
destination media. ARJ can archive files directly to diskettes
no matter how large or how numerous the input files are and
without requiring EXTRA disk space.
This feature makes ARJ (DEARJ) especially suitable for
distributing large software packages without the concerns about
fitting entire files on one diskette. ARJ will automatically
split files when necessary and will reassemble them upon
extraction without using any EXTRA disk space.
This multiple volume feature of ARJ makes it suitable as a "cheap"
backup utility. ARJ saves pathname information, file date-time
stamps, and file attributes in the archive volumes. ARJ can also
create an index file with information about the contents of each
volume. For systems with multiple drives, ARJ can be configured
to save the DRIVE letter information, too. Files contained
entirely within one volume are easily extracted using just the one
volume. There is no need to always insert the last diskette of
the set. In addition, the ARJ data verification facility unique
to ARJ among archivers helps ensure reliable backups.
4) The myriad of ARJ commands and options allows the user
outstanding flexibility in archiver usage. No other leading PC
archiver gives you that flexibility.
Here are some examples of ARJ's flexibility.
a) Search archives for text data without extracting the
archives to disk.
b) Save drive letter and pathname information.
c) Re-order the files within an ARJ archive.
d) Merge two or more ARJ archives without re-compressing files.
e) Extract files directly to DOS devices.
f) Synchronize an archive and a directory of files with just a
few commands.
g) Compare the contents of an archive and a directory of files
byte for byte without extracting the archive to disk.
h) Allow duplicates of a file to be archived producing
generations (versions) of a file within an archive.
i) Display archive creation and modification date and time.
j) And much more.
5) ARJ provides ARJ archive compatibility from revision 1.00 to now.
In other words, ARJ version 1.00 can extract the files from an
archive created by the current version of ARJ and vice-versa.
6) ARJ provides the facility to store EMPTY directories within its
archives. This makes it easier to do FULL backups and also to
distribute software products that come with EMPTY directories.
7) Both ARJ self-extracting modules provide default pathname support.
That means that you can build self-extracting archives of software
directories containing sub-directories. The end user of the
self-extracting archive does not have to type any command line
options to restore the full directory structure of the software.
This greatly simplifies software distribution.
8) The ARJ archive data structure with its header structure and 32
bit CRC provide excellent archive stability and recovery
capabilities. In addition, ARJ is the only archiver that allows
you to test an archive during an archive process. With other
archivers, you may have already deleted the input files with a
"move" command before you could test the built archive. In
addition, the test feature allows one to select an actual byte for
byte file compare with the original input files. This is
especially useful for verifying multi-megabyte files where a 32 bit
CRC compare would not provide sufficient reliability.
9) ARJ provides an optional security envelope facility to "lock" ARJ
archives with a unique envelope signature. A "locked" ARJ
archive cannot be modified by ARJ or other programs without
destroying the envelope signature. This provides some level of
assurance to the user receiving a "locked" ARJ archive that the
contents of the archive are intact as the "signer" intended.
10) ARJ has MS-DOS 3.x international language support. This makes ARJ
more convenient to use with international alphabets.
11) ARJ has many satisfied users in countries all over the world. ARJ
customers include the US government and many leading companies
including Lotus Development Corp.
f:\12000 essays\sciences (985)\Computer\Why stick to Qwerty.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computer Science 10
:), Why we should stick to Qwerty..
The Qwerty keyboard - named Qwerty because the letters q, w, e, r, t, y are arranged next to each other - has been the universal standard since the beginning of the 1890s. Since then, there have been many proposals by other keyboard makers to market products that would enable users to type faster. Other proposals put the most frequently used letters - dhiatensor - in the middle row.i Although these keyboards enable users to type far faster than the qwerty keyboard, they are rarely sold. There are several reasons for this. First, there is no need for regular users to type any faster than at their current speed. Second, for people whose jobs require fast typing, the new keyboards can lead to bigger health problems that develop from continuous typing. Third, and most importantly, standardization has led the qwerty keyboard to firmly hold its position as the keyboard.
There are major differences between the two types of keyboard users: the regular users and the professional typists. The regular users are people who use the keyboard for word processing, e-mail, and the internet; there is not much of a need for them to type extremely fast. They do not type mechanically but rather based on their thoughts, and thinking takes time. In other words, faster keyboards are irrelevant for them because they are not continuously typing. They need to think about what they are going to write, one sentence after another.
On the other hand, the typists whose job is simply to type do so continuously. They also happen to be the major victims of Repetitive Strain Injury (RSI), which is in large part caused by continuously stroking the keyboard. In an article about RSI, Huff explains the changes that companies are undergoing to become more productive:
Many work practices are changing with automation to increase productivity. These include fewer staff, heavier workloads, more task specialization, faster pacing of work, fewer rest breaks, more overtime, more shift work and nonstandard hours, and more piece work and bonus systems. These work practices can entail very prolonged rapid or forceful repetitive motions leading to fatigue and overuse of muscles.ii
Because RSI is a major problem for typists, it would be a suicidal move for them to adopt keyboards that allow faster typing: more of them would develop RSI. As for the companies that hire these typists, not only would the frequency of RSI increase, but the amount of money the companies would have to pay in compensation to employees who develop RSI would also increase. The fact that the qwerty keyboard is less efficient prevents typists from developing more serious health problems.
Finally, the role of standardization greatly influences where qwerty stands in the keyboard market. Once qwerty was standardized, no other type of keyboard could enter into competition, regardless of how much more efficient it was. That is because a standardized layout means users need to know just one layout. Keyboard layouts are like languages: if different languages are being spoken when people try to communicate with each other, understanding becomes very difficult and the communication very inefficient. What if a new keyboard were to become standardized? Navy studies in the 1940s showed that the change from qwerty to a more efficient keyboard would pay for itself within 10 days.iii However, this study shows the result from the corporation's point of view. Although corporations would certainly be able to make more money in the same amount of time by adopting the new keyboard, there are other factors that are not taken into account - the human cost. If a new, more efficient keyboard were to be standardized, there would be enormous spending on reeducation, relearning, repurchasing, and replacement.
In short, the qwerty keyboard is efficient enough for people to use. It's fast enough for regular users, and it's slow enough for typists to avoid further health problems. And any attempt to standardize a new keyboard would be extremely difficult and expensive. Yet people might not even have to concern themselves with keyboards for much longer. The advancement of technology keeps bringing wonders to the world. In the near future, voice recognition programs using microphones might replace keyboards. Then RTI - Repetitive Talking Injury - might be a big issue. Who knows?
i Huff, C., "Putting technology in its place" in Social Issues in Computing, Huff, C. and Finholt T. (Eds), McGraw Hill. 1994, pp. 2.
ii Huff, C., "Computing and your health" in Social Issues in Computing, Huff, C. and Finholt T. (Eds), McGraw Hill. 1994, pp. 103-104.
iii Huff, C., "Putting technology in its place" in Social Issues in Computing, Huff, C. and Finholt T. (Eds), McGraw Hill. 1994, pp. 3.
f:\12000 essays\sciences (985)\Computer\Why you should purchase a PC.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Computers are capable of doing more things every year. There are many advantages to
knowing how to use a computer, and it is important that everyone know how to use them
properly. Using the information I have gathered and my own knowledge from my 12 years of
computer experience, I will explain the many advantages of owning a computer and knowing how
to use a PC, and why you should purchase one and learn how to use it properly.
Webster's New World Compact Dictionary defines a computer as "an electronic machine that
performs rapid, complex calculations or compiles and correlates data" ("Computer."). While this
definition gives one a very narrow view of what a computer is capable of doing, it does describe
the basic ideas of what I will expand upon. We have been living through an age of computers for a
short while now, and there are already many people worldwide who are computer literate.
According to Using Computers: A Gateway to Information World Wide Web Edition, over 250
million Personal Computers (PC's) were in use by 1995, and one out of every three homes had a
PC (Shelly, Cashman,& Waggoner, 138).
Computers are easy to use when you know how they work and what the parts are. All
computers perform the four basic operations of the information processing cycle: input, process,
output, and storage. Data, any kind of raw facts, is required for the processing cycle to occur.
Data is processed into useful information by the computer hardware. Most computer systems
consist of a monitor, a system unit which contains the Central Processing Unit (CPU), a
floppy-disk drive, a CD-ROM drive, speakers, a keyboard, a mouse, and a printer. Each
component takes a part in one of the four operations.
The keyboard and mouse are input devices that a person uses to enter data into the computer.
From there the data goes to the system unit where it is processed into useful information the
computer can understand and work with. Next the processed data can be sent to storage devices
or to output devices. Normally output is sent to the monitor and stored on the hard disk or on a
floppy disk located inside the system unit. Output can also be printed out through the printer,
or can be played through the speakers as sound depending on the form it takes after it is
processed.
Once you have grasped a basic understanding of the basic parts and operations of a computer,
you can soon discover what you can do with computers to make life easier and more enjoyable.
Being computer literate allows you to use many powerful software applications and utilities to do
work for school, business, or pleasure. Microsoft is the current leading producer of many of these
applications and utilities.
Microsoft produces software called operating systems that manage and regulate the
information processing cycle. The oldest of these is MS-DOS, a single user system that uses typed
commands to initiate tasks. Currently Microsoft has available operating systems that use visual
cues such as icons to help enter data and run programs. These operating systems run under
an environment called a Graphical User Interface (GUI). Such operating systems include
Windows 3.xx, Windows 95, and Windows NT Workstation. Windows 95 is geared more for use
in the home for productivity and game playing, whereas Windows NT is more business oriented.
The article entitled "Mine, All Mine" in the June 5, 1995 issue of Time stated that 8 out of 10
PC's worldwide would not be able to start or run if it were not for Microsoft's operating systems
like MS-DOS, Windows 95, and Windows NT (Elmer-Dewitt, 1995, p. 50).
By no means has Microsoft limited itself to operating systems alone. Microsoft has also
produced a software package called Microsoft Office that is very useful in creating reports, data
bases, spreadsheets, presentations, and other documents for school and work. Microsoft Office:
Introductory Concepts and Techniques provides a detailed, step-by-step approach to the four
programs included in Microsoft Office.
Included in this package are Microsoft Word, Microsoft Excel, Microsoft Access, and
Microsoft PowerPoint. Microsoft Word is a word processing program that makes creating
professional looking documents such as announcements, resumes, letters, address books, and
reports easy to do. Microsoft Excel, a spreadsheet program, has features for data organization,
calculations, decision making, and graphing. It is very useful in making professional looking
reports. Microsoft Access, a powerful database management system, is useful in creating and
processing data in a database. Microsoft PowerPoint is ". . a complete presentation graphics
program that allows you to produce professional looking presentations" (Shelly, Cashman, &
Vermaat, 2). PowerPoint is flexible enough so that you can create electronic presentations,
overhead transparencies, or even 35mm slides.
Microsoft also produces entertainment and reference programs. "Microsoft's Flight Simulator
is one of the best selling PC games of all time" (Elmer-Dewitt, 50). Microsoft's Encarta is an
electronic CD-ROM encyclopedia that makes for a fantastic alternative to 20 plus volume book
encyclopedias. In fact, it is so popular, it outsells the Encyclopedia Britannica. These powerful
business, productivity, and entertainment applications are just the beginning of what you can do
with a PC.
Knowing how to use the Internet will allow you access to a vast resource of facts, knowledge,
information, and entertainment that can help you do work and have fun. According to Netscape
Navigator 2 running under Windows 3.1, "the Internet is a collection of networks, each of which
is composed of a collection of smaller networks" (Shelly, Cashman, & Jordan, N2). Information
can be sent over the Internet through communication lines in the form of graphics, sound, video,
animation, and text. These forms of computer media are known as hypermedia. Hypermedia is
accessed through hypertext links, which are pointers to the computer where the hypermedia is
stored. The World Wide Web (WWW) is the collection of these hypertext links throughout the
Internet. Each computer that contains hypermedia on the WWW is known as a Web site and has
Web pages set up for users to access the hypermedia. Browsers such as Netscape allow people to
"surf the net" and search for the hypermedia of their choice.
There are millions of examples of hypermedia on the Internet. You can find art, photos,
information on business, the government, and colleges, television schedules, movie reviews, music
lyrics, online news and magazines, sports sites of all kinds, games, books, and thousands of other
hypermedia on the WWW. You can send electronic mail (E-Mail), chat with other users around
the world, buy airline, sports, and music tickets, and shop for a house or a car. All of this, and
more, provides one with a limitless supply of information for research, business, entertainment, or
other personal use. Online services such as America Online, Prodigy, or CompuServe make it
even easier to access the power of the Internet. The Internet alone is almost reason enough to
become computer literate, but there is still much more that computers can do.
Knowing how to use a computer allows you to do a variety of things in several different ways.
One of the most popular uses for computers today is playing video games. With a PC you can
play card games, simulation games, sport games, strategy games, fighting games, and adventure
games. Today's technology provides the ultimate experiences in color, graphics, sound, music,
full motion video, animation, and 3D effects. Computers have also become increasingly useful in
the music, film, and television industry. Computers can be used to compose music, create sound
effects, create special effects, create 3D life-like animation, and add previous existing movie and
TV footage into new programs, as seen in the movie Forrest Gump. All this and more can be
done with computers.
There is truly no time like the present to become computer literate. Computers will be doing
even more things in the future and will become unavoidable. Purchasing and learning about a new
PC now will help put PCs into the other two-thirds of the homes worldwide and make the
transition into a computer age easier.
Works Cited
"Computer." Webster's New World Compact School and Office Dictionary. 1995.
Elmer-Dewitt, P. "Mine, All Mine." Time Jun. 1995: 46-54.
Shelly, G., T. Cashman, and K. Jordan. Netscape Navigator 2 Running Under Windows 3.1.
Danvers: Boyd & Fraser Publishing Co., 1996.
Shelly, G., T. Cashman, and M. Vermaat. Microsoft Office Introductory Concepts and
Techniques. Danvers: Boyd & Fraser Publishing Co., 1995.
Shelly, G., T. Cashman, G. Waggoner, and W. Waggoner. Using Computers: A Gateway to
Information World Wide Web Edition. Danvers: Boyd & Fraser Publishing Co., 1996.
f:\12000 essays\sciences (985)\Computer\Will Computers Control Humans In The Future .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Will computers control humans in the future?
People always tend to seek the easy way out, looking for something that
would make their lives easier. Machines and tools have given us the
ability to do more in less time while giving us, at the same time, more comfort.
As the technology advances, computers become faster and more powerful.
These new machines enable us to do more in less time, making our lives
easier. The increased use of computers in the future, however, might have
negative effects on our lives. In the novel Nine Tomorrows, Isaac Asimov
often criticizes our reliance on computers by portraying a futuristic
world where computers control humans.
One of the images which Asimov describes in the book is that humans
might become too dependent on computers. In one of the stories,
Profession, Asimov writes about people being educated by computer programs
designed to educate a person effortlessly. According to the Profession
story, people would no longer read books to learn and improve their
knowledge. People would rely on the computers rather than "try to memorize
enough to match someone else who knows" (Nine Tomorrows, Profession 55).
People would not choose to study; they would only want to be educated by
computer tapes. Putting in knowledge would take far less time than reading
books and memorizing: in the futuristic world that Asimov describes, it
would take almost no time at all using a computer. Humans might begin
to rely on computers and let themselves be controlled by allowing
computers to educate them. Computers would teach humans only what the
computers chose to tell them, leaving no room for choice or creativity.
Computers would start to control humans' lives and make humans become too
dependent on them.
Another point that is criticized by Asimov is the fact that people
might take their knowledge for granted, allowing computers to take over and
control their lives. In a story called The Feeling of Power, Asimov
portrays how people started using computers to do even simple mathematical
calculations. Over a long period of time people became so reliant on
computers that they forgot the simplest multiplication and division rules.
If someone wanted to calculate an answer, they would simply use their pocket
computer to do it (The Feeling of Power 77). People became too dependent
on the machines from the start, causing them to forget what they had
learned in the past. People in the story The Feeling of Power took for
granted what they had learned over centuries and chose computers because of
their ability to do the work faster. The lack of manual mathematics,
which people chose to forget in the story, left computers to solve even
simple mathematical problems for the people, taking control of the humans
by doing the work for them (The Feeling of Power 81-82). The reliance on
computers went to such an extent that humans began to use computers in all
fields of study and work, allowing computers to control their lives by
taking over and doing everything for them.
According to another story in the book, Asimov also describes how
computers would be able to predict the probability of a future event. In
the story All the Troubles of the World one big computer predicted crime
before it even happened, allowing the police to take the person who was
going to commit the crime and release him or her after the danger had passed
(All The Troubles of The World 144-145). This computer, called Multivac,
controlled humans by telling the authorities who was going to commit
a crime, causing someone to be imprisoned until the danger had passed. It
was the computer that made the decision about someone's freedom or
imprisonment, and that directed others to arrest a person it suspected of
a crime, controlling his or her destiny. The decision to imprison someone
for a crime the person had not committed was entirely in the hands of a
computer. It was the computer that controlled humans and their destiny,
and it controlled the other humans who believed everything that the
computer told them.
Multivac could not only predict the future but could also answer
many questions that would normally embarrass people if they had to
ask someone else. Multivac could access its vast database of
trillions of pieces of knowledge and find the best solution for one's
problem (All The Troubles of The World 153). All the people believed that
Multivac knew best and allowed a computer to control their lives by
following the solutions Multivac had given them (All the Troubles of The
World 153). Humans followed a computer's solution to a problem they could
not solve themselves, allowing a computer to take control over their lives
and leaving them unable to think for themselves.
In Nine Tomorrows, Isaac Asimov often criticizes our reliance on
computers. The author predicts that computers will increase their role in
the future as the technology advances. Computers will become faster and
people will want to use them more to make their lives easier. Yet, just
as every good side has a bad side, Asimov reflects in his writing
that humans might depend on computers so much that they will allow them
to control their lives.
f:\12000 essays\sciences (985)\Computer\william gibson and the internet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
The words "Internet" and "world wide web" are coming into everyday use these days; the net has exploded into the mass market of information and advertising. There are bad points about the "net" as well as good points, and this relatively new medium is growing at such a rate that the media have to take it seriously.
This new form of communication was mainly populated by small groups of communities, but now that it is getting much easier to access the web these groups are growing.
The word Cyberpunk is nothing new in the world of the "net" and to science fiction readers, and it is this term which names most of the online communities. Within the Cyberpunk culture there are subcultures such as hackers, phreaks, ravers etc., all of which have a connection with new technologies. The term Cyberpunk originated in science fiction literature, where writers such as William Gibson tell stories of future worlds, cultures and the Internet.
It is William Gibson and the cyberpunks who have carried out some of the most important mappings of our present moment and its future trends during the past decade. The present, in these mappings, is viewed from the perspective of a future that is visible from within the experiences and trends of the current moment; from this perspective, cyberpunk can be read as a sort of social theory.
Chapter 1
Internet history
The Internet is a network of computer networks, the most important of which was called ARPANET (Advanced Research Projects Agency NETwork), a wide area experimental network connecting hosts and terminal servers together. Rules were set up to supervise the allocation of addresses and to create voluntary standards for the network. The ARPANET was built between October and December 1969 by a US company called Bolt, Beranek and Newman (BBN), which is still big in the Internet world. It had won a contract from the US Government's Department of Defense Advanced Research Projects Agency, or ARPA, to build a network that would survive a nuclear attack. Only four government mainframe computers were originally linked up. ARPANET was, however, also dependent on the involvement of hundreds of US computer scientists. Because the ARPANET was a military project, it was managed in true military style - the project manager appointed by ARPA gave the orders and they were carried out. It was therefore easy to tell who "ran" the network. By 1972 it had grown to 37 mainframe computers. At the same time, the way in which the network was being used was changing. As well as using the system to exchange important, but boring, military information, ARPANET users started sending e-mail to each other by means of private mail boxes.
By 1983 ARPANET had grown to such an extent that it was felt that the military research component should be moved to a separate network, called MILNET. In 1987 the system was opened up to any educational facility, academic researcher or international research organisation that wanted to use it. As local area networks became more pervasive, many hosts became gateways to local networks. A network layer to allow the interoperation of these networks was developed and called IP (Internet Protocol). Over time other groups created long-haul IP-based networks (NASA, NSF, states...). These networks, too, interoperate because of IP. The collection of all of these interoperating networks is the Internet.
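As a small modern illustration (my own sketch, not from the original essay) of the point that machines on different networks interoperate because they all speak IP, the Python fragment below uses the standard socket library to open a TCP connection to a host by name; the IP layer takes care of whichever networks the packets must cross. The host name is only a placeholder.

    import socket

    host, port = "example.com", 80          # placeholder host on the public Internet
    address = socket.gethostbyname(host)    # resolve the name to an IP address
    print("connecting to", host, "at", address)

    # the IP layer routes the packets across whatever networks lie in between
    with socket.create_connection((host, port), timeout=10) as conn:
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(conn.recv(200).decode("ascii", "replace"))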
Up until 1990 the Internet was only a complicated and uninteresting text-based form of communication, and most of the people using the net were computer programmers, students, hackers, societies, government officials and a few artists interested in digital media.
Everything changed in the early 1990s when "Mosaic" appeared, a text and graphics based window (web browser) onto the net; this programme was simple to use. The basic structure was in simple page form: just click on a button, word or picture and you could cross half the world in seconds. It was also simple to construct a page, and over the last couple of years anyone who had a computer and an Internet account has created their own "Web page".
The growth of the Internet - the machines connected to the NSFNET backbone - has been extraordinary. In 1989, the number of networks attached to the NSFNET/Internet increased from 346 to 997, and data traffic increased five-fold. The latest estimate is that 200,000 to 400,000 main computers are directly connected to NSFNET, with perhaps a total of eleven million individuals able to exchange information freely. The Internet is still growing, and companies are developing new tools and programmes to speed up communications so that immense amounts of data can be transferred in seconds.
"The future of the 20th century, of the 21st century, will be the net. Its awesome. But on the net, you still have to have someone on the other side. The poor nerd who sits in front of the computer just talking to themselves - that's kind of sad. It's the contact that's important, interpersonal, interactive communication." [T.Leery (observer 29/5/94) p16]
Internet Cultures
Over the years since the Internet first began, many clubs, organisations, cultures and societies have grown and congregated on the net. This is probably because to many users it is a cheap (even free) form of worldwide communication, because the new technology fits with their ideas, and also because of the freedom of expression the Internet gives. No single government body or organisation owns the net and, because of its size, no one can fully govern and censor the Internet.
So called "hackers" also part of the "Cyberpunk" group, were one of the first groups of individuals known on the Internet, these were mostly male students studying computer science, trying to break into government computers or anywhere they were not supposed to be. Most hackers live by this set of rules, First, access to computers should be unlimited and total: "Always yield to the Hands-On Imperative!". Second, all information should be free. Third, mistrust authority and promote decentralisation. Fourth, hackers should be judged by their prowess as hackers rather than by formal organisational or other irrelevant criteria. Fifth, one can create art and beauty on a computer. Finally, computers can change lives for the better.
One group I came across in an article call themselves the "Extropians"; they want to be immortal and travel through space and time. They are also libertarians who want to privatise the oceans and air. One member, Jay Prime Positive, wants to upload his consciousness to a computer: "I'd probably want to spend most of my time in data space... I imagine having multiple bodies and multiple copies of myself. I have problems with gender identification, so I'd definitely have a female body in there somewhere".
The group has many ideas about the future. You have perhaps never considered the idea of setting loose molecule-sized robots in your body to clean out your arteries (see nanotechnology), or a floating free state banged together out of old oil tankers (similar to the sprawl described in Gibson's "Mona Lisa Overdrive"), a place where freedom and unrestrained intellect could reign and you could finally get the government and tax man off your back.
The Extropians want to go beyond the limits of nature and biology and move on up to the stars; they believe that computers have kick-started human evolution.
Chapter 2
Cyberspace
The term "Cyberspace" was first coined by the sci-fi writer William Gibson in his 1984 novel "Neuromancer". Gibson first identified the emergence of Cyberspace as the most recent moment in the development of electromechanical communications, telematics and virtual reality. Cyberspace, as Gibson saw it, is the simultaneous experience of time, space, and the flow of multi-dimensional, pan-sensory data: All the data in the world stacked up like one big neon city, so you could cruise around and have a kind of grip on it, visually anyway, because if you didn't, it was too complicated, trying to find your way to the particular piece of data you needed.
Cyberspace. "A con sensual hallucination experienced daily by billions of
legitimate operators, in every nation... A graphical representation of
data abstracted from the banks of every computer in the human system.
Unthinkable complexity. Lines of light ranged in the non space of the
mind, clusters and constellations of data. Like city lights, receding..."
- William Gibson, Neuromancer.
At the core of Cyberspace is the Internet.
The psychologist/guru Timothy Leary, interviewed by David Gale in 1991, is very clear about Cyberspace:
"What we're talking about is electronic real estate, a whole electronic reality. The problem we have is to organise the great continents of data that will soon become available. All the movies, all the TV, all the libraries, all recordable knowledge... These are the vast natural crude oil reserves waiting to be tapped. In the 15th century we explored the planet; now we must prepare once more to chart, colonise and open up a whole new world of data. Software becomes the maps and guides into that terrain".
The interesting thing about Cyberspace is the way it creates the idea of a community. Every subculture needs an image of an outsider's community to cling to, to run to. For the Cyberpunk, this community doesn't actually have a place: it can be accessed anywhere by modem, but it is the nearest thing to such a place on earth. Cyberpunk subculture is the first subculture which doesn't have a particular place of congregation. There are now hundreds of bulletin boards around the world which have a Cyberpunk style, where young cyberpunks discuss the latest hardware and software. Cyberspace is familiar to most people as the "place" in which a long-distance telephone conversation takes place. But it is also the treasure trove for all digital or electronically transferred information, and, as such, it is the place for most of what is now commerce, industry, and human interaction.
Cyberpunk History
Cyberpunk literature, in general, deals with unimportant people in technologically-enhanced cultural "systems". In Cyberpunk stories' settings, there is usually a "system" which dominates the lives of most "ordinary" people, be it an oppressive government, a group of large corporations, or a fundamentalist religion. These systems are enhanced by certain technologies, particularly "information technology" (computers, the mass media), making the system better at keeping those within it inside it. Often this technological system extends into its human "components" as well, via brain implants, prosthetic limbs, cloned or genetically engineered organs, etc. Humans themselves become part of "the Machine". This is the "cyber" aspect of Cyberpunk.
"Cyberpunk hit the front page of the New York Times when some young computer kids were arrested for cracking a government computer file. The Times called the kids "cyberpunks" From there, the performers involved in the high-tech-oriented radical art movement generally known as "Industrial" " [ R.U Sirius (Mondo 2000) 64 ]
In the mid-'80s Cyberpunk emerged as a new way of doing science fiction in both literature and film. The first book was "Neuromancer"; the most important film, "Blade Runner".
"What's most important to me is that Neuromancer is about the present. It's not really about an imagined future..." [William Gibson (MONDO 2000) 68]
William Gibson is widely considered to be the father of "Cyberpunk", dark novels about hi-tech computer bohemians and underground renegades. His first novel, "Neuromancer", bears the distinction of winning the Hugo, Nebula, and Philip K. Dick awards - the first to win all three.
William Gibson parlayed the success of his first SF 'Cyberpunk' blockbuster Neuromancer into a more complex, engaging novel in which the physical and virtual worlds are rapidly colliding. In his novel Count Zero, we encounter teenage hacker Bobby Newmark, who goes by the handle "Count Zero". Bobby, on one of his treks into Cyberspace, runs into something unlike any other AI (artificial intelligence) he's ever encountered - a strange woman, surrounded by wind and stars, who saves him from 'flatlining'. He does not know what it was he encountered on the net, or why it saved him from certain death.
Later we meet Angie Mitchell, the mysterious girl whose head has been 'rewired' with a neural network which enables her to 'channel' entities from Cyberspace without a 'deck' - in essence, to be 'possessed'. Bobby eventually meets Beauvoir, a member of a Voudoun/cyber sect, who tells him that the entity he actually met in Cyberspace was Erzulie, and that he is now a favourite of Legba, the lord of communication... Beauvoir explains that Voudoun is the perfect religion for this era, because it is pragmatic - "It isn't about salvation or transcendence. What it's about is getting things done."
Eventually, we come to realise that after the fracturing of the AI Wintermute, who tried to unite the Matrix, the unified being split into several entities which took on the character of the various Haitian loa, for reasons that are never made clear.
Now other writers like Bruce Sterling and Pat Cadigan have emerged. There is even an 'overground' Cyberpunk magazine called Mondo 2000, as well as a host of tiny desktop-published fanzines.
A fundamental theme running through most Cyberpunk literature is that (in the near-future Earth) commodities are unimportant. Since anything can be manufactured very cheaply, manufactured goods (and the commodities that are needed to create them) are no longer central to economic life. The only real commodity is information. Cyberpunk took the bleak, 'no future' landscape of punk rock and of post-apocalyptic movies like Blade Runner and Mad Max, and imagined a way to escape from the street-level violence these films referred to.
Neuromancer and Blade Runner together set the boundary conditions for emerging Cyberpunk: a hard-boiled combination of high tech and low life. As the William Gibson phrase puts it, "The street has its own uses for technology." So compelling were these two narratives that many people then and now refuse to regard as Cyberpunk anything stylistically and thematically different from them.
Literary Cyberpunk had become more than Gibson, and Cyberpunk itself had become more than literature and film. In fact, the label has been applied variously, promiscuously, often cheaply or stupidly. Kids with modems and the urge to commit computer crime became known as "cyberpunks" or "hackers"; however, so did urban hipsters who wore black, read Mondo 2000, listened to "industrial" pop, and generally subscribed to techno-fetishism. Gibson had become more than just another SF writer; he was a cultural icon of sorts.
[Gareth Branwyn] posted the following description of the Cyberpunk world view to the MONDO 2000 conference of the WELL (see glossary):
A) The future has imploded onto the present. There was no nuclear Armageddon. There's too much real estate to lose. The new battlefield is people's minds.
B) The megacorps are the new governments.
C) The US is a big bully with lackluster economic power.
D) The world is splintering into a trillion subcultures and designer cults with their own languages, codes, and lifestyles.
E) Computer-generated info-domains are the next frontiers.
F) There is better living through chemistry.
G) Small groups or individual "console cowboys" can wield tremendous power over governments, corporations, etc.
H) The coalescence of a computer "culture" is expressed in self-aware computer music, art, virtual communities, and a hacker/street tech subculture. The computer nerd image is passé, and people are not ashamed anymore about the role the computer has in this subculture. The computer is a cool tool, a friend, important, human augmentation.
I) We're becoming cyborgs. Our tech is getting smaller, closer to us, and it will soon merge with us.
J) [Some attitudes that seem to be related]
*Information wants to be free.
*Access to computers and anything which may teach you something about how the world works should be unlimited and total.
*Always yield to the hands-on imperative.
*Mistrust authority.
*Promote decentralisation.
*Do it yourself.
*Fight the power.
*Feed the noise back into the system.
*Surf the edges.
[(MONDO 2000) 65-66]
Cyberpunk Culture
Science fiction deals with issues as diverse as the clash between religious fundamentalism and the consumer society, abortion and the church, life support for the terminally ill, or the freedom of the individual in the age of on-line databases.
William Gibson's brave new world is seen as being in a state of permanent decay compared to "Cyberspace", the "virtual world" already in embryonic existence in the Internet global computer network. In Gibson's latest novel, Virtual Light, a pair of designer sunglasses holds all the data on plans for a property scam involving the rebuilding of post-quake San Francisco. Gibson's "heroes" are a handful of neo-punks and derelicts. His future world is a grim approximation of today's social and technological trends, a graphic debunking of the progress principle.
In the 20th century, the Net is only accessible via a computer terminal, using a device called a modem to send and receive information. But in 2013, the Net can be entered directly using your own brain, neural plugs and complex interface programs that turn computer data into perceptual events. In several places, reference is made to the military origin of the Cyberspace interfaces: "You're a console cowboy. The prototypes of the programs you use to crack industrial banks were developed for [a military operation]. For the assault on the Kirensk computer nexus. Basic module was a Nightwing microlight, a pilot, a matrix deck, a jockey. We were running a virus called Mole. The Mole series was the first generation of real intrusion programs." [Neuromancer].
"The matrix has its roots in primitive arcade games... early graphics programs and military experimentation with cranial jack" [Neuromancer].
Gibson also assumes that in addition to being able to "jack in" to the matrix, you can go through the matrix to jack in to another person using a "simstim" deck. Using the simstim deck, you experience everything that the person you are connected to experiences: "Case hit the simstim switch. And flipped into the agony of a broken bone. Molly was braced against the blank grey wall of a long corridor, her breath coming ragged and uneven. Case was back in the matrix instantly, a white-hot line of pain fading in his left thigh." [Neuromancer].
The matrix can be a very dangerous place. As your brain is connected in, should your interface program be altered, you would suffer. If your program is deleted, you would die. One of the characters in Neuromancer is called the Dixie Flatline, so named because he has survived deletion in the matrix. He is revered as a hero of the cyber jockeys: "'Well, if we can get the Flatline, we're home free. He was the best. You know he died brain death three times.' She nodded. 'Flatlined on his EEG. Showed me the tapes.'" [Neuromancer].
Incidentally, the Flatline doesn't exist as a person any more: his mind has been stored in a RAM chip which can be connected to the matrix.
Cyberpunk is fascinated by the media technologies which were hitting the mass market in the 80s. Desktop publishing, computer music and now desktop video are technologies taken up with enthusiasm by Cyberpunks.
The rapid evolution from video-games to virtual reality has been helped along by the hard core of enthusiasts eager to try out each generation of simulated experience. The multimedia convergence of the publishing industry, the computer industry, the broadcasting industry and the recording industry has a spot right at its centre called Cyberpunk, where these new product experiments find a critical but playful market.
Cyberpunk is a product of the huge batch of technical and scientific universities created in the US to service the military-industrial complex. Your typical Cyberpunk is white, middle class, and technically skilled. They are a new generation of white-collar worker, resisting the yoke of work and suburban life for a while. They don't drop out, they jack in. They are an example of how each generation, growing up with a given level of media technology, has to discover the limits and potentials of that technology by experimenting with everyday life itself.
In the case of Cyberpunk, the networked world of Cyberspace, the interactive world of multimedia and the new sensoria of virtual reality will all owe a little to their willingness to be the guinea pigs for these emergent technologies.
There is also a tension in Cyberpunk between the military that produces technology and the sensibility of the technically skilled individual trained for the high tech machine. Like all subcultures, Cyberpunk expresses a conflict. On the one side is the libertarian idea that technology can be a way of wresting a little domain of freedom for people from the necessity to work and live under the constraints of today. On the other is the fact that the technologies of virtual reality, multimedia, Cyberspace would never have existed in the first place had the Pentagon not funded them as tools of war.
On the one hand it is a drop-out culture dedicated to pursuing the dream of freedom through appropriate technology. On the other it is a ready market for new gadgets and a training ground for hip new entrepreneurs with hi-tech toys to market.
Cyberpunk's fast crawl to the surface has included not only pop music (industrial, post-industrial, techno pop, etc.), but also television (MTV, Saturday morning cartoons, the late "Max Headroom" series, etc.) and movies ("Total Recall," "Lawnmower Man," the Japanese "Tetsuo" series, etc.). There is also a bi-monthly magazine called Wired, aimed in part at the Cyberpunk set and financed in part by MIT Media Lab director Nicholas Negroponte, as well as the principals of Mondo 2000.
"The micro technology that, in Cyberpunk, connects the streets to the multinational structures of information in Cyberspace also connects the middle-class structures of information in Cyberspace also connects the middle-class country to the middle-class city".
[S.R Delany (Flame Wars) 198]
Cyberpunk tends to fill some of us with uneasiness and even fear. The X Generation is made up of Slackers and Hackers (a.k.a. Phreakers, Cyberpunks, and Neuronauts). They are Ravers and techno-heads. According to most demographers, we are more street smart and pop-culture literate, and less versed in the classics, ethics, and formal education (especially in areas like geography, civics, and history: areas where we appear to be, in short, an academic disgrace). We are said to have less ambition, less idealism, fewer morals, smaller attention spans, and less discipline than any previous generation of this century. We are the most aborted, most incarcerated, most suicidal, and most uncontrollable, unwanted, and unpredictable generation in history. (Or so claim the authors of 13th Generation.)
"The work of cyberpunks is paralleled throughout eighties pop culture : in rock video ; in the hacker underground; in the jarring street tech of hip-hop and scratch music...."
[Bruce Sterling (MONDO 2000) 68]
Cyberpunk and Technology
In Gibson's world, Cyberspace is a consensual hallucination created within the dense matrix of computer networks. Gibson imagines a world where people can directly jack their nervous systems into the net, vastly increasing the intimacy of the connection between mind and matrix. Cyberspace is the world created by the intersection of every jacked-in consciousness, every database and installation, every form of interconnected information circuit - in short, human or inhuman.
Cyberspace is no longer merely an interesting item in an inventory of ideas in Gibson's fiction. In Cyberspace: First Steps, a collection of papers from The First Conference on Cyberspace, held at the University of Texas, Austin, in May, 1990, Michael Benedikt defines Cyberspace as "a globally networked, computer-sustained, computer-accessed, and computer-generated,
multidimensional, artificial, or 'virtual' reality." He admits "this fully developed kind of Cyberspace does not exist outside of science fiction and the imagination of a few thousand people;" however he points out that "with the multiple efforts the computer industry is making toward developing and accessing three-dimensionalized data, effecting real-time animation, implementing ISDN and enhancing other electronic information networks, providing scientific visualisations of dynamic systems, developing multimedia software, devising virtual reality interface systems, and linking to digital interactive television . . . from all of these efforts one might cogently argue that Cyberspace is 'now under construction.'"
Cyberpunk in TV and Cinema
One Film "WAR GAMES" was based on a college student who hacked into the Us defence computer and started a simulation program of a nuclear attack on Russia, which looked like the real thing to the Russians. In the near future a British film call "Hackers" is to be released, directed by Iain Softley (BackBeat). Also soon to be released is "The Net" starring Sandra Bullock (Speed) and a Gibson Cyberpunk thriller called "Johnny Mnemonic" a $26 million science fiction movie based on his short story, and starring Keanu Reeves as the main character. Directed by Robert Longo. The film also stars Ice-T, Dolph Lundgren, Takeshi Kitano (of the cult "Sonatine"), Udo Kier, Henry Rollins and Dina Meyer. William Gibson also wrote the screenplay of his original story which was published in the anthology "Burning Chrome". "Johnny Mnemonic" goes into wide release in Dec 1995.
The film Blade Runner, loosely based on Dick's novel Do Androids Dream Of Electric Sheep, is set in early 21st-century Los Angeles. Amid the enormous human cultural diversity evident, five synthetically designed organic robots - replicants - have escaped their slave status on an off-world colony. These replicants are the property of the Tyrell Corporation, and have extremely high levels of physical and mental development. To ensure that the replicants do not develop the emotional capacity of their human masters, the Tyrell Corporation genetically engineers a four-year life span. Tyrell Corporation, on the basis of this slavery, uses the marketing slogan 'More Human Than Human'. And like those who settled earth's New World in the seventeenth century, they expect slave labour. Whilst this commentary is certainly true, a further elaboration can be made on the technological nature of the replicants; they were, for all intents and purposes, a new life-form.
"Max Headroom was the most amazingly Cyberpunk thing that's ever been on network TV. Max started out as an animated VJ for a British music-video channel. In order to introduce him, a short film was made.....Entertainment with all the corners filled in . I think that's what a lot of Cyberpunk writing is .......Television is the greatest Cyberpunk invention of all time" . [Steve Roberts (MONDO 2000) 76]
Theories
One man who has his own theory about the net is Kevin Kelly (executive editor of Wired).
He combines ideas from chaos theory, cybernetics, current thinking on evolution and research into computerised artificial life with his own experience of on-line culture. His main argument is that we are in 'the Neo-Biological Era'. The line between the made and the born is being blurred; machines are becoming biological and the biological is being engineered.
The reason is that we have reached the limits of industrial thinking. Linear cause and effect logic is no good for figuring out the hugely complex systems (phone networks, global economies, the Internet) that we have created, so we've begun to look instead at natural systems. After years of tapping mother nature for food and raw materials, we're now mining her for ideas.
One scenario of the Internet he is playing with is that the net might die. "You can imagine a situation in which there's 200 million people on the Internet trying to send E-mail messages and the whole thing just grinds to a halt. Its own success just kills it. In the meantime, a telephone company steps in and offers E-mail for $5 a month, no traffic jams, and it's reliable. I hope it doesn't happen but it's a scenario one has to consider".
George Gilder of the Hudson Institute stated that there is about to be a revolution, born of nothing less than sand, glass and air, and yet it is one which will have an incalculable effect upon us all.
From sand will come microchips offering super computing power on slices of silicon smaller than a thumbnail and cheaper than a book.
From glass will be fashioned fibre-optic cables that will flash information of any size at lightning speed.
In the air, frequency bandwidths of practically limitless size and available at virtually no cost will permit the wireless transmission of any kind of digital data from anywhere to anywhere, instantly.
Timothy Leary, the man who coined the phrase "turn on, tune in and drop out" in the '60s, thinks that the future of the 20th and 21st century will be the net. "It's awesome. But on the net, you still have to have someone on the other side. The poor nerd who sits in front of the computer just talking to themselves - that's kind of sad. It's the contact that's important, interpersonal, interactive communication. We're hard-wiring global consciousness, we're moving towards a global mind, a global village. Soon we'll develop a global language. People will communicate with pictures, not words".
Jean Baudrillard described the emergence of a new postmodern society organised around simulation, in which models, codes, communication, information, and the media were the demiurges of a radical break with modern societies. Baudrillard's postmodern universe was also one of hyperreality, in which models and codes determined thought and behaviour, and in which media of entertainment, information, and communication provided experience more intense and involving than the scenes of banal everyday life. In this postmodern world, individuals abandoned the 'desert of the real' for the ecstasies of hyperreality and a new realm of computer, media, and technological experience.
Visions of the Future
Gibson's vision is of a multi-dimensional space inhabited by vast "data structures", where glowing and pulsing representations of data flow within the ubiquitous computer/telecommunications networks of military and corporate memory banks (see Johnny Mnemonic).
During the '80s, the Cyberspace vision was being fleshed out in the workshops and laboratories of silicon space; ways of seeing it, being in it, touching and feeling it, flying through it and hearing it were being developed. Practical, working "virtual reality" machines grew out of that vision, and some (such as W Industries' Virtuality and VPL's Reality Built For Two) were on sale in both the US and Britain by 1990. By 1994 cheap headsets and programmes were available to almost anyone.
The Cyberpunk future includes the likes of a computer-generated artificial environment known as virtual reality. (Not so futuristic, perhaps: VR arcade games are already here.) It includes dreams of virtual sex. (Not so futuristic, either: text based "sex" already exists on computer networks. Call it Phone Sex: The Next Generation.) It includes further developments in robotics, artificial intelligence, even artificial life. More to the point of punk, it includes "smart drugs," legal substances that allegedly increase mental capacity.
" someday be possible for mental functions to be surgically extracted from the human brain and transferred to computer software in a process he calls "transmigration". the useless body with its brain tissue would then be discarded, while consciousness would remain stored in computer terminals, or for the occasional outing, in mobile robots".
[Hans Moravec, mind children : the future of robot and human
intelligence(Cambridge, MA, 1988),108]
Cyberpunk fiction characters are hard-wired (see Johnny Mnemonic), jack into Cyberspace, plug
f:\12000 essays\sciences (985)\Computer\William Henry Gates III.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
William Henry Gates III
Chairman and Chief Executive Officer
Microsoft Corporation
William (Bill) H. Gates is chairman and chief executive officer of Microsoft Corporation, the leading provider, worldwide, of software for the personal computer. Microsoft had revenues of $8.6 billion for the fiscal year ending June 1996, and employs more than 20,000 people in 48 countries.
Background on Bill
Born on October 28, 1955, Gates and his two sisters grew up in Seattle. Their father, William H. Gates II, is a Seattle attorney. Their late mother, Mary Gates, was a schoolteacher, University of Washington regent and chairwoman of United Way International.
Gates attended public elementary school and the private Lakeside School. There, he began his career in personal computer software, programming computers at age 13.
In 1973, Gates entered Harvard University as a freshman, where he lived down the hall from Steve Ballmer, now Microsoft's executive vice president for sales and support. While at Harvard, Gates developed a version of the programming language BASIC for the first microcomputer -- the MITS Altair.
In his junior year, Gates dropped out of Harvard to devote his energies to Microsoft, a company he had begun in 1975 with Paul Allen. Guided by a belief that the personal computer would be a valuable tool on every office desktop and in every home, they began developing software for personal computers.
Gates' foresight and vision regarding personal computing have been central to the success of Microsoft and the software industry. Gates is actively involved in key management and strategic decisions at Microsoft, and plays an important role in the technical development of new products. A significant portion of his time is devoted to meeting with customers and staying in contact with Microsoft employees around the world through e-mail.
Under Gates' leadership, Microsoft's mission is to continually advance and improve software technology and to make it easier, more cost-effective and more enjoyable for people to use computers. The company is committed to a long-term view, reflected in its investment of more than $2 billion on research and development in the current fiscal year.
As of December 12, 1996, Gates' Microsoft stock holdings totaled 282,217,980 shares, selling at $95.25 as of Feb. 20, 1997, giving a rough estimate of total worth: $26,881,262,595.
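(As a quick check of that figure - my own arithmetic, not the essay's - the estimate is simply the shares multiplied by the quoted share price, which the short Python snippet below reproduces.)

    shares = 282_217_980               # Gates' reported holdings, Dec. 12, 1996
    price = 95.25                      # quoted share price, Feb. 20, 1997
    print(f"${shares * price:,.0f}")   # prints $26,881,262,595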
In 1995, Gates wrote The Road Ahead, his vision of where information technology will take society. Co-authored by Nathan Myhrvold, Microsoft's chief technology officer, and Peter Rinearson, The Road Ahead held the No. 1 spot on the New York Times' bestseller list for seven weeks. Published in the U.S. by Viking, the book was on the NYT list for a total of 18 weeks. Published in more than 20 countries, the book sold more than 400,000 copies in China alone. In 1996, while redeploying Microsoft around the Internet, Gates thoroughly revised The Road Ahead to reflect his view that interactive networks are a major milestone in human history. The paperback second edition has also become a bestseller. Gates is donating his proceeds from the book to a non-profit fund that supports teachers worldwide who are incorporating computers into their classrooms.
In addition to his passion for computers, Gates is interested in biotechnology. He sits on the board of the Icos Corporation and is a shareholder in Darwin Molecular, a subsidiary of British-based Chiroscience. He also founded Corbis Corporation, which is developing one of the largest resources of visual information in the world - a comprehensive digital archive of art and photography from public and private collections around the globe. Gates has also invested with cellular telephone pioneer Craig McCaw in Teledesic, a company that is working on an ambitious plan to launch hundreds of low-orbit satellites around the globe to provide worldwide two-way broadband telecommunications service.
In the decade since Microsoft went public, Gates has donated more than $270 million to charities, including $200 million to the William H. Gates Foundation. The focus of Gates' giving is in three areas: education, population issues and access to technology.
Gates was married on Jan. 1, 1994 to Melinda French Gates. They have one child, Jennifer Katharine Gates, born in 1996.
Times are changing fast. Three years ago, while President Bush's camp was mounting a direct-mail campaign unchanged from that of Reagan before him, the Clinton camp, host to a horde of so-called "computer whiz kids," all in their twenties, was developing a completely new set of election tactics, using personal computer networks and electronic mail, or "e-mail". Many of these twenty-some-odd-year-old mini-Clintons, who now occupy the White House, show up for work in sneakers, T-shirts, and jeans, and spend each day, from morn till night, tapping away at personal-computer keyboards. As I myself have often experienced of late, when you exchange business cards with an American you nearly always see, imprinted on the card along with the phone and fax numbers, an e-mail address as well. When the person inquires, "What is your e-mail address?" and you reply, "I don't have one yet," you can catch the briefest glimmer in his eye, which seems to say, "A bit behind the times, aren't we?" The darling of this multimedia age is a man named Bill Gates. Won over by then Vice-Presidential candidate Gore's promise to vigorously promote the "information superhighway," Gates, declaring himself a representative of Silicon Valley, donated a large amount of money to the Clinton campaign. The support of Bill Gates boosted the popularity of the Democratic Party. This year, Forbes Magazine's traditional annual list ranked this same Bill Gates, head of Microsoft Corp., as the world's richest human being. Myths and legends about this youthful success story abound; he has already published an autobiography which, along with a critical biography of Gates, is being read by people all over the world. He is, in short, a super-famous man. Gates' rear-echelon e-mail activities have been reprinted not only in America and Europe, but even, in translation, in Japanese newspapers. Gates has been known for some time as a political liberal and a strong supporter of the Democratic Party; lately, however, the word about town is that Gates and the Democratic Party have had a falling-out. The U.S. Department of Justice under the Clinton administration, citing doubts about the legality under U.S. antitrust laws of attempted buy-outs of other companies by Microsoft, has put such purchases on hold, causing them to fall through and, it is said, greatly angering Bill Gates.
Gates: "modern-day Rockefeller"
Gates, an object of admiration for most Americans as a "modern-day Rockefeller," is also, it seems, an object of envy who arouses fierce jealousy: charges are currently being brought against him for violation of antitrust laws. Simply put, the Justice Department, under the traditional notion that allowing software makers to merge with the company which makes their computer operating systems to form a single giant company is less desirable than keeping them separate, is moving to block Gates' path. Some 80% of the personal computers in the world today use the MS-DOS or Windows operating systems - both Microsoft products. If you purchase a piece of software, such as a word processor, and try to run it on your personal computer, you will be unable to run the program unless it is first able to connect with an operating system. Because of this judgment that it is best to keep separate that which ought to be consolidated, it is difficult to see how the Internet, or any other information network, can in future be integrated into a single, unified whole. The specter of an antitrust law born in the age of Standard Oil has risen once again to haunt us. As a rule, disputes such as this are amicably settled by lobbyists. Astoundingly, however, Bill Gates had not a single lobbyist in Washington. Absorbed in his work, it seems, he had neglected to devote any attention to lobbying activities. Then, too, his is such a new industry that it simply hadn't had time to hire lobbyists and launch a carefully planned program of lobbying activities. Thus it appears that Gates' split with the Democratic Party is a fait accompli.
"The Road Ahead"
In "The Road Ahead," a book-and-CD-ROM package, Gates "predicts the future for you" (as Newsweek's cover put it). And, surprise!, things look bright indeed to America's richest guy. The "information highway" -- Gates generally clips it to a plain "the highway" -- isn't here yet; the Internet is only a genetic precursor, according to Gates. But when "the highway" itself arrives at our doors, with its ubiquitous high-bandwidth digital video feeds, our lives will undergo a seismic change for the better.
This "World of Tomorrow" prognostication game is old enough hat that even Gates admits many of his predictions will soon look comical. The CD-ROM's video portrait of "the highway" circa 2004 -- a world of heavy makeup, bad Muzak and super-efficient cappuccino bars -- will make for good party entertainment a decade hence. So will its wide-eyed virtual-reality walk-through of the still-unfinished Gates mansion, the Hearst Castle of the '90s.
"The Road Ahead," like an AT&T ad, is built around a ritual repetition of the word "will." I used the CD-ROM's "full text search" function and, though it wouldn't tell me how many times "will" appears, it reported that the word turns up on just about every page.
You will use "the highway" to "shop, order food, contact fellow hobbyists, or publish information for others to use." You will select how, when and where you wish to receive your news and entertainment. You will benefit from lower prices and the elimination of middlemen that the network's "friction-free" marketplace allows. Your wallet PC will identify you at airport gates and highway tollbooths. Your children will tap a torrent of homework helpers.
As the CD-ROM narrator breathlessly puts it, "The information flow into your home will be incredible!" ("Get the mop, Martha!")
At some point, all these "wills" change in character from predictive to prescriptive, and Gates' friendly if cool tone acquires an undercurrent of coercion. The promise of "the highway," according to Gates, is that it will allow us all to control our destinies more fully. The not-so-well-buried subtext of "The Road Ahead," though, tells a different story -- of Gates' and Microsoft's desperate struggle to maintain control of the high-tech marketplace.
"The Road Ahead" won't satisfy readers curious for insights into Chairman Bill's psyche; it mostly has the bland, confident air of an annual report. But in its very first chapter -- next to a cute high-school picture of Gates and Paul Allen scrunched over an old teletype terminal -- Gates does give one clue to his mindset. He was attracted to computers as a kid, he explains, because "we could give this big machine orders and it would always obey."
It's easy to jump on a line like that and make Gates out as some kind of silicon-chip Nazi. But of course he's only being honest about the attraction computer science has always held for engineers, enthusiasts and precocious children: the appeal of instantly responsive, utterly submissive systems that can be gradually massaged toward perfection.
Though digital technology invites its creators into a world of absolute control, the computer market remains a place of frustrating chaos. Gates long ago adopted the strategy that made Microsoft's fortune: ship early with imperfect products, seize market share and then upgrade toward an acceptable level of performance. This drives engineers nuts, but it's sharp business, and it has kept the company on top of the software industry -- until now.
Conclusion and personal ideas:
William Henry Gates III, as you have read, is quite an incredible man. His intelligence and insight into the future show how "ahead of his time" he is. In almost all of our daily lives (whether you know it or not), Gates has done something, influenced someone, or invented some new software that is relevant to what you do. Whether you are a news reporter or a bagger at a grocery store, a high-tech attorney or a low-tech gardener, it seems that not a day goes by without some mention of technology, computers, or what's in store for us.
He is quite a pioneer in his field, and has brought a new realization to many regarding the future. In fact, his 1995 best-selling book is titled "The Road Ahead". This man has such power over our society and our country that his ideas are often met with resistance. Many people believe that it is terrible that someone with ideas and goals like his should have so much power and say in our everyday life.
It is obvious to many that he tells the truth when he talks about the future and how he thinks it will be, because with his economic stature and powerful ideas he will be able to change the world.
I believe he is one of the most magnificent men in our recent history, to be compared to Hitler, Rockefeller, Martin Luther King, and many other influential people. He has influenced me personally, just with the use of computers in our everyday lives (more in mine than others'), and the majority of our U.S. population. His presence in our economy, society, and life cannot be ignored, and I believe that this will become even more evident as we move into the 21st century.
f:\12000 essays\sciences (985)\Computer\Windows 95 Beats Mac.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Over the years, there has been much argument over which computer platform to buy. The two contenders in this competition have been the PC, with its Windows environment, and the Macintosh. Now, with the successful release of Windows 95 for the PC, the major arguments for each side have come down to these: hardware configuration, networking capabilities, and operating system.
The first argument to look at between the PC and Mac platforms has to do with hardware configuration. Before Windows 95, installing and configuring hardware was like pulling teeth. The instructions given to help install hardware were too complicated for the average user. There was also the issue of compatibility between the large number of different hardware setups available in the PC world. Is a particular board going to work with my PC? With Windows 95, these problems were alleviated with plug and play technology. With plug and play compatible boards, the computer detects and configures the new board automatically. The operating system may also recognize some hardware components on older PCs. Mac users will claim that they have always had the convenience of a plug and play system, but the difference shows in the flexibility of the two systems.
Another set of arguments Mac users make in favor of their systems over PCs concerns multimedia and networking capabilities. Mac users gloat that the Mac has networking technology built into the system; even if a user does not use it, the network is included. They cite this against PC users, and PC users hate the fact that they need to stick a card in their computers to communicate with any other computer. With Windows 95, the Mac network gloaters are silenced: Windows 95 includes built-in network support, and any network will work properly. The Mac users also claim their systems have speech, telephony, and voice recognition, which the PC user does not have. In truth, the promised building blocks for telephony control do not yet exist, and I think speech is not a strong point for the Mac.
In the world of computers, people cannot stand still for too long without getting passed by. Windows 95 now threatens the remaining assets the Mac has in capturing the interest of consumers, because of hardware configuration, communication between computers, and the differences between the operating systems on the two platforms. Almost any argument one could give in defense of the Mac does not carry nearly as much bite as it did before Windows 95 arrived. PC users have something to be proud of.
f:\12000 essays\sciences (985)\Computer\Windows 95 or NT.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
One may ask oneself or another: which operating system will better fill my needs, Windows 95 or Windows NT version 3.51? I will look at both operating systems and compare the qualities of each one in price, performance, stability and ease of use. The final results will give one a clear view of the superior operating system for years to come.
As anyone who keeps up with the computer industry already knows, Microsoft Windows has been around for a long time. The majority of all PC users use some type of Windows for their working environment. Microsoft has spent a great deal of time trying to make the supreme operating system, and in doing so it has created two of the most debated systems available to the general public in this day and age. However, each one of these operating systems has its good side and its bad side.
Windows NT 3.51 was originally created for business use, but has ended up being more widely available to the average PC user at home. Windows 95 was developed for the sole purpose of being an alternative to Windows NT, but has ended up in the workplace more than in the home. Windows 95 carries an average price of ninety-five dollars in stores, which makes it an expensive system, though worth the money. On the other hand, Windows NT 3.51 carries a price tag of three hundred and forty-nine dollars, making it very expensive but also worth every penny.
Windows 95 is much easier to use than Windows NT. It was designed to give the PC user an easier time navigating through complex tasks, which is one of the main reasons why people would rather buy the less expensive operating system than the more expensive Windows NT. Another one of the reasons that Windows 95 is more popular is its simple graphical user interface, otherwise known as the GUI. Windows 95 also carries an option that Windows NT does not: PnP, or Plug and Play, whereby the operating system will install new hardware, including hardware added at a later date; Windows NT does not carry this very useful feature. If one has ever tried to install a new peripheral, it can be a headache alone trying to decipher the instruction manual that comes along with the device. Windows 95 will do this on its own; one of the downfalls is that the device must be less than six to eight months old and carry the PnP logo.
Windows NT 3.51 was developed more for business applications (e.g., databases, spreadsheets, word processing and programming). In the long run, though, Windows NT is less susceptible to system crashes. Windows NT does not carry the same graphical user interface as Windows 95; it looks more like Windows 3.x, so it is a little more difficult to navigate. Windows NT does, however, have the ability to multitask (meaning to have more than one application open at a time). Windows 95 also carries the ability to multitask, but loses a great deal of system performance in doing so. Windows NT is also the only one of the two to come pre-loaded with level-two (C2) government security standards. Windows NT was designed to be used in a network; Windows 95 was also designed for a network, but Windows NT does a much better job of handling a network environment.
Conclusion
In the race for the best operating system, Microsoft is surely the leader in the personal computer industry. Microsoft has proven that it can meet anyone's needs by releasing two different operating systems, each with its own benefits. So when it comes down to deciding which level of computing to pursue, one has to decide what kind of computing one will be doing, how much money one wants to invest in software, and whether one is ready to take the leap into the future and upgrade.
f:\12000 essays\sciences (985)\Computer\Windows 95 Skills Checklist.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Windows 95 Skills Check
Please do each of the following tasks. If you have a question on how to do any of them, do not hesitate to ask your instructor for assistance. This skills check is provided for you to measure how well you can perform the tasks that were covered in class thus far.
1. Start the computer, go into "Safe Mode" in Windows 95.
2. Shut down the computer in the correct manner.
3. Restart your computer and go into normal mode in Windows 95.
4. Open up the Taskbar Properties Sheet and check "Auto Hide."
5. Change the date and the time on your computer to read Jan 1, 2000 and 12:00 P.M.
6. Reset the date and time to the correct settings.
7. Using the Find feature in the Start Button, find the files Command.dos and Config.dos. What directories are they located in?
8. Start any five programs (sol.exe, cal.exe, clock.exe, quicken, etc.).
9. Maximize each of the started programs.
10. Minimize each of the started programs.
11. Restore each program, one at a time, using the Taskbar.
12. Close each program, one at a time, using the Taskbar.
13. Open up the Explorer.
14. Make sure the Toolbar is activated.
15. Change the View: use all of the options (large icon, small icon, detail, list, etc.).
16. While in the Detail View Mode, order files from any directory from top to bottom first by name, then by date, then by size.
17. Copy any file and place it on the desktop.
18. Make a shortcut of any game or accessory file and place it on the desktop.
19. Make a shortcut of drive A: and place it on the desktop.
20. Move the Taskbar to the right side, the top, and then return it to its original position.
21. Open Microsoft Word
22. Minimize, Maximize and Restore Microsoft Word.
23. Close Microsoft Word.
24. Open Explorer.
25. Open up any folder that has more than ten files.
26. Using the mouse, practice selecting one file, random files, and sequential files (control and shift buttons in conjunction with the mouse).
27. Close Explorer.
f:\12000 essays\sciences (985)\Computer\Windows 95 the O[S of the future.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Windows 95 the O/S of the future
The computing world is changing at a breakneck pace. People are looking for computers to be
easy to use, and to make life easier for them. Computer manufacturers and software developers
have started to tailor computers and programs to fit the needs of the new "computer age".
Graphical user interface (GUI) software began to make computing easier, and people who never
dreamed of owning computers began to buy them. The Macintosh was one of the first GUI computers
to hit the market, but it was not IBM compatible, so it did not take over the mainstream of the
computer industry. Since most computers were being made to fit the IBM-compatible standard,
Microsoft saw the need to replace DOS (Disk Operating System) with something easier to use.
That is when it developed Windows, which covered the difficult-to-use DOS with a new face that
made computing easier. The first Windows was a start in the right direction. In an effort to
make computing meet the needs of the public, Microsoft went on to develop Windows 95.
Windows 95 has the appearance of being a completely user-friendly operating system, and as far
as the average user is concerned, it pretty much is. Its compatibility with most hardware makes
it easy for someone to upgrade their computer. The desktop is designed so the user has
point-and-click access to all their open and closed programs. Utilizing the 32-bit programming
it was written with, users are able to work with more than one program at a time and move
information between programs. This gives users the freedom they need to begin to explore the
world of computing without having to learn all the "computer stuff".
Today everyone wants the fastest computer with the best monitor and the fastest modem. Before
Windows 95 was developed, this was an interrupt-address nightmare: people didn't know which
jumpers needed to go where to make their hardware work, or why their CD-ROM wouldn't work after
they changed their sound board. Now most hardware peripherals have their configuration built
into a chip that communicates with Windows 95 to find out where the device needs to put itself
in the address map. This allows users to have fancy big-screen monitors, to connect to the
Internet with high-speed modems, and to put in faster video cards that use all the nice
Windows 95 features, making their computing less complicated.

Windows 95 is set up with novice users in mind. As with Windows 3.x, it has boxes that open up
with the program inside, called windows. These windows are used to make computing more engaging
for the user; no one wants to look at a screen with just plain text anymore. Before a window is
opened, it is represented by an icon. Double-clicking this icon with the mouse pointer will
open the application window for the user to work in. Once the window has been opened, all
visible functions of the program are performed within it. At any time the window can be shrunk
back down into an icon, or made to fill the entire screen. For all practical purposes the user
has complete control over his windows.

Since more than one window can be open at a time, the user can work with more than one program,
and being able to work with more than one program brings out other special features of
Windows 95. In a regular DOS system only one program can be open at a time. With previous
versions of Windows more than one program could be open, but they did not work well together.
Since Windows 95 is a 32-bit system, it manages memory addresses in a way that makes it look as
though your programs are running simultaneously, which makes it easier to share information
between programs. For example (I run Windows 95), while I am writing this paper in a word
processor, I am logged onto the Internet and have five different programs running. I can move
information from the Internet, or any other open program, into this paper without stopping
anything else, something entirely impossible in DOS.

Some people think that because they never see DOS anymore, it is not there. This could not be
farther from the truth. DOS is alive and well, hidden under the Windows 95 curtain; unless the
user wants to use DOS, though, there is no reason to even bother with it. In Windows 95, DOS
(version 7) has a few added goodies that some users enjoy. The biggest one is being able to
open Windows applications by typing the program file name at the DOS prompt. Another is being
able to run more than one DOS application at a time. This does not work as well as with
Windows applications, but it has a similar effect. DOS can be used alone, outside of
Windows 95, as before, or it can be opened in a window on the desktop like a normal Windows
program and manipulated in size and style.

The desktop is where the icons and windows discussed above live. In older versions of Windows
the icons lived in the Program Manager; in Windows 95 they live under the Start button. Once
the Start button is clicked, it displays a pop-up menu, and moving the mouse pointer through
the pop-up menu gives you access to the different programs available. Icons can also be moved
onto the desktop itself; these are called shortcuts. Double-clicking a shortcut will open the
program the shortcut represents. Shortcuts can be linked to a program or a file, and can be
moved to any position on the desktop the user likes. You can also change the picture of the
icon to any icon picture you have available. The desktop can be fashioned in any way the user
likes: colors and background pictures can be changed, and even the colors and thickness of the
window outlines and menus can be changed. While programs are open on the desktop, they are
displayed as buttons on the Task Bar at the bottom of the screen. One option with the Task Bar
is that it may be moved to any of the four sides of the screen. The buttons have a picture and
a word identifier on them so the user knows which button is for which program. Clicking once on
a button will switch to the program it represents, which makes it easier to switch between
programs. This just about gives the user total control over his computer, which is what most
users want.
The ease of use is what makes Windows 95 appealing to the "modern" computer user. In time
Microsoft will improve the reliability of Windows 95, making it even easier to work with.
Because it is the most complete and user-friendly IBM-compatible operating system on the market,
I feel that Windows 95 will be the dominant operating system for several years to come.
f:\12000 essays\sciences (985)\Computer\Windows 95.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Windows 95 may very well be the most talked about software release in history. With more
people than ever using personal computers, and given Microsoft's dominance in this still
growing market, Mr. Gates' newest offering has caused quite a stir. As with any new product
in this ultra-competitive industry, Windows 95 has come under intense scrutiny. Advocates
of the new operating system applaud its new features and usability, while its opponents
talk about the similarities to Apple's operating system. As I have never used an Apple
computer, I can't address this point, but I will attempt to outline some of the more
interesting "new" features of Windows 95. Arguably the most welcome innovation Win 95
offers is the "task bar". Use of the task bar eliminates the need to navigate through
several open application windows to get to the one you need. When you first start an
application, a corresponding button appears on the task bar. If after opening other windows
you need to return to the original window, all you need do is click on the application's
button on the task bar and the appropriate window will come to the fore. According to Aley,
"the most gratifying, and overdue, improvement is Windows 95's tolerance for file names in
plain English" (29-30). Traditionally, users had to think of file names that summed up
their work in eight letters or less. This was a constant problem because frequently a user
would look at a list of files to retrieve and think "now what did I save that as?". Those
days are over. Windows 95 will let the user save his or her work with names like "New
Speech" or "Inventory Spreadsheet No. 1", making the contents of those files obvious. Much
to the annoyance of software developers, Windows 95 incorporates many features that
previously required add-on software. One such feature is the Briefcase, a program for
synchronizing the information stored on a user's desktop and notebook computers. Keeping
track of which files were the most recently updated was a big problem. As Aley puts it,
"Which copy of your speech for the sales conference did you work on last, the one in the
laptop or the one in the desktop?" (29-30). One solution was to use programs like Laplink
which would analyze which copy of a file was updated last. Now that Windows 95 provides
this utility, there is no need to buy the add-on software. While mice have always come with
two or even three buttons, most programs have only provided for the use of the left. With
Windows 95 there is finally a use for the right. "Clicking it calls up a menu of commands
that pertain to whatever the cursor is pointing at"(Aley 29-30). Clicking on the background
will open a window that will allow you to change the screen savers and wallpaper. Clicking
on an icon that represents a disk drive will bring up statistics about that drive. To use
Aley's words, "Windows 95 is still clearly a work in progress" (29-30). The software
included to let a user connect to The Microsoft Network cannot be used yet because there is
no Microsoft Network. The dream of plug-and-play compatibility for PCs has not yet been
realized, although in fairness, part of the responsibility for that lies with hardware
manufacturers. However, even with these drawbacks, Windows 95 offers many much needed and
useful new features.
Works Cited
Aley, James. "Windows 95 and Your PC." Fortune 3 Apr. 1995: 29-30.
James Connell
f:\12000 essays\sciences (985)\Computer\Windows NT vs Unix as an operating system.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Evolution & Development History
In the late 1960s a combined project between researchers at MIT, Bell Labs and General Electric led to the design of a third-generation computer operating system known as MULTICS (MULTiplexed Information and Computing Service). It was envisaged as a computer utility, a machine that would support hundreds of simultaneous timesharing users. They envisaged one huge machine providing computing power for everyone in Boston. The idea that machines as powerful as their GE-645 would be sold as personal computers costing only a few thousand dollars just 20 years later would have seemed like science fiction to them. However, MULTICS proved more difficult to implement than imagined, and Bell Labs withdrew from the project in 1969, as did General Electric, which dropped out of the computer business altogether.
One of the Bell Labs researchers (Ken Thompson) then decided to rewrite a stripped-down version of MULTICS, initially as a hobby. He used a PDP-7 minicomputer that no one was using and wrote the code in assembly language. The result was initially a stripped-down, single-user version of MULTICS, but Thompson actually got the system to work, and one of his colleagues jokingly called it UNICS (UNiplexed Information and Computing Service). The name stuck, though the spelling was later changed to UNIX. Soon Thompson was joined on the project by Dennis Ritchie and later by his entire department.
UNIX was moved from the now obsolete PDP-7 to the much more modern PDP-11/20 and then later to the PDP-11/45 and PDP-11/70. These latter two computers had large memories as well as memory-protection hardware, making it possible to support multiple users at the same time. Thompson then decided to rewrite UNIX in a high-level language called B. Unfortunately this attempt was not successful, and Ritchie designed a successor to B called C. Together, Thompson and Ritchie rewrote UNIX in C, and C has dominated system programming ever since. In 1974, Thompson and Ritchie published a paper about UNIX, and this publication stimulated many universities to ask Bell Labs for a copy of UNIX. As it happened, the PDP-11 was the computer of choice at nearly all university computer science departments, and the operating systems that came with this computer were widely regarded as dreadful, so UNIX quickly came to replace them. The version that first became the standard in universities was Version 6, and within a few years it was replaced by Version 7. By the mid 1980s, UNIX was in widespread use on minicomputers and engineering workstations from a variety of vendors.
In 1984, AT&T released the first commercial version of UNIX, System III, based on Version 7. Over a number of years this was improved and upgraded to System V. Meanwhile the University of California at Berkeley modified the original Version 6 substantially. They called their version 1BSD (First Berkeley Software Distribution). This was modified over time to 4BSD, and improvements were made such as the use of paging, file names longer than 14 characters and a new networking protocol, TCP/IP. Some computer vendors, like DEC and Sun Microsystems, based their version of UNIX on Berkeley's rather than AT&T's. There were a few attempts to standardise UNIX in the late 1980s, but only the POSIX committee had any real success, and even that was limited.
During the 1980s, most computing environments became much more heterogeneous, and customers began to ask for greater application portability and interoperability from systems and software vendors. Many customers turned to UNIX to help address those concerns, and systems vendors gradually began to offer commercial UNIX-based systems. UNIX was a portable operating system whose source could easily be licensed, and it had already established a reputation and a small but loyal customer base among R&D organisations and universities. Most vendors licensed source bases from either the University of California at Berkeley or AT&T (two completely different source bases). Licensees extensively modified the source and tightly coupled it to their own system architectures to produce as many as 100 proprietary UNIX variants. Most of these systems were (and still are) neither source nor binary compatible with one another, and most are hardware specific.
With the emergence of RISC technology and the breakup of AT&T, the UNIX systems category began to grow significantly during the 1980s. The term "open systems" was coined. Customers began demanding better portability and interoperability between the many incompatible UNIX variants. Over the years, a variety of coalitions (e.g. UNIX International) were formed to try to gain control over and consolidate the UNIX systems category, but their success was always limited. Gradually, the industry turned to standards as a way of achieving the portability and interoperability benefits that customers wanted. However, UNIX standards and standards organisations proliferated (just as vendor coalitions had), resulting in more confusion and aggravation for UNIX customers.
The UNIX systems category is primarily an application-driven systems category, not an operating systems category. Customers choose an application first (for example, a high-end CAD package), then find out which systems it runs on, and select one. The final selection involves a variety of criteria, such as price/performance, service, and support. Customers generally don't choose UNIX itself, or which UNIX variant they want. UNIX just comes with the package when they buy a system to run their chosen applications.
The UNIX category can be divided into technical and business markets: 87% of technical UNIX systems are RISC workstations purchased to run specific technical applications; 74% of business UNIX systems sold are multiuser/server/midrange systems, primarily for running line-of-business or vertical market applications.
The UNIX systems category is extremely fragmented. Only two vendors have more than a 10% share of UNIX variant license shipments (Sun and SCO); 12 of the top 15 vendors have shares of 5% or less (based on actual 1991 unit shipments; source: IDC). This fragmentation reflects the fact that most customers who end up buying UNIX are not actually choosing UNIX itself, so most UNIX variants have small and not very committed customer bases.
Operating System Architecture
Windows NT was designed with the goal of maintaining compatibility with applications written for MS-DOS, Windows for MS-DOS, OS/2, and POSIX. This was an ambitious goal, because it meant that Windows NT would have to provide the applications with the application programming interfaces (API) and the execution environments that their native operating systems would normally provide. The Windows NT developers accomplished their compatibility goal by implementing a suite of operating system environment emulators, called environment subsystems. The emulators form an intermediate layer between user applications and the underlying NT operating system core.
User applications and environment subsystems work together in a client/server relationship. Each environment subsystem acts as a server that supports the application programming interfaces of a different operating system. Each user application acts as the client of an environment subsystem because it uses the application programming interface provided by the subsystem. Client applications and environment subsystem servers communicate with each other using a message-based protocol.
At the core of the Windows NT operating system is a collection of operating system components called the NT Executive. The executive's components work together to form a highly sophisticated, general purpose operating system. They provide mechanisms for:
Interprocess communication.
Pre-emptive multitasking.
Symmetric multiprocessing.
Virtual memory management.
Device Input/Output.
Security.
Each component of the executive provides a set of functions, commonly referred to as native services or executive services. Collectively, these services form the application programming interface (API) of the NT executive.
Environment subsystems are applications that call NT executive services. Each one emulates a different operating system environment. For example, the OS/2 environment subsystem supports all of the application programming interface functions used by OS/2 character mode applications. It provides these applications with an execution environment that looks and acts like a native OS/2 system. Internally, environment subsystems call NT executive services to do most of their work. The NT executive services provide general-purpose mechanisms for doing most operating system tasks. However, the subsystems must implement any features that are unique to their operating system environments.
User applications, like environment subsystems, are run on the NT Executive. Unlike environment subsystems, user applications do not directly call executive services. Instead, they call application programming interfaces provided by the environment subsystems. The subsystems then call executive services as needed to implement their application programming interface functions.
Windows NT presents users with an interface that looks like that of Windows 3.1. This user interface is provided by Windows NT's 32-bit Windows subsystem (Win32). The Win32 subsystem has exclusive responsibility for displaying output on the system's monitor and managing user input. Architecturally, this means that the other environment subsystems must call Win32 subsystem functions to produce output on the display. It also means that the Win32 subsystem must pass user input actions to the other environment subsystems when the user interacts with their windows.
Windows NT does not maintain compatibility with device drivers written for MS-DOS or Windows for MS-DOS. Instead, it adopts a new layered device-driver architecture that provides many advantages in terms of flexibility, maintainability, and portability. Windows NT's device driver architecture requires that new drivers be written before Windows NT can be compatible with existing hardware. While writing new drivers involves a lot of development effort on the part of Microsoft and independent hardware vendors (IHV), most of the hardware devices supported by Windows for MS-DOS will be supported by new drivers shipped with the final Windows NT product.
The device driver architecture is modular in design. It allows big (monolithic) device drivers to be broken up into layers of smaller independent device drivers. A driver that provides common functionality must only be written once. Drivers in adjacent layers can then simply call the common device driver to get their work done. Adding support for new devices is easier under Windows NT than most operating systems because only the hardware-specific drivers need to be rewritten.
Windows NT's new device driver architecture provides a structure on top of which compatibility with existing installable file systems (for example, FAT and HPFS) and existing networks (for example, Novell and Banyan Vines) was relatively easy to achieve. File systems and network redirectors are implemented as layered drivers that plug easily into the new Windows NT device driver architecture.
In any Windows NT multiprocessor platform, the following conditions must hold:
All CPUs are identical, and either all have identical coprocessors or none has a coprocessor.
All CPUs share memory and have uniform access to memory.
In a symmetric platform, every CPU can access memory, take an interrupt, and access I/O control registers. In an asymmetric platform, one CPU takes all interrupts for a set of slave CPUs.
Windows NT is designed to run unchanged on uniprocessor and symmetric multiprocessor platforms.
A UNIX system can be regarded as hierarchical in nature. At the highest level is the physical hardware, consisting of the CPU or CPUs, memory and disk storage, terminals and other devices.
On the next layer is the UNIX operating system itself. The function of the operating system is to allow access to and control of the hardware and to provide an interface that other software can use to access the hardware resources within the machine, without having to have complete knowledge of what the machine contains. This interface takes the form of system calls, which allow user programs to create and manage processes, files and other resources. Programs make system calls by loading arguments into registers and then issuing trap instructions to switch from user mode to kernel mode and enter UNIX. Since there is no way to issue a trap instruction directly from C, a standard library is provided on top of the operating system, with one procedure per system call.
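As a rough illustration of the "one library procedure per system call" idea, the short C sketch below uses the open(), read() and close() wrappers from the standard library; each of these small procedures simply packages its arguments and issues the corresponding trap into the UNIX kernel. The file name is only an example.

    #include <fcntl.h>     /* open()          */
    #include <unistd.h>    /* read(), close() */
    #include <stdio.h>

    int main(void)
    {
        char buf[128];
        /* Each call below is a thin library wrapper around a kernel trap. */
        int fd = open("/etc/motd", O_RDONLY);        /* example file only */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* read() traps into the kernel */
        if (n > 0) {
            buf[n] = '\0';
            printf("first %zd bytes: %s\n", n, buf);
        }
        close(fd);                                   /* close() is another wrapper   */
        return 0;
    }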
The next layer consists of the standard utility programs, such as the shell, editors, compilers, etc., and it is these programs that a user at a terminal invokes. They use the operating system to access the hardware to perform their functions and generally are able to run on different hardware configurations without specific knowledge of them.
There are two main parts to the UNIX kernel which are more or less distinguishable. At the lowest level is the machine dependent kernel. This is a piece of code which consists of the interrupt handlers, the low-level I/O system device drivers and some of the memory management software. As with most of the Unix operating system it is mostly written in C, but since it interacts directly with the machine and processor specific hardware, it has to be rewritten from scratch whenever UNIX is ported to a new machine. This kernel uses the lowest level machine instructions for the processor which is why it must be changed for each different processor.
In contrast, the machine-independent kernel runs the same on all machine types because it is not closely reliant on any specific piece of hardware. The machine-independent code includes system call handling, process management, scheduling, pipes, signals, memory paging and memory swapping functions, the file system and the higher-level part of the I/O system. The machine-independent part of the kernel is by far the larger of the two sections, which is why UNIX can be ported to new hardware with relative ease.
UNIX does not use the DOS and Windows idea of independently loaded device drivers for each additional hardware item that is not under BIOS control. This is why the kernel has traditionally had to be recompiled whenever hardware is added or removed, so that it can be updated with the new information. This is the equivalent of adding a device driver to a configuration file in DOS or Windows and then rebooting the machine, though it is a longer process to undertake.
Memory Management
Windows NT provides a flat 32-bit address space, half of which is reserved for the OS, and half available to the process. This provides a separate 2 gigabytes of demand-paged virtual memory per process. This memory is accessible to the software developer through the usual malloc() and free() memory allocation and deallocation routines, as well as some advanced Windows NT-specific mechanisms.
For a programmer desiring greater functionality for memory control, Windows NT also provides Virtual and Heap memory management APIs.
The advantage of using the virtual memory programming interface (VirtualAlloc(), VirtualLock(), VirtualQuery(), etc.) is that the developer has much more control over whether backing store (memory committed in the paging (swap) file to handle physical memory overcommitment) is explicitly marked and removed from the available pool of free blocks. With malloc(), every call is assumed to require that the memory be available for use upon return from the call. With VirtualAlloc() and related functions, the memory is reserved but not committed until a page in the region is actually touched. By allowing the application to control the commitment policy through access, fewer system resources are used. The trade-off is that the application must also be able to handle the condition (presumably with structured exception handling) of an actual memory access forcing commitment.
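A minimal sketch of the reserve-then-commit pattern described above, assuming a Win32 build environment; the region size and the single committed page are arbitrary examples.

    #include <windows.h>

    int main(void)
    {
        SIZE_T size = 1 << 20;   /* reserve 1 MB of address space, example size */

        /* Reserve address space only: no backing store is committed yet. */
        char *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_READWRITE);
        if (base == NULL) return 1;

        /* Commit just the first page before touching it. */
        if (VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE) == NULL) return 1;
        base[0] = 42;            /* safe: this page is now committed */

        /* Release both the committed page and the reservation. */
        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }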
Heap APIs are provided to make life easier for applications whose memory use follows a stack-like discipline. Multiple heaps can be initialised, each growing and shrinking with subsequent accesses. Synchronisation of access to allocated heaps can be done either explicitly through Windows NT synchronisation objects, or by passing an appropriate parameter at the creation of a heap, in which case all access to memory in that particular heap is synchronised between threads in the process.
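For illustration, a small sketch of the heap calls mentioned above; the sizes are arbitrary, and the zero flags value at creation means the heap serialises access between threads by default, which mirrors the synchronisation point made in the text.

    #include <windows.h>

    int main(void)
    {
        /* Create a private, growable heap (64 KB initial size, no maximum). */
        HANDLE heap = HeapCreate(0, 64 * 1024, 0);
        if (heap == NULL) return 1;

        char *p = HeapAlloc(heap, HEAP_ZERO_MEMORY, 256);
        if (p != NULL) {
            p[0] = 'x';
            HeapFree(heap, 0, p);
        }

        HeapDestroy(heap);   /* frees everything still allocated from this heap */
        return 0;
    }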
Memory-mapped files are also provided in Windows NT. This provides a convenient way to access disk data as memory, with the Windows NT kernel managing paging. This memory may be shared between processes by using CreateFileMapping() followed by MapViewOfFile().
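A hedged sketch of the CreateFileMapping()/MapViewOfFile() sequence described above; the file name is an arbitrary example and error handling is kept minimal.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFile("example.dat", GENERIC_READ, FILE_SHARE_READ,
                                 NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        /* Create a read-only mapping object covering the whole file. */
        HANDLE mapping = CreateFileMapping(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (mapping == NULL) return 1;

        /* Map a view: the file's bytes now appear as ordinary memory,
           paged in by the NT kernel on demand.                         */
        const char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view != NULL) {
            printf("first byte: %c\n", view[0]);
            UnmapViewOfFile(view);
        }

        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }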
Windows NT provides thread local storage (TLS) to accommodate the needs of multithreaded applications. Each thread of a subprocess has its own stack, and may have its own memory to keep various information.
Windows NT is the first operating system to provide a consistent multithreading API across multiple platforms. A thread is a unit of execution in a process context that shares a global memory state with other threads in that context (if any). When a process is created in Windows NT, memory is allocated for it, a state is set up in the system, and a thread object is created. To start a thread in a currently executing process, the CreateThread() call is used: a function pointer is passed in through the lpStartAddress parameter, and this address may be any valid procedure address in an application.
Windows NT supports a number of different types of multiprocessing hardware. On these designs, it is possible for different processors to be running different threads of an application simultaneously. Care must be taken, when using threads in an application, to synchronise access to resources shared between threads. Fortunately, Windows NT has very rich synchronisation facilities.
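A minimal sketch of creating and waiting on a thread with the Win32 calls just mentioned; the worker routine and its argument are invented for illustration.

    #include <windows.h>
    #include <stdio.h>

    /* Hypothetical worker routine: any valid procedure address may be passed
       to CreateThread() as the thread's start address.                        */
    static DWORD WINAPI worker(LPVOID arg)
    {
        printf("worker running with argument %d\n", (int)(INT_PTR)arg);
        return 0;
    }

    int main(void)
    {
        DWORD tid;
        HANDLE thread = CreateThread(NULL, 0, worker, (LPVOID)(INT_PTR)7, 0, &tid);
        if (thread == NULL) return 1;

        /* Synchronise with the worker before exiting. */
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
        return 0;
    }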
Most UNIX developers don't use threads in their applications since support is not consistent between UNIX platforms.
Handles don't have a direct mapping from UNIX; however, they're very important to Win32 applications and deserve discussion. When kernel objects (such as threads, processes, files, semaphores, mutexes, events, pipes, mailslots, and communications devices) are created or opened using the Win32 API, a HANDLE is returned. This handle is a 32-bit quantity that is an index into a handle table specific to that process. Handles have associated ACLs, or Access Control Lists, that Windows NT uses to check against the security credentials of the process. Handles can be obtained by explicitly creating them (usually when an object is created), as the result of an open operation (e.g. OpenEvent()) on a named object in the system, inherited as the result of a CreateProcess() operation (a child process inherits an open handle from its parent process if inheritance was specified when the original handle was created and if the child process was created with the "inherit handles" flag set), or "given away" by DuplicateHandle(). It is important to note that unless one of these mechanisms is used, a handle will be meaningless in the context of a process.
For example, suppose process 1 calls CreateEvent() to return a handle that happens to have the ordinal value 0x1FFE. This event will be used to co-ordinate an operation between different processes. Process 2 must somehow get a handle to the event that process 1 created. If process 2 somehow "conjures" that the right value to use is 0x1FFE, it still won't have access to the event created by process 1, since that handle value means nothing in the context of process 2. If instead process 1 calls DuplicateHandle() with the handle of process 2 (acquired by calling OpenProcess() with the integral id of process 2), a handle that can be used by process 2 is created. This handle value can then be communicated to process 2 through some IPC mechanism.
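A rough sketch of the scenario just described, assuming the caller already knows the target process id; how that id is obtained, and the IPC channel used to pass the duplicated handle across, are left out.

    #include <windows.h>

    /* Sketch: duplicate an event handle so that another process can use it.
       pid2 is assumed to be the id of the target process (process 2).        */
    HANDLE share_event_with(DWORD pid2)
    {
        HANDLE event = CreateEvent(NULL, FALSE, FALSE, NULL);   /* local handle */
        HANDLE proc2 = OpenProcess(PROCESS_DUP_HANDLE, FALSE, pid2);
        HANDLE handle_for_proc2 = NULL;

        if (event && proc2 &&
            DuplicateHandle(GetCurrentProcess(), event,    /* source             */
                            proc2, &handle_for_proc2,      /* valid in process 2 */
                            0, FALSE, DUPLICATE_SAME_ACCESS)) {
            /* handle_for_proc2 must now be sent to process 2 over some IPC
               mechanism (pipe, mailslot, shared memory, ...).                */
        }
        if (proc2) CloseHandle(proc2);
        return handle_for_proc2;
    }

    int main(void)
    {
        /* Demonstration only: duplicating into our own process. */
        return share_event_with(GetCurrentProcessId()) ? 0 : 1;
    }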
Handles that are used for synchronisation (semaphores, mutexes, events), as well as those that may be involved in asynchronous I/O (named pipes, files, communications), may be used with WaitForSingleObject() and WaitForMultipleObjects(), which are functionally similar to the select() call in UNIX.
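As a small sketch of the select()-like usage, the fragment below waits on two event handles at once; the events are created here purely for illustration, where real code might wait on pipe or file handles.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE handles[2];
        handles[0] = CreateEvent(NULL, FALSE, FALSE, NULL);
        handles[1] = CreateEvent(NULL, FALSE, FALSE, NULL);

        SetEvent(handles[1]);   /* pretend the second object became signalled */

        /* Roughly analogous to select(): block until any one handle is ready. */
        DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        printf("object %lu signalled\n", (unsigned long)(which - WAIT_OBJECT_0));

        CloseHandle(handles[0]);
        CloseHandle(handles[1]);
        return 0;
    }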
Prior to 3BSD most UNIX systems were based on swapping. When more processes existed than could be kept in physical memory, some of them were swapped out to disk or drum storage. A swapped out process was always swapped out in its entirety and hence any current process was always either in memory or on disk as a complete unit.
All movement between memory and disk was handled by the upper level of a split level scheduler, known as the (memory) swapper. Swapping from memory to disk was initiated when the kernel ran out of free physical memory.
In order to choose a victim to evict, the swapper would first look at the processes that were being blocked by having to wait for something such as terminal input or a print job to respond. If more than one process was found, that process whose priority plus residence time was the highest was chosen as a candidate for swapping to disk. Thus a process that had consumed a large amount of CPU time recently was a good candidate, as was one that had been in memory a long time, even if it was mostly doing I/O. If no blocked process was available in memory then a ready process was chosen based on the same criteria of priority plus residence time.
Starting with 3BSD, memory paging was added to the operating system to handle the ever larger programs that were being written. Both 4BSD and System V implemented demand paging in a similar fashion. The theory of demand paging is that a process need not necessarily be entirely resident in memory in order to continue execution. All that is actually required is the user structure and the page tables. If these are swapped into memory, the process is then deemed to be sufficiently in memory and can be scheduled to execute. The pages of the text, data and stack segments are brought in dynamically, one at a time, as they are referenced, thus leaving memory free for other tasks rather than filling it with tables of data which may be referenced only once. If the user structure and page table are not in memory, the process cannot be executed until the swapper swaps them into memory from disk.
Paging is implemented partly by the main kernel and partly by a process called the page daemon. Like all daemons, the page daemon is started up periodically so that it can look around to see if there is any work for it to do. If it discovers that the number of free pages in memory is too low, it initiates action to free up more pages.
When a process is started it may cause a page fault because one of its pages is not resident in memory. When a page fault occurs, the operating system takes the first page frame free on the list of page frames, removes it from the list and reads the needed page into it. If the free page frame list is empty, the process must be suspended until the page daemon has had time to free a page frame from another process.
The page replacement algorithm is executed by the page daemon. At a set interval (commonly 250 milliseconds, but varying from system to system) it is activated to see whether the number of free page frames is at least equal to a system parameter known as lotsfree (typically set to 1/4 of memory). If there are insufficient page frames, the page daemon will start transferring pages from memory to disk until lotsfree page frames are available. Alternatively, if the page daemon discovers that more than lotsfree page frames are on the free list, it has nothing to do and sleeps until its next call by the system. If the machine has plenty of memory and few active processes, it will be inactive for most of the time.
The page daemon uses a modified version of the clock algorithm. It is a global algorithm, which means that when removing a page it does not take into account whose page is being removed. Thus the number of pages each process has assigned to it varies in time, depending both on its own requirements and on other processes' requirements. The size of the data segment may vary depending upon what has been requested, with the operating system tracking allocated and unallocated memory blocks while the memory allocation routines manage the content of the data segment.
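The following is only an illustrative sketch (not code from any particular UNIX) of the kind of global clock sweep a page daemon performs: reference bits are cleared on one pass, and pages still unreferenced when the hand comes around again are reclaimed. The frame table, sizes and the write-back step are all invented or omitted.

    #include <stdbool.h>
    #include <stddef.h>

    /* Simplified, hypothetical page-frame table for illustration only. */
    struct frame { bool referenced; bool free; };

    #define NFRAMES 1024
    static struct frame frames[NFRAMES];
    static size_t hand;                  /* the "clock hand" position */

    /* Sweep until 'want' frames have been freed, in the spirit of the
       clock algorithm described above.                                 */
    void page_daemon_sweep(size_t want)
    {
        size_t freed = 0, scanned = 0;
        /* Two full passes are enough: the first clears reference bits,
           the second reclaims whatever is still unreferenced.           */
        while (freed < want && scanned < 2 * NFRAMES) {
            struct frame *f = &frames[hand];
            hand = (hand + 1) % NFRAMES;
            scanned++;
            if (f->free) continue;
            if (f->referenced) {
                f->referenced = false;   /* give the page a second chance   */
            } else {
                f->free = true;          /* reclaim (write-back omitted)    */
                freed++;
            }
        }
    }

    int main(void)
    {
        for (size_t i = 0; i < NFRAMES; i++)
            frames[i].referenced = true; /* pretend every page was touched  */
        page_daemon_sweep(16);           /* ask the daemon to free 16 frames */
        return 0;
    }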
Process Management, Inter-process Communication and Control
The Windows NT process model differs from that of UNIX in a number of aspects, including process groups, terminal groups, setuid, memory layout, etc. For some programs, such as shells, a re-architecture of certain portions of the code is inevitable. Fortunately, most applications don't inherently rely on the specific semantics of UNIX processes, since even this differs between UNIX versions.
Quoting from the online help provided with the Windows NT SDK:
Win32 exposes processes and threads of execution within a process as objects. Functions exist to create, manipulate, and delete these objects.
A process object represents a virtual address space, a security profile, a set of threads that execute in the address space of the process, and a set of resources or objects visible to all threads executing in the process. A thread object is the agent that executes program code (and has its own stack and machine state). Each thread is associated with a process object which specifies the virtual address space mapping for the thread. Several thread objects can be associated with a single process object which enables concurrent execution of multiple threads in a single address space (possible simultaneous execution in a multiprocessor system running Windows NT). On multiprocessor systems running Windows NT, multiple threads may execute at the same time but on different processors.
In order to support the process structure of Windows NT, APIs include:
* Support for process and thread creation and manipulation.
* Support for synchronisation between threads within a process and synchronisation objects that can be shared by multiple processes to allow synchronisation between threads whose processes have access to the synchronisation objects.
* A uniform sharing mechanism that provides security features that limit/control the sharing of objects between processes.
Windows NT provides the ability to create new processes (CreateProcess) and threads (CreateThread). Rather than "inherit" everything always, as is done in UNIX with the fork call, CreateProcess accepts explicit arguments that control aspects of process creation such as file handle inheritance, security attributes, debugging of the child process, environment, default directory, etc. It is through the explicit creation of a thread or process with appropriate security descriptors that credentials are granted to the created entity.
Win32 does not provide the capability to "clone" a running process (and its associated in-memory contents); this is not such a hardship, since most UNIX code forks and then immediately calls exec. Applications that depend on the cloning semantics of fork may have to be rearchitected a bit to use threads (especially where large amounts of data sharing between parent and child occur), or in some cases to use IPC mechanisms to copy the relevant data between two distinct processes after the CreateProcess call is executed.
If a child process is to inherit the handles of the creator process, the bInherit flag of the CreateProcess call can be set. In this case, the child's handle table is filled in with handles valid in the context of the child process. If this flag is not specified, handles must be given away by using the DuplicateHandle call.
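For comparison, here is a minimal sketch of launching a child process on each side; the programs being launched ("notepad.exe" and "ls") are only examples. Win32 uses a single explicit CreateProcess() call, where UNIX code typically forks and then immediately calls exec.

    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>

    /* Win32: one explicit call creates the child; nothing is inherited
       unless asked for (the inherit-handles argument is FALSE here).    */
    int launch(void)
    {
        STARTUPINFO si;
        PROCESS_INFORMATION pi;
        char cmd[] = "notepad.exe";              /* example program only */

        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);
        if (!CreateProcess(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
            return 1;
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return 0;
    }

    #else
    #include <unistd.h>
    #include <sys/wait.h>

    /* UNIX: clone the current process with fork(), then immediately
       replace the child's image with exec().                           */
    int launch(void)
    {
        pid_t pid = fork();
        if (pid < 0) return 1;
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* example program only */
            _exit(127);                              /* exec failed           */
        }
        waitpid(pid, NULL, 0);
        return 0;
    }
    #endif

    int main(void) { return launch(); }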
Windows NT was not designed to support "dumb terminals" as a primary emphasis, so the concept of terminal process groups and associated semantics are not implemented. Applications making assumptions about groups of applications (for example, killing the parent process kills all child processes), will have to investigate the GenerateConsoleCtrlEvent API, which provides a mechanism to signal groups of applications controlled by a parent process using the CREATE_NEW_PROCESS_GROUP flag in the CreateProcess API.
Programs making assumptions about the layout of processes in memory (GNU EMACS, for example, which executes, then "dumps" the image of variables in memory to disk, which is subsequently "overlayed" on start-up to reduce initialisation time), especially the relationship of code segments to data and stack, will likely require modification. Generally, practices such as these are used to get around some operating system limitation or restriction. At this level, a rethinking of the structure of that part of the application is generally in order, to examine supported alternatives to the "hack" that was used (perhaps memory mapped files for particular cases like this). For those who must deal with an application's pages on this level, there is a mechanism by which a process may be opened (OpenProcess), and individual memory pages, threads, and stacks examined or modified.
There is no direct equivalent of the UNIX setuid. There are, however, a number of Windows NT alternatives to use depending on the task to be accomplished. If the task at hand is a daemon that runs with a fixed user context, it would be best to use a Windows NT service (again, the online help is invaluable for this information). A Windows NT service is equivalent to a "daemon" running with fixed user credentials, with the added benefit of being administrable locally or remotely through standard Windows NT administration facilities. For instances when a process must "impersonate" a particular user, it's suggested that a server program be written that communicates thr
f:\12000 essays\sciences (985)\Computer\Windows NT.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What was once a small and simple collection of computers run by the Defence Department is now a massive worldwide network of computers that we call the 'Internet'. The word "Internet" literally means "network of networks." In itself, the Internet is composed of thousands of smaller local networks scattered throughout the globe. It connects roughly 15 million users in more than 50 countries every day. The World Wide Web (WWW) is the most heavily used part of the Internet. The Web refers to a body of information, while the Internet refers to the physical side of the global network, a vast collection of cables and computers.
The Internet is a 'packet-switching' computer network. When a person sends a message over the Internet, it is broken into tiny pieces, called 'packets'. These packets travel over many different routes between the computer the message is sent from and the computer it is sent to. Phone lines, either fibre-optic or copper-wire ones, carry most of the data packets. Internet computers along the path switch each packet toward its destination, but no two packets need to follow the same path. The Internet is designed so that packets always take the best available route at the time they are travelling. 'Routers', which are boxes of circuit boards and microchips, do the essential task of directing and redirecting packets along the network. Much smaller boxes of circuit boards and microchips called 'modems' do the task of interpreting between the phone lines and the computer. The packets are all switched toward their destination and reassembled by the destination computer. Today's Internet contains enough redundant and interconnected circuits to simply reroute the data if any portion of the network goes down or gets overloaded.
The packet-switching nature of the Internet gives it sufficient speed and flexibility to support real-time communication, such as sending messages to other people in a chat environment (IRC). Every packet is written in a particular protocol language, called TCP/IP, which stands for Transmission Control Protocol/Internet Protocol. This protocol is the common language of the Internet, and it supports two major programs, the File Transfer Protocol (FTP) and Telnet. FTP lets users transfer files from one Internet computer to another. Telnet lets a person log into a remote computer. These tools have been combined in complex ways to create Internet tools such as Gopher, the World Wide Web and IRC.
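As a rough illustration of what "speaking TCP/IP" means in practice, the sketch below opens a plain TCP connection from a UNIX-style C program; the address and port are placeholders, and the higher-level protocols mentioned above (FTP, Telnet, HTTP) all run on top of a connection like this one.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Placeholder address and port: 93.184.216.34 and port 80 are examples. */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(80);
        inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* a TCP endpoint */
        if (fd < 0) return 1;

        /* TCP takes care of breaking data into packets, routing them and
           reassembling them at the far end.                               */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("connected\n");

        close(fd);
        return 0;
    }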
Some collections of phone lines and routers are larger and more powerful than others. Sprint and MCI have each built collections of phone lines and routers that crisscross the United States and can carry large amounts of data. There are six companies in the US with large, nationwide networks of high-speed phone lines and routers: MCI, Sprint, AGIS, UUNet/AlterNet, ANS, and PSI. Together they make up what is often called the 'Internet Backbone'.
Data packets travelling on a 'backbone' network stay within that network for much of their journey. The reason is that there is only a handful of places where the backbone networks meet. For example, a packet travelling on a Sprint circuit to a Sprint router can only transfer to an MCI circuit at certain places.(1) This is much like the way certain city streets run parallel to each other for many miles before reaching an intersection. These intersections, called 'Network Access Points' (NAPs), are crucial to the transmission of data on the Internet.
A Web server is a program running on a computer whose only purpose is to serve documents to other computers when asked. A Web client is a program that interfaces (talks) with the user and requests documents from a server as the user asks for them. The server only operates when a request for a document is made. The process is very simple. For example: running a Web browser, the user selects a piece of hypertext connected to another document, "Planes."
The Web client connects to a computer specified by a network address somewhere on the Internet and asks that computer's Web server for "Planes." The server responds by sending the text and any other media within it (pictures, sounds, movies) to the user's screen. The World Wide Web handles thousands of these transactions per hour throughout the world, creating a web of information.
The language that Web clients and servers use to talk with each other is called the 'Hypertext Transfer Protocol' (HTTP). All Web clients and servers must be able to speak HTTP to send and receive hypermedia documents.
The standard language the Web uses for creating and recognizing hypermedia documents is the 'Hypertext Markup Language' (HTML). Another formatting language used for Web documents is the 'Standard Generalized Markup Language' (SGML). HTML is widely liked because of its ease of use. Web documents are usually written in HTML and are usually named with the suffix '.html'. HTML documents are nothing more than standard 7-bit ASCII files with formatting codes that contain information about the layout (text styles, document titles, paragraphs, lists and hyperlinks). Hyperlinks are links in the document that lead to other documents or another Web site. HTML uses 'Uniform Resource Locators' (URLs) to represent hypermedia links and links to network services within documents. The first part of the URL (before the two slashes) specifies the method of access. The second is typically the address of the computer where the data or service is found. Further parts may specify the name of a file, the port to connect to, or the text to search for in a database.
Most Web browsers allow the user to specify a URL and connect to that document or service. When selecting hypertext in an HTML document, the user is actually sending a request to open a URL. In this way, they can make hyperlinks not only to other texts and media, but also to other network services.
The powerful, sophisticated access that the Internet provides is truly amazing. It is spreading faster than cellular phones and fax machines did. The number of people connecting to the Internet is growing at a rapid rate, along with the number of "host" machines with direct connections to TCP/IP. The main reason the Internet is flourishing so rapidly is its freedom: no one actually owns the Internet, and there are few rules for users. As the Internet grows, many new activities are joining in, like 'Internet Radio', which will allow real-time call-in shows and music to be sent over the Internet. As the Internet expands into another decade, it will become even more interesting and complex.
FOOTNOTES:
1.John Quarterman, The Matrix: Computer Networks and Conferencing Systems Worldwide (Bedford, MA: Digital Press, 1990), 42.
f:\12000 essays\sciences (985)\Computer\Windows revealed.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
To view Internet.txt on screen in Notepad, maximize the Notepad window.
To print Internet.txt, open it in Notepad, or another word processor,
and then use the Print command on the File menu.
--------
CONTENTS
--------
Why Is the Internet So Popular?
How Can I Connect to the Internet?
Notes
Online Services--Where You Can Get MSIE.exe
===============================
Why is the Internet So Popular?
===============================
The Internet is a rich source of online information, covering almost
any topic you can imagine. When you are connected to the Internet, you can:
- Exchange messages with people all over the world.
- Get the latest news, weather, sports, and entertainment information.
- Download software, including games, pictures, and programs.
- Join discussion groups, such as bulletin boards and newsgroups.
==================================
How Can I Connect to the Internet?
==================================
-- If you do not currently have an Internet account, sign up for The
Microsoft Network (MSN) by double-clicking the MSN icon on your desktop
and then following the instructions. The Microsoft Network includes access
to the Internet as part of its service.
-- If you have an account with an online service (see list below),
or use bulletin board services (BBS) regularly, do one of the following:
a) Sign up for The Microsoft Network by double-clicking the
MSN icon on your desktop.
b) Download The Microsoft Internet Explorer files (MSIE.exe) from
your online service (see full list of locations below).
-- If you already have an account with an Internet access provider,
you can download Microsoft's browsing tool, Internet Explorer, from
http://www.microsoft.com. Besides being easy to use, Internet Explorer
enables you to create desktop shortcuts to your favorite Web sites.
Try it out!
=====
Notes
=====
If you do not see the MSN icon on your desktop, you can install it by
opening Control Panel and then double-clicking the Add/Remove Programs icon.
Then, click the Windows Setup tab.
If you have Microsoft Plus!, you already have Internet Explorer.
===========================================
Online Services--Where You Can Get MSIE.exe
===========================================
On the Internet
ftp://ftp.microsoft.com/PerOpSys/Win_News/
On the World Wide Web
http://www.microsoft.com/
On The Microsoft Network
From Main Menu: Categories\Computers and Software\Software\
Microsoft\Windows 95
On CompuServe
type GO WINNEWS
On Prodigy
JUMP WINNEWS
On America Online
Use keyword WINNEWS
On GEnie
MOVE TO PAGE 95
f:\12000 essays\sciences (985)\Computer\Wire Pirates.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Wire Pirates
Someday the Internet may become an information superhighway, but right now it is more like a 19th-century railroad that
passes through the badlands of the Old West. As waves of new settlers flock to cyberspace in search of free information
or commercial opportunity, they make easy marks for sharpers who play a keyboard as deftly as Billy the Kid ever drew a
six-gun.
It is difficult even for those who ply it every day to appreciate how much the Internet depends on collegial trust and mutual
forbearance. The 30,000 interconnected computer networks and 2.5 million or more attached computers that make up the
system swap gigabytes of information based on nothing more than a digital handshake with a stranger.
Electronic impersonators can commit slander or solicit criminal acts in someone else's name; they can even masquerade as
a trusted colleague to convince someone to reveal sensitive personal or business information.
"It´s like the Wild West", says Donn B. Parker of SRI: "No laws, rapid growth and enterprise - it´s shoot first or be killed."
To understand how the Internet, on which so many base their hopes for education, profit and international competitiveness,
came to this pass, it can be instructive to look at the security record of other parts of the international communications
infrastructure.
The first, biggest error that designers seem to repeat is adoption of the "security through obscurity" strategy. Time and
again, attempts to keep a system safe by keeping its vulnerabilities secret have failed.
Consider, for example, the running war between AT&T and the phone phreaks. When hostilities began in the 1960s,
phreaks could manipulate with relative ease the long-distance network in order to make unpaid telephone calls by playing
certain tones into the receiver. One phreak, John Draper, was known as "Captain Crunch" for his discovery that a modified
cereal-box whistle could make the 2,600-hertz tone required to unlock a trunk line.
The next generation of security was the telephone credit card. When the cards were first introduced, a credit-card number
consisted of a sequence of digits (usually area code, number and billing office code) followed by a "check digit" that
depended on the other digits. Operators could easily perform the math to determine whether a particular credit-card
number was valid. But phreaks could just as easily figure out how to generate the proper check digit for any given telephone
number.
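To make the idea of a check digit concrete, here is a small sketch using the common mod-10 (Luhn-style) scheme; this is an invented example, not AT&T's actual card algorithm, but it shows why a digit that any operator can verify is also a digit any phreak can generate.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical mod-10 check digit over a string of digits.
       Not the real AT&T scheme; illustration only.              */
    static int check_digit(const char *digits)
    {
        int sum = 0, pos = 0;
        for (int i = (int)strlen(digits) - 1; i >= 0; i--, pos++) {
            int d = digits[i] - '0';
            if (pos % 2 == 0) {          /* double every second digit */
                d *= 2;
                if (d > 9) d -= 9;
            }
            sum += d;
        }
        return (10 - (sum % 10)) % 10;
    }

    int main(void)
    {
        const char *card = "2125551234";          /* example number only */
        printf("check digit for %s is %d\n", card, check_digit(card));
        return 0;
    }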
So in 1982 AT&T finally put in place a more robust method. The corporation assigned each card four check digits (the
"PIN", or personal identification number) that could not be easily be computed from the other 10. A nationwide on-line
database made the numbers available to operators so that they could determine whether a card was valid.
Since then, so-called "shoulder surfers" have haunted train stations, hotel lobbies, airline terminals and other likely places for the
theft of telephone credit-card numbers. When they see a victim punching in a credit card number, they transmit it to
confederates for widespread use. Kluepfel, the inventor of this system, noted ruefully that his own card was compromised
one day in 1993 and used to originate more than 600 international calls in the two minutes before network-security
specialists detected and canceled it.
The U.S. Secret Service estimates that stolen calling cards cost long-distance carriers and their customers on the order of
$2.5 billion a year.
During the same years that telephone companies were fighting the phone phreaks, computer scientists were laying the
foundations of the Internet. The very nature of Internet transmissions is based on a collegial attitude. Data packets are
forwarded along network links from one computer to another until they reach their destination. A packet may take a dozen
hops or more, and any of the intermediary machines can read its contents. Only a gentleman's agreement assures the
sender that the recipient and no one else will read the message.
As the Internet grew, however, the character of its population began to change, and many of the newcomers had little idea of
the complex social contract. Since then, the Internet's vulnerabilities have only gotten worse. Anyone who can scrounge up
a computer, a modem and $20 a month in connection fees can have a direct link to the Internet and be subject to break-ins
- or launch attacks on others.
The internal network of a high-technology company may look much like the young Internet - dozens or even hundreds of
users, all sharing information freely, making use of data stored on a few file servers, not even caring which workstation they
use to access their files. As long as such an idyllic little pocket of cyberspace remains isolated, carefree security systems
may be defensible. System administrators can even set up their network file system to export widely used file directories to
"world" - allowing everyone to read them - because after all, the world ends at their corporate boundaries.
It does not take much imagination to see what can happen when such a trusting environment opens its digital doors to the
Internet. Suddenly, "world" really means the entire globe, and "any computer on the network" means every computer on
any network. Files meant to be accessible to colleagues down the hall or in another department can now be reached from
Finland or Fiji. What was once a private line is now a highway open to as much traffic as it can bear.
If the Internet, storehouse of wonders, is also a no-computer's land of invisible perils, how should newcomers to
cyberspace protect themselves? Security experts agree that the first layer of defense is educating users and system
administrators to avoid particularly foolish mistakes, such as using no passwords at all.
The next level of defense is the so-called fire wall, a computer that protects an internal network from intrusion. To build a fire
wall you need two dedicated computers: one connected to the Internet and the other connected to the corporation's
network. The external machine examines all incoming traffic and forwards only the "safe" packets to its internal
counterpart. The internal gateway, meanwhile, accepts incoming traffic only from the external one, so that if unauthorized
packets do somehow find their way to it, they cannot pass.
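As a rough illustration of the kind of screening the external machine might apply, here is a minimal packet-filter sketch in Python. The rule set, address range and port numbers are assumptions made for the example, not a description of any particular fire wall product; a real gateway inspects far more than this.

# Minimal sketch of an external fire-wall gateway's screening logic.
# The addresses and allowed ports below are hypothetical.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("10.0.0.0/8")      # assumed corporate address space
ALLOWED_PORTS = {25, 80}                      # e.g. mail and web traffic only

@dataclass
class Packet:
    src: str
    dst: str
    dst_port: int

def external_gateway_accepts(pkt: Packet) -> bool:
    """Forward only traffic bound for permitted services on internal hosts."""
    if ip_address(pkt.dst) not in INTERNAL_NET:
        return False                          # not addressed to us; drop it
    if ip_address(pkt.src) in INTERNAL_NET:
        return False                          # outsiders must not spoof inside addresses
    return pkt.dst_port in ALLOWED_PORTS

print(external_gateway_accepts(Packet("198.51.100.7", "10.1.2.3", 80)))    # True
print(external_gateway_accepts(Packet("198.51.100.7", "10.1.2.3", 6000)))  # False

Only packets that pass this test ever reach the internal gateway, which in turn trusts nothing but its external counterpart.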
Others foresee an Internet made up mostly of private enclaves behind fire walls. A government spokesman
notes, "There are those who say that fire walls are evil, that they are balkanizing the Internet, but brotherly love falls on its
face when millions of dollars are involved."
In the meantime, the network grows, and people and businesses ent
f:\12000 essays\sciences (985)\Computer\WorkStudy Internship.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HuxFinn
CIS497 Internship Course Project
HELPDESK INTERNSHIP SUMMARY
RESPONSIBILITIES :
The Computer User's Support Services (CUSS) has several divisions: Office Automation, Networking, System Programmers, Operations, Hardware Services, Helpdesk Services, Computer Laboratories, Audio/Visual, and Switchboard. CUSS is responsible for the maintenance and development of the Purdue University network infrastructure. As a division of CUSS, Helpdesk channels incoming requests for computer service from Purdue faculty and staff.
The Helpdesk Technician constantly monitors campus mail and voice mail and answers the phone. Any and all questions or problems must be responded to with promptness and courtesy. Ideally, Helpdesk resolves the problem over the phone. When this is not possible, the technician goes on site armed with appropriate software and a toolkit to assist the user with basic repair. Any questions or problems requiring a higher level of expertise or authority are relayed to specialists within one of the other divisions, and a trouble ticket is created.
The Helpdesk maintains careful records of all correspondence with each customer. Each call is logged in 'Magic', a specialized database application. As problems are resolved, the solutions are recorded in an ever growing library of Helpdesk documentation. The objective in creating 'Tip Sheets' is to prevent the rework involved in frequently recurring problems. They facilitate prompt and efficient customer service. An extensive collection of reference manuals for all campus software is also maintained.
PROJECTS :
Nupop is a DOS-based, e-mail utility that is widely used among faculty and staff. The original developers of this software no longer support it. I took the initiative in collecting as much information as possible about Nupop so that Helpdesk could fill the gap and respond to problems.
When Banner was made available for Windows, a rash of users experienced difficulty. I responded by assisting in identifying the problem and had the satisfaction of participating in its resolution.
CUSS and ISCP sponsored the "Taste of Technology" open house this semester.
I was proud to represent Helpdesk during the activities.
Helpdesk itself is a team project. No one person can possibly answer all the questions or solve all the problems. We depend on one another as an information resource. Job activities must be coordinated to best provide quality service to our customers. The spirit of cooperation is essential.
ACCOMPLISHMENTS :
Majoring in ISCP lays a good foundation for the skills required in Helpdesk. Helpdesk builds on these skills. A typical day includes jammed printers, login problems, stuck keyboards, obscure error messages, forgotten passwords, virus outbreaks, equipment moves, burnt-out monitors, smeared printouts, and memory shortages. Then there are the more esoteric software questions: "What happened to my Word macros? How do I use Reachout? How can I print an attached document in mail? How can I be included in Distribution E? What happened to my Toolbar? How do I unformat a disk?"
As you can guess, an effective Helpdesk Tech needs a goodly amount of versatility and resourcefulness. As representatives of CUSS and the University, techs must remain patient and agreeable. Our callers are frequently frustrated and may be irritable. The phones often ring continually, and two or three conversations may be conducted simultaneously. A good tech learns to take interruptions in stride. On Helpdesk you develop not just a repertoire of hardware and software knowledge; you develop people skills.
AVERAGE WORK HOURS :
Throughout the majority of the Spring 96 semester, I worked full-time at the Helpdesk, a 40-hour week. The Helpdesk opens for business at 7:30 AM and closes at 4:30 PM. After closing, the phone lines are forwarded to voice mail, which is available around the clock: 24 hours a day, 365 days a year.
HARDWARE AND SOFTWARE :
Every effort is made to keep the Helpdesk on a par with current campus technology. The Helpdesk is supplied with several PCs, a DEC terminal, and a Macintosh. The technician must also become familiar with every model of printer on campus: Panasonic, Epson, Toshiba, and Hewlett-Packard. There is also exposure to printer netports, scanners, LAN cards, CD-ROMs, and the network infrastructure.
The software includes MS Office, Excel, Powerpoint, Access, Word, Word for DOS, Lotus, FoxPro, Netscape, PC Slots, McAfee, WordPerfect, and CCMail. As the faculty uses Nupop, Banner, Labres, Reachout and SPSS, these must also be understood. Helpdesk is also experimenting with Net Remote, an application that enables remote control of any computer on the network. Windows 95 is in the introductory stages; its release on campus is under evaluation by a committee. Netscape Version 2.01 is also being evaluated for presentation on the network.
DEDICATED TRAINING
CUSS keeps a library of professionally prepared videos for the new Helpdesk worker to view. These videos discuss the responsibilities and skills of the Support Service professional. Additionally, technicians are encouraged to attend any computer-related seminars or lectures available on campus. The Helpdesk is a fast-paced environment. For this reason, the majority of training is not formalized, but is acquired on the job. Training is non-stop and extremely wide-ranging. If there is a lull in the course of the day, we take the opportunity to work on documentation or brush up our knowledge by reading one of the software reference manuals. The Internet has also proven an invaluable resource.
f:\12000 essays\sciences (985)\Computer\X Hacking54.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HACKING
Contents
~~~~~~~~
This file will be divided into four parts:
Part 1: What is Hacking, A Hacker's Code of Ethics, Basic Hacking Safety
Part 2: Packet Switching Networks: Telenet- How it Works, How to Use it,
Outdials, Network Servers, Private PADs
Part 3: Identifying a Computer, How to Hack In, Operating System
Defaults
Part 4: Conclusion- Final Thoughts, Books to Read, Boards to Call,
Acknowledgements
Part One: The Basics
~~~~~~~~~~~~~~~~~~~~
As long as there have been computers, there have been hackers. In the 50's
at the Massachusetts Institute of Technology (MIT), students devoted much time
and energy to ingenious exploration of the computers. Rules and the law were
disregarded in their pursuit of the 'hack'. Just as they were enthralled with
their pursuit of information, so are we. The thrill of the hack is not in
breaking the law, it's in the pursuit and capture of knowledge.
To this end, let me contribute my suggestions for guidelines to follow to
ensure that not only you stay out of trouble, but you pursue your craft without
damaging the computers you hack into or the companies who own them.
I. Do not intentionally damage *any* system.
II. Do not alter any system files other than ones needed to ensure your
escape from detection and your future access (Trojan Horses, Altering
Logs, and the like are all necessary to your survival for as long as
possible.)
III. Do not leave your (or anyone else's) real name, real handle, or real
phone number on any system that you access illegally. They *can* and
will track you down from your handle!
IV. Be careful who you share information with. Feds are getting trickier.
Generally, if you don't know their voice phone number, name, and
occupation or haven't spoken with them voice on non-info trading
conversations, be wary.
V. Do not leave your real phone number to anyone you don't know. This
includes logging on boards, no matter how k-rad they seem. If you
don't know the sysop, leave a note telling some trustworthy people
that will validate you.
VI. Do not hack government computers. Yes, there are government systems
that are safe to hack, but they are few and far between. And the
government has infinitely more time and resources to track you down than
a company who has to make a profit and justify expenses.
VII. Don't use codes unless there is *NO* way around it (you don't have a
local telenet or tymnet outdial and can't connect to anything 800...)
You use codes long enough, you will get caught. Period.
VIII. Don't be afraid to be paranoid. Remember, you *are* breaking the law.
It doesn't hurt to store everything encrypted on your hard disk, or
keep your notes buried in the backyard or in the trunk of your car.
You may feel a little funny, but you'll feel a lot funnier when you
meet Bruno, your transvestite cellmate who axed his family to
death.
IX. Watch what you post on boards. Most of the really great hackers in the
country post *nothing* about the system they're currently working
except in the broadest sense (I'm working on a UNIX, or a COSMOS, or
something generic. Not "I'm hacking into General Electric's Voice Mail
System" or something inane and revealing like that.)
X. Don't be afraid to ask questions. That's what more experienced hackers
are for. Don't expect *everything* you ask to be answered, though.
There are some things (LMOS, for instance) that a beginning hacker
shouldn't mess with. You'll either get caught, or screw it up for
others, or both.
XI. Finally, you have to actually hack. You can hang out on boards all you
want, and you can read all the text files in the world, but until you
actually start doing it, you'll never know what it's all about. There's
no thrill quite the same as getting into your first system (well, ok,
I can think of a couple of bigger thrills, but you get the picture.)
One of the safest places to start your hacking career is on a computer
system belonging to a college. University computers have notoriously lax
security, and are more used to hackers, as every college computer depart-
ment has one or two, so are less likely to press charges if you should
be detected. But the odds of them detecting you and having the personnel to
commit to tracking you down are slim as long as you aren't destructive.
If you are already a college student, this is ideal, as you can legally
explore your computer system to your heart's desire, then go out and look
for similar systems that you can penetrate with confidence, as you're already
familiar with them.
So if you just want to get your feet wet, call your local college. Many of
them will provide accounts for local residents at a nominal (under $20) charge.
Finally, if you get caught, stay quiet until you get a lawyer. Don't vol-
unteer any information, no matter what kind of 'deals' they offer you.
Nothing is binding unless you make the deal through your lawyer, so you might
as well shut up and wait.
Part Two: Networks
~~~~~~~~~~~~~~~~~~
The best place to begin hacking (other than a college) is on one of the
bigger networks such as Telenet. Why? First, there is a wide variety of
computers to choose from, from small Micro-Vaxen to huge Crays. Second, the
networks are fairly well documented. It's easier to find someone who can help
you with a problem off of Telenet than it is to find assistance concerning your
local college computer or high school machine. Third, the networks are safer.
Because of the enormous number of calls that are fielded every day by the big
networks, it is not financially practical to keep track of where every call and
connection are made from. It is also very easy to disguise your location using
the network, which makes your hobby much more secure.
Telenet has more computers hooked to it than any other system in the world
once you consider that from Telenet you have access to Tymnet, ItaPAC, JANET,
DATAPAC, SBDN, PandaNet, THEnet, and a whole host of other networks, all of
which you can connect to from your terminal.
The first step that you need to take is to identify your local dialup port.
This is done by dialing 1-800-424-9494 (1200 7E1) and connecting. It will
spout some garbage at you and then you'll get a prompt saying 'TERMINAL='.
This is your terminal type. If you have vt100 emulation, type it in now. Or
just hit return and it will default to dumb terminal mode.
You'll now get a prompt that looks like a @. From here, type @c mail
and then it will ask for a Username. Enter 'phones' for the username. When it
asks for a password, enter 'phones' again. From this point, it is menu
driven. Use this to locate your local dialup, and call it back locally. If
you don't have a local dialup, then use whatever means you wish to connect to
one long distance (more on this later.)
When you call your local dialup, you will once again go through the
TERMINAL= stuff, and once again you'll be presented with a @. This prompt lets
you know you are connected to a Telenet PAD. PAD stands for either Packet
Assembler/Disassembler (if you talk to an engineer), or Public Access Device
(if you talk to Telenet's marketing people.) The first description is more
correct.
Telenet works by taking the data you enter in on the PAD you dialed into,
bundling it into a 128 byte chunk (normally... this can be changed), and then
transmitting it at speeds ranging from 9600 to 19,200 baud to another PAD, which
then takes the data and hands it down to whatever computer or system it's
connected to. Basically, the PAD allows two computers that have different baud
rates or communication protocols to communicate with each other over a long
distance. Sometimes you'll notice a time lag in the remote machine's response.
This is called PAD Delay, and is to be expected when you're sending data
through several different links.
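To make the bundling concrete, here is a toy Python sketch of what a PAD does with a terminal's byte stream: it collects the data into chunks of up to 128 bytes and stamps each chunk with the calling PAD's address (the same address that, as described later in this file, travels in the header of every packet). The header layout here is invented for illustration and is not the real X.25 frame format.

# Toy illustration of PAD behaviour: split a byte stream into 128-byte
# chunks and stamp each chunk with the calling PAD's address.
# The "header" format is made up for illustration, not real X.25 framing.

def pad_packetize(data: bytes, calling_pad: str, chunk_size: int = 128):
    """Yield (header, chunk) pairs the way a PAD bundles terminal traffic."""
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        header = f"FROM={calling_pad} LEN={len(chunk)}".encode()
        yield header, chunk

stream = b"A" * 300                       # 300 bytes of terminal input
for header, chunk in pad_packetize(stream, calling_pad="21244A"):
    print(header, len(chunk))             # three packets: 128, 128, 44 bytes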
What do you do with this PAD? You use it to connect to remote computer
systems by typing 'C' for connect and then the Network User Address (NUA) of
the system you want to go to.
An NUA takes the form of 031103130002520
                         \___/\_/\_____/
                           |   |    |
                           |   |    |______ network address
                           |   |___________ area prefix
                           |_______________ DNIC
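A small Python helper makes the field boundaries explicit. The widths follow the example above and the later usage in this file (a 5-digit DNIC field including its leading zero, a 3-digit area prefix, a 7-digit network address); treat the split as illustrative, since addresses vary in practice.

# Splits an NUA the way the diagram above does; field widths follow the
# worked example 031103130002520 and are illustrative only.

def split_nua(nua: str):
    return {
        "dnic": nua[:5],            # e.g. 03110 = Telenet USA in the table below
        "area_prefix": nua[5:8],    # often mirrors a phone area code
        "address": nua[8:],         # the host's address within that prefix
    }

print(split_nua("031103130002520"))
# {'dnic': '03110', 'area_prefix': '313', 'address': '0002520'}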
This is a summary of DNIC's (taken from Blade Runner's file on ItaPAC)
according to their country and network name.
DNIC Network Name Country DNIC Network Name Country
_______________________________________________________________________________
|
02041 Datanet 1 Netherlands | 03110 Telenet USA
02062 DCS Belgium | 03340 Telepac Mexico
02080 Transpac France | 03400 UDTS-Curacau Curacau
02284 Telepac Switzerland | 04251 Isranet Israel
02322 Datex-P Austria | 04401 DDX-P Japan
02329 Radaus Austria | 04408 Venus-P Japan
02342 PSS UK | 04501 Dacom-Net South Korea
02382 Datapak Denmark | 04542 Intelpak Singapore
02402 Datapak Sweden | 05052 Austpac Australia
02405 Telepak Sweden | 05053 Midas Australia
02442 Finpak Finland | 05252 Telepac Hong Kong
02624 Datex-P West Germany | 05301 Pacnet New Zealand
02704 Luxpac Luxembourg | 06550 Saponet South Africa
02724 Eirpak Ireland | 07240 Interdata Brazil
03020 Datapac Canada | 07241 Renpac Brazil
03028 Infogram Canada | 09000 Dialnet USA
03103 ITT/UDTS USA | 07421 Dompac French Guiana
03106 Tymnet USA |
There are two ways to find interesting addresses to connect to. The first
and easiest way is to obtain a copy of the LOD/H Telenet Directory from the
LOD/H Technical Journal #4 or 2600 Magazine. Jester Sluggo also put out a good
list of non-US addresses in Phrack Inc. Newsletter Issue 21. These files will
tell you the NUA, whether it will accept collect calls or not, what type of
computer system it is (if known) and who it belongs to (also if known.)
The second method of locating interesting addresses is to scan for them
manually. On Telenet, you do not have to enter the 03110 DNIC to connect to a
Telenet host. So if you saw that 031104120006140 had a VAX on it you wanted to
look at, you could type @c 412 614 (0's can be ignored most of the time.)
If this node allows collect billed connections, it will say 412 614
CONNECTED and then you'll possibly get an identifying header or just a
Username: prompt. If it doesn't allow collect connections, it will give you a
message such as 412 614 REFUSED COLLECT CONNECTION with some error codes out to
the right, and return you to the @ prompt.
There are two primary ways to get around the REFUSED COLLECT message. The
first is to use a Network User Id (NUI) to connect. An NUI is a username/pw
combination that acts like a charge account on Telenet. To connect to node
412 614 with NUI junk4248, password 525332, I'd type the following:
@c 412 614,junk4248,525332 <---- the 525332 will *not* be echoed to the
screen. The problem with NUI's is that they're hard to come by unless you're
a good social engineer with a thorough knowledge of Telenet (in which case
you probably aren't reading this section), or you have someone who can
provide you with them.
The second way to connect is to use a private PAD, either through an X.25
PAD or through something like Netlink off of a Prime computer (more on these
two below.)
The prefix in a Telenet NUA oftentimes (not always) refers to the phone Area
Code that the computer is located in (i.e. 713 xxx would be a computer in
Houston, Texas.) If there's a particular area you're interested in, (say,
New York City 914), you could begin by typing @c 914 001 . If it connects,
you make a note of it and go on to 914 002. You do this until you've found
some interesting systems to play with.
Not all systems are on a simple xxx yyy address. Some go out to four or
five digits (914 2354), and some have decimal or numeric extensions
(422 121A = 422 121.01). You have to play with them, and you never know what
you're going to find. To fully scan out a prefix would take ten million
attempts per prefix. For example, if I want to scan 512 completely, I'd have
to start with 512 00000.00 and go through 512 00000.99, then increment the
address by 1 and try 512 00001.00 through 512 00001.99. A lot of scanning.
There are plenty of neat computers to play with in a 3-digit scan, however,
so don't go berserk with the extensions.
Sometimes you'll attempt to connect and it will just be sitting there after
one or two minutes. In this case, you want to abort the connect attempt by
sending a hard break (this varies with different term programs, on Procomm,
it's ALT-B), and then when you get the @ prompt back, type 'D' for disconnect.
If you connect to a computer and wish to disconnect, you can type @
and it should say TELENET and then give you the @ prompt. From there,
type D to disconnect or CONT to re-connect and continue your session
uninterrupted.
Outdials, Network Servers, and PADs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In addition to computers, an NUA may connect you to several other things.
One of the most useful is the outdial. An outdial is nothing more than a modem
you can get to over telenet- similar to the PC Pursuit concept, except that
these don't have passwords on them most of the time.
When you connect, you will get a message like 'Hayes 1200 baud outdial,
Detroit, MI', or 'VEN-TEL 212 Modem', or possibly 'Session 1234 established
on Modem 5588'. The best way to figure out the commands on these is to
type ? or H or HELP- this will get you all the information that you need to
use one.
Safety tip here- when you are hacking *any* system through a phone dialup,
always use an outdial or a diverter, especially if it is a local phone number
to you. More people get popped hacking on local computers than you can
imagine; intra-LATA calls are the easiest things in the world to trace
inexpensively.
Another nice trick you can do with an outdial is use the redial or macro
function that many of them have. First thing you do when you connect is to
invoke the 'Redial Last Number' facility. This will dial the last number used,
which will be the one the person using it before you typed. Write down the
number, as no one would be calling a number without a computer on it. This
is a good way to find new systems to hack. Also, on a VENTEL modem, type 'D'
for Display and it will display the five numbers stored as macros in the
modem's memory.
There are also different types of servers for remote Local Area Networks
(LAN) that have many machines all over the office or the nation connected to
them. I'll discuss identifying these later in the computer ID section.
And finally, you may connect to something that says 'X.25 Communication
PAD' and then some more stuff, followed by a new @ prompt. This is a PAD
just like the one you are on, except that all attempted connections are billed
to the PAD, allowing you to connect to those nodes who earlier refused collect
connections.
This also has the added bonus of confusing where you are connecting from.
When a packet is transmitted from PAD to PAD, it contains a header that has
the location you're calling from. For instance, when you first connected
to Telenet, it might have said 212 44A CONNECTED if you called from the 212
area code. This means you were calling PAD number 44A in the 212 area.
That 21244A will be sent out in the header of all packets leaving the PAD.
Once you connect to a private PAD, however, all the packets going out
from *it* will have its address on them, not yours. This can be a valuable
buffer between yourself and detection.
Phone Scanning
~~~~~~~~~~~~~~
Finally, there's the time-honored method of computer hunting that was made
famous among the non-hacker crowd by that Oh-So-Technically-Accurate movie
Wargames. You pick a three digit phone prefix in your area and dial every
number from 0000 --> 9999 in that prefix, making a note of all the carriers
you find. There is software available to do this for nearly every computer
in the world, so you don't have to do it by hand.
Part Three: I've Found a Computer, Now What?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This next section is applicable universally. It doesn't matter how you
found this computer - it could be through a network, or it could be from
carrier scanning your High School's phone prefix - you've got this prompt.
What the hell is it?
I'm *NOT* going to attempt to tell you what to do once you're inside of
any of these operating systems. Each one is worth several G-files in its
own right. I'm going to tell you how to identify and recognize certain
OpSystems, how to approach hacking into them, and how to deal with something
that you've never seen before and have no idea what it is.
VMS- The VAX computer is made by Digital Equipment Corporation (DEC),
and runs the VMS (Virtual Memory System) operating system.
VMS is characterized by the 'Username:' prompt. It will not tell
you if you've entered a valid username or not, and will disconnect
you after three bad login attempts. It also keeps track of all
failed login attempts and informs the owner of the account next time
s/he logs in how many bad login attempts were made on the account.
It is one of the most secure operating systems around from the
outside, but once you're in there are many things that you can do
to circumvent system security. The VAX also has the best set of
help files in the world. Just type HELP and read to your heart's
content.
Common Accounts/Defaults: [username: password [[,password]] ]
SYSTEM: OPERATOR or MANAGER or SYSTEM or SYSLIB
OPERATOR: OPERATOR
SYSTEST: UETP
SYSMAINT: SYSMAINT or SERVICE or DIGITAL
FIELD: FIELD or SERVICE
GUEST: GUEST or unpassworded
DEMO: DEMO or unpassworded
DECNET: DECNET
DEC-10- An earlier line of DEC computer equipment, running the TOPS-10
operating system. These machines are recognized by their
'.' prompt. The DEC-10/20 series are remarkably hacker-friendly,
allowing you to enter several important commands without ever
logging into the system. Accounts are in the format [xxx,yyy] where
xxx and yyy are integers. You can get a listing of the accounts and
the process names of everyone on the system before logging in with
the command .systat (for SYstem STATus). If you see an account
that reads [234,1001] BOB JONES, it might be wise to try BOB or
JONES or both for a password on this account. To login, you type
.login xxx,yyy and then type the password when prompted for it.
The system will allow you unlimited tries at an account, and does
not keep records of bad login attempts. It will also inform you
if the UIC you're trying (UIC = User Identification Code, 1,2 for
example) is bad.
Common Accounts/Defaults:
1,2: SYSLIB or OPERATOR or MANAGER
2,7: MAINTAIN
5,30: GAMES
UNIX- There are dozens of different machines out there that run UNIX.
While some might argue it isn't the best operating system in the
world, it is certainly the most widely used. A UNIX system will
usually have a prompt like 'login:' in lower case. UNIX also
will give you unlimited shots at logging in (in most cases), and
there is usually no log kept of bad attempts.
Common Accounts/Defaults: (note that some systems are case
sensitive, so use lower case as a general rule. Also, many times
the accounts will be unpassworded, you'll just drop right in!)
root: root
admin: admin
sysadmin: sysadmin or admin
unix: unix
uucp: uucp
rje: rje
guest: guest
demo: demo
daemon: daemon
sysbin: sysbin
Prime- Prime computer company's mainframe running the Primos operating
system. They are easy to spot, as they greet you with
'Primecon 18.23.05' or the like, depending on the version of the
operating system you run into. There will usually be no prompt
offered, it will just look like it's sitting there. At this point,
type 'login <username>'. If it is a pre-18.00.00 version of Primos,
you can hit a bunch of ^C's for the password and you'll drop in.
Unfortunately, most people are running versions 19+. Primos also
comes with a good set of help files. One of the most useful
features of a Prime on Telenet is a facility called NETLINK. Once
you're inside, type NETLINK and follow the help files. This allows
you to connect to NUA's all over the world using the 'nc' command.
For example, to connect to NUA 026245890040004, you would type
@nc :26245890040004 at the netlink prompt.
Common Accounts/Defaults:
PRIME PRIME or PRIMOS
PRIMOS_CS PRIME or PRIMOS
PRIMENET PRIMENET
SYSTEM SYSTEM or PRIME
NETLINK NETLINK
TEST TEST
GUEST GUEST
GUEST1 GUEST
HP-x000- This system is made by Hewlett-Packard. It is characterized by the
':' prompt. The HP has one of the more complicated login sequences
around- you type 'HELLO SESSION NAME,USERNAME,ACCOUNTNAME,GROUP'.
Fortunately, some of these fields can be left blank in many cases.
Since any and all of these fields can be passworded, this is not
the easiest system to get into, except for the fact that there are
usually some unpassworded accounts around. In general, if the
defaults don't work, you'll have to brute force it using the
common password list (see below.) The HP-x000 runs the MPE operat-
ing system, the prompt for it will be a ':', just like the logon
prompt.
Common Accounts/Defaults:
MGR.TELESUP,PUB User: MGR Acct: HPONLY Grp: PUB
MGR.HPOFFICE,PUB unpassworded
MANAGER.ITF3000,PUB unpassworded
FIELD.SUPPORT,PUB user: FLD, others unpassworded
MAIL.TELESUP,PUB user: MAIL, others
unpassworded
MGR.RJE unpassworded
FIELD.HPPl89 ,HPPl87,HPPl89,HPPl96 unpassworded
MGR.TELESUP,PUB,HPONLY,HP3 unpassworded
IRIS- IRIS stands for Interactive Real Time Information System. It orig-
inally ran on PDP-11's, but now runs on many other minis. You can
spot an IRIS by the 'Welcome to "IRIS" R9.1.4 Timesharing' banner,
and the ACCOUNT ID? prompt. IRIS allows unlimited tries at hacking
in, and keeps no logs of bad attempts. I don't know any default
passwords, so just try the common ones from the password database
below.
Common Accounts:
MANAGER
BOSS
SOFTWARE
DEMO
PDP8
PDP11
ACCOUNTING
VM/CMS- The VM/CMS operating system runs on International Business Machines
(IBM) mainframes. When you connect to one of these, you will get a
message similar to 'VM/370 ONLINE', and then a '.' prompt,
just like TOPS-10 does. To login, you type 'LOGON <username>'.
Common Accounts/Defaults are:
AUTOLOG1: AUTOLOG or AUTOLOG1
CMS: CMS
CMSBATCH: CMS or CMSBATCH
EREP: EREP
MAINT: MAINT or MAINTAIN
OPERATNS: OPERATNS or OPERATOR
OPERATOR: OPERATOR
RSCS: RSCS
SMART: SMART
SNA: SNA
VMTEST: VMTEST
VMUTIL: VMUTIL
VTAM: VTAM
NOS- NOS stands for Networking Operating System, and runs on the Cyber
computer made by Control Data Corporation. NOS identifies itself
quite readily, with a banner of 'WELCOME TO THE NOS SOFTWARE
SYSTEM. COPYRIGHT CONTROL DATA 1978,1987'. The first prompt you
will get will be FAMILY:. Just hit return here. Then you'll get
a USER NAME: prompt. Usernames are typically 7 alphanumeric
characters long, and are *extremely* site dependent. Operator
accounts begin with a digit, such as 7ETPDOC.
Common Accounts/Defaults:
$SYSTEM unknown
SYSTEMV unknown
Decserver- This is not truly a computer system, but is a network server that
has many different machines available from it. A Decserver will
say 'Enter Username>' when you first connect. This can be anything,
it doesn't matter, it's just an identifier. Type 'c', as this is
the least conspicuous thing to enter. It will then present you
with a 'Local>' prompt. From here, you type 'c <system name>' to
connect to a system. To get a list of system names, type
'sh services' or 'sh nodes'. If you have any problems, online
help is available with the 'help' command. Be sure and look for
services named 'MODEM' or 'DIAL' or something similar, these are
often outdial modems and can be useful!
GS/1- Another type of network server. Unlike a Decserver, you can't
predict what prompt a GS/1 gateway is going to give you. The
default prompt is 'GS/1>', but this is redefinable by the
system administrator. To test for a GS/1, do a 'sh d'. If that
prints out a large list of defaults (terminal speed, prompt,
parity, etc...), you are on a GS/1. You connect in the same manner
as a Decserver, typing 'c <system name>'. To find out what systems
are available, do a 'sh n' or a 'sh c'. Another trick is to do a
'sh m', which will sometimes show you a list of macros for logging
onto a system. If there is a macro named VAX, for instance, type
'do VAX'.
The above are the main system types in use today. There are
hundreds of minor variants on the above, but this should be
enough to get you started.
Unresponsive Systems
~~~~~~~~~~~~~~~~~~~~
Occasionally you will connect to a system that will do nothing but sit
there. This is a frustrating feeling, but a methodical approach to the system
will yield a response if you take your time. The following list will usually
make *something* happen.
1) Change your parity, data length, and stop bits. A system that won't re-
spond at 8N1 may react at 7E1 or 8E2 or 7S2. If you don't have a term
program that will let you set parity to EVEN, ODD, SPACE, MARK, and NONE,
with data length of 7 or 8, and 1 or 2 stop bits, go out and buy one.
While having a good term program isn't absolutely necessary, it sure is
helpful.
2) Change baud rates. Again, if your term program will let you choose odd
baud rates such as 600 or 1100, you will occasionally be able to penetrate
some very interesting systems, as most systems that depend on a strange
baud rate seem to think that this is all the security they need...
3) Send a series of <cr>'s.
4) Send a hard break followed by a <cr>.
5) Type a series of .'s (periods). The Canadian network Datapac responds
to this.
6) If you're getting garbage, hit an 'i'. Tymnet responds to this, as does
a MultiLink II.
7) Begin sending control characters, starting with ^A --> ^Z.
8) Change terminal emulations. What your vt100 emulation thinks is garbage
may all of a sudden become crystal clear using ADM-5 emulation. This also
relates to how good your term program is.
9) Type LOGIN, HELLO, LOG, ATTACH, CONNECT, START, RUN, BEGIN, LOGON, GO,
JOIN, HELP, and anything else
f:\12000 essays\sciences (985)\Computer\X Hacking56.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
NEW CORDLESS TELEPHONE FREQUENCY LISTINGS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CHANNEL     BASE       PORTABLE TELEPHONE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 46.610 49.670
2 46.630 49.845*
3 46.670 49.860*
4 46.710 49.770
5 46.730 49.875*
6 46.770 49.830*
7 46.830 49.890*
8 46.870 49.930
9 46.930 49.990
10 46.970 49.970
Some of the older cordless phones using the frequencies marked by the <*>
asterisk are paired with frequencies around 1.7 MHz. Listening to the 1.7 MHz
side will yield both sides of the conversation.
The best frequencies to monitor are the 46 MHz ones, as they repeat both sides
of the conversation. Power output of both base and hand units is less than
100 mW, or 1/10 watt, so the range is limited. Careful monitoring will produce
some outstanding results. It is not uncommon to hear conversations up to a
mile away.
f:\12000 essays\sciences (985)\Computer\X Hacking59.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Jenna Gray
p.1
Another one got caught today, it's all over the papers. "Teenager Arrested in Computer
Crime Scandal", "Hacker Arrested after Bank Tampering"... Damn kids. They're all
alike. But did you, in your three-piece psychology and 1950's technobrain, ever take a
look behind the eyes of the hacker? Did you ever wonder what made him tick, what
forces shaped him, what may have molded him? I am a hacker, enter my world...Mine is a
world that begins with school... I'm in junior high or high school. I've listened to
teachers explain for the fifteenth time how to reduce a fraction. I understand it. "No, Ms.
Smith, I didn't show my work. I did it in my head..." Damn kid. Probably copied it.
They're all alike. I made a discovery today. I found a computer. Wait a second, this is
cool. It does what I want it to. If it makes a mistake, it's because I screwed it up. Not
because it doesn't like me... Or feels threatened by me.. Or thinks I'm a smart ass... Or
doesn't like teaching and shouldn't be here... Damn kid. All he does is play games.
They're all alike. And then it happened... a door opened to a world... rushing through the
phone line like heroin through an addict's veins, an electronic pulse is sent out, a refuge
from the day-to-day incompetencies is sought... a board is found. "This is it... this is
where I belong... "I know everyone here... even if I've never met them, never talked to
them, may never hear from them again... I know you all... Damn kid. Tying up the phone
line again. They're all alike... you bet your ass we're all alike... we've been spoon-fed baby
food at school when we hungered for steak.. the bits of meat that you did let slip through
were pre-chewed and tasteless. We've been dominated by sadists, or ignored by the
apathetic. The few that had something to teach found us willing pupils, but those few
are like drops of water in the desert. This is our world now... the world of the electron
and the switch, the beauty of the baud. We make use of a service already existing without
paying for what could be dirt-cheap if it wasn't run by profiteering gluttons, and you call
us criminals. We explore... and you call us criminals. We seek after knowledge.. and you
call us criminals. We exist without skin color, without nationality, without religious
bias...and you call us criminals? Yes, I am a criminal. My crime is that of curiosity. My
crime is that of judging people by what they say and think, not what they look like. My
crime is that of outsmarting you, something that you will never forgive me for. I am a
hacker, and this is my manifesto. you may stop this individual, but you can't stop us all...
after all, we're all alike.
Hacking is a serious offense. I think I agree with Jansie Kotze's theories; she explains a lot of the things that I was wondering about. I think that she had a lot to say, and she said it very well.
On the other hand, the boy who is supposedly in junior high has serious problems. After reading the page he had written, I found that I was visitor #27,461 to his homepage. I was shocked... and I wondered how many little kids under 12 years old had been in there and gotten all sorts of ideas... bad ideas. He had everything from viruses to download to other tips. He even had a "cookbook" that he called the infection connection.
I never thought about hacking and phreaking all that much until now. Sure, kids give each other little viruses for kicks, but when you can break passwords and break through security barriers, that is getting out of hand. I think that it all comes down to a desire to know.
He even said in one paragraph that we could easily find out his phone # and track him down, and he put a smiley face after the sentence... like being stalked was a joke. I think there are a lot of sick people in this world that aren't computer nerds. They are maniacs who have fun torturing other people and proving they can break into any file, do anything, and nothing will stop them.
There was so much information that it was mind-boggling. I didn't want to look at it, because it really is intriguing... interesting. But I finally concluded that I was actually scared. In those chatrooms, people can send viruses just like that, and mess up your whole Netscape if you open the file that they send you. Even in this particular guy's homepage, he could have planted a virus so that when you went into a particular area, a virus would download. I was cautious of where I went, but I think that they wouldn't do that; it would give them away too easily. I think that the reason they put up a homepage is to prove that their knowledge is great... and they are actually competing with one another to be the best... and not get caught.
This guy was a kid himself... and he was mocking us all. I don't see why he needed to brag, but I guess every kid wants to be noticed. Well, in my mind, he succeeded. He is living in the technology of the future... and is way more advanced than even I am. I envy him in some ways, but then I don't. Just one slip and they catch him; bam, he's in jail. I think I'll live on the safe track, use the technology wisely, and respect others who have the same technology and knowledge as I do.
f:\12000 essays\sciences (985)\Computer\X Internet39.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Internet
Introduction
- What is the internet?
- Why you should have it?
- I have it
Body
- Who sells the internet?
- Equipment that is needed?
- Legal/illegal implications?
- Accessibility - which browser?
- Material
- Protection
- Upload/download
Conclusion
- Advantages?
- What can it do for you?
- New world
Internet
The internet is a service that is available by computer to subscribers. The
internet opens up a whole new world of communication, information and
entertainment. Having access to the internet has given me a chance to explore a
totally new dimension in technology. I believe that anybody who has access to the
internet will benefit greatly from this experience.
The internet service is marketed by various companies, and their number is
increasing on a daily basis. These companies basically offer the same package but
differ in the amounts they charge for membership. In my opinion a good company
to subscribe to is one that offers a flat rate.
In order to access the internet you require a good computer and a powerful
modem. If you have these it is much easier and faster to "Surf the Net". I would
recommend a 28.8 kbps modem manufactured by U.S. Robotics.
The Internet can give you access to both legal and illegal sites on the net.
There is pirated software, e.g. full versions of games, that you can access without
actually paying for it.
The internet can only be accessed with a browser. There are a few web
browsers, but the two main ones are Netscape Navigator and Internet Explorer.
As I mentioned earlier, the internet allows everyone to access various
topics of interest on the web. The choices range from recreation and education to
hobbies, communications and entertainment.
There is always a risk of accessing material which is not appropriate, e.g.
pornographic material and racist material, which are all available to anyone who
wishes to view them. However, there is a way of protecting children who
should not be viewing such material. There is software, like the Internet Nanny,
Adult Lock and Firewall, which once installed will protect children from viewing it.
One of the disadvantages is that while downloading files from the internet
there is a possibility of downloading a virus into your system. In order to prevent
this from happening I suggest you only download from trustworthy and reliable
sites. Uploading files is basically giving a file to someone else.
The internet is a very powerful tool to have if used in the right way. From
the comfort of your own home you can surf the net and find the power that lies
within the web. I would recommend that, if possible, everyone should at some time
or other have access to the internet.
BY ALNUR ISAMIL 8-6
f:\12000 essays\sciences (985)\Computer\X Software Piracy24.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Software Piracy
What is Software Piracy
The PC industry is just over 20 years old. In those 20 years, both the quality and quantity of available software programs have increased dramatically. Although approximately 70% of the worldwide market is today supplied by developers in the United States, significant development work is occurring in scores of nations around the world. But in both the United States and abroad, unauthorized copying of personal computer software is a serious problem. On average, for every authorized copy of personal computer software in use, at least one unauthorized copy is made. Unauthorized copying is known as software piracy, and in 1994 it cost the software industry in excess of US$15 billion. Piracy is widely practiced and widely tolerated. In some countries, legal protection for software is nonexistent (e.g., Kuwait); in others, laws are unclear (e.g., Israel) or not enforced with sufficient commitment (e.g., the PRC). Significant piracy losses are suffered in virtually every region of the world. In some areas (e.g., Indonesia), the rate of unauthorized copies is believed to be in excess of 99%.
Why do People Use Pirated Software?
A major reason for the use of pirated software is the prices of the REAL thing. Just walk into a CompUSA, Electronics Boutique, Computer City, Egghead, etc and you will notice the expensive price tags on copies of the most commonly used programs and the hottest games. Take the recent Midwest Micro holiday catalogue for example and notice the prices. Microsoft Windows 95: $94, Microsoft Office 95: $224, Microsoft Visual C++: $250, Borland C++: $213, Corel Draw 7: $229, Corel Office Professional 7: $190, Lotus Smartsuite 96: $150, Microsoft Flight Simulator95: $50, Warcraft 2: $30. The list goes on and on and the prices for the programs listed above were only upgrade versions. Users of the software listed above include anywhere from large companies like AT&T to yourself, the average user at home. Although a $30 game like Warcraft 2 doesn't seem like much, by the time you finish reading this paper, it will seem like a fortune.
Ease of Availability
Since the law states clearly that making a copy of what you own and distributing it, or installing one copy of a piece of software on two separate computers, is illegal, why do average Joes like you and us still do it? There are many answers to that question, and all of them seem legitimate except that none of them can be legally justified. A friend borrowing another friend's Corel Draw or Windows 95 to install on their own PC is so common that the issue of piracy probably doesn't even come to mind right away, or even at all.
Pirated Software on the Internet
The Internet is sometimes referred to as a "Pirate's Heaven." Pirated software is available all over the net if you bother to look for it. Just go to any of the popular search engines like Excite, Infoseek or Yahoo and type in the common phrases "warez, appz, gamez, hacks" and thousands of search results will come up. Although many of the links on the pages will be broken, because the people have either moved the page or had it shut down, some of the links will work, and that one link usually has a decent amount of stuff for you to leech off of - or, to put it a better way, to download.
Web Sites That we Have Personally Visited:
Jelle's Warez Collection
Wazh's Warez Page
Beg's Warez Page
Chovy's Empire
The Spawning Grounds
GAMEZ
Lmax's Warez Page
Jugg's Warez-List
Jureweb Warez Page
Top Warez Page
Why Are They There?
Why is there pirated software on the net? There can only be two possible answers: either the people who upload these files are very nice people, or they do it just because it's illegal and web browsers like us won't mind taking the time to visit these sites to download the software. What they get out of it is the thousands of "hits" their sites get a day, which makes them very happy.
Anonymous and Account-Based FTP Sites
FTP stands for File Transfer Protocol. FTP sites are around so that people can exchange software with each other, and companies like Microsoft can distribute info and demos to users who visit their FTP site. Something they don't want happening is the distribution of their full-release products on "Pirate" FTP sites. "Pirate" FTP sites come and go; most don't stay up for more than a day or two. They are also referred to as 0-day FTP sites. It's extremely difficult to log on to these sites because they are usually full of leechers like us or require a username and password.
FTP Sites That we Have Visited:
ftp://ftp.epri.com
ftp://ftp.dcs.gla.ac.uk
ftp://204.177.0.18
ftp://207.48.187.133
ftp://192.88.237.2
ftp://153.104.11.94
ftp://208.137.11.105
ftp://194.85.157.2
Newsgroups
There are over 20,000 newsgroups on the net. The majority of them are nonsense, but if you happen to stumble upon the right one, you'll be able to get almost any crack or serial number for any game or program. Although programs and games are not abundant on newsgroups, you'll be able to obtain registered versions of such popular shareware as Winzip and Mirc, and if you post trade requests, people will respond.
Newsgroups With Cracks, Serial #'s, Programs and Games
News:alt.binaries.cracks
News:alt.binaries.games
News:alt.crackers
News:alt.cracks
News:alt.hacker
News:alt.binaries.warez.ibm-pc
News:alt.binaries.warez.ibm-pc.games
News:alt.warez.ibm-pc
Exchanging Through E-Mail
It is illegal to send copyrighted programs and games through e-mail but does anyone really care? Everyday, there are hundreds and thousands of illegally attached programs and games sent through the net in the form of e-mail. Just visit any of the above newsgroups and you'll see listings of people who want to trade through e-mail. We placed an ad in news:alt.binaries.cracks requesting three programs: Magnaram 97, Qemm8.0 and Corel Draw 7. We managed to receive both Magnaram 97 and Qemm 8.0 through e-mail from some nice person but did not receive Corel Draw 7 most likely because it was not a reasonable demand.
Modem Speeds
Part of the reason nobody sent us Corel Draw 7 is the size of the program and the many hours it takes to upload and download it. The two most common modem speeds at the time this report was written are 28.8kbps and 14.4kbps. Both are considered extremely slow when it comes to transferring enormous amounts of data. Most programs and games nowadays come on CD-ROMs which, if full, contain 650MB of data. The new X2 technology, cable modems, ISDN modems and DirecPC satellite dishes could ease the long download times, considering that all of the above are two to fourteen times faster at transferring data than 28.8kbps modems.
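Some rough, back-of-the-envelope Python arithmetic shows why nobody wanted to mail us a CD-sized program over a modem. The figures assume an ideal line and ignore protocol overhead, and the 128 kbps line is just an assumed example of the faster ISDN-class technologies mentioned above.

# Rough arithmetic: time to move a full 650 MB CD-ROM at the modem speeds
# discussed above. Ideal line assumed; real transfers take longer.

SIZE_BYTES = 650 * 1024 * 1024            # a full CD-ROM

for kbps in (14.4, 28.8, 128.0):          # 14.4k, 28.8k, assumed 128k ISDN
    bits_per_second = kbps * 1000
    seconds = SIZE_BYTES * 8 / bits_per_second
    print(f"{kbps:>6} kbps: {seconds / 3600:5.1f} hours")

# prints roughly:
#   14.4 kbps: 105.2 hours
#   28.8 kbps:  52.6 hours
#  128.0 kbps:  11.8 hours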
Cost of Pirated Software To The Industry
Piracy cost companies that produce computer software $13.1 billion in lost revenue during 1995. The loss exceeded the combined revenues of the 10 largest personal computer software companies. The dollar loss estimates were up from $12.2 billion in 1994 because of the spreading use of computers worldwide.
Microsoft (The Big Loser)
MS Windows 95 $179
MS Office Pro 95 $535
MS Project 95 $419
MS Publisher 97 $69
MS Visual C++ 4.0 $448
These are the prices they expect people to buy their software at. In Hong Kong, copies of these lucrative pieces of software can easily be had for about five US dollars, all of them on one CD. That will be explained further later.
The Honest Consumer
Software piracy harms all software companies and, ultimately, the end user. Piracy results in higher prices for honest users, reduced levels of support and delays in funding and development of new products, causing the overall breadth and quality of software to suffer.
US Laws
In 1964, the United States Copyright Office began to register software as a form of literary expression. The Copyright Act, Title 17 of the U.S. Code, was amended in 1980 to explicitly include computer programs. Today, according to the Copyright Act, it is illegal to make or distribute copyrighted material without authorization. The only
exceptions are the user's right to make a copy as an "essential step" in using the program (for example, by copying the program into RAM) and to make a single backup copy for archival purposes (Title 17, Section 117). No other copies may be made without specific authorization from the copyright owner. In December 1990, the U.S. Congress approved the Software Rental Amendments Act, which generally prohibits the rental, leasing or lending of software without the express written permission of the copyright holder. This amendment followed the lead of the British Parliament (which passed a similar law, The Copyright, Designs and Patents Act, in 1988), and adds significant additional protection against unauthorized copying of personal computer software. In addition, the copyright holder may grant additional rights at the time the personal computer software is acquired. For example, many applications are sold in LAN (local area network) versions that allow a software package to be placed on a LAN for access by multiple users. Additionally, permission is given under special license agreement to make multiple copies for use throughout a large organization. But unless these rights are specifically granted, U.S. law prohibits a user from making duplicate copies of software except to ensure one working copy and one archival copy. Without authorization from the copyright owner, Title 18 of U.S. Code prohibits duplicating software for profit, making multiple copies for use by different users within an organization, downloading multiple copies from a network, or giving an unauthorized copy to another individual. All are illegal and a federal crime. Penalties include fines of up to $250,000 and jail terms up to five years (Title 18, Section
2320 and 2322).
Business Software Alliance (BSA)
The Business Software Alliance (BSA) promotes the continued growth of the software industry through its international public policy, enforcement, and education programs in 65 countries throughout North America, Europe, Asia, and Latin America. Founded in 1988, BSA's mission is to advance free and open world trade for legitimate business software by advocating strong intellectual property protection for software.
BSA's worldwide members include the leading publishers of software for personal computers such as Adobe Systems, Inc., Apple Computer, Inc., Autodesk, Inc., Bentley Systems, Inc., Lotus Development Corp., Microsoft Corp., Novell, Inc., Symantec Corp., and The Santa Cruz Operation, Inc. BSA's Policy Council consists of these publishers and other leading computer technology companies including Apple Computer Inc., Computer Associates International, Inc., Digital Equipment Corp., IBM Corp., Intel Corp., and Sybase, Inc. Statistics of Software Piracy.
Court Cases
Inslaw vs. Dept. of Justice
-Sued Justice Dept for Software piracy.
-In 1982, Inslaw landed a $10M contract with the Justice Dept. to install
PROMIS case-tracking software in 20 offices.
-They allegedly spent $8M enhancing PROMIS on the assumption that they
could renegotiate the contract to recoup the expenses.
-But after the Justice Dept. got the source code, they terminated the contract
and pirated the code
-By 1985, Inslaw was forced into bankruptcy.
-Owners kept fighting and the case ended up in the US Bankruptcy Court
-In Feb. '88, Inslaw was awarded $6.8M in damages plus legal fees
Novell and Microsoft Settle Largest BBS Piracy Case Ever
-Scott W. Morris, operator of the Assassin's Guild BBS, agreed to pay
Microsoft and Novell $73,00 in cash and forfeit computer hardware valued at
More than $40,000
-In the raid, marshals seized 13 computers, 11 modems, a satellite dish, 9 gigs
of online data, and over 40 gigs of off-line data
Novell Files Software Piracy Suits Against 17 Companies in California
-The suits allege that the defendants were fraudulently obtaining Novell
upgrades and/or counterfeiting NetWare boxes to give the appearance of
a new product
-The suit follows Novell's discovery that the upgrade product was being sold
in Indonesia, the United Kingdom, United Arab Emirates, as well as the US
F.B.I. Reveals Arrest in Major CD-Rom Piracy Case
-The first major case of CD-Rom piracy in the United States
-A Canadian father and son were found in possession of 15,000 counterfeit
copies of Rebel Assault and Myst that were being sold at 25% of the retail
value
-Both men were free on bail
Pirated Software in Asia and the Rest of the World
Pirate Plants in China
The Chinese government says there are 34 factories in China producing compact discs and laser discs. Authorities say most have legitimate licenses to produce legal CDs.
But production capacity far outstrips domestic demand. According to the International Intellectual Property Alliance, a Washington, D.C.-based consortium of film, music, computer software and publishing businesses, China produces an estimated 100 million pirated CDs a year, while its domestic market is only 5 million to 7 million CDs annually.
Where is the oversupply going? To Hong Kong, and then overseas. Another major problem is that Chinese officials and soldiers have money invested in these factories, so no matter how hard the US pushes China to close them down, the Chinese government takes a laid-back approach. Software piracy in Asia is also closely tied to organized crime.
Vendors in Hong Kong
The Golden Shopping Arcade in Hong Kong's Sham Shui Po district is a software pirate's dream and a software company's nightmare. Here you can buy CDs called installer discs for about nine US dollars. Each volume of these installers contains 50+ programs, each compressed with a self-extracting utility. Volume 2 has a beta copy of Windows 95 as well as OS/2 Warp, CorelDraw! 5, Quicken 4.0, Atari Action Pack for Windows, Norton Commander, KeyCad, Adobe Premiere, Microsoft Office, and dozens of other applications, including a handful written in Chinese. The programs on this disc would cost around $20,000-$35,000 US at retail. It is very common for a store to be closed for part of the day and then reopen later because of raids by the authorities. These stores, as you would expect, are extremely crowded with kids and tourists.
US Tourists
A good number of Americans who travel to Hong Kong or other parts of Asia bring home pirated software of some sort, because the prices there are a tiny fraction of what the same software costs in the US. The usual way to do it is to stuff the CDs into clothing and hand-carried luggage. Another approach is to mail them back to the US through the postal service. Both of these methods work very well; we have had relatives do this for us, and the success rate thus far is 100%. The United States Customs Service has been trained to catch software pirates at ports of entry, but in practice it is far more worried about illegal immigrants and terrorists than about software pirates.
Software Piracy
What is Software Piracy
The PC industry is just over 20 years old. In those 20 years, both the quality and quantity of available software programs have increased dramatically. Although approximately 70% of the worldwide market is today supplied by developers in the United States, significant development work is occurring in scores of nations around the world. But in both the United States and abroad, unauthorized copying of personal computer software is a serious problem. On average, for every authorized copy of personal computer software in use, at least one unauthorized copy is made. Unauthorized copying is known as software piracy, and in 1994 it cost the software industry in excess of US$15 billion. Piracy is widely practiced and widely tolerated. In some countries, legal protection for software is nonexistent (e.g., Kuwait); in others, laws are unclear (e.g., Israel) or not enforced with sufficient commitment (e.g., the PRC). Significant piracy losses are suffered in virtually every region of the world. In some areas (e.g., Indonesia), the rate of unauthorized copying is believed to exceed 99%.
Why do People Use Pirated Software?
A major reason for the use of pirated software is the price of the real thing. Just walk into a CompUSA, Electronics Boutique, Computer City, Egghead, etc., and you will notice the expensive price tags on the most commonly used programs and the hottest games. Take the recent Midwest Micro holiday catalogue, for example, and notice the prices: Microsoft Windows 95, $94; Microsoft Office 95, $224; Microsoft Visual C++, $250; Borland C++, $213; Corel Draw 7, $229; Corel Office Professional 7, $190; Lotus SmartSuite 96, $150; Microsoft Flight Simulator 95, $50; Warcraft 2, $30. The list goes on and on, and the prices listed above are only for upgrade versions. Users of this software range from large companies like AT&T to the average user at home. Although a $30 game like Warcraft 2 doesn't seem like much, by the time you finish reading this paper it will seem like a fortune.
Ease of Availability
Since the law clearly states that copying software you own and distributing it, or installing a single copy of a program on two separate computers, is illegal, why do average Joes like us still do it? There are many answers to that question, and all of them seem legitimate, except that none of them can be legally justified. A friend borrowing another friend's CorelDraw or Windows 95 to install on his or her own PC is so common that the issue of piracy probably doesn't come to mind right away, or even at all.
Pirated Software on the Internet
The Internet is sometimes referred to as a "Pirate's Heaven." Pirated software is available all over the net if you bother to look for it. Just go to any of the popular search engines like Excite, Infoseek, or Yahoo and type in common terms such as "warez," "appz," "gamez," or "hacks," and thousands of search results will come up. Many of the links on those pages will be broken, because the pages have been moved or shut down, but some links will work, and that one working link usually has a decent amount of software for you to leech, or, to put it more plainly, to download.
Web Sites That we Have Personally Visited:
Jelle's Warez Collection
Wazh's Warez Page
Beg's Warez Page
Chovy's Empire
The Spawning Grounds
GAMEZ
Lmax's Warez Page
Jugg's Warez-List
Jureweb Warez Page
Top Warez Page
Why Are They There?
Why is there pirated software on the net? There can only be two possible answers: either the people who upload these files are very generous, or they do it simply because it is illegal, knowing that web surfers like us won't mind taking the time to visit their sites and download the software. What they get out of it is the thousands of "hits" their sites receive each day, which makes them very happy.
Anonymous and Account-Based FTP Sites
FTP stands for File Transfer Protocol. FTP sites exist so that people can exchange software with each other and so that companies like Microsoft can distribute information and demos to users who visit their FTP sites. Something they do not want happening is the distribution of their full-release products on "pirate" FTP sites. "Pirate" FTP sites come and go; most don't stay up for more than a day or two, which is why they are also referred to as 0-day FTP sites. It is extremely difficult to log on to these sites because they are usually full of leechers like us or require a username and password.
FTP Sites That we Have Visited:
ftp://ftp.epri.com
ftp://ftp.dcs.gla.ac.uk
ftp://204.177.0.18
ftp://207.48.187.133
ftp://192.88.237.2
ftp://153.104.11.94
ftp://208.137.11.105
ftp://194.85.157.2
Newsgroups
There are over 20,000 newsgroups on the net. The majority of them are nonsense, but if you happen to stumble upon the right one, you'll be able to get almost any crack or serial number for any game or program. Although full programs and games are not abundant on newsgroups, you'll be able to obtain registered versions of popular shareware such as WinZip and mIRC, and if you post trade requests, people will respond to them.
Newsgroups With Cracks, Serial #'s, Programs and Games
News:alt.binaries.cracks
News:alt.binaries.games
News:alt.crackers
News:alt.cracks
News:alt.hacker
News:alt.binaries.warez.ibm-pc
News:alt.binaries.warez.ibm-pc.games
News:alt.warez.ibm-pc
Exchanging Through E-Mail
It is illegal to send copyrighted programs and games through e-mail, but does anyone really care? Every day, hundreds, if not thousands, of illegally attached programs and games are sent over the net in the form of e-mail. Just visit any of the above newsgroups and you'll see listings of people who want to trade through e-mail. We placed an ad in news:alt.binaries.cracks requesting three programs: MagnaRAM 97, QEMM 8.0, and Corel Draw 7. We received both MagnaRAM 97 and QEMM 8.0 through e-mail from some nice person, but did not receive Corel Draw 7, most likely because it was not a reasonable request.
Modem Speeds
Part of the reason nobody sent us Corel Draw 7 is the size of the program and the many hours it takes to upload and download it. The two most common modem speeds at the time this report was written are 28.8 kbps and 14.4 kbps. Both are considered extremely slow when it comes to transferring large amounts of data, and most programs and games nowadays ship on CD-ROMs, which hold up to 650 MB of data. Newer options such as X2 modems, cable modems, ISDN lines, and DirecPC satellite dishes ease the long download times somewhat, since all of them transfer data roughly two to fourteen times faster than a 28.8 kbps modem.
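To make the scale of the problem concrete, here is a minimal back-of-the-envelope sketch (in Python) of how long a full 650 MB CD image would take to transfer at the speeds mentioned above; the nominal line rates are assumed round figures, and protocol overhead and compression are ignored.

    # Rough download-time estimates for a full 650 MB CD image.
    # Line rates are assumed nominal values; overhead and compression ignored.
    CD_SIZE_BYTES = 650 * 1024 * 1024

    modems = {
        "14.4 kbps": 14_400,
        "28.8 kbps": 28_800,
        "56 kbps (X2)": 56_000,
        "128 kbps (ISDN)": 128_000,
        "400 kbps (DirecPC)": 400_000,
    }

    for name, bits_per_second in modems.items():
        hours = CD_SIZE_BYTES * 8 / bits_per_second / 3600
        print(f"{name:>20}: about {hours:5.1f} hours")

At 28.8 kbps the transfer works out to more than two days of continuous connection, which helps explain why cracks and small utilities circulated freely while full CD titles rarely did.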
Cost of Pirated Software To The Industry
Piracy cost companies that produce computer software $13.1 billion in lost revenue during 1995. That loss exceeded the combined revenues of the 10 largest personal computer software companies. The dollar-loss estimate was up from $12.2 billion in 1994 because of the spreading use of computers worldwide.
Microsoft (The Big Loser)
MS Windows 95 $179
MS Office Pro 95 $535
MS Project 95 $419
MS Publisher 97 $69
MS Visual C++ 4.0 $448
These are the prices at which they expect people to buy their software. In Hong Kong, copies of all of these lucrative pieces of software can easily be had on a single CD for about five US dollars. That will be explained in more detail later.
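As a quick sanity check on the scale of Microsoft's exposure, the short sketch below (Python) sums the list prices quoted above and compares them with the roughly five-dollar street price of a pirated compilation CD; both figures are simply the ones given in this paper.

    # Compare the quoted retail prices with the reported street price of a
    # pirated compilation CD containing all five products.
    retail_prices = {
        "MS Windows 95": 179,
        "MS Office Pro 95": 535,
        "MS Project 95": 419,
        "MS Publisher 97": 69,
        "MS Visual C++ 4.0": 448,
    }
    pirated_cd_price = 5  # approximate Hong Kong street price, US dollars

    total = sum(retail_prices.values())
    print(f"Retail total: ${total}")           # 1650
    print(f"Pirated disc: ${pirated_cd_price}")
    print(f"Retail costs about {total // pirated_cd_price}x more")  # ~330x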
The Honest Consumer
Software piracy harms all software companies and, ultimately, the end user. Piracy results in higher prices for honest users, reduced levels of support and delays in funding and development of new products, causing the overall breadth and quality of software to suffer.
US Laws
In 1964, the United States Copyright Office began to register software as a form of literary expression. The Copyright Act, Title 17 of the U.S. Code, was amended in 1980 to explicitly include computer programs. Today, according to the Copyright Act, it is illegal to make or distribute copyrighted material without authorization; the only exceptions are the "essential step" and archival copies described earlier in this paper.
f:\12000 essays\sciences (985)\Computer\X Software Piracy45.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Software piracy is the failure of a licensed user to adhere to the conditions of a software license, or the unauthorized use or reproduction of copyrighted software by a person or entity that has not been licensed to use it. Software piracy has become a household word and a household crime, and it has had a great effect on the software industry. It is a problem that can only be solved by the choices of each individual.
The computer software industry is one of the great business success stories of recent history, with healthy increases in both hardware and software sales around the world. However, software piracy threatens the industry's economic future. According to estimates by the U.S. Software Publisher's Association, as much as $7.5 billion of American software may be illegally copied and distributed annually worldwide. These copies work as well as the originals and sell for significantly less money. Piracy is relatively easy, and only the largest rings of distributors are usually caught. In addition, software pirates know that they are unlikely to serve hard jail time when prisons are overcrowded with people convicted of more serious crimes. The software industry loses more than $15.2 billion annually worldwide due to software piracy.
Software piracy costs the industry:
$482 every second
$28,900 every minute
$1.7 million every hour
$41.6 million every day
$291.5 million every week
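Those per-unit figures follow directly from the annual estimate quoted above; the minimal sketch below (Python, assuming a 365-day year) reproduces the breakdown from the $15.2 billion figure.

    # Break the annual loss estimate into per-second/minute/hour/day/week figures.
    annual_loss = 15.2e9  # US dollars per year, industry-wide estimate

    per_second = annual_loss / (365 * 24 * 3600)
    print(f"per second: ${per_second:,.0f}")             # ~ $482
    print(f"per minute: ${per_second * 60:,.0f}")        # ~ $28,900
    print(f"per hour:   ${per_second * 3600:,.0f}")      # ~ $1.7 million
    print(f"per day:    ${annual_loss / 365:,.0f}")      # ~ $41.6 million
    print(f"per week:   ${annual_loss / 365 * 7:,.0f}")  # ~ $291.5 million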
To understand software piracy, one must get inside the mind of the pirate. People who wouldn't think of sneaking merchandise out of a store or robbing a house regularly obtain copies of computer programs they haven't paid for. The pirate has a set of excuses for his actions: prices are too high; the company doesn't provide decent support; he's only going to use the program once in a while. What really makes software piracy seem less serious than other kinds of theft, though, is that nothing physical is taken. There is no immediate effect on the inventory or productive capacity of the creator of a piece of software if someone 500 miles away copies a disk and starts using it.
People tend to think of property as a material thing, and thus have a hard time regarding a computer program as property. However, property is not a concept pertaining to matter alone. Ownership is a concept which comes out of the fact that people live by creating things of value for their own use or for trade with others. Creation does not mean making matter, but rather changing the form of matter in line with an idea and a purpose. Most often, the bulk of the cost of creating goods lies in the production of the individual items. With software, the reverse is true: the cost of producing copies is negligible compared with the cost of constructing the form of the product.
In both cases, though, the only way a producer can benefit from offering his product in trade is for others to respect his right to it and to obtain it only on his terms. If people are going to make the production of software a full-time occupation, they should expect a return for their efforts. If they do not receive any benefit, they will have to switch to a different sort of activity if they want to keep working.
The thief, though, will seldom be caught and punished; his particular act of copying isn't likely to push a software publisher over the edge. In most cases, people can openly talk about their acts of piracy without suffering criticism. However, there is a more basic deterrent to theft than the risk of getting caught. A person can fake what he is to others, but not to himself. He knows that he is depending on other people's ignorance or willingness to pretend they haven't noticed. He may not feel guilty because of this, but he will always feel helpless and out of control. If he attempts to rationalize his actions, he becomes dependent on his own self-ignorance as well.
Thieves who abandon honesty often fall back on the idea of being smart. They think it's stupid to buy something when they can just take it. They know that their own cleverness works only because of the stupidity of others who pay for what they buy. The thieves are counting on the failure of the very people whose successful efforts they use.
The best defense against software piracy lies neither in physical barriers to copying nor in stiffer penalties. The main deterrent to theft in stores is not the presence of guards and magnetic detectors, but the fact that most people have no desire to steal. The best way to stop piracy is to instill a similar frame of mind among software users. This means breaking down the web of excuses by which pirates justify their actions, and leaving them to recognize what they are. Ultimately, this is the most important defense against any violation of people's rights; without an honest majority, no amount of effort by the police will be effective.
In almost all countries of the world, there are statutes, criminal and civil, which provide for the enforcement of copyright in software programs. The criminal penalties range from fines to jail terms, or both. Civil penalties may reach as high as $100,000 per infringement. In many countries, companies as well as individuals may face civil and criminal sanctions.
There are several different types of software piracy. Networking is a major contributor. Most software licenses are written so that the program can be installed on only one machine and used on only one machine at a time; with some network arrangements, however, the program can be loaded on several machines at once, which violates the agreement. On some networks, running the software over the wire is too slow, so copying the program onto each machine is much faster, and this too can be a violation of the license agreement.
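To illustrate the "one machine at a time" idea, here is a toy sketch (Python) of how a concurrent-use limit can be counted; the class and seat limit are purely illustrative and are not any vendor's actual licensing mechanism.

    import threading

    class SeatLicense:
        """Toy concurrent-use license counter (illustrative only)."""

        def __init__(self, max_seats):
            self._seats = threading.BoundedSemaphore(max_seats)

        def check_out(self):
            # True if a seat was free and is now taken, False otherwise.
            return self._seats.acquire(blocking=False)

        def check_in(self):
            self._seats.release()

    # A single-machine license is just the max_seats=1 case.
    lic = SeatLicense(max_seats=1)
    print(lic.check_out())  # True  -- the first machine gets the seat
    print(lic.check_out())  # False -- a second simultaneous user is refused

Loading the program onto several machines with no such check in place is exactly the kind of violation described above.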
End-user copying is a form of piracy in which individuals within organisations copy software programs from co-workers, friends, and relatives. This is the most prevalent form of software theft. Some refer to end-user copying as 'disk swapping'.
Hard disk loading happens when unlicensed software is pre-loaded onto computers that you buy. Generally you, as the customer, will have an original program on your hard drive that you may or may not have paid for; however, you will not receive the accompanying disks or documentation, and you will therefore not be entitled to technical support or upgrades. Dealers often use this practice as a sales feature or an added incentive to entice the sale.
Software rental is a form of piracy that takes place when an individual rents a computer with software loaded on it, or rents the software itself, from a rental shop or computer retailer. The licence agreement clearly states that the purchaser is prohibited from renting out the software. This often takes the form of a rental followed by a re-stocking charge when the software is returned to the retailer.
Counterfeit software involves both low quality disks and high quality fakes that are extremely close in appearance to the original software.
Stealing via bulletin boards is one of the fastest growing means of software theft. It involves downloading programs onto computers via a modem.
OEM unbundling can occur at either the Original Equipment Manufacturer (OEM) level or at the retailer. Unbundling involves the separating of OEM software from the hardware that it is licensed to be sold with. The product is clearly marked 'For Distribution With New PC Hardware Only' and is designed so that it cannot be sold on the retail shelf. The customer can run into support issues as it is the OEM that is required to provide support for this type of software. When you buy unbundled software you take a bigger risk of purchasing a counterfeit product.
In conclusion, software piracy has had a major impact on the software industry. Economically it has cost the industry billions of dollars each year and there is no sign that this will change in the near future. No amount of penalties or policing will stop the trend of software piracy. Each individual must develop their own moral standards so that they do not add to the problem.
f:\12000 essays\sciences (985)\Computer\X Telecommuting27.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As defined in Webster's New World Dictionary, Third Edition, telecommuting is "an electronic mode of doing work outside the office that traditionally has been done in the office, as by computer terminal in the employee's home." Basically, it is working at home utilizing current technology, such as computers, modems, and fax machines. Traditionally, people have commuted to work and back by car, bus, train, and subway. Through the innovation of telecommuting, the actual necessity of changing location in order to accomplish work has been challenged on the basis of concerns about energy conservation, loss of productivity, and other issues.
One advantage of telecommuting is energy conservation. A tremendous amount of energy is required to produce transportation equipment such as automobiles, buses, trains, and subways. If telecommuting is promoted, there will be less use of this equipment, and less energy will be required for its production, maintenance, and repair. Fuel resources needed to operate this equipment will be reduced. The building, repair, and maintenance of highways also consume a large amount of energy, not only in the operation of construction and repair equipment, but also in the manufacture and transportation of the required materials. An increase in the percentage of people telecommuting to work will decrease the need for expanded highways and associated road maintenance. The first two areas relate to getting to work. Once a person arrives at a central office, he or she represents another energy consumer, often magnified many times over what would be required at home. The office building has heating, cooling, and lighting needs, and the materials to build and maintain it require energy in their production and transportation. Working from home requires only modest incremental demands on energy for heating, cooling, and lighting, and makes effective use of existing building space and facilities.
Telecommuting also improves productivity. Much time is spent on unnecessary activities by people who commute back and forth to work in the conventional manner. Time is wasted from the minute one gets up to go to work until the minute one returns home from work. With telecommuting, one no longer needs to be constantly preparing for the commute and for being "presentable." One can go to work simply by tossing on a robe and slippers, grabbing a cup of coffee, and sitting down at the terminal. You no longer have to worry about whether the car will start, whether your clothes are neat, or whether you're perfectly groomed. That may still be important to you, but it no longer has to be. And you are no longer interrupted by the idle chatter that inevitably takes place at the central workplace - some of it useful for your work, but a lot of it just a waste of time and a perpetual interruption. As quoted in Computerworld, one telecommuter comments, "I was feeling really cramped in our old office. I find I can get much more done. It is much more quiet here at home."
In addition, telecommuting reduces family-related stress by allowing involvement with family and flexibility in the location of a remote worksite. Working in the home offers people a greater opportunity to share quality time with family members, to promote family values, and to develop stronger family ties and unity. Also, time saved through telecommuting could be spent with family members constructively, in ways that promote and foster the resolution of family problems. Since the actual location a telecommuter works from isn't relevant, the person could actually move to another town. This would alleviate the stress caused when a spouse has an opportunity to pursue his or her career in another town and must choose between the new opportunity and none at all, because the other spouse does not want to or cannot change employment. If either person could telecommute, the decision would be much easier.
Also, telecommuting promotes safety by reducing highway use by people rushing to get to work. There are thousands of traffic-related deaths every year, and thousands more people are severely injured trying to get to work. In addition, there is substantial property loss associated with traffic accidents that occur as people take chances in order to make the mad dash from home to the office. Oftentimes people have made the trip so often that they are not really alert, sometimes falling asleep at the wheel, and they become frustrated by the insistence that they come into the office every day when, in fact, most if not all of their work could be accomplished from their home or from sites much closer to home.
Telecommuting, however, does have its disadvantages. The most obvious disadvantage is the overwhelming cost of starting a telecommuting program. A study by Forrester Research, Inc. reveals "that it costs $30,000 to $45,000 a head to" train prospective telecommuters. After the first year, however, "per-user spending [is] cut to about $4,000"; also, "employees are starting to see telecommuting policies as a benefit, and companies offering it will be more competitive." Another disadvantage is the psychological impact it may have on employees. "Executives who have labored for years to win such corporate status symbols as secretaries and luxurious corner offices are reluctant to shed their hard-won perks." Some employees also complain that their "creativity... has been dampened" by a lack of interaction with their co-workers.
Despite the disadvantages, though, telecommuting is a viable option to any future plan to preserve and protect our environment from encroachment and pollution caused by auto emissions and the consumption of land by enlarged highways and an increasing area for parking. A telecommuting program can be put in place by following a few tips from Mindy Blodgett in her article "Lower costs spur move to more telecommuting":
"Form a telecommuting team that includes technical experts, upper managers and human resources staff, and assign a telework coordinator."
"Contact other companies to learn from their experiences."
"Train participants and supervisors."
"Monitor the program through surveys before and after a pilot."
Measuring productivity in actual dollars is difficult; productivity is best measured by the satisfaction and enjoyment of employees.
Bibliography
Bjerklie, David, and Patrick E. Cole. "Age of the road warrior." Time 145.12 (1995): 38-40.
Blodgett, Mindy. "Lower costs spur move to more telecommuting." Computerworld 30.45 (1996): 8.
Blodgett, Mindy. "Telecommuting pilot test proves space-saving plan." Computerworld 30.46 (1996): 81-82.
Webster's New World Dictionary of American English, Third College Edition. Victoria Neufeldt, ed. New York, 1988. 1375.
f:\12000 essays\sciences (985)\Computer\X The Communications Decency Act48.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Communications Decency Act
The Communications Decency Act that was signed into law by President Clinton over a year ago is clearly in need of serious revision, not only because of its vagueness, but mostly because the government is infringing on our freedom of speech, be it indecent or not. The Communications Decency Act, also known to Internet users as the CDA, is an Act that aims to remove indecent or dangerous text, lewd images, and other things deemed inappropriate from public areas of the net. The CDA is mainly out to protect children.
In the beginning, the anonymity of the Internet caused it to become a haven for the free trading of pornography. This is mainly what gives the Internet a bad name. There is also information on the Net that could be harmful to children. Instructions on how to make home-made explosives, and similar material such as The Jolly Roger and The Anarchist Cookbook, are easily obtained on the Net. Pedophiles (adults sexually attracted to children) also have a place to hide on the Internet, where nobody has to know their real name. As the average age of the Internet user has started to drop, it has become apparent that something has to be done about the pornography and other inappropriate material on the net.
On February 1, 1995, Senator Exon, a Democrat from Nebraska, and Senator Gorton, a Republican from Washington, introduced the first bill towards regulating online porn. This was the first incarnation of the Telecommunications Reform Bill.
On April 7, 1995, Senator Leahy, a Democrat from Vermont, introduced bill S. 714, an alternative to the Exon/Gorton bill. This bill commissioned the Department of Justice to study the problem to see whether additional legislation (such as the CDA) was even necessary.
The Senate passed the CDA, attached to the Telecomm reform bill, on June 14, 1995 by a vote of 84-16. The Leahy bill did not pass, but it was supported by 16 Senators who actually understood what the Internet is. Seven days later, several prominent House members publicly announced their opposition to the CDA, including Newt Gingrich, Chris Cox, and Ron Wyden. On September 26, 1995, Senator Russ Feingold urged committee members to drop the CDA from the Telecommunications Reform Bill.
On Thursday, February 1, 1996, Congress passed (House 414-9, Senate 91-5) the Telecommunications Reform Bill, with the Communications Decency Act attached to it. This day became known as "Black Thursday" in the Internet community. One week later, the bill was signed into law by President Clinton on Thursday, February 8, 1996, also known as the "Day of Protest." Breaking any of the provisions of the bill is punishable by up to 2 years in prison and/or a $250,000 fine.
On the "Day of Protest," thousands of home pages went black as Internet citizens expressed their disapproval of the Communications Decency Act. Presently there are numerous organizations that have formed in protest of the Act. The groups include the American Civil Liberties Union, the Voters Telecommunications Watch, the Citizens Internet Empowerment Coalition, the Center for Democracy & Technology, the Electronic Privacy Information Center, the Internet Action Group, and the Electronic Frontier Foundation. The ACLU is not just involved with Internet issues; it fights to protect the rights of many different groups (e.g., gay and lesbian rights, death penalty issues, and women's rights). The ACLU is currently involved in the lawsuit Reno v. ACLU, in which it is trying to have the CDA struck down.
In addition to Internet users turning their homepage backgrounds black, there was the adoption of the Blue Ribbon, which was also used to symbolize disapproval of the CDA. The Blue Ribbons are similar to the Red Ribbons that AIDS supporters wear. The Blue Ribbon spawned the creation of "The Blue Ribbon Campaign." The Blue Ribbon's homepage is the fourth most linked-to site on the Internet; only Netscape, Yahoo, and Webcrawler are linked to more often. To be linked to means that a site can be reached from another site. It's pretty hard to surf around on the Net and not see a Blue Ribbon on someone's site.
On the day that President Clinton signed the CDA into law, a group of nineteen organizations, from the American Civil Liberties Union to the National Writers Union, filed suit in federal court, arguing that it restricted free speech. At the forefront of the battle against the CDA is Mike Godwin. Mike Godwin is regarded as one of the most important online-rights activists today. He is the staff counsel for the Electronic Frontier Foundation, and has "won fans and infuriated rivals with his media savvy, obsessive knowledge of the law, and knack for arguing opponents into exhaustion." Since 1990 he has written on legal issues for magazines like Wired and Internet World and spoken endlessly at universities, at public rallies, and to the national media. Although this all helped the cause, Godwin didn't become a genuine cyberspace superhero until what he calls the "great Internet sex panic of 1995." During this time, Godwin submitted testimony to the Senate Judiciary Committee, debated Christian Coalition executive director Ralph Reed on Nightline, and headed the attack on the study of online pornography.
The study of online porn became the foundation of Time magazine's controversial July 3 cover story, "On a Screen Near You: Cyberporn." Time said the study proved that pornography was "popular, pervasive, and surprisingly perverse" on the Net, but Godwin put up such a fight against the article that three weeks later the magazine ran a follow-up story admitting that the study had serious flaws.
The CDA is a bad solution, but it is a bad solution to a very real problem. As Gina Smith, a writer for Popular Science, has written, "It is absolutely true that the CDA is out of bounds in its scope and wording. As the act is phrased, for example, consenting adults cannot be sure their online conversations won't land them in jail." Even something as newsstand-friendly as the infamous Vanity Fair cover featuring a pregnant and nude (but strategically covered) Demi Moore might be considered indecent under the act, and George Carlin's famous "seven dirty words" are definitely out. CDA supporters are right when they say the Internet and online services are fertile playgrounds for pedophiles and other wackos bent on exploiting children.
Now, parents could just watch over their children's shoulders the whole time that they are online, but that is both an unfair and an impractical answer. There are two better answers: either a software program that blocks certain sites could be installed, or parents could discipline their kids so that they would know better than to look at pornography. The latter would appear to be the better alternative, but it just isn't practical: if kids are told not to do something, they are just going to be even more curious to check out porn. On the other hand, many parents are less technologically informed than their kids. Many would not know how to find, install, and configure programs such as CyberPatrol or NetNanny.
The future of the CDA seems to be fairly evident. It doesn't look like the CDA is going to be successful. In addition to the Act being too far reaching in its powers, it is virtually unenforceable. As with anything in print, much of the material on the Internet is intelligent and worthy of our attention, but on the other hand, some of it is very vulgar. The difficulty in separating the two rests in the fact that much of the Internet's value lies in its freedom from regulation. As Father Robert A. Sirico puts it, "To allow the federal government to censor means granting it the power to determine what information we can and cannot have access to."
Temptations to sin will always be with us and around us so long as we live in this world.
f:\12000 essays\sciences (985)\Enviromental\A Beginning and End.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Gloria Skains
English 101-10/ Perkins
Comparison/Contrast
A Beginning and End
Thesis: There is a reason for all seasons, two of which are spring with its new beginning and autumn with its incipient decline.
I. Spring
A. Daylight
B. Gardens
C. Insects
D. Color
E. Nests
F. Migratory birds
G. Coats of animals
H. Rain
II. Autumn
A. Daylight
B. Gardens
C. Insects
D. Color
E. Nests
F. Migratory birds
G. Coats of animals
H. Rain
Each change and occurrence that takes place during each season is important to the outcome of the next, because the seasons are all entwined. The activities which are common to each season have a profound effect on the cycle of plant and animal life. There is a reason for all seasons, two of which are spring with its new beginning and autumn with its incipient decline.
Mother Nature wakes after a long, restful sleep, stands, claps her hands, and calls spring to attention, because this is the time of reawakening, a sudden surprising emergence after a period of concealed existence. I rise early because the days are becoming longer now. As I walk and explore this new morning, I cannot help but notice the activity that surrounds me. Gardens are being tilled and planted, for the ground is warm and will soon break with sprouts of future bounty. Butterflies of every color are seen flittering and fluttering, while insects of all kinds are heard buzzing and humming. Butterflies and insects eagerly emerge from their winter homes, intent on the tasks which lie ahead. The landscape is a palette of every shade of green imaginable. Before my very eyes, a kaleidoscope of colors splashes the horizon as buds, leaves, and blossoms spring forth and pollen fills the air. The nests of fowl, squirrels, and other animals are busy with activity, for they are full of their young. Droves of geese and other migratory birds flying in formation are seen and heard returning to spring's warmer weather. Furry, feathery, and slimy creatures have begun shedding their heavier coats as they, too, prepare for the warmer temperatures. The intermittent spring showers encourage the abundant new growth of iris, narcissus, tulips, and daffodils.
Months later, as I gaze out my window I notice the transformation that is now taking place. What was once warm, bustling, and fresh has become cool, calm, and wilted.
There is a stillness in the air as autumn appears. It is as if Mother Nature sat, stretched her arms, and yawned, announcing that the time has come for a much needed rest. The nights are longer, which makes it harder to wake. As I stand at the window I cannot help but notice the calmness that surrounds me. The change is apparent as activity slows, because this is the time of full maturity. Ravished fields and gardens are left to wither away as the final harvesting comes to an end. The abundance of blooms has diminished, so pollen becomes less evident. The hives of wasps and bees are made ready for the winter rest. Evidence of cocoons (future moths and butterflies) is seen on tree trunks and eaves. Leaves cover cars, rooftops, and the ground like a blanket of snow as trees and plants shed them. The predominant mixtures of brown, gray, and evergreen are everywhere. Acorns and other seeds are shed, providing food for a multitude of animals. Mother Nature distributes the remaining seeds among her fields and her forests for the future growth of plants and trees. Nests are now empty of their young. The migratory fowl take flight in their V-shaped formations in search of a warmer climate. Furry and feathery creatures have now replaced their spring attire with fuller, heavier coats and feathers. Less rain falls, since less is needed. The excitement diminishes as the preparation for rest takes place.
Many events have taken place throughout the year, some obvious and some never acknowledged, but all are remarkable. The purposes of the changing seasons are many: a new beginning for every living thing to be fruitful and multiply, a time for nourishment of both the soil and its reapers, and a time for rest, because tomorrow is yet another beginning and end.
f:\12000 essays\sciences (985)\Enviromental\A Discussion On Earthquakes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Perhaps Mother Nature offers no greater force than that of the earthquake. Across the span of time, earthquakes have been recorded for their incredible destructive forces and their ability to awe mankind with their unparalleled power. Earthquakes can often strike without any notice, leveling large cities and killing scores of innocent people. Not only can earthquakes bring harm to society through these methods of destruction, but they can also cause millions of dollars worth of damage to the areas they strike, causing economic chaos. An earthquake is a natural phenomenon, occurring throughout the history of the world. Descriptions as old as recorded history show the significant effects earthquakes have had on people's lives. Long before there were scientific theories for the cause of earthquakes, people around the world created folklore to explain them. Until recent times, science has not had a complete understanding of how earthquakes are caused and what can be done to predict when they will strike. This essay will discuss how earthquakes are formed and occur, and how scientists can more accurately predict their arrival.
Before contemplating how earthquakes might possibly be predicted, it is essential that the process and formation of an earthquake be understood. Earthquakes are caused when the earth's crustal plates move, rub, or push against each other. The earth's crust (the outer layer of the earth) is made up of seven major plates and approximately thirteen smaller ones. The name plate is used to describe these portions of the earth's crust because they are literally "plates" or sections, composed of dirt and rock. These plates float on molten rock, called magma. Since the plates are floating on magma, they can slowly move. The place where friction occurs between plates is called a fault. A fault is a crack in a plate or a place where two or more plates meet. An example of a fault where two plates meet is the San Andreas Fault in California, where the Pacific and North American plates meet. The plates are about 30 miles thick under land and can be one to five miles thick beneath the ocean. The plates move because of convection currents. Magma has currents, like the ocean does, that move in a circular motion beneath the plates. When two plates are pushing against each other, they are constantly building up tension on the fault. When the two plates finally slip, they release a great amount of energy in the form of shock waves. These shock waves cause vibrations, which in turn cause the ground around the fault line to move and shake. This phenomenon is known as an earthquake.
Because of the incredible destructive capabilities of earthquakes, scientists are constantly trying to devise ways to ensure their early detection. Earth scientists have begun to forecast damaging earthquakes in California. Although quake forecasting is still maturing, it is now reliable enough to make official earthquake warnings possible. These warnings help government, industry, and private citizens prepare for large earthquakes and conduct rescue and recovery efforts in the aftermath of destructive shocks. In recent years, earthquake forecasting has advanced from a research frontier to an emerging science. This science is now being applied in quake-plagued California, where shocks are closely monitored and have been studied for many years. Earthquake forecasts declare that a temblor has a certain probability of occurring within a given time, not that one will definitely strike. In this way they are similar to weather forecasts. Scientists are able to make earthquake forecasts because quakes tend to occur in clusters that strike the same area within a limited time period. The largest quake in a cluster is called the mainshock, those before it are called foreshocks, and those after it are called aftershocks.
In any cluster, most quakes are aftershocks. Most aftershocks are too small to cause damage, but following a large mainshock one or more may be powerful. Such strong aftershocks can cause additional damage and casualties in areas already devastated by a mainshock, and also threaten the lives of rescuers searching for the injured. In the first few weeks after the 1994 magnitude 6.7 Northridge, California, earthquake, more than 3,000 aftershocks occurred. One magnitude 5.2 aftershock caused $7 million in damage just in electric utility equipment in the Los Angeles area alone. The U. S. Geological Survey (USGS) first began forecasting aftershocks following the 1989 magnitude 7.1 Loma Prieta, California, earthquake. By studying previous earthquakes, scientists had detected patterns in the way aftershocks decrease in number and magnitude with time. With such knowledge, scientists can estimate the daily odds for the occurrence of damaging aftershocks following large California temblors. These forecasts are relayed directly to the California Office of Emergency Services (OES) as well as to the public.
Some of the larger earthquakes are preceded by foreshocks. Knowledge of past earthquake patterns allows scientists to estimate the odds that an earthquake striking today is a foreshock and will soon be followed by a larger mainshock in the same area. These odds depend on the earthquake's magnitude and the seismic history of the fault on which it occurred. When a moderate earthquake hits California, scientists immediately estimate the probability that a damaging mainshock will follow. If the threat is significant, a warning is issued. This warning process was put into action in June, 1988 when a magnitude 5.1 shock--one of the largest in the San Francisco Bay region since the great 1906 earthquake--struck 60 miles south of San Francisco. Alerted by the USGS that there was a 1 in 20 chance of a larger earthquake in the next five days, the California OES issued an advisory to warn the public. (The usual daily odds of a large earthquake in the Bay region are 1 in 15,000.) The warning period passed without further activity. In August, 1989, another earthquake hit the same area and a similar advisory was issued. Again nothing happened in the specified warning period. However, 69 days later, the area was rocked by the magnitude 7.1 Loma Prieta earthquake, which killed 63 people and caused $6 billion of damage in the San Francisco Bay region.
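To see why a "1 in 20 chance over five days" justified an advisory, it helps to put both quoted figures on a per-day footing. The short sketch below (Python) does this under the simplifying assumption that the five-day probability is spread evenly over independent days, which is only a rough approximation.

    # Compare the elevated odds from the 1988 advisory with the normal
    # background odds, both expressed per day. Assumes independent days.
    five_day_prob = 1 / 20          # "1 in 20 chance ... in the next five days"
    background_daily = 1 / 15_000   # "usual daily odds ... 1 in 15,000"

    daily_during_alert = 1 - (1 - five_day_prob) ** (1 / 5)

    print(f"daily probability during alert: {daily_during_alert:.5f}")  # ~0.01020
    print(f"normal daily probability:       {background_daily:.5f}")    # ~0.00007
    print(f"elevation factor: about {daily_during_alert / background_daily:.0f}x")

Even a quake that remains very unlikely on any given day can thus be roughly 150 times more likely than usual, which is the kind of shift that triggers an official advisory.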
The lessons learned from these observations have already enabled earth scientists and emergency response officials to build a framework within which they can communicate rapidly and effectively. Based on this experience, similar alert plans have been devised for geologic hazards in other areas of the United States. With the development of modern seismic monitoring networks and the knowledge gained from past shocks, earthquake forecasts and warnings are now a reality. Continued effective communication of these forecasts to the public will help reduce the loss of life and property in future earthquakes.
In conclusion, earthquakes are a powerful force of nature. Although these destructive giants are indeed deadly, scientists are continually utilizing research data collected from previous earthquakes and observations so that a more effective and efficient warning system may be put in place. Because of these scientists' work, society benefits from advance knowledge of when an earthquake will most probably strike. With the continued study of collected data, perhaps one day there will be a warning system able to give enough advance notice that casualties can be minimized even further.
f:\12000 essays\sciences (985)\Enviromental\A Planet for the Taking.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the essay "A Planet for the Taking," David Suzuki describes Canadians' odd appreciation for this great natural bounty we call our own. He is an internationally acclaimed scientist who is concerned about the welfare of Canada. Suzuki's intended audience is the Canadian population that does not realize the grave danger they are instilling upon themselves by haphazardly taking our resources without looking at the subsequent repercussions of their actions. The essay is persuasive and informative. He compares various facets of science and gives reasons why none of these fields can explain why we are destroying nature.
The organization of the essay supports the author's views well. It begins with general opinions about the Canadian population and is followed by more detailed explanations. The general opinions in the beginning are well-chosen considering the audience. Suzuki's tone is evident when he states "We have both a sense of the importance of the wilderness and space in our culture and an attitude that it is limitless and therefore we needn't worry." These words suggest that we are willing to reap the rewards of our vast resources but we fail to see the harm that we are doing, and will continue to do if we do not stop these actions.
Although his approach for explaining his beliefs changes, Suzuki's tone of great concern remains consistent throughout the essay. After his views are presented, Suzuki begins to tell us what we have done to our country and how we are destroying it. Present day Canadians are compared to native Canadians which successfully serves its purpose in illustrating how, for centuries, people lived off the natural resources in Canada. With the development of science and technology, we have developed better ways of mass harvesting resources but these methods are taking at a faster rate than nature can sustain. Science suggests means of replacing these resources we are taking but there is no quick replacement for ecosystems that have taken thousands of years to evolve.
Following his explanations of how we have destroyed nature, Suzuki discusses science and how society deals with it, "I believe that in large part our problems rest on our faith in the power of science and technology." This statement and the following sentences are used to describe how people deal with great developments in science and technology. Because there have been so many great advances in these fields in the past century, people are comfortable placing their faith in science though scientists are still far from discovering all of the secrets to the universe. Scientists interfere with nature without having a complete understanding, subsequently harming it. All sciences attempt to explain nature but are unable to do so. Therefore, following the discoveries of science may be more harmful than helpful. This idea about science is one of Suzuki's main goals in writing this essay. He wants to create an awareness that just because scientists have had many great successes, they cannot determine how to deal with everything else on the planet.
Suzuki creates a good relationship with the reader from the start. He makes general statements about Canadians which most of the audience either believes or can relate to. The writing is persuasive but the arguments are presented in a non- offensive manner which creates a good rapport with the reader. When Suzuki explains the scientific parts of his argument, he does so in a simplistic way which puts the reader at ease but serves its purpose in provoking thought.
The author is quite serious and certain about his topic. These feelings are evident through his powerful writing and diction. "We need a very profound perceptual shift and soon." This is Suzuki's closing sentence for the essay. His suggestion for a change in people's perceptions is clear and direct, leaving no room for misinterpretation; he does this consistently throughout the essay. Discussing the topic with such seriousness makes it an effective, persuasive essay.
The essay does not contain much powerful, descriptive imagery but Suzuki's powerful examples serve the same purpose. Supplying the reader with examples to support his arguments is a valuable means of persuading the reader. By giving examples, the audience can relate to the topic and see what they have done to nature. Examples of the various types of sciences also help the audience to relate. Suzuki provides the reader with examples of the shortcomings of all the fields of science, helping to make the reader second-guess science. Some powerful images he does use, however, are present when he describes the terminology that society uses for plants and animals, "We speak of 'herds' of seals, of 'culling' 'harvesting,' 'stocks.'" These images support the theme of the essay because they show the way that humankind has taken over nature and how we feel as if we can control everything. It makes it seem as if we do not care about the environment; we are merely concerned with making more and more money. Imagery, when used successfully, can support the aims of the essay and create more persuasion for the reader.
By writing this persuasive essay, David Suzuki wanted to convince his audience that we are destroying our planet for our own greed. It is no longer a matter of subsistence, humans are raping the land and if they do not learn to control this, it will lead to the downfall of humankind. Canadians act as if they are proud of their large, abundant country but then turn around and destroy it for their own wealth. This essay is persuasive, yet eloquent. It satisfies the author's aims in an informative and interesting manner.
f:\12000 essays\sciences (985)\Enviromental\acid rain .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTRODUCTION: Acid rain is a great problem in our world. It causes fish
and plants to die in our waters. It also causes harm to our own race,
because we eat these fish, drink this water, and eat these plants. It
is a problem that we must all face together and try to get rid of.
However, acid rain on its own is not the biggest problem; it causes
many other problems, such as aluminum poisoning. Acid rain is deadly.
WHAT IS ACID RAIN?
Acid rain is all the rain, snow, mist, etc. that falls from the sky
onto our planet and contains an unnatural amount of acid. It is not to
be confused with uncontaminated rain, which is naturally slightly
acidic. Acid rain is caused by today's industry. When products are
manufactured, many chemicals are used to create them. However, because
of the difficulty and cost of disposing of these chemicals properly,
they are often emitted into the atmosphere with little or no treatment.
The term first came to be considered important about 20 years ago,
when scientists in Sweden and Norway first believed that acidic rain
might be causing great ecological damage to the planet. The problem was
that by the time the scientists found the problem, it was already very
large. Detecting an acid lake is often quite difficult. A lake does not
become acidic overnight; it happens over a period of many years,
sometimes decades. The changes are usually too gradual to be noticed
early.
At the beginning of the 20th century most rivers and lakes, like the river
Tovdal in Norway, had not yet begun to die. By 1926, however, local
inspectors were noticing that many of the lakes were beginning to show
signs of death. Fish were found dead along the banks of many rivers. As the
winter's ice began to melt, hundreds upon hundreds more dead fish (trout in
particular) were being found. It was at this time that scientists began to
search for the reason. As the scientists continued to work they found many
piles of dead fish, up to 5000 in one pile, further up the river. Divers
were sent in to examine the bottom of the rivers. What they found were many
more dead fish. Many live and dead specimens were taken back to labs across
Norway. When the live specimens were examined, they were found to have very
little sodium in their blood. This is a typical symptom of acid poisoning.
The acid had entered the gills of the fish and poisoned them so that they
were unable to extract salt from the water to maintain their bodies' sodium
levels.
Many scientists said that this acid poisoning was due to the fact that it
was just after the winter, and that all the snow and ice was running down
into the streams and lakes. They believed that the snow had been exposed to
many natural phenomena that gave it its high acid content. Other scientists
were not sure that this theory was correct, because when the meltwater was
added to the lakes and streams the pH levels would change from around 5.2
to 4.6. They believed that such a large drop could not be attributed to
natural causes. They believed that it was due to air pollution. They were
right. Since the beginning of the Industrial Revolution in England,
pollution had been affecting all the trees, soil and rivers in Europe and
North America.
Until recently, however, the losses of fish were confined to the southern
parts of Europe. Because of the constant onslaught of acid rain, lakes and
rivers began to lose their ability to counteract its effects. Much of the
alkaline material in the soil, such as calcium and limestone, had been
washed away. It is these lakes that we must worry about, for they will soon
be dead.
A fact that may please fishermen is that in acidified lakes and rivers they
tend to catch older and larger fish. This may please them in the short run,
but they will soon have to change lakes, for the fish supply in these lakes
will die quickly. The problem is that acid causes difficulties in the
fish's reproductive system. Fish born in acid lakes often do not survive,
for they are born with birth defects such as twisted and deformed spinal
columns. This is a sign that they are unable to extract enough calcium from
the water to fully develop their bones. These young soon die. With no
competition, the older, stronger fish can grow easily. However, their food
is contaminated by the acid in the water as well. Soon they do not have
enough food for themselves and turn to cannibalism. With only an older
population left, there is no one to regenerate the stock. Soon the lake
dies.
By the late 1970s many Norwegian scientists began to suspect that it was
not only the acid in the water that was causing the deaths. They had proved
that most fish could survive in a stream with up to a 1-unit difference in
pH. After many experiments and much research they found that their missing
link was aluminum.
Aluminum is one of the most common metals on earth. It is stored in a
combined form with other elements in the earth. While it is combined it
cannot dissolve into the water and harm the fish and plants. However, the
acid from acid rain can easily break the bond between these elements. The
aluminum is then converted by the acid into a more soluble form. Other
metals such as copper (Cu) and iron (Fe) can have similar effects on the
fish, but it is aluminum that is the most common. For example:
CuO + H2SO4 ----------> CuSO4 + H2O
In this form it is easily absorbed into the water. When it comes into
contact with fish it causes irritation to the gills. In response the fish
creates a film of mucus in the gills to stop this irritation until the
irritant is gone. However, the aluminum does not go away, and the fish
continues to build up more and more mucus to counteract it. Eventually
there is so much mucus that it clogs the gills. When this happens the fish
can no longer breathe. It dies and then sinks to the bottom of the lake.
Scientists now see acid, aluminum and shortages of calcium as the three
determining factors in the extinction of fish.
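As an illustration of how acid can mobilize aluminum from soil minerals, a
simplified reaction in the same style as the copper example above (an
assumed textbook-style equation, not taken from this essay) is:
2Al(OH)3 + 3H2SO4 ---> Al2(SO4)3 + 6H2O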
There is also the problem of chlorine. In many parts of the world it is
commonly found in the soil. If it enters the fish's environment it can be
deadly. It affects many of the fish's organs and causes it to die. It also
interferes with the photosynthesis process in plants.
NaOH + HCl ----> NaCl + H2O
The carbonate in the water can become very dangerous for fish and plants
in the water if the following reaction happens:
CaCO3 + 2HCl ---> CaCl2 + H2CO3 then
H2CO3 ---> H2O + CO2
The salt created by this reaction can kill. It interferes directly with
the fish's nervous system.
Acid lakes are deceptively beautiful. They are crystal clear and have a
lush carpet of green algae on the bottom. The reason that these lakes are
so clear is that many of the decomposers are dead. They cannot break down
material such as leaves and dead animals. These materials eventually sink
to the bottom instead of going through the natural process of
decomposition. In acid lakes decomposition is very slow. "The whole
metabolism of the lake is slowed down."
During this same period the Canadian Department of Fisheries spent eight
years dumping sulfuric acid (H2SO4) into an Ontario lake to see the effects
of the decrease in pH over a number of years. At a pH of 5.9 the first
organisms began to disappear. They were shrimps. They started out at a
population of about seven million, but at a pH of 5.9 they were totally
wiped out. Within a year the minnow died out because it could no longer
reproduce itself.
At this time the pH was 5.8. New trout were failing to be produced because
many of the smaller organisms that served as food for them had been wiped
out earlier. Without enough food, the older fish did not have the energy to
reproduce. Upon reaching a pH of 5.1 it was noted that the trout became
cannibals. It is believed this is because the minnow was nearly extinct.
At a pH of 5.6 the external skeletons of crayfish softened, they were soon
infected with parasites, and their eggs were destroyed by fungi. When the
pH went down to 5.1 they were almost gone. By the end of the experiment
none of the major species had survived the trials of the acid. The next
experiment conducted by the scientists was to try to bring the lake back to
life. They cut in half the amount of acid that they dumped to simulate a
large-scale cleanup. Soon the suckers and minnows began to reproduce again.
The lake eventually did come back to life, to a certain extent.
THE NEW THEORY:
A scientist in Norway had a problem believing that it was the acid rain on
its own that was affecting the lakes in such a deadly way. This scientist
was Dr Rosenqvist.
"Why is it that during heavy rain, the swollen rivers can be up to fifteen
times more acid than the rain? It cannot be the rain alone that is doing
it, can it?" Many scientists shunned him for this, but they could not come
up with a better answer. Soon the scientists were forced to accept this
theory.
Sulfuric acid is composed of two parts, known as ions. The hydrogen ion is
what makes a substance acidic. The other ion is sulphate. When there are
more hydrogen ions, a substance is more acidic. It is this sulphate ion
that we are interested in. When the rain causes rivers to overflow their
banks, the river water passes through the soil. Since the Industrial
Revolution in Britain there has been an increasing amount of sulphur in the
soil. In the river there is not enough sulphur for the acid to react in
great quantities. However, in the soil there is a great store of sulphur to
aid the reaction. When it joins the water, the pH becomes much lower. This
is the most deadly effect of acid rain on our water! The water itself does
not contain enough sulphur to kill off its population of fish and plants.
But with the sulphur in the soil it does.
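For reference, the dissociation described here can be written as a
simplified equation (standard chemistry rather than something stated in the
essay): H2SO4 ---> 2H+ + SO4^2-. The two hydrogen ions are what lower the
pH.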
CONCLUSION:
Acid rain is a big problem. It causes the death of our lakes, our rivers,
our wildlife and, most importantly, us. It also causes other very serious
problems, such as the release of aluminium and lead into our water
supplies. We are suffering because of it. In Scotland many birth defects
are being attributed to it. We must cut down the releases of the chemicals
that cause it. But it will take time; even if we were to stop today we
would have the problem for years to come because of the build-up in the
soil. Let's hope we can do something.
BIBLIOGRAPHY
Pearce, Fred. Acid Rain: What Is It and What Is It Doing to Us? Penguin
Publishing House, 1987.
f:\12000 essays\sciences (985)\Enviromental\Acid Rain 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ACID RAIN. When fossil fuels such as coal, gasoline, and fuel oils are
burned, they emit oxides of sulfur, carbon, and nitrogen into the air.
These oxides combine with moisture in the air to form sulfuric acid,
carbonic acid, and nitric acid. When it rains or snows, these acids are
brought to Earth in what is called acid rain.
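In simplified terms (illustrative textbook reactions, added here for
clarity rather than taken from the article), these acids form roughly as
follows:
SO2 + H2O ---> H2SO3 (which further oxidation converts to H2SO4)
CO2 + H2O ---> H2CO3
4NO2 + O2 + 2H2O ---> 4HNO3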
During the course of the 20th century, the acidity of the air and acid
rain have come to be recognized as a leading threat to the stability and
quality of the Earth's environment. Most of this acidity is produced in the
industrialized nations of the Northern Hemisphere--the United States, Canada,
Japan, and most of the countries of Eastern and Western Europe.
The effects of acid rain can be devastating to many forms of life,
including human life. Its effects can be most vividly seen, however, in
lakes, rivers, and streams and on vegetation. Sufficient acidity in water
kills virtually all forms of aquatic life. By the early 1990s tens of
thousands of lakes had
been destroyed by acid rain. The problem has been most severe in Norway,
Sweden, and Canada.
The threat posed by acid rain is not limited by geographic boundaries, for
prevailing winds carry the pollutants around the globe. For example, much
research supports the conclusion that pollution from coal-powered electric
generating stations in the midwestern United States is the ultimate cause of
the severe acid-rain problem in eastern Canada and the northeastern United
States. Nor are the destructive effects of acid rain limited to the natural
environment. Structures made of stone, metal, and cement have also been
damaged or destroyed. Some of the world's great monuments, including the
cathedrals of Europe and the Colosseum in Rome, have shown signs of
deterioration caused by acid rain.
Scientists use what is called the pH scale to measure the acidity or
alkalinity of liquid solutions. On a scale from 0 to 14, the number 0
represents the highest level of acidity and 14 the most basic or alkaline.
A solution of distilled water containing neither acids nor alkalis, or
bases, is designated 7, or neutral. If the pH level of rain falls below
5.5, the rain is considered acidic. Rainfall in the eastern United States
and in Western Europe often ranges from 4.5 to 4.0.
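Expressed as a formula (standard chemistry, added here for clarity rather
than taken from the article), pH = -log10[H+], where [H+] is the hydrogen
ion concentration in moles per litre; each step of one pH unit therefore
represents a tenfold change in acidity.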
Although the cost of such antipollution equipment as burners, filters, and
chemical and washing devices is great, the cost in damage to the environment
and human life is estimated to be much greater because the damage may be
irreversible. Although preventative measures are being taken, up to 500,000
lakes in North America and more than 4 billion cubic feet (118 million cubic
meters) of timber in Europe may be destroyed before the end of the 20th
century.
Sebastian Kovacs Copyright@1997
f:\12000 essays\sciences (985)\Enviromental\Acid Rain 3.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acid rain is polluted rain. The pollutants go up to the atmosphere, and when it
rains they come back down with it. Sulfur dioxide and nitrogen oxide are the
gases that form the acid rain. When these gases mix with moisture they can make
acidic rain, snow, hail, or even fog. The scientific term for acid rain is acid
deposition, which means the acid is taken from the air and deposited on the
earth. Major industries, coal-burning factories, power plants and automobile
engines are the main sources of the sulfur dioxide and nitrogen oxide which
cause acid rain. Volcanoes and forest fires also produce sulfur dioxide and
nitrogen oxide. Some of the many problems that come from acid rain are the
killing of many plants and of underwater life in thousands of lakes and streams
around the world. It strips forest soils of nutrients and damages farm crops.
Acid rain can also corrode stone buildings, bridges, and priceless monuments.
Acid rain can also be harmful to humans, because it kills the crops and fish we
eat, ruins homes, and can release the lead in pipes into our drinking water.
It is hard to determine where acid rain may fall next, because the wind from a
polluted area can carry pollution to another area, and the acid rain falls
there. The regions most affected by acid rain are large parts of eastern North
America, Scandinavia, and central Europe. In a lot of places acid rain isn't a
problem, because some soils can neutralize the acid and it doesn't affect the
crops. Areas more sensitive to acid rain are, in the western United States,
most of Washington, all of Oregon, sections of California and most of Idaho;
and Maine, New Hampshire, Vermont and a large section of northeastern Canada.
The soil in these places cannot neutralize acid rain deposits, so the nutrients
are stripped, which means the crops in those places may not survive. The Black
Forest is a
mountainous region in Baden-Wurttemberg, in southwestern Germany. The valleys
are fertile and make good pasture land as well as providing good soil for
vineyards. This forest region is now showing serious effects of acid rain. Many
trees are dying; the forest has lost masses of needles, leaving the trees with
sparse, scruffy crowns. The region's major industries are lumbering and the
manufacturing of toys and cuckoo clocks. Winter sports and mineral springs
attract tourists.
Acid rain can damage and ruin soils by stripping the soil's nutrients. But
some soils can neutralize and weaken the acid deposits that fall from the sky.
These soils are called alkaline soils, also called bases. In 1838 the German
chemist Justus von Liebig offered the first really useful definition of an
acid, namely, a compound containing hydrogen that can react with a metal to
produce hydrogen gas.
Soil is formed when rocks are broken up by the weather and erosion and mixed
with organic matter from plants and animals. The term soil generally refers to
the loose surface material of the Earth, formed from solid rock. To the farmer,
soil is the natural medium for growth of all land plants. The rocks that make
up soil can be acid, neutral, or alkaline (another name for a base). Limestone
and chalk are rocks formed from tiny shells that are rich in calcium, and it is
this calcium that makes them alkaline. When acid rain falls on alkaline soil,
the calcium makes the acid become weaker, or neutralizes it. Farmers put lime
(a very strong alkaline substance) and special fertilizers in their soil to
neutralize the acid in the soil on a regular basis.
In general, soil structure is classified as sandy, clay, or loam, although
most garden soils are mixtures of the three in varying proportions. A sandy soil is
very loose and will not hold water. A clay soil is dense and heavy, sticky
when wet, and almost brick-hard when dry. Loam is a mixture of sand and clay
soils, but it also contains large quantities of humus, or decayed organic
material, which loosens and aerates clay soil and binds sandy soil particles
together. In addition, humus supplies plant nutrients. Soil structure can be
improved by digging in compost, manure, peat moss, and other organic matter.
Parts of the western United States, Minneapolis, northeastern North America
and eastern and northern Canada are the places in North America where the soil
is more sensitive to acid deposits than anywhere else. Many factors, including
the soil chemistry and the type of rock, determine the environment's ability
to neutralize the acid deposits from the rain.
Soils naturally contain small amounts of poisonous minerals such as mercury,
aluminum, and cadmium. Normally, these minerals do not cause serious problems,
but as the acidity of the soil increases, chemical reactions allow the minerals
to be absorbed by the plants. The plants are damaged, and any animals that eat
the plants will absorb the poisons, which remain in the animal's body and can
hurt or even kill it. The harmful minerals can also leach out of the soil into
streams and lakes, where they can kill fish and other living creatures. The
problem gets even bigger and bigger when pollution dumps
more minerals in the soil. For example, in some parts of Poland vegetable crops
have been found to contain ten times more lead than is considered safe.
Some plants need and require acid soil, and those farmers do not want lime to
be put in their soil. If acid-requiring plants, such as some types of shrubs,
are put in alkaline soil, those plants are very likely to start looking yellow
and sickly very soon. Even if the water you give to the plants comes from
limestone strata, it could
neutralize the soil. Continued use of some types of fertilizer may also cause
the loss of acidity. If the soil does not have enough acid in it, it may be
made more acidic by the application of alum or sulfur, or by adding gypsum to
the soil. To add more acid you can also lift the plants and replace the whole
bed to a depth of nine inches with acid soil. It is not easy to make neutral
soils acid.
Sulfur is most commonly used to increase the soil's acidity, but it acts very
slowly. So acid rain is good for some plants in some places with alkaline
soil, because some of the plants want acid. Some acid-requiring plants are
several popular shrubs, including azaleas, camellias, gardenias, blueberries,
and rhododendrons. Soils can be acid, alkaline, or neutral. The amounts of alkaline
and acid in the soil influence the biological and chemical processes that take
place in the soil. Highly alkaline or acid soils can harm many plants. Neutral soils
can support most of the processes.
Florida's sandy soils are naturally acidic, but the soil is easily changed
from acid to neutral or even to a base (a base is alkaline soil) by the small
amounts of lime and calcium that come from the tiny shells often found on
Florida's sandy beaches.
When acid rain falls from the sky it gets into the soil. The plants only have
time to absorb and store the water when the soil is wet. Then the leftover water
in the soil evaporates back into the sky where it becomes water vapor, forms into
clouds, and gets ready to rain again. It is the same thing with acid rain. The acid
doesn't stay in the soil. The acid evaporates back into the sky.
Pedologists are scientists who study the soil. They classify the soils
according to the characteristics of a polypedon. There are ten groups of soils,
they are Alfisols, Aridisols, Entisols, Histosols, Inceptisols, Mollisols, Oxisols,
Spodosols, Ultisols, and Vertisols. Alfisols develop under forests and grasslands
in humid climates. Aridisols occur in dry regions and contain small amounts of
organic matter. Entisols show little development. Histosols are organic soils.
They form in water-saturated environments, including swamps and bogs.
Inceptisols are only slightly developed. Mollisols develop in prairie regions.
They have thick, organically rich topsoils. Oxisols are the most chemically
weathered soils. They have a reddish color and occur in the tropical parts of
the world. Spodosols contain iron, aluminum, and organic matter in their B
horizons. They form in humid climates. They are moist, well-developed, acid
soils. Vertisols form in
subhumid and arid warm climates. They make wide, deep cracks during the dry
season.
Other soil groups are the tundra, podzol, and chernozem soils. Tundra soils
have dark brown surfaces and darker subsoils, and occur in arctic regions that
are underlain by permafrost. These soils can be farmed if they are well
drained and the permafrost is absent or deep-lying. Podzol soils are
moderately to strongly leached soils found in forests in humid regions. They
are not naturally very productive for agriculture. Chernozem soils (from the
Russian for "black earth") have a dark surface layer underlain by more lightly
colored soil. They typically develop under grasses in cool, temperate,
subhumid climates. They are highly productive, although they require
fertilizers after long use.
f:\12000 essays\sciences (985)\Enviromental\acid rain 5.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As the century passed, industrial society kept advancing. However, the many advantages that industrial society brings us also have a down side. One of the adverse effects of industrialization is acid deposition due to power plant, fossil fuel and automobile emissions. Acid rain is the popular term, but scientists prefer the term acid deposition. Acid rain can have adverse effects on the environment by damaging forests or by lowering the pH of lakes, making the water too acidic for many aquatic plants and animals to live.
The father of acid rain research is an Englishman named Robert Angus Smith, who suggested in 1852 that sulfuric acid in Manchester, England, was causing metal to rust and dyed goods to fade. One source of acid rain is fossil fuel. Fossil fuels have many uses in our society, such as powering electric power plants, industrial boilers, smelters, businesses, schools, homes and vehicles of all sorts. These various energy sources contribute 23.1 million tons of sulfur dioxide and 20.5 million tons of nitrogen oxides to our atmosphere worldwide. When fossil fuels such as oil and coal are burned, they release carbon dioxide, a so-called greenhouse gas that traps heat within the earth's atmosphere and contributes to the global warming that is taking place right now. They also release sulfur dioxide, nitrogen oxide and various metals (mercury, aluminum) into the atmosphere, where they react with other airborne chemicals such as water vapor, in the presence of sunlight, to produce sulfuric and nitric acid. These acids can be carried long distances from their source and be deposited as rain (acid rain), but the acid doesn't just come down as rain; it also comes in the form of snow, hail, fog, and mist.
Forests are complex ecosystems involving trees, soil, water, air, climate and other living organisms; they support a community of wildlife of animals, birds, insects and plants and are also a major economic resource. The countries hardest hit by acid rain are in Europe, and central Europe faces a particularly great threat since it has a large amount of forest area; about 8% of Germany's forests face the lethal effect of Waldsterben, or forest death, from acid rain. About 50 million hectares of forest have been damaged in Europe, and Central and Eastern Europe receive thousands of tons of pollution each year, so that 14,000 lakes are unable to support sensitive aquatic life. Acid rain does not kill trees outright but weakens them to the point where they become susceptible to extremes of heat or cold, attacks from blight-causing organisms or from insects such as the gypsy moth, and other environmental stresses. The problem of acid rain is caused by the burning of fossil fuel that emits SO2, and by industrial factories in North America that emit pollution that travels to Europe. Acid rain is now becoming a growing problem in Third World countries such as China and India, where rapidly expanding populations mean energy demands are increasing. The rate of fossil fuel consumption has greatly increased, and where pollution controls are all but non-existent this has greatly added to their problems with acid rain. Still, most emissions are primarily located in eastern North America, Europe, and China. That is why acid rain is so threatening: it is concentrated, and it has a devastating effect on soil, from which most trees get their nutrients; on lakes, ponds, streams, and other waterways, which receive runoff from the soils uphill; and ultimately on humans.
When acid rain gets into lakes and rivers it destroys living things and causes declines in populations of fish, aquatic plant life and micro-organisms. Fish go extinct because they simply fail to reproduce and become less and less abundant, and older and older, until they die out. Changes in the biology of lakes provided one of the first clues to the problem of acid pollution: the link between the acidity of lakes and fish production. The Ontario Ministry of the Environment reported that 140 acidified lakes in the province have no fish at all, and a further 48,000 lakes would not be able to tolerate extended acid inputs. There are three stages in the acidification of surface water. In the first stage, bicarbonate ions neutralize acids by reacting with hydrogen to produce carbon dioxide and water. If the bicarbonate content is maintained at a critical minimum level, the pH value of the water will remain stable, and plants, animals and micro-organisms will be unaffected. The second stage is when the bicarbonate content drops below the critical level, and large influxes of hydrogen ions can no longer be neutralized. The pH value begins to go down faster than before, and this disrupts micro-organisms and young fish. The third stage comes when the pH value stabilizes around 4.5. Almost all the original life is eliminated; for example, snails, many insects and fish disappear. The water becomes abnormally clear, although in some lakes the acids can be neutralized with limestone.
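The buffering reaction described in the first stage can be written as a simplified equation (standard chemistry, added for illustration rather than quoted from the essay): HCO3- + H+ ---> H2O + CO2.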
Destruction of forests and lakes is not the only effect of acid rain; buildings and monuments are also corroded by it. Almost every building in a major urban or industrial centre may be at risk from the corrosive effects of acid deposition, and the rate of corrosion has increased dramatically in many urban areas. The Athenian monuments, for example, have deteriorated badly in the past 20-25 years as a result of pollution.
Most of the corrosion of city buildings and monuments is the result of the dry deposition of SO2 and sulphate particles, mainly caused by urban concentrations of pollutants. When the sulphur pollutants fall on the surface of limestone or sandstone, they can react with the calcium carbonate in the stone to form calcium sulphate (gypsum), which sticks to the stone. This causes flaking that can be washed away by rainwater, exposing more stone to corrosion; gypsum and soot particles also form black crusts on building surfaces. Acid attack can also lead to stone decay through the creation of salts, which can crystallize and force apart mineral grains, causing the stone to disintegrate; the salts can also expand and contract, thereby weathering the surface. This is mainly caused by the release of sulfur and nitrogen oxides by cars and industry.
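The gypsum-forming reaction on limestone can be sketched as a simplified equation (a standard textbook reaction, not taken from this essay): CaCO3 + H2SO4 ---> CaSO4 + H2O + CO2; the calcium sulphate then takes up water to form gypsum (CaSO4.2H2O).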
As our society gets more advanced, pollution also rises. To stop its effects we need to find alternative ways to cut car emissions with new fuels, and slowly phase out the fossil fuels that cause most of the nitrogen oxides. Governments need to put stricter laws on the industries that pollute and remove harmful chemicals from the earth. People should find efficient, alternative ways to stop the pollution that is happening around the world, because its effects can be seen all around us, starting with the forests and lakes and reaching into the heart of our cities, where buildings and monuments are being eaten away by pollution.
f:\12000 essays\sciences (985)\Enviromental\Acid Rain Legislation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acid Rain Legislation
Acid rain is a destructive force as a result of nature and man colliding. It is formed through harmful industrial emissions combining with contents of the earth's atmosphere; a dangerous combination. This prompted governments throughout North America to take action. Many laws and regulations have been implemented, yet the question still remains, "Should tougher legislation be implemented to force industries to reduce acid rain emissions?"
To decide whether tougher legislation should be implemented, one must first understand the details of what exactly acid rain is. Acid rain is a result of mankind's carelessness. It travels along one of the most efficient biogeochemical cycles on earth, the hydrologic cycle. This allows acid rain to distribute itself far from its source, causing more than local problems. Sulfur dioxide (SO2) is released by fossil fuels when they undergo combustion. Power plants and other fossil-fuel-burning industrial areas release various forms of nitrogen oxides (NOx). These two chemical compounds combine with the water in the atmosphere to form what is known as acid rain.
The main reason that has prompted governments to legislate industrial emissions is the negative effects they can have on the environment. Acid rain is harmful to the environment because of its low pH. It can harm the biotic components of earth, and also the abiotic components. Its high acidity degrades soil to the point where it cannot support any type of plant life. Trees in forests are killed by long-term exposure. When these trees are killed, an imbalance in the hydrologic cycle can occur. Without living trees to consume the precipitation, it must be consumed by the earth or by other plants. These will receive an excess of water, causing other problems in the hydrologic cycle. This in turn causes a chain reaction of death among our forests. Some regions are more susceptible to acid rain because they do not have enough alkaline soil to "neutralize" the acid before it is able to destroy the rest of the soil or before it can run off into lakes or rivers. Aquatic environments can be greatly affected by soil runoff. Acidic soil may run off into lakes and rivers due to erosion, causing acid rain to destroy more of the environment. Acid rain harms aquatic animals as well as aquatic plant life. When acid rain combines with water in major bodies of water, it not only destroys wildlife habitat, it destroys our drinking water. An aquatic ecosystem is very dependent on each and every aspect within itself. Once one species dies off, others that depend on it will eventually begin to die off also. This systematic chain continues until the entire body of water is completely abiotic. The reason acid rain is so effective in destroying ecosystems is that it harms everything in that particular ecosystem. Being distributed through the hydrologic cycle, acid rain is capable of destroying everything in its path.
Many laws and agreements have been implemented by governments in North America to reduce acid rain emissions. The question governments are asking is: "Are these regulations enough?" One of the more famous laws implemented by North American governments was the "Clean Air Act," which was signed in 1991. Also in 1991, Canada signed an agreement with the United States concerning air quality. Media reports explain that the agreement has enough framework to address all transboundary air pollution issues. It is a very broad, general agreement that should greatly reduce air pollution between these two major countries of the world. This agreement contains other specific commitments for emissions reductions relating to acid rain precursors and research, as well as a commitment to review the Agreement in its fifth year. This allows for expansion of the agreement in the near future. The research and studies required by this agreement are also an intelligent decision by these countries; education is the basis of all knowledge. Besides agreements and legislation of sorts, technology is an awesome force in the reduction of acid rain emissions. The only down side to this technology is that it is extremely expensive. Scrubbers have been placed in smokestacks to remove harmful emissions. Lime is used in lakes to "neutralize" the low pH levels. Without studies being conducted and research being carried out on acid rain, these technologies would not be here today. This is why education may be the ultimate technology in the reduction of acid rain emissions.
Should tougher legislation be implemented to force industry to reduce acid rain emissions? From an environmental point of view...yes, anything that can be done should be done, whether through studies, research, or new technologies; anything for our environment. From an economic point of view...no, the technology is very expensive and hardly affordable for most industries. Technology can reduce the dangers of acid rain, but at what cost? Tougher legislation should be implemented to preserve our environment, to preserve our lifestyle, and to preserve life on earth.
f:\12000 essays\sciences (985)\Enviromental\Acid Rain.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Acid rain forms when sulfur and nitrogen dioxides combine with moisture in the
atmosphere to produce rain, snow, or another kind of precipitation. This kind of pollution may also
be suspended in fog or deposited in a dry form. Acid rain is most common in North America and
Europe. Acid rain has also been detected in other areas of the world such as tropical rain forests
of Africa. Canada has placed limitations on the sulfur emissions. The United States has not, so
the emissions may still drift into Canada.
The acid rain cycle begins with hundreds of power plants burning millions of tons of coal.
Burning coal produces electricity for us. Coal is made of carbon, but the coal that we mine is not
pure carbon. It is mixed with other minerals. Two of these are sulfur and nitrogen. When the coal
is burned, some of the sulfur changes into sulfur dioxide and the nitrogen changes into nitrogen oxide.
These escape into the air as poisonous gases. Some smokestacks release chemicals like
mercury, arsenic, and aluminum. Some of these minerals are changed into gases and others
become tiny specks of ash. As these chemicals drift, they may change again. They may react
with other chemicals in the air. When sulfur dioxide combines with water, the result is sulfuric acid.
When nitrogen oxide gas combines with water, the result is also an acid. When the clouds
release rain or other precipitation, the acid goes with it. This is called acid rain.
The level of acid is measured in pH levels. The pH scale begins at 0 and ends with 14. A
reading lower than 7 is called acidic, and a reading higher than 7 is called basic. Seven is neutral.
Normal rain is slightly acidic, with a pH level of about 6.5. Rain with a pH of 5.5 is ten times more
acidic than normal rain, and rain with a pH of 4.5 is a hundred times more acidic than normal rain.
In parts of the country, rain with pH levels of 4.5 to 5.0 is common.
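Those multiples follow from the fact that pH is a base-10 logarithmic scale. A rough back-of-the-envelope check (a hypothetical Python sketch added for illustration, not part of the original essay):

    # relative acidity between two pH values: hydrogen ion concentration is 10**(-pH)
    def relative_acidity(ph_sample, ph_reference):
        # ratio of [H+] in the sample to [H+] in the reference
        return 10 ** (ph_reference - ph_sample)

    print(relative_acidity(5.5, 6.5))  # rain at pH 5.5 vs normal rain at 6.5 -> 10.0
    print(relative_acidity(4.5, 6.5))  # rain at pH 4.5 vs normal rain at 6.5 -> 100.0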
An English scientist named Robert Angus Smith discovered acid rain in 1872, but no
other scientist continued this study. Then in 1961 the Swedes wanted to know why the fish in their
lakes were dying. Svante Odén discovered that the reason was acid rain. After Odén's discovery,
other scientists began to study acid rain too.
Acid rain has destroyed plant and animal life in lakes, damaged forests and crops. It has
also endangered marine life in coastal waters, eroded structures, and contaminated drinking water.
It can kill fish, frogs, and insects in lakes. Acid rain can also be harmful to humans. It can hurt
their lungs and make it harder for them to breathe. Acid fog can be particularly harmful to people
with respiratory problems. Acid rain can corrode stones and some metals. Higher acid levels can
be dangerous to our drinking water. Some water pipes are made of lead, and when the water is
acidic, it can dissolve the metal. Then the metals end up in the water we drink.
f:\12000 essays\sciences (985)\Enviromental\Advantages of Using Hydroponics Over Traditional Methods.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Advantages of Producing Crops Through
Use of Hydroponics
HYDROPONICS derives its name from the Greek HYDRO-PONOS, meaning water/labor. Literally, "Hydroponics" means "Water Work." There is no soil in a hydroponic garden. No organic matter is present, so nourishment (nutrient) is not available to the plants in the same way as it is in a soil garden. Instead, nutrients are added to the water. So, as plants are watered, they are also fed. There are many ways to feed and water plants. The method chosen becomes a "Hydroponic System." Common systems are: hand watered, recirculating system (with a submersible water pump), gravity fed from a nutrient tank into pots or trays, or a wick system.
A common question is, "What can be grown in hydroponics?" Surprisingly to some, anything that can be grown in soil can be grown in a hydroponic system! Flowers, herbs, vegetables, fruit trees, vines, and ornamental shrubs. Everything from Aloe Vera to Zucchinis can be grown in this unique system.
There are many advantages to using a hydroponic system for growing plants. The most obvious is that it is easier to control the plant growing environment. Others apply where there is a restricted supply of suitable water, a lack of suitable soil, a high labor cost of traditional cultivation, or a high cost of sterilizing soil, and there is greater reliability and predictability of plant production. In addition, it's easy!
Depending on what is being grown, most of the time hydroponic plants require less attention than soil-grown crops. Because of this, it can relieve some people of the added responsibility that soil-grown plants require.
As one can see, there are many advantages to this system of growing plants. Since its origination, thousands of companies have sprung up dealing solely with hydroponics and hydroponic equipment. Maybe someday, when man inhabits outer-space, this method will be the main protocol for growing consumable items.
f:\12000 essays\sciences (985)\Enviromental\Air pollution 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Air Pollution
Then the sky turned red,
toxins over head,
everybodies dead,
everybodies dead.
(GutterMouth; Nitro Records; 1995)
In 1948, the industrial town of Donora, Pennsylvania
suffered 28 deaths because of the thick smog. Air pollution is
an ecological problem having to do with toxins in the air.
There are a few things the human race has done to try to
prevent air pollution from taking such a serious toll. Two of
these are the Clean Air Acts and the increased use of solar
power over coal power. By creating electric cars, we can lessen the
pollution caused by gasoline-powered cars. If the
pollution is not stopped, it will cause life on earth as we know
it to cease.
Air pollution causes a number of health and ecological
problems. It causes health problems like cancer, emphysema,
and asthma. It also contributes to the depletion of the ozone layer
and to global warming and the melting of the ice caps.
Up until the industrial era, the air was fairly clean. The use of
smokestacks and the burning of fuels put many pollutants in
the air during this period of time. The increased use of fossil
fuels today also builds on this.
There have been many attempts at stopping air pollution.
The Clean Air Acts were effective for a little while. They made
using some polluting substances illegal. This did not work
because people simply did not listen. Solar Energy is another
attempted solution. This type of energy is good because it is
an alternative energy source to coal and other polluting fossil
fuels. The problem with solar energy is that it is extremely
expensive, but it has been used extensively throughout the
world.
One of the more effective ways of reducing air pollution
is the making of electric cars. The use of these electric cars
would greatly reduce the amount of pollution in the air
caused by gasoline-powered cars. These cars run on
batteries instead of gasoline or other fuel. Though the use of
solar cars may seem appealing, it would be very expensive
and unreliable. The primary difference between solar and
electrical power is the fact that solar cars would be much more
expensive to make. The possibility of making a cost-efficient
electric car is much more realistic than making a cost-
efficient solar car.
By making an ecologically safe and inexpensive electric
car, we can reduce the pollution caused by gasoline-powered cars.
Without making these changes, the globe will continue
to heat up at a rate of 0.2 degrees a year, which will
result in the melting of the ice caps. If this were to happen,
the water would rise 200 feet, flooding most of the earth.
The Earth's ecosystem is a little bit like a web. It is very
fragile and depends on all of its strands to maintain
stability. If the air is polluted it disrupts this web, creating a
total imbalance. This "total imbalance" would also occur if the
water was polluted. In effect, when one part falls they all do.
Believe me, "total imbalance" is not cool.
f:\12000 essays\sciences (985)\Enviromental\Air Pollution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Problems Caused By Air Pollution
Some people think that air pollution is not harming the earth or its people, but in fact it is killing the earth and making people sick. "Air pollutants," according to Gay, "are known to cause respiratory diseases, cancer, and other serious illnesses" (12). Air pollution not only threatens the health and life of humans but also causes damage to the environment (Gay 13).
First, air pollution causes a great deal of health problems. Wanting clean air is a good reason because air that is polluted can damage human health (Edelson 25). In the United States many health problems have occurred because of air pollution. According to Sproull, "For more than a decade, local residents in the tri-state valley bounded by Kentucky, Ohio, and West Virginia have claimed to suffer from health problems, including rashes, respiratory problems and even cancers" (D4). In 1948, the industrial town of Donora, Pennsylvania, which sits in a valley, had twenty deaths, and nearly 6,000 residents, or 40% of the population, suffered respiratory problems (Edelson 25, 26). New York experienced several killer smogs, to which a later analysis attributed 58 deaths from the unusually severe pollution (Edelson 26). Not only in the United States are health problems caused by air pollution showing up, but they are also showing up in other parts of the world, like Europe. In 1930, Belgium's Meuse River valley, a major industrial region where the primary fuel was coal, reported sixty deaths, and about 6,000 residents of the valley became ill with breathing problems and respiratory infections (Edelson 25). In December 1952, the toll was huge in London from the infamous smog, which caused up to 4,000 deaths, when levels of sulfur oxides and particulates rose above normal (Edelson 26). Air pollution also increased deaths from chronic lung disease in the United States. "Although statistics on the physical effects of air pollution are not easily calculated," according to Edelson, "an alarming related statistic is that between 1970 and 1986, deaths in the United States from chronic lung disease rose 36%" (35). Air pollution has cost a great deal of money on health care in the United States. In terms of health care and lost productivity, the costs of air pollution in the United States alone have been estimated at more than $100 billion (Edelson 35). Another cost of air pollution is human life, which is incalculable (Edelson 35). Air pollution causes many health problems that lead to death.
Also, air pollution causes a great deal of damage to the environment and property. Much air pollution creates acid rain, which deteriorates whatever it falls on. Acid rain began to emerge as a serious problem in the late 1960's, when a decline in fish population was noticed by scientists in Scandinavia (Edelson 37). In the 1970's, a number of studies related the decline of fish stocks and forest damage to acid rain from industrialized and urban areas, often hundreds of miles away (Edelson 37-8). Thousands of lakes and streams, across the northeastern part of the United States and the mid-Atlantic states, in Canadian provinces, in Scandinavian countries and in other parts of Europe, have acid concentrations so high that aquatic food chains are destroyed, and fish die off (Gay 26). Land is also destroyed by acid rain. In North American and European forests, and tropical rainforests in Mexico and Central America, vast numbers of red spruce, pine, fir, and other trees wither and die (Gay 26). Acid rain also destroys world-famous structures such as the Taj Mahal, the Statue of Liberty, the Parthenon, and ancient Mayan ruins (Gay 26). Fresh paint on buildings and new cars fades quickly due to acid rain (Gay 26). Acid rain may also be damaging crops (Edelson 42). Dust settling from nearby dust sources also causes damage to crops (Sproull 111). Visible damage is caused by heavy dustfall to pine trees, alfalfa, cherry trees, beans, oats, and citrus trees (Sproull 111). Concentrations as low as only a few micrograms per cubic meter of "fluorides" injure various plants (Sproull 112). Dusts of aluminum fluoride, cryolite, calcium fluoride, and apatite, and such gases as hydrogen fluoride, silicon tetrafluoride, carbon tetrafluoride, and fluorine are all included as fluorides (Sproull 112). Trees are dying because of air pollution. The German Ministry of Food, Agriculture, and Forestry said that the primary cause of damage to more than half of the forested regions of West Germany in 1988 was air pollution (Edelson 37). Ponderosa pine forests have been severely damaged by air pollution (Sproull 111). Artwork and history are being erased as air pollution causes them to deteriorate (Edelson 45). A Mellon Institute study of economic losses in Pittsburgh in 1912-13 due to air pollution indicated annual losses of about $10 or $20 million (Sproull 46-7). Air pollution is killing the environment and property.
In conclusion, air pollution is killing the earth and its people. In order to stop the killing of the earth and its people by air pollution, people must become involved. A recent strategy that has been suggested for individual action is "green consuming," or buying "green" products (Gay 120-21). Green consuming is buying or using goods and services that do not harm the air, water, or land (Gay 121). Although a large majority of Americans already refuse to buy products or pay for services that contribute to environmental problems, people should still get more involved (Gay 121). In order to save energy to save the earth, people should do the following:
· When the lights are not in use, turn them off.
· Instead of normal light bulbs, use compact fluorescent light bulbs.
· On a short trip, walk or ride a bike.
· Whenever possible, use public transportation.
· To conserve heat or air conditioning, close off unused rooms.
· To save fuel, adjust thermostat a few degrees lower for heating and higher for cooling.
· For better efficiency, clean furnace and air condition filters.
· Only when full, run the dishwasher, washer, and dryer. (Gay 115)
There is no excuse for not becoming involved to stop air pollution. Many scientists predict that the temperature of the earth will rise because of global warming (Edelson 87). Some scientists believe that the earth is already warming (Edelson 87). If people do not become involved, the earth will not be suitable to live on in the future, and there might not be anybody left for the future.
f:\12000 essays\sciences (985)\Enviromental\Air Polution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AIR POLLUTION
Problem
The first thing people see in the morning when they walk outside is the sky or the colored sun. Is this world giving us the privilege of seeing the natural colors of the sun through all the layers of pollution within the air (Dinanike 31)? Not only are beautiful sights such as this hidden behind the pollution this world causes every day, but an increase in diseases, infections and deaths occurs. What causes pollution? What can we do to prevent it, and get rid of it? Is it fair to the children of the future to have to suffer the consequences that pollution causes? Why not take care of the problem now? Factory and business owners have the ability to prevent air pollution.
Air pollution is the presence in the atmosphere of harmful gases, liquids, or solids. Air pollution, known as smoke pollution for many years, resulted from coal combustion (Hodges 526). Smog has been a problem in coal-burning areas for several centuries. Smog finally decreased when coal combustion was replaced by oil and gas combustion. Air pollution is caused by a number of different types of pollutants.
The first type, particulate matter, consists of solid and liquid aerosols suspended in the atmosphere. These arise from the burning of coal and from industrial processes. Atmospheric particles can scatter and absorb sunlight, which reduces visibility. Particles also reduce visibility by attenuating the light from objects and by illuminating the air, reducing the contrast between objects and their backgrounds. Not only does particulate matter affect visibility, it also hastens the erosion of building materials and the corrosion of metals, interferes with the human respiratory system, and brings toxic materials into the body. The small particles cause chronic bronchitis, bronchial asthma, emphysema and lung cancer (Hodges 59).
The second type is sulfur oxides, which come from the burning of coal and from industrial processes. Damage to materials, to vegetation, and to the human respiratory system is caused by the acid nature of these oxides. Small quantities of sulfur oxides can increase illness and mortality (Hodges 59).
The third type of pollutant is carbon monoxide. Carbon monoxide is a colorless, odorless, tasteless gas against which humans have no protection. Carbon monoxide comes from the exhaust of gasoline-powered vehicles and secondarily from industrial processes (Hamer 45). Hemoglobin, which is in the blood, combines with carbon monoxide and carries less oxygen to body tissues causing health and heart effects. Some health problems come from the exhaust fumes leaking into the interior of the automobile. "Several hundred Americans die from CO poisoning each year. Sodium oxide levels below .25 ppm have been associated with increased morbidity in New York as measured by hospital admissions. In all cases in which adverse health effects have been noted the elderly patients have been affected severely" (Hodges 60).
The fourth type is hydrocarbons which are chemical compounds containing only carbon and hydrogen. Hydrocarbons also arise from gasoline-powered vehicles and from industrial processes. Hydrocarbons are an important part of the production of photochemical smog (Hodges 61).
The last type is nitrogen oxides, which come from high-temperature combustion, such as that occurring in motor vehicle engines, electric power plants and other fuel usage. Nitrogen oxide contributes to acidity in precipitation and to the production of photochemical smog. Nitrogen oxide is also dangerous; it causes serious illness and deaths even if the exposure to NO2 is short. "The gas was responsible for 124 deaths in a fire at Cleveland's Crile Children Hospital on May 15, 1929, when x-ray film containing nitrocellulose accidentally caught fire and produced NO2" (Hodges 63).
Solution
As one can surely see, these types of air pollutants are harmful to our atmosphere, environment and personal health. Factory owners can help prevent all of these effects. Researchers have found different ways to remove these pollutants from the air. One device designed to remove hydrocarbons from the atmosphere "is an improved low pollution invisible flare burner which comprises a tall stack lined with ceramic. Primary air is introduced under pressure in a tube below and coaxial with the stack. The top of the tube contains a burner for the vented hydrocarbon gases" (Sittig 227). Within this device different air mixtures provide means for complete combustion of the vented gases with low emission of smoke and light. In another method, gases contaminated with vapors from volatile organic liquids are recovered by contacting the vapor-containing gas in an absorber tower with a sponge oil which absorbs the vapors. Both methods can successfully remove hydrocarbons (Sittig 348).
A method has also been discovered for removing nitrogen oxides from gases. Ionizing radiation converts the noxious gas pollutants into particles or mist that can be collected in electrostatic precipitators (Sittig 409).
In the book, How to Remove Pollutants and Toxic Materials from Air and Water, it reports:
to remove sulfur oxides and particulate matter from waste gases comprises crosscurrent contacting of the waste gas stream with a moving bed or supported, copper-containing acceptor in a first zone removing in subsequent separate zones the particulate matter and the sulfur oxides from the acceptor in a subsequent zone before introducing it back into the first zone for further removal of sulfur oxides and particulate matter. (Sittig 565) Another air pollutant that can be reduced is carbon monoxide. Factories simply have to change their coal or oil combustion to natural gas combustion. Afterburners can cause the combustion of CO. This combustion is a source of heat, as in blast furnaces (Sittig 415).
Action
The government should take action by passing restrictions on the equipment within factories and businesses. Inspections should require four different conditions or devices: a flare burner, an ionizing radiation system, crosscurrent contacting of the waste gas stream with an acceptor, and a change of all combustion to natural gas combustion. Like most laws, if one device or condition is not present then the company should be fined a large sum of money. In order for the company to stay open, the missing devices or conditions should be put in place within the following two weeks. Many restrictions are made for businesses and factories, but what is more important than the health of our people? Action should be taken right away.
There are important advantages to passing extra restrictions on factories and businesses. Not only will the factories realize how much pollution they have caused without these conditions, but they will also stop hurting the health of others. Each of these devices is exactly what we need in order to stop air pollution. Save the children of tomorrow and the environment of today by doing something to prevent air pollution.
Justification
Each method mentioned above can be used in factories all over the world. The question is, does it cost a lot of money? Yes, it does. In order to apply all of the above methods it can cost factories and businesses millions. The estimated costs are $800,000,000 from public sectors alone. For private sectors it can cost up to $17,000,000,000. Reducing pollution might cut salaries for many workers, due to the expenses that would rise (Hodges 582).
The estimates of the cost of devices to reduce pollution are accurate, but what about the money it takes to repair the damage caused by air pollution? The annual total for air pollution is $16 billion in the U.S. The amount spent dealing with air pollution leaves less money for our government to give to researchers to find cures for diseases, for military expenses, or for government debt. It is like throwing away money just because factory and business owners do not want to take the time and money to invest in new methods and devices to prevent air pollution. $240 million goes to cleaning equipment dirtied by air pollution each year. For livestock and agricultural crops, $500 million is used for damages. Millions are spent each year on medical costs, the cost of fuels wasted in incomplete combustion, and maintenance of cleanliness in the production of foods and beverages. $18.6 billion worth of damage is done due to motor vehicle pollution (Hodges 568). It might seem to cost a lot of money to prevent air pollution, but as one can see it may cost more to repair the damage from air pollution.
WORKS CITED
Dinanike, George. "Sunset in the Comfort of a Laboratory." New Scientist 19 October 1991: 31.
Hamer, Mick. "Pollution Leaves a Cloud over Life in the City." New Scientist 13 May 1989: 45.
Hodges, Laurent. Environmental Pollution. 2nd ed. New York: Holt, Rinehart and Winston, 1977.
Sittig, Marshall. How to Remove Pollutants and Toxic Materials from Air and Water. Park Ridge, New Jersey: Noyes Data Corporation, 1977.
"Greenpeace." World Wide Web site. http://www.cyberstore.ca/greenpeace/ozone/ozonehome.html. 2 November 1994.
f:\12000 essays\sciences (985)\Enviromental\Aluminium.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Aluminium
The history of Aluminium use
Aluminium is now one of the most widely used metals, but one of the hardest to refine due to its reactivity with other elements. Even as late as the turn of the century, Aluminium was considered very valuable and therefore expensive, even more expensive than gold. In some cultures, when a function (for example, a party) was held by wealthy people, only the most honored guests would be given Aluminium cutlery; the others had to make do with gold or silver cutlery.
A Description of the Aluminium ore, including a list of its contents
Pure Aluminium oxide is known as alumina (Al2O3). This is found as corundum, a crystalline mineral. Aluminium can also occur as cryolite (Na3AlF6). Traces of other metal oxides in Aluminium oxide tint it to make it form stones (often precious), for example: chromium gives a red colour to rubies, and cobalt makes the blue in sapphires.
How Aluminium deposits are formed
Aluminium (like many other metals) is not found in its pure form, but associated with other elements in rocks and minerals. An aluminosilicate such as felspar
(KAlSi3O8) is the main constituent of many rocks such as granite, which is quartz and mica cemented together with felspar. These rocks are gradually weathered and broken down by the action of carbon dioxide from the air dissolved in rainwater, forming 'kaolin'. This is further broken down to form other substances, ultimately resulting in the formation of Aluminium deposits.
Where and how is Aluminium mined?
Aluminium is never found in its pure state until it has been refined. Aluminium is obtained by refining alumina, which is in turn extracted from the ore 'bauxite'. Bauxite is often mined by the opencast method.
Aluminium deposits are found in many countries, but the countries with significant deposits include: Guinea, Jamaica, Surinam, Australia and Russia.
How is Aluminium refined?
One method is the 'electrolytic process'. This is performed when a low-voltage current is passed through a bath containing alumina in molten form. The alumina is broken down into Aluminium metal, which collects at the bottom of the bath at one electrical pole, the cathode, while the oxygen reacts at the other pole, the anode, to give carbon dioxide and some carbon monoxide.
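In outline (a sketch of the standard electrolytic chemistry this passage appears to describe, written from general chemistry rather than taken from the essay's sources), the reactions are roughly:

    At the cathode:  Al3+ + 3 e-  ->  Al          (molten metal, collecting at the bottom of the bath)
    At the anode:    2 O2-  ->  O2 + 4 e-, and the oxygen attacks the carbon anode:  C + O2 -> CO2
    Overall:         2 Al2O3 + 3 C  ->  4 Al + 3 CO2   (with some CO also formed)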
The uses and properties of Aluminium
Aluminium is now the second most widely used metal, after iron. Aluminium and its alloys, such as 'duralumin', are used as structural metals for a wide variety of products, from aircraft to cooking utensils. Aluminium foil is used to wrap food, and Aluminium is also being used to replace copper wire in electrical windings. Aluminium mirrors are used in some large astronomical telescopes. Some Aluminium ores are found in the form of gems and precious stones. Aluminium is also used in the making of vehicles such as aircraft due to its strength and light weight, but is not used so much in cars due to its cost.
f:\12000 essays\sciences (985)\Enviromental\An autumn and the falling of leaves.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Autumn and the Fall of Leaves
It is not true that the close of a life which ends in a natural fashion---life which is permitted to put on the display of death and to go out in glory---inclines the mind to rest. It is not true of a day ending nor the passing of the year, nor of the fall of leaves. Whatever permanent, uneasy question is native to men, comes forward most insistent and most loud at such times. There are still places where one can feel and describe the spirit of the falling of leaves.
In the fall, the sky, which is of so delicate and faint a blue as to contain something of gentle mockery, and certainly more of tenderness, presides at the fall of leaves. There is no air, no breath at all. The leaves are so light that they sidle on their going downward, hesitating in that which is not void to them, and touching at last so intangibly the earth with which they are to merge, that the gesture is much gentler than a greeting, and even more discreet than a discreet touch. They make a little sound, less than the least of sounds. No bird at night in the marshes rustles so slightly; no men, though men are the most refined of living beings, put so passing a stress upon their sacred whispers or their prayers. The leaves are hardly heard, but they are heard just so much that men also, who are destined at the end to grow glorious and to die, look up and hear them falling.
There are an infinite number of ways of describing the leaves. The color is not a mere glory: it is intricate. If you take up one leaf, you can see the sharp edges stained with a deep yellow-gold, and boundaries that are not clearly defined. Nor do shape and definition ever begin to exhaust the list. For there are softness and hardness too. Beside boundaries you have hues and tints, shades also, varying thicknesses of stuff, an endless choice of surface; and that list also is infinite, and the divisions of each item in it are everywhere infinite; the depth and the meaning of so much creation are beyond our powers. All this happens to be true of but one dead leaf; and yet every dead leaf will differ from its fellow.
It is no wonder, then, that at this peculiar time, this week (or moment) of the year, the desires which, if they do not prove, at least demand---perhaps remember---our destiny, come strongest. They are proper to the time of autumn, and all men feel them. The air is at once new and old; the morning (if one rises early enough to welcome its leisurely advance) contains something in it of profound remembrance. The evenings hardly yet suggest (as they soon will) friends and security, and the fires of home. The thoughts awakened in us by their bands of light fading along the downs are thoughts which go with loneliness and prepare us for the isolation of the soul. It is on this account that tradition has set, at the entering of autumn, a watch at the gate of the season, and at its close the day and the night on which the dead return.
f:\12000 essays\sciences (985)\Enviromental\An incident.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tatyana Golbin
Essay
When you are happy, you never think that it can end in a second. Of course, if you
always tried to imagine something bad, you would never be so happy. After an
incident I had once, I understood that happiness could be just one step away from
being lost.
I was seventeen. Summer was in the air, but we could not enjoy it because it
was the time of our graduation and we had to study for our final exams. When exams
were over, we were relieved and decided that we deserved some rest.
Usually, people prefer resting on beaches, living in expensive hotels, eating at
exclusive restaurants, and enjoying other services provided by travel companies.
Although it really is carefree relaxation, my friends and I always preferred to travel to
wild places. After the complicated life of the big city, with its intense traffic and polluted
air, with its endless problems, union with nature seemed to us the most worthwhile way
to relax.
Getting prepared for the trip already made me excited. It was very important not
to forget anything. Finally, when all preparations were made, and the cars loaded up, we,
following our chosen route, dreamed of our coming adventures. On that particular
vacation, we had chosen a place named Blue Lakes.
It was really a place of wildlife. In the summer, the forest was beautiful. It burst
into leaves and needles, and abounded with mushrooms and berries. The unbelievable
silence was sometimes interrupted only by birds and insects. The amazing lakes, which
were situated all over the wood, seemed to complete the picture. In addition, Mother
Nature had given every lake some unique feature. Red Lake, for example, seemed to have
red water, because of the iodine in it. Cold Lake had a very low temperature, even on the
hottest days, so that when I touched it with my foot I shivered from its chilliness. But the
most impressive lake was Dead Lake, which seemed to boil because of its phosphorus.
Due to it, the lake was always under a fog. Besides, Dead Lake's bottom had several
levels. In other words, it was like a labyrinth under the water. It was untouchable and, at
the same time, mysterious.
We camped near a lake that was simply beautiful, without any of the
aforementioned extraordinary qualities. Soon, everybody found himself busy with
fishing, swimming, and playing games. Sometimes, leaving two people at the camp, we
went to see the other interesting places. During one of those trips, we had to cross a deep
and narrow river which had such a strong current that we could not swim across. As I
said above, the place was wild and, of course, there was no bridge. We solved that
problem by spanning the two banks with a log. Half of us crossed the river without fear,
but when my turn came, fear thrilled through my veins. I am not a good swimmer;
therefore, it was a serious problem for me. I tried not to show my fear. I said to myself,
"You can do it," and started going. My legs were trembling, but I managed to keep my
balance. As I watched my steps, I could see the strong current. Yet I kept saying to
myself, "A few more steps and you are done." At the moment when I was halfway across,
suddenly, the log turned and I lost my balance. I got wet instantly. Thousands of bubbles
pushed from my body. Unwillingly, I inhaled some water and felt that I was being carried
away by the stream. I waved my hands, desperately trying to swim, but the current forced me
straight ahead. Despite all the water around me, my throat was dry. I felt myself getting
tired and panicky. Eventually, I grabbed a branch of a tree that was growing almost in the
river. Then, my friends threw me the end of a rope and pulled me out. I was coughing
and felt dizzy, but it was good to feel the ground under my feet again.
After everything was over, we returned to our camp and, after a while, everybody
seemed to forget what had happened to me, and when somebody brought it up, he just made
fun of it. It wasn't funny for me, though. I could not help thinking that I had started
crossing the river just because I did not want to look like a coward in front of my friends.
Who knows what could have happened to me if I had not seized the branch. I could have
drowned easily. One minute I was happy, and the next I did not know where I was
headed.
After this incident, I am afraid not only of water, but also of losing something
because of my thoughtlessness and impulsiveness. I have become more careful and mature.
When I am happy now, I always say to myself, "Watch out!"
f:\12000 essays\sciences (985)\Enviromental\An overview of the Exxon Valdez Oil Spill.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ABSTRACT
In March of 1989, the Exxon Valdez oil tanker ran aground on Bligh Reef in Prince William Sound, Alaska. An eighteen-foot-wide hole was ripped into the hull, and 10.9 million gallons of crude oil spilled into the ocean. In the following weeks, many things transpired. This paper will discuss the cleanup, the damage, and the results of the biggest oil spill in United States history.
On March 24, 1989, in Prince William Sound, Alaska, the Exxon Valdez was moving southwest after leaving Port Valdez. The ship was carrying over fifty million gallons of crude oil. When the Valdez was only twenty-eight miles from the port, it ran aground on Bligh Reef. The bottom was ripped open, and 10.9 million gallons of North Slope crude oil spilled into the frozen Alaskan waters at a rate of two hundred thousand gallons per minute. The remaining forty-two million gallons were offloaded. In the ensuing days, more than 1,200 miles of shoreline were hit with oil. This area included four National Wildlife Refuges, three National Parks, and Chugach National Forest.
Within hours, smaller tanker vessels arrived in order to offload the remaining oil. Unfortunately, the cleanup effort was hindered by an inadequate cleanup plan that had been created during the 1970s. These plans outlined how an oil spill would be handled, including provisions for maintaining equipment such as containment booms and "skimmer boats." The plans also called for a response team to be on twenty-four hour notice. Unfortunately, the plans were good on paper only. A spill of this size had not been anticipated. Therefore, the response teams had been demobilized, and the equipment that was supposed to be ready at all times was either too far away or nonexistent. Precious hours were also wasted as corporations, the Alaskan state government, and the national government argued over who should take control of the situation. The arguments stemmed from debates over who would pay for what, who was responsible for what, and who would do the best job.
The local fishermen were a big help with the cleanup effort. They battled with the oil in order to protect their industry. Many fishermen were seen in rowboats in the small coastal inlets. The fishermen worked by hand to clean up the oil, using buckets to scoop up the oil, which was several inches thick on top of the water in some places. Fishermen would leave in the morning and return when their boat was filled with oil. The oil that they scooped out was then deposited at special collection sites. The fishermen also used their boats to help with the deployment of containment booms. The booms would be fastened behind the boats and then dragged into place. However, the booms were not always helpful due to choppy seas. Many fishermen also became temporary employees of Exxon, receiving excellent pay on an hourly basis.
The cleanup was a long and tiring process which was plagued by many difficulties. Inexperience was a major problem. Coast Guard Vice Admiral Clyde Robbins explained in disgust that, "It was almost as if that spill was the first one that they had ever had." The equipment was not ready and not in perfect shape, and the response teams were not equipped to deal with a spill of the magnitude that occurred. Other difficulties arose due to the format that was used by the executive committee in charge of the cleanup. They had set themselves up in such a way that every member of the committee had veto power. This was a result of the original conflicts that took place between the corporations, the state government, and the national government. It was nearly impossible to get all of the members of the committee to agree on one particular plan of action.
The natural factors also made the cleanup a difficult process. The Alaskan wilderness is a rugged country. Rocky shorelines made beach work difficult, and the cold weather made working long hours very difficult. Another problem with the cold weather was that it prevented the oil from breaking down. Under normal weather conditions, the oil would have begun to decompose, which would have made it easier to deal with. There were also problems with high winds, which were often in excess of forty knots. Perhaps the most interesting problem that cleanup workers had to deal with was the wildlife. There was actually one reported case of an Alaskan brown bear attacking a worker who was on the beach. All of these factors combined to make the cleanup more difficult than anticipated.
The cleanup process was probably the most expensive oil spill cleanup in history. However, the total cost is unknown and still growing. Exxon paid more than five billion dollars, including twenty million to study the spill. Part of the reason that the cleanup effort was so expensive was the number of workers used in the effort. Exxon had approximately eleven thousand men and women on its payroll, including temporary workers. The average worker received $16.69 per hour. Although there was no set number of hours that the workers worked per week, one thousand eight hundred dollars was a normal paycheck for one week. Exxon was also in need of many small boats to help with the deployment of containment booms, and to be used as floating observation stations. Local fishermen charged up to eight thousand dollars a day for the use of their boats. This, combined with their hourly wages, made cleaning up after the oil spill more profitable than fishing on a daily basis for many people. They could receive more money in a shorter amount of time, doing less strenuous work. Another expensive aspect of the cleanup effort was dealing with oil-soaked wildlife. It is estimated that the cost of saving one otter was $40,000. This is due to the number of people required, transporting the animal to a cleanup site, and the rehabilitation process. As I said earlier, the total economic cost of the spill is still unknown and still growing. This is also true of the environmental cost. Millions of animals in the spill area were killed, as well as plants and microorganisms. Studies are still taking place to assess the damage that was caused by the spill. These studies will continue far into the future.
In the end, the cleanup effort was relatively successful. Perhaps the most successful part of the cleanup involved an experimental technique. This technique involved the use of Inopol EAP22, a nitrogen-phosphorus fertilizer mix. The compound is sprayed on oil that has washed up on beaches. The fertilizer then encourages the growth of "oil-eating" bacteria, which naturally exist in small numbers. The Inopol technique was very successful, but it was not widely used due to uncertainty as to the possible side effects. Later studies showed the side effects to be negligible. Other more standard techniques were used as well. The technique that was used to clean the beaches involved concentrating oil on the shoreline. This is done by using powerful pumps to move sea water up the beach. This water then flows through a perforated hose on high ground that runs parallel to the waterfront. This creates a continuous flow of water to push the oil downhill towards the shoreline. High-pressure hoses spray one-hundred-forty-degree water to "blast" the oil off the rocks. This oil is also moved downhill towards the shoreline. Cold water is used at the shoreline to move the oil towards a central point, where it can be collected by skimmer vessels. Containment booms were also used to "corral" the oil. The booms, which are large pieces of rubber, are dragged between two boats. The booms extend a foot or more under the surface in order to collect all of the oil. The oil is condensed, and then collected by skimmer boats. The cleanup effort after the Exxon Valdez spill was very intense. One worker exclaimed, "Everything from paper towels to kitchen utensils are being used."
The most publicized aspect of the Exxon Valdez spill was the damage to the wildlife in the surrounding area, especially the animals. Hundreds of birds, sea otters, fish, shellfish, and marine mammals were killed.
More than eighty-eight species of birds were affected by the spill. One hundred thousand birds are believed to have been killed, including more than one hundred fifty bald eagles. The majority of these birds died of hypothermia. After their feathers became soaked with oil, they lost their insulating ability, which then led to hypothermia. Another cause of death for the birds was anemia. When oil gets into the bloodstream, it causes the red blood cells to "wrinkle," which causes anemia.
More than seven thousand sea otters were killed as well. This is a significant proportion of the total sea otter population. The sea otters were killed by a variety of conditions, including hypothermia. Many otters were killed as a result of oil getting into the bloodstream. When oil gets into the blood, it can cause a variety of things to happen. It can cause nosebleeds due to blood thinning, which then lead to infection. It can cause liver and kidney damage, because these are the organs which attempt to clean the oil out of the system. Damage to these organs would lead to death. It can also lead to emphysema, which compromises the diving ability of the otters and eventually leads to death. Another cause of death is blindness. If oil were to get into an otter's eye, it could cause blindness, which would then cause starvation.
Fish were also affected by the oil spill; however, the extent of the casualties is unknown. Fishing is a huge industry in Alaska, so there has been much concern over the welfare of the fish. Many natives also live by subsistence fishing. Pink salmon and herring were the two species that people were most concerned about. Pink salmon is the biggest commercial fish in Alaskan waters, and many people were afraid that the salmon population would need years to recover; however, studies have shown that the effect of the oil on spawning, eggs, and fry was negligible. Chromatography tests have also shown that there are no hydrocarbons in the flesh of most of the fish. Those that do have hydrocarbons in their flesh have a level so low as to be measured in the parts-per-billion range. Herring is also a huge commercial fish in Alaska. The 1988 catch yielded twelve point three million dollars. In 1989, after the spill, herring was declared "off limits" to fishermen. However, this was compensated for by a salmon catch that was six times as big as it had been in 1988. In 1990, when herring fishing resumed, it returned to normal levels. The damage to the fishing industry was not nearly as bad as had been anticipated. Usha Varanasi, director of the NOAA's Environmental Conservation Division in S
f:\12000 essays\sciences (985)\Enviromental\animal ethics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Animal ethics is concerned with the status of animals, whereas environmental ethics concerns itself with our relationship to the environment. I will show that the existence of animal ethics depends on the existence of environmental ethics. I will prove this by showing that philosophers who have practiced animal ethics, such as Singer, Regan, and Taylor, are limited because they are individualistic, which means they are limited to animal concerns and nothing else. But philosophers of environmental ethics such as Leopold, Westra, and Naess look at environmental ethics collectively, which means they look at the big picture, which includes the animals and their environment.
I will first look at the views of Peter Singer, who is a utilitarian. A utilitarian is someone who believes in the greatest amount of good for the greatest number. Singer wants the suffering of animals to be taken into consideration. He states, "If a being suffers, there can be no moral justification for refusing to take that suffering into consideration. No matter what the nature of the being, the principle of equality requires that its suffering be counted equally with the like suffering...". What this means is that the suffering of animals is not justified. He also thinks a major way to stop the suffering of animals is to stop experimenting on them. He states, "...the widespread practice of experimenting on other species in order to see if certain substances are safe for human beings, or to test some psychological theory about the effect of severe punishment or learning...". When he talks about the experiments on and suffering of animals, he is concerned mostly with domestic animals; he is not too concerned with the other animals in the world. Views like these make Singer limited.
Singer is limited and individualistic because he is not concerned with the environment in which animals live, and since he is a utilitarian, equality is not something he is concerned with. Other philosophers criticize the utilitarian point of view exhibited by Singer. Regan protests, "Utilitarianism has no room for the equal moral rights of different individuals because it has no room for their equal inherent value or worth. What has value for the utilitarian is the satisfaction of an individual's interests, not the individual whose interests they are". If things, and that includes the environment, are not given equal rights, there will not be a tomorrow to look forward to.
Singer has also been known to show a lack of compassion and sympathy. As stated by Westra, "It is probable that, at a minimum, instrumental value has always been ascribed to those animals which have contributed in some way to the human community down through the ages... Still it is possible to raise doubts about sympathy, as many claim to have no such feeling, including such animal defenders as Singer". Westra goes on to describe how Singer is unsympathetic not only to animals with intrinsic value but also to people in the third world. Singer feels that since the people of the third world are so far away, they are not his concern. Singer wants the suffering of animals to stop because it is not justified, but what makes the suffering of third world countries justified? Because they are farther away? Such individualistic approaches will not save the habitat in which the animals live, and without that habitat the animals will not survive. Singer is not the only one with an individualistic approach.
Another philosopher of animal ethics, Tom Regan, also displays the individualistic approach. Regan believes in Kantianism, which means that individuals have rights. Regan has modified it a bit to say that everyone is the subject of a life. Regan believes that animals and humans all have intrinsic value; therefore they have a right to life. He calls for three changes: "1) The total abolition of the use of animals in science. 2) The total dissolution of commercial animal agriculture. 3) The total elimination of commercial and sport hunting". He believes that animals should not be treated as our resources. He also believes that since everyone is the subject of a life, people should not believe in contractarianism. Contractarianism states that in order to gain morality you must be able to sign and understand a contract, and if one cannot sign a contract (e.g., an infant), one does not have the right to morality. But Regan also views things individualistically.
He, like Singer, also looks at the concerns of animals of "value": those animals used in science experiments, agriculture, and commercial and sport hunting. But what about the animals not included in that list? Who is going to protect the rights of those animals? Without all animals, and especially the environment, Regan will have more to worry about than the reform of animal rights.
The last philosopher concerned with animal ethics whom I am going to look at is Paul Taylor. He is an egalitarian, which means everyone's interests count, and count equally with the like interests of everyone else. He argues that humans are no more valuable than any other living thing but should see themselves as equals. He calls for two changes: "1) Every organism, species population, and community of life has a good of its own which moral agents can intentionally further or damage by their actions... 2) The second concept essential to the moral attitude of respect for nature is the idea of inherent worth". What this means is to respect everything and everyone, even if that means the little creepy-crawlies on earth. But if we respect everything, in turn we are respecting nothing.
One of Taylor's biggest flaws is that he has no hierarchy, which in turn means some animals lose out. Westra sums it up best: "Further, it is such an intensely individualistic ethic that it requires me to consider every leaf I might pick from a tree, every earthworm that might be lying across my path. It will also be extremely difficult to apply to aggregates, such as species, or community, such as ecosystems". With no hierarchy he is looking at things individualistically, which means something is going to lose out.
Another problem with Taylor is that his view can be applied to animal ethics as well as environmental ethics; in order to make a stronger argument he should stick to either one or the other.
One way we can avoid this individualistic outlook is to look at things holistically, as Leopold does. He believes that we should see ourselves not as conquerors of the land but as members of its community. He proposes we can do this by having a land ethic. The land ethic states that "the land ethic simply enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land". This is like an animal ethic but expanded to include environmental ethics. He also proposes we have a land pyramid, which consists of "1) That land is not merely soil. 2) That the native plants and other animals kept the energy circuit open; others may or may not. 3) That man-made changes are of a different order than evolutionary changes, and have effects more comprehensive than is intended or foreseen". The land pyramid states that changes affect the whole ecosystem and everything in it. It looks at things collectively. But like everything, it has its faults.
When we are looking at things holistically, we are leaving some things out. And who is to say that the land pyramid is correct and will work? Who is Leopold to decide how and what is more important than other things?
Another philosopher who views the world collectively is Westra. Westra is concerned with the principle of integrity. She states that "'Integrity' thus includes the wholeness of a living system". Therefore she wants to look at the ecosystem as a whole. She proposes that there are four components of ecosystem integrity. The first is ecosystem health. The second is the capacity to withstand stress and regenerate itself afterward. The third is optimum capacity (for place and time, including biodiversity). The fourth is the ability to continue development and change. With these four features an environment has a good chance of survival.
Another reason why she has a holistic approach is that she says, "It counsels respect for the basis of life as well as for all entities living within ecosystems, including animals, which would involve the abolition of agribusiness, factory farming, and all other wasteful, exploitative practices". She believes everything should be looked at as equal. But her views are too controversial.
Westra states that there should be an abolition of agribusiness, but she herself admits that she eats 'free-range' chicken. That too is an agribusiness, so why is free-range OK? And if we are looking at things holistically, who is she to say that one type of business is any better than factory farming or agribusiness? Sure, they are taking advantage of animals, but if she is to look at things holistically, any business that runs successfully involves exploitative practices in some manner.
The last philosopher of environmental ethics whom I am going to look at is Arne Naess. He looks at the environment in terms of deep ecology, which means: 1) a holistic perspective; 2) biospherical egalitarianism (everything is valuable); 3) principles of diversity and symbiosis; 4) an anticlass posture, with no racism and no sexism; 5) a fight against pollution and resource depletion; 6) complexity, not complication, rather than cutting up science; 7) local autonomy and decentralization. These form a series of steps, a hierarchy: you have to start from the bottom and keep fixing until you make it to the top. Or should I say if you make it to the top, because if you cannot fix each level, you cannot continue to the next level until it is fixed. But this way of looking at things can cause problems.
Viewing the world like this could leave us right where we started, because if we cannot fix something, we cannot move on. Another problem is that when you get near the top of the steps, you hit a point where you should look at things through an egalitarian point of view, which can bring you back to where you started, because you are supposed to respect everything, and in turn you end up respecting nothing.
In conclusion, based on the arguments I have presented, we can conclude that the existence of animal ethics depends on the existence of environmental ethics. I have shown this by demonstrating that the individualistic ways in which Singer, Regan, and Taylor look at this world will only save the rights of animals, and the world cannot survive with just animals. I have also shown, by demonstrating the holistic views of Leopold, Westra, and Naess, that their approach will preserve the rights of the environment as a whole.
f:\12000 essays\sciences (985)\Enviromental\Animal Experimentation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
Animal experimentation has been a part of biomedical and behavioral research for several millennia; experiments with animals were conducted in Greece over 2,000 years ago. Many advances in medicine and in the understanding of how organisms function have been the direct result of animal experimentation.
Concern over the welfare of laboratory animals is also not new, as reflected in the activities of various animal welfare and antivivisectionist groups dating back to the nineteenth century. This concern has led to laws and regulations governing the use of animals in research and to various guides and statements of principle designed to ensure humane treatment and use of laboratory animals.
HISTORICAL BACKGROUND
Use of Animals in Research
Some of the earliest recorded studies involving animals were performed by Aristotle (384-322 B.C.), who revealed anatomical differences among animals by dissecting them (Rowan, 1984). The Greek physician Galen (A.D. 129-199) maintained that experimentation led to scientific progress and is said to have been the first to conduct demonstrations with live animals--specifically pigs--a practice later extended to other species and termed "vivisection" (Loew, 1982). However, it was not until the sixteenth century that many experiments on animals began to be recorded. In 1628, William Harvey published his work on the heart and the movement of blood in animals (French, 1975). In the 1800s, when France became one of the leading centers of experimental biology and medicine--marked by the work of such scientists as Francis Magendie in experimental physiology, Claude Bernard in experimental medicine, and Louis Pasteur in microbiology and immunology--investigators regularly used animals in biomedical research (McGrew, 1985).
Research in biology progressed at an increasing pace starting around 1850, with many of the advances resulting from experiments involving animals. Helmholtz studied the physical and chemical activities associated with the nerve impulse; Virchow developed the science of cellular pathology, which led the way to a more rational understanding of disease processes; Pasteur began the studies that led to immunization for anthrax and inoculation for rabies; and Koch started a long series of studies that would firmly establish the germ theory of disease. Lister performed the first antiseptic surgery in 1878, and Metchnikoff discovered the antibacterial activities of white blood cells in 1884. The first hormone was extracted in 1902. Ehrlich developed a chemical treatment for syphilis in 1909, and laboratory tissue culture began in 1910. By 1912, nutritional deficiencies were sufficiently well understood to allow scientists to coin the word "vitamin." In 1920, Banting and Best isolated insulin, which led to therapy for diabetes mellitus. After 1920, the results of science-based biological research and their medical applications followed so rapidly and in such numbers that they cannot be catalogued here.
Concerns over Animal Use
The first widespread opposition to the use of animals in research was expressed in the nineteenth century. Even before this, however, concern had arisen about the treatment of farm animals. The first piece of legislation to forbid cruelty to animals was adopted by the General Court of Massachusetts in 1641 and stated that "No man shall exercise any tyranny or cruelty towards any brute creatures which are usually kept for man's use" (Stone, 1977). In England, Martin's Act was enacted in 1822 to provide protection for farm animals. In 1824, the Society for the Prevention of Cruelty to Animals (SPCA) was founded to ensure that this act was observed. In 1865, Henry Bergh brought the SPCA idea to America (Turner, 1980).
He was motivated not by the use of animals in research but by the ill-treatment of horses that he observed in czarist Russia.
In the second half of the nineteenth century, concerns for the welfare of farm animals expanded to include animals used in scientific research. The antivivisectionist movement in England, which sought to abolish the use of animals in research, became engaged in large-scale public agitation in 1870, coincident with the development of experimental physiology and the rapid growth of biomedical research. In 1876, a royal commission appointed to investigate vivisection issued a report that led to enactment of the Cruelty to Animals Act. The act did not abolish all animal experimentation, as desired by the antivivisection movement. Rather, it required experimenters to be licensed by the government for experiments that were expected to cause pain in vertebrates.
As animal experimentation increased in the United States in the second half of the nineteenth century, animal sympathizers in this country also became alarmed. The first American antivivisectionist society was founded in Philadelphia in 1883, followed by the formation of similar societies in New York in 1892 and Boston in 1895. Like their predecessors in England, these groups sought to abolish the use of animals in biomedical research, but they were far less prominent or influential than the major animal-protection societies, such as the American SPCA, the Massachusetts SPCA, and the American Humane Association (Turner, 1980).
Unsuccessful in its efforts toward the end of the nineteenth century to abolish the use of laboratory animals (Cohen and Loew, 1984), the antivivisectionist movement declined in the early twentieth century. However, the animal welfare movement remained active, and in the 1950s and 1960s its increasing strength led to federal regulation of animal experimentation. The Animal Welfare Act was passed in 1966 and amended in 1970, 1976, and 1985. Similar laws have been enacted in other countries to regulate the treatment of laboratory animals (Hampson, 1985).
Concern over the welfare of animals used in research has made itself felt in other ways. In 1963, the Animal Care Panel drafted a document that is now known as the Guide for the Care and Use of Laboratory Animals (National Research Council, 1985a). As discussed in Chapter 5, the Guide is meant to assist institutions in caring for and using laboratory animals in ways judged to be professionally and humanely appropriate. Many professional societies and public and private research institutions have also issued guidelines and statements on the humane use of animals; for example, the American Physiological Society, the Society for Neuroscience, and the American Psychological Association.
PRESENT SITUATION
Despite the long history of concern with animal welfare, the treatment and use of experimental animals remain controversial. In recent years a great expansion of biomedical and behavioral research has occurred. Simultaneously, there has been increased expression of concern over the use of animals in research. Wide publicity of several cases involving the neglect and misuse of experimental animals has sensitized people to the treatment of laboratory animals. Societal attitudes have also changed, as a spirit of general social concern and a strong belief that humans have sometimes been insensitive to the protection of the environment have contributed to an outlook in which the use of animals is a subject of concern.
Of course, any indifference to the suffering of animals properly gives rise to legitimate objections. From time to time some few members of the scientific community have been found to mistreat or inadequately care for research animals. Such actions are not acceptable. Maltreatment and improper care of animals used in research cannot be tolerated by the scientific establishment. Individuals responsible for such behavior must be subject to censure by their peers. Out of this concern that abuse be prevented, organizations have emerged to monitor how laboratory animals are being treated, and government agencies and private organizations have adopted regulations governing animal care and use.
Discussions about laboratory animal use have also been influenced in recent years by the emergence of groups committed to a concept termed "animal rights." Some of these groups oppose all use of animals for human benefit and any experimentation that is not intended primarily for the benefit of the individual animals involved. Their view recognizes more than the traditional interdependent connections between humans and animals: It reflects a belief that animals, like humans, have inherent "rights" (Regan, 1983; Singer, 1975).
Their use of the term "rights" in connection with animals departs from its customary usage or common meaning. In Western history and culture, "rights" refers to legal and moral relationships among the members of a community of humans; it has not been applied to other entities (Cohen, 1986). Our society does, however, acknowledge that living things have inherent value. In practice, that value imposes an ethical obligation on scientists to minimize pain and distress in laboratory animals.
Our society is influenced by two major strands of thought: the Judeo-Christian heritage and the humanistic tradition rooted in Greek philosophy. The dominance of humans is accepted in both traditions. The Judeo-Christian notion of dominance is reflected in the passage in the Bible that states (Genesis 1:26):
And God said, Let us make man in our image, after our likeness; and
let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.
However, the Judeo-Christian heritage also insists that dominance be attended by responsibility. Power used appropriately must be used with the morality of caring. The uniqueness of humans, most philosophers agree, lies in our ability to make moral choices. We have the option to decide to dominate animals, but we also have a mandate to make choices responsibly to comply with the obligations of stewardship.
From tradition and practice it is clear that society accepts the idea of a hierarchy of species in its attitudes toward and its regulation of the relationships between humans and the other animal species. For example, animals as different as nonhuman primates, dogs, and cats are given special consideration as being "closer" to humans and are treated differently from rodents, reptiles, and rabbits.
Most individuals would agree that not all species of animals are equal and would reject the contention of animal rights advocates who argue that it is "speciesism" to convey special status to humans. Clearly, humans are different, in that humans are the only species able to make moral judgments, engage in reflective thought, and communicate these thoughts. Because of this special status, humans have felt justified to use animals for food and fiber, for personal use, and in experimentation. As indicated earlier, however, these uses of animals by humans carry with them the responsibility for stewardship of the animals.
Several recent surveys have examined public opinion about the use of laboratory animals in scientific experimentation (Doyle Dane Bernbach, 1983; Media General, 1985; Research Strategies Corp., 1985). Most of the people interviewed want to see medical research continued, even at the expense of animals' lives. Beyond that, people's thoughts about animal use depend on the particular species used and/or on the research problem being addressed. Almost all people support the experimental use of rodents. Support for the use of dogs, cats, and monkeys is less, and people clearly would prefer that rodents be used instead. Most people polled believe that animals used in research are treated humanely.
f:\12000 essays\sciences (985)\Enviromental\Animal Rights 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Animal Rights
Animal rights is a very fragile topic. Opposing sides have strong reasons
to stand for their beliefs, leading to many ethical questions.
One of the major questions is: who is right and who is wrong? There is no
one right answer, but instead millions of them, based upon our own individual
opinions, and those opinions are formed by how we feel about the facts. Animal
research, testing, and use have taken humanity a long way, with advances in
medicine and as a major source of food, but it is not morally correct to
abuse, test, use, and ultimately kill the animals unnecessarily, especially
for our comforts, luxuries, and greed.
Many benefits have been obtained through animals, mostly in the field of
medicine. The medical world has rapidly moved forward, finding cures for many diseases through animal testing, offering new alternatives and shining a new light on illnesses that did not have a cure before. Working with animals
like monkeys and dogs has resulted in successful open-heart surgeries on
people, as well as organ transplants and the cardiac pacemaker.
The disease polio, which killed and disabled many children, has almost
completely vanished from the United States thanks to the use of preventive vaccines that were perfected on monkeys. Not only polio, but also mumps, measles, rubella, and smallpox have been eliminated through vaccines tested on monkeys. Major diseases have been alleviated and major advances have been made toward their cure, diseases such as leukemia in children, among other types of cancer and tumors. Animals not only contribute greatly in medicine, but throughout the history of humankind they have been consumed as food, being a major source of basic nutrition.
Unfortunately, some research has gone too far, putting many animals through unnecessary pain. According to Jean Bethke Elshtain, a Centennial Professor of Political Science at Vanderbilt University, the abuse that animals go through is intolerable. Monkeys are dipped in boiling water and pigs are burned, without any painkillers, to see how they react to third-degree burns. Even more horrifying is the fact that dogs and other animals are left without care, bearing open incisions, infected wounds, and broken bones, among other things, in a miserable atmosphere, surrounded by rotting food and their own feces. What breaks my heart to read is how, when some animals are being tested, the researchers sometimes remove the defenseless animals' vocal cords in an operation called ventriculocordectomy, so that they will not hear the animals' cries, groans, and yelps while they are being
experimented on.
Research is not limited only to the medical field, but extends to many other
areas. "Monkeys are the most likely subjects of experiments designed to
measure the effects of neutron-bomb radiation and the toxicity of chemical
warfare agent.....Radiation experiments on primates continue. Monkeys' eyes
are irradiated, and the animals are subjected to shocks of up to 1,200
volts." Monkeys are exposed to these radiations to observe
how cancer progresses. But what observers actually see is "primates that are
so distressed that they claw themselves and even bite hunks from their own
arms and legs in a futile attempt to stop the pain." The pain these
animals endure and go through is unbearable and unnecessary; other means
could be used to do the research, considering that many times this type of
research is unnecessary. Even worse is the fact that animals are being used
in the production of things such as make-up and household products. Is it
really necessary for a whale to die so we can wear lipstick for a couple of
hours, or so we can wax our floors? The price of these commodities is
ridiculous; an animal should never even be at risk of being hurt for such
superficial things. Many companies such as Gillette test their products on
animals before putting them on the public market. This is simply cruel,
animals are hurting and dying for things that are superficial. The worst is
the FUR COATS! It is nothing but greedy murder.
If anything, we are the ones who owe the animals a great debt. Without them we would be years behind in medicine, and many things that are done now would not be possible. But how far are we willing to go for the sake of medicine, technology, comforts, luxuries, and looks? Is all this worth the death and abuse of millions of animals? The answer is no. Granted that they have helped us greatly in the ordeal of humankind, by letting us test on them and eat them, we are ultimately responsible for most of these diseases and habits, and they should not pay for it. Even in the name of medical science, the life of an animal that is taken unnecessarily is a
mistake that will never be repaired.
f:\12000 essays\sciences (985)\Enviromental\Animal Rights.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Isn't man an amazing animal? He kills wildlife--birds,
kangaroos, deer, all kinds of cats, coyotes, groundhogs, mice, foxes and
dingoes--by the million in order to protect his domestic animals and their
feed. Then he kills domestic animals by the billion and eats them. This in
turn kills man by the million, because eating all those animals leads to
degenerative--and fatal--health conditions like heart disease, kidney
disease, and cancer. So then man tortures and kills millions more animals
to look for cures for these diseases. Elsewhere, millions of other human
beings are being killed by hunger and malnutrition because food they
could eat is being used to fatten domestic animals. Meanwhile, some
people are dying of sad laughter at the absurdity of man, who kills so
easily and so violently, and once a year sends out cards praying for
'Peace on Earth.'" (Coats, 13)
We need to realize that in today's society, animals
deserve just as much freedom as humans have. Although we
are larger in size, we are not superior in status. Animals have
been around on the earth for as long as humans, if not
longer. Animals play an important role in today's society
whether or not we choose to admit it. Like a newborn baby
learning to play with others, we must learn to share the
planet with animals.
One of the main issues being disputed today with animal
rights is SUPERIORITY. Over 7 billion animals die at the
hands of humans in the US every year. Out of those 7
billion animals, about 95% are killed for use as
food. Advocates of animal research justify their work by
presenting the obvious differences that exist between humans
and animals. These include size, status, strength, and ability.
Sometimes, one who is against animal rights will take the
attitude that "God gave them [animals] to us to use." What
these researchers fail to recognize is PAIN. All creatures are
capable of feeling pain. If a creature is capable of
experiencing pain, then it too can wish for the pain to
end. That right of animals, along with many others, is being
denied to them even as we speak. Animals have a few basic
rights which should be observed by all, no matter what
company or corporation they belong to. These rights include
freedom, the right to live peacefully in their own
environment, the right to receive respect and most
importantly, the right to LIFE.
Throughout history it has been noted by many, that
humans have gained their existence from animals. In 1871,
Charles Darwin proposed that one species could evolve from
another. He stated that humans had evolved from other
primates, such as apes and monkeys. Darwin related human
feelings to those of animals. By stating that certain human
characteristics could be traced back to animals, Darwin
caused much controversy. He was now contradicting the
traditional relationship between human and animal. In a
way, almost putting them both on the same level. This
theory questioned all that was believed to be true in society
and made people think about their purpose.
With many of the experiments done today, animals
are mistreated in every way, shape, and form. Usually, there is
a lack of adequate food and water. Ventilation for the
animals is minimal and many times cages are packed full with
animals, leaving very little if any room to move around. On
many occasions, the animal will die throughout the course of
the experiment. Animals are in laboratories, today, because
we are powerful enough to keep them there, not because they
truly belong there. Once we have an animal caged and
restrained, we suddenly gain an even greater feeling of
superiority over the animal. No matter what laws exist,
experimentation will continue. This is because the
experimenter's imagination is endless. Our containing the
animals can be related to one's enslavement of another human
being.
One way in which many domesticated, yet homeless,
animals are brought to laboratories is through Pound
Seizure. Pound Seizure is a law which requires
shelters/pounds to sell "extra" animals to experimenters.
As a result of Pound Seizure, 200,000 stray/homeless cats
and dogs die at the hands of researchers each year. At the
moment, there are five states which have Pound
Seizure laws. Iowa, Minnesota, Oklahoma, South Dakota, and Utah all allow these
practices to go on legally.
Today, one major reason that the animal rights
movement still plays a minor part in society is IGNORANCE.
Many people feel that animal testing does not exist because
they do not see it directly. What they do not realize is that
animal testing is being done by many of the major companies
around today. Colgate, Palmolive, Gillette, Johnson &
Johnson, and L'Oreal all test on animals. One way to get
around using these animal tested products is to look for
products that will specifically state "No Animal Testing Done
On This Product."
Today, an animal's rights are protected by the Animal
Welfare Act which was passed in 1966. It sets standards for
the treatment of animals used in research, zoos, circuses, and
pet stores. It covers housing, food, cleanliness, and medical
care. Although this act covers many animals, mice and rats,
which account for 70% of animals used in testing, are not
included, and many animal rights activists claim that the
standards are not strictly enforced. According to
Psychological Abstracts, every year approximately 1-2 million
animals are used for research purposes. About 90% of the
animals that are used are rats, mice, and birds. Presently
there are two major animal rights groups around. They
would be PETA and the ASPCA. PETA (People For the Ethical
Treatment of Animals) promotes vegetarianism--"Does Your
Food Have A Face?" (Achor, 57)--
and "cruelty free" products. PETA attempts to establish and
defend the rights of all animals. Their primary focus is on
the factory farms, laboratories and the fur trade, but will
also concern themselves with hunting, fishing, zoos, the
circus and other ways in which animals are used for
entertainment purposes. PETA is actively involved in
exposing all the illegal practices used in animal
experimentation.
As many new studies continue to come out, more
research is pointing to the conclusion that animal
experiments are not always as accurate as we'd like to think
they are. Although some similarities exist, quite a few of the
positive results obtained from animal tests will backfire when
first used on humans. A commonly used test by researchers
is the LD 50. The LD 50 (Lethal Dose, 50 percent) gets its name because
in this experiment 100 animals are taken and then subjected
to doses of a potentially dangerous chemical or drug.
The dosage is then increased until 50 of the animals die. If an
animal is "lucky" enough to survive, its life will be a total hell.
It could end up being deformed for life. If not deformed,
then definitely traumatized. After being subjected to
constant pain, punishment, stress and social and emotional
deprivation, the animal might never act the same.
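To make the procedure just described concrete, here is a minimal sketch in
Python (not from any source cited in this essay; the dose figures are
invented purely for illustration). It simply reports the lowest tested dose
at which at least half of a 100-animal group died, which is the quantity the
LD50 test is designed to find.

    # Hypothetical escalating-dose results: (dose in mg/kg, deaths out of 100 animals).
    trial_results = [(50, 4), (100, 11), (200, 27), (400, 52), (800, 83)]

    def approximate_ld50(results, group_size=100):
        """Return the lowest tested dose at which at least half the group died."""
        for dose, deaths in sorted(results):
            if deaths >= group_size / 2:
                return dose
        return None  # no tested dose reached 50% mortality

    print(approximate_ld50(trial_results))  # prints 400 for the made-up data above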
Animal rights is a humane position which looks out for
the rights of others, not just humans. It must be
understood that all living beings are unique expressions of
life. They each have their own inherent value, for if they did
not, then why would they exist? We may think that just
because animals do not speak to us, they do not possess
feelings, but they are capable of feeling pain and suffering.
Just like you and me, animals have the right to live their
lives without exploitation or unnecessary pain.
"Question: What has billions of legs but still can't run
away?"
"Answer: The 6 Billion animals raised and killed for food every
year in the US." (Achor, 77)
This may sound like a funny riddle, but just remember what,
or who, you are eating next time you have some sort of meat.
f:\12000 essays\sciences (985)\Enviromental\BATS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Bats
Prepared By:
C4
8th Grade
01-05-97
Contents
1. Title Page
2. Contents
3. Bat Facts
4. Congress Ave. Bridge
5. How To Get A Bat Out Of Your House
6. About Bat Houses
7. References
My report is on bats. I will start my story off by telling you some facts about bats.
Bat Facts
1. Did you know that the world's smallest mammal is the Bumblebee bat, which lives in Thailand? It weighs less than a penny!
2. Vampire bats adopt orphan pups (the name for a baby bat) and have been known to risk their lives to share food with the less fortunate.
3. The African Heart-Nosed bat can hear the footsteps of a beetle walking on sand from a distance of over six feet!
4. The giant Flying Fox bat from Indonesia has a wing span of six feet!
5. Disk-winged bats of Latin America have adhesive disks on both feet that enable them to live in unfurling banana leaves (or even walk up a window pane).
6. Nearly 1,000 kinds of bats account for almost a quarter of all mammal species, and most are highly beneficial.
7. Worldwide, bats are the most important natural enemies of night-flying insects!
8. A single brown bat can catch over 600 mosquitoes in just one hour!
9. Tropical bats are key elements in rain forest ecosystems which rely on them to pollinate flowers and disperse seeds for countless trees and shrubs.
10. Bat droppings in caves support whole ecosystems of unique organisms, including bacteria useful in detoxifying wastes, improving detergents, and producing gasohol and antibiotics.
11. More than 50% of American bat species are in severe decline or already listed as endangered. Losses are occurring at alarming rates worldwide.
12. All mammals can contract rabies; however, even the less than half of one percent of bats that do, normally bite only in self-defense and pose little threat to people who do not handle them.
13. An anticoagulant from Vampire bat saliva may soon be used to treat human heart patients.
14. Contrary to popular misconception, bats are not blind, do not become entangled in human hair, and seldom transmit disease to other animals or humans.
Well, enough with the facts. I think that should get you ready for the rest of my essay.
Austin, Texas
Congress Ave. Bridge
A Bit Of History.......
When engineers reconstructed downtown Austin's Congress Bridge in 1980, they had no idea that the new crevices beneath the bridge would make an ideal bat roost. Although bats had lived in Austin for years, it was headline news when they suddenly began moving by the thousands under the bridge. Reacting in fear, many people petitioned to have the bat colony eradicated.
About that time, Bat Conservation International (BCI) stepped in and told Austinites the surprising truth: that bats are gentle and incredibly sophisticated animals, that bat-watchers have nothing to fear if they don't try to handle the bats, and that on the nightly flights out from under the bridge, Austin bats eat 10,000 to 30,000 pounds of insects, including mosquitoes and numerous agricultural pests.
As the city came to appreciate its bats, the population under the Congress Avenue Bridge grew to be the largest urban bat colony in North America. With up to 1.5 million bats spiraling into the summer sunset, Austin now has one of the most unusual and fascinating tourist attractions anywhere!
Congress Avenue Bridge's bats are mostly Mexican free-tails (Tadarida brasiliensis). These bats migrate each spring from central Mexico. Most of the colony is female, and in early June each one gives birth to a single baby bat. At birth the babies weigh one-third as much as their mothers (the equivalent of a human giving birth to a 40-pound child!). The pink, hairless babies will grow to be about three to four inches long, with a wingspan of up to a foot. In just five weeks, they will learn to fly and hunt insects on their own. Until that time, each mother bat locates her pup (baby bat) among the thousands by its distinctive voice and scent.
What To Do If A Bat Gets Stuck In Your House
1. Open a door or window and wait for it to fly out.
2. Wait for the bat to calm down and stop flying. When it has stopped flying put a bowl over it and then slide cardboard under the bowl. Then all you have to do is open the door and pick the bowl up.
3. Another way is to build a net. It should look something like one of the nets that you use to catch butterflies in.
If You Would Like To Keep Bats Around Your House To Keep Those Insects Away This Summer.
You can order instructions on how to build a bat house, or you can buy one. My dad and I found instructions on how to build a bat house for 40-50 bats. We found these instructions in a magazine. I also found instructions for sale on the Internet for $6.95. They also sold the same bat house that my dad and I built (the one in the magazine for 40-50 bats). They were selling the bat house for $50. We built ours for about $8!
You can help protect bats by simply spreading the word about these gentle and beneficial animals. Tell a friend. Teach a child or parent. Write a letter to your government representative. Join BCI and become a member. You can even build your very own bat house.
f:\12000 essays\sciences (985)\Enviromental\Between the Forest and Greed .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Within the past decade there has been a rising "environmentally conscious" movement. The spectrum of issues contested by environmentalism has expanded rapidly and is reaching its zenith. Public dissatisfaction with the environmental movement is forming, as the movement has taken the fight for the environment too far. Donella Meadows is an environmentalist who has yet to fully think through the issue she is arguing. In her piece "Not Seeing the Forest for the Dollar Bills," she takes an almost infantile approach to the logging industry and the concept of clear cutting. Monetary motivation is her explanation for clear cutting, and she tries to portray the logging industry as a cold money-making machine. This of course neglects the fact that the reason logging generates capital is that the world needs wood. There are several economic and environmental issues that are considered when loggers enter an area. Haphazard clear cutting of forests, whatever Meadows would like us to think, does not happen. In every industry, every aspect is carefully debated and analyzed for its short- and long-term outcomes.
Any industry that capitalizes on earth's resources figuratively signs a pact with the earth. This pact bonds the industry to the earth and requires that any harvesting of resources is not done with haste and waste. There is a symbiotic relationship between the two: for the industry to exist there must be a constant supply of the resource, and without a constant supply the industry dies. Now, many people believe that the logging industry's objective is to cut down all the trees that are currently standing. As horrific as this scenario may sound, it is far from the truth. Without trees to cut down there is no industry. The logging industry is not so foolish as to rampage through the forests and cut down all the trees. As they cut, they plant. Replacing forests with saplings may look inadequate, but over a long period of time these saplings will become a new forest. The earth as we know it today has been in existence for millions of years. Even if newly planted trees take a century to grow back, that is only a pinpoint on the timeline. The millions of acres of forested land currently left untouched will not be engulfed by blades and tractors instantly. It will take time to cut down the trees, as it will take time to grow them back.
Meadows seems to have a misconception of industries and the service they provide. All industries, from recycling to logging, are trying to maximize their profits. If this means moving their plants offshore, so be it. These industries provide the world with services that we need to operate as an advanced civilization. She claims that the remaining old growth forests are on protected federal land. If this is the case she has little to complain about: the remaining portion of what she is trying to protect is protected. At the same time she claims that old growth forest cannot be recreated. This seems far-fetched from the eyes of a historian. Referring back to the history of the earth, one can assume that before humans inhabited the land, forests burnt to the ground leaving nothing but charred remains, yet forests still exist today. Now, when they are threatened by fire, we save them instead of allowing nature to take its course. Meadows gives the reader a choice between "the forest and greed". If her choices were accurate one would probably choose the forest. The problem lies in her choices: they are given to the reader from only one perspective... hers. When an argument is based upon a one-sided view it loses strength. It only leads to flaws and the eventual dismissal of the argument. In any debate one should look at the topic from the opposing side before approaching it from one's own.
In every country, forests are considered a valuable resource. They provide us with wood to build homes and paper to communicate. With a constantly growing population, the need for homes becomes greater and thus the supply of wood must also increase. The real choice that should be analyzed is "the forest or your home". Many alternate forms of building materials are phased into the system as need be, but the need for wood will always exist. Knowing that the world will continue to cut down trees, the only solution to forest depletion is reforestation. A forest ecologically engineered with the proper plants and bacteria may not be perfect initially, but it will someday become an old growth forest. The animals that live in these forests will learn to adapt. The few animals that don't adapt will probably die off or move. One may say that is a cold way to look at the problem, but thousands of species have become extinct, and this process is called evolution. Whether evolution is accelerated by man or comes in due time by nature, the outcome is the same. I would propose that a human life is more important than any other life on this planet, and that taking care of humans is a higher priority than taking care of animals. Meadows has yet to understand the logging industry and what it is trying to accomplish. Her piece is based on fear and poor preparation, and that is why the choices she gives the reader are inaccurate.
f:\12000 essays\sciences (985)\Enviromental\Biodiversity.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"BIODIVERSITY"
Biodiversity, as defined by E.O. Wilson, "is meant to be all inclusive- it's the genetic based variation of living organisms at all levels, from the variety of genes in populations of single species, through species, on up to the array of natural ecosystems." This includes plants, animals, insects, fungi, bacteria, and all microorganisms. All of these things create what is known as a web: they interact with each other in some way, and therefore they depend on one another throughout their lifetimes. There are many separate ways in which we can study biodiversity, including genetic, species, community, ecosystem, and taxonomic diversity. Biodiversity can best be measured as the number of species in a given area, or scientifically, "species richness."
Today there is a biodiversity crisis facing us, caused mainly by the destruction of habitats. This dramatically increases the rate at which species decline in number and become extinct. It is appalling to know that we are the main cause of this. Overfishing, pollution, overcutting, and an increase in population all contribute to the problem. An example of this is the gold mining operation that we saw in the video: while the mine operated, mercury dripped into the water, and the mercury then got into the fish and into the humans who ate the fish.
Biodiversity promotes a healthy environment. Environments rich in biodiversity are stronger and can withstand drought, disease, and other stresses that environments lacking it cannot. In the video, during the drought, the side of the field with a more diverse environment held firm as the other wilted away.
Areas that are very diverse are also very important to humans. They provide a wide range of pharmaceuticals such as aspirin and penicillin.
"Some 40 percent of U.S. prescriptions are for pharmaceuticals derived from wild plants, animals and microorganisms.(E.O. Wilson)" They also provide fruit, oils, beverages, drugs (including illegal narcotics), fuel, and much more. Humans also benefit from biodiversity from what E.O. Wilson calls "biophillia," which is the natural affiliation humans have for natural environments.(E.O. Wilson)
Old growth forests play a critical role in biodiversity preservation; their most important feature is their biodiversity. Old growth forests provide us with many of the things that we as humans take for granted, for example breathable air, pure water, and pest control through birds, bats, and insects. In the Eastern U.S., most of the old growth occurs in small isolated areas. Scientists have come to the conclusion that even if these mature areas cover a substantial portion of the landscape, they will not provide long-term diversity for many species that live in such a community. (How much old growth..?)
Many environmentalists are increasingly concerned with this biodiversity crisis. As humans we need to do our part to end it. Most people don't realize the impact of the environment on our lives. "Our most valuable but least appreciated resource" (E.O. Wilson): this quote best summarizes society's actions toward our ecosystem.
f:\12000 essays\sciences (985)\Enviromental\Can We Say NO To Recycling.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Dr. Nadia El-Kholy.
English 113.
Tamer Wadid Shalaby.
Final Draft Paper.
Can We Say "NO" To Recycling
Lately the earth's capacity to tolerate exploitation and absorb solid waste has diminished, due to excess trashing. People dispose of lots of stuff and simply do not care. Scientists therefore found a way to reuse things, a process called "recycling". This new approach seemed quite successful at the beginning, until its true identity appeared. Recycling first started as man's best friend; people were intrigued by this new phenomenon. What could be better than using things that were already used? Recycling has been very useful, especially since man is constantly consuming, burning up, wearing out, replacing and disposing at an alarming rate (Durning 1992). Unfortunately, however, recycling has proven to be quite costly. Although recycling of waste material solves the problem of garbage disposal at landfills and saves resources, it nevertheless entails large hidden costs in collecting, sorting and manufacturing; therefore, it is necessary for the government to overcome such problems for recycling to be worthwhile, and for manufacturers and consumers to consume less.
Recycling has proven its efficiency in solving the problem of garbage disposal at landfills. With the accumulation of garbage throughout the years, the space available for garbage has largely diminished. In the United States, for example, almost 67% of the waste stream ends up in landfills (Scott 25). This has in fact increased the price of disposal. As Kimball stated, "tipping fees" at landfills are often prohibitive (3), and some cannot find landfills to dump their garbage at all. It can cost up to $158 to pick up a ton of garbage and dispose of it (Consumer Reports 1994). Besides, these landfills pollute the surrounding area with hazardous materials and contaminate underground water. The contamination of the underground water may not be discovered until 12 years after the poisons (benzene, formaldehyde, mercury, and BCEE) have actually contaminated the land and sunk 24 feet into the ground, contaminating about 50 million gallons of underground water (Dahir 94). Moreover, this land could be used in more useful ways, such as building schools or hospitals, or simply turned into large green areas to purify the air. This problem is particularly acute in Egypt, since even in central areas of the city we find piles of garbage very near to residential areas. Recycling would therefore eliminate this problem and protect the environment.
If we consider burning as an alternative, it is not very advantageous. Burning is usually done in incinerators; according to Plenum, incineration is the process of disposing of the "combustible portion of the community wastes" (81). This burning pollutes the air in the area around it. It is not the way to solve the problem of recycling, because it solves one problem by creating another, namely air pollution. In this process a number of pollutants are emitted which poison the air. Carbon dioxide and lead are by-products of burning that most health organisations consider highly toxic. These by-products affect children mentally and physically. In addition, carbon dioxide is considered one of the main causes of global warming, because the molecule captures heat and stores it, thus creating the greenhouse effect. Plastics, moreover, are toxic when burnt: according to Plenum, acrylic-type plastics emit HCN gas, and bromine compounds added to plastics result in the emission of HBr, all of which are dangerous pollutants (157). Obviously burning cannot be considered an alternative, and as stated in Consumer Reports, "Recycling does help to keep garbage out of landfills and incinerators, both of which pose environmental problems" (Feb 1994). Although burning lessens the physical amount of the waste material, it is one of the easiest ways to pollute the air.
Though these are great advantages to us and the environment, recycling costs more than you could imagine. A study found that when the cost of garbage is calculated by volume, landfilling and recycling costs are roughly the same. Recycling does not appear to save any money; this applies to most European countries and the United States, and studies have lately proven so. "Recycling is a good thing, but it costs money" (Boerner and Chilton 7). This view has been confirmed by John E. Jacobson, the president of AUS, a consulting firm in Philadelphia, who stated that it is often more expensive to recycle than to manufacture from raw material. The process goes through several phases: first collecting and sorting garbage, and second manufacturing and marketing. Collection is a phase by itself. In developed countries such as the States, Europe, and the Far East, people have a great deal of awareness of the situation. People know that recycling is important and would save us a great deal, so these countries provide facilities to help people recycle, such as machines that recycle cans on the spot and give 2.5 cents per can, and recyclables-collecting programs. These collecting programs are costly, and they do not work in apartment buildings. The vehicles that transport these materials are not cheap either, and much of the trucks' capacity is wasted on bulky objects, especially when the trash contains a lot of plastic containers. More trucks and more rounds are required to collect recyclables, which adds to the cost. "We took plastics out of recycling programs because we could not afford to drive around with trucks with 45% of their collection capacity taken up by air" (Consumer Reports 1994).
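The 45% figure in that quotation also shows, in rough arithmetic, why bulky recyclables drive up collection costs. The short Python sketch below is illustrative only: the trip cost and truck capacity are made-up numbers, and only the 45% wasted-capacity figure comes from the quote above.

    # Illustrative only: how wasted truck capacity inflates the cost per collected ton.
    COST_PER_TRIP = 100.0      # hypothetical cost of one collection round, in dollars
    FULL_CAPACITY_TONS = 10.0  # hypothetical truck capacity, in tons

    def cost_per_ton(wasted_fraction):
        """Cost per ton actually collected when part of the capacity is empty space."""
        usable_tons = FULL_CAPACITY_TONS * (1 - wasted_fraction)
        return COST_PER_TRIP / usable_tons

    print(round(cost_per_ton(0.0), 2))   # 10.0 dollars per ton with a full load
    print(round(cost_per_ton(0.45), 2))  # 18.18 dollars per ton when 45% of capacity is air

In other words, with 45% of the load taken up by air, nearly twice as many trips (and nearly twice the cost per ton) are needed to collect the same amount of material.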
As for the sorting process, it entails lots of manpower and tools, both of which are very expensive. The material cannot simply all be fed into one big machine so that "boom", we have recycled material. Every kind of material must be separated and then fed into big recycling machines. This process of separation, or sorting, costs money. Manufacturers have to hire labourers to sort out glass from aluminium from cardboard from tin and so on. According to Consumer Reports, the sorting equipment and the manpower involved in the process are a big investment (1994). It is important to know that this process of collecting and sorting is particularly expensive in developed countries, where sophisticated tools are used and manpower is limited and expensive. In developing countries like Egypt, however, collecting and sorting are rather primitive and are carried out by the "Zabaleen" or a second-hand car, which makes the process less expensive than in developed countries.
Manufacturing and marketing is the second phase in this process. In order to build the factories that do the recycling, the most important thing we must have is capital. Building these factories is quite expensive, and it takes time because the latest technology must be applied. According to Consumer Reports, when garbage is sorted it is sent to factories to be put into industry. These factories, usually designed for producing from raw material, need "retooling" in order to use recycled material, which is very expensive. For example Union Carbide Inc., one of the nation's major suppliers of plastic, had to spend 10 million dollars on building a factory that would recycle the plastic bottles it had produced (1994). To retool a factory and make it compatible with the demands of recycling, the machines in an old factory must be replaced with new ones, and this is costly. For some reason all the machinery in a recycling firm tends to wear out quickly, due to the interaction of these materials, so new machines must be bought every time the old ones wear out, which is hardly cheap.
Another disadvantage of recycling, which makes it unattractive to manufacturers, is that recycled material is not in high demand economically; it is not as pure as virgin material. In a grocery store in Los Angeles, where I was staying, most of the food was kept in cardboard containers or boxes made out of recycled multimaterial. Books and furniture made out of artificial wood and paper, on the other hand, tend not to use recycled material. According to "Recycling: Is It Worth the Effort?", one obstacle faced in the recycling of paper is that recycled paper is of lower quality than virgin paper for some uses (94). This is a very good reason to look for an alternative, because recycled material cannot be used in projects that are worthwhile, such as books and furniture. "In many cases, manufacturers would be forced to switch from multimaterial packages, which are difficult to recycle, to homogeneous, single-layered packaging" (Boener and Chilton 13).
So if the marketing of recycled products is not economically worthwhile, then the whole process of recycling cannot be economically efficient. Manufacturers cannot be motivated to recycle if their recycled products are not in demand. "We have got to be realistic about some things," said Mitchell Alison. "We set goals with certain economies in mind; we no longer have those economies, so we have got to revisit these goals."
What led most economists to look for a substitute for recycling is the inconsistent quality of the products. The products of recycling are not as good as the original ones, leading consumers to look for a substitute. This inconsistency is due to improper sorting of material. People are expected to have a separate container for each thing that is recycled; when things are misplaced, it is due either to untrained employees who do not differentiate between the recyclables or to careless dumping, which is itself a result of the unconcern of the people themselves. The recycled products are also contaminated: when different kinds of paper, or aluminium cans that have been used differently, are mixed together to be recycled, they do not produce a consistent quality. Recycled paper faces four main obstacles: weak marketing for mixed paper, lower quality than virgin paper in some uses, fiber that cannot be recycled indefinitely, and photocopied or laser-printed paper that is hard to de-ink (Consumer Reports 94). It is not as good as using the virgin material, and so has lower quality and less durability.
Recycling programs cost money, and where do you think this money comes from? Taxpayers' pockets. Taxpayers are the ones who are stuck paying for these programs; they are forced to do so. A grocery store that uses recycled paper bags, plastic containers, and tin cans must be able to pay for them, or it will increase the price of the product, and the consumer will pay of course, because it is something humane and for the environment. As evidence that recycling expenses are a burden on consumers, a margarine producer that switched from plastic tubs to aluminium containers saw the expenses of the product increase by 25% to 50%, and this increase will obviously be passed on to the consumer, raising the price of the commodity (Boener and Chilton 14). Most recycling organisations are non-profit organisations, yet it is still expensive to use recycled material, because the recycled material itself costs so much.
The government must have a role in all this: its role is to overcome such problems. These programs must be financed by the government, but not in a way that makes the taxpayers suffer. Also, some materials are better dumped than recycled; the government should look for the materials that cost the least to recycle and use them in most things. Such research should be conducted and financed by the government. The packaging industry consumes a lot of paper and plastic; if this industry would consider using recycled material and less packaging, it would save a lot of energy, time and resources. "Manufacturers of polycoated paper packages claim that recycling their products is both a boon to source reduction efforts and an energy-efficient process" (Kimball 64). That is what we all want: a program that is cost-efficient and saves energy. Also, taxpayers should pay according to the amount of recyclables each household recycles. It should not be the same amount for each household, because some people recycle less than others and therefore should pay less. This way the government will create suitable conditions to encourage recycling programs and perhaps help preserve the environment.
This rapid leap in our lives has led us to create recycling, and hopefully it will lead us to look for a way to plan it better. Better planning for recycling will help prevent the problems recycling faces now. If it could be made to satisfy the conditions mentioned above, to be cost-efficient, not time consuming, and to yield better quality products, this would be like a dream come true. Recycling should be cost-efficient because all nations are facing massive economic problems; financing these programs is an enormous job, and if it has to be done anyway, then we should at least look for ways to make it cost-efficient. People should learn to use and reuse, rather than use and dispose. If we can use things more than once and save energy, then why not do so? "Reuse means getting more use out of a product to reduce the waste stream. Many so-called disposable items, such as plastic cups, knives, and forks, can actually be washed and used several times" (Scott 25). As we can see, the benefits of recycling are over-estimated and the costs are under-estimated. What we should do is not only look for an alternative but also look for other ways to improve recycling. The natural resources will not last for ever; eventually everything comes to an end, and the end is very near for our natural resources. What is of greater importance is to find alternatives to such resources if they actually run out. Recycling is backed by most of the general public for its ideas of saving the environment, energy, and virgin material, but it is not that good or that efficient; it still costs money and is not that safe. "Recycling does not necessarily provide for safer or more environmentally sound disposal than landfilling or incinerators. The recycling process itself generates enormous amounts of hazardous wastes" (Schaumburg 32). Still, recycling can decrease and maybe solve the problem of ever increasing pollution. Imagine that every time someone throws a piece of paper in the garbage, it is as if a person were cutting a leaf off a tree. That is what happens when one does so, and recycling was the way to solve such a problem.
f:\12000 essays\sciences (985)\Enviromental\CFCS detroy the Ozone.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CFCs Cause Deterioration of the Ozone Layer
The deterioration of the ozone layer, caused by CFCs, endangers the lives of humans. CFCs have a diminishing effect on the ozone layer, and the deterioration of the ozone causes an increase in ultraviolet (UV) radiation, which can have a negative effect on human skin and eyes. As a writer for Newsweek, I have investigated the scenario and found the following information.
The earth's atmosphere is a blanket of air that surrounds the planet. This atmospheric air is made up of many different gases, 78% nitrogen, 21% oxygen, and 1% of a dozen or more other gases like carbon dioxide, helium, and ozone.
This atmosphere extends many miles out from the earth's surface. However, it is not a uniform layer from top to bottom: as one moves out from the planet's surface, the atmosphere becomes progressively less dense. The atmosphere can be divided into four major regions.
The first region is the troposphere, which extends about 6.5 miles above the planet's surface. The troposphere contains the oxygen that we breathe and is where the majority of our weather takes place.
Beyond the troposphere is the second region of the atmosphere, the stratosphere. The stratosphere extends from roughly 6.5 to 30 miles above the earth's surface. The air in this region is much less dense than in the troposphere, and it is a lot drier. The stratosphere is the region that contains the majority of the ozone layer.
Past the stratosphere is the mesosphere which extends to 50 miles above the planet. The last region is the thermosphere. The thermosphere's outermost edge is roughly 600 miles above the surface of the earth. Beyond it, the airless vacuum of space begins.
Molecular oxygen is made up of two oxygen atoms that are bonded together and is written O2.
Like oxygen, ozone is a gas made up of oxygen atoms. However, a molecule of ozone is made up of three atoms of oxygen bonded together, so O3 represents ozone. Ozone makes up only 0.01% of the atmosphere, and 90% of it is found in the stratosphere, concentrated in a layer between 7 and 22 miles above the earth's surface.
The great depth of the ozone in the stratosphere would lead you to believe that the layer is very thick, but it is not. If it were condensed, the ozone layer would only be a few millimeters thick (Rowland and Molina 1994, p. 23).
The ozone is made in the stratosphere. It is continuously being formed, broken down, and reformed, over and over again. Furthermore, the three key elements of the cycle are: oxygen, ozone, and the energy from the sun.
The ultimate source of energy for our planet is the sun. This energy travels through space in the form of electromagnetic radiation, which is often described in terms of waves and their lengths, or wavelengths. The sun emits a wide range of wavelengths, known as the electromagnetic spectrum. This spectrum includes gamma, ultraviolet, visible, infrared, and radio waves.
It is the ultraviolet (UV) radiation coming from the sun that drives the ozone cycle in the stratosphere. When an oxygen molecule is hit by a high-energy UV ray, the O2 molecule absorbs the ray's energy. As a result, the bond holding the oxygen molecule together breaks, splitting the molecule into two oxygen atoms (O2 + UV -> O + O). These free oxygen atoms quickly join with nearby oxygen molecules to form ozone molecules (O + O2 -> O3). At the same time, ozone molecules are also being hit; they absorb the ray's energy and break apart, leaving behind an oxygen molecule and a single oxygen atom (O3 + UV -> O2 + O). The entire process then repeats itself, over and over again (Rowland and Molina 1991, p. 42).
As a result of this cycle, about the same amount of ozone is produced as is broken down in the stratosphere. Therefore, the amount of ozone stays the same under normal circumstances (Rowland and Molina 1991 p.43).
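The formation-and-destruction cycle described in the last two paragraphs can be written compactly as three reactions (standard notation; the essay itself spells them out only in prose):

    \begin{align*}
    \mathrm{O_2} + h\nu &\rightarrow \mathrm{O} + \mathrm{O}\\
    \mathrm{O} + \mathrm{O_2} &\rightarrow \mathrm{O_3}\\
    \mathrm{O_3} + h\nu &\rightarrow \mathrm{O_2} + \mathrm{O}
    \end{align*}

Here h-nu stands for the absorbed UV ray. Because ozone is produced in the second step at roughly the rate it is split apart in the third, the total amount of stratospheric ozone stays about constant under normal conditions, which is the steady state described above.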
A constant and stable ozone layer is important for life on earth because the high-energy UV rays absorbed by the ozone layer are extremely dangerous. These rays can kill some organisms while seriously damaging others. For example, some bacteria exposed to UV rays will die. Plants, on land and in the oceans, can be seriously damaged or even destroyed by UV rays. When humans are exposed to these powerful rays, their skin can burn, their eyes can be damaged, and permanent changes can occur in cells that can lead to cancer and other problems. By absorbing the UV rays, the ozone molecules in the ozone layer form a shield that protects life on earth from dangerous and even deadly UV radiation. CFCs affect this process.
Chlorofluorocarbons (CFCs) are man-made chemicals that were invented in 1928. However, they were not used on a large scale until the 1950s. There are many different types of CFCs, but they all contain the same basic elements: chlorine, fluorine, and carbon; different CFCs contain different amounts of these elements. Some of the more commonly used CFCs are CFC-11, also known as R-11 or trichlorofluoromethane; CFC-12, also known as Freon, R-12, or dichlorodifluoromethane; and CFC-113, also known as R-113, CF2ClCFCl2, or 1,1,2-trichlorotrifluoroethane. CFCs are considered to be chemically unreactive, or stable.
Due to their stability, CFCs have been used for many different tasks. For example, CFC-12 is the most popular liquid coolant for refrigerators and air conditioners. Several other CFCs work well as aerosol propellants, in manufacturing foam, and in making Styrofoam containers. Others are used for cleaning delicate electronic equipment, such as computer chips and circuit boards. CFCs appeared to be the perfect industrial chemical because they were, seemingly, completely safe for people and the environment.
However, two scientists, F. Sherwood Rowland and Mario J. Molina, became curious whether CFCs were as stable high in the atmosphere as they were on earth. In 1974 they published a paper which outlined their concerns and findings on CFCs.
In their paper, Rowland and Molina explained how CFCs would damage the ozone layer. After evaporation, they reasoned, the CFCs' stability means they would not combine with other molecules in the air. Therefore, they would not be involved in the natural processes that remove most foreign chemicals from the lower region of the atmosphere. Instead, they would remain there for a long period of time, "50-200 years" (Rowland 1991 p. 32), gradually rising through the troposphere into the stratosphere (Rowland and Molina 1974 p. 39).
In the stratosphere, CFCs would be exposed to UV radiation. Once exposed to UV radiation, the bonds holding these chlorine-containing compounds together would be broken by the rays. When a molecule of a CFC breaks apart, chlorine atoms (Cl) are released, and individual chlorine atoms are very reactive. Rowland and Molina knew from laboratory experiments that chlorine atoms react with ozone molecules in a way that destroys the ozone. Therefore, the two hypothesized that CFCs would indeed harm the ozone layer in the same way chlorine destroyed ozone in experiments on earth. They warned society of the dangers; however, they were not taken seriously until the 1980s, when British scientists working at Halley Bay, using a Dobson spectrometer, discovered the hole in the ozone layer over the Antarctic coast (Farman, Gardiner, and Shanklin, p. 207). In 1985, the British scientists told the world about their findings, and in 1995 Rowland and Molina were awarded the Nobel Prize in Chemistry. Scientists are now certain of the damage done by CFCs. However, CFCs themselves do not destroy the ozone; their decay products do.
After CFCs reach the stratosphere and come into contact with UV radiation (photolyze), the chlorine atoms are released. Due to their high reactivity, the chlorine atoms do not remain free for very long; they rapidly join nearby molecules. Since these reactions occur in the ozone layer, many of these nearby molecules are ozone molecules.
When a chlorine atom and an ozone molecule come together, the chlorine atom binds to one of the oxygen atoms of the ozone molecule. "As a result of the reaction, the ozone molecule is destroyed and a molecule of oxygen and chlorine monoxide (ClO) are left over" (Rowland 1989 p. 71).
The ozone-destroying process does not stop there. Each ClO molecule goes on to react with other molecules nearby. When two ClO molecules come together, they briefly combine; this combined molecule breaks apart very quickly, leaving oxygen gas (O2) and chlorine atoms (Cl). These chlorine atoms are then free again to destroy more ozone molecules. With the destruction of ozone molecules comes more destructive UV radiation.
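Written in the same notation used earlier for the ozone cycle, the chlorine-driven destruction described in the last two paragraphs looks roughly like this; the second line is a simplified net version of the two-ClO step (the full mechanism passes through a short-lived ClO dimer), so treat it as a summary rather than the exact reaction sequence:

    \begin{align*}
    \mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2}\\
    \mathrm{ClO} + \mathrm{ClO} &\rightarrow 2\,\mathrm{Cl} + \mathrm{O_2} \quad (\text{net})\\
    \text{overall:}\qquad 2\,\mathrm{O_3} &\rightarrow 3\,\mathrm{O_2}
    \end{align*}

Because the chlorine atoms come out of the cycle unchanged, a single atom can destroy many ozone molecules before it is finally removed from the stratosphere.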
The types of UV rays absorbed by the ozone layer are the same ones that are most harmful to humans, causing skin cancer and cataracts. Depletion of the ozone layer therefore results in increased UV radiation exposure.
One effect of UV on humans is skin cancer. "Most skin cancers fall into three classes: basal cell carcinomas, squamous cell carcinomas, and melanomas. In the US there were 500,000 cases of the first, 100,000 cases of the second, and 27,000 of the third type, in 1990" (Wayne p. 47). Cases of melanoma are estimated to have increased by an average of 10% from 1979 to 1993, and even larger increases are believed to be occurring in the southern hemisphere. Studies also suggest that a 1% decrease in stratospheric ozone will result in a 2% increase in skin cancers (Wayne p. 49). Some of these skin cancers can result in death. Malignant melanomas are much more dangerous, although they are the least common. Malignant melanoma affects the pigment cells in the skin and can spread rapidly to the blood and lymphatic system. Wayne says these have become increasingly frequent throughout the world, especially in areas of higher latitudes: "there is a correlation between melanomas and exposure to UV. Melanoma incidence is correlated with latitude, with twice as many deaths (relative to state population) in Florida or Texas as in Wisconsin or Montana" (Wayne p. 50). Melanomas can take up to 20 years to develop, so time will give us a better picture of the effects increased UV exposure has on the skin. The eyes are also affected by UV rays.
An increase in UV rays results in an increase in UV absorption by the eye. Chronic UV exposure has been shown to be a factor in eye disease, says Roach, who notes that cataracts are the number one preventable cause of blindness (Roach p. 119).
The latest findings indicate that "every 1% decrease in ozone levels results in a 0.6-0.8% increase in eye cataracts, or annually approximately 100,000 to 150,000 additional cases of cataract-induced blindness worldwide" (Roach p. 122-3).
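The two scaling figures quoted above (a 2% rise in skin cancers per 1% ozone loss, from Wayne, and a 0.6-0.8% rise in cataracts per 1% loss, from Roach) can be applied directly. The short Python sketch below is illustrative only and assumes the relationships stay linear, which the sources quote only for modest ozone losses.

    # Rough linear scalings quoted above (Wayne p. 49; Roach p. 122-3); illustrative only.
    def skin_cancer_increase_pct(ozone_decrease_pct):
        """About 2% more skin cancers for every 1% loss of stratospheric ozone."""
        return 2.0 * ozone_decrease_pct

    def cataract_increase_pct(ozone_decrease_pct):
        """About 0.6-0.8% more cataracts per 1% ozone loss; the midpoint 0.7 is used here."""
        return 0.7 * ozone_decrease_pct

    # Example: a 5% drop in stratospheric ozone.
    print(skin_cancer_increase_pct(5))  # 10.0 -> roughly 10% more skin cancers
    print(cataract_increase_pct(5))     # 3.5  -> roughly 3.5% more cataracts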
UV rays also cause other eye injuries, including photokeratitis (also known as sun blindness or snow blindness), damage to the retina, and intraocular melanoma tumors. Roach's predictions suggest a substantial future increase in eye cancer rates. However, some object to claims about the effects CFCs have on the ozone and on humans.
Two of the more common objections are that CFCs are too heavy to reach the stratosphere, and that we should not be concerned about CFCs because the majority of chlorine in the atmosphere is created by the acidification of salt spray.
As for the first objection, atmospheric gases do not segregate by weight in the troposphere and the stratosphere. This is because vertical transport in the troposphere takes place by convection and turbulent mixing, says Wayne. In the stratosphere and the mesosphere, he says, it takes place by "eddy diffusion", the gradual mechanical mixing of gas by motions on smaller scales; these mechanisms do not distinguish molecular masses (Wayne Ch. 4).
As for the second objection, it is an assumption that is simply incorrect: "Eighty percent of the chlorine found is from CFCs and other man made organic chlorine compounds" (Rowland 1989 p. 77).
In conclusion, despite the growing list of negative effects of UV radiation, we continue to release ozone-depleting chemicals into the atmosphere. Despite the availability of safer alternatives, we continue to promote technologies that are only slightly safer than the ones they replaced. Despite all of the current information on the destructive effects of CFCs, we still continue to use them on a mass scale.
Scientific research has only begun to discover the impacts of UV radiation, but what we do know should be enough for action. We cannot afford to sit around and wait for the damage to reach a point that forces us to react; by then it will be too late.
The time to act is now, because even with an immediate and complete end to the production and release of ozone-depleting substances into the environment, we are still left with many decades of decreasing ozone and increased UV exposure. We must think long term and act now.
Works Cited
Farman, J.C., B.G. Gardiner, and J.D. Shanklin. "Large losses of total ozone in Antarctica reveal seasonal ClOx/NOx interaction." Nature v.230 (Aug. 4, 1985): p. 205-215.
Roach, M. "Sun Track." Health v.201 (May/June 1992): p. 119-125.
Rowland, F.S. "Chlorofluorocarbons and the depletion of stratospheric ozone." American Scientist v.128 (Nov. 4, 1989): p. 70-78.
Rowland, F.S. and M.J. Molina. "Ozone depletion: 20 years after the alarm." Chemical Engineering News v.20 (Jan. 11, 1994): p. 20-34.
Rowland, F.S. and M.J. Molina. "Chlorofluorocarbons in the environment." Rev. Geophys. and Space Phys. v.7 (Mar. 1975): p. 13-73.
Wayne, R.P. Chemistry of Atmospheres. New York: Oxford University Press, 1991.
f:\12000 essays\sciences (985)\Enviromental\Chimpanzee Cannabalism and Infanticide.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I surrender all copyrights to Jens Shriver.
The acts of cannibalism and infanticide are very apparent in the behavior of the chimpanzee. Many African studies show that wild chimpanzees kill and eat infants of their own species (Goodall, 1986:151). Although there is no clear answer as to why chimps engage in this very violent and sometimes gruesome behavior, there are many ideas and suggestions. This essay will deal with chimpanzee aggression, cannibalism, and infanticide. It will present information from major research studies performed in Africa and analyze how and why this strange behavior occurs in a primate commonly thought to be peaceful.
Wild chimpanzees (Pan troglodytes schweinfurthii) are known to kill and eat mammals in various parts of Africa. Monkeys have been recorded as prey in the Gombe National Park, the Kasakati Basin, and the Budongo Forest. Moreover, there is new evidence that chimpanzees near the Ugalla River of western Tanzania also consume mammals (Riss, 1990:167). Cannibalism has also been recorded in the Budongo Forest, the Mahale Mountains, and the Gombe National Park.
Jane Goodall's May 1979 article in National Geographic, "Life and Death at Gombe," revealed for the first time that chimpanzees, who had always been perceived as playful, gentle animals, could suddenly become dangerous killers. "I knew that some of our chimpanzees, so gentle for the most part, could on occasion become savage killers, ruthless cannibals, and that they had their own form of primitive warfare" (Goodall, 1979:594). To try to explain this ruthless behavior it is necessary to first analyze their social upbringing and unique lifestyle.
The Chimpanzee society is clearly a male dominated aggressive social unit. Males are larger than females, they are more openly aggressive, and they fight more often. (Holloway, 1974:261)
These fights can look extremely fierce and
the victim screams loudly. But it is rare
for a fight between community members to last
longer than quarter of a minute, and it is
even more unusual for such a fight to result
in serious injury.(Goodall, 1992:7)
Many fights break out suddenly. Afterwards the loser of the fight, even though clearly fearful of the aggressor, will almost always approach him and adopt a submissive posture.(Goodall, 1992:8) The loser is giving in and admitting that he has lost and only feels relaxed when the aggressor reaches out and gives what is called a "reassurance gesture-he will touch, pat, kiss or embrace the supplicator (loser)."(Goodall, 1992:8)
Another example of chimpanzee aggression is the charging display. Although females sometimes display this behavior, especially high-ranking, confident females, it is typically a male performance (Reynolds, 1967:82). During such a display, the chimp charges flat out across the ground, slapping his hands and stamping his feet. The chimp's hair begins to bristle and his lips bunch in a ferocious scowl. He may pitch rocks or jump around swinging branches (Strier, 1992:46). Essentially what he is doing is making himself look bigger and more dangerous than he actually is, trying to intimidate his opponents. "We have found, over thirty years of study, that the young males who display the most frequently, the most impressively, and with the most imagination, are the most likely to rise quickly to a high position in the male dominance hierarchy." (Goodall, 1992:9)
In essence, every young male chimp is on a lifelong quest to attain the top-ranking position of the male hierarchy, called the "alpha male." Many of the male chimpanzees spend a lot of energy and run risks of serious injury in pursuit of higher status. The rewards of the alpha male are claiming rights to food and female partners, and he also acquires a position exempt from attack by fellow chimps (Goodall, 1979:616). However, the discussion so far has dealt solely with intra-group aggression (fighting within the same community); aggression between communities is grotesquely different.
A chimpanzee community has a home range within which its members constantly roam, usually roughly five to eight square miles. The adult male chimpanzees, usually in groups of three, take turns patrolling the boundaries of their area, keeping close together, silent and alert (Goodall, 1992:14). As they travel they pick up objects and sniff them as if trying to find clues to locate strangers. If a patrol meets a group from another community, both sides usually engage in threats and then are likely to retreat back to their home ground (Holloway, 1974:261). But if a single individual is encountered, or a mother and a child, then the patrolling males usually chase and, if they can, attack the stranger (Goodall, 1979:599). "Ten very serious attacks on mothers or old females of neighboring communities have been recorded in Gombe since 1970; twice the infants of the victims were killed; one other infant died from wounds." (Goodall, 1979:599)
In 1972 the chimpanzees of Gombe divided into two groups: the southern group (Kahama) and the northern group (Kasakela). This was the start of what Jane Goodall called the "four year war." In 1974, a gang of five chimpanzees from the Kasakela community caught a single male of the Kahama group. They hit, kicked, and bit him for twenty minutes and left him bleeding from many serious wounds. A month after this original occurrence another prime Kahama male was caught by three chimps from Kasakela and severely beaten. A few weeks later he was found terribly thin and with a deep unhealed gash in his thigh. There were three more brutal attacks, leaving three more Kahama chimpanzees dead before 1977 (Goodall, 1979:606). By 1978 the northern males had killed all of the southern group and taken over both areas. "It seems that we have been observing a phenomenon rarely recorded in field studies-the gradual extermination of one group of animals by another, stronger, group." (Goodall, 1979:608) There is no clear reason for these brutal attacks to have taken place, unless the dominant northern males, who before the community split had access to the southern area, were simply trying to take their land back. "We know, today, that chimpanzees can be aggressively territorial." (Goodall, 1992:14)
In August of 1975, Gilka, a chimpanzee mother, was sitting with her infant when suddenly Passion, another mother, appeared and chased her. Gilka ran screaming, but Passion, who was bigger and stronger, caught up, attacked her, and seized and killed the baby. She then proceeded to eat the flesh of the infant and share the gruesome remains with her adolescent daughter, Pom, and her infant son, Prof. This was the first observed instance of cannibalistic behavior shown by Passion and Pom (Goodall, 1992:22). About a year after this incident, Gilka gave birth to another infant, and this time it was Pom who seized the baby, but Passion and Prof again shared the flesh. There is no explanation why Passion and Pom behaved as they did (Goodall, 1992:23).
Passion was always an asocial female, and
had been a very harsh mother to her own first
infant, Pom. It was only as Pom grew older
that the very close bond developed between
mother and daughter, and it was only because
the two acted with such perfect co-operation
that they were able to overcome some of the
other females of their community. (Goodall, 1992:23) During the years of their rampaging, a total of ten infants died or disappeared, and every instance pointed to Passion and Pom (Goodall, 1979:616). They would never try to attack a female when there were any males around. Instead they would wait for the mother to be alone with her infant and gang up on her. In the three years from 1974 to 1976, only a single infant in the Kasakela community lived for more than one month. Finally, when Passion gave birth again to a third child, and Pom also gave birth, the extraordinary cannibalistic infant killing came to an end (Goodall, 1979:619).
Chimpanzees have been studied in the Mahale Mountains National Park for 25 years. The study group, M-group, consisting of about 90 chimpanzees, has been monitored for 15 years. "Cases reported from Mahale, Tanzania, are of special interest because adult males kill and eat those infants that not only belong to the same community but are likely to be their own offspring."(Turner 1992:151)
On October 3, 1989, a case of within-group infanticide among Mahale chimpanzees was observed.
T. Asou, M. Nakamura and two cameramen of
a video team of ANC Productions Inc. from
Tokyo, and R. Nyundo of the Mahale Mountains
Wildlife Research Centre succeeded in
shooting most of the important scenes of the
infanticide and cannibalism."(Nishida, 1992:152)
This is an example of the flagrant cannibalism and infanticide witnessed, based on the observers' memos and videotape. During a chimpanzee group feeding period that had gone unsuccessfully, Kalunde, a second-ranking male, walked up to the mother Mirinda and snatched her six-month-old infant boy from her hands. Kalunde ran with the infant on his belly, with Mirinda chasing after him screaming. Kalunde then hid in some vegetation until two other males, Shike and Lukaja, found him and tried to take the infant away from him. Lukaja finally won a tug of war for the infant between the two other males and handed it over to Ntologi, the alpha male. Ntologi dragged, tossed, and slapped it against the ground, then climbed a tree with the infant in his mouth. He waved it in the air and finally killed it by biting it on the face. He then proceeded to eat the infant, sharing the meat with the other chimps (Nishida, 1992:152). It is strange because this sort of cannibalistic behavior is exactly like a group of chimpanzees feeding on the meat of any mammal's dead carcass; unfortunately, in this case, it was the meat of a dead chimpanzee infant. Nevertheless, after the infanticide, Mirinda was observed to mate with Ntologi as well as Kalunde, even though both these males had assisted in the killing of her first infant (Nishida, 1992:153).
Another example of this fierce and barbaric activity happened again on July 24, 1990, when "M.B. Kasagula, a research assistant, observed five adult males including Ntologi excitedly displaying" (Nishida, 1992:153). Ntologi had his hand on a 5-month-old male infant of Betty's. The infant was still alive. Ntologi began to bite the infant's fingers, then struck the infant against a tree trunk and dragged it on the ground as he displayed. As a result the infant was killed (Nishida, 1992:153). Once again, Ntologi shared the remains, this time with ten adult females and eight males. Three hours later the chimpanzees were still eating the carcass (Nishida, 1992:153).
Besides the two examples illustrated thus far, five other cases of within-group infanticide in the Mahale Mountains were also analyzed. First, the victims in all seven cases were small male infants below one year of age (Hamai, 1992:155). Second, infanticide occurred mostly in the morning during an intensive feeding period (Hamai, 1992:155). On six of the seven occasions, the captors of the infants were alpha or beta males (Hamai, 1992:155). Group attacks were observed in at least three cases. In all infanticide cases the mother persistently tried to recover her infant from the adult males so long as it was still alive; however, an infant was recovered by its mother only once (Hamai, 1992:157). Infants were killed while being eaten in all cases (Nishida 1992:157). "What appeared common in cannibalism but uncommon in predation was that consumption of meat took a long time (>3 hr) and that the carcass-holder changed frequently, considering the prey size and the number of consumers." (Hamai, 1992:158) In all cases of cannibalism, many chimps ate and shared the meat by recovering scraps. There were always more than four adult male cannibals, and the mother was never seen to eat meat from the carcass of her own offspring (Nishida, 1992:158). The Mahale Mountain study provided an in-depth analysis of how the chimpanzees behaved during and after their cannibalism.
There are several hypotheses explaining infanticide within a group of chimpanzees. One is the male-male competition hypothesis: Nishida and Hiraiwa-Hasegawa (1985) suggested that males of one clique destroy infants of females who associate with males of a rival clique (Hamai, 1992:159). Spijerman (1990) proposed that infanticide functions as a kind of display to fortify male social status, or "to increase control over the attention of others" (Hamai 1992:159). Another idea was Kawanaka's (1981), that infanticide is an "elimination of the product of incest" (Hamai 1992:159). Some believe that the function of infanticide is to correct a female's promiscuous habits and "coerce her into more restrictive mating relationships with adult males, and especially with high ranking males" (Hamai, 1992:159). What is interesting in all of these examples of chimpanzee infanticide is that as soon as a male or female chimpanzee (such as Passion and Pom) got their hands on an infant, the chimps surrounding them would suddenly become excited and want it themselves, as if the infant were just a piece of meat even though it was still alive.
In conclusion, no evidence has yet revealed why chimpanzees behave in this cannibalistic fashion. There are many theories and ideas but, as with the theory of evolution, no one clear answer. As the closest living relatives of human beings, chimpanzees exhibit complicated and intricate behavior owing to their advanced brains (Zuckerman, 1932:171). This paper has shown that chimpanzees are creatures of great extremes: aggressive one moment, peaceful the next. This gruesome, violent behavior can actually be linked to a similarity with human beings. It is widely accepted in the scientific community that chimpanzees are our closest relatives. If we are indeed superior to these primates, does it not stand to reason that humans should be able to learn from this violence and avoid it? Jane Goodall, in her article "Life and Death at Gombe," draws a similar conclusion:
It is sobering that our new awareness of
chimpanzee violence compels us to acknowledge
that these ape cousins of ours are even more
similar to humans than we thought before.
(Goodall, 1979:620)
f:\12000 essays\sciences (985)\Enviromental\Chinook Salmon.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Environmental Science
Wednesday, February 26, 1997
Chinook Salmon
Among the many kinds of fish harvested each year by commercial fisheries is the
Oncorhynchus tshawytscha, or Chinook salmon. The United States catches an average of about
three hundred million pounds of salmon each year. However, some Chinook salmon populations
have recently been listed as threatened. Man has been the main cause of the decline in Chinook
salmon populations.
The populations of Chinook salmon have declined for several reasons. Hydropower and
its destructiveness to the environment, pollution, and overfishing are the three main causes of
the decline. The Chinook salmon is known for traveling the greatest distance back to its
spawning grounds, often traveling one to two thousand miles inland. This long journey is now
often interrupted by hydroelectric plants. Hydropower is a very good alternative source of
power; however, it is very damaging to our salmon populations. The dams block off rivers,
blocking the salmon's path back to their breeding grounds. The salmon go back to the same
areas, just as their ancestors did, to lay their eggs. The hydropower plants' turbines are also
very dangerous to young salmon. Many are killed by the giant turbines on their way back to
the ocean, killing off much of the salmon's new generation. Pollution is also a killer of many
Chinook salmon. Pollution caused by sewage, farming, grazing, logging, and mining finds its
way into our waters. These harmful substances kill many species of fish and other marine life.
The Chinook salmon is no exception. The chemicals are dumped into the rivers and streams and
eventually these chemicals find their way to the ocean, polluting and affecting each area they
pass through. The largest contributor to the decline in the Chinook salmon population is the
commercial fishing industry. From 1990 to 1992, 815,000 Chinook salmon were caught
by commercial fisheries. This does not include the 354,000 recreational catches.
Commercial fishing is a big industry. Commercial fishers use nets, which they pull behind
boats. Some nets are designed so the holes are large enough for the head of a fish to fit
through, and the mesh then catches in the fish's gills. Others are designed to circle
around a school of fish and are then drawn shut. New technologies have produced factory stern
trawlers, which easily haul netloads of up to 100 metric tons of fish. When catching
salmon, however, fishermen use pound nets to catch the fish on their way to their spawning grounds. The
average annual salmon catch in the United States alone is about 300 million pounds, of which about
60 percent is canned. Salmon canning is one of the major industries of the Pacific coast.
To decrease the rate at which the salmon population is falling, the U.S. Fish and Wildlife
Service deposits billions of young salmon and eggs into natural breeding grounds every year. Salmon
are also raised in hatcheries and then deposited. The National Marine Fisheries Service has also
proposed a recovery plan for the Chinook salmon. They plan to improve migration conditions by
improving passage around the dams so that the salmon can get through. They also plan to
protect the fishes' spawning habitat by improving general management. They would also
like to develop alternative harvesting methods.
The effects that man has had on the Chinook salmon and many other species of salmon are very
severe; many runs are now labeled as threatened. We can reduce the causes of their population
decline by reducing the amount of fish we catch annually, reducing the pollution dumped into their
habitats, and developing ways for the fish to bypass the dams.
f:\12000 essays\sciences (985)\Enviromental\Conserving our earth.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Preserving Our Earth
America's endangered areas are dwindling daily. Natural disasters are a
major factor in their disappearance, but the most prominent factor is mankind. Even
though procedures are conducted daily to preserve our home, these areas slowly
crumble within our grasp. Protection of these areas is essential, as our whole race
depends upon resources derived from these lands.
From the lush greenery to the sparkling blue waters, all is majestic at its fullest.
This is why I believe almost all of these wonderful places should be preserved. Mankind
has come too far to throw it all away for his greedy purposes. Many believe that our
secluded wildlife areas should be opened to the public. But what would be offered
through this? Many recreational activities, I presume, but what about our biggest
environmental concern - pollution?
Pollution is so widespread throughout our world that it is overwhelming.
Drinking water supplies are contaminated with runoff from nearby factories and even
with pollutants from our own backyards. The demand for skyscrapers and condominiums
wipes out our shrinking rainforests. This drives wildlife from its natural home and into
the havoc that is ours. Millions of acres of beautiful land are destroyed daily to satisfy
the needs of mankind.
But has anyone contemplated the needs of our wildlife? When their homes are
incinerated, where do they run for shelter? Where will wildlife obtain its food and oxygen if the
sources are gone? Not much is done about our destructive ways; we sit back and let money
and greed take power. Yet the solution is just a whisper away. The preserved areas should remain
untouched, and hunting should be outlawed in these protected lands. If a family is starving and has
to resort to this brutal deed, then restricted hunting areas should be permitted. Proper trash
and recycling receptacles should be readily available. Rivers should be tested and guarded for
the sake of our future and our children. These simple guidelines can easily be followed through
education of the general population. More people should volunteer their time for river, land,
and beach clean-ups. These small measures can go a long way toward saving our endangered
areas. If we lose our world's natural resources, where will we turn?
f:\12000 essays\sciences (985)\Enviromental\Corporate Average Fuel Economy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Corporate Average Fuel Economy
The foreshadowed market failures of the mid-1970's gave rise to Corporate Average Fuel Economy (CAFE), regulation that called for new standards in automobile fuel efficiency. The market failures hinged on a number of outside variables that could have had a drastic effect on domestic markets.
--> Resource Scarcity drove the American public to call for a more efficient means of managing its resource use due to a) oil embargoes on non-domestic products and b) sky-high prices at the pump.
--> Conservation of the world's non-renewable resources came to the foreground with a) higher pump prices and b) forecasted resource depletion before the year 2000.
*** With Corporate Average Fuel Economy in place, the market failures should be partially alleviated and pressures due to restricted international resources should subside. The regulated fuel efficiency should allow the market to resume its natural flow and regain stability without further manipulation.
--> Reliance on imported fuels would be minimized because of a) decreased demand for fuel and b) lowered fuel demand allowing domestic producers to meet the basic needs of the public.
--> Maximum fuel efficiency would a) cut fuel consumption, thus nullifying high pump prices, and b) raise the level of conservation by lowering the amount consumed.
Although Corporate Average Fuel Economy was thought to be a cure-all in the 1970's, over the long run it has turned out to be a flop. The variables on which it was based turned out to move almost exactly opposite to expectations.
--> Lower Gas Prices have caused the public to a) simply use more fuel, b) drive more frequently because their cars consume less fuel per mile and c) look beyond fuel economy when in the market for a new vehicle.
--> Quality Depletion of the total domestic car fleet due to special attention to only the fuel economy while ignoring: 1) performance, 2) acceleration and 3) handling.
*** Although CAFE became a long-term flop, in its first years it did have its benefits. Both the public sector and private interests gained from the regulation in the beginning.
--> Public Interests gained from this legislation due to a) the ability to get more miles for their buck, b) increased (initial) conservation and c) a higher standard of living through the money saved in fuel costs.
--> Private Interests were kept happy by a) the "credits" earned by each manufacturer when standards were exceeded before set deadlines and b) no new taxation on the fuel industry to alleviate the conservation problem.
CAFE could have had a very successful outcome if the original variables of fuel costs, resource availability and resource stability had continued on the path they were taking. Because of changes in each of these variables, CAFE did not have the resources to remain a successful regulation. CAFE did help to improve the energy efficiency of motor vehicles, but due to shortcomings in the regulation other aspects were allowed to slip away and earlier improvements actually declined.
--> The Uniformity of Corporate Average Fuel Economy did not allow for any outside manipulation of the problem at hand, thus allowing for only a one-dimensional view of the fuel economy standards.
--> CAFE separated the standards for cars and trucks, and therefore a) resource conservation was limited to the car fleet alone and b) automobile quality was depleted.
--> With Lower Fuel Costs the consumer a) actually consumed more fuel because of the cheaper price and b) when in the market for a new automobile began to look for features besides fuel economy: 1) size, 2) reliability and 3) performance.
*** In conclusion, I believe that if the initial concept of the CAFE policy had been modified only slightly, to encompass small changes in the marketplace, it could have had a more beneficial outcome. CAFE originally was a good concept for both the consumer and the industry, but because of its downfalls it became a hindrance to the automobile industry.
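To make the rebound argument above concrete, here is a minimal Python sketch with invented numbers (the miles and mpg figures are hypothetical, not taken from the regulation): a better mpg rating cuts fuel use only if miles driven stay constant, and cheaper per-mile driving tends to push miles back up.

    # Rough sketch of the "rebound" problem described above, using made-up
    # illustrative numbers (not figures from the regulation itself).

    def annual_fuel_use(miles_driven, mpg):
        """Gallons of fuel consumed per year."""
        return miles_driven / mpg

    before  = annual_fuel_use(miles_driven=10_000, mpg=15)   # pre-CAFE fleet
    naive   = annual_fuel_use(miles_driven=10_000, mpg=25)   # better mpg, same driving
    rebound = annual_fuel_use(miles_driven=14_000, mpg=25)   # cheaper per-mile cost invites more driving

    print(f"before CAFE: {before:.0f} gal, naive expectation: {naive:.0f} gal, with rebound: {rebound:.0f} gal")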
f:\12000 essays\sciences (985)\Enviromental\Could the Greenhouse Effect Cause More Damage .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"Could the Greenhouse Effect Cause More Damage?"
John Harte is an ecologist from the University of California at
Berkeley. He is trying to find out whether heat stimulates further
trace-gas release from the soil. He is going to conduct an experiment that
will tell him whether the greenhouse effect could start a cycle that would cause
its effects to be worse than already predicted. The experiment will begin in
December of 1996 and will run for no less than three years.
Harte has stretched a twelve-foot-high grid of cables above 300 square
yards of land in a high mountain meadow in the middle of the Colorado
Rockies. The cables are supported by four steel towers, one at each
corner of the grid. Hanging down from the cables are ten infrared heat
lamps, each about three feet long. This is supposed to simulate
what many see as the coming apocalypse: global warming. "By 2050, if we
decide to load trace gases - mainly carbon dioxide - into the atmosphere
at our current rate, we can expect Earth's temperature to increase by
anywhere from three to nine degrees. The Vostok record confirms that,"
says Harte.
The grid is divided into ten sections, each covering
thirty square yards of the meadow. The infrared lamps will heat every
other section by 2.5 degrees. The unheated sections in between allow
researchers to compare the effects of the lamps with the regular state of
the meadow.
Once a week, Harte will place buckets upside down for ten minutes at a
time on both the heated and unheated strips, draw gas samples by syringe
through nipples fitted in the bottoms of the buckets, and then analyze the
samples with a gas chromatograph. "We'll be able to plot any
changes in the meadow very precisely," says Harte.
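As a rough illustration of how the heated and unheated strips might be compared, the Python sketch below averages one week of readings from each set of strips and takes the difference; the concentrations are invented placeholders, not Harte's data.

    # Sketch of the comparison Harte describes: weekly trace-gas readings from
    # the five heated and five unheated strips, averaged and differenced.
    # The concentration values below are invented, not real measurements.

    heated   = [412.0, 415.5, 410.8, 418.2, 414.1]   # e.g. CO2 in ppm, one value per heated strip
    unheated = [405.3, 407.9, 404.6, 409.0, 406.2]   # matching unheated strips

    mean_heated   = sum(heated) / len(heated)
    mean_unheated = sum(unheated) / len(unheated)

    print(f"heated mean:   {mean_heated:.1f} ppm")
    print(f"unheated mean: {mean_unheated:.1f} ppm")
    print(f"difference:    {mean_heated - mean_unheated:+.1f} ppm")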
Some of these changes could alter the very makeup of the seasons.
With a 2.5 degree rise in temperature, snow at high elevations might
melt up to two months sooner. In Colorado that would make March feel
like May. As a result, the soil will dry more quickly and will be much warmer
than usual when May rolls around. John Harte says it would be like
expanding summer at the expense of winter.
That means plants that usually start to bloom just as the snow begins
to melt will bloom sooner than the pollinators of those plants can get to
them. That would be harmful to both of the species involved. Harte says
that this project should confirm that people don't have any time to waste
when it comes to preventing this disaster.
In three years, Harte and other researchers will be able to tell
whether or not global warming will be the next apocalypse. This is apparently
a serious issue for many environmentalists and ecologists. A lot of time,
money, and effort is being put into this research. If this experiment ends up
warning us about what may happen if people keep polluting the atmosphere
with certain gasses, Harte and other researchers working on this project
will be commended.
f:\12000 essays\sciences (985)\Enviromental\Criminal Justice Reform Speech Paper with Outline and all S.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE NEED FOR EXTREME CRIMINAL JUSTICE REFORM IN CALIFORNIA
ORIENTATION
FACTORS:
I. Basic Introduction and description
- Introduce basic sides of Criminal Law and Elaborate
II. General History and Development
- Discuss the history and modifications of Reform Laws in California
III. Main Problems and Concern Stimulants
- Point out real life statistics and point out incidents
IV. Conclusion
- Point out the need for an extreme reform and what can be done
SENTENCE OUTLINE
I. An analysis of Department of Corrections data by the Center on Juvenile and Criminal
Justice in San Francisco, CA, in Nov, 1995 indicates that since the enactment of
California's "Three Strikes" law two years ago, 192 have "struck out" for marijuana
possession, compared to 40 for murder, 25 for rape, and 24 for kidnapping.
A. I have a strong proposition for the California Legislature...and that is a strict and logical reform to the present Criminal Justice System in California.
B. "The California Legislature is to be commended for its stance on crime. Not for their "get tough" policies such as the "Three Strikes" law but for their enactment of a little known section of the Penal Code entitled the "Community Based Punishment Act of 1994." (Senator Quentin Kopp, Time Magazine Feb 14, 1996)
C. By passage of this act, the State of California has acknowledged the limitations of incarceration as both punishment and a deterrent to criminal behavior.
D. The legislature has in fact declared that "California's criminal justice system is seriously out of balance in its heavy dependence upon prison facilities and jails for punishment and its lack of appropriate punishment for nonviolent offenders and substance abusers who could be successfully treated in appropriate, less restrictive programs without any increase in danger to the public"
II. More facts, Opinions and Developmental Ideas
A. In essence, this law proposes a community based system of intermediate restrictions for non-violent offenders that fall between jail time and traditional probation such as home detention with electronic monitoring, boot camps, mandatory community service and victim restitution, day reporting, and others.
B. Pilot programs are to be developed as a collaborative effort between the state and counties requiring a community based plan describing the sanctions and services to be provided.
C. A progress report on an act of this kind would be made by the California Board of Corrections on January 1, 1997 and annually thereafter to selected legislative committees.
III. Informatives
A. "It seems clear that the California Legislature has determined that incarceration is not appropriate for many criminal offenses and that alternative sanctions are preferable for non-violent offenders. " (Randy Meyer, Political Official)
B. But while this approach is to be applauded, its limited spread prevents the fulfillment of its true potential.
C. "By retaining those non-violent offenders that are currently in state prison and continuing to pursue defensive punishment at the local level in the form of short term "shock incarceration" and bootcamps, the costly and ineffective methods of criminal behavior correction remain intact." (Charles Calderon-US News)
D. By immediately eliminating incarceration for all non-violent offenses and requiring victim compensation and community service, resources can be committed to preventing crime rather than to the feeding and housing of offenders.
E. This is consistent with the findings of the legislature and is cost efficient, requires minimal systemic change, and increases public safety and security.
IV. The Proposal
A. "Our current criminal justice system appears to be based upon the Old Testament proverb that "your eye shall not pity; it shall be life for life, eye for eye, tooth for tooth, hand for hand, foot for foot." Revenge thus plays a part of the punishment model."
(LA Official Boland)
From a societal standpoint, we expect punishment to prevent the offender and others from further criminal behavior. Incarceration of offenders as the punishment of choice thus theoretically provides revenge, individual incapacitation, and restriction.
But I submit that such a philosophical foundation is flawed. Revenge, while understandable from an individual human perspective, is not a proper basis for society's response to violations of its laws. This human urge to punish should be removed from the current system and replaced with methods of restriction that utilize the offender's potential to benefit his victim and society at large.
In other words, in a free society the end desired is the correction of behavior using the least force. This conforms to the principles of limited government, efficiency, reduced cost, and personal freedom as advocated by liberals and conservatives alike.
The basic underlying concept of this proposal is that incarceration should be reserved for those who are violent and thus dangerous to the public. Violent crimes would be defined broadly to include any act or attempt to injure the person of another except by accident. This would therefore range from murder to driving under the influence with current distinctions of misdemeanor and felony offenses remaining in place.
The court sentencing procedures would also be modified to exclude incarceration for non-violent crimes, with an emphasis on victim restitution and community service. The court could rely on probation reports to provide the necessary offender personal history, including employment, job skills (or lack thereof), and personal resources, e.g. bank accounts, property ownership, etc. Based on this information, the court would apply the appropriate sentence of victim restitution and community service with close monitoring by probation officials.
As with all human endeavors, compliance by offenders would most likely not be 100%. The threat of incarceration would have to exist for those failing to submit to or comply with court-ordered repayment and public service. Many will not agree with this because of its complexity, and in some cases it could do more harm than good. But for the most part there is no reason to believe that the failure rate would be any higher under this type of system than is currently the case.
V. Conclusion
This proposal provides a policy alternative to the current criminal justice emphasis on incarceration as punishment. It is based on the premise of effectiveness and cost efficiency with a high regard for individual liberty that is essential to a free society. It moves away from the concept of punishment and focuses on a more functional goal of victim and societal repayment. The proposal offers prevention at the front end rather than repayment at the back end of crime reduction efforts.
The advantages of such a system are numerous. One of the most important assets of a revision of this kind is that of allowing for a major change in the criminal justice system with a minimum of disruption to the status quo. Rather than requiring an entire systemic change, this proposal works within the current practices of the court, police, and corrections. Indeed, very few authorized changes would have to be made.
Enactment of this proposal would eliminate the need for future bond measures for prison construction. Not only would it save taxpayer money, it would be most advantageous to the remaining employees of the California Department of Corrections by allowing for the closure of outdated and unsafe facilities. In addition, unemployment could be kept to a minimum by offering qualified state correctional officers employment with local law enforcement agencies.
It is time now to look beyond revenge and the emotionalism associated with current justice system practices.
"There is only one practical method of reducing crime and the subsequent public's fear and that is through a high level of police presence on the street."
(Randy Meyer, M.A.)
In essence, this revision allows for a return of the local neighborhood police officer who is familiar with its residents and business owners.
In the final analysis, our very freedom depends on how we treat society's criminals and misfits. By continuing to create a criminal class that has not been rehabilitated through incarceration, we are ultimately sabotaging our own security. Perhaps this proposal offers a means of reversing the trend of incarceration as punishment while increasing our personal safety and diminishing the fear that is rampant among us.
QUICK FACTS
•The current California prison population is 135,133 and is expected to increase to about 148,600 by June 30, 1996 per the California Department of Corrections.
•42.1% of these inmates are incarcerated for violent offenses, 25.3% for property offenses, 26.2% for drugs, and 6.4% for other.
•Average yearly cost: per inmate, $21,885 and per parolee, $2,110.
•California Department of Corrections budget for 1995-1996: $3.4 billion; proposed budget for 1996-1997 for both Corrections and Youth Authority: $4.1 billion. This compares to $1.6 billion for community colleges and $4.8 billion for higher education.
•California Legislative Analyst Elizabeth Hill advised on February 26, 1996 that 24 new prisons will need to be built by the year 2005 to keep pace with the incarceration rate. This will cost taxpayers $7 billion for their construction and increase operating costs to $6 billion annually.
•California Attorney General Dan Lungren announced on March 12, 1996 that the number of homicides reported in 1995 in the most populated two-thirds of the state had declined 3.1%, rape 3.9%, robbery 7.9%, aggravated assault 4.2%, burglary 8.9%, and vehicle theft, 11.4% (San Jose Mercury News, 3/13/96). This is consistent with a 5% decline in the national violent crime rate for the first half of 1995 per the FBI.
MANUSCRIPT
An analysis of Department of Corrections data by the Center on Juvenile and Criminal
Justice in San Francisco, CA, in Nov, 1995 indicates that since the enactment of
California's "Three Strikes" law two years ago, 192 have "struck out" for marijuana
possession, compared to 40 for murder, 25 for rape, and 24 for kidnapping.
I have a strong proposition for the California Legislature...and that is a strict and logical reform to the present Criminal Justice System in California. "The California Legislature is to be commended for its stance on crime. Not for their "get tough" policies such as the "Three Strikes" law but for their enactment of a little known section of the Penal Code entitled the "Community Based Punishment Act of 1994." (Senator Quentin Kopp, Time Magazine Feb 14, 1996). By passage of this act, the State of California has acknowledged the limitations of incarceration as both punishment and a deterrent to criminal behavior. The legislature has in fact declared that "California's criminal justice system is seriously out of balance in its heavy dependence upon prison facilities and jails for punishment and its lack of appropriate punishment for nonviolent offenders and substance abusers who could be successfully treated in appropriate, less restrictive programs without any increase in danger to the public"
In essence, this law proposes a community based system of intermediate restrictions for non-violent offenders that fall between jail time and traditional probation, such as home detention with electronic monitoring, boot camps, mandatory community service and victim restitution, day reporting, and others. Pilot programs are to be developed as a collaborative effort between the state and counties, requiring a community based plan describing the sanctions and services to be provided. A progress report on an act of this kind would be made by the California Board of Corrections on January 1, 1997 and annually thereafter to selected legislative committees.
"It seems clear that the California Legislature has determined that incarceration is not appropriate for many criminal offenses and that alternative sanctions are preferable for non-violent offenders. " (Randy Meyer, Political Official). But while this approach is to be applauded, its spreading prevents the fulfillment of its true potential. "By retaining those non-violent offenders that are currently in state prison and continuing to pursue defensive punishment at the local level in the form of short term "shock incarceration" and bootcamps, the costly and ineffective methods of criminal behavior correction remain intact." (Charles Calderon-US News). By immediately eliminating incarceration for all non-violent offenses and requiring victim compensation and community service, resources can be committed to preventing crime rather than to the feeding and housing of offenders. This is consistent with the findings of the legislature and is cost efficient, requires minimal systemic change, and increases public safety and security.
"Our current criminal justice system appears to be based upon the Old Testament proverb that "your eye shall not pity; it shall be life for life, eye for eye, tooth for tooth, hand for hand, foot for foot." Revenge thus plays a part of the punishment model."
(LA Official Boland). From a societal standpoint, we expect punishment to prevent the offender and others from further criminal behavior. Incarceration of offenders as the punishment of choice thus theoretically provides revenge, individual incapacitation, and restriction.
But I submit that such a philosophical foundation is flawed. Revenge, while understandable from an individual human perspective, is not a proper basis for society's response to violations of its laws. This human urge to punish should be removed from the current system and replaced with methods of restriction that utilize the offender's potential to benefit his victim and society at large.
In other words, in a free society the end desired is the correction of behavior using the least force. This conforms to the principles of limited government, efficiency, reduced cost, and personal freedom as advocated by liberals and conservatives alike.
The basic underlying concept of this proposal is that incarceration should be reserved for those who are violent and thus dangerous to the public. Violent crimes would be defined broadly to include any act or attempt to injure the person of another except by accident. This would therefore range from murder to driving under the influence with current distinctions of misdemeanor and felony offenses remaining in place.
The court sentencing procedures would also be modified to exclude incarceration for non-violent crimes, with an emphasis on victim restitution and community service. The court could rely on probation reports to provide the necessary offender personal history, including employment, job skills (or lack thereof), and personal resources, e.g. bank accounts, property ownership, etc. Based on this information, the court would apply the appropriate sentence of victim restitution and community service with close monitoring by probation officials.
As with all human endeavors, compliance by offenders would most likely not be 100%. The threat of incarceration would have to exist for those failing to submit to or comply with court-ordered repayment and public service. Many will not agree with this because of its complexity, and in some cases it could do more harm than good. But for the most part there is no reason to believe that the failure rate would be any higher under this type of system than is currently the case.
This proposal provides a policy alternative to the current criminal justice emphasis on incarceration as punishment. It is based on the premise of effectiveness and cost efficiency with a high regard for individual liberty that is essential to a free society. It moves away from the concept of punishment and focuses on a more functional goal of victim and societal repayment. The proposal offers prevention at the front end rather than repayment at the back end of crime reduction efforts.
The advantages of such a system are numerous. One of the most important assets of a revision of this kind is that of allowing for a major change in the criminal justice system with a minimum of disruption to the status quo. Rather than requiring an entire systemic change, this proposal works within the current practices of the court, police, and corrections. Indeed, very few authorized changes would have to be made.
Enactment of this proposal would eliminate the need for future bond measures for prison construction. Not only would it save taxpayer money, it would be most advantageous to the remaining employees of the California Department of Corrections by allowing for the closure of outdated and unsafe facilities. In addition, unemployment could be kept to a minimum by offering qualified state correctional officers employment with local law enforcement agencies.
It is time now to look beyond revenge and the emotionalism associated with current justice system practices.
"There is only one practical method of reducing crime and the subsequent public's fear and that is through a high level of police presence on the street."
(Randy Meyer, M.A.)
In essence, this revision allows for a return of the local neighborhood police officer who is familiar with its residents and business owners.
In the final analysis, our very freedom depends on how we treat society's criminals and misfits. By continuing to create a criminal class that has not been rehabilitated through incarceration, we are ultimately sabotaging our own security. Perhaps this proposal offers a means of reversing the trend of incarceration as punishment while increasing our personal safety and diminishing the fear that is rampant among us.
f:\12000 essays\sciences (985)\Enviromental\Death of a Planet.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Death of a Planet
Air pollution is a very big problem in the United States. A large part of air pollution
comes from cars. The Environmental Protection Agency says, "The most polluting activity an
average person does everyday is drive their car"(1 factsheet OMS-5). Most people probably
aren't aware that they are polluting the environment. Maybe if everyone knew how serious this
pollution problem is, they would find ways to reduce the pollution.
Most pollution that is released by cars comes from the exhaust, mainly in the form of
hydrocarbons (1 factsheet, OMS-5). Hydrocarbons are organic compounds, combinations of two
or more elements, that contain only carbon and hydrogen (2 factsheet, OMS-5). Hydrocarbons
are released when fuel burns incompletely in the engine. When hydrocarbons come in contact with
sunlight they form ground level ozone. Ground level ozone is a major ingredient in the formation
of smog. Ground level ozone is responsible for irritating eyes, damaging lungs, and
complicating respiratory problems. Hydrocarbons aren't the only pollutants released through car
exhaust.
Two more pollutants released through car exhaust are carbon monoxide and nitrogen
oxide. The first reduces the flow of oxygen to the bloodstream, and could harm people with heart
disease. Nitrogen oxide is formed when a car engine gets hot. It contains chemicals that aid in
the formation of ground level ozone as well as acid rain (2 factsheet, OMS-5). Acid rain destroys
the outsides of buildings, statues, etc. Acid rain can also contaminate drinking water, damage
vegetation, and destroy sea life. These two pollutants are among the most dangerous pollutants
released through car exhaust. If these two pollutants were cut down just a little bit our planet
would be a safer place to live.
Carbon dioxide is another gas released through exhaust emissions. It isn't dangerous
directly to humans, but it is considered to be a "greenhouse gas." A "greenhouse gas" is a gas
that is associated with global warming. Global warming is the gradual increase of temperature
due to human activity. Certain gases, such as carbon dioxide, methane, and ozone, let radiation
from the sun through the atmosphere to the earth's surface but trap the heat radiated back. Global warming
affects all living things on the entire planet (4 factsheet OMS-5).
Another type of hydrocarbon pollution occurs through fuel evaporation. These
hydrocarbon pollutants are produced in four different ways. The first way is called diurnal
venting. This is the escape of gasoline vapors as the daily temperature rises. The
second way is running losses, the venting of gasoline while the car's engine is running. The
third way is called hot soak. Hot soak occurs when gasoline evaporates after the car has
been turned off. The fourth way additional hydrocarbons are released is during refueling.
While the tank is being filled, gasoline vapors are forced out. One way to cut down on this
type of pollution is to have the car tuned properly. The refueling problem can be reduced
with the use of a vapor recovery system.
This system traps the vapor inside the tank. These systems are present at many gas stations in
highly polluted areas (4 factsheet, OMS-5).
Some cities around the world are very dangerous to live in because of car pollution. One
city is Athens, Greece. In Athens the number of deaths rises on days with greater air pollution.
Just breathing the air in Bombay is the same as smoking half a pack of cigarettes a day (97
Brown). Over 150 million people in the United States live in areas where the Environmental
Protection Agency considers the air to be unhealthy. The American Lung Association says, "The
unhealthy air leads to 120,000 deaths every year" (97 Brown). The citizens of earth are killing
themselves. They are killing themselves for convenience. Even though the actions taken by the
government and car manufacturers have reduced car emissions a lot since 1970, the number of
miles an average person travels in a day has doubled (4 factsheet OMS-4). The average person
must find a way to reduce the number of miles he or she drives. One way to do this is to
carpool. If a large number of people started to carpool, pollution produced by cars would be
reduced. This would reduce the amount of smog in urban areas, making these polluted areas a
better place to live.
There are many other alternatives to reduce car emissions besides carpooling. One
possible solution is to use fuels cleaner than gasoline. There are many types of alternative fuels
that could be used, and the number of fuels increases as technology becomes more advanced.
One type of alternative fuel is alcohol. One type of alcohol is methanol. Methanol is made from
natural gas and coal. Another type of alcohol is ethanol. Ethanol is produced from grains or
sugar. Cars fueled by alcohol could produce as much as 80 to 90 percent less emissions than cars
fueled with gasoline. Both methanol and ethanol are high-octane liquid fuels. These two fuels
are possible alternatives because they are efficient, made from natural ingredients, and less
polluting (4 factsheet, OMS-4).
Another alternative fuel is electricity. This isn't a very efficient fuel right now, because the
technology is limited. Recent advances in the production of electric cars could make this a reality
in the future. It is possible for cars powered with electricity to release little or no emissions. If
this alternative fuel becomes a reality and is used in most cars, it will knock a big chunk out of the
pollution problem all around the world (5 factsheet, OMS-5).
Another alternative fuel is natural gas. Natural gas is only good for cars where driving
long distances isn't important. It is possible for natural gas to produce 85 percent to 95 percent
less emissions than gasoline fueled cars (5 factsheet, OMS-5). With this alternative fuel car
emissions could be reduced a great deal to benefit the entire world.
Another way to reduce pollution released by gasoline powered cars is to test them for
dangerous amounts of pollutants regularly. Emissions testing programs are already in effect in
California, Arizona, and Colorado. Texas will have an emissions testing program soon. The test
is very simple. A car is tested for hydrocarbons and other ground level ozone forming chemicals
that are released through car exhaust. If the level of pollutants is high, the car fails the test. Then
the car must be taken to a mechanic, so the engine can be tuned to release fewer chemicals into
the air. After the car is taken to a mechanic and repaired, the car is tested again. This test does
cost the car owner money, but it's a small price to pay for the reduction of pollutants released by
cars (105 Brown).
Researchers are working on a machine that can test cars for hydrocarbon pollution while
the car is on the road. This system is called the remote sensing device. The device is placed on
the side of the road. As a car drives by it measures the level of hydrocarbon emissions released by
the car. The system can't measure nitrogen oxide emissions yet, but researchers are working on
another system to go along with this device. If the car is releasing a high level of hydrocarbon
emissions, a video camera takes a picture of the car's license plate and sends the license plate
number and the emissions data to a computer (1 factsheet, OMS-15). Then the owner of the car
is notified about the polluting car, and they are required to have the problem fixed (2 factsheet,
OMS-15). If remote sensing devices are used to detect hydrocarbon emissions, polluting cars can
be recognized sooner than with regular yearly testing. If the remote sensing devices are
used, people don't have to worry about having their car tested; they just have to get their car
repaired when the device says it's polluting the environment.
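The decision logic of such a device can be sketched roughly as follows in Python; the threshold value, field names, and notification step here are hypothetical illustrations, not the actual EPA specification.

    # Simplified sketch of the remote-sensing workflow described above.
    HC_LIMIT_PPM = 220  # hypothetical hydrocarbon limit for a passing reading

    def process_reading(plate_number, hc_ppm):
        """Record a roadside reading and flag the owner if the car reads high."""
        record = {"plate": plate_number, "hc_ppm": hc_ppm, "flagged": hc_ppm > HC_LIMIT_PPM}
        if record["flagged"]:
            # In the real program the license-plate photo and emissions data
            # would go to a computer and the owner would be told to repair the car.
            print(f"{plate_number}: {hc_ppm} ppm HC - owner notified to have the car repaired")
        else:
            print(f"{plate_number}: {hc_ppm} ppm HC - within limits")
        return record

    process_reading("ABC123", 450)
    process_reading("XYZ789", 90)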
Air pollution caused by cars is a serious problem that can be reduced by average everyday
people. If the citizens of earth don't act fast and reduce the amount of pollution caused
worldwide, earth will be a horrible place to live. If something isn't done soon there might not be a place
to live at all.
Works Cited
Brown, Lester R. The World Watch Reader: On Environmental Issues.
New York: Norton, 1991. 97-105.
The Environmental Protection Agency. Automobiles and Ozone: Factsheet OMS-4.
http://www.epa.gov/OMS WWW/04-ozone.htm. 1993. 4.
The Environmental Protection Agency. Automobile Emissions: An Overview: Factsheet
OMS-5. http://www.epa.gov/OMS www/05-autos.htm. 1994. 1-5.
The Environmental Protection Agency. Remote Sensing: A Supplemental Tool for
Vehicle Emission Control: Factsheet OMS-15. http://www.epa.gov/OMS www/
15-remot.htm. 1993. 1-2.
f:\12000 essays\sciences (985)\Enviromental\Dedicous Forrests.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Deciduous Forests
INTRODUCTION
A deciduous forest, simply described, is a forest that is leafless during the winter. Eury species make up this type of forest, meaning that these species can tolerate a wide range of conditions. In the extreme northern latitudes, the growing season is short, leaving the trees leafless for the majority of the year. The deciduous forest is subjected to distinct weather cycles and temperature shifts. In this area of the northeast we experience four distinct seasons, and for a tree species to thrive it must adapt to the stresses corresponding to each season.
Of the three basic types of temperate broadleaf forests, (temperate deciduous forest, temperate woodlands, and temperate evergreen forest) our lab data deals with characteristics of the temperate deciduous forest. This forest type once covered large portions of Eurasia, South America, and North America. As with most native forests, they have been cleared so that the land could be used for farming or residential use. The temperate deciduous forests of North America were more diverse than the same type of forests in Europe due to glacial history. Glacial action dumped till as the ice edge retreated, and North America inherited a fertile soil base. Soil type is an important factor for which species of trees can thrive in an area. The general dominant tree species for temperate deciduous forests are Beech, Ash, Oak, and in our region also Tulip, Maple, Birch, and Hickory.
Developed forests consist of four layers: canopy, sub canopy, shrub, and ground cover. This layering effect benefits the diversity of the ecosystem by providing a rich variety of habitats. It is a result of adaptation and competition for sunlight and shows the continuing process of succession. The stratification of a forest, by intercepting some of the available sunlight at various levels, also creates micro-climates with a wide range of temperatures and moisture conditions. The soil composition also greatly influences the amount of water that is available to the plant species. The composition of the soil, the development of its various layers, and its nutrient content are major factors in the survival of specific species of trees. Climate and soil type are a-biotic factors, meaning they are outside the species itself and beyond its control. Insect infestations such as gypsy moths and diseases such as the chestnut blight are biotic factors that, in a relatively short period of time, can severely thin out or destroy a specific species of tree. Such a stress on one species may be enough for a competing species to out-compete it and then dominate.
The cycle of dropping the leaves when the days grow short is vital for the replenishment of nutrients in the soil. The litter layer decomposes and returns organic material, through leaching and decomposition, to the upper soil layers, where it can be reused by the trees through re-absorption at the roots.
METHODS
This lab involved the investigation of a deciduous forest located on the undeveloped portion of the campus. The survey techniques used to collect data for the vegetation analysis portion of this lab were the quadrant and line intercept methods. Using pre-established 25-meter-square plots on opposite sides of a stream, the tree species and sizes were mapped and recorded. Breast-height diameter measurements were made on the canopy and sub canopy trees in each quadrant. The types of trees found and the number per species were recorded and used to determine which species were dominant. Each quadrant also used a random line intercept of 10 meters in length to determine the density of the bush coverage. A soil analysis of both sides of the creek was also conducted to determine the effects of a-biotic conditions on the species recorded in the vegetation analysis. Multiple samples of the A1 and A2 horizons were collected and analyzed using standard screening and drying stages to determine soil particle size and moisture content. The specific gravity, measured with a hydrometer while the soil particles settle out in a flask, is used to calculate the percentages of sand, silt, and clay in the soil samples. These sampling techniques were derived from exercises #14 and #40 in: George W. Cox, Laboratory Manual of General Ecology, seventh edition, W.C. Brown Publishing, 1996.
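As a rough sketch of the arithmetic behind the drying and screening steps, the Python fragment below computes gravimetric moisture (mass of water lost on drying over oven-dry mass) and simple texture percentages from invented sample masses; the numbers are hypothetical, and the lab's actual hydrometer method involves more steps than this percentage arithmetic.

    # Hypothetical A1-horizon sample masses, in grams.
    wet_mass = 52.4   # field-moist sample
    dry_mass = 44.1   # after oven drying

    moisture_pct = (wet_mass - dry_mass) / dry_mass * 100  # gravimetric moisture

    # Hypothetical dry masses per size class after screening / settling.
    fractions = {"sand": 26.5, "silt": 12.3, "clay": 5.3}
    total = sum(fractions.values())
    texture_pct = {name: mass / total * 100 for name, mass in fractions.items()}

    print(f"moisture: {moisture_pct:.1f} %")
    for name, pct in texture_pct.items():
        print(f"{name}: {pct:.1f} %")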
RESULTS & DISCUSSION
The data for this lab was analyzed in stages. In the two charts provided, the overall differences between the two sides of the forest can be seen.
The first chart compares the tree species found on each side of the forest and shows the relative dominance. Relative dominance compares the presence of one species to the total presence of all species located in the forest and expresses this value as a percentage. The overall trend in the relative dominance data shows a clear change from one side of the creek to the other. On the north side of the creek, the dominant tree species is the Beech, occupying over 50% of the canopy area. The Beech is also the most dominant species in the sub canopy layer. The data show that the Beech is doing well and has an assured future in the area, as indicated by the dominance of the same species in the sub canopy. On the northern sites the canopy is relatively well developed and has virtually no bushes, just saplings, mostly of the dominant parent Beech.
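Assuming relative dominance is figured from basal area computed from the recorded breast-height diameters, the calculation can be sketched as follows; the species and diameters listed are invented examples, not the plot data.

    import math
    from collections import defaultdict

    # (species, DBH in cm) for the canopy trees of one hypothetical quadrant.
    trees = [("Beech", 62), ("Beech", 48), ("Tulip", 75), ("Maple", 30), ("Beech", 55)]

    basal_by_species = defaultdict(float)
    for species, dbh_cm in trees:
        basal_by_species[species] += math.pi * (dbh_cm / 2) ** 2  # basal area in cm^2

    total_basal = sum(basal_by_species.values())
    for species, area in sorted(basal_by_species.items()):
        print(f"{species}: relative dominance {area / total_basal * 100:.1f} %")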
On the southern side of the creek, the Tulip tree is the dominant tree type in the canopy layer, occupying almost half of the area. In the sub canopy layer, the Sugar Maple is clearly dominant at 90%. It seems that the Tulip trees are at the end of their life cycle and are unable to produce offspring that would allow the species to continue to dominate the area, as no young Tulip saplings were found in the sub canopy. The southern side has a heavier bush layer because of the opportunities to gain sunlight through the failing canopy.
The second chart shows the soil analysis. The differences in the soil content are also clear. The A-1 horizon on the Beech side holds approximately 6% more water than the Tulip side. The amount of organics in the Beech side soil is also higher, by approximately one to two percent, than in the Tulip side soil. The specific gravity measurements indicated that the Tulip side is primarily sandy in the A1 horizon, but the Beech side data do not show a clear composition. Both the amount of water and the amount of organic nutrients in the soil are important a-biotic factors that can affect the ability of any species to thrive in an area. The northern side also contains a high concentration of large rocks, virtually absent from the Tulip side. This indicates a difference in the soil construction, given that soil is produced by the breakdown of local parent material.
This lab showed how the species in a mixed forest are influenced by a-biotic factors. The general trend of the data shows that there are distinct differences in the construction of the forest. The differences in soil composition may have pushed the Beech into a dominant state in its location. However, it would be difficult to say that the decline in the Tulip tree population is due to soil depletion alone. It may be due to the natural life span of the species or stress from a previous disease. The Tulips are not producing any new seedlings, possibly suggesting that the conditions that once allowed the Tulip to thrive no longer exist, and the current conditions now favor the Sugar Maples.
f:\12000 essays\sciences (985)\Enviromental\deforestation in Canada.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Deforestation
Table of Contents
Introduction
Important Facts
Historical Background
Background Law
Causes of Deforestation
The Green House Effect
Reducing Deforestation
Case Studies
Pros and Cons
Conclusion
Bibliography
Ninety percent of our trees, 300 to 900 years old, have been cut down. The remaining 10% is all we will ever have. Deforestation is a significant issue of our time and must be taken seriously if we want to protect our remaining forests. The Random House Dictionary of the English Language defines deforestation as "to divest or clear of forests or trees," and we must stop deforestation to save our planet. My intent in writing this essay is to enlighten the reader about the facts on deforestation and to express my opinions about it.
There are approximately 3 400 million hectares of forests in the world, nearly 25% of the world's land area. Close to 58% of the forests are found in the temperate/boreal regions and 42% in the tropics. For about a millennium, people have benefited from the forests. Forest products range from simple fuelwood and building poles to sophisticated natural medicines, and from high-tech wood-based manufactures to paper products. Environmental benefits include water flow control, soil conservation, and atmospheric influences. Brazil's Amazonia contains half of the world's tropical rain forests; these forests cover a region 10 times the size of Texas. Only about 10% of Brazil's rain forests have been cut to date, but cutting goes on at an uncontrollable rate.
Since pre-agricultural times the world's forests have declined by one fifth, from 4 to 3 billion hectares. Temperate forests have lost 35% of their area, subtropical woody savannas and deciduous forests have lost 25%, and evergreen forests, which are now under the most pressure, have lost the least area, 6%, because they were inaccessible and sparsely populated. Now, with new technology such as satellite systems, low-altitude photography, and side-looking radar, scientists estimate that the world is losing about 20.4 million hectares of tropical forests annually; if these figures are not reduced, we will lose all of our tropical forests in about 50 years. It has been suggested that the high deforestation rates are partly an artifact of the new surveys being more accurate, so that they reveal older deforestation that was not detected with earlier methods.
At first there was concern about deforestation only among foresters, but now the public has created organizations such as Greenpeace to help increase awareness and reduce deforestation. The Food and Agriculture Organization, or F.A.O., has worked mainly within the forest community to find new and better ways to manage the forests. Also, 1985 saw the introduction of the Tropical Forestry Action Plan, or T.F.A.P. This plan involved the F.A.O., United Nations development programs, the World Bank, other development agencies, several tropical country governments, and several non-governmental organizations. Together they developed a new strategy, and more than 60 countries have decided to prepare national forestry action plans to manage their forests.
Tropical deforestation has various direct causes: the permanent conversion of forests to agricultural land, logging, demand for fuelwood, forest fires, and drought. Slash-and-burn clearing is the single greatest cause of tropical rain forest destruction worldwide. Air pollution is also a major threat to the forests in the northern hemisphere and is expected to increase; reduced growth, defoliation, and eventual death occur in most affected forests. From 1850 to 1980 the greatest forest losses occurred in North America and the Middle East (-60%), South Asia (-43%) and China (-39%). The highest rates of deforestation per year are now in South America (1.3%) and Asia (0.9%).
Over the last two decades the world became interested in the loss of tropical forests as a result of expanding agriculture, ranching and grazing, fuelwood collection, and timber exportation. The consequences are increased soil erosion, irregular stream flow, climate change, and loss of biodiversity. Deforestation is second only to the burning of fossil fuels as a human source of atmospheric carbon dioxide, and almost all carbon releases from deforestation originate in the tropics. The global estimate of the amount of carbon given off annually by deforestation is 2.8 billion metric tons, and deforestation accounts for about 33% of the annual emissions of carbon dioxide by humans. In 1987, 11 countries were responsible for about 82% of this net carbon release: Brazil, Indonesia, Colombia, Cote d'Ivoire, Thailand, Laos, Nigeria, Vietnam, Philippines, Myanmar and India. During 1987, when there was intense land clearing by fire in Brazil's Amazon, more than 1.2 million metric tons of carbon are believed to have been released.
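As a quick consistency check on the two figures above, dividing the 2.8 billion metric tons attributed to deforestation by its roughly 33% share implies a total human carbon release of about 8.5 billion metric tons per year; the short Python sketch below does the arithmetic.

    # Check that the essay's two figures fit together.
    deforestation_release_gt = 2.8    # billion metric tons of carbon per year (from the text)
    share_of_human_emissions = 0.33   # deforestation's stated share of human CO2 emissions

    implied_total_gt = deforestation_release_gt / share_of_human_emissions
    print(f"implied total human carbon release: {implied_total_gt:.1f} billion metric tons/year")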
To save our remaining forests we have to learn three important principles: reduce, reuse, recycle; that is, lower the consumption of paper and paper products. Some examples are getting off junk mail lists, writing or photocopying on both sides of the paper, using cloth shopping bags, cloth napkins and towels instead of paper ones, cloth diapers, recycling waste paper, and buying recycled paper products. Another important way to reduce deforestation is to communicate our views to our elected representatives and build a movement toward forest protection. Finally, we should visit forests and learn to appreciate them as places of inspiration and recreation.
The following two case studies represent deforestation in the U.S. and in Canada. Northern California's Pacific Lumber Company used to be a timber operation that was an example of good forestry. The family-run firm harvested selectively from its 195 000 acres of redwoods. Besides looking after the forest, Pacific Lumber looked after its employees: many lived in the company town of Scotia, the company paid their kids' college tuition, and the company's controlled logging virtually guaranteed that the trees would last well into the next century. All that changed in 1985, when Charles Hurwitz of the New York based MAXXAM group bought the company and financed the take-over by issuing some $800 million in high-interest bonds. To pay the debt, Hurwitz doubled the rate of logging. Since the late 1980's huge tracts of land have been clear-cut. Economically the result has been a logging boom, which will be followed by an inevitable bust when the tall timber is gone. Ecologically, the logged land has been left bare.
The B.C. government owns nearly all the forest land and seems more inclined to support timber interests than to act as guardian of the land. Every day loggers cut down more than 1.5 square miles of old-growth forest. Few native tribes there have signed treaties with the Canadian government. After a struggle, the Haida nation in 1987 won the creation of a 350 000 acre park off South Moresby Island. A fight continues over 22 000 acre Meares Island, claimed by the native Clayoquot tribes. In 1984, a blockade by the Clayoquots (off Vancouver Island) turned back a boatload of loggers. The vigil to defend the island lasted for six months, when a court ruling prohibited further logging until the Clayoquots' claim to the land is settled.
In conclusion, regulated deforestation can supply us with lumber without completely destroying the forests, but deforestation driven purely by economics can permanently destroy our ecosystem. If deforestation is managed wisely, positive effects are possible: jobs would be created, the economy would be strengthened, expanding agriculture would provide much-needed resources to underdeveloped countries, and people from poor urban areas could be resettled. Proper deforestation also increases foreign exchange (for example, when our government promotes a new type of harvest and sells it to other countries).
Still, if deforestation is handled badly it will destroy forests, add to global warming, and destroy cultures. Bad deforestation degrades the land, and the economic benefits from unwise deforestation barely enrich the community while the money goes into the pockets of politicians or timber companies. Furthermore, there is the loss of local products such as fish, honey, game, and berries, and also of important species of plants that could help modern medicine.
I believe that if deforestation is not reduced soon, our ecosystem will be permanently damaged and we will have lost many of our resources. Until then, you might want to contact these organizations to find out more about our forests and become involved:
Association of Forest Service Employees for Environmental Ethics
P.O. box 11615
Eugene, OR 97440
(503) 484-2692
Global Relief
P.O. box 2000
Washington, DC 20013
National Wildlife Federation
1400 Sixteenth St. N.W.
Washington, DC 20036
(202) 797-6800
Bibliography
Zuckerman, Seth. Saving our Ancient Forests. Los Angeles: Living Planet Press, 1991.
Westoby, Jack. Introduction to World Forestry. New York: Basil Blackwell Ltd., 1989.
Gallant, Roy. Earth's Vanishing Forests. New York: Macmillan Publishing Company, 1991.
Kerasote, Ted. Canada: The Brazil of the North? Toronto: Sports Afield, 1994.
f:\12000 essays\sciences (985)\Enviromental\Deforestation of the Northwest.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Deforestation of the Pacific Northwest
One of the most controversial areas associated with the global problem of deforestation is the Pacific Northwest of the US. The problem can be broken down into several issues that all tie in together. These include the near extinction of the Northern Spotted Owl, the "business" aspect of logging versus the environmental aspect, and the role of the government in this problem.
In 1973, the Endangered Species Act (ESA) was passed. This enabled the Dept. of Commerce and the Dept. of the Interior to list species, either land or marine, as either threatened or endangered. Under these terms species can no longer be hunted, collected, injured or killed. The northern spotted owl falls under the more serious condition of being endangered. Also, the bill forbids federal agencies to fund or carry out any activity that would threaten the species or its habitat. It is the latter part of the bill that causes the controversy. Under the ESA, loggers should not be allowed to cut down the old growth of the forest. The old growth of a forest includes the largest and oldest trees, living or dead. In the case of the North Coast forests, this includes some thousand-year-old stands with heights above three hundred feet and diameters of more than ten feet.
In 1990, the number of spotted owls dropped to 2000 breeding pairs. The preservation of any species contributes to the biodiversity of an area. In an ecosystem, the absence of one species creates unfavorable conditions for the others, so the absence of the spotted owl could have a significant effect on the North Coast forest ecosystem. In order to send the owl population in the right direction, the major cause of its decline - loss of habitat - would have to be remedied. The owls' short life expectancy and late age of breeding only exacerbate the problem. When loggers remove old growth, the owl loses the habitat that supplies its food and shelter, as well as protection from predators.
Approximately ninety percent of the forests in the Pacific Northwest have already been harvested. In order to protect the current owl population, the remaining forests would have to be preserved, but this would have a serious negative economic effect. Such a decision would affect jobs, the regional economy, and the lifestyle of loggers. With such a great effect, stopping the cutting seems to be an exercise in futility. On the other hand, if the destruction of the owls' habitat continues, the only suitable habitat that will remain will be in the confines of a zoo. Seeing an animal in an artificial environment can certainly not be compared to witnessing an animal in its natural environment.
In my opinion, there can be no price put on the existence of any species on this planet, plant or animal. It is troubling that money has become such an influential part of our society that companies are willing to sacrifice a species in order to make a profit. The northern spotted owl is only one of many species that are on the verge of extinction due to deforestation.
Another important consideration in the deforestation of the Pacific North Coast is logging as a business. The sole concern of a publicly owned company's investors is the growth of their stock, and for lumber companies this is accomplished by harvesting trees in the most efficient and cost-effective manner. Clear-cutting old growth is the best way to accomplish this. This approach leads to quick financial gain but is not best for the long term or for the trees. It is the companies that use this process that are the most harmful to the forests and contribute the most to deforestation.
Another approach uses wise management techniques to maximize the long-term profit of the forest. Guest speaker Jerry Howe, a private landowner, falls into this category. As a land "steward," he believes he can do what he wants with his land, though the term "steward" is used to mean that no one can truly "own" the land; it can only be used or placed under the care of a person. He uses clear-cutting when it has the smallest effect on the environment, and he also uses strip cutting, in which the forest is cut in strips to provide a buffer zone and a more aesthetically pleasing result. His methods are better for the forest because his conservative forestry practices speed up the regeneration of the forest. This produces a more sustainable yield than clear-cutting alone.
While neither of these techniques is good for the environment, using wise management practices can still produce a large profit while conserving precious ecosystems. For large companies such as Pacific Lumber, switching to conservative forestry practices would take more than proposals by environmentalists and the Forest Service; it would cost them money to follow the more sustainable approach while also decreasing their profit, since less tree cutting means less revenue in the short term. In my opinion, it is up to the government to set standards that force these companies to switch, by making regulations stricter and, if need be, more numerous.
The role of the government in the deforestation issue has been two-sided. This is evident in the several different stances Congress has taken on the issue. These include: 1) the preservation of the forests for the public, including their aesthetic value; 2) the conservation of the forests to support the timber industry in the future; and 3) the protection of the right of private landowners to cut down as many trees as they want, with no limit. With indecisiveness like this there is no hope of setting regulations that protect the forests.
On one side of the government lies the "alphabet soup" of federal agencies set up to find solutions to questions like, "What is the sustainable yield of a forest?" These same agencies also decide where taxpayers' money goes within the logging business. In some cases, the money subsidizes the large companies for things such as logging roads in order to keep the cost of paper and other tree products down. These same companies ship their lumber to Japan for milling before it is sold back to the United States at a higher price. Not only does the public lose money in this process, but it also costs Americans a number of jobs.
On the other hand, agencies have made efforts to prevent deforestation. Members of the Forest Service educate not only the large companies, but the private landowners as well; it is private owners who hold sixty percent of the forests being harvested. By showing how conservative forestry techniques can be made efficient as well as more profitable, they are helping to diminish the rate of deforestation. If more money were spent on research and on the spread of new and better techniques, the taxpayers' money would be better used.
In conclusion, there are several aspects of deforestation in the Pacific Northwest that need to be evaluated before the situation becomes irreversible. If the current harvesting techniques continue, our children will be missing more than the spotted owl.
f:\12000 essays\sciences (985)\Enviromental\Effects of Deforestation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The subject of deforestation and the effects it has on the environment has been heavily debated for a long time, particularly over the last few years. Governments and large lumber companies see large profits in the mass deforestation of forests and state that their actions are having few, if any, harmful effects on the environment. Most people disagree and think that the environmental effects are devastating and will become irreversibly disastrous in the very near future. Whether the pros outweigh the cons will be hotly debated for years to come, but the fact is that deforestation is harmful to the environment and leads to declining wildlife populations, drastic changes in climate and loss of soil.
The loss of forests means the loss of habitats for many species. Current statistics show that as many as 100 species become extinct every day, with a large portion of these extinctions attributed to deforestation (Delfgaauw, 1996). "Edge effects" are the destruction or degradation of natural habitat that occurs on the fringes of fragmented forests. The effects on the animals include greater exposure to the elements (wind, rain, etc.), to non-forest animals and to humans (Dunbar, 1993). This unnatural extinction of species endangers the world's food supply, threatens many human resources and has profound implications for biological diversity.
Another negative environmental impact of deforestation is that it causes climate changes all over the world. As we learned in elementary school, plant life is essential to life on earth because it produces much of the oxygen that humans and other organisms require to breathe. The massive destruction of trees negatively affects the quantity and quality of the air we breathe, which has direct repercussions on the quantity and quality of life among humans and animals alike. With this reduced amount of vital plant life comes an increase of carbon dioxide levels in the earth's atmosphere, and with these increased levels of CO2 come unnatural changes in weather patterns both locally and globally. "The removal of forests would cause rainfall to decline more than 26%. The average temperature of soil will rise and a decline of 30% in the amount of moisture will evaporate into the atmosphere" (Delfgaauw, 1996). This leads to the global warming phenomenon, which is also directly related to the declining amount of forest area on the earth.
Soil erosion caused by deforestation is also a major concern among even the most amateur environmentalists:
"When rain falls, some may sink to the ground, some may run off the
surface of the land, and flowing down towards the rivers and some may
evaporate. Running water is a major cause of soil erosion, and as the
forests are cut down, it increases erosion" (Delfgaauw, 1996).
The removal of wood causes nutrient loss in the soil, especially if the period between harvests isn't long enough (Hamilton and Pearce, 1987). Some areas also become "unbalanced" with the removal of tree roots, as this removal can cause serious mudslides and instability, which can be seen in the tropical rain forests of Australia (Gilmour et al., 1982; as cited in Hamilton and Pearce, 1987) and Malaysia (Peh, 1980; as cited in Hamilton and Pearce, 1987). It should be mentioned that recent logging techniques have decreased the amount of soil erosion under most circumstances, but it is nearly impossible to stop erosion from happening.
Whether you are a radical environmentalist or just a regular citizen, the consequences of deforestation affect us all. Living in BC, we don't have to drive very far to see land that has been clear-cut or to see massive protests by people of all ages who want to "save the forests" or "save the environment". It is evident that reforestation projects are underway and in many cases are quite successful. Millions of dollars are spent each year (provincially, nationally and internationally) on reforestation, and many experts agree that this is helping, provided that the time between harvests is long enough for the area to mature properly. The projections we hear through the media make the situation sound quite bleak, but the fact is that private and public awareness have led to a decreasing amount of deforestation activity (compared with what was projected) in many areas, such as the Brazilian Amazon Basin (Dunbar, 1993). Forests are an important part of maintaining the earth's biological and ecological diversity as well as major factors in the economic well-being of many areas. If we can maintain a balance between the two and continue the reforestation efforts, the negative environmental effects could be greatly reduced. Regardless, the negative environmental effects do exist, and their severity will be debated for many years to come.
f:\12000 essays\sciences (985)\Enviromental\El Nino .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Typically, the level of ocean water is higher in the western Pacific and lower in the eastern Pacific, near the western coast of South and North America. This is due primarily to the presence of easterly winds in the Pacific, which drag the surface water westward, raising the thermocline nearly all the way up to the surface in the east and depressing it in the west. During El Nino conditions, however, the easterlies weaken and retreat eastward, reducing the interaction between wind and sea and allowing the thermocline to become nearly flat and to sink well below the surface in the east, which lets the water there grow warm and expand.
With the help of the National Oceanic and Atmospheric Administration's weather satellites, tracking shifting patterns of sea-surface temperatures can be made easier. Normally, a "pool" of warm water exists in the western Pacific. Under El Nino conditions, this "pool" drifts southeast towards the coast of South America. This is because, in a normal year, there is upwelling along the western South American coastline, and cold waters of the Pacific rise and push westward. During an El Nino year, however, upwelling is suppressed, and as a result the thermocline is lower than normal in the east. Finally, the thermocline rises in the west, making upwelling there easier and the water colder.
Air pressures at sea level in the South Pacific seesaw back and forth between two distinct patterns, a seesaw known as the Southern Oscillation. In the high index phase, pressure is higher near and to the east of Tahiti than farther to the west near Darwin. This east-west pressure difference along the equator causes the surface air to flow westward. When the atmosphere switches into the low index phase, barometers rise in the west and fall in the east, signaling a reduction, or even a reversal, of the pressure difference between Darwin and Tahiti. The flattening of the seesaw causes the easterly surface winds to weaken and retreat eastward. The "low index" phase is usually accompanied by El Nino conditions.
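The seesaw described above can be illustrated with a small Python sketch that computes the raw Tahiti-minus-Darwin sea-level pressure difference. The monthly readings below are hypothetical values invented for illustration, and the official Southern Oscillation Index is built from standardized anomalies rather than raw differences; only the sign convention here follows the paragraph above.

    # A minimal sketch of the Tahiti-minus-Darwin pressure seesaw described above.
    # The monthly sea-level pressures below are hypothetical illustrative values
    # (in millibars). Positive differences go with the high index phase, negative
    # differences with the low index (El Nino) phase.

    tahiti_slp = [1012.8, 1012.1, 1011.0, 1009.9, 1009.5]   # hypothetical readings
    darwin_slp = [1009.6, 1010.0, 1010.4, 1011.2, 1011.8]   # hypothetical readings

    for month, (t, d) in enumerate(zip(tahiti_slp, darwin_slp), start=1):
        diff = t - d
        phase = "high index (normal easterlies)" if diff > 0 else "low index (El Nino-like)"
        print(f"month {month}: Tahiti - Darwin = {diff:+.1f} mb -> {phase}")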
The easterly winds along the equator and the southeasterly winds that blow along the Peru and Ecuador coasts both tend to drag the surface water along with them. The Earth's rotation then deflects the resulting surface currents toward the right (northward) in the Northern Hemisphere and to the left (southward) in the Southern Hemisphere. The surface waters are therefore deflected away from the equator in both directions and away from the coastline. Where the surface water moves away, colder, nutrient-rich water comes up from below to replace it, a process called upwelling. The winds that blow along the equator also affect the properties of the upwelled water. If there were no wind, the dividing layer between the warm surface water and the deep cold water would be almost flat; but the winds drag the surface water westward, raising the thermocline nearly all the way up to the surface in the east and depressing it in the west. The resulting changes in sea-surface temperature in turn have an effect on the winds. When the easterlies are blowing at full strength, the upwelling of cold water along the equatorial Pacific chills the air above it, making it too dense to rise high enough for water vapor to condense to form clouds and raindrops. As a result, this part of the ocean stays essentially free of clouds during normal years, and the rain in the equatorial belt is mostly confined to the extreme western Pacific. However, when the easterlies weaken and retreat eastward during the early stages of an El Nino event, the upwelling slows and the ocean warms. The moist air above also warms and produces deep clouds that bring heavy rain along the equator. The change in ocean temperatures thus causes the major rain zone over the western Pacific to shift eastward. In this way, the dialogue between wind and sea in the Pacific can become more and more intense.
Normally, each area of the globe follows a fairly predictable pattern and receives only the amount of rainfall it is accustomed to receiving. Conditions are quite different during El Nino, however. During normal years, when the easterly winds along the equator are blowing at full strength, this strip of ocean stays free of clouds, and the rain in the equatorial belt is largely confined to the extreme western Pacific, near Indonesia. But when the easterlies weaken and retreat eastward during El Nino years, the moist air above the ocean becomes buoyant enough to form clouds, and the clouds produce heavy rains along the equator. These rains are only some of the many weather changes that occur all over the globe during an El Nino event, and many of those changes have caused great amounts of damage. In 1982-1983, El Nino resulted in 100 inches of rain falling during a six-month period on Ecuador and northern Peru. The rain transformed the coastal desert into a grassland mottled with lakes. That same El Nino also caused typhoons to hit Hawaii and Tahiti. The monsoon rains that fell over the central Pacific, instead of on the western side, led to terrible droughts and forest fires in Indonesia and Australia. Winter storms also struck southern California and caused a great deal of flooding across the southern United States, while northern regions of the USA received unusually mild winters and a lack of snow. Obviously, El Nino events have quite an effect on global weather patterns. Hopefully, as scientists develop better models, they will soon be better able to understand and make predictions about this curious event.
Normally, the thermocline is quite high in the eastern Pacific. Stirring by the wind mixes the nutrient-rich water below with the surface water. In the presence of sunlight, phytoplankton, tiny green plants containing chlorophyll, can grow; in turn, they feed zooplankton, which feed higher members of the food chain. During El Nino conditions, the water level rises in the east and lowers in the west, forcing many changes among the plant and animal life. Sea birds in the east must leave their nests, abandoning their young and searching for food that is not there, because the critical upwelling that supplies the plankton and other lower members of the food chain is absent. Water temperature is above normal, and tropical fish are displaced poleward or migrate, along with the anchovies and sardines. On land, the effects include a great amount of rainfall, turning the desert lands into grassland with lush vegetation and abundant life. Grasshoppers come, fueling toad and bird populations, and the increase in rainfall produces lakes which fish come to inhabit, fish that had migrated upstream during floods produced by the rain and somehow become trapped. In some flooded coastal cities, shrimp production set records. So too did the number of mosquito-borne malaria cases.
f:\12000 essays\sciences (985)\Enviromental\Endangered Manatees.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Endangered Manatee
New York Times April 11, 1996
The manatee population has suffered a devastating blow so far this year. About 221 manatees have already been found dead, and of those 221, 128 have no obvious cause of death. Marine biologists have been unable to find the cause of these deaths. They suspect that the red tide off the coast of southwest Florida has some connection with these mysterious deaths. A red tide is a bloom of flagellates which are deadly to marine life but cause no harm to the human population. Another possible cause is a virus that is unknown to scientists. Many people believe that this problem will eventually stop and the manatee population will flourish, but others are rather pessimistic.
Ecological Problem: The manatee population is quickly dying off and unless they make an astonishing comeback, they will soon be extinct.
Ecological Solution: A possible solution to this problem is to move the manatees from their current habitat into a similar habitat, away from the "source" of the problem. If the manatees survive there, that tells scientists that the problem was in their habitat. If they still die at their current rate, that would tell the scientists that the manatees have a deadly unknown virus. If it is a virus, the scientists can try to devise some sort of medicine to defeat it.
f:\12000 essays\sciences (985)\Enviromental\Energy Flow Systems.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Energy Flow Systems
Richard White's Organic Machine, and William Cronon's Changes in the Land, both examine environments as energy flow systems. The energy flow model was utilized by the authors to explain relationships within ecosystems.
Richard White's thesis is to examine the river as an organic machine, as an energy system that, although modified by human intervention, maintains its natural, its "unmade" qualities. White emphasizes energy because it is a useful concept that can be easily understood. He says, "the flow of the river is energy, so is the electricity that comes from the dams that block that flow. Human labor is energy; so are the calories that are stored as fat by salmon for their journey upstream." White notes that energy is as concrete as salmon, human bodies, and the Grand Coulee Dam. White wants his readers to think about nature and its relationship with humanity.
White explains how the river is energy. The Columbia River works as gravity pulls it to the Pacific Ocean, continuously cutting into the terrain that it flows through. Over millions of years, water rushing through the Columbia Basin formed the Columbia River. Water carries soil, silt, and debris downstream, and the constant movement of material in the river cuts and shapes the river basin into the land. This movement is a slow and inefficient use of energy. According to White, only two percent of the water's potential energy results in the work of erosion. The other ninety-eight percent is lost as water molecules rub against themselves, the river bed, and the river banks; this energy is released as heat into the river.
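White's two-percent figure can be made concrete with a small Python sketch of the potential-energy bookkeeping. The parcel mass and total drop below are hypothetical round numbers chosen only for illustration; the 2%/98% split between erosion and frictional heat is the figure cited above.

    # A minimal sketch of the potential-energy bookkeeping described above.
    # The parcel mass and the total drop are hypothetical illustrative values;
    # the 2% erosion / 98% friction split is the figure White cites.

    g = 9.81                 # gravitational acceleration, m/s^2
    mass_kg = 1000.0         # one cubic metre of water (hypothetical parcel)
    drop_m = 400.0           # hypothetical total fall of the parcel, in metres

    potential_energy_j = mass_kg * g * drop_m      # E = m * g * h
    erosion_work_j = 0.02 * potential_energy_j     # ~2% does the work of erosion
    friction_heat_j = 0.98 * potential_energy_j    # ~98% is lost as heat to friction

    print(f"Potential energy released: {potential_energy_j/1e6:.2f} MJ")
    print(f"  work of erosion (2%):    {erosion_work_j/1e6:.2f} MJ")
    print(f"  frictional heat (98%):   {friction_heat_j/1e6:.2f} MJ")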
Often the energy of flowing water goes unrecognized, but there are occasions when rivers do show their power in destructive ways, usually through floods, and even more so in flash floods. Thousands of years ago, an ice dam holding back the glacial Lake Missoula broke and created the largest known freshwater flood in earth's history. The flood rushed into the Columbia channel and created the Grand Coulee and other rock channels that would have taken the Mississippi River three hundred years at full flood to create.
Salmon are also a part of the Columbia energy model. As the river works its way downward to the Pacific Ocean, the salmon work their way up the Columbia to spawn. The energy in salmon can be measured by their body fat and caloric value. Salmon start their run upstream prepared for the long, hard journey; their bodies have stored fat and oil after a year's worth of feeding at sea. The stored energy in the salmon is spent as they battle head to head against the force (energy) of the Columbia River. As the salmon work upstream, they use their stored energy and their bodies become leaner. When the salmon reach their destination, they are in ill condition. The skinny salmon lay their eggs and die of exhaustion.
Work and energy also link humans to the Columbia River energy model. Alexander Ross and his crew learned how powerful the river was in 1811, when they attempted to enter the mouth of the Columbia from the Pacific. Ross learned that the river's current and the ocean's tide work against each other, creating an astonishing amount of friction. Fresh water is pushed several miles out to sea, and the ocean tides can be felt one hundred and forty miles upriver. The tide forms sandbars at the mouth of the river, and the current crashing on them produces huge waves and foaming breakers. These breakers formed barriers that Alexander Ross and his crew had to cross.
Human energy challenged the energy of the river mouth in 1811. The first attempt to cross the barrier was a failure. Ross's friend Fox and his crew were lost while battling the waves of the seemingly unapproachable mouth of the Columbia. Ross and his crew with will and muscle somehow survived the force of the tide and current and made it across the river's mouth for the first time.
White explains Ross's struggle to enter the river in terms of the energy cycle. Lunar energy causes the ocean tide, and the sun provides all of the remaining energy of the cycle. The sun heats the atmosphere, which heats and evaporates the ocean water and provides the wind to move the moisture to the mountains. The clouds cool and the moisture is released as rain or snow that falls to the land. Gravity pulls the water back to the ocean through the rivers, and the process starts over again.
Man attempts to slow down the natural energy cycle to extract energy from the river by building dams. Dams are used to store and regulate water that is used to turn turbines. These turbines power generators that produce electricity.
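The conversion of stored water into electricity that this paragraph describes is commonly estimated with the formula P = rho x g x Q x H x eta (water density times gravity times flow times head times efficiency). The Python sketch below applies it with hypothetical numbers that are not meant to describe Grand Coulee or any particular dam.

    # A minimal sketch of the standard hydroelectric power estimate
    # P = rho * g * Q * H * eta. The flow rate, head, and efficiency below are
    # hypothetical illustrative values, not figures for any particular dam.

    rho = 1000.0      # density of water, kg/m^3
    g = 9.81          # gravitational acceleration, m/s^2
    flow_m3s = 2000.0 # hypothetical flow through the turbines, m^3/s
    head_m = 100.0    # hypothetical height the water falls (head), m
    efficiency = 0.9  # hypothetical combined turbine/generator efficiency

    power_watts = rho * g * flow_m3s * head_m * efficiency
    print(f"Estimated output: {power_watts/1e6:.0f} MW")   # roughly 1766 MW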
At first, hydroelectricity was produced in abundance; electricity was a product without much demand. Soon farmers used electricity to light their homes and to run small electrical devices such as toasters, irons, and washing machines. Electricity was also used to light city homes, factories, and streets. The hydroelectric companies still needed more customers to consume the electricity being produced. Californians bought electricity, and soon major industries were attracted to the Columbia area because of the abundance of electricity. Electricity was a necessity for the production of aluminum, and aluminum and electricity were the perfect combination for the production of airplanes. U.S. plane manufacturers are centrally located in the Columbia area because of this unique utopia of energy.
Later, energy in the Columbia River basin was also produced by nuclear means. The Columbia River was first a sewer for radioactive waste produced by uranium processing, and it was also used to cool nuclear reactors. The result of the waste dumping was contamination, and the result of the Columbia cooling the nuclear reactors was that the river's temperature rose when the warm water was returned to the river.
White's model of the energy cycle in the Columbia River Basin describes how energy is a naturally recurring process that is altered by man but can never be destroyed.
William Cronon also uses an energy flow cycle in his book Changes in the Land. Cronon describes how the Indians and the Colonists create different cycles with the same environment.
The European farmers cleared the forest for fields to plant corn and grain, and they also cleared land for their animals to graze on. The corn and grain growing in the fields took energy from the rich soil and the water; this energy was then passed to humans or animals that ate the food. The animals that grazed the land took energy from the grass and water. They too would pass energy to humans, in the form of labor or in the form of food.
The energy in the soil came from the trees that held the water and rich topsoil in place. As the trees were cut, the valuable topsoil was washed away by the rain and snow and easily carried into the streams and rivers. Soon the soil dried out and became lifeless, because the energy system had been disturbed.
Cronon's solution to the Europeans' problem was to sustain the farmland. Wood should be cut only when necessary, and new trees should be planted in its place to preserve the soil. On farmland, crops need to be rotated in order to sustain sufficient minerals in the soil for the harvest. The trees will hold topsoil and, importantly, water in the ground. Sustaining the existing fields avoids the need to clear other fields for farm use.
American Indians manipulated their natural environment in a different way. Indian women were the farm workers; they grew their crops among the trees, which held the soil and water. Indian women would also grow many crops together, which created a balance of mineral and water replacement. Indian men hunted for meat instead of grazing domesticated animals. Indians would create havens for game by burning the forest floor annually; by doing this, they encouraged the growth of small shrubs for the animals to hide in without destroying the forest. This sustained yield of crops and animals supported the Indian lifestyle until it was disturbed by European influences.
White and Cronon both use energy flow systems to explain environmental history. Energy is easy to trace in history because man has used it and changed it throughout time. Energy sustains life and is an everlasting cycle.
f:\12000 essays\sciences (985)\Enviromental\Enviromentel Pollution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Science report
Environmental Pollution
Automobiles are driven around the world every day, and their exhaust pollutes our air every day.
Our environment is a major aspect of our life today. Many of us don't take our Earth seriously and think that as long as pollution doesn't hurt them, they can go ahead and throw garbage on the ground or spill oil down the drain. Too many people have that attitude, and they are killing off our Earth while also physically harming themselves through the air they breathe and the water they swim in. Our Earth is fragile like a human, and people don't realize it. There are many different types of environmental pollution (e.g., water, air, atmospheric).
Scientists believe that all cities with populations exceeding 50,000 have some degree of air pollution. Burning garbage in open dumps causes air pollution, and it also smells pretty bad. Air pollution comes from many different sources. One of the major pollutants is carbon monoxide, which mainly comes from automobiles, but the burning of fossil fuels, CFCs, and other sources also contribute. Air pollution does not leave the Earth; it all gets trapped in the atmosphere. This doesn't bother most people, who think that it will not harm them. People burn down forests, burn fossil fuels, and release CFCs from aerosols, and every bit of this harms our atmosphere. Factories and transportation depend on huge amounts of fuel; billions of tons of coal and oil are consumed around the world every year. When these fuels burn they introduce smoke and other, less visible, by-products into the atmosphere. Although wind and rain occasionally wash away the smoke given off by power plants and automobiles, the cumulative effect of air pollution poses a grave threat to humans and the environment. A big example of smog is LA, where you can see the smog just hovering above the city. I don't think any human alive should be subject to that kind of environment. Scientists have also discovered a high level of ozone depletion over the South Pole.
A computer-enhanced map, taken from satellite observations of ozone levels in the atmosphere over the South Pole, shows the region of ozone depletion that has begun to appear each spring over Antarctica.
The map shows a big red spot of depletion right above the South Pole. If this depletion opens up, dangerous and deadly UV rays from the sun will reach the Earth. Air pollution also causes global warming, which scientists believe is making the Earth warmer and melting ice at the South and North Poles. In Holland, ocean water has risen too high and flooded into towns; Holland spent millions of dollars to put up dikes, which are big barriers in the water, to prevent its towns from being completely submerged. With the ocean getting deeper, coastal cities all around the world could flood. Billions of dollars would be spent to try to prevent it, but eventually it could not be stopped. Instead of waiting and having to spend all this money, why don't we put it together today and try different ways of preventing air pollution? It would be much easier than all the trouble of stopping flooding.
Water pollution is another major aspect of environmental pollution. Water pollution is frightening because about 70% of our Earth is covered by ocean. Water pollution comes from many different sources around the world. One major form of pollution that destroys the ocean is the oil spill. The oil from an oil spill kills hundreds of sea animals, from fish to whales to birds. Below is a small list of just some of the major oil spills. Notice how many tons were spilled into our ocean...
Notable Oil Spills
Date | Location | Description | Tons spilled
Jan.-June, 1942 | East coast of U.S. | German U-boat attacks on tankers after | 590,000
March 18, 1967 | Land's End, Cornwall, England | Grounding of 'Torrey Canyon' | 119,000
June 13, 1968 | South Africa | Hull failure of 'World Glory' | 46,000
Nov. 5, 1969 | Massachusetts | Hull failure of 'Keo' | 30,000
March 20, 1970 | Trälhavet Bay, Sweden | Collision of 'Othello' with another ship | 60,000 to 100,000
Dec. 19, 1972 | Gulf of Oman | Collision of 'Sea Star' with another ship | 115,000
May 12, 1976 | La Coruna, Spain | Grounding of 'Urquiola' | 100,000
Dec. 15, 1976 | Nantucket, Mass. | Grounding of 'Argo Merchant' | 26,000
Feb. 25, 1977 | Pacific Ocean | Fire aboard 'Hawaiian Patriot' | 99,000
March 16, 1978 | Portsall, France | Grounding of 'Amoco Cadiz' | 223,000
July 19, 1979 | Trinidad and Tobago | Collision between 'Atlantic Empress' | 300,000
Nov. 1, 1979 | Galveston Bay, Tex. | Collision of 'Burmah Agate' with | 36,000
Aug. 6, 1983 | Cape Town, South Africa | Fire aboard 'Castillo de Beliver' | 250,000
March 24, 1989 | Prince William Sound, Alaska | Grounding of 'Exxon Valdez' | 34,000
Jan. 25, 1991 | Sea Island, Kuwait | Iraq deliberately dumped oil into | 1,450,000*
Dec. 3, 1992 | Spain | Grounding of 'Aegean Sea' | 84,000
Jan. 5, 1993 | Shetland Islands | Grounding of 'Braer' | 87,000
Over a million tons of oil have been spilled into the ocean, and we are not able to clean all of it out. Most of the oil stays on the surface, so when fish come up to the surface the oil gets onto their bodies and into their gills. Birds land on the water to catch fish or take a break; the oil gets on their feathers and they become too weighted down to fly off, and they later grow too tired and die, either from poisoning by the oil or from their bodies simply not being able to take it.
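As a check on the table above, the short Python sketch below simply totals the tonnage column. Where the table gives a range (the 'Othello' collision), the lower bound is used, and the asterisked Kuwait figure is included as listed; the listed spills alone add up to roughly 3.6 million tons, comfortably supporting the "over a million tons" statement.

    # A minimal sketch that totals the tonnage column of the table above.
    # Where the table gives a range, the lower bound is used.

    spills = {
        "U-boat attacks, 1942": 590_000,
        "Torrey Canyon, 1967": 119_000,
        "World Glory, 1968": 46_000,
        "Keo, 1969": 30_000,
        "Othello, 1970 (lower bound)": 60_000,
        "Sea Star, 1972": 115_000,
        "Urquiola, 1976": 100_000,
        "Argo Merchant, 1976": 26_000,
        "Hawaiian Patriot, 1977": 99_000,
        "Amoco Cadiz, 1978": 223_000,
        "Atlantic Empress, 1979": 300_000,
        "Burmah Agate, 1979": 36_000,
        "Castillo de Beliver, 1983": 250_000,
        "Exxon Valdez, 1989": 34_000,
        "Sea Island, Kuwait, 1991": 1_450_000,
        "Aegean Sea, 1992": 84_000,
        "Braer, 1993": 87_000,
    }

    total = sum(spills.values())
    print(f"Total from the spills listed above: {total:,} tons")   # well over 3 million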
Another way our ocean is polluted is also through human carelessness. We litter the ocean with garbage from boats and pour oil down the storm drains. The garbage spreads throughout the ocean, where it poisons animals and fish, and birds get caught up in it. When those little plastic rings that hold soda cans together get into the water, birds get them caught in their throats and fish get caught in them. There are many different things that can hurt sea life.
Fish in the sea are directly affected by our actions. Pollution in the sea also affects the food chain: when fish and other organisms die off, the fish that feed on them go hungry, and so on. Preventing water pollution is an easy task; people just have to be aware of what they throw into the water and find a garbage can instead.
I think that all kinds of environmental pollution can be stopped if we all use our heads and just think before we throw a piece of trash on the ground: throw it into a nearby garbage can instead. We should look at our Earth as precious and treat it as if it were a child of our own. We should not trash it and take advantage of it. If we abuse our Earth now, who knows how it will get back at us in the future. Saving the Earth is a simple task, and I think everyone should be involved in it, rich or poor. If we don't save our Earth now, someday it will be too late. There are programs out there that try to save the Earth, but not enough people cooperate in these programs. If more people supported and joined these programs, maybe our world wouldn't be in such danger of dying. If our Earth dies, it will surely take us all with it.
BIBLIOGRAPHY
Carleson, Lavonne. Environmental Health. New York: Chelsea House Publishers, 1994.
Tyson, Peter. Acid Rain. New York: Chelsea House Publishers, 1992.
Barass, Karen. Clean Water. New York: Chelsea House Publishers, 1992.
"Environmental Pollution" Comptons Interactive Encyclopedia 1996
"Smog" Encarta Encyclopedia 1996
f:\12000 essays\sciences (985)\Enviromental\Estuaries.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An estuary is a coastal area where fresh water from rivers and streams mixes with
salt water from the ocean. Many bays, sounds, and lagoons along coasts are estuaries.
Portions of rivers and streams connected to estuaries are also considered part of the
estuary. The land area from which fresh water drains into the estuary is its watershed.
Estuaries come in all shapes and sizes, each unique to their location and climate. Bays,
sounds, marshes, swamps, inlets, and sloughs are all examples of estuaries.
An estuary is a fascinating place from the largest landscape features to the smallest
microscopic organisms. When viewing an estuary from the air, one is practically amazed by
dramatic river bends as freshwater finds its way back to the sea. The vast expanse of
marsh grasses or mudflats extend into calm waters that then follow the curve of an
expansive barrier beach. Wherever there are estuaries, there is a unique beauty. As rivers
meet the sea, both ocean and land contribute to an ecosystem of specialized plants and
animals.
At high tide, seawater changes estuaries, submerging the plants and flooding creeks,
marshes, pannes, mudflats or mangroves, until what once was land is now water.
Throughout the tides, the days and the years, an estuary is cradled between outreaching
headlands and is buttressed on its vulnerable seaward side by fingers of sand or mud.
Estuaries transform with the tides, the incoming waters seemingly bringing back to
life organisms that have sought shelter from their temporary exposure to the non-aquatic
world. As the tides decline, organisms return to their protective postures, receding into
sediments and adjusting to changing temperatures.
The community of life found on the land and in the water includes mammals, birds,
fish, reptiles, shellfish, and plants all interacting within complex food webs. Flocks of
shore birds stilt through the shallows, lunging long bills at their abundant prey of fish,
worms, crabs or clams. Within the sediments, whether mud, sand or rocks, live billions of
microscopic bacteria, a lower level of the food web based largely on decaying plants.
Estuaries are tidally-influenced ecological systems where rivers meet the sea and
fresh water mixes with salt water. Estuaries provide habitat; tens of thousands of
birds, mammals, fish, and other wildlife depend on estuaries. Most marine organisms,
including most commercially valuable fish species, depend on estuaries at some point
during their development. Where productivity is concerned, a healthy, untended estuary
produces from four to ten times the weight of organic matter produced by a cultivated
corn field of the same size. Estuaries provide water filtration; water draining off the
uplands carries a load of sediments and nutrients. As water flows through salt marsh peat
and the dense mesh of marsh grass blades, much of the sediment and nutrient load is
filtered out. This filtration process creates cleaner and clearer water. Estuaries also
provide flood control. Porous, resilient salt marsh soils and grasses absorb flood waters
and diffuse storm surges. Salt marsh dominated estuaries provide natural buffers between
the land and the ocean. They protect upland organisms as well as billions of dollars of
human real estate.
Estuaries are crucial transition zones between land and water that provide an
environment for lessons in biology, geology, chemistry, physics, history, and social issues.
Estuaries are significant to both marine life and people. They are critical for the survival
of fish, birds, and other wildlife because they provide safe spawning grounds and
nurseries. Marshes and other vegetation in the estuaries protect marine life and water
quality by filtering sediment and pollution. They also provide barriers against damaging
storm waves and floods.
Estuaries also have economic, recreational, and aesthetic value. People love water
sports and visit estuaries to boat, fish, swim, and just enjoy their beauty. As a result, the
economy of many coastal areas is based primarily on the natural beauty and bounty of
their estuaries. Estuaries often have ports serving shipping, transportation, and industry.
Healthy estuaries support profitable, commercial fisheries. In fact, almost 31 percent of
the Gross National Product (GNP) is produced in coastal counties. This relationship
between plants, animals, and humans makes up an estuary's ecosystem. When its
components are in balance, plant and animal life flourishes.
Humans have long been attracted to estuaries. Indian middens of shellfish
and fish bones are reminders of how ancient cultures lived. Since Colonial times we have
used estuaries and their connecting network of rivers for transporting agricultural goods
for manufacturing and trade. Not only do commercially important fish and shellfish
spawn, nurse, or feed in estuaries, estuaries also feed our hearts and minds. Scientists and
even poets and painters are inspired by the beauty and diversity found in an estuary.
Human activity also seriously threatens the vulnerable ecosystems found in the
estuaries. Long considered to be wastelands, estuaries have had their channels dredged,
marshes and tidal flats filled, waters polluted, and shorelines reconstructed to
accommodate our housing, transportation, and agriculture needs. As our population
grows and the demands imposed on our natural resources increase, so too does the
importance of protecting these resources for their natural and aesthetic values.
In recognition of these threats, Congress, in 1987, established the National Estuary
Program (NEP) as part of the Clean Water Act. The NEP's mission is to protect and
restore the health of estuaries while supporting economic and recreational activities. To
achieve this, the Environmental Protection Agency (EPA) helps create local NEPs by
developing partnerships between government agencies that oversee estuarine resources
and the people who depend on the estuaries for their livelihood and quality of life. These
groups plan and implement programs according to the needs of their own areas. Local
NEPs are demonstrating practical and innovative ways to revitalize and protect their
estuaries. The benefit of this program is that it brings communities together to decide the
future of their own estuaries.
One specific estuary is the San Francisco Estuary. Human activities in the 1600
square mile Bay/Delta watershed region have drastically altered natural habitats and
impaired the functions of the estuary's ecosystem. Poor cattle grazing practices contribute
to soil erosion and water quality problems. In model public or private partnership, this
NEP is assisting a private rancher in developing a grazing management strategy for a 500
acre parcel of public land within Wildcat Creek Regional Park. Strategies already being
implemented include building barriers to prevent livestock from trampling sensitive
habitats, installing pens to improve livestock management, and selecting cattle grazing
periods to retard the growth of alien and nuisance plants. These measures encourage the
regrowth of native bunchgrasses and forbs that provide not only better habitat for wild
life, but also more desirable forage for the cattle. In addition, soil erosion and pollutant
loading should decrease.
Another interesting and problematic estuary is New York-New Jersey Harbor
Estuary. Trash and other floatable marine debris washing up on area beaches had been a
chronic problem for the New York-New Jersey Harbor Estuary, but unusual episodes in
1987 and 1988 shocked the public and closed many beaches. The New York-New Jersey
Harbor NEP developed a short-term plan using helicopters and vessels for surveillance
and capture of the floatable debris. A long-term plan to address the floatables problem was
subsequently developed. This included the purchase of additional skimmer vessels to
collect debris, a pollution decrease strategy, and an Operation Clean Shores program in
New Jersey that has already removed 10,000 tons of debris.
There are many estuaries in the United States that are in the NEP. There are also
small estuaries, such as the Mississippi and Alabama estuaries. The National
Estuary Program's basic purpose is to bring new life to present-day
estuaries.
f:\12000 essays\sciences (985)\Enviromental\Evolution part 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Human Evolution, the biological and cultural development of the species Homo sapiens, or human beings. A large number of fossil bones and teeth have been found at various places throughout Africa, Europe, and Asia. Tools of stone, bone, and wood, as well as fire hearths, campsites, and burials, also have been discovered and excavated. As a result of these discoveries, a picture of human evolution during the past 4 to 5 million years has emerged.
Human Physical Traits
Humans are classified in the mammalian order Primates; within this order, humans, along with our extinct close ancestors, and our nearest living relatives, the African apes, are sometimes placed together in the family Hominidae because of genetic similarities, although classification systems more commonly still place great apes in a separate family, Pongidae. If the single grouping, Hominidae, is used, the separate human line in the hominid family is distinguished by being placed in a subfamily, Homininae, whose members are then called hominines-the practice that is followed in this article. An examination of the fossil record of the hominines reveals several biological and behavioral trends characteristic of the hominine subfamily.
Bipedalism
Two-legged walking, or bipedalism, seems to be one of the earliest of the major hominine characteristics to have evolved. This form of locomotion led to a number of skeletal modifications in the lower spinal column, pelvis, and legs. Because these changes can be documented in fossil bone, bipedalism usually is seen as the defining trait of the subfamily Homininae.
Brain Size and Body Size
Much of the human ability to make and use tools and other objects stems from the large size and complexity of the human brain. Most modern humans have a braincase volume of between 1300 and 1500 cc (between 79.3 and 91.5 cu in). In the course of human evolution the size of the brain has more than tripled. The increase in brain size may be related to changes in hominine behavior. Over time, stone tools and other artifacts became increasingly numerous and sophisticated. Archaeological sites, too, show more intense occupation in later phases of human biological history.
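The braincase volumes in this essay are quoted in both cubic centimetres and cubic inches. The small Python sketch below shows the conversion behind those parenthetical figures, using the standard factor of 16.387 cubic centimetres per cubic inch.

    # A minimal sketch of the unit conversion behind the braincase figures quoted
    # in this essay (1 cubic inch = 16.387 cubic centimetres).

    CC_PER_CUBIC_INCH = 16.387

    def cc_to_cubic_inches(volume_cc):
        """Convert a volume in cubic centimetres to cubic inches."""
        return volume_cc / CC_PER_CUBIC_INCH

    for cc in (400, 500, 750, 800, 1100, 1300, 1500):
        print(f"{cc:5d} cc = {cc_to_cubic_inches(cc):5.1f} cu in")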
In addition, the geographic areas occupied by our ancestors expanded during the course of human evolution. Earliest known from eastern and southern Africa, they began to move into the tropical and subtropical areas of Eurasia sometime after a million years ago, and into the temperate parts of these continents about 500,000 years ago. Much later (perhaps 50,000 years ago) hominines were able to cross the water barrier into Australia. Only after the appearance of modern humans did people move into the New World, some 30,000 years ago. It is likely that the increase in human brain size took place as part of a complex interrelationship that included the elaboration of tool use and toolmaking, as well as other learned skills, which permitted our ancestors to be increasingly able to live in a variety of environments.
The earliest hominine fossils show evidence of marked differences in body size, which may reflect a pattern of sexual dimorphism in our early ancestors. The bones suggest that females may have been 0.9 to 1.2 m (3 to 4 ft) in height and about 27 to 32 kg (about 60 to 70 lb) in weight, while males may have been somewhat more than 1.5 m (about 5 ft) tall, weighing about 68 kg (about 150 lb). The reasons for this body size difference are disputed, but may be related to specialized patterns of behavior in early hominine social groups. This extreme dimorphism appears to disappear gradually sometime after a million years ago.
Face and Teeth
The third major trend in hominine development is the gradual decrease in the size of the face and teeth. All the great apes are equipped with large, tusklike canine teeth that project well beyond the level of the other teeth. The earliest hominine remains possess canines that project slightly, but those of all later hominines show a marked reduction in size. Also, the chewing teeth-premolars and molars-have decreased in size over time. Associated with these changes is a gradual reduction in the size of the face and jaws. In early hominines, the face was large and positioned in front of the braincase. As the teeth became smaller and the brain expanded, the face became smaller and its position changed; thus, the relatively small face of modern humans is located below, rather than in front of, the large, expanded braincase.
Human Origins
The fossil evidence for immediate ancestors of modern humans is divided into the genera Australopithecus and Homo, and begins about 5 million years ago. The nature of the hominine evolutionary tree before that is uncertain.
Between 7 and 20 million years ago, primitive apelike animals were widely distributed on the African and, later, on the Eurasian continents. Although many fossil bones and teeth have been found, the way of life of these creatures, and their evolutionary relationships to the living apes and humans, remain matters of active debate among scientists. One of these fossil apes, known as Sivapithecus, appears to share many distinguishing features with the living Asian great ape, the orangutan, whose direct ancestor it may well be. None of these fossils, however, offers convincing evidence of being on the evolutionary line leading to the hominid family generally or to the human subfamily in particular.
Comparisons of blood proteins and the DNA of the African great apes with that of humans indicates that the line leading to modern people did not split off from that of chimpanzees and gorillas until comparatively late in evolution. Based on these comparisons, many scientists believe a reasonable time for this evolutionary split is 6 to 8 million years ago. It is, therefore, quite possible that the known hominine fossil record, which begins about 5 million years ago, extends back virtually to the beginnings of the human line. Future fossil discoveries may permit a more precise placement of the time when the direct ancestors of the modern African ape split off from those leading to modern people and human evolution can be said to begin.
Australopithecus
The fossil evidence for human evolution begins with Australopithecus. Fossils of this genus have been discovered in a number of sites in eastern and southern Africa. Dating from more than 4 million years ago (fragmentary remains are tentatively identified from about 5 million years ago), the genus seems to have become extinct about 1.5 million years ago. All the australopithecines were efficiently bipedal and therefore indisputable hominines. In details of their teeth, jaws, and brain size, however, they differ sufficiently among themselves to warrant division into four species: A. afarensis, A. africanus, A. robustus, and A. boisei.
The earliest australopithecine is A. afarensis, which lived in eastern Africa between 3 and 4 million years ago. Found in the Afar region of Ethiopia and in Tanzania, A. afarensis had a brain size a little larger than those of chimpanzees (about 400 to 500 cc/about 24.4 to 30.5 cu in). Some individuals possessed canine teeth somewhat more projecting than those of later hominines. No tools of any kind have been found with A. afarensis fossils.
Between about 2.5 and 3 million years ago, A. afarensis apparently evolved into a later australopithecine, A. africanus. Known primarily from sites in southern Africa, A. africanus possessed a brain similar to that of its predecessor. However, although the size of the chewing teeth remained large, the canines, instead of projecting, grew only to the level of the other teeth. As with A. afarensis, no stone tools have been found in association with A. africanus fossils.
By about 2.6 million years ago, the fossil evidence reveals the presence of at least two, and perhaps as many as four, separate species of hominines. An evolutionary split seems to have occurred in the hominine line, with one segment evolving toward the genus Homo, and finally to modern humans, and the others developing into australopithecine species that eventually became extinct. The latter include the robust australopithecines, A. robustus, limited to southern Africa, and A. boisei, found only in eastern Africa. The robust australopithecines represent a specialized adaptation because their principal difference from other australopithecines lies in the large size of their chewing teeth, jaws, and jaw muscles. The robust australopithecines became extinct about 1.5 million years ago.
The Genus Homo
Although scientists do not agree, many believe that after the evolutionary split that led to the robust australopithecines, A. africanus evolved into the genus Homo. If so, this evolutionary transition occurred between 1.5 and 2 million years ago. Fossils dating from this period display a curious mixture of traits. Some possess relatively large brains-several almost 800 cc (about 49 cu in)-and large, australopithecine-sized teeth. Others have small, Homo-sized teeth but also small, australopithecine-sized brains. A number of fossil skulls and jaws from this period, found in Tanzania and Kenya in eastern Africa, have been placed in the category H. habilis, meaning "handy man," because some of the fossils were found associated with stone tools. H. habilis possessed many traits that link it both with the earlier australopithecines and with later members of the genus Homo. It seems likely that this species represents the evolutionary transition between the australopithecines and later hominines.
The earliest evidence of stone tools comes from sites in Africa dated to about 2.5 million years ago. These tools have not been found in association with a particular hominine species. By 1.5 to 2 million years ago, sites in various parts of eastern Africa include not only many stone tools, but also animal bones with scratch marks that experiments have shown could only be left by humanlike cutting actions. These remains constitute evidence that by this time early hominines were eating meat, but whether this food was obtained by hunting or by scavenging is not known. Also unknown at present is how much of their diet came from gathered vegetable foods and insects (dietary items that do not preserve well), and how much came from animal tissue. It is also not known whether these sites represent activities by members of the line leading to Homo, or if the robust australopithecines were also making tools and eating meat.
Fossil evidence of a large-brained, small-toothed form, known earliest from north Kenya and dating from 1.5 to 1.6 million years ago, has been placed in the species H. erectus. During the first part of its time span, H. erectus, like the earlier hominines, was limited to southern and eastern Africa. Later, between 700,000 and a million years ago, H. erectus expanded into the tropical areas of the Old World and finally, at the close of its evolution, into the temperate parts of Asia. A number of archaeological sites dating from the time of H. erectus reveal a greater sophistication in toolmaking than was found at the earlier sites. At the cave site of Peking man in north China, there is evidence that fire was used; the animal fossils that have been found are sometimes of large mammals such as elephants. These data suggest that hominine behavior was becoming more complex and efficient.
Throughout the time of H. erectus the major trends in human evolution continued. The brain sizes of early H. erectus fossils are not much larger than those of previous hominines, ranging from 750 to 800 cc (45.8 to 48.8 cu in). Later H. erectus skulls possess brain sizes in the range of 1100 to 1300 cc (67.1 to 79.3 cu in), within the size variation of Homo sapiens.
Early Homo sapiens
Between 200,000 and 300,000 years ago, H. erectus evolved into H. sapiens. Because of the gradual nature of human evolution at this time, it is difficult to identify precisely when this evolutionary transition occurred, and certain fossils from this period are classified as late H. erectus by some scientists and as early H. sapiens by others.
Although placed in the same genus and species, these early H. sapiens are not identical in appearance with modern humans. New fossil evidence suggests that modern man, H. sapiens sapiens, first appeared more than 90,000 years ago. There is some disagreement among scientists on whether the hominine fossil record shows a continuous evolutionary development from the first appearance of H. sapiens to modern humans. This disagreement has especially focused on the place of Neandertals (or Neanderthals), often classified as H. sapiens neanderthalensis, in the chain of human evolution. The Neandertals (named for the Neander Valley in Germany, where one of the earliest skulls was found) occupied parts of Europe and the Middle East from 100,000 years ago until about 35,000 to 40,000 years ago, when they disappeared from the fossil record. Fossils of additional varieties of early H. sapiens have been discovered in other parts of the Old World.
The dispute over the Neandertals also involves the question of the evolutionary origins of modern human populations, or races. Although a precise definition of the term race is not possible (because modern humans show continuous variation from one geographic area to another), widely separate human populations are marked by a number of physical differences. The majority of these differences represent adaptations to local environmental conditions, a process that some scientists believe began with the spread of H. erectus to all parts of the Old World sometime after a million years ago. In their view, human development since H. erectus has been one continuous, in-position evolution; that is, local populations have remained, changing in appearance over time. The Neandertals and other early H. sapiens are seen as descending from H. erectus and are ancestral to modern humans.
Other scientists view racial differentiation as a relatively recent phenomenon. In their opinion, the features of the Neandertals-a low, sloping forehead, large brow ridge, and a large face without a chin-are too primitive for them to be considered the ancestors of modern humans. They place the Neandertals on a side branch of the human evolutionary tree that became extinct. According to this theory, the origins of modern humans can be found in southern Africa or the Middle East. Evolving perhaps 90,000 to 200,000 years ago, these humans then spread to all parts of the world, supplanting the local, earlier H. sapiens populations. In addition to some fragmentary fossil finds from southern Africa, support for this theory comes from comparisons of mitochondrial DNA, a DNA form inherited only from the mother, taken from women representing a worldwide distribution of ancestors. These studies suggest that humans derived from a single ancestral population in sub-Saharan Africa or southeastern Asia. Because of the tracing through the maternal line, this work has come to be called the "Eve" hypothesis; its results are not accepted by most anthropologists, who consider the human race to be much older. See also RACES, CLASSIFICATION OF.
Whatever the outcome of this scientific disagreement, the evidence shows that early H. sapiens groups were highly efficient at exploiting the sometimes harsh climates of Ice Age Europe. Further, for the first time in human evolution, hominines began to bury their dead deliberately, the bodies sometimes being accompanied by stone tools, by animal bones, and even by flowers.
Modern Humans
Although the evolutionary appearance of biologically modern peoples did not dramatically change the basic pattern of adaptation that had characterized the earlier stages of human history, some innovations did take place. In addition to the first appearance of the great cave art of France and Spain (see CAVE DWELLERS), some anthropologists have argued that it was during this time that human language originated, a development that would have had profound implications for all aspects of human activity. About 10,000 years ago, one of the most important events in human history took place-plants were domesticated, and soon after, animals as well. This agricultural revolution set the stage for the events in human history that eventually led to civilization.
Modern understanding of human evolution rests on known fossils, but the picture is far from complete. Only future fossil discoveries will enable scientists to fill many of the blanks in the present picture of human evolution. Employing sophisticated technological devices as well as the accumulated knowledge of the patterns of geological deposition, anthropologists are now able to pinpoint the most promising locations for fossil hunting more accurately. In the years ahead this will result in an enormous increase in the understanding of human biological history.
Daniel Mokari
f:\12000 essays\sciences (985)\Enviromental\Faster Dissolved Oxygen Test Kit.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Purpose
The purpose of my project is to determine if there is any significant
difference in dissolved oxygen (DO) levels as measured by the traditional
HACH(r) method or the newly developed CHEMets(r) test kit under typical field
conditions.
Hypothesis
My hypothesis is that there is no significant difference in dissolved oxygen
(DO) levels as measured by the traditional HACH(r) method or the newly
developed CHEMets(r) test kit under typical field conditions.
Review of Literature
"Ours is a watery world, and we, its dominant species, are walking sacks
of sea water. The presence of large amounts of liquid water on Earth make our
planet unique in the solar system." (Hill, 1992 p. 477)
People have recently become more concerned with preserving our earth
for future generations. Even the government pitches in to help save our earth by
enacting laws to help preserve our natural resources. There is local evidence
that improved sewage treatment means improvement in water quality.
Monitoring on a national level showed that large investments in point-source
pollution control have yielded no statistically significant pattern of improvement
in dissolved oxygen levels in water in the last 15 years. It may be that we are
only keeping up with the amount of pollution we are producing. (Knopman,
1993)
The early biosphere was not pleasant for life because the atmosphere
had low levels of oxygen. Photosynthetic bacteria consumed carbon dioxide and
produced simple sugars and oxygen which created the oxygen abundant
atmosphere in which more advanced life forms could develop. (Brown, 1994)
The mystery of how Earth's oxygen levels rose is very complex. Scientists don't
agree when or how the oxygen on earth got here, but we know we could not live
without it. (Pendick, 1993) Oxygen is crucial for humans to survive. Dissolved
oxygen is also crucial for most fish and aquatic organisms to survive. Dissolved
oxygen is for them what atmospheric oxygen is for humans. If humans have no
oxygen to breathe, they die. The same goes for fish. However, fish get their
oxygen from the water, and humans get theirs from the atmosphere. (Mitchell
and Stapp, 1992)
Different aquatic organisms need different levels of dissolved oxygen to
thrive. For example, pike and trout need medium to high levels of dissolved
oxygen. Carp and catfish are the exact opposite, needing only low levels of
dissolved oxygen. (Mitchell and Stapp, 1992 ) Low levels of dissolved oxygen
inhibit the growth of Asiatic clams. ( Belanger, 1991) In the American River, too
much dissolved oxygen resulted in mortality of salmonid fishes. (Colt, Orwicz
and Brooks, 1991) Brood catfish, or catfish raised on fish farms, are especially
susceptible to low dissolved oxygen. Since catfish are a major food source for
many people, their production is important. (Avault, 1993)
There are two main sources of dissolved oxygen: (1) the atmosphere -
waves on lakes, rapidly moving rivers, and tumbling rivers all act to mix oxygen
from the atmosphere with water; (2) aquatic plants - algae and benthic plants
(bottom-rooted plants) deliver oxygen into the water through photosynthesis.
The solubility of all gases, including oxygen, is inversely proportional to
temperature, which means that the solubility of gases goes down as the
temperature goes up, and vice versa. The concentration of dissolved oxygen
also varies directly with atmospheric pressure and atmospheric oxygen
concentration. When the atmospheric pressure or atmospheric oxygen
concentration goes up, the level of dissolved oxygen goes up. (Roskowski &
Marshall, 1993) D.H. Farmer studied the fluctuation of dissolved oxygen content
in a body of water before, during, and after a storm. During the storm, the
increased wave activity increased the dissolved oxygen content. (Farmer and
McNeil, 1993)
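As a rough illustration of the inverse relation between temperature and solubility described above, the short Python sketch below prints approximate sea-level saturation values for fresh water; the numbers are round figures from standard solubility tables, not from the sources cited here.

# Approximate freshwater dissolved-oxygen saturation at sea level
# (illustrative values from standard solubility tables), showing that
# solubility falls as temperature rises.
saturation_mg_per_l = {0: 14.6, 10: 11.3, 20: 9.1, 30: 7.6}   # temp (C) -> DO (mg/L)

for temp_c in sorted(saturation_mg_per_l):
    print(f"{temp_c:>2} C  ->  about {saturation_mg_per_l[temp_c]} mg/L at saturation")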
Turbulent flow in streams has caused most of the biocenogenesis (the
environmentally determined characteristics of organisms) to be represented
by attached or benthic organisms. For this reason, a method of evaluating the
role of benthic organisms in the total dissolved oxygen balance was created.
Benthic plants play an important role in providing dissolved oxygen. These
plants release oxygen through photosynthesis. Benthic plants are plants such as
cattail, bulrush, arrowhead, water lily, pond weeds, and muskgrass. (Nebel,
1990)
Many things can change the level of dissolved oxygen in a body of water.
Dissolved oxygen levels rise from morning through afternoon as a result of
photosynthesis. Photosynthesis stops at night, but animals and plants continue
to respire and consume oxygen. Water temperature and volume of water also
affect dissolved oxygen levels. Dry weather causes dissolved oxygen levels to
decrease and wet weather causes dissolved oxygen levels to increase. (Mitchell
and Stapp, 1990)
The breakdown of organic matter by bacteria decreases dissolved oxygen
in the water and yet enriches the water with plant nutrients. A reasonable
amount of breakdown is good, so the water won't become oligotrophic or nutrient
poor. But too much organic breakdown will decrease dissolved oxygen and
leave an excess of nutrients. Eutrophication is a term used to describe a body of
water in which the organic nutrients reduce the level of dissolved oxygen to such
a point that plant life is favored over animal life. Algae blooms cause excessive
organic material also. When algae die, they become a part of the organic
wastes. (Nebel, 1990) Most organic material can be broken down by
microorganisms. Microbial biodegradation can be either aerobic or
anaerobic. Aerobic oxidation results in the further depletion of dissolved
oxygen. When dissolved oxygen in water is decreased by excessive organic
matter and ongoing degradation, the process then shifts to an anaerobic
process. Anaerobic bacteria actually flourish in the absence of oxygen. Animal
life can be permanently suppressed in this environment. (Hill, 1992)
When dissolved oxygen decreases, major shifts occur in the kinds of
aquatic organisms found in a body of water. The insects that need high levels
of dissolved oxygen are replaced by anaerobic organisms. Mayfly nymphs,
stonefly nymphs, caddisfly nymphs, and beetle larvae (all need high levels of
dissolved oxygen) are replaced by pollution tolerant worms, fly larvae, nuisance
algae, and other anaerobic organisms. (Mitchell and Stapp, 1992)
So what is a good level of dissolved oxygen? Under 4 ppm is not
good. But what about too much dissolved oxygen? ( Hidaka, Shimazu,
Kumanda, Takeda and Aramaki, 1991) "A nonlinear relationship was found
between oxygen concentration and median lethal concentrations values, with
significantly increased toxicity at the middle oxygen concentration. It was
concluded that dissolved oxygen concentration was an important environmental
factor in the assessment of photo-induced toxicity of anthracene to fish."
(McCloskey and Oris, 1991 p.145)
We have present day examples of the effects of pollution on dissolved
oxygen, which then in turn affects the ecosystem. Following are two clear
examples of the devastating effects of neglect of our ecosystem.
(1) The Chesapeake Bay. Chesapeake Bay is the largest estuary in
North America. Before the 1970s the bay was also the most productive, yielding
millions of pounds of fish and shellfish and a home for a variety of waterfowl.
Most of the food chains started with the sea grasses. Over half a million acres of
this underwater "grass" was present only a few feet beneath the surface. The
sea grass provided food, a place for spawning, shelter for young fish, and
dissolved oxygen for the fish to breathe. In the early 1970s, the sea grasses
started to die. By 1980 the grasses were gone, except in the lower bay. All
animals that had depended on the grasses died accordingly. Even worse, the
bottom water did not have enough dissolved oxygen and caused large numbers
of lobsters, oysters, and fish to be suffocated. The water of the Chesapeake Bay
was very murky and cloudy. The cloudiness persisted over extended periods of
time. The reduced light was decreasing photosynthesis and the sea grass
began to die as a result. Without the photosynthesis of the sea grass, dissolved
oxygen was no longer being adequately supplied. In addition, bacterial
decomposition was consuming dissolved oxygen, thus making it unavailable to
fish and shellfish. Chesapeake Bay has been overcome by the process called
eutrophication. This is not unusual. In the past 40 years, many other ponds and
small lakes have also suffered this fate. (Nebel, 1990)
(2) The Black Sea. The polluting of the Black Sea is causing the Black
Sea to die. Over 300 rivers dump into the Black Sea a deadly mix of nitrates,
phosphorous, and oil. A local joke in Varna, Bulgaria, tells suicide cases not to
worry about drowning, since the sea's poisons will kill them first. The worst
offenders are the Danube, Dniester, and Dnieper Rivers. Waste from the
Danube River has increased at least tenfold over the past decade. Johann
Strauss Jr.'s "Blue Danube" would hardly be recognizable to the composer. It's
never blue now; instead it's always pea-green or black. When the sun
hits the puddles of oil, rainbows form on the ripples. The biggest problem is not
the poisons but the nutrients - the phosphorous and nitrogen. The entry of more
nutrients into the sea means more harmful surface algae to keep sunlight from
the seabeds, killing them and halting the production of dissolved oxygen.
(Pomfret, 1994)
Other rivers are polluted also, such as the River Borovniscica
(Yugoslavia), which is polluted with organic substances and the River Bistra,
which is polluted with inorganic substances. There is also the death of the Cuyahoga
River, which burst into flames on June 22, 1969. (Gordon and Steele, 1993)
Dissolved oxygen levels can vary even within the same stream, river, or
body of water. Outside the main current of a stream, dissolved oxygen levels
can be low. This point was graphically illustrated by biologist E.P. Pister as he
attempted to rescue an endangered species of pupfish. In his hurry to collect
more pupfish, he had placed the cages containing previous captures in eddies
away from the main current. By the time he noticed his error, a number of these
fragile creatures were already dead. (Pister, 1993)
Sometimes it's not lack of dissolved oxygen that kills the fish. Rather it
can be too much dissolved oxygen, as in the case of the American River.
Dissolved oxygen levels were considerably higher in the American River than
those reported to cause death in hatchery salmonids due to gas bubble
disease. The source of this gas bubble disease and supersaturation in the river
was air entrainment, solar heating, and photosynthesis. The impact of the
high dissolved oxygen levels in the hatchery water supplies was decreased with
the installation of degassing structures to remove excessive dissolved oxygen.
(Colt, Orwicz and Brooks, 1991)
As people try to solve disasters like those cited above, they need to
determine the source of the problem before they work on a solution. Sometimes
even while people are trying to clean up, there is no statistically discernible
pattern of increases in the water's dissolved oxygen content. Many companies
offer test kits to measure water quality. Some tests take a long time to run, but
people are always looking for a quicker way to run the tests, especially under
field conditions where response time is critical. Some companies have come up
with a quicker way to run tests, but are they as accurate as we'd like to believe?
One type of kit for measuring dissolved oxygen is put out by
CHEMetrics. The CHEMets ampoules contain a solution of indigo carmine in
reduced (near colorless) form. When you snap the tip, the ampoule fills with your
water sample and any dissolved oxygen in that sample will cause the reagent to
oxidize to a blue color. Then the ampoule is compared with the standard color
bars. A noticeable problem is that there is definitely a change in the shade of
color from 0-4 ppm, but in the higher ranges it is hard to tell any difference. For
example, 5-10 ppm seem to be the same shade of blue, and there is not even a
specific color bar for 9 ppm. There is an 8 ppm and then a 10 ppm. If you say
that the shade of blue is darker than the 8 ppm color bar but less than the 10
ppm color bar, then you could declare it 9 ppm. There is no way of saying if
something is 7.5 ppm because there is no shade of blue halfway between 7 ppm
and 8 ppm.
The HACH method takes longer, but it is easier to determine the amount
of dissolved oxygen in the water. The way you determine the amount of
dissolved oxygen in the water is by how many drops of Sodium Thiosulfate
Standard Solution you add until the sample changes from yellow to colorless.
Each drop equals one ppm of dissolved oxygen.
We need to think about accuracy, but what about safety? The HACH
method uses chemicals that are labeled with "Keep Out Of Reach Of Children.
For Laboratory Use Only. Causes Eye Burns. Do Not Ingest. May Cause Skin
Irritation." along with direction what to do if you inhale, ingest, or come into
contact with the chemical. The directions indicate the need to be very cautious
with the chemical, particularly because it isn't safe without proper use. By
contrast the CHEMets kit has no warnings like this. The obvious hazard is that
you would squeeze the glass ampoule too hard and it would break.
There are many things to take into consideration when you are selecting a
test kit, not just which one is faster: quality, time, safety, expense,
accuracy, and much more.
Materials
HACH(r) TESTING KIT
-Dissolved Oxygen 1 Reagent Powder Pillows
-Dissolved Oxygen 2 Reagent Powder Pillows
-Dissolved Oxygen 3 Reagent Powder Pillows
-Sodium Thiosulfate, Stabilized, Standard Solution, 0.0109N
-Bottle, Dissolved Oxygen, glass stoppered
-Bottle, square, mixing
-Clippers
-Stopper, for dissolved oxygen bottle
-Tube, measuring 5.83 mL
CHEMets(r) TESTING KIT
-Self filling ampoules for colorimetric analysis
-Chart with color bars for comparison to self filling ampoules
TABLE
-Covered with newspaper and/or paper towels
WATER
-Kankakee River
-Melted snow
-Tap water
-Tap water stirred for one minute
-Roof runoff
-Fish aquarium
EQUIPMENT TO RECORD RESULTS
-Paper
-Pencil
-Clipboard
SAFETY EQUIPMENT
-Rubber gloves
-Goggles
-Rubber aprons
Procedure
HACH TESTING KIT
1) Fill Dissolved Oxygen bottle (round bottle with glass stopper) with the water to
be tested by allowing water to overflow the bottle for 2 or 3 minutes. To avoid
trapping air bubbles in the bottle, incline the bottle slightly and insert the stopper
with a quick thrust. This will force the air bubbles out. If bubbles become
trapped in the bottle in Steps 2 or 4 the sample should be discarded before
repeating the test.
2) Use the clippers to open one Dissolved Oxygen 1 Reagent Powder Pillow and
one Dissolved Oxygen 2 Reagent Powder Pillow. Add the contents of each
pillow to the bottle. Stopper the bottle carefully to exclude air bubbles. Grip the
bottle and stopper firmly, shake vigorously to mix. A flocculent (floc) precipitate
will be formed. If oxygen is present in the sample, the precipitate will be
brownish orange in color. A small amount of powdered reagent may remain
stuck to the bottom of the bottle. This will not affect the test results.
3) Allow the sample to stand until the floc has settled halfway in the bottle,
leaving the upper half of the sample clear. Shake the bottle again. Again let it
stand until the upper half of the sample is clear. Note the floc will not settle in
samples with high concentrations of chloride, such as sea water. No
interference with the test results will occur as long as the sample is allowed to
stand for 4 or 5 minutes.
4) Use the clippers to open one Dissolved Oxygen 3 Reagent Powder Pillow.
Remove the stopper from the bottle and add the contents of the pillow. Carefully
restopper the bottle and shake to mix. The floc will dissolve and a yellow color
will develop if oxygen is present.
5) Fill the plastic measuring tube level full of the sample prepared in Steps 1
through 4. Pour the sample into the square mixing bottle.
6) Add the Sodium Thiosulfate Standard Solution drop by drop to the mixing
bottle, swirling to mix after each drop. Hold the dropper vertically above the
bottle and count each drop as it is added. Continue to add drops until the
sample changes from yellow to colorless.
7) Each drop used to bring about the color change in Step 6 is equal to 1 mg/L of
dissolved oxygen.
CHEMets(r) TESTING KIT
1) Immerse the snapper into the sample.
2) Place a CHEMet ampoule, tapered end first into the snapper.
3) Press down on the ampoule to snap the tip.
4) Remove the ampoule from the snapper, and invert it several times, allowing
the bubble to travel from end to end to mix the contents.
5) Wait 2 minutes for a full color development.
6) Use the color chart (inside box) to determine the dissolved oxygen content by
matching the filled CHEMet ampoule with the color bars on the chart. The chart
should be illuminated from above by a strong white light. Be sure to place the
ampoule on both sides of the color bar before concluding that it gives the best
match.
Results
Location                          CHEMets (ppm)  HACH (ppm)  Temp (°C)  Difference (Chem - Hach)
Kankakee River (near our dock) 10 12 2.2 -2
Kankakee River (near our dock) 10 11 3.3 -1
Kankakee River (near our dock) 9 13 4.4 -4
Kankakee River (near our dock) 10 12 5.0 -2
Roof Runoff 4 5 5.6 -1
Kankakee River (near our dock) 10 11 5.6 -1
Tap Water 7 7 18.9 0
Snow (melted) 8 8 21.1 0
Tap Water (stirred for 1 minute) 7 8 21.1 -1
Fish Aquarium 7 7 23.3 0
Fish Aquarium 7 8 23.3 -1
Fish Aquarium 7 8 23.3 -1
Tap Water 3 1 23.3 2
Graphs
Conclusions
My conclusion is that there is a significant difference in dissolved oxygen
(DO) levels as measured by the traditional HACH(r) method or the newly
developed CHEMets(r) test kit under typical field conditions. CHEMets(r) test kits
are very hard to read, especially in the higher ranges. CHEMets(r) does not
compare well to HACH(r) in areas where dissolved oxygen is higher than 8 ppm,
and it does not measure above 10 ppm. CHEMets(r) would be fine for
temperatures of about 15°C or warmer. The HACH(r) test kit is the method of
choice for field analysis because it is more reliable at all levels in providing
accurate measures of dissolved oxygen. The HACH(r) method requires more
caution in use, but it provides significantly more accurate measures of
dissolved oxygen.
Statistics
Wilcoxon Matched Pairs Signed Rank Test
Data gathered in the course of performing analysis is subject to certain
random fluctuations. These fluctuations may vary in size and in many cases
make it difficult to decide whether the observed differences are due to real
differences in the sample or to simple chance. The discipline of statistics allows
one to assess the probability (the odds) that measured differences arise from
chance alone. Once one has a feel for the odds that the differences arise from
chance, one can decide to reject or conditionally accept a hypothesis based on
that data.
The statistical test being used for this study (Wilcoxon - Matched Pairs
Signed Ranks) was chosen for its computational ease and power. A
nonparametric test was chosen because there was a question about the level of
measurement (ordinal or interval) and whether or not the assumptions for a
parametric test could be met.
Procedure to apply the Wilcoxon - Matched Pairs Signed Ranks test (see
table; a computational check follows the worked example below).
1. Pair all data from each sample according to date.
2. Take the difference between each pair of measurements.
3. Rank the size of each difference paying no heed to sign (drop zero
differences - split ranks on ties)
4. Compute the sum of the rank with the less frequent sign (T).
5. Set alpha for 0.05 with a two tailed test.
6. Look up the value for T in an appropriate statistical table. (table G page 254
of Nonparametric Statistics by Sidney Siegel, 1956, McGraw Hill)
7. Reject the null hypothesis (Ho) if T is equal to or less than the tabled value.
CHEMets  HACH  Temp (°C)  Chem - Hach  Rank
7 7 23.3 0
7 7 18.9 0
8 8 21.1 0
4 5 5.6 -1 3.5
7 8 21.1 -1 3.5
7 8 23.3 -1 3.5
7 8 23.3 -1 3.5
10 11 3.3 -1 3.5
10 11 5.6 -1 3.5
3 1 23.3 2 8
10 12 5.0 -2 8
10 12 2.2 -2 8
9 13 4.4 -4 10
N= 10 (number of non zero differences)
T= 8 (sum of ranks with less frequent sign)
a = 0.05 (significance level)
Ho is rejected.
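As a check on the hand calculation above, the short Python sketch below applies the same test to the paired readings using the wilcoxon function from scipy (an assumption; the original analysis was done by hand with Siegel's printed tables). Zero differences are dropped automatically, and the statistic comes out to T = 8, matching the worked example.

# Wilcoxon matched-pairs signed-rank test on the paired CHEMets/HACH readings.
# scipy is assumed here; the study itself used hand ranking and Siegel's table.
from scipy.stats import wilcoxon

chemets = [7, 7, 8, 4, 7, 7, 7, 10, 10, 3, 10, 10, 9]
hach    = [7, 7, 8, 5, 8, 8, 8, 11, 11, 1, 12, 12, 13]

# Zero differences are discarded by default, as in step 3 of the procedure.
stat, p = wilcoxon(chemets, hach)
print(f"T = {stat}, two-tailed p = {p:.3f}")   # reject Ho at alpha = 0.05 if p < 0.05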
Literature Cited
APHA (1990). Standard methods for the examination of water and wastewater.
(16 ed.) New York: APHA, Inc.
Avault, J. (1993, Jul/Aug). Take care of those brood cats. Aquaculture, pp.73
Belanger, S. (1991, July). The effect of dissolved oxygen, sediment, and sewage
treatment plant discharges upon growth, survival, and density of Asiatic clams.
Hydrobiologia, pp. 113-126.
Brown, L. (1994). State of the world. London: W.W. Norton & Company. pp.42.
Colt, J. & Orwicz K. & Brooks, D. (1991, Winter). Gas supersaturation in the
American River. California Fish and Game. pp.41-50.
Hidaka, Shimazu, Kumanda, Takeda, Aramaki. (1991). Studies on the
occurrence of hypoxic water mass in surface mixed layer of inner area of
Kagoshima Bay. Memoirs of the Faculty of Fisheries: Kagoshima. pp.59-81.
Have, M. (1991). Selected water-quality characteristics in the upper Mississippi
River Basin, Royalton to Hastings, Minnesota. USGS Water-Resources
Investigation. pp. 125.
Hill, J. (1992). Chemistry for changing times. (6th ed.) New York: Macmillan
Publishing Co. pp. 477, 487- 489.
Knopman, D. (1993, Jan/Feb). 20 years of the clean water act. Environment.
pp.16.
McCloskey, J. & Oris, J. (1991, Dec.). Effect of water temperature and dissolved
oxygen concentration on the photo-induced toxicity of anthracene to juvenile
bluegill sunfish. Aquatic Toxicology . pp.145-156.
Nebel, B. (1990). Science: the way the world works. (3rd ed.) New Jersey:
Prentice Hall.
Pomfret, J. (1994, Nov. 25). Rivers deadly to Black Sea. The Daily Journal.
pp. 20.
Roskowski, R. & Marshall, B. (1993, Jul/Aug). Gases in water. Aquaculture, pp.
70-76.
Schopf, J. (1993, May). Fossils show diversity of life. Science News. pp. 276.
Stapp, W. & Mitchell, M. (1992). Field manual for water quality. (6th ed.).
Michigan: Thomson - Shore Inc.
Steele, J. (1993, Oct.). The American environmental policy. American
Heritage. pp.30.
f:\12000 essays\sciences (985)\Enviromental\Fluoridation of Municiple Water Supplies.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fluoride is a mineral that occurs naturally in almost all foods and water supplies. The fluoride ion comes from the element fluorine. Fluorine, the 13th most abundant element in the earth's crust, is never encountered in its free state in nature. It exists only in combination with other elements as a fluoride compound.
Fluoride is effective in preventing and reversing the early signs of tooth decay. Researchers have shown that there are several ways through which fluoride achieves its decay-preventive effects. It makes the tooth structure stronger, so teeth are more resistant to acid attacks. Acid is formed when the bacteria in plaque break down sugars and carbohydrates from the diet. Repeated acid attacks break down the tooth, which causes cavities. Fluoride also acts to repair areas in which acid attacks have already begun. The remineralization effect of fluoride is important because it reverses the early decay process as well as creating a tooth surface that is more resistant to decay.
Community water fluoridation is the adjustment of the amount of the beneficial trace element fluoride found in water to provide for the proper protection of teeth. Fluoridation has been widely utilized in this country since 1945. It does not involve adding anything to the water that is not already there, since virtually all sources of drinking water in the United States contain some fluoride. Fluoridation is a form of nutritional supplementation that is not unlike the addition of vitamins to milk, breads and fruit drinks; iodine to table salt; and both vitamins and minerals to breakfast cereals, grains and pastas.
The protection of fluoridation reaches community members in their homes, at work and at school -- simply by drinking the water. The only requirements for the implementation of fluoridation are the presence of a treatable centralized water supply and approval by appropriate decision makers.
Some people believe that there are effective alternatives to community water fluoridation as a public health measure for the prevention of tooth decay in the United States. The fact of the matter is that while other community-based methods of systemic and topical fluoride delivery (i.e. school-based fluoride mouthwash or tablet programs) have been developed over the five decades that water fluoridation has been practiced, none is as effective as community water fluoridation and none is free from financial constraints or other drawbacks. Alternatives to community water fluoridation remain useful only for populations significantly isolated from public water systems.
Nearly 145 million Americans are currently receiving the benefits of optimally fluoridated water. With the 1995 enactment of Assembly Bill 733 in California, ten states and territories in the United States now mandate fluoridation through legislation. Besides California, these include seven other states (Connecticut, Georgia, Illinois, Minnesota, Nebraska, Ohio and South Dakota), as well as the District of Columbia and Puerto Rico. Three states (South Dakota, Rhode Island and Kentucky), as well as the District of Columbia, have achieved the ultimate success with 100 percent of their treatable community water systems providing the benefits of fluoridation to their citizens.
While safety has been an issue frequently raised by those opposed to fluoridation, scientific data from peer-reviewed clinical research provide overwhelming evidence that the adjustment of fluoride levels in drinking water to the optimal level is undoubtedly safe. Hundreds of studies on fluoride metabolism have tracked the outcomes of ingested fluoride. Ingested fluoride essentially travels three metabolic pathways. It is either excreted by the kidneys, absorbed by the teeth or taken up in the skeleton.
At optimal levels fluoride has never been demonstrated to cause skeletal fluorosis or other bone problems. On the contrary, there is mounting evidence that continued exposure of individuals to low levels of fluoride, as in optimally fluoridated drinking water, results in a decrease in osteoporosis and a decrease in concurrent susceptibility to vertebral fracture. Furthermore, there is no evidence of increased morbidity or mortality from any disorder for those with lifetime exposures to optimally fluoridated drinking water.
Those opposed to water fluoridation claim that exposure to fluoridated water increases an individual's risk of suffering from several forms of cancer. Again, the overwhelming weight of scientific evidence indicates otherwise. Over 50 studies have evaluated the potential relationship of water fluoridation and cancer mortality. None found any credible evidence that exposure to water fluoridation is in any way related to an increased risk of cancer in humans. A number of national and international scientific commissions, after reviewing all of the available scientific literature, also concluded that water fluoridation was safe and that it in no way related to increased risk to humans of any form of cancer. Finally, a 1990 study of fluoridated and fluoride-deficient communities by the U.S. National Cancer Institute revealed no link between exposure of any populations to fluoridation and the incidence of many different types of cancer occurring in a 14-year period.
Mottled enamel or dental fluorosis has been claimed to be an indication of the "toxic effects of fluoridation" by those opposed to fluoridation. Technically, dental fluorosis is a developmental defect of enamel that can occur when a higher than optimal amount of fluoride is ingested at the same time as the stage of tooth development when enamel is being formed. The severity of the fluorosis is directly related to the age of the child at exposure, the type of exposure, the level of exposure, and the duration of exposure.
It is important to note that fluorosis can only occur during the period when teeth are developing. Once teeth have formed, fluorosis can no longer occur. The mildest form of dental fluorosis may appear in about 10 percent of those exposed to optimally fluoridated water. Most mild to moderate fluorosis occurs not from the ingestion of properly fluoridated water, but from the unnecessary and inappropriate prescribing of fluoride supplement tablets or drops for children in fluoridated areas and the inappropriate ingestion of large amounts of fluoride-containing toothpaste by young children not properly supervised during toothbrushing. The presence of dental fluorosis at any aesthetic level is not related to any other adverse conditions in humans, nor is there any evidence to show that dental fluorosis is a precursor to any disease or dysfunction. Mild to moderate dental fluorosis is no more a pathological condition than is having freckles.
There has never been a single valid, peer-reviewed laboratory, clinical or epidemiological study that showed that drinking water with fluoride at optimal levels caused cancer, heart disease, or any of the other multitude of diseases proclaimed by very small groups of antifluoridationists to be caused by fluoridation.
Because fluoride is so effective, those fortunate enough to be provided with fluoridated water can count on an up to 40- to 50-percent reduction in the number of dental cavities they would have experienced without fluoridation. Fluoridation is an extremely cost-effective public health measure because the technology is so simple and the fluoride so inexpensive. Studies indicate that a $100,000 investment in water fluoridation prevents 500,000 cavities. Moreover, for each dollar invested in fluoridation, over $80 in dental treatment costs are prevented, amounting to an 80:1 benefit-to-cost ratio. Few disease prevention efforts, public or private, achieve that level of return on investment.
f:\12000 essays\sciences (985)\Enviromental\Global Warming 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Global Warming
The greenhouse effect, in environmental science, is a popular term for the effect that certain variable constituents of the Earth's lower atmosphere have on surface temperatures. It has been known since 1896 that Earth is warmed by a blanket of gases (this is called the "greenhouse effect"). The gases--water vapor (H2O), carbon dioxide (CO2), and methane (CH4)--keep ground temperatures at a global average of about 15 degrees C (60 degrees F). Without them the average would be below the freezing point of water. The gases have this effect because as incoming solar radiation strikes the surface, the surface gives off infrared radiation, or heat, that the gases trap and keep near ground level. The effect is comparable to the way in which a greenhouse traps heat, hence the term. Environmental scientists are concerned that changes in the variable contents of the atmosphere--particularly changes caused by human activities--could cause the Earth's surface to warm up to a dangerous degree. Since 1850 there has been a mean rise in global temperature of approximately 1° C (approximately 1.8° F). Even a limited rise in average surface temperature might lead to at least partial melting of the polar icecaps and hence a major rise in sea level, along with other severe environmental disturbances. An example of a runaway greenhouse effect is Earth's near-twin planetary neighbor Venus. Because of Venus's thick CO2 atmosphere, the planet's cloud-covered surface is hot enough to melt lead.
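The statement that the average would fall below freezing without these gases follows from a standard energy-balance estimate. The Python sketch below is a minimal illustration of that textbook calculation; the solar constant and albedo figures are assumed round values, not taken from this essay.

# Zero-greenhouse ("effective") temperature from a simple energy balance:
# absorbed sunlight = blackbody emission.  Assumed round values.
S = 1361.0        # solar constant, W/m^2
albedo = 0.30     # fraction of sunlight reflected back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"No-greenhouse average: {T_eff:.0f} K ({T_eff - 273.15:.0f} C)")
# About 255 K (roughly -18 C), versus the observed average of about 15 C.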
Water vapor is an important "greenhouse" gas. It is a major reason why humid regions experience less cooling at night than do dry regions. However, variations in the atmosphere's CO2 content are what have played a major role in past climatic changes. In recent decades there has been a global increase in atmospheric CO2, largely as a result of the burning of fossil fuels. If the many other determinants of the Earth's present global climate remain more or less constant, the CO2 increase should raise the average temperature at the Earth's surface. As the atmosphere warmed, the amount of H2O would probably also increase, because warm air can contain more H2O than can cooler air. This process might go on indefinitely. On the other hand, reverse processes could develop such as increased cloud cover and increased absorption of CO2 by phytoplankton in the ocean. These would act as natural feedbacks, lowering temperatures [8].
In fact, a great deal remains unknown about the cycling of carbon through the environment, and in particular about the role of oceans in this atmospheric carbon cycle. Many further uncertainties exist in greenhouse-effect studies because the temperature records being used tend to represent the warmer urban areas rather than the global environment. Beyond that, the effects of CH4, natural trace gases, and industrial pollutants--indeed, the complex interactions of all of these climate controls working together--are only beginning to be understood by workers in the environmental sciences [2].
Despite such uncertainties, numerous scientists have maintained that the rise in global temperatures in the 1980s and early 1990s is a result of the greenhouse effect. A report issued in 1990 by the Intergovernmental Panel on Climate Change (IPCC), prepared by 170 scientists worldwide, further warned that the effect could continue to increase markedly. Most major Western industrial nations have pledged to stabilize or reduce their CO2 emissions during the 1990s. The U.S. pledge thus far concerns only chlorofluorocarbons (CFCs). CFCs attack the ozone layer and thereby contribute indirectly to the greenhouse effect, because the ozone layer protects the growth of ocean phytoplankton, which take up CO2.
Bibliography
Bilger, B., Global Warming (1992)
Bolin, Bert, et al., The Greenhouse Effect, Climatic Change and Ecosystems (1986)
Bright, M., The Greenhouse Effect (1991)
Fisher, David E., Fire and Ice: The Greenhouse Effect, Ozone Depletion, and Nuclear Winter (1990)
Houghton, J., et al., eds., Climate Change: The IPCC Scientific Assessment (1990)
Monastersky, Richard, "Time for Action," Science News, Mar. 30, 1991
Moss, M., and Rahman, S., Climate and Man's Environment (1986)
Schneider, S. H., Global Warming (1989)
Seitz, F., Scientific Perspectives on the Greenhouse Problem (1990)
Shands, W. E., and Hoffmann, J. S., The Greenhouse Effect, Climatic Change, and U. S. Forests (1987)
Stone, P., "Forecast Cloudy," Technology Review, Feb./Mar. 1992
Weiner, Jonathan, The Next One Hundred Years: Shaping the Fate of Our Living Earth (1990)
Wuebbles, D., Primer on Greenhouse Gases (1991).
f:\12000 essays\sciences (985)\Enviromental\Global Warming and Human Population.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The relationship between humans and the state of the ecosystem is not only dependent upon how many people there are, but also upon what they do. When there were few people, the dominant factors controlling ecosystem state were the natural ones that have operated for millions of years. The human population has now grown so large that there are concerns that it has become a significant element in ecosystem dynamics. One of these concerns is the relationship between human activities and climate, particularly the recent observations and the predictions of global warming, beginning with the alarm sounded by W. Broecker (1975).
The relationships among humans, their activities and global temperature can be assessed by making the appropriate measurements and analyzing the data in a way that shows the connections and their magnitudes. Human population can be closely estimated and the consequences of their activities can be measured. For example, the volume of carbon dioxide, methane and nitrous oxide emissions is an indicator of humans' energy and resource consumption. An examination of population size, atmospheric concentrations of these gases and global temperature relative to time and with respect to each other is presented here to demonstrate the relations among these factors.
POPULATION GROWTH
Many of us have seen linear graphs of human population showing the enormous growth in the last two centuries. However, significant changes in population dynamics are lost in the exponential growth and long time scales. If the data are replotted on a log-population by log-time scale, significant population dynamics emerge. First, it is apparent that population growth has occurred in three surges and second, that the time between surges has dramatically shortened (Deevey, 1960).
Figure 1. Population (Log-population versus log-time since 1 million years ago). Time values on x-axis, ignoring minus sign, are powers of 10 years before and after 1975 (at 0). Vertical dashed-line at 1995. Filled circles for known values are to left of 1995 and open circles on and to right of 1995 are for projected values. (Data updated from Deevey, 1960).
----------
Deevey's 1960 graph has been brought up to date in Figure 1 to reflect what has been learned since then. The data have been plotted relative to 1975 with negative values before 1975 and positive values thereafter. The reason for this will become clear below. The values of the time scale, ignoring the minus signs, represent powers of 10 years.
It has been argued that a population crash occurred about 65,000 years ago (-4.8, Fig. 1), presumably due to the prolonged ice-ages during the preceding 120,000 years (Gibbons, 1993). Humans came close to perishing and Neanderthal became extinct. However, by 50,000 years ago (-4.6, Fig. 1), humans had generated population mini-explosions all around the planet. Deevey's data for population size since 500 years ago have been replaced with more recent estimates taken from The World Almanac, (1992 - 1995) including population projections out to 2025. A vertical dashed-line has been placed at 1995. Filled symbols for the known values are to the left of it and open symbols on and to the right of it are for values projected into the short-term future.
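To see how the axes of Figure 1 work in practice, the short Python sketch below (matplotlib is assumed) replots a population series in the same Deevey-style log-log fashion; the population numbers in it are round illustrative values, not Deevey's estimates.

# Deevey-style log-log replot: time as powers of 10 years before 1975,
# population on a logarithmic axis.  Illustrative numbers only.
import math
import matplotlib.pyplot as plt

years_before_1975 = [1_000_000, 100_000, 10_000, 1_000, 100, 10]
population_millions = [0.1, 3, 5, 300, 1_300, 3_300]

x = [-math.log10(y) for y in years_before_1975]   # e.g. 10,000 years ago -> -4
plt.plot(x, population_millions, "o-")
plt.yscale("log")
plt.xlabel("time (powers of 10 years before 1975)")
plt.ylabel("population (millions, log scale)")
plt.show()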
The first surge coincides with the beginning of the cultural revolution about 600,000 years ago, interrupted by the population crash 65,000 years ago. Population size rebounded 50,000 years ago and then growth slowed considerably. The second surge began with the agricultural revolution about 10,000 years ago and was followed by slow growth. Deevey argued that moving down the food chain was the underlying cause of this large and rapid spurt. The timing of the present surge matches the rise of the industrial-medical revolution 200 years ago.
A relation between innovation and population growth is embedded in the log-log plot. There was rapid growth at the start of each surge. Then, growth rate slowed as people adapted to the precipitating innovations. Each surge increased the population more than 10-fold. It appears that we are nearing the end of the present surge as recent growth rates have declined. After the initial spurt, subsequent innovations did not perpetuate growth rates. The only significant innovations were those that produced the next surge. However, accumulated innovations during the surges may have played a role in the eventual decline in population growth rates. Starting with high birth and death rates, death rate declines and longevity increases, but birth rates stay high. Some time later, birth rates decline so that eventually, net births minus deaths produces slow growth. The result is a spurt in population size. When referring to the industrial revolution, this phenomenon has been called the "demographic transition". It appears that this dynamic may have occurred twice before.
The decreases in time between surges suggest that, if past behavior is the best predictor of future behavior, we are due for another surge. It may have already begun, as indicated by the upturn in the projections at the right end of the curve in Figure 1. What might the basis for another surge be? One can think of several possibilities, including the "green revolution" and the "global economy". A dominant element in past surges has been innovations in energy use (e.g., fire, descending the food-chain, beasts of burden, fossil fuels, high-energy agriculture). Thus, the development of an abundant and cheap energy source would have a profound effect. Another 10-fold (or more) surge would produce a population of 60 to 125 billion.
GLOBAL TEMPERATURE AND GREENHOUSE GASES
Figure 2. Greenhouse Gases and Mean Global Temperature (Greenhouse gas concentrations and mean global temperature versus time). Time scale same as in Fig. 1. Gas-concentration data have been normalized to the 0 to 1 scale on left: CO2 (squares) - 190 to 430 ppm; CH4 (triangles) - 600 to 2400 ppb; N2O (diamonds) - 280 to 340 ppb. Mean global temperature (circles) plotted relative to °C on right. Vertical dashed-line at 1995, horizontal dotted line at maximum CO2 concentration and global temperature over human history before 1990. Filled and open symbols same as in Fig. 1. Projections in short-term future are based upon continuation at current growth rates. (Data measured from graphs in Gribbin, 1990 and Khalil and Rasmussen, 1992).
----------
Mean-global-temperature (MGT) is related to the concentration of greenhouse gases (carbon dioxide, methane, nitrous oxide, water vapor and other trace gases) in the atmosphere. The most prevalent greenhouse gas is carbon dioxide (CO2). It has been shown that there is a strong relation between the atmospheric concentration of CO2 and MGT over the last 160,000 years (Gribbin, 1990). It has been suspected that the burning of fossil fuels and the clearing of land has reached such proportions that these activities have precipitated a significant increase in atmospheric CO2 concentration. The concentrations of greenhouse gases in the atmosphere have been directly measured since about 1960 and have been determined over the more distant past from air-bubbles trapped in old Antarctic, Greenland and Siberian ice and from deep-sea sediments. Mean-global-temperature has also been measured directly over the last few decades. Estimates of global temperature in the distant past have been deduced from a variety of sources. From these data, the relation among atmospheric greenhouse-gas concentrations, MGT and time is illustrated in Figure 2.
The time scale in Figure 2 is the same as that in Figure 1. Because CO2, methane (CH4) and nitrous oxide (N2O) concentrations have different scales, the data have been normalized on a 0 to 1 scale on the left. For CO2 (squares; Gribbin, 1990), 0 is equivalent to 190 parts per million (ppm) and 1 is equivalent to 430 ppm. For CH4 (triangles; R. Cicerone in Gribbin, 1990), the range is 600 to 2400 parts per billion (ppb). For N2O (diamonds; Khalil and Rasmussen, 1992), the scale is 280 to 340 ppb. Mean global temperature (circles; Gribbin, 1990) has been graphed relative to the degrees-centigrade scale on the right. The vertical dashed-line is the same as that in Figure 1. The horizontal dotted-line is the highest CO2 concentration and temperature in human history before 1990. Greenhouse-gas concentrations and MGT in the short-term future are based upon continuation at the current growth rates. This will be justified in another context below.
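The rescaling itself is a simple min-max normalization. The Python sketch below illustrates it with the CO2 range given in the caption; the sample readings are illustrative only.

# Min-max normalization used to place the gas series on a common 0-to-1 axis.
def normalize(values, lo, hi):
    """Linearly map values so that lo -> 0 and hi -> 1."""
    return [(v - lo) / (hi - lo) for v in values]

co2_ppm = [280, 315, 355, 430]             # illustrative CO2 readings
print(normalize(co2_ppm, 190, 430))        # CO2 axis: 190 ppm -> 0, 430 ppm -> 1
# CH4 would use the range (600, 2400) ppb and N2O (280, 340) ppb, as given above.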
Figure 3. Population and Global Warming (CO2 concentration and mean global temperature versus log-population). CO2 concentration (circles) and mean global temperature (squares) plotted relative to their absolute scales, ppm on the left and °C on the right, respectively. Vertical dashed line at 1995. (Data from Figs. 1 and 2)
----------
It is clear that the concentrations of all three gases have increased exponentially since 1950 (-1.4, Fig. 2) and that MGT has done so since 1975. Carbon dioxide concentration began to rise in conjunction with the use of fossil fuels after 1850. Although methane comes from a variety of sources, including plant decay, termites and bovine flatulence, CH4 concentration rises at the same time as CO2. This is probably due to its association with fossil-fuel production. Nitrous oxide concentration does not begin to rise until 1950. At this time, the use of human-made fertilizers and internal-combustion-engine exhaust increased dramatically. Ten thousand years ago (-4, Fig. 2), MGT increased substantially just as the agricultural revolution got started. Over the previous 200,000 years, the ecosystem was dominated by ice-ages. Projected MGT in 2025 (1.7, Fig. 2) is about 17°C, 1.5°C higher than in human history prior to 1990.
POPULATION AND GLOBAL TEMPERATURE
We have seen in Figures 1 and 2 that recent population, atmospheric greenhouse-gas concentrations and MGT have grown exponentially over about the same time-course. The relation of CO2 and MGT relative to population size can be observed by graphing these variables as above. Figure 3 shows this graph, where the log of population replaces log-time and CO2 concentration (circles) and MGT (squares) are plotted relative to their absolute scales, ppm on the left and oC on the right, respectively. The vertical dashed-line denotes 1995, as in Figures 1 and 2. When the population reached 4 billion in 1975, the converging relation between population and the other two variables becomes apparent.
The magnitude of the relations in Figures 2 and 3 can be determined by calculating the correlation coefficient between pairs of variables. Table 1 lists these coefficients for the population, greenhouse-gas concentration and MGT variables that we have been examining. The coefficients for the relations during the industrial revolution, 1800 through 1994, are above the diagonal of the table. The coefficients since 2000 years ago through 1994 are below the diagonal. Over the past 2000 years, there is a nearly perfect correlation between the concentration of greenhouse gases and population and between the greenhouse gases themselves. However, the correlations between both population and greenhouse-gas concentrations and MGT (bottom row) are not as strong. After 1800, the latter correlations increase to near perfection (rightmost column). The conclusion from the graphs and table is that there is a strong relationship among population size since 1800, greenhouse-gas concentrations and MGT.
TABLE 1. Correlation coefficients among population size, atmospheric greenhouse-gas concentrations and mean global temperature (1800 through 1994 above the top-left to bottom-right diagonal, n=10; 2000 years ago through 1994 below the diagonal, n=15).
         Pop     CO2     CH4     N2O     Temp
----------------------------------------------
Pop       -     .996    .984    .977    .916
CO2     .990     -      .994    .974    .942
CH4     .991    .992     -      .949    .945
N2O     .959    .943    .942     -      .932
Temp    .718    .716    .728    .829     -
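The coefficients in Table 1 are ordinary pairwise correlations. The Python sketch below shows how such a matrix can be computed with numpy; the series in it are placeholders, since the digitized data behind the table are not reproduced here.

# Pairwise correlation coefficients with numpy.  Placeholder series only,
# not the digitized population/gas/temperature data used for Table 1.
import numpy as np

pop  = np.array([0.98, 1.26, 1.65, 2.52, 4.07, 5.28])    # billions (illustrative)
co2  = np.array([280, 285, 295, 311, 331, 354])          # ppm (illustrative)
temp = np.array([13.7, 13.7, 13.8, 14.0, 14.1, 14.4])    # deg C (illustrative)

labels = ["Pop", "CO2", "Temp"]
r = np.corrcoef(np.vstack([pop, co2, temp]))
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"r({labels[i]}, {labels[j]}) = {r[i, j]:.3f}")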
GLOBAL WARMING AND CLIMATE
Determining that there is a strong relation between population size and global warming does not tell us what the underlying mechanisms are. However, documentation of the relationship between human activities and the release of greenhouse gases produces a strong inference that population size and global warming are closely related (Gribbin, 1990).
Forecasting the future is risky business. Growth rates for greenhouse-gas concentrations and MGT could decline from those at present due to unanticipated innovations or natural events. For example, volcanoes can spew enough ash into the atmosphere to block sunlight and temporarily reduce MGT slightly. However, short-term continued growth at current rates is probably an underestimate. Although population growth rate has slowed, the population is still growing. The dominating factor is that per-capita energy and resource consumption rates are increasing much faster than the population. This is not only due to anticipated increases in standards of living in underdeveloped countries, but also to future increases in the demand for energy in the developed countries (e.g., air conditioning) as summer temperatures rise. Since most of the energy will come from fossil fuels, at least for the next few decades, we can expect the atmospheric concentrations of greenhouse gases and MGT to rise in the short-term future at a faster rate than they have recently. As MGT rises, water vapor, another greenhouse component, will become a more and more significant factor due to increased evaporation.
Although a 1.5°C increase in MGT above where we were in 1990 (1990 to 2025 in Fig. 2) does not seem like much of a change, it is enough to precipitate major changes in climate. A 1.5°C drop in MGT from where we were in 1990, for example, would put the ecosystem on the verge of an ice-age. Already, there is a suspicion that, since 1975, the persistent El Nino is the first sign of the relation between global warming and climate (Kerr, 1994). As MGT increases further, we can expect more frequent and severe hurricanes and perpetual summertime droughts in many places, particularly in the US Midwest. Paradoxically, more intense winter storms will occur in some places and climatic conditions for agriculture will improve in some areas, such as in Russia (Gribbin, 1990; Bernard, 1993).
There has been considerable debate over the ecosystem's carrying capacity for humans. If we define that carrying capacity as the level that the ecosystem can support without changing state more than it has over the duration of human history, then Figures 2 and 3 indicate that we exceeded that capacity in 1975. This is the point in time where exponential growth began to push MGT along a path which has taken it outside the previous range. This does not necessarily mean that humans could not survive if MGT is about 2°C higher than it has ever been in their history. However, we will have to adapt to a radically different climate pattern and, if MGT goes any higher than that, there could be disastrous problems.
If MGT continues to increase beyond 2025 to 4°C above that in 1990, high-northern-latitude temperatures could be as much as 10°C higher than at the equator. The Arctic ice-cap would begin to melt and the permafrost under the tundra would start thawing out. As a consequence, a thick layer of rotting peat would contribute further to atmospheric CO2 and CH4 concentrations (Gribbin, 1990). With a number of human-made and natural positive-feedback elements in operation simultaneously, a threshold could be crossed (Meyers, 1995; Overpeck, 1996). Are these risks that we should be willing to take for the sake of short-term gains?
REFERENCES
Bernard, H. W. Jr., "Global Warming Unchecked", Indiana Univ. Press, Bloomington, 1993
Broecker, W., Science, 189:460, 1975
Deevey, E. S., Scientific American, 203:195, 1960
Gibbons, A., Science, 262:27, 1993
Gribbin, J., "Hothouse Earth", Grove Weidenfeld, New York, 1990
Kerr, R. A., Science, 266:544, 1994
Khalil, M. A. K. and R. A. Rasmussen, J. Geophys. Res., 97:4651, 1992
Meyers, N., Science, 269:358, 1995
Overpeck, J. T., Science, 271:1820, 1996
"The World Almanac", Pharos, New York, 1992 - 1995
Post Script
After this document was written (about two years ago), two books came out that provide much more detail on some of these issues:
HOW MANY PEOPLE CAN THE EARTH SUPPORT? by Joel E. Cohen; Norton, 1995.
DIVIDED PLANET: THE ECOLOGY OF RICH AND POOR by Tom Athanasiou; Little Brown, 1996.
Both are superbly done and provide a much more comprehensive and up-to-date treatment of the population and economic topics included here.
Recent evidence (Mora et al., Science, 271:1105, 1996) indicates that the possibility of a "greenhouse runaway" on Earth is much more remote than was indicated at the end of the previous version of this document. Therefore, the former apocalyptic ending has been changed. Although the data presented point to a catastrophic conclusion, that was (perhaps) an overstatement of the case.
f:\12000 essays\sciences (985)\Enviromental\Global Warming.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Mission Plan
a. Analysis of the Problem
1. History of the Problem
Some scientists have been concerned since 1896 about what might happen if there were 5.5 billion tons of carbon dioxide in our atmosphere. In 1861 a British scientist did an experiment showing that the carbon dioxide in the air was absorbing some of the sun's radiation. Afterward a Swedish scientist, Svante Arrhenius, found that if the radiation of the sun was trapped by the carbon dioxide, the temperature of the earth would increase by 1-2 degrees. In 1988 James Hansen, a respected scientist, told the U.S. Congress that "the greenhouse effect is occurring now and it's changing global climate" (Koral, 1989). After the 1900s people started building factories and using fossil fuels like coal and oil. It was the industrial revolution and the overpopulation of humans that caused the environmental problems we have today.
2. Human Activity Causing the Problem
The reason our Earth is getting hotter is that human activities are emitting too much carbon dioxide into the atmosphere. The radiation from the sun gets trapped by the layer of carbon dioxide that surrounds our earth.
One main reason for the problem of global warming is the burning of fossil fuels. Fossil fuels are coal, oil and natural gas. We use these fuels to run factories, power plants, cars, trucks, buses, air conditioning, and so on. The people of the earth are putting 5.5 billion tons of carbon, in the form of carbon dioxide, into the air every year! Seventy-five percent of this comes from fossil fuels.
3. Impact Causing Global Change
For many years, scientists have been predicting that our disregard for Mother Nature would cause the climatic temperature of this Earth to increase greatly. There have been arguments that the whole idea of Global Warming is a hoax, that the temperature cycle is just experiencing an upward trend and will eventually come back down. Now, however, we are starting to see the evidence of our behavior.
Remember the great heat wave in Chicago? That could have been a consequence
of global warming. Nearly a hundred people died, and the city's economy came to a
standstill. A much more tragic but less known heat wave smashed into India, causing
upward of 600 deaths.
Global Warming doesn't only increase temperatures in hot areas; it can also decrease temperatures in cold areas. An example of this has been the cold spell that struck the Midwest. In Montana, temperatures plummeted to 30 degrees below zero and stayed there.
The coldest weather ever recorded plagued our country's heart for over three weeks, and
still hasn't returned to normal. A related incident has been the blizzards of the east coast.
Some places in New York State got over twenty feet of snow.
On one island inhabited by native tribes, if the sea level rises three-fourths of a meter, half of the island will sink. The same will happen on many other islands around the world, and if the water keeps rising as it is, farmland near the seashore will be flooded and the crops will be destroyed.
Like California and other states, we are adding CO2 and changing the earth's weather. Some places are getting too little water, which causes drought, and other places get too much water, which causes floods.
In California, there was an almost permanent drought during the eighties. It was broken in the nick of time by the great rainstorms of 1995. We also experienced a frightening cold spell in 1992.
The Road Ahead
With all these obvious scourges plaguing us now, it seems that things cannot get
any worse. However, the current droughts, floods, and storms are just the tip of the
iceberg. If the greenhouse effect continues unabated, then the inhabitants of Planet Earth
have some surprises in store.
Scientists estimate that the global temperature will rise between 5 and 9 degrees by the middle of the 21st century, accompanied by a sea-level rise of one to four feet. Five degrees may not seem like a drastic change, but during the last ice age the average temperature was only five degrees colder than it is now. Thus, our actions are warming the earth by enough to break it out of an ice age.
Once the temperature reaches a certain threshold, the polar ice caps will begin to
melt. While those living in the Arctic may find that a welcome surprise, the implications
for the rest of the world are serious. Even a partial melting of the polar ice caps will cause
sea levels to rise so much as to completely wipe out most coastal cities. This includes
such cultural centers as San Francisco and New York. Those cities that survive will be
battered down by hurricanes much more severe than anything seen in history. Of course,
inland cities are not immune either. Rather than floods, they will face drought. So while
half the world is swimming to work, the other half will be crawling on their knees with a
scorching sun beating against their backs.
When drinkable water is scarce, it will become a commodity that represents political power. The countries with water will be the countries with power. This means there will be a political upheaval of global proportions. Life as our children know it will be completely different, and not necessarily for the better. With most of America's lakes dried up and its major trading ports under several feet of salt water, perhaps we won't be the economic leader.
If we don't start trying to stop global warming now, there will be many more consequences. Temperatures will rise sharply, affecting human life by causing skin cancer, damaging the human immune system, and causing cataracts, because the sun's deadly rays, such as UV rays, mutate human cells. Rising temperatures will also affect agricultural and aquatic life, and many species will die off. Among the plants and animals of the forests there could be medicines to cure some kinds of disease, and these would be lost.
b. Experimental Design
1. Restate Problem
Natural occurrences are not the only causes of and influences on our changing atmosphere. Human activities also cause the atmosphere to change. The burning of fossil fuels is producing a worldwide increase in the atmospheric concentration of carbon dioxide. If
atmospheric carbon dioxide continues to increase at the present rate, studies estimate that
the average surface temperature will rise 2 degrees Celsius by the middle of the next
century. This will be a climate change greater than any other ever experienced in history,
that we know of. The four main greenhouse gases are Carbon Dioxide (CO2),
Chlorofluorocarbons (CFCs), Methane (CH4), and Nitrous Oxide (N2O). With the
exception of CFCs, all these gases are found in nature. It is the recent explosion of the
human population that has caused an exponential increase in their atmospheric presence.
Although nature has provisions for removing carbon dioxide, it does not take into
account the human factor. The long, complicated carbon cycle can only keep up with
increasing human activity if the tree population increases proportionately. Due to modern
medicine and increased awareness of nutrition and health, the human race has managed to
extend its lifespan considerably, thereby releasing more CO2 into the atmosphere. This,
combined with an alarming rate of rainforest depletion and air pollution, leads to an
unmanageable amount of CO2 in the atmosphere. Since its sources are both natural and
human, carbon dioxide is the largest contributor to the greenhouse effect, at 50%.
As for CFCs, our only excuse is that "it seemed like a good idea at the time." When
they were first invented, they seemed to be the miracle chemical of the century. Because
of their low boiling point, CFCs could act as coolers in refrigerators, freezers, and air
conditioners. Also, they were used to make Styrofoam and as aerosol propellants. As it
turns out, they are as skilled at destruction as they are at refrigerating. Scientists
discovered in the 1970's that CFCs destroy ozone, starting an international ban on their
usage. Later, it was determined that CFCs contribute to global warming as well, making
them a dangerous double whammy. CFCs are no longer used in aerosols and Styrofoam;
however, most refrigerators still contain Freon, a CFC. Fortunately, the Freon can be
recycled. Contributing to 25% of global warming, CFCs are still a major problem, but at
least the U.S. and the other powers have recognized it as such. Methane, also known as natural gas, contributes 15% to the greenhouse effect. It is produced by cows and rice paddies. The major American demand for beef urges foreign farmers to clear
forests for pastures. This also causes an increase in carbon dioxide, as well as a cow
population so high that the methane-rich burps of the complex digestive system are a
major contributing factor to the greenhouse effect. Add to that the methane released from
natural sources, and you have a very large problem. The ten percent that is left comes
from nitrous oxide, a common pollutant. It, along with carbon dioxide, forms the major
part of car exhaust. Half a billion cars drive the streets of the world today, a number
expected to double by 2030. N2O is also released by the burning of fossil fuels. Finally,
it finds its way into the atmosphere from nitrogen fertilizers, which are used heavily by
today's modern farmers.
Overall, there are many pollutants in our atmosphere, produced both by humans and by natural processes. In our opinion, anyone in this country who wants to live in a good environment has to take charge and make a difference, even if that means becoming a vegetarian so that fewer greenhouse gases come from livestock.
2. Hypothesis
If we continue to pollute the air with greenhouse gases such as methane and do nothing about it, then the average global temperature will rise and there will be many consequences.
Warming expands ocean water and may melt some glaciers. The sea level could rise one
foot in the next 35 years and two in the next 100. Hurricanes, tornadoes and other
extreme storms may become more frequent. Centers of large continents, such as the U.S.
Great Plains, may be drier even if the overall world rainfall increases somewhat. Heat
waves may be more common. Movement of just 1 percent of a future population of 6
billion people due to higher sea level, drought, or other climate change would produce 60
million migrants, many times the number of all refugees today. The overall impact will be mixed: carbon dioxide stimulates plant growth, but heat increases demand for water. Growing
zones will shift if weather patterns change. Warming that expands the tropics will also
expand the range of tropical diseases such as malaria and other insect borne maladies.
Possible mass extinctions may occur as conditions change faster than species can move or adapt. Urban and agricultural development leaves few wilderness corridors for migration.
3. EOS Satellite
The Earth Observing System (EOS) Data and Information System (EOSDIS)
is NASA's Mission to Planet Earth's (MTPE) project to provide access to Earth Science
data. EOSDIS manages data from NASA's past and current Earth science research
satellites and field measurement programs, providing data archiving, distribution, and
information management services. During the EOS era--beginning with the launch of the
TRMM satellite in 1997--EOSDIS will command and control satellites and instruments,
and will generate useful products from orbital observations. EOSDIS will also generate
data sets made by assimilation of satellite and in situ observations into global climate
models.
The instrument we chose for monitoring the impact of human activity is
HIRDLS. HIRDLS is an infrared limb-scanning radiometer designed to sound the upper
troposphere, stratosphere, and mesosphere to determine temperature; the concentrations
of O3, H2O, CH4, N2O, NO2, HNO3, N2O5, CFC11, CFC12, and aerosols; and the
locations of polar stratospheric clouds and cloud tops. The goals are to provide sounding
observations with horizontal and vertical resolution superior to that previously obtained;
to observe the lower stratosphere with improved sensitivity and accuracy; and to improve
understanding of atmospheric processes through data analysis, diagnostics, and use of
two- and three-dimensional models.
HIRDLS performs limb scans in the vertical at multiple azimuth angles, measuring
infrared emissions in 21 channels ranging from 6.12 to 17.76 µm. Four channels measure
the emission by CO2. Taking advantage of the known mixing ratio of CO2, the
transmittance is calculated, and the equation of radiative transfer is inverted to determine
the vertical distribution of the Planck black body function, from which the temperature is
derived as a function of pressure. Once the temperature profile has been established, it is
used to determine the Planck function profile for the trace gas channels. The measured
radiance and the Planck function profile are then used to determine the transmittance of
each trace species and its mixing ratio distribution.
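To make the retrieval described above more concrete, here is a minimal sketch in Python, not the operational HIRDLS algorithm: assuming a single emitting layer whose CO2 transmittance t is known from the fixed CO2 mixing ratio, the measured channel radiance L is modeled as L = (1 - t) * B(T), so the Planck function B and then the temperature T can be recovered by inverting the Planck formula. The channel wavelength, transmittance, and radiance values are illustrative assumptions.

import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

def temperature_from_planck(wavelength_m, radiance):
    """Invert the Planck function to get temperature from spectral radiance."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * K * math.log(a / radiance + 1.0))

def retrieve_temperature(wavelength_m, measured_radiance, co2_transmittance):
    """Single-layer inversion: L = (1 - t) * B(T), so B(T) = L / (1 - t)."""
    return temperature_from_planck(wavelength_m, measured_radiance / (1.0 - co2_transmittance))

# Simulate a measurement from a 220 K layer in an assumed 15-micrometer CO2 channel,
# then recover the temperature; the printed value should be about 220 K.
wavelength = 15.0e-6
transmittance = 0.4
simulated_radiance = (1.0 - transmittance) * planck(wavelength, 220.0)
print(retrieve_temperature(wavelength, simulated_radiance, transmittance))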
Winds and threatening tornadoes are determined from spatial variations of the height of geopotential surfaces. These are determined at upper levels by integrating the temperature profiles vertically from a known reference base. HIRDLS will improve knowledge of data-sparse regions by measuring the height variations of the reference surface provided by customary sources with the aid of a gyro package. This level, which is near the base of the stratosphere, can also be blended downward using nadir temperature soundings to improve tropospheric analyses.
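As a companion sketch, again illustrative rather than the HIRDLS processing code, the geopotential-height step mentioned above can be pictured as integrating the hypsometric equation upward from a known reference level using retrieved layer-mean temperatures; the pressure levels, temperatures, and reference height below are assumed values.

import math

RD = 287.05    # gas constant for dry air (J kg^-1 K^-1)
G0 = 9.80665   # standard gravity (m s^-2)

def integrate_heights(ref_height_m, pressures_hpa, layer_mean_temps_k):
    """Hypsometric equation: each layer adds dZ = (Rd * T_mean / g) * ln(p_lower / p_upper)."""
    heights = [ref_height_m]
    for i, t_mean in enumerate(layer_mean_temps_k):
        dz = RD * t_mean / G0 * math.log(pressures_hpa[i] / pressures_hpa[i + 1])
        heights.append(heights[-1] + dz)
    return heights

# Assumed example: reference level at 16 km, pressure levels in hPa going upward,
# and layer-mean temperatures in K between successive levels.
print(integrate_heights(16000.0, [100.0, 50.0, 10.0], [210.0, 225.0]))

Horizontal variations in the heights computed this way are what the wind determination above refers to.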
Bibliography
"Climate Change Brings Trouble". The Earth Care Annual 1993. Emmaus:
Rodale Press, 1993
"EOS" http://eos.nasa.gov/ Logon November 3, 1996
"Global Warming" http://users.aimnet.com/~hyatt/gw/gw.html Logon October 25,
1996
"Global Warming". Microsoft Encarta 95, Microsoft, 1994.
"HIRDL" http://eos.acd.ucar.edu/hirdls/home.html Logon November 1, 1996
Newton, David. Global Warming A Reference Handbook. Santa Barbara:
ABC-CLIO, 1993
Silver, Cheryl. One Earth, One Future, Our Changing Global Environment.
Washington D.C., National Academy Press, 1990
Woodwell, George. The Rising Tide Global Warming and World Sea Levels.
Washington D.C., Island Press, 1991
f:\12000 essays\sciences (985)\Enviromental\Green laws boost cleanup industry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
English assignment, 1 September 1995
"Green laws boost clean-up industry"
I
Have companies around the globe really become "house-proud", or is planet earth just in for a spring cleaning? It is hard to say - but one thing is for sure: the environmental sector is enjoying a boom. The market for pollution-control technology is on a steep exponential growth curve which seems to be interminable, and European companies in particular account for an immense part of the expansion. But what is the precise nature of this sudden environmental concern? After all, the deteriorating state of the environment is hardly a novel phenomenon, to say the least.
Just how vigorous this potential goldmine is going to be for the clean-up industry actually depends on law and order, so to speak. That is to say, one of the main reasons for the upturn is new legislation. Recent EU directives on pollution may place heavy demands on the purse of one company and consequently pour that money into the pockets of the clean-technology industry. Moreover, the deadlines for plants to meet EU directives are getting close, and everything seems to show that the laws will be enforced. Yet far from all companies need the raised finger of the law to start investing in their environmental responsibilities. Investments on a voluntary basis are often made because they make good economic sense or because they give the corporate image a face-lift.
Seen from a geographical point of view, Germany and above all eastern Europe form tremendously good breeding ground for the sale of clean-up equipment. Opencast mining of lignite coal in Poland, for example, has left behind a huge clean-up job that will amount to billions of dollars. Accidents also occur at sea, where a spate of oil-tanker disasters is likely to fill the order books of the oil-cleaning industry.
Nevertheless, a stroke of bad luck is far from necessary in order to make firms understand their green obligations. The power of the consumer has been on the increase over the last few years, and a public environmental image means more to a firm than ever before. The average consumer going down to the grocer's for a few necessities is starting to attach importance to something other than just the product itself. How is the detergent wrapped - is the paper bleached? Is this bottle reusable? Are these outdoor tomatoes? - and so on. Personally, I don't think you notice it as you walk along the shelves in the local supermarket - but you do pay more attention to ecological messages on the products than you did just 5-10 years ago. After all, this is a topic very much in the public mind, so I guess it's quite natural to get involved one way or the other. I know from my own experience that we have started to put several ecological products on the shopping list at home when going to the grocer's. Products like carrots, rye bread, milk, and cheese appear regularly on our shopping list, and always in ecological form. But just recently another common purchase was substituted: red wine, French red wine to be exact, had to give way to a Spanish bottle instead. This day-by-day "revolution" on the dinner table was my mother's contribution to the protest against the French nuclear tests. French products in general were banned from our shopping list - and still are. How far her exertions have had an appreciable effect on Monsieur Chirac is dubious - but many a little makes a mickle, as they say!
On a more global scale, this environmental consciousness of the consumers was to be witnessed just a couple of months ago. The planned sinking of the drilling rig "Brent Spar" in the open sea caused an outcry all over Europe, and customers "flexed their muscles". Shell, the mastermind behind the sinking, was boycotted by a vast number of both bulk-buying companies and ordinary consumers, which at last resulted in a more environmentally friendly solution. To my mind, this way of carrying one's point is absolutely excellent. Henceforward, I feel that consumers should utilize "the power of their shopping list" far more frequently. As for "Brent Spar", we kept that one afloat and got it sent to the breakers, preventing further molestation of the environment. Let's only hope that this will go for the French nuclear weapons as well - before it's too late! "Consumers, unite!"
III
COWIconsult
Parallelvej 15
2800 Lyngby
Denmark
The European
Att.: Michael Bond
Orbit House
5 New Fetter lane
London EC4A 1AP
U.K. 12 June 1994
Dear Sir
Thank you for your letter of 6 June. I regret that I unfortunately cannot answer your question, since we are a consulting firm which is not directly involved in any environmental activities.
The environmental sector has truly enjoyed a boom during the past few years. Industry is beginning to take its green responsibility seriously, and consequently we help companies find out whether they can make profits from a green image or not. For instance, we do calculations for companies so that they can see the financial consequences of any environmental investments.
That is why we cannot be of any assistance to you regarding information on special projects. However, we do enclose our latest annual report, where you will find the names of some Danish firms which have been involved in either the cleaning of polluted soil in eastern Europe or the sale of equipment for monitoring oil spills from ship tanks in the North Sea. Perhaps you can obtain further details from the companies mentioned.
We also refer you to our office in London, 35 Bassinghall Street, London EC2V 5DB.
We wish you the best of luck on your articles.
Yours sincerely
COWIconsult
Marlene Eriksen
Marlene Eriksen
Information Manager
Encl
f:\12000 essays\sciences (985)\Enviromental\Greenhousing the wrong way.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Greenhousing the Wrong Way
By Andy & Luke
Exactly what is the "Greenhouse Effect?"
The "Greenhouse Effect" is the common name given to global warming. The effect is named so due to its similarities to the function of a greenhouse. Heat from the sun is allowed into our atmosphere, and then bounces off of the Earth and heads back out to space. But now that we have a wall of Carbon Dioxide, Methane, Nitrous Oxide, and CFC's, the sun's heat rays bounce back towards the Earth. The continuous burning of fossil fuels and the rain forests is causing excess amounts of carbon dioxide to be released into the atmosphere.
Carbon dioxide acts as the walls of a greenhouse that encompasses the whole world, trapping heat into the atmosphere. The thickening of the blanket is causing more heat to be trapped resulting in the warming of the earth. One great example of the Greenhouse Effect is the planet Venus. Venus's atmosphere had a thick layer of CO2, giving the planet's surface a temperature warm enough to melt lead.
So What Does All This "Greenhouse" Stuff Have To Do With Me?
As the temperatures rise, the waters get warmer and begin the melting process of the polar ice caps (Popular Science).
Long term predictions of Global warming say that the melting of the polar ice caps will continue causing ocean waters to rise, resulting in massive coastal flooding of major cities such as Los Angeles and Miami.
If the next century's warming stays at the low end of estimates, the consequences are likely to be mild. But if warming reaches the middle or top estimates, we are likely to see such things as more frequent and more intense heat waves, increased flooding, and droughts in different areas. Not to mention the 60,000,000 migrants that would result if only 1% of our future population had to seek higher ground. This many migrants would further crowd our already crowded cities, take more jobs, and require health care. Diseases such as malaria and other insect-borne ailments will have an expanded range if the tropics continue to warm. If conditions change faster than species can adapt, many, many, many animals will become extinct (World Book).
So What Am I Supposed To Do About All These Man-Made Disasters?
Become knowledgeable about the subject of global warming. Recognize that global warming may worsen and prepare yourself and others for it. Help inform others of the problem, and write letters to large companies and your government. Lots of people don't even know about global warming, and most that do are underestimating its possible effect. Renew the search for safe and clean alternatives to fossil fuels and alternatives to transportation (Global Climate Change). The most important thing you can do is care enough to try to make a difference.
Works Cited
"Turning up the Heat", Popular Science, October 1989, p.53
Whitmore, Susan C., Global Climate Change and Agricultural Summary, Sept. 1992
World Book Encyclopedia, volume G, 1991, p.407
f:\12000 essays\sciences (985)\Enviromental\Greenpeace.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I
Living in the Faroe Islands means that you have experienced Greenpeace in action. It also means that your opinion of Greenpeace is not as positive as it might have been without the influence of your fellow countrymen.
"Credit where credit's due": Greenpeace has done magnificent work when it comes to preserving our environment, even though its methods are questionable and sometimes rude and immoral.
From Greenpeace's campaigns against the Faroe Islands, where pilot whale hunting has been the target, we know the organization and its methods of argumentation to be detestable. Pictures and video recordings were manipulated, and information that we can only laugh at was fed to the public.
The questionable procedures used in the campaigns against the Faroe Islands, however, do not give a complete picture of Greenpeace and its actions. The keyword for Greenpeace, and what is common to almost every venture it undertakes, is that it is active. Greenpeace does not believe in bureaucracy, and that is something we should respect. Taking on an action for Greenpeace sometimes means that you have to break the law, or at least bend it a little.
In the USA, where Greenpeace protested against factories that deliberately led their toxic waste into nearby lakes, activists sealed the pipes from which the waste was coming and, furthermore, refused to leave.
An up-to-date example is Mururoa, where the French government held a series of underground nuclear tests and banned all nearby sailing in the meantime. Greenpeace did not hesitate: it sailed right to Mururoa and stayed there until forced to leave by a commando squad.
Even though direct action seems to be their game, Greenpeace also has its own research centre, where the ozone-friendly refrigerator was designed. But 'direct action' is still what Greenpeace stands for.
One could say that in a world where bureaucracy blooms and people do not seem to care, Greenpeace is the rebel.
II
Pat Hanson
875 Green Street
Seattle, WA98116
USA
Greenpeace Denmark
Linnésgade 25
1361 København K
1 October 1995
Dear Sirs
Being an environmentalist myself and having followed Greenpeace's ventures with interest and expectation, I have gained respect and admiration for your organization and the work it has been doing for the environment.
That is why I am writing to you, now that I am going to start my own shop in the centre of Copenhagen.
The name I have chosen for my shop is
GREEN MACHINE
The goods will consist of environmental literature and environmental articles to begin with. To mention but a few of the ecological articles I am planning to sell:
· Clothes
· Caps
· Bags
· Toiletries
I believe that the shop can be a success, and my degree in business administration will be a reliable help in making that happen. My financial status is also good, and I intend to spare no expense in setting up the GREEN MACHINE shop.
For me it would be an honour and a privilege to sell Greenpeace articles in my shop, and I hereby ask for your permission to do so.
I believe that a business deal between us would be beneficial to both parties, since your goal obviously is to reach as many customers as possible with your merchandise.
I look forward to hearing from you and to eventually doing business with you in the future.
Yours faithfully
Pat Hanson
III
GREENPEACE
Greenpeace Denmark
Linnésgade 25
1361 København K
Pat Hanson
875 Green Street
Seattle, WA98116
USA
10 October 1995
Thank you for your letter to Greenpeace in Denmark and for your interest in our organisation.
We have a rather large administration here in Denmark, both in Copenhagen and in Århus, and besides that we have shops in both towns. Our goods are sold in these shops and through mail order via our members' magazine.
As you can see, we already have a comprehensive sale of Greenpeace merchandise, and as a multinational association we have certain obligations and restrictions. We are therefore sorry to inform you that you may not sell Greenpeace merchandise in the shop which you are planning to open in Copenhagen.
On the other hand, we are naturally always interested in encouraging the sale of ecological articles, even though this gives our own shops increased competition. We would therefore like to help you get started with your shop by sharing our experience with you. Greenpeace is no stranger to aggressive advertising campaigns, and we could perhaps help you in this area. We therefore ask you to contact Henrik Green, who is the head of our marketing department.
We hope that you may succeed in establishing a shop in Copenhagen.
Yours faithfully
Greenpeace Denmark
Chris Smith
f:\12000 essays\sciences (985)\Enviromental\Hawaiian Goose.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hawaiian Goose
The Branta sandvicensis, or Hawaiian Goose, looks similar to the Canada Goose, except that only the face, cap, and hindneck are black and the Nene have buff-colored cheeks. The males and females have the same plumage. The feet of this goose are not completely webbed like those of other geese. Many calls have been described, but the most common call is very similar to that of the Canada Goose, a resonant "honk." The goose has very strong toes, long legs, and reduced webbing. They are good swimmers but are not found much near water. The birds nest on the ground and the young can fly at 10-12 weeks. The adult goose cannot fly while in molt for 4-6 weeks.
Wild Nene populations can be seen in Hawaii Volcanoes National Park, Mauna Loa, and Pu'u Wa'awa'a on the island of Hawaii; in Haleakala National Park on Maui; and at the Kilauea National Wildlife Refuge, along the Na Pali coast and outside Lihue on Kauai. Captive Nene can be seen at the Honolulu Zoo.
Designated Hawaii's State Bird on May 7, 1957, the Nene has endured a long struggle against extinction. During the 1940s this species was almost wiped out by laws which allowed the birds to be hunted during their winter breeding seasons, when the birds were most vulnerable. By 1957, when the Nene was named the State Bird, rescue efforts were underway. Conservationists began breeding the birds in captivity in hopes of preserving a remnant of the declining population and, someday, successfully re-establishing them in their native habitat. Early programs for returning captive birds to the wild were difficult, but more recent efforts have been successful. Other efforts to help this bird have included collecting donations and having schools help out by donating money to conservation organizations. There are now small populations of Nene on the islands of Hawaii, Maui, and Kauai. There are about 1,000 Nene outside of Hawaii in zoos and private collections.
The Nene is currently on the Federal List of Endangered Species, threatened by mongooses, dogs, and cats, which prey on the Nene's eggs and young. They are also endangered by human intrusion into their environment.
f:\12000 essays\sciences (985)\Enviromental\hello.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I am extremely happy at school, which has helped and encouraged me to develop all my skills, both educational and social. I was also part of the Yemin Orde project, from which I benefitted enormously. As the school has done so much for me, I would like to give something positive back to it, and I feel that this is an excellent opportunity for me to achieve that goal.
· Leadership - I teach a large class of children at Cheder, which includes planning work for them.
· Ability to work in a team - I exhibited this skill on the Duke of Edinburgh expedition.
· Ability to work as an individual - I am able to use my initiative, as can be seen from the computer presentation I designed for the Prospective Parents' Evening this school year.
· Ability to grasp a situation and respond appropriately - I exhibited this skill during my time in Yemin Orde when a friend of mine had an asthma attack and I aided him on my own while going to find help.
I am decisive yet open-minded, and I am able to listen attentively. I have the ability to reach sound judgements without accepting things unquestioningly, and I can report my decisions in a fair and balanced way. I believe that I can work well with authority in a variety of situations and I can recognise where the ultimate authority lies.
I believe that the position of Student Officer is not so much one of authority but of responsibility.
f:\12000 essays\sciences (985)\Enviromental\Hemp The Truth About The Earths Greatest Plant.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In a perfect world there would be a product that could serve as a fuel source, a food source, a paper source, a textile source, and this product would be easy to produce in any of its forms. Believe it or not such a product does exist; it is the plant known as hemp. No tree or plant species on earth has the commercial, economic, and environmental potential of hemp. Over 30,000 known products can be manufactured from hemp.
Hemp was a common crop grown in the U.S. until 1937 when it was unjustly banned. A common misconception about hemp is that it was banned because it was a widely abused, harmful drug. Hemp was banned because it was a competitive threat to the wood industry. Corporations that profited from the demise of hemp spread rumors that marijuana was a major drug problem, which it was not at the time. They also propagated a campaign that it was a drug that induced uncontrollable violence, another complete falsehood.
Hemp is the plant scientifically known as cannabis sativa. It is referred to as hemp when it is grown for its fibers, stem, and seeds. Its leaves and flowers produce the drugs marijuana and hashish. However, sterile breeds of the plant are still illegal to grow in the U.S. Literally millions of wild hemp plants grow throughout the entire Midwest today. Wild hemp, like hemp used for industry purposes, is useless as an intoxicant. Yet U.S. drug law states that one acre of this can result in the owner being sentenced to death. The death penalty exists for growing one acre of perfectly harmless, non-intoxicating weeds!
Hemp can produce any product that paper can produce. The difference is that one acre of hemp can produce four times as much paper as one acre of trees (according to a study done by the U.S. Department of Agriculture). Also, a crop of trees takes twenty to fifty years to be ready for harvest, whereas hemp is ready to harvest in just a year. In addition, hemp produces twice as much fiber per acre as cotton. Twenty-five percent of all pesticides in the world are used on cotton, averaging four pounds of chemicals per acre of cotton in the U.S. every year. Since hemp is a natural repellent to weeds and insects, it needs almost no insecticides or herbicides. If it were substituted for cotton, it could greatly reduce pesticide usage. Again, hemp can produce anything cotton can and, what's more, it can produce it better. Levi Strauss tested a pair of hemp denim jeans, and the results showed hemp jeans to be 65% more durable than the average store-bought pair.
Hemp produces more biomass than any other plant that can be grown in the U.S. This biomass can be converted to fuel in the form of clean-burning alcohol, or non-sulfur man-made coal. It is estimated that if hemp were widely grown in the U.S., it could supply 100% of the nation's energy needs.
Hemp seeds are also a source of many products. The seeds contain high protein oil that can be used for human and animal consumption. Hemp oil is not intoxicating. Extracting hemp oil is cheaper than processing soy beans and it can be processed and flavored in any way that soy beans can. Hemp oil can also be used to make butter, cheese, and tofu. In addition to food products, hemp oil can be used to make paint, varnish, ink, and plastic substitutes.
One of the many high points of hemp is that it's easily grown. Unlike almost all hemp substitutes, hemp can be grown in all fifty states. During the Second World War, the government temporarily re-legalized hemp so farmers could grow it for the war effort. Hemp helped win World War II!
It is high time for this country to take a second look at this product. After reading these facts, I challenge anyone to come up with a reason to maintain the hemp prohibition. Two of our founding fathers, George Washington and Thomas Jefferson, were hemp advocates. They said hemp was a necessity for the success of our nation. Now we have an even greater cause than that: the success of the planet. We cannot continue to butcher our forests and pollute our soil and water with chemicals to meet the demands of our everyday lives, and we will never be able to convince enough people to change the way they live to do any good. Fortunately, we have the perfect solution right under our noses: hemp. However, this solution will not do us any good until people realize its potential, and this will only happen if the word is spread. I can only hope that enough people are educated before it's too late.
"Make the most of the hemp seed, sow it everywhere." -George Washington
f:\12000 essays\sciences (985)\Enviromental\HEMP.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PREAMBLE
As we enter a new millennium, we find ourselves reevaluating the paths we've chosen and the decisions we've made. Have we made the best with what we've got or are we stumbling in the dark? How many gaps riddle the blanket of our knowledge?
The problem lies in how we make sense of where we're heading. Do we choose the path of economics and progress or do we choose the path of environmentalism and sustainability? Is there a median available for us to take where the greens of economy and environment are balanced or are we doomed to blindly continue the path of short-term gain and comfort . . . living out a flawed paradigm?
Canada is a prime example of a country that is continually weighing its power and influence on the natural and manmade worlds. We've found ourselves sitting on the global fence between our megalomaniacal brother to the south and our staunch traditionalist motherland to the east. From this division of powers and alliances we find ourselves locked into a self-induced ignorance and stifling conservatism. It's ironic that we have the opportunity to solve most of Canada's critical environmental issues in one fell swoop . . . with one simple plant. It is ignorance and the maintenance of the status quo that have blinded and crippled our ability to realize this resource.
INTRODUCTION
A plant exists that is so strong that it can be grown without requiring chemicals in almost every part of the world. Many have touted this plant as a possible way in which to wean society from its dependence on fossil fuels for energy and the need to log forests for pulp, paper and wood. It is even said that this plant could adequately clothe and feed the world more efficiently and cheaply than we can do now!
Why is this miracle plant not used if all evidence points to its versatility? The answer is bogged down in a century of law, sociology, the corporate agenda and conspiracy theories. Since the early part of the century, hemp has been considered a drug, though it has no euphoric attributes. Hemp, the wonder plant and possible solution to the bulk of our problems, is illegal only because it is seen as guilty by its association with marijuana.
Hemp is a herbaceous plant called "cannabis sativa", which means `useful (sativa) hemp (cannabis)'. Fiber is the best known product, and the word `hemp' can also mean the rope or twine which is made from the plant, as well as just the stalk of the plant which produced it.
History has proven its acceptance of hemp: both the U.S. Constitution and the first draft of the Declaration of Independence were drafted on hemp paper; Ben Franklin started the first American newspaper with hemp hurds, while Thomas Jefferson said, "Hemp is of first necessity to the wealth and protection of the country". Canvas, a hemp product, was widely used for sails in the early shipping industry, as it was the only cloth which would not rot on contact with saline sea spray. Archaeological digs in China have determined that hemp was being used as far back as 4,000 B.C. as a civilization's answer for food and the best fiber for clothes and ropes.
Only because we relate it to a natural drug have we justified the banishment of a plant that's been in almost continual use for thousands of years.
HEMP AS AN AGRICULTURAL CASH COW
Hemp is an annual herbaceous plant that can be harvested within four months of planting, after growing to heights of 5 meters (20 feet). If rotated with other crops, hemp can be grown without pesticides or herbicides, naturally repels weed growth and, unlike most commercial grains and fibres, has very few insect enemies. Hemp requires little fertilizer and grows well almost everywhere, including most of Canada and even some areas of the Canadian Shield, like North Bay and Sudbury. Hemp puts down deep roots, which is good for stabilizing the soil against erosional forces, and when the leaves drop off the plant, minerals and nitrogen are returned to the environment. Hemp has been grown on the same soil for twenty years in a row without any noticeable depletion of the quality and stability of the soil.
Using less fertilizer and agricultural chemicals is good for two reasons. First, it costs less and requires less effort. Second, many agricultural chemicals are dangerous and contaminate the environment -- the less we have to use, the better.
HEMP AS A PAPER ALTERNATIVE
According to the US Department of Agriculture, one acre of hemp can produce four times more paper than one acre of trees. Trees must grow for twenty to fifty years after planting before they can be harvested for commercial use. This lag time between cuttings results in fewer jobs on an annual and total basis, whereas hemp is a continual crop that can provide close to year-round employment for farmers, workers and processors, not to mention peripheral employment for transportation employees, distributors and the manufacturing community.
Both the fiber (bast) and pulp (hurd) of the hemp plant can be used to make paper, a process originating in ancient China. The world's first paper is thought to have been made from hemp. Fiber paper is thin, tough, and a bit rough. Pulp paper is not as strong as fiber paper, but is easier to make, softer, thicker, and preferable for most everyday purposes. The paper we use most today is a `chemical pulp' paper made from trees.
Hemp pulp paper can be made without chemicals from the hemp hurd. Most hemp paper made today uses the entire hemp stalk, bast and hurd. High-strength fiber paper can be made from the hemp bast, also without chemicals. Hemp offers us an opportunity to make affordable and environmentally safe paper for all of our needs, since it does not need much chemical treatment. Today's paper is manufactured with an excess of chemicals and will turn yellow and fall apart as acids eat away at the pulp. This takes several decades, but because of it publishers, libraries and archives have to purchase specially processed acid-free paper or coating sprays to protect literature. This is a very expensive endeavour. Paper made naturally from hemp is acid-free and will last for centuries.
It is estimated that one acre of hemp would replace an entire four acres of forest while, at the same time, this acre would be producing textiles and rope.
Substituting hemp for trees, especially if planted on marginal lands that are no longer able to support food crops, would save forest and wildlife habitats and would reduce the tree-pulp pollution of lakes, rivers, and streams. Some estimates predict that the production of every ton of hemp pulp saves twelve mature trees from being used for the same purpose.
The prohibition of hemp has led to the unnecessary destruction of forests in Canada and the world over, not to mention the loss of revenue from an easily managed crop that can be grown relatively close to the urban centres where the products will be used.
HEMP AS A SOURCE OF FUEL
To stop and reverse the greenhouse effect, world energy production must return to using fresh biomass as the raw material for all fuel currently made from fossil biomass. The only way to stop the CO2 build-up in the atmosphere is to cease burning fossil fuels. As the most efficient biomass which can be grown in soil, hemp is a prime candidate as a source of alcohol fuel. The pulp (hurd) of the hemp plant can be burned as is or processed into charcoal, methanol, methane, or gasoline.
Plant "biomass" is simply dead organic material, and it's the fuel for the future. Cleaner than fossil fuels, it can provide gasoline, methane, and charcoal to meet all of our home and industrial energy needs. Hemp has more potential as a clean and renewable energy source than any crop on earth. Burning anything produces carbon dioxide, but year after year, the hemp crop
photosynthesis would convert that carbon dioxide back into oxygen. This biomass can be
converted to fuel in the form of clean-burning alcohol. Unlike fossil fuels, hemp does not contain
sulfur, a major cause of acid rain. We could save our oil reserves and reduce our trade deficit
-6-
without offshore drilling, strip mining, oil spills or nuclear radiation. By developing hemp, the
most productive energy crop for Canada's climate, we can end our dependence both on foreign
oil and on nuclear power.
Is hemp used for fuel today? One acre of hemp will produce one thousand gallons of methanol. Methanol makes a good automobile fuel and is often used in professional automobile races. It has the potential to replace gasoline as a regularly used automobile fuel.
It would not be in the best interest of Canada to continue in the direction we're heading. The cost of cleaning up waste from fossil-fuel production and use, combined with the large tax breaks going to these archaic forms of energy, leaves the taxpayer in jeopardy of bearing the cost. While Canadian politicians continue to support these companies, global pollution worsens, all in the name of profit. As taxpayers learn more about the corporate welfare being doled out to multinational energy companies, they will begin to demand that government eliminate these handouts and invest in alternative fuels and crops like hemp.
HEMP AS A FOOD SOURCE
"Behold, I have given you every plant yeilding seed which is upon the face of all the earth, and every tree with seed in its fruit; you shall have them for food". Genesis 1: 29
Hemp provides us with a source of nutritious high protein, and essestial fatty acids that can be used for human and animal consumption. 30% of the seed is oil by volume, which can be used for cooking, and can be ground into flour, or a type of peanut butter, with qualities as good as whale and jojoba oil. The seeds are as nutritious as soya, but more digestible, gives higher
-7-
yields and is easier to harvest. In an era of ozone depletion it's important to note that soy crops can be damaged if they get too much ultraviolet sunlight. The chemicals in the hemp plant helps it to resist untraviolet light.
Hemp protein can be processed and flavored in any way that soybean can. Hemp oil can be used to make nutritious tofu, butter, cheese, salad oils, as well as other foods. Hemp seeds are a complete source of vegetable protein, and contains eight essential amino acids. Two thirds of the protein is in a ready to digest form called 'globulin edestine'. These proteins are the source of 'immunoglobulin' which are part of our immune system.
Hemp seed is one third oil by weight, which is low in saturated fats and contains many oils which our bodies can't make itself, but needs them to survive. What these esssential fatty acids provide our immune system has been use to help those suffering from cancer, cardiovascular disease, glandular atrophy, gall stones, kidney degeneration, dry skin, immune deficiency, acne, menstrual problems as well as AIDS.
HEMP AS A SOURCE OF FIBRE
The hemp plant produces some of the strongest natural fiber known to man. Hemp fiber is ten times stronger than cotton and can be used to make all types of clothing. Hemp has been worn as clothing for thousands of years and used to make all types of textiles and fabrics: diapers, flags, bedsheets, towels, quilts, rugs, draperies, tents, linens, and of course canvas. Hemp is softer, warmer, and more water-absorbent than cotton. Natural organic hemp fiber holds its shape like polyester, but hemp "breathes" and is biodegradable. Hemp can be spun and woven to be as smooth as silk or as coarse as burlap, with designs as intricate as lace.
THE CONCLUSION....AS MUCH AS THERE IS ONE
Hemp is the most valuable renewable resource we have available in Canada, producing over 25,000 different products, yet it is illegal to grow. Most of these products are currently derived from labour- and cost-intensive non-renewable or unsustainable resources. A select group of research farms is currently permitted to grow the plant, but licenses are difficult to obtain and the plant can only be used for research.
Until we begin to find ways of shifting the paradigm, of convincing government and society that hemp is the best, if not the only, alternative to the flawed paths we're blindly stumbling upon, we will be doomed to drudge on in apathy, conservatism and ignorance....attributes no one wishes to have!
Canada should discard its past traditionalism and take the initiative in re-establishing a thriving hemp industry. With Canada as an example to the global community, an international hemp industry could flourish. We just need to realize that only hemp can save us now.
f:\12000 essays\sciences (985)\Enviromental\Hippopotamus Endangered Species.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Jason Wapiennik
Mr. Trippeer, Biology
January 6th, 1997
The Hippopotamus: Endangered Species Report
The ban on elephant ivory trading has slowed down the poaching of elephants, but now poachers are getting their ivory from another creature, the hippopotamus. For the poacher, the hippo is an easy target. Hippos stay together for long hours in muddy water pools; as many as eighty-one can be found in a single square mile. This concentration is so large that it is second only to that of the elephant. Poachers kill the animal, then pick out the teeth and sell them for as much as seventy dollars per kilo. This is a very cheap price; elephant ivory sells for as much as five hundred dollars per kilo. The reason the price per kilo is so low is that hippo ivory is very brittle compared to the much stronger elephant ivory.
Elephant ivory is no longer at the biggest risk for poaching; hippo ivory is. Eastern Zaire once had one of the largest hippo populations in the world, around 23,000 hippos. According to a count done in 1994, this number has now dropped to 11,000. The 1989 ban on elephant ivory is the main cause attributed to the exponential rise in the hippo ivory trade.
"European and African activists are petitioning advocacy groups, including last
week's annual Convention on International Trade in Endangered Species in Florida,
for a ban on hippo poaching. But they say they're a long way from putting an end to
the slaughter." (Howard & Koehl)
The hippopotamus is an enormous amphibious animal with smooth, hairless skin.
Hippos can be found in Liberia, the Ivory Coast, and a few can also be found in
Sierra Leone and Guinea. Hippos used to be found anywhere south of the Sahara
Desert where they could find enough water and plenty of room to graze. Now, due
to poachers and predation they are confined to protected areas, but they can still
sometimes be seen in many major rivers and swamps.
Hippos need water that is deep enough to cover them, but it also has to be very close
to a pasture. They must wallow in the water because their thin, hairless skin is
vulnerable to overheating and dehydration. Hippos were once thought to sweat
blood. Actually, hippos secrete a pinkish colored oil that helps them keep their skin
moist in the hot African climate.
Hippos are herbivores. They prefer the short grass of African plains to any other
possible food. They normally eat up to eighty-eight pounds of this grass nightly,
which they mow away a large patch at a time with their twenty-inch muscular lips.
Hippos spend most of their days in the water or wallowing in the mud, only coming
up on land to feed at night.
Hippos defecate in the water. Their dung provides essential basic elements for the
food chain. Tiny microorganisms feed on it and then larger animals feed on those
organisms. On land, hippos' large bodies make trails through the vegetation that
other animals may use for easy access to water holes. Because hippos' favorite food
is short grass, they keep these grasses well-trimmed which may help to prevent grass
fires. Hippos are an important part of the African ecosystem.
If the hippos become extinct, and the likelihood grows more and more each day, the repercussions for the fragile African ecosystem would be tremendous. Imagine a brush fire consuming acres of previously livable land under the hot African sun. These people have no way to put out fires like we do here in the U.S.; the fires in California were barely containable. In short, if the hippos die, everything dependent on the hippo and its way of life also suffers.
Bibliography
Brust, Beth W. Zoobooks: Hippos. San Diego: Wildlife Education, Ltd., 1989.
Estes, Richard. The Safari Companion. Simon & Schuster, 1991.
MacDonald, David (ed.). The Encyclopedia of Mammals. Vol. 2. London: George,
Allen & Unwin, 1984.
Redmond, Ian. "Africa's Four Legged Whale," Wildlife Conservation, Jan.-Feb. 1991, pp. 60-69.
f:\12000 essays\sciences (985)\Enviromental\Household Waist!.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Household Waste!
One morning my mom said, "Andy, get up and clean the bathroom!" It was always an essential and important chore for the family. I got up and gathered all the normal cleaning agents we used: Ajax, ammonia, and this liquid bleach that my mom said worked wonders. I cleaned the toilet using the Ajax, and the sink using the Ajax as well; there seemed to be no need for the other two. Then I saw it - the bathtub, AH! There was a ring around the bathtub that I knew would be difficult to clean off. I decided to add the ammonia. I scrubbed at the ring, but it was not coming off. I then looked around thinking what to do...
"The Bleach!" I shouted aloud. And then -- it hit me, my mom's hand.
"Never, Never, Never, use Bleach with ammonia. Infact don't mix any chemicals with one another."
This is an excellent example of common mistakes people make when dealing with household chemicals/cleaners. In this assignment I will examine different cleaners commonly used in my house.
I Ajax
I go to the cupboard and find a can of the powder, Ajax. The can used to have a piece of tape covering the top, but it has now been lost; a potential problem. The can has an expiration date on it, 9/98. This expiration date may no longer be reliable, because the piece of tape that covered the top has been lost for some time now.
II Windex
The cupboard in the upstairs bathroom is where we keep the Windex. The Windex is blue and clearly labeled, with no chance of any person mistaking it for something else. The top part is tightly screwed on, with the Windex filled to 3/4 of its original volume. I cannot find any expiration date, nor can I find any hint that there ever was one. I should contact the product vendor to see whether the Windex is immortal or what.
III Vinegar
I go to the kitchen cupboard and find the vinegar. Vinegar is what we use to mop our tile floor. The vinegar has an "Easy flip-off cap!" and is at about half of its original volume. This, too, has no evidence of an expiration date. I don't think I need to contact the vendor because it's only vinegar.
IV Formula 409
Next to the Ajax in our "Cleaner-Cupboard" we keep Formula 409, ideal for kitchen clean-up. It is clearly labeled, with no chance of mix-ups (unless someone puts something else in the bottle when it is empty). The cap is tightly in place without any visible breakage. This, like most of the others, has no expiration date. Maybe it is immortal too!
Conclusion
All of these household cleaners/chemicals are a potential source of danger, but they are also a potential source of quality cleaning; we just need to handle them with care so we will be okay. We must be careful not to mix things together, burn things, or put them to any other improper use. If we follow the directions we will be safe.
f:\12000 essays\sciences (985)\Enviromental\How Climatic Changes Effect Society.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Significant climatic changes have taken place throughout Earth's history. These changes have affected society in more than one way. However, there is nothing society can do about the long-term influences of climatic change. Society has tended to address the short-term effects of climatic changes that influence global temperatures within the life span of present generations. The following will show how climatic change affects society, health, and economics.
Society depends a great deal on natural resources. First of all, society depends largely on forests to supply trees, which in turn supply wood for construction. Other resources include oil and animals (livestock). In the case of wood, the western Canadian boreal forest is a large producer of lumber for the United States construction industry. However, climatic change has had large and lasting effects on this supply. Compared to eastern Canada, the southern boreal forest region of western Canada has a relatively dry climate, so drought effects are one of the major concerns being addressed by researchers in this region. Climate modellers have predicted a large increase in temperature for this region, which could lead to even drier conditions and enormous stresses on vegetation in the western Canadian boreal forest. This type of impact was observed following the 1988 drought, when there was a die-back of aspen over extensive areas of the aspen parkland in western Canada. Associated with this drought was a drying up of large lakes in the region. Another potential impact for the region is a major increase in forest fires, because fire frequency is closely linked to moisture levels, which are expected to decrease under climatic change. Thus, with increased climatic change, the forest's future capacity to supply lumber is decreasing, and the construction industry will face a drawback because of this. A drawback in the construction industry's output will in turn affect the economy and society: a decrease in output means a decrease in jobs, which hurts society.
In contrast to the example of Canada's forests is the information found on its agriculture. Because average temperatures are expected to increase more near the poles than near the equator, the shift in climatic zones will be more pronounced in the higher latitudes. In the mid-latitude regions (45-60 degrees latitude), the shift is expected to be about 200-300 kilometers for every degree Celsius of warming. Since today's latitudinal climate belts are each optimal for particular crops, such shifts could have a powerful impact on agricultural and livestock production. For example, in the Canadian prairies, the growing season might lengthen by 10 days for every 1 degree C increase in average annual temperature.
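To make those per-degree rates concrete, here is a minimal arithmetic sketch in Python; the 2.5 degree warming scenario is an assumed example for illustration only, not a figure taken from the sources above.

# Illustrative arithmetic only: scales the per-degree figures quoted above.
KM_PER_DEG_LOW, KM_PER_DEG_HIGH = 200, 300   # climate-belt shift per degree C (mid-latitudes)
DAYS_PER_DEG = 10                            # growing-season lengthening per degree C (Canadian prairies)

warming_c = 2.5                              # assumed warming scenario
shift_low = warming_c * KM_PER_DEG_LOW
shift_high = warming_c * KM_PER_DEG_HIGH
extra_days = warming_c * DAYS_PER_DEG
print(f"A {warming_c} C warming implies a {shift_low:.0f}-{shift_high:.0f} km poleward shift of climate belts")
print(f"and roughly {extra_days:.0f} extra growing-season days on the Canadian prairies.")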
Another example (taken from sources on the net) is the impact of climate change on water. Water is essential to the survival of mankind and, in fact, of almost all life. Thus, if water were affected by climatic changes, society, health, and economics would all be impacted as well. In areas where climate change causes reduced precipitation, freshwater storage reserves, primarily in the form of groundwater, will steadily shrink. Areas where increased precipitation was not matched by increased evaporation would experience floods and higher lake and river levels. An increase in extreme events such as droughts and floods would undermine the reliability of many critical sources. Diminished snow accumulation in winter would reduce the spring run-off that can be vital to replenishing lakes and rivers; a 10% decline in precipitation and a 1-2 degree C rise in temperature could reduce run-off by 40-70% in drier basins. Worsening droughts combined with the over-exploitation of water resources would cause salt to leach from the soil, raising the salinity of the unsaturated zone (the layer between the ground surface and the underlying water table). In coastal zones, a lowered water table would also draw salt water from the sea into the fresh groundwater. At the same time, higher levels of carbon dioxide in the atmosphere are expected to improve the efficiency of photosynthesis in plants, which could in turn cause more rapid evapo-transpiration. Together, these various effects would have extremely negative consequences for river watersheds, lake levels, aquifers, and other sources of freshwater. As this information shows, such consequences would in reality affect society, agriculture, and economics. Society would have lower levels of freshwater, and agriculture would also have less freshwater to survive on. Because of this, the economy would be affected, since more work would be needed to revitalize the sources of freshwater, or to find more.
It can be seen through these examples that society is affected by various forms of climatic change. Thus, if society is affected, so is the health of the people within it, and economics is affected as well. It is basically a continual cycle, a persistent relation between climatic changes and the effects they have on society, health, and economics.
f:\12000 essays\sciences (985)\Enviromental\How Saddam Husseins Greed and Totalitarian Quest for Power.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Persian Gulf Crisis, 1990-1991:
How Saddam Hussein's Greed and Totalitarian Quest for Power
Led to the Invasion of Kuwait, World Conflicts and the Degradation of Iraq
Joseph Stalin. Fidel Castro. Adolf Hitler. Saddam Hussein. These names are all
those of leaders who have used a totalitarian approach to leading a nation. Stalin and
Hitler ruled in the early to mid-nineteen hundreds; like Fidel Castro, Saddam Hussein rules now. Saddam Hussein belongs to the Baath Party of Iraq. This party adopts many
techniques similar to those used by Stalin and Hitler. Saddam Hussein conceived a plan
to invade Kuwait. It was, perhaps, one of the worst mistakes he could have made for his
own reputation and for his country. The invasion of Kuwait as well as the world's
response to it, the environmental disaster it caused, and the degradation of Iraq were
completely the fault of one man and his government: Saddam Hussein and his Baath
Government.
One of Hussein's weaknesses is negotiating. Negotiating in his terms is to fight it
out with as much carnage as possible until his side comes out "victoriously". Repeatedly,
Saddam and his government break international convention laws. During his war fought
with Iran, the Iraqi army used chemical weapons on the Iranian troops and even on their
own Iraqi population. This was seemingly overlooked by the rest of the world because
most nations didn't want to see the Ayatollah's Islamic revolution rise. Iraq often obtained
foreign arms support from other nations because of this. It wasn't until the invasion of
Kuwait that the rest of the world seemed to realize the danger that Iraq posed to its own
people and to the Arab states surrounding it. Through poor planning, Saddam Hussein
made three major mistakes that enabled an easy defeat of the Iraqis.
The first mistake was that he captured all of Kuwait at the same time, instead of
leaving it as a border dispute. This might have kept it from becoming an international
affair. The second error was that Hussein positioned his troops too close to the Saudi
Arabian border. Because of this, other nations feared that Saddam's aggression was
endless. The third mistake was that Hussein miscalculated the world's response. He
overestimated the Arab "brotherhood" and by doing so, didn't realize that the rest of the
world would try to stop him. He also overestimated his own country's military power, and
believed that he could annihilate military superpowers like the United States, Britain and
France.
Saddam Hussein's ultimate dream was to possess a nuclear bomb. Most of the
world believed that Iraq didn't have the resources and materials to manufacture one.
Despite a failed attempt at building two reactors in the late seventies, Saddam was
determined to hold nuclear capability. He tried again in 1989 to purchase three high-
temperature furnaces from a New Jersey company, claiming that they were to be used for
prosthetic limbs for Iran-Iraq war vets. The deal was called off after the company,
Consarc, was warned by the Pentagon.
Despite this, Iraq was still rich with weapons. Between 1975 and 1990, this Arab
nation had spent $65 billion on arms [Macleans, June 3, 1991]. In the five years before the Kuwait invasion, Iraq was one of the world's largest purchasers of arms. In those five
years, Saddam had bought ten percent of all weapons sold around the world. By 1990,
Hussein's Iraqi army had 5,500 tanks (mainly Russian), 8000 Armoured Personnel
Carriers (APCs), thousands of various missiles (ground-to-air), 70 MiG 23s, 25 MiG 29s
and 15 Su 24s [Outlaw State, page 89].
Saddam's quest for power by now was almost complete, except for nuclear
capabilities and a naval power. Most of this support of foreign arms came during the
Iran-Iraq war, against the Ayatollah's Islamic revolution. $500 million of the $65 billion
was spent on high-tech equipment purchased from the United States. It is ironic that
some of the missile sites that were set up by the United States would later become
bombing targets during the Gulf War, in 1991.
There were two primary reasons that Saddam Hussein wanted to invade Kuwait.
The first reason was so that Iraq would have a navy and eventually be classified as a
naval superpower, because Kuwait is situated on the Persian Gulf. His quest for power
would nearly be fulfilled by doing this. Hussein thought that Iraq would be unstoppable
with a navy. The other reason was that the oil fields could greatly improve the Iraqi
economy that had suffered during the Iraq-Iran war.
It is at this point that his greed comes into the picture. Since most industry had to be
stopped during this war, Saddam had a reason to develop a new military industry. The
citizens were glad to support this because of a strong sense of nationalism that had
developed after an Iranian "defeat." New missiles were developed including the Scud.
Despite the weapon industry flourishing, the economy became increasingly
worse. Many Iraqis had travelled to Kuwait, which was a country left virtually unscathed
after the Iran-Iraq war. They realized what the Kuwaiti "oil-money" could buy, for
Kuwait had one of the best incomes per capita in the world. Its major cities were similar
to those in North America (such as New York, Los Angeles and Toronto). A feeling of
jealousy arose from this. Kuwaitis were buying Iraqi land very cheaply because of the
crumbling economy. All foreign purchases of land would soon end.
By the end of 1988 Iraq had defaulted on loan payments to the United States,
Canada, Australia and Britain. They were being rejected time after time for credit.
Saddam required a large and quick influx of money. There was only one way that
Hussein thought that this could be accomplished - to invade Kuwait.
2:03 a.m. August 2, 1990 ... Saddam Hussein invaded Kuwait. A massive force of
120,000 troops, 1000 tanks, 900 Armoured Personnel Carriers and Mi-24 Hind attack
helicopters were used [Beyond the Storm, page 100]. It was all-out use of military power
that showed little mercy. There were many more forces than were needed to take this
small country. The reason for this, (besides Saddam's power-hungry characteristics), was
that the Iraqis were disillusioned after it took longer than expected to defeat the Iranians.
Hussein was basically doing this to ensure that the Kuwaitis could not resist. Five days
before the invasion, satellite pictures picked up the formations of Iraqi troops.
Foreign officials had been phoning Baghdad asking for an explanation to this
massive deployment of troops. Hussein insisted that it was merely routine seasonal
exercises and he had no intention of invading Kuwait.
Global conflicts had already begun because of this. The United States Treasury
Department ordered a freeze of all Iraqi and Kuwaiti assets in the United States (which totalled over $30 billion [Times Magazine, Aug. 29, 1990]). Russia not only did the same
but cancelled all future arms sales to Iraq. This greatly put a hole in their income but the
decision gained respect from other leaders world wide. The United States fell under
pressure trying to reach other foreign leaders before Saddam did. Fortunately, President
Bush won this race and received nearly unanimous support from foreign leaders. Soon
after, in the early months of 1991, the new league of nations formed by the United States
gave Saddam Hussein an ultimatum: either get out and have a chance to survive or stay in
and suffer the consequences of war. He chose to stay, thinking that his country would
come out victoriously against the rest of the world. Little did Saddam know that choosing
to stay would cause Iraq to crumble even more and lead to disastrous effects on the
environment.
Then came the hundred-hour ground war. This completely annihilated Iraq's strategic capabilities, its missile sites, arms factories and advancing forces. The allied
forces flew approximately 100,000 sorties, that averages out to one bombing run a
minute throughout the whole campaign [Beyond the Storm, page 91]. This month long air
campaign broke up the fighting capability of the Iraqi forces and their morale. When the
air attacks did not cause a withdrawal from Kuwait, the ground attack began. By surrounding
the Iraqis in the desert, many surrendered. The ones occupying Kuwait City tried to flee
but were gunned down by allies as they tried to leave the city. It was defeat for the Iraqis.
As some of the Iraqi troops left Kuwait, they torched 600 of Kuwait's 950 oil
wells [Outlaw State, page 139]. Black smoke dimmed the sun all the way to Saudi Arabia
and Iran. Black rain fell in the Middle East for months, even after all the well fires had
been put out. Millions of gallons of oil had been spilled into the Persian Gulf. Wildlife
was killed off. Fish died, birds died, plants died. The oil present in the Gulf was over
250% more than that in Alaska, years ago [Outlaw State, page 72]. The coastlines were
destroyed, covered in thick black oil. The oil was so concentrated that in some areas of
the gulf the oil was over a meter thick. The coastlines were littered with mines intended
to defend against an attack by the United States Marines that never came. Bodies littered
the streets of Iraq and Kuwait. There was a great rebuilding process ahead for the
Kuwaiti and Iraqi economies.
By invading Kuwait, Saddam had broken promises to three distinct peoples: to his own people, to his Arab "brothers" and to the rest of the world. He had promised his
citizens of Iraq a better life after the long war with Iran. He had also promised economic
stability. Instead Saddam gave his people unemployment, a war that destroyed their
country, crushed nationalism, and a broken economy. To his Arab brothers he promised
that Iraq would lead them to greatness and develop a military power that would equal
Israel. His military visions led to Arab attacking Arab on the battlefield. To the world he
broke international law after international law. He said repeatedly that he would not
invade Kuwait. Many world leaders believed him and thought of him as a reliable trading
partner until this war.
This proves to many that the Hitlers and Stalins of the world are not gone from
the global scene. Saddam Hussein is a modern-day figure modeled on these two. All the
negative outcomes of the Persian Gulf crisis were either directly or indirectly his fault.
Unfortunately, Saddam Hussein is still the leader of the now-crumbled country of Iraq.
No doubt he will be looking for another quick fix to the economic problems Iraq currently faces. Hopefully, it will not be the same method he used in the invasion of Kuwait.
f:\12000 essays\sciences (985)\Enviromental\Humans Extinct Can it be .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Humans soon to be Extinct!!
Say it ain't so!!
by
Ryan Shoquist
English 121
Dr. Gilliard
November 23, 1996
Table of Contents
Abstract
Body
Bibliography
Appendix
    Structured List
    Figures
Abstract
Ever since Dewey McLean (1978) proposed a dinosaur extinction theory stating that a climatic change killed the dinosaurs, it has become the single most accepted theory for the dinosaur extinctions within the scientific community. It is called the dinosaur-greenhouse extinction theory. It says that a climate change via the greenhouse effect killed off the dinosaurs. My paper takes this proposed theory and relates it to the world today. Some of the things that happened back then are also happening now, and if the dinosaur-greenhouse extinction theory is indeed true, then we are also in danger of dying from the greenhouse vertebrate killing mechanism, abrupt atmospheric changes, and the other effects caused by the increased greenhouse effect; people should know about the consequences of what we are doing to the earth. My paper examines the similarities between the two time periods and the possible results that we may be facing in the very near future. I am hoping that exposure to the danger we may soon be facing will spark action and concern in whoever reads my paper. It is a problem that we have all tended to shrug off and not worry about, but if we don't start worrying about it soon, there will not be anyone around to worry about it. The time for action is now. We may still be able to change the future.
Humans Soon to Become Extinct? Can it be?
Roughly sixty-five million years ago a tremendous extinction of global proportions hit the planet Earth. This global extinction was so severe that it has defined the boundary between two periods of geologic history, the Cretaceous and the Tertiary periods. All but a few animals on land and in the water became extinct. (McLean, 1978, p.1) The best known of the animals lost in this mass extinction are the huge and mighty dinosaurs. What killed them nobody really knows and probably never will, but scientists haven't hesitated to theorize about it. There have been theories ranging from human involvement to disease to even aliens. However, of all the theories of the so-called K-T extinctions, the single most accepted is the Volcano-Greenhouse Theory. This theory states that a chain of volcanoes in India, called "the Deccan Traps," released vast quantities of the greenhouse gas carbon dioxide into Earth's atmosphere, trapping heat from the sun (McLean, 1988, p.2) and turning Earth's surface into "the hot, sterilizing, hell of a major greenhouse." (McLean, 1981, p.1) If the dinosaurs did in fact die as the Volcano-Greenhouse theory describes, then we are also in danger of becoming extinct from the same vertebrate killing mechanism, abrupt atmospheric changes, and other results of the greenhouse effect that they died from.
The earth is what is referred to as a "greenhouse planet." This means that the earth is warmed by certain gases (fig a1) without which it would be as cold and barren as the moon, and unsuitable for even the most basic life to exist. These greenhouse gases, mostly carbon dioxide and water vapor, trap heat from the sun in the earth's atmosphere, causing the earth to be thirty degrees warmer than it would be without them. (fig a3) (McLean, 1978, p.1) It is this extra warmth that allows earth to harbor life.
Carbon dioxide was and still is released into the atmosphere continuously by natural sources such as volcanoes, hot springs, fumaroles, and geysers. The natural processes on the surface of the earth can absorb this normal output. (fig a9) Over long periods of time, the process came into balance with the earth. (McLean, 1985, p.1) Then a time of volcanic activity arrived as the Deccan Traps of the late Cretaceous Period erupted, and the pieces had almost all fallen into place for a change.
Volcanic dust and CO2 were strewn into the upper atmosphere for a period of around two hundred plus years. (McLean, 1985, p.1) This caused a time of cooling on the earth, due to the dust blocking out the sunlight. The dinosaurs adapted to the climatic cooling from the volcanic dust very well. The large body size of the dinosaur was beneficial on the cooling earth, because it easily kept in their body heat, allowing them to survive comfortably without hampering their ability to find food. (McLean, 1995, p.1) It seemed as if they were going to stay for a while.
Then the ash cleared, and that was when the whole process was thrown out of balance. The carbon dioxide had been produced faster than the natural systems could absorb it. Also, instead of coming down with the ash to the surface, the carbon dioxide stayed in the upper atmosphere, making it thicker and thicker. Now in the grip of the greenhouse effect, the earth began to heat up like a hot oven, triggering ecological instability the world over. (McLean, 1985, p.1)
The dinosaurs' large size made it impossible for them even to attempt to recover from this sudden increase in heat. (McLean, 1995) Whereas the thermal inertia of their bodies would have been a great benefit in the cooler climate, their small surface-to-volume ratios were a huge disadvantage in the warming and caused the dinosaurs' bodies to essentially overheat. (p.1) A smaller size, like that of the mammal, would have helped them to survive better. (p.2)
The abruptness of all these circumstances wreaked havoc in the internal systems of the dinosaurs, causing the Greenhouse Vertebrate Physiological Killing Mechanism (fig a16) to begin operating. (McLean, 1995) The Greenhouse Vertebrate Killing Mechanism states that climate triggers extinctions through its effects on a species' females. Here's what happens. In response to the growing environmental heat, a female's body diverts part of its blood supply to the skin surface to help get rid of excess bodily heat. This causes a reduction in the blood flow to the uterus of a pregnant female. (p.2) Since the uterus is where the embryo gets all of its life-providing oxygen, food, water, and nutrients, the reduction in blood makes these necessities less available and causes the embryo to die or to develop dwarfing abnormalities or mutations. (p.1) Large animals that couldn't shed their excess heat, such as the dinosaurs, were the animals most affected. (p.1)
The same thing that happened to the dinosaurs during the K-T extinctions is happening right now, right in front of our faces, and most of us don't even know it. Like the dinosaurs, we have also just had a cooling-off period. (Broeker, 1996, p.3) Volcanic eruptions are thought to be responsible for the global cooling that has been observed. When large masses of gas reach the stratosphere, the uppermost layer of the atmosphere, they produce a worldwide cooling effect. (Volcanoes and Climate, 96) The amount and extent of this cooling depends on the eruption's size and its latitude. (NASA, 1996) If it occurs in a place of great winds and air currents, it will spread differently than an eruption outside an air current would, and will have a different amount of cooling or heating effect on the world. (p.2)
The full extent of the current global warming probably hasn't come close to reaching us yet. That is because the effects of the Mt. Pinatubo eruption may just now be finally wearing off. (NASA, 1996) Due to the overlapping cooling effects of the Mt. St. Helens, El Chichón, and Cerro Hudson eruptions, there was a continuing cool-off period that is only just ending. (p.1) One top meteorologist said (Broeker, 1996), "If man made dust is unimportant as a cause of climate change, then a strong case could be made that the current cooling trend will give way to a pronounced warming induced by carbon dioxide." (p.1)
One thing that has already begun to happen to us again is the Greenhouse Vertebrate Killing Mechanism. Via this mechanism, summer heat kills mammalian embryos on a vast global scale. (McLean, 1995, p.2) That is why there are more miscarriages in the summer months than at any other time of the year. How many miscarriages it has actually caused cannot be accurately measured, due to the lack of a way to determine whether they actually resulted from the mechanism.
Now, in another coincidence, just as with the dinosaurs, the greenhouse gases are once again increasing. (Fig a7, a8) This time, though, instead of volcanoes causing the increase, humans are indirectly causing it. (unep/umo/94) Previously, the global climate was what changed the world, but now we humans are changing the world by changing the climate. (p.2)
The principal change to date is in the makeup of the atmosphere. (p.2) The greenhouse gases carbon dioxide, methane, and nitrous oxide have always formed a blanket around the earth. The problem is that we are making the blanket much bigger by spitting out all these different gases into the atmosphere and throwing the proportions off into a state of confusion. (p.2)
According to the British Medical Journal (1994), there are four main human causes of increased greenhouse gases. These are listed as follows:
Combustion of fossil fuels: 57%
Agriculture: 19%
Deforestation: 17%
Industrial activities and waste: 8%
The greenhouse effect today is caused by multiple gases in the upper atmosphere trapping long-wave radiation in the atmosphere and therefore raising its temperature. (Fig a3) (Iucc, 1993, p.1)
Carbon dioxide is one of the primary greenhouse gases, and its concentration has risen greatly since the industrial revolution began. In 1800 it was roughly two hundred and eighty parts per million, as it had been for over a million years. Currently it is at three hundred and forty parts per million and still rising. If it keeps rising at its current rate, by the year 2100 the CO2 level could be as much as two thousand two hundred and forty parts per million or as little as four hundred and twenty parts per million. (British Medical Journal, 1994) As figs. a4 and a6 show, there is a direct relationship between the amount of CO2 in the air and the temperature of the earth.
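To see how a single current level can lead to such a wide range of 2100 estimates, here is a minimal compound-growth sketch in Python; the two annual growth rates are assumptions chosen only to roughly reproduce the low and high figures quoted above, not values taken from the cited sources.

# Sketch only: project CO2 by compound annual growth from the level quoted above.
start_year, start_ppm = 1996, 340
for rate in (0.002, 0.018):            # assumed growth rates: about 0.2%/yr and 1.8%/yr
    ppm = start_ppm * (1 + rate) ** (2100 - start_year)
    print(f"{rate * 100:.1f}% per year -> roughly {ppm:.0f} ppm by 2100")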
"Carbon dioxide levels have climbed sharply in recent years. By some estimates, global carbon dioxide emissions, mainly from burning of coal and oil, will have increased sixty percent within the next two decades." (Business week, 1996) The current rate is so fast that it is unprecedented in the entire geologic world history.
Another danger to us humans is that the increased accumulation of carbon dioxide in the air carries over into marine life. (fig a11) Because of the added heat in the air, the ocean's convection currents will change from carrying warm water to the poles and bringing down cold water to a more stagnant circulation. This will then inhibit the ocean's ability to absorb or release carbon dioxide. (Wunch, 1988)
Accumulation of carbon dioxide in marine environments is known to have grim effects on many marine animals (McLean, 1996), because the elevated carbon dioxide levels disrupt the pH balance of the internal fluids of marine life, causing a medical condition called "narcotizing acidosis." Basically, their bodies' pH becomes too acidic and kills them by taking away the hemoglobin's ability to carry oxygen, essentially drowning them. (p.3) High carbon dioxide levels also may cause metabolic arrest and reduction. (p.3) This means that they wouldn't be able to change food into energy or create the essential hormones or chemicals their bodies need. Since the entire food chain of the world depends upon marine life, this would be a catastrophic event for the whole world. Bleaching of coral in Tahiti and reduced ocean circulation are a few of the symptoms we have already begun experiencing. (Earthaction, 1996) We aren't just affecting ourselves anymore. Because of us, a change in the world is evident, but when it will come is anyone's guess.
Although nobody really knows exactly how things will change, scientists can use data from what happened during the K-T extinctions of dinosaur days to predict what will happen. One of the many changes predicted by scientists that has already begun to happen (fig a2) is an atmospheric temperature rise of between 1.5 degrees C and 4.5 degrees C. That is a very small increase, but as fig a12 shows, there are only a few degrees' difference between today's temperature and the temperature of the last ice age. An increase of that magnitude would cause the sea to rise (fig a13, a14) between 1.3 and 2.2 feet from melting snow and ice. (British Medical Journal, 1994)
Although the sea level rise may seem small, the effects may be very drastic. (fig a15) Large areas of agricultural land may become flooded, many islands will disappear, and death rates from heart disease and stroke will rise. (British Medical Journal, 1994) Malaria, Rocky Mountain spotted fever (Fig a19) and other temperature-dependent diseases carried by insects will be able to become virulent at higher altitudes, higher latitudes, and in more places, because the insects will be able to survive over a greater area. (Fig a20) (Iucc, 1993) According to one researcher, five of the numerous mosquito-borne viruses now common in hot countries will become threatening in the U.S. if the world temperature rises one half a degree. (p.1) Also, the earth's entire biosphere (the thin film of life covering the earth) would be affected.
The biosphere depends on the rate of something called the Solar-Earth-Space Energy Flow (SES). (fig a18) (McLean, 1988) This is the process by which the earth sheds its heat into outer space. Greenhouse gases inhibit the SES from occurring correctly. (p.2) The rate of release of carbon dioxide within the earth by "mantle degassing" increases or decreases the earth's ability to carry out the SES process. (Earthaction, 1996) The more carbon dioxide there is, the harder it is for SES to occur, but a lack of it will make SES occur too fast, making the earth cool too quickly. (McLean, 1995, p.2) Right now it is happening too slowly, causing the earth to heat up. Over long periods of time systems adjust, but how much time do we really have?
No matter what, all of this global warming isn't good for us humans. Most people don't recognize the threat, but a few do. Dewey McLean gave a speech to the Senate where he stated,
"A major carbon cycle perturbation is the most dangerous global scale phenomenon that life on earth can experience. Today, our civilization is facing a possible modern greenhouse. Via the greenhouse-physiological killing mechanism, the direct effects of the greenhouse warming upon the female mammals and embryos can go in one direction only. That is toward increased embryo death, reduction of mammalian populations, and collapse of mammalian populations in the vulnerable middle latitudes." (McLean,1988,p.4)
Earth's surficial systems are never truly stable, and must continually adjust to the fluctuations in the atmosphere. Although modest fluctuations may be absorbed, major ones that go above a critical threshold force systems to find a new configuration. Those systems that do not find a new configuration cease to exist and die off. (McLean, 1988) I really hope that humanity isn't a part of one of those systems that cannot find a new configuration.
Since we may be going extinct, we must do something now. Not next year or ten years from now. Now! The world's governments need to propose and enforce strict regulations. The scientific community needs to find methods to slow or reverse the trend. Automakers have a responsibility to create alternative fuels and methods of travel, such as electric cars, and to cut emissions from their vehicles. It is the end of the world as we know it, and we all must do our own part to save it. The time for action has come. It's now or never. If something is not done, there will be no future and no more life, only the death of the human race and the other beings on the earth.
Bibliography
Bates, A. (June 1990) Climate In Crisis. The Book Publishing Company, Summertown, Tennessee.
The Berlin Climate Summit Climate Change (Earthaction, 1996), pages 1-4; accessed on 11/7/96 at: http://.oneworld.org/earthaction/earthaction_climate.htm
Carpenter, B. (Nov 6, 1995) Descendants, Beware. U.S. News and World Report, Vol. 119 (Issue 18) (CD-ROM), Infotrack access # 02600026
Climate Change Scenarios: The Possible Health Effect (1990), pp. 1-2; accessed on 11/7/96 at http://www.unep.ch/iucc/fs116.html
Freundlich, N. (Aug 19, 1996) The White House Vs. the Greenhouse. Business Week (iss. 3489), n.p. (CD-ROM), Infotrack access # 029934198
Harding, G. (1995) Broecker's Warning, pp. 1-9; accessed on 11/07/96 at http://members.aol.com/trajcom/private/broecker.htm
McLean, D. (August 13, 1996) Dinosaur-Greenhouse Extinction Theory, pp. 1-5; accessed on 11/6/96 at http://www.vt.edu:10021/artsci/geo...saur_Volcano_Extinction/index.html
McLean, D. (1995) Greenhouse Vertebrate Killing Mechanism, pp. 1-2; accessed on 11/7/96 at: http://www.vt.edu:edu:10021/artsci/geo...r_Volcano_Extinction/killmech.html
McLean, D. (1995) Holistic Earth Causal Loop Diagram, pp. 1-2; accessed on 11/07/96 at http://www.vt.edu:10021/artsci/geo...r_volcano_Extinction/Earthcau.html
Smith, R. (Nov 26, 1994) Doctors and Climatic Change: action is needed because of the high probability of serious harm to health. British Medical Journal, Vol. 309 (CD-ROM), Infotrack File # 16049245
Understanding Climate Change: A Beginner's Guide to the UN Framework Convention, n.d., pp. 1-9; accessed on 11/7/96 at http://www.unep.ch/iucc/begincon.html
Volcanoes and Global Climate Change, pp. 1-4, n.d.; accessed on 11/6/96 at http://spso2.gsfc.nasa.gov/NASA_Facts/volcanoes/volcano.html
Volcanoes: The Inside Story, pp. 1-2; accessed on 11/6/96 at http://cotf.edu/ETE/scen/volcanoes/volcclim.html
Appendix
Structured List
I. Dinosaurs
extinction theory
causes
proof
chronology
problems
II. Greenhouse effect
Similarities to K-T extinctions
proof
medical danger
results
causes
III. Prevention
Problems
Proof
f:\12000 essays\sciences (985)\Enviromental\Hurricans.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hurricanes
==========
Hurricanes get their start over the warm tropical waters of the North
Atlantic Ocean near the equator. Most hurricanes appear in late summer or
early fall, when sea temperatures are at their highest. The warm water
heats the air above it, and updrafts of warm, moist air begin to rise.
Day after day the fluffy cumuli form atop the updrafts. But the cloud
tops rarely rise higher than about 6,000 feet.
At that height in the tropics, there is usually a layer of warm, dry air
that acts like an invisible ceiling or lid.
Once in a while, something happens in the upper air that destroys
this lid. Scientists do not know how this happens. But when it does, it's
the first step in the birth of a hurricane.
With the lid off, the warm, moist air rises higher and higher. Heat
energy is released as the water vapor in the air condenses, and this
energy drives the updrafts to heights of 50,000 to 60,000 feet. The
cumuli become towering thunderheads.
From outside the storm area, air moves in over the sea surface to
replace the air soaring upwards in the thunderheads. The air begins
swirling around the storm center, for the same reason that the air
swirls around a tornado center.
As this air swirls in over the sea surface, it soaks up more and
more water vapor. At the storm center, this new supply of water vapor
gets pulled into the thunderhead updrafts, releasing still more energy
as the water vapor condenses. This makes the updrafts rise faster,
pulling in even larger amounts of air and water vapor from the storm's
edges. And as the updrafts speed up, air swirls faster and faster around
the storm center. The storm clouds, moving with the swirling air, form a
coil.
In a few days the hurricane will have grown greatly in size and
power. The swirling winds of the hurricane form the shape of a
doughnut. At the center of this giant "doughnut" is a cloudless hole,
usually about 10 miles in radius. Through it, the blue waters of the
ocean can be seen. The wind speed near the center of the
hurricane ranges from 75 to 150 miles per hour.
The winds of a forming hurricane tend to pull away from the center
as the wind speed increases. When the winds move fast enough, the "hole"
develops.
This hole is the mark of a full-fledged hurricane. The hole in the
center of the hurricane is called the "eye" of the hurricane. Within the
eye, all is calm and peaceful. But in the cloud wall surrounding the eye,
things are very different.
Although hurricane winds do not blow as fast as tornado winds, a
hurricane is far more destructive. That's because tornado winds cover
only a small area, usually less than a mile across. A hurricane's winds
may cover an area 60 miles wide out from the center of the eye. Another
reason is that tornadoes rarely last as long as an hour, or travel more than
100 miles. However, a hurricane may rage for a week or more (for example,
Hurricane Dorothy). In that time, it may travel thousands of miles
over sea and land.
At sea, hurricane winds whip up giant waves up to 20 feet high. Such
waves can tear freighters and other oceangoing ships in half. Over land,
hurricane winds can uproot trees, blow down telephone lines and power
lines, and tear chimneys off rooftops. The air is filled with deadly
flying fragments of brick, wood, and glass.
f:\12000 essays\sciences (985)\Enviromental\Hydrogen the fuel of the future.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
HYDROGEN: THE FUEL OF THE
FUTURE
By: Json
Why are we as Americans so afraid to change, even if it is a change for the better? The world has been using oil, coal and other petroleum products to power just about everything that moves for the last 150 years. Yet most cars in the United States only get 10-20 miles a gallon, and even the "good" ones get only a paltry 20-50 miles a gallon. So why do we put up with the inefficiency when there are far better alternatives out there? One such alternative is hydrogen, which was discovered hundreds of years ago. Hydrogen has long been known for its explosive properties (with air) and its abundance in the universe (in other forms, i.e. water on earth; in space it exists as a gas). Hydrogen can do just about everything conventional fuels can do, but better.
Hydrogen can be "packaged" in several ways, as a fuel gas in a H2/02 powered engine or
the newly devised solid state pellet of hydrogen isotopes that contains about the equivalent
of 5000 cubic feet of hydrogen and is broken down and releases gas into the second
chamber where it goes to the engine for use. There are many ways to get pure hydrogen
out of many compounds using methods such as electrolysis and chemical reactions. One of
the easiest ways is using a chemical reaction. Simple chemicals (aluminum,sodium
hydroxide, and water) can be reacted in the home to produce heavy hydrogen to power
your furnace or your hot water heater . No electrical power at all is required. The reaction
also gives off a tremendous amount of heat. Even the waste heat could be captured for
heating the house. The resulting sodium aluminate is harmless and could be collected at
recoiling centers for complete acid/base neutralization. This way is a simpler way than
electrolysis produce hydrogen for heating the home, because in a automobile it would be
harder to do.
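For reference, the aluminum reaction being described is usually written as 2Al + 2NaOH + 2H2O -> 2NaAlO2 + 3H2 (plus heat). Here is a minimal stoichiometry sketch in Python assuming that overall equation; the 10-gram quantity is only an illustrative assumption, not a figure from the essay.

# Sketch: ideal hydrogen yield from aluminum, assuming 2Al + 2NaOH + 2H2O -> 2NaAlO2 + 3H2.
M_AL, M_H2 = 26.98, 2.016      # molar masses, g/mol
MOLAR_VOLUME = 22.4            # liters per mole of gas at STP (approximate)

grams_al = 10.0                # assumed example amount of aluminum
mol_al = grams_al / M_AL
mol_h2 = mol_al * 3 / 2        # 3 mol of H2 produced per 2 mol of Al
print(f"{grams_al:.0f} g of Al yields about {mol_h2 * M_H2:.2f} g of H2,")
print(f"or roughly {mol_h2 * MOLAR_VOLUME:.1f} L of gas at STP (ideal case).")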
Electrolysis is another way to produce hydrogen, this time electrically. It is a way that I am more familiar with, because I do it quite a bit in my room and have done several experiments with it. Electrolysis will produce a 2:1 ratio of hydrogen to oxygen from water, and higher voltages will give you faster collection. With a 12-volt battery and a small amount of baking soda as the catalyst, it took around half an hour to fill a quarter of a Mountain Dew bottle. I used baking soda because it was cheap and I knew it worked. Another time I used a 75-volt / 2-amp power supply with a catalyst of 2 drops of sulfuric acid to a pint of water, and the result was very different from the last time: I filled the whole Mountain Dew bottle in less than 6 minutes. All of that gas came from a little less than a drop of water (when I lit it off there was only a little speck of water on the inside of the bottle). I can only gasp thinking that that was only 75 volts, and voltages can get into the billions of volts. Although electrolysis is not the most efficient way to produce hydrogen, it certainly deserves recognition for working, and I am sure sometime soon someone will discover a way to produce the same amount of H2 and O2 with less power and time, either with a new catalyst or a more efficient power supply.
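As a rough sanity check on those collection times, here is a minimal Faraday's-law sketch in Python. It assumes an ideal cell with 100% current efficiency and uses the 2 A supply and 6-minute run described above, so a real kitchen-table setup would yield less.

# Sketch: ideal electrolysis yield from Faraday's law (assumes 100% current efficiency).
F = 96485.0                        # Faraday constant, coulombs per mole of electrons
MOLAR_VOLUME = 24.0                # liters per mole of gas near room temperature (approximate)

current_a, minutes = 2.0, 6.0      # the 2 A supply and 6-minute run mentioned above
charge = current_a * minutes * 60  # total charge in coulombs
mol_h2 = charge / (2 * F)          # 2 electrons are needed per H2 molecule
mol_o2 = mol_h2 / 2                # the 2:1 hydrogen-to-oxygen ratio mentioned above
print(f"H2: about {mol_h2 * MOLAR_VOLUME * 1000:.0f} mL, O2: about {mol_o2 * MOLAR_VOLUME * 1000:.0f} mL")

If a measured volume comes out much larger than this ideal figure, the cell was probably drawing more current than the supply's nominal rating.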
One reason that hydrogen power has not taken off is that there are thousands of jobs in the petroleum and coal fields. Really, who would want to own a car that costs about 20-30 cents per mile in gas expenses when you could basically pull up to the water hose every month and fill your tank for about 20 cents every 2000 or so miles? Demand for petroleum products would nosedive, thousands of jobs would be lost, and no one except the water company, the car alternator/generator company and the battery company would profit from it. People would also be so angry about losing their jobs over such a change that they would boycott the automotive companies making hydro-cars and cause havoc for the people trying to "upgrade" us to a better system. I mean, everything in a car has changed, but the engine stays essentially the same. It is commonly said that large oil companies have been paying off the automakers to keep all cars under the 40-mile-per-gallon range. There are a few exceptions, and all they really changed was the size of the car. Every engine has the capability to get a hundred miles per gallon and up with modifications to the carburetor and other internal parts of the engine. Only half of the gas that goes into the chamber actually gets burned and the rest goes right out your tailpipe, and they call these engines efficient?
You would figure that when NASA uses it in the space shuttle as fuel to lift a 122-foot-long craft plus a gigantic fuel tank (the solid boosters help also), it has to be working right. And no one ever whines about pollution either, unless they are totally naive, because the only pollution from the shuttle (besides the boosters) is water vapor. It takes a lot of energy to get from water to H2 and O2, and it takes energy to start the trip back to water as well; in other words, a spark (which is a very hot burning speck of flint, but it doesn't hurt us because it doesn't last long and is small) or a flame takes it on an explosive ride back to water vapor. It is strange that way, I guess, because if you burn a cup of gasoline you can't just capture the soot (carbon) and residues and other "by-products," apply electricity to them, and instantly have what you started with. So basically hydrogen is a forever renewable resource that is non-polluting.
Politically, oil is the source of all evil. We went to war for it and said it was for another reason, yeah right. If we were ever to switch suddenly from petroleum to an alternative energy like hydrogen, we could expect an immediate political or military threat, because that would be just like making the oil corporations go bankrupt, instantly. It would be especially hard on a country that has oil as its main export, like Kuwait or Iraq, and it would be the stock market crash all over again for us as well. It will probably take a very dramatic, threatening event to make us switch from fossil fuels, like running out of them.
The world will eventually realize the potential of hydrogen and put it out there, but it may come very late, like when gas costs $3.95 a gallon. But it will happen; it just has to. People will have no choice but to change or, well, walk. It is time that we upgrade: we do not drive "horseless carriages" anymore, so we shouldn't use their fuel. We call ourselves "high tech" when we are using hundred-year-old technology in our "high tech" things. Airplanes have gone from the "June Bug" to the F-22 fly-by-wire fighter in around a hundred years, so why can't cars and heaters and power plants do the same?
As a global society we need to upgrade.
f:\12000 essays\sciences (985)\Enviromental\Hydroponics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Introduction
NAME HERE and I became fascinated by hydroponics and the idea that one doesn't have to get one's hands dirty to be a great gardener, and if you're like us that's a good thing.
The idea of hydroponics has been around since the pyramids were built, but in all those years it never seemed to catch on. It took about forty-six hundred years before the first scientist took a look at hydroponics and adapted it to grow crops; this was a professor at the University of California, and the result was a 25-foot tomato plant that had to be harvested from a ladder. Thus hydroponics was reborn and has been advancing ever since. Yet up until 5 years ago the home grower, and the public generally, didn't know about hydroponics; it was only being used by commercial growers. But now it has caught on, and that resulted in this experiment.
When thinking about hydroponics one must think about its applications. Not only does hydroponics produce bigger, better, and healthier plants than traditional dirt, but it can also be greatly beneficial in places like submarines, space stations, off-shore oil rigs, or anywhere else where dirt is hard to come by.
During this experiment we'll be looking for which plant life will do best with hydroponics, by measuring which plant has grown the highest or bushiest. We will also be looking for green and healthy-looking leaves on the plants. We have no idea what the results might be, so this should be an exciting experiment.
Problem
Which form of plant life will thrive the most in a hydroponics environment? Will it be tomatoes (our fruit), peas (our vegetable), Tinkerbell flowers, or beans (a legume)?
Materials
The materials needed in the construction of this unit are...
1- 4' of 6" PVC pipe
2- Two PVC 6" caps
3- Four 2.5" irrigation pots
4- 1' of 4" PVC pipe
5- One 4" PVC cap
6- Four rockwool cubes
7- Three pounds of clay pebbles
8- Three brass T's
9- Seven feet of plastic tubing
10- Tropical Green A 15-0-0
11- Tropical Green B 4-10-27
12- 3 feet of 20mm PVC pipe
13- 3 brass nozzles
First we selected a suitable PVC pipe and cut it into a 4-foot-long piece. Next we cut four holes into the pipe with a jigsaw. After that we drilled two holes for the brass tees to fit into. Next we glued on the two caps. After that we drilled two more holes, one for the bucket to rest on and the other for the plastic hose to go into. Then we glued the 20mm pipe into one of the holes. After that we cut our 4-inch PVC pipe and glued the cap to it. Then we drilled two holes into the pipe and inserted two brass nozzles into the holes we had just drilled. Now take the plastic hose and cut it into 4 pieces. One piece is for draining, one is for the sub-reservoir, and two are to separate the sub-reservoir and feed the rockwool cubes. If you turn to the next page you will find out what we ended up with.
Limitations and
Shortcomings
The project in all was a success, but there were some things that we would have done differently.
For one thing, we started the project a little late. That probably didn't affect the results at all, but the conclusions may have been more interesting if the plants were fully grown.
One thing we would have liked to have was more places for plants to go in. That way we could have had more of the same variety of plant to work with and experiment with. But unfortunately we did not have the materials to build such a structure.
Implications
Possible applications of hydroponics may be in space stations, or in submarines where there are no gardens, or where building a garden could get a little messy.
Also, who knows what the future holds? I mean, if we start colonizing the moon or other planets where there is no dirt, hydroponics could be very useful in feeding a city of people. And it's not just the fact that there is no dirt at these places; because hydroponics produces bigger and better crops faster, it's the ideal thing to use.
And now, thanks to our experiment, everyone knows which form of plant will do the best: the legume. The legume can be used if you want the best, fastest-growing plant grown hydroponically. But I am not saying you should feed a whole space colony with beans or anything; no, it's just something to think about.
Conclusions
Beans (the legume) results - This plant seems to be the victor of the four plants. Its length is 25cm, which surpasses all the other plants by a lot. This plant has the greenest and the biggest leaves of the group.
Tomatoes (the fruit) results - This plant, in my opinion, came in second to the beans, but it was close. This plant may have only grown 7cm, but it has a diameter of 6cm; that may be smaller than the peas, but in my opinion the tomatoes look much healthier than the peas, because the leaves on the tomatoes are bigger and have a nicer shade of green.
Peas (the vegetable) results - This plant came in at a close third. It has a length of 13cm. Even though it has a longer stem than the tomatoes, it was our opinion that the peas weren't as healthy looking as the tomatoes were. Some of its leaves were green, but some had unhealthy-looking spots on them.
Tinkerbell (the flower) results - The results of this plant weren't very good at all. Not only did this plant come in dead last, but it also looks almost dead. It grew about 1cm in the first week and just kind of stopped there.
Summary
To summarize this project, I would have to say that it was a success, since we have answered the problem of which plant thrives the greatest. Obviously the bean, our legume, is the best by far; the tomatoes, our fruit, are second; the peas, our vegetable, are third; and dwindling in last place is the not-so-mighty flower, the Tinkerbell.
Bibliography
www.hygexpo.com/greenthumb/about.htm
www.wolrd.net//hydroponics/
www.primenet.com/vantage/tips.htm
www.intercom.net/biz/aquaedu/hatech/pages/hydrois.html
f:\12000 essays\sciences (985)\Enviromental\Individual Org Behavior.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
US Army
Company A, 204th Engineer Combat Battalion (Heavy)
Chapter 3: Foundations of Individual Behavior
Table of contents:
1. Introduction.
a. Description, History and Organizational structure.
2. Key biographical characteristics.
a. Age.
b. Gender.
c. Marital Status.
d. Number of dependents.
e. Tenure.
3. Factors that determine an individual's personality.
a. Personality determinants.
b. Personality Traits.
c. Personality Attributes influencing Organizational behavior.
d. Personalities and national cultures.
e. Matching personalities and jobs (Holland's Typology).
4. Summarize how learning theories provide insight into changing behavior.
a. Theories of learning.
b. Shaping Behavior: A managerial tool.
5. Reinforcement.
a. Rewards.
6. Applications for Specific Organizations.
a. SUTA (Substitute Unit Training Assistance).
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Now for the Essay
****************************************************************************
Chapter 3: Individual Organizational Behavior
The NY Army National Guard.
I.
This organization's history goes back around 300 years, to the time of the "Minutemen," the brave men and women who fought to gain our independence here in the state of NY.
The NY Army National Guard is under the control of the Governor of NY, but in wartime this organization becomes federalized and falls under the command of our Commander in Chief, the President of the United States.
I am the Commander of A Company, Detachment 2, of the 204th Engineer Bn, located in Riverhead, NY, here on Long Island. *[Let's take a closer look at the Org. Chart]*
My unit has 72 men and women who are civilians during the month but dedicate one weekend a month and two weeks a year to serve our State and Country.
We have two types of missions for which we train all year long.
· The State Mission: These are emergency calls in times of disaster, due to nature or mischief (ex: floods, snowstorms, riots, plane crashes, drug enforcement, border patrols, etc.).
· The Federal Mission: These take effect when the President or Congress declares war, police action, or rescue and/or protection of a foreign country. We also train by going overseas and helping other countries.
Now let's talk about the individual behaviors of my organization, shall we? [Flip to Outline].*
II. Key Biographical Characteristics
a. Age in my unit is very diverse. I have new soldiers entering the unit at age 18, and current members range from their 20's to their 40's. The standard age to join the US Army is 18-28 yrs.
· Rank and degree of responsibility are factors in productivity.
· A degree of absenteeism is found in both.
· Turnover is mainly found in younger soldiers; they move from state to state often.
· Job satisfaction also goes hand in hand with rank and responsibility: the older the soldier and the higher the rank, the more satisfaction.
b. Of course we don't discriminate.
· I found that females are often absent more.
· Also, the degree of J/S shifted more towards the males.
· And females tend to turn over more readily than males.
c. Our troops often have to face the increasing risk of being deployed overseas, and that of course shows in the soldiers' work behavior. This comes mostly from worried wives and husbands thinking that their spouses won't come back. *(That includes mine, of course.)*
· Single soldiers tend to miss UTA's more than married ones.
· J/S tracks more with higher rank than with marital status.
d. We provide school, day-care and counseling programs to dependents. This helps the soldiers cope with the stressful job of parenting.
This helps the organization deal with absenteeism, J/S and turnover.
e. Tenure to the US Army means experience and skills that can be shared with younger soldiers.
· Mission readiness is increased when subordinates respect and obey their older peers.
III. Personality
A. Determinants
Heredity: For our mission we usually don't judge a soldier by his looks or muscle definition. That is up to the soldier to exploit when it comes to personal development. *APFT*
Environment: Some of our soldiers who grew up in places like NYC behave and perform differently from those soldiers who spent their upbringing in smaller towns. We found this to be a factor in mission readiness, absenteeism, turnover and J/S.
Situation: We are all bound to a code of honor, duty and conduct, so we act differently according to the situation. The difference is found more with younger soldiers; they sometimes don't know how to act and need to be directed or supervised.
B. The Meyers-Briggs
Personality Type Indicators. [MBTI]
Extroverted-Introverted (E-I)
Sensing-Intuitive (S-N)
Thinking-Feeling (T-F)
Perceiving-Judging (P-J)
Recruiters and administrators use this method to classify soldiers into major personality types.
We call it the ASVAP.
Combining these preferences gives us some useful personality types;
here are examples of job classifications by personality type:
ISTJ= Administrative, Dentist, Police, Accountants, Military.
ISFJ= Nurses, Teachers, Librarians, Clerical Supervisors.
INFJ= Clergy, Physicians, Media Specialist, Education Consultants.
ISTP= Farmers, Mechanics, Electronic repair, Engineers, Dental Hygienist.
* Look at the handout and see which personality type you might belong to. *
C. Personality Attributes Influencing OB.
A soldier's career path is determined by several factors, all tied to his or her performance:
APFT => MILITARY/CIVILIAN EDU => MARKSMANSHIP => JOB RATING.
Locus Of Control
Internal = the one I personally believe in. It means that everyone is master of their own destiny, fate, and life.
External = people think that their careers and lives depend on others' decisions, or that luck and chance have much to do with it. *[Found in most of my soldiers]*
Machiavellianism = a way to characterize a person's ability to manipulate and to gain (or fail to gain) power.
· High Machs = (Most of my First Line Supervisors)
· Low Machs = people who don't like to persuade and who believe in SOPs, regulations, and rules. (My higher chain of command, including myself.)
Self-esteem = At my job, high self-esteem shows in the people who volunteer often; low self-esteem shows in the people who hide in formations and whose names I don't recall, including some of the female soldiers.
Self-monitoring = Some soldiers don't know how to react to certain environments, such as combat simulation or emergency deployment. Others, like me, look forward to this type of disruption. Still others might be excellent soldiers even though their civilian lives are difficult to handle.
Risk taking = This is a trait that the armed forces take very seriously; I am talking about risking the lives of many men and women. Everyone at the management level uses a formula: every mission is given a number, and if the sum falls between 18 and 21, the degree of risk management calls for low caution; 22-25 calls for high caution; and above 25 is a dangerous mission (a small sketch of these bands follows below).
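Below is a minimal sketch of those caution bands in Python. It is only an illustration: the text gives the bands themselves (18-21 low caution, 22-25 high caution, above 25 dangerous) but not how the mission score is actually computed, so the score is simply taken as an input, and the function name is my own.

    def caution_level(score: int) -> str:
        # Map a mission risk score to the caution band described above.
        if score <= 21:
            return "low caution"        # covers the 18-21 band; lower scores are not specified
        if score <= 25:
            return "high caution"       # the 22-25 band
        return "dangerous mission"      # anything above 25

    if __name__ == "__main__":
        for s in (19, 24, 27):
            print(s, "->", caution_level(s))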
* We can say that risk taking is the biggest factor in OB in my organization. *
D. Personalities and National Cultures
Type A = These personalities are found in the logistics, administration, and personnel departments.
They are characterized by impatience and a strong concern for time management, and are found mostly in North Americans.
Type B = This is the most common personality in my unit. Type B soldiers are not concerned with time management and have to be supervised closely. It is found more often in cultures other than our own.
E. Personality and Job-Fit Theory: Holland's Typology.
This typology uses six personality types and matches them with congruent occupations.
Realistic => shy/persistent/practical [Mechanics, Assembly line, Farmers]
Investigative => analytical/original/curious [Biologists, Economists, Reporters]
Social => sociable/friendly/cooperative [Social workers, Teachers, Counselors]
Conventional => conforming/efficient/practical [Accountants, Corporate managers, Bank tellers]
Enterprising => self-confident/ambitious/dominant [Lawyers, Real estate agents]
Artistic => imaginative/disorderly/idealistic [Painters, Writers, Musicians]
IV. Learning
a. Theories
We use classical conditioning, operant conditioning, and the social theory of learning. Classical conditioning tells us that people learn to respond to a certain stimulus in a certain manner.
[Ex: When a flare is in the sky, the soldier will automatically hit the ground, close his eyes, and not move.] This response is learned by associating the flash with enemy fire.
· Operant conditioning is used when individuals go beyond the standard in completing a mission: they are recognized in front of their peers.
· The social theory is found in a program called OJT.
With this program a new soldier is teamed with a specialist and at the end receives a certificate of on-the-job training.
b. Shaping Behavior
· Positive Reinforcement is used by us when we go to the soldier and thank him for a job well done.
· Negative reinforcement is used when pay is taken out of a soldier's paycheck or a promotion is suspended.
· Punishment is used by drill sergeants when they make trainees do push-ups, sit-ups, or runs. This is done only as a way of strengthening the soldier and reinforcing that the mistake will not happen again.
· Extinction: when a standard is not met as a unit, higher HQ will not send us overseas for training, which is where everyone wants to go.
V. Reinforcement
· Continuous reinforcement is found when a soldier is tasked with a mission and performs to standard; this is recorded in his annual performance review.
· Intermittent reinforcement is used when a soldier is having problems meeting our standards: each time he meets the standard, we congratulate him and reinforce that this is the way it should continue.
Reward Schedules
· The Army pays all soldiers once per month; that is a fixed-interval schedule.
· When I tell the soldiers that they will have two inspections per year but not which months, that is a variable-interval schedule.
· When a soldier hits 39 out of 40 in a rifle competition he gets an expert badge, 25-38 earns a sharpshooter badge, and below 25 earns a marksman badge; that is a fixed-ratio schedule (see the sketch after this list).
· We don't reward our soldiers on a variable-ratio schedule, for reasons of safety and risk management.
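Here is a minimal sketch of that fixed-ratio badge rule in Python, using only the hit counts given above (39-40 expert, 25-38 sharpshooter, below 25 marksman). The function name and the 40-round course size are assumptions made for illustration.

    def rifle_badge(hits: int, rounds: int = 40) -> str:
        # Return the badge earned for a given number of hits out of the rounds fired.
        if not 0 <= hits <= rounds:
            raise ValueError("hits must be between 0 and the number of rounds fired")
        if hits >= 39:
            return "expert"
        if hits >= 25:
            return "sharpshooter"
        return "marksman"

    if __name__ == "__main__":
        for h in (40, 30, 20):
            print(h, "hits ->", rifle_badge(h))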
VI. Applications for Specific Organizations
We use a program called the SUTA. This helps us reduce soldier absenteeism.
The soldier calls in and gives a good reason for being absent, and we task the supervisor with making a training outline of what that soldier will do to make up the drill.
The soldier has 30 days to perform that task, and it has to be signed off by an officer. This has reduced the number of soldiers taking off for no reason.
f:\12000 essays\sciences (985)\Enviromental\Integrated Pest Management.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Integrated Pest Management
Integrated pest management (IPM) is a recently developed technology for pest control
that is aimed at achieving the desired control while reducing the use of pesticides. To
accomplish this, various combinations of chemical, biological, and physical controls are
employed. In the past, pesticides were all too often applied routinely whether needed or
not. With IPM, pest populations as well as beneficial parasite and predator populations are
monitored to determine whether the pests actually present a serious problem that needs to
be treated. If properly and extensively employed, IPM might reduce pesticide use by as
much as 50 percent, while at the same time improving pest control. If this goal were
achieved, the environmental problems would be minimized, and significant benefits would
result for farmers and society as a whole.
IPM coordinates economically and environmentally acceptable methods of pest control
with judicious and minimal use of toxic pesticides. IPM programs assess local conditions,
including climate, crop characteristics, the biology of the pest species, and soil quality, to
determine the best method of pest control. Tactics employed include better tillage to
prevent soil erosion and introduction of beneficial insects that eat harmful species. Many
pests that are attached to crop residues can be eliminated by plowing them underground.
Simple paper or plastic barriers placed around fruit trees deter insects, which can also be
attracted to light traps and destroyed. Weeds can be controlled by spreading grass, leaf, or
black plastic mulch. Weeds also may be pulled or hoed from the soil.
Many biological controls are also effective. Such insect pests as the European corn borer and the Japanese beetle have been controlled by introducing their predators and parasites.
Wasps that prey on fruit-boring insect larvae are now being commercially bred and
released in California orchards. The many hundreds of species of viruses, bacteria,
protozoa, fungi, and nematodes that parasitize pest insects and weeds are now being
investigated as selective control agents.
Another area of biological control is breeding host plants to be pest resistant, making them
less prone to attack by fungi and insects. The use of sex pheromones is an effective
measure for luring and trapping insects. Pheromones have been synthesized for the
Mediterranean fruit fly, the melon fly, and the Oriental fruit fly. Another promising pest-
control method is the release of sterilized male insects into wild pest populations, causing
females to bear infertile eggs. Of these techniques, breeding host-plant resistance and
using beneficial parasites and predators are the most effective. Interestingly, the combined
use of biological and physical controls accounts for more pest control than chemical
pesticides.
With that, I conclude this report by saying that we should pay more attention to integrated pest management to help achieve a better future for our generation and the next generation to come.
f:\12000 essays\sciences (985)\Enviromental\Jaguars.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ENDANGERED SPECIES STUDY
I. The jaguars of Central and South America have been killed for sport and for protection. This cat once ranged from the United States to Uruguay, but under the pressures that now threaten it with extinction, the jaguar has retreated to the undeveloped rain forests of Latin America. Jaguars are being killed because many people fear for their own lives, and many are also being killed to protect cattle. Jaguars do stalk and ambush their prey; however, they rarely ever attack man. One of the main reasons that jaguars survive is that they can adapt to many habitats, from tropical rain forests and swampy areas to scrublands and grasslands.
II. Jaguars are part of the life cycle of many species. Killing jaguars for protection and for their furs harms the other species that share their range. These cats pose little danger to human life; most deaths that jaguars have had a part in occurred while the animals were protecting themselves from being killed.
Many jaguars simply disappear under the pressure of being hunted; others die while trying to move to new habitats. The rain forests that the jaguars inhabit are being torn down to open up lumbering, farming, livestock raising, and other activities carried out by humans. Killing a jaguar takes away a life that does no harm to the ecosystem. A jaguar's way of living is much like that of a human, yet you don't see jaguars killing humans for their skin.
III. Any endangered species, including the jaguar, has many different alternatives for protection, most of which require the involvement of the government or a national group. There are several organizations that help with the breeding and survival of many species. One way of breeding a specific species would be to freeze sperm and embryos so that scientists may breed more of the species when it is close to extinction.
IV. I feel that the jaguar can be saved by forcing contractors to move their construction somewhere other than a rain forest so that the jaguars may keep their habitat. Scientists could also freeze sperm and embryos in order to keep the population of jaguars at a safe level. One last thing that I think should be outlawed is poaching. Poaching is not right; the animals being killed do not deserve to die. Jaguars have rarely done anything to hurt mankind or mankind's environment.
f:\12000 essays\sciences (985)\Enviromental\John Muir.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
John Muir's Trail in History
John Muir was a man of great importance in the history of the United States and in the preservation of its beauty. His tireless efforts to protect natural wonders such as Yosemite Valley demonstrated his undying love for the outdoors. Muir took a stand against the destructive side of civilization in a dauntless battle to save America's forest lands. The trail of preservation that Muir left behind has given countless people the opportunity to experience nature's magnificence.
John Muir was born on April 21, 1838 in the small rural town of Dunbar, Scotland. As a boy, Muir was "fond of everything that was wild"(My Boyhood and Youth 30) and took great pleasure in the outdoors. In 1849, Muir and his family emigrated to Wisconsin to homestead. The great forests of Northern United States captivated him and fueled his desire to learn more. Muir later enrolled in courses in chemistry, geology, and botany at the University of Wisconsin. After his education, Muir began working in a factory inventing small machines and contraptions. However, a serious working accident in the factory left Muir temporarily blind. When he finally regained his vision, he vowed to live life to the fullest and devote everything he had to nature.
At the age of 29, Muir made a thousand-mile walk from Indianapolis to Florida for the sheer pleasure of being outdoors. This experience enlightened Muir and compelled him to extend his travels. With his family's blessings (his wife and two daughters), he began to wander America's forests, mountains, valleys, and meadows extensively. Alone and on foot, he filled his notebooks with sketches and descriptions of the plants, animals, and trees that he loved. He later took trips around the world, including destinations such as Europe and South America. There he explored the Amazon basin and noted many new plant species. In Alaska, he became the first white man to see Glacier Bay. He definitely made an impact in Alaska's history: Mount Muir, Muir Glacier, Muir Point, and Muir Inlet all carry his name.
However, it was California's Sierra Nevada and Yosemite Valley that truly claimed him. In 1868, he walked across the San Joaquin Valley through waist-high wildflowers and into the high country for the first time. Later he would write: "Then it seemed to me the Sierra should be called not the Nevada, or Snowy Range, but the Range of Light...the most divinely beautiful of all the mountain chains I have ever seen"(Wolfe, 230).
By 1871, Muir had found living glaciers in the Sierra and had conceived his controversial theory of the glaciation of Yosemite Valley. Muir's reputation for exploration, glaciation, and environmental studies began to be well known throughout the country. Famous men of the time - Joseph LeConte, Asa Gray and Ralph Waldo Emerson - made their way to the door of his pine cabin.
In later years he turned seriously to writing; publishing 300 articles and 10 major books composed of his travel journals. They recounted his travels, expounded his naturalist philosophy, and beckoned everyone to "climb the mountains and get their good tidings"(Muir, Life and Letters, 34). Muir's love of the high country gave his writings a spiritual quality. His readers, whether they be presidents, congressmen, or plain folks, were inspired and often moved to action by the enthusiasm of Muir's own unbounded love of nature.
Through a series of articles appearing in Century magazine, Muir drew attention to the devastation of mountain meadows and forests by sheep and cattle. With the help of Century's associate editor, Robert Underwood Johnson, Muir worked to remedy this destruction. In 1890, due in large part to the efforts of Muir and Johnson, an act of Congress created Yosemite National Park. Muir was also personally involved in the creation of Sequoia, Mount Rainier, Petrified Forest and Grand Canyon National Parks. Muir
deservedly is often called the "Father of Our National Park System."
Johnson and others suggested to Muir that an association be formed to protect the newly created Yosemite National Park from the assaults of stockmen and others who would diminish its boundaries. In 1892, Muir and a number of his supporters founded the Sierra Club to, in Muir's words, "do something for wildness and make the mountains glad"(Muir, Summer, 47). It was established specifically to rally citizens who believed in the preservation of the High Sierra and who understood the need for eternal vigilance in its protection. Muir served as the Club's first president.
In 1901, Muir published Our National Parks. The book brought him national attention, influencing President Theodore Roosevelt. In May of 1903, Roosevelt and Muir traveled to Yosemite. Roosevelt was awestruck by the captivating scenery and beauty of the valley. For the duration of the three-day camping excursion, Muir preached the importance of preventing "the destructive work of the lumbermen and other spoilers of the for-est"(Wadsworth, 112). There, together, beneath the trees, they laid the foundation of Roosevelt's innovative and notable conservation programs.
However, the trail of John Muir was not always a smooth one. He fought syndicates, congress, and lobbyists. "The battle we have fought, and are still fighting... is a part of the eternal conflict between right and wrong, and we cannot expect to see the end of it"(Browning 53).
The growing city of San Francisco was in need of a constantly expanding water supply. Hetch Hetchy Valley, north of Yosemite Valley in Yosemite National Park, was a prime location for a dam that would create a lake where the Tuolumne River ran. Because the site was completely within the National Park, there was no private property that would have to be bought. Muir was strongly opposed to the proposition right from the beginning. He argued that "This valley... is one of the sublime and beautiful and important features of the Park, and to dam and submerge it would be contradictive [to what] they were intended for when the Park was established"(Silverberg, 233).
To Muir's dismay, he found the Sierra Club divided: a strong minority of members, living in San Francisco, were ready to sacrifice Hetch Hetchy to the city's needs. Muir and his Sierra Club associate William Colby then set up a new organization, the Society for the Preservation of National Parks. At first the new organization was a success, and it seemed that Hetch Hetchy would be safe. However, when Woodrow Wilson took office in 1913, the new Secretary of the Interior, a San Franciscan who had lobbied for the Hetch Hetchy project, pushed a bill through Congress that allowed the construction of the dam. Muir set forth a flood of appeals, letters, articles, and statements, but to no avail. Hetch Hetchy was lost. Muir later said: "Dam Hetch Hetchy! As well dam for water-tanks the people's cathedrals and churches, for no holier temple has ever been consecrated by the heart of man"(Browning, 65-6).
During this unpleasant affair, Muir's health had been failing dramatically, and the defeat was a devastating blow to his already weakened condition. On December 24, 1914, Muir died at the age of 76 in Los Angeles. In acknowledgment of his achievements, California has widely recognized Muir as one of the most important figures in the state's history. The Muir Woods National Monument in Marin County, Calif., and the John Muir Trail, extending from Yosemite Valley to the summit of Mt. Whitney, were established. Mount Muir, Muir Gorge, Muir Grove, Muir Lake, Muir Mountain, Muir Pass, and Muir's Peak were also named after him. In 1976 the California Historical Society voted John Muir the greatest Californian in the state's history. California's governor proclaimed every April 21 John Muir Day in honor of his birthday.
John Muir was perhaps this country's most famous and influential naturalist and conservationist. He taught the people of his time and ours the importance of experiencing and protecting our natural heritage. His words have heightened our perception of nature. His personal and determined involvement in the great conservation questions of his time was and remains an inspiration and stepping stone for today's environmental activists.
Richard Hawley, an active environmentalist and executive director and co-founder of Greenspace, a local environmentalist group in Cambria, commented on the achievements of Muir. "John Muir was a dedicated man that had a vision... and a passion for natural beauty. He is a guiding light for a lot of people. The legacy of John Muir lives on through The John Muir Trail and Yosemite National Park." Hawley went on further to say that "conservation is critical... and Muir set [the environmental movement] in motion."
Many people today follow the path of John Muir's conservation. His teachings of nature and life live on through his writings. He possessed the foresight to know that the forests needed to be protected. He knew that they wouldn't have lasted forever. The Sierra Club that he founded has helped save millions of acres of forest lands, and other national monuments that otherwise would have been destroyed. He truly took a stand for nature, and in doing so, took a stand for mankind.
"The whole wilderness seems to be alive and familiar, full of humanity. The very stones seem talkative, sympathetic, brotherly. No wonder when we consider that we all have the same Father and Mother."
-John Muir, April 1911
(Browning 13).
f:\12000 essays\sciences (985)\Enviromental\Landfills.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Landfills- A Growing Menace
When asked to think of the largest man made structure, people will invariably
come up with an answer like the Great Wall of China, the Great Pyramids, or the Taj Mahal. In contrast to these striking achievements of mankind is the Durham Road Landfill outside San Francisco, which occupies over seventy million cubic feet. It is a sad monument to the excesses of modern society [Gore 151]. One might then think this huge reservoir of garbage is the largest thing ever produced by human hands.
Unhappily, this is not the case. The Fresh Kills Landfill, located on Staten Island, is the
largest landfill in the world. It sports an elevation of 155 feet, an estimated mass of 100
million tons, and a volume of 2.9 billion cubic feet. In total acreage, it is equal to 16,000
baseball diamonds [Miller 526]. By the year 2005, when the landfill is projected to close,
its elevation will reach 505 feet above sea level, making it the highest point along the
Eastern Seaboard, from Florida to Maine. At that height, the mound will constitute a
hazard to air traffic at Newark airport [Rathje 3-4]. The area now encompassed by the
Fresh Kills (Kills is from the Dutch word for creek) Landfill was originally a tidal marsh.
In 1948, New York City planner Robert Moses developed a highly praised project to
deposit municipal garbage in the swamp until the level of the land was above sea level. A
study of the area predicted the marsh would be filled by the year 1968. He then planned to
develop the area, building houses and attracting light industry over the landfill. The Fresh
Kills Landfill was originally meant to be a conservation project that would benefit the
environment. The mayor of New York City issued a report titled "The Fresh Kills
Landfill Project" in 1951 which stated, in part, that the project "cannot fail to affect
constructively a wide area around it." The report ended by stating, "It is at once practical
and idealistic" [Rathje 4]. One must appreciate the irony in the fact that Robert Moses was
considered a leading conservationist in his time. His major accomplishments include
building asphalt parking lots throughout the New York Metro area, paved roads in and
out of city parks, and the development of Jones Beach, now the most polluted and
overcrowded piece of shoreline in the Northeast United States. In Stewart Udall's book
The Quiet Crisis, the former Secretary of the Interior praises Moses. The JFK cabinet
member calls the Jones Beach development "an imaginative solution ... (the) supreme
answer to the ever-present problems of overcrowding" [Udall 163-4]. JFK's introduction
to the book provides this foreboding passage: "Each generation must deal anew with the
raiders, with the scramble to use public resources for private profit, and with the tendency
to prefer short-run profits to long-run necessities. The crisis may be quiet, but it is urgent"
[Udall xii]. It is these long term effects that the developers of landfills often fail to
consider. Oddly, the subject of landfills is never broached in Udall's book; in 1963 landfills
were a non-issue.
A modern state-of-the-art sanitary landfill is a graveyard for garbage, where
deposited wastes are compacted, spread in thin layers, and covered daily with clay or
synthetic foam. The modern landfill is lined with multiple, impermeable layers of clay,
sand, and plastic before any garbage is deposited. This liner prevents liquids, called
leachates, from percolating into the groundwater. Leachates result from rain water mixing
with fluids in the garbage, making a highly toxic fluid containing inks, heavy metals, and
other poisonous compounds. Ideally, leachates are pumped up from collection points
along the bottom of the landfill and either shipped to liquid waste disposal points or
re-introduced into the upper layers of garbage to resume the cycle. Unfortunately, most
landfills have no such pumping system. [Miller 527]. Until the formation of the
Environmental Protection Agency by President Nixon in 1970, there were virtually no
regulations governing the construction, operation, and closure of landfills. As a result of
this lack of legislation, 85 percent of all landfills existing in this country are unlined. Many
of these landfills are located in close proximity to aquifers or other groundwater features,
or near geologically unstable sites. Many older landfills are leaking toxins into our water
supply at this very moment, with no way to stop them. For example, the Fresh Kills landfill
leaks an estimated one million gallons of toxic sludge into the surrounding water table
every day [Miller 527]. Sanitary landfills do offer certain advantages, however. Offensive odors, which at one time characterized waste depositories, are dramatically reduced by the
daily cover of clay or other material over the garbage. Vermin and insects are also denied
a free meal and the opportunity to spread disease by the daily layer of deposited clay.
Furthermore, modern landfills are less of an eyesore than their older counterparts.
However, the sources of these positive effects are the very reasons for some of the
significant drawbacks to landfills [Turk and Turk 486]. The daily compacting and covering
of the garbage deposits squeezes the available oxygen out of the trash. Whatever aerobic
bacteria are present in the garbage are soon suffocated and decomposition stops.
Anaerobic bacteria, by their very nature, are not present in appreciable numbers in our
biosphere. What few manage to enter and survive in the garbage deposits are slow-acting
and perform little in the way of breaking down the materials. In other words, rather than
the giant degrading compost heap most people imagine, a landfill is actually a huge
mummification center. Hot dogs and bananas, decades old, have been recovered from
landfills, still recognizable in their mummified state [Rathje 111-12]. What little
decomposition does occur in landfills generates vast amounts of methane gas, one of the
significant greenhouse effect gasses. Some landfills have built-in processes to reclaim the
methane from the atmosphere. The Fresh Kills landfill pipes methane gas directly into
12,000 homes, but in most instances the gas is either burned off or leaked directly into the
atmosphere. Based on ice core samples from Antarctica, the methane concentration in the
Earth's atmosphere, over the past 160,000 years, has fluctuated between 0.3 and 0.7 parts
per million. The methane levels in the atmosphere are now triple that [McKibben 17-17].
It is not only the modern landfills that defy decomposition. Because of the stench from the
thousand-year-old refuse of an ancient Roman landfill, an 1884 archaeological dig had to be halted periodically so the workers could get fresh air [Rathje 113]. In today's landfills
decomposition is negligible. While the total tonnage of garbage decreases over years, due
mostly to decay, the volume varies less than ten percent. Most of the actual short-term
rotting is from scraps of prepared food. Plastics present in landfills will most likely be
there forever. Even the most unstable plastic requires intense sunlight to decompose, and
sunlight is denied in a sanitary landfill. Newspapers from before World War Two are still
readable in these landfills; they have in fact become important date markers for scientists
examining garbage strata in landfills [Rathje 112-13].
If burning garbage and dumping garbage at sea are unacceptable, what are the
alternatives? Of the landfills, sanitary and otherwise, open for business in 1979, 85 percent
are now closed [Miller 527]. Where is all the garbage going? Some municipalities are
shipping garbage to other cities, or even other states, a costly proposition. Larger
metropolitan agencies have even taken to shipping garbage to Third World countries that
are strapped for cash and eager for the money that comes along with the trash. This, of
course, only transfers the problem from one population to the other. Stories of wandering
garbage barges and orphaned garbage trains have appeared in American newspapers.
Covert garbage disposal has become a lucrative business, as the plethora of medical waste
washed up along the New Jersey shoreline proves. Despite these horror stories, recycling
really is making a difference. Newspapers, which used to make up 25 to 40 percent of the
garbage volume of a typical city, are now effectively eliminated from household garbage.
Aluminum can recycling has become a profitable enterprise, both for the economically
disadvantaged and for the average homeowner trying to offset the ever-increasing cost of
garbage collection. Construction waste is now barred from landfills in most areas; this high
volume material is now recycled or put to Earth-friendly uses, such as making barrier
reefs. Plans for the safe incineration of refuse to generate electric power have presented
some highly contentious issues. The ash from such incinerators is normally highly toxic,
since it concentrates existing toxins. Citizens object to these plants whenever they are to be located in their own neighborhoods. A clear-cut answer is probably non-existent. Only several effective programs enacted in unison can stop the growing mounds of trash that are piling up around the country.
Works Cited:
Gore, Albert. Earth in the Balance. New York: Houghton, 1992.
McKibben, Bill. The End of Nature. New York: Random House, 1989.
Miller, G. Tyler, Jr. Living in the Environment. Belmont CA: Wadsworth, 1994.
Rathje, William, and Cullen Murphy. Rubbish! New York: Harper, 1992.
Turk, Jonathan. Environmental Science. New York: Holt, 1984.
Udall, Stewart. The Quiet Crisis. New York: Holt, 1963.
f:\12000 essays\sciences (985)\Enviromental\Les epoques des testes nucleaires de la France.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Eras of France's Nuclear Tests
Mike Carey
FEF OAO
Mme. Lazarrin
1996 11 20
In a world that grows larger every day, we find ourselves asking questions about the ruin of our planet caused by humanity's many inexcusable accidents. In our time, man has discovered the marvels of science, and with them he has been able to create objects of massive destruction, such as nuclear force. If only poor Einstein could see the nuclear horrors that have evolved from his theorems. The truth is that there are so many nuclear weapons in reserve around the world that we could destroy up to seven times the size of our planet with the burning of ten thousand suns.
Toward the end of the sixties, France began its pursuit of nuclear weapons. Soon after, General de Gaulle established France's role as a nuclear superpower, promising that his homeland would never need foreign help for its defense or to ensure the country's independence. This rationale of de Gaulle's, supported by later presidents such as Chirac, drew France into a state of nuclear industrialization and paved the way for a multitude of nuclear tests that defied international opinion. "Since de Gaulle, France has viewed its nuclear weapons as the centerpiece of its autonomous relations with the Allied countries, as well as with the other non-nuclear countries that make up the European Union (EU)."
(Yost, "France's Nuclear Dilemmas", Foreign Affairs, p. 111.)
After the Second World War and the establishment of NATO, a few countries climbed to the top of the ladder to become known as major nuclear powers. France is one of those countries. Instead of the major nuclear powers helping one another cooperatively, they set themselves in competition with each other in order to accelerate the pace of new nuclear discoveries. During this era, France had to compete with the United States, China, Russia, and Great Britain. When France felt strong for the first time since the war, thanks to its nuclear defenses and weapons, it wanted to do without the Americans' nuclear protection and reaffirm its independence. "France, especially after the founding of the Fifth Republic, used the line <> as a rationale for avoiding American support, which it considered an unreliable ally." (Goldstein, "Discounting the Free Ride: Alliances and Security in the Postwar World", International Organization, p. 52.) France no longer wanted to be under the Americans' protective umbrella, so an organization for the protection of all of Europe against nuclear threats was put in place with France at its head. This shows something of the national pride for which France is renowned and of its determination toward independence.
Why does France continue its research on nuclear energy and its uses, such as bombs? It is quite simple: France has very minimal oil reserves, so the idea of having a large nuclear program was the best protection against the enormous cost of importing that product. That is why nuclear energy in France generates more than 80% of the country's electricity, with enough left over to keep in reserve and even to sell to neighboring countries. Nuclear energy is the least flexible of all possible kinds of energy, but the French government says that using nuclear energy is cheaper than importing oil from elsewhere. According to France, which has no other means of generating power, the decision to pursue this research and to site its various nuclear power plants (62 of them) has given the country more than enough energy, in addition to helping the environment by reducing the pollutants that come from fossil fuels. However, organizations such as the IEA (International Energy Agency) say: "A high dependence on any single fuel is sure to be a source of insecurity and inflexibility. Incredibly, as the nation most dependent on nuclear energy, France has not yet established where to put its nuclear waste" ("Fission a la francaise", The Economist, pp. 19-20.) Evidently, the price of being a major nuclear power has lowered France to international ridicule, given its lack of nuclear waste repositories and its continual insistence on turning a deaf ear to worldwide complaints about nuclear testing.
"It would be completely irresponsible for a nation of major power not to carry out series of small tests to ensure that our nuclear deterrents are as reliable as they can be." (Chirac, "No More Hiroshima Coalition", http://www.rucc.net.au/no.more/french.htm.) Would you want to be in a situation where nuclear tests were being pursued in your corner of the world, knowing full well that radiation sickness could seize you? In July 1995, Jacques Chirac decided to begin a series of tests, to take place over some time in the South Pacific, on a small island named Mururoa Atoll in French Polynesia. Even with the backing of seven million signatures from around the world gathered by Greenpeace, the French government's decision stood. "Mr. Chirac explained that these tests were indispensable in order, on the one hand, to <> of the French deterrent and, on the other hand, to <>. Mr. Chirac's decision is all the less comprehensible given that the end of the Cold War and of the <> encourage nuclear dismantlement." (Ramonet, "La Bombe", http://www.ina.fr/CP/MondeDiplo/1995/08/RAMONET/1720.html.) For France, nuclear weapons have been associated with national independence and with security against another world war, but more deeply, with restoring honor and an internationally admired status to France.
f:\12000 essays\sciences (985)\Enviromental\Malibu Fires.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Malibu Fires
Human beings are able to adapt to almost any environment; unfortunately, we sometimes take advantage of our natural surroundings. We find ourselves amidst a
struggle between our lifestyles and nature. Although we affect nature profoundly with
our activities, we in turn are shaped by nature's potent forces. Nature can be brutal to
humans, but we must remember that it merely is following its course. As a result, we
must learn to coexist with it. Fire is a naturally occurring phenomenon which humans
have learned to deal with throughout history. Yet when fire burns uncontrollably, there is
great potential for monumental damage to all surrounding biomass. The Malibu
wildfires are an example of one such instance.
Historically, wildfires had been left to burn uncontrolled for weeks. Fires were
caused by different sources such as lightning or human hunters who wanted to chase
animals out of the woods. As prolonged as these fires were, they had limited catastrophic
effects on the nomadic humans. This is due to the low population density and the fact
that the fires were not very intense. As people began to change from a hunting-gathering
society to agriculturists, they settled in communities. Homes built among the wild brush
were perfect prey to wildfires. Initially, wildfires were put out immediately and people
were barred from setting fires in open spaces. Due to the policy of fire suppression, only
one percent of all wildfires escaped early control. The land was safe from fires
temporarily, but this set the stage for catastrophe as the brush grew more dense.
There have been more than 20 catastrophic wildfires in Los Angeles County since
the beginning of organized fire protection. The first "big one" happened in December of
1927. The fire started in the La Crescenta Valley, climbed over the Verdugo Mountain
range and destroyed more than 100 homes.
In addition to the damage caused in 1927, fires have profoundly affected the
Southern California environment. Almost every square mile of chaparral land in Los
Angeles County has been burned at least once since 1919. There are basically two large
fire breeding grounds in Los Angeles county: the San Gabriel Mountain range and the
Santa Monica Mountains. In 1993, the Kinneloa Fire in Altadena caused a great amount
of damage to the surrounding area and destroyed 121 homes in the foothills of the San
Gabriel Mountains. It was the most devastating fire in the area, surpassing the previous
worst fire in 1980 that burned 55 homes at the mouth of the San Gabriel Canyon. The
total damage caused by wildfires in the San Gabriel Mountains within the past 60 years
amounted to the loss of 332 homes.
Statistically, Malibu and its surrounding area have seen much damage done to their vegetation and inhabitants. There have been 24 wildfires that burned a total of 271,047
acres since 1927. These fires have caused a total of five deaths and the destruction of
1,502 homes along with 830 other structures. Recent fires include the Malibu fire in
1985, Dayton Fire in 1982 and Malibu Canyon fire in 1970. In the Malibu Fire, 103
homes were destroyed; in the Dayton Fire, 85 homes were destroyed. The Malibu
Canyon Fire, which joined forces with the Newhall Fire on September 25, 1970,
destroyed a total of 135 homes and burned through a total of 85,000 acres (Wildfire sec.
2 p.1). Out of all the homes burned, 70 were located in Malibu and 65 in Chatsworth
(Wildfire sec. 2 p.1). Previous to that fire, the last time Topanga Canyon had seen a
damaging fire was December 30, 1956, when 74 homes were destroyed (Wildfire sec. 3
p.1). Another painful memory for Topanga Canyon occurred between 1938 and 1943,
during which time three fires destroyed more than 600 structures.
1993 featured one of Malibu's most devastating firestorms. When traveling
through Malibu's scenic landscape, it is almost impossible to imagine that this beautiful
environment could foster such a deadly fire. Lovely ocean-view homes are nestled
within the lush vegetation of the mountainous landscape. In fact, it was Malibu's beauty
that originally lured people to settle there. Unfortunately, Malibu has the ultimate
combination of climate conditions, wind patterns, and lush biomass for wildfires. During
the 1993 fires, biomass growing in the Malibu hills acted as fuel, as did the homes that
stood nearby. Some long-time residents of Malibu have lost not one but two or even three
homes.
Like deciduous forests that have adequate moisture levels and cool climates,
Malibu is very rich in vegetation. However, Malibu experiences a natural phenomenon
unknown to deciduous forests: during the fall and early winter months, strong Santa Ana
winds take regular trips through Malibu and out to the ocean. As the Santa Ana winds
blow through, evaporating whatever moisture is left in the chaparral after the long dry
summer, relative humidity can drop below 10 percent. Once a fire starts, it is nearly
impossible to contain, until the Santa Ana winds die down. Malibu has a history of
wildfires which "historically follow well-defined wildfire corridors. When large and
damaging fires occur you'll find the wind and fire corridors perfectly aligned." (report 4)
This makes it even more difficult to fight a fire.
The weather conditions on October 26, 1993, worried many government officials
throughout the state of California. The temperature in Southern California was
abnormally hot with very little humidity present in the atmosphere and the Santa Ana
winds were starting to gain in intensity. A seven year drought had created massive
amounts of dead undergrowth and the recent heavy winter rains had caused an abundance
of light fuels to be produced. This was a perfect scenario for disaster.
On November 2, 1993, the Los Angeles County fire department was notified
about a potential fast moving brush fire that had started at the top of the Old Topanga
Canyon road, nestled within the Santa Monica Mountains. The fire was moving rapidly
towards the Malibu coastline at a speed of approximately 1.75 m.p.h. due to 30-50 mile
per hour winds. The 40+ year old vegetation in the surrounding area was providing
ample fuel for a conflagration.
In less than four hours from the start of the fire, the damage inflicted on the land was immense. Seven miles of deeply brushed Carbon Canyon had been incinerated by
the unforgiving fiery beast. From Carbon Canyon, the fire spread onwards to the west
side of Malibu by Pepperdine University. On the east end, the fire was moving quickly
towards Topanga Canyon. "By Ten P.M., the fire had burned just north of Malibu on the
west and had burned through Carbon Canyon, Rambla Pacifico, Las Flores, Big Rock and
into Tuna Canyon on the east (Firestorm 1993, p.4 sec. 1)."
After burning fiercely throughout most of the afternoon, the intensity of the fire
diminished significantly in the late evening hours of November 2nd. By morning, the
Santa Ana winds had picked up again and the conflagration was spreading further east
and west. At three in the afternoon, the west ridge of the fire was close to containment
but the east ridge threatened the Topanga Canyon community of Fernwood. With the
help of eight water-dropping helicopters from LA County and two more from the Office
of Emergency Services, firefighting companies kept the fire from entering this serene
community.
By 11 PM on November 3rd the Malibu fire was contained and the Los Angeles City Fire Department scaled back its manpower. Although there was no major fire activity within Malibu after November 3rd, some fire companies remained on the
scene and fortified the perimeters of the fire area until 6 PM on November 5th, 1993.
They did this to prevent any embers from igniting into another serious fire that would
burn more of the deep undergrowth that blanketed the Malibu region.
There were many complications that took place throughout the fire ravaged area.
Along the eastern ridge of the fire, many high voltage power lines were burnt which
eliminated power to homes in the surrounding communities and also presented
complications with the fire department's electrically run hydrant system pumps. The fire
companies resorted to using water from local swimming pools to put out some of the
encroaching flames instead of using the pumped water from the hydrants. Fatigue,
injury, and a feeling of vulnerability plagued many of the firefighters as they confronted a major fire that continually jumped from one structure to another. Some fire
personnel worked 24 to 36 hours straight in order to prevent homes from being torn apart
by the blistering inferno. Even beyond fatigue and injury, firefighters dealt with
problems that they had no control over. Many streets in the city of Malibu are closely
intertwined with the environment. Dense overgrowth crowded the narrow streets which
made it virtually impossible for fire crews to challenge some of the house fires with the
appropriate equipment. Ornamental plants and overgrowth also added to the intensity of
the fire making it hard for firefighters to get close to the burning houses. The topography
of Malibu presented the biggest problem to the firefighting effort. Much of Malibu
consists of steep canyon walls that drop down to narrow roadways. "With a fire burning
with as much as 22,500 BTU per foot per second, firefighters often had to abandon a
position before their path of egress was involved with flames (Firestorm 1993, p.2 sec.
5)."
This fire burned an average of over 1,000 acres per hour and traveled seven
miles in six hours to reach the Pacific Coast. Started by an arsonist, the fire destroyed
384 structures and burned over 16,516 acres of land. Although 384 structures were
destroyed, fire personnel managed to save over 7,000 homes. At the height of the fire,
7136 fire personnel were involved with the protection of structures (Firestorm 1993, p.6.
sec.1). Over 400 different firefighting agencies from all around Southern California
participated in fighting this fire. 565 firefighters suffered injuries, 21 civilians were
injured and three civilians died as a result of this massive inferno.
Despite the care taken in preventing fires, they are inevitable. Fires that occur
naturally or under carefully monitored circumstances can be beneficial to the
environment. Unfortunately, many fires result from human error and carelessness, and do
not positively affect the environment. It would be extremely difficult, if not impossible,
to completely rid ourselves of dangerous and damaging fires, but mitigation of the problem should
be looked into and pursued. Every time there is a forest fire, a brush fire, a residential
fire, or any fire that affects a niche/ecosystem, a concerted effort should be taken to study
its effects and analyses should be conducted in hopes of getting prepared for "the next
time." There are many lessons to be learned from the Malibu Fires, especially concerning
water supply, vegetation, brush clearance, and building/fire codes.
An area that is troubled by low moisture levels and high temperatures, the
hillsides of Malibu were perfect targets for wildfires. The fact that the fires occurred in
mountainous terrain complicated matters because the water supply is broken down into
several isolated systems, unlike the network system that exists in many urban areas. The
mountainous water systems were designed to fight structure fires, not wildfires, and they provide a less concentrated water supply for fighting them. Another problem faced by Malibu
was that the water systems were not capable of storage at the levels needed to fight a
wildfire. This is because huge storage tanks are more susceptible to breakdown than
smaller ones due to technical issues and damage caused by earthquakes.
Vegetation posed another problem during the Malibu fires. Due to its dry brush-
like vegetation the fire grew stronger and more uncontrollable, as it fed on its "fuel." A
solution to this problem is to investigate plant species that are less flammable. For
example, the eucalyptus tree, which is highly susceptible to fires due to its high
concentration of oil, should be avoided in the design of landscapes. In addition, a balance
between soil erosion protection and fire hazard reduction must be met through the choice
of appropriate vegetation. Protecting the soil from erosion should improve its quality,
which in turn is necessary for healthy vegetation. Vegetation has the potential to increase
the moisture content of an environment, and also to decrease temperatures. These two
outcomes would be beneficial to the environment, as long as vegetation that is least
susceptible to fire is found.
In addition to planting the appropriate vegetation, proper brush clearing must be
practiced. Densely planted vegetation spurs a fire on, as its flames can hop from plant to
plant. In general, the Fire Department recommends that vegetation within 30 feet of
structures be eliminated completely or thinned of dead material. Acacia, Cedar, Cypress,
and Eucalyptus trees are specifically pointed out, as are dry annual grasses, shrubs, and
Juniper, which are all highly flammable. Vegetation within 30 to 100 feet should be
thinned as appropriate, planted in isolated "islands" of vegetation, and dead materials
should be removed. These are all measures that can be taken by individual homeowners, if
they so choose. In addition, independent contractors can be hired to do the job. Brush
clearing can be an aesthetic advantage as well as promote healthy vegetation growth.
Building structures must also be analyzed to abate wildfires. For example, instead
of using wood roof shingles, residents should use light-colored, non-combustible roof
coverings. This will increase albedo of the environment, thus reducing the environment's
temperature. Also, swimming pools are a worthwhile investment, for the Fire Department
can incorporate drains that will allow water to be used during fires. In addition, the area
will experience increased moisture level, and albedo will increase due to the reflective
nature of water. Best of all, a pool can be refreshing on hot summer days.
In order to quell firestorms, there are many measures that must be taken
simultaneously. It is not enough to have an outstanding water system, or a well trained
Fire Department. Fires naturally rage out of control. Therefore, people must be educated
on the aspects that they can help control, such as those mentioned above. If the people of
Malibu plan on continuing their stay on a naturally fire-prone environment, they must
learn to adapt their lives to it. These measures, however, are not limited to Malibu
residents. Everyone can learn something from their tragic experience.
Human beings attempt to fight nature by trying to change or disturb its natural
surroundings for the sole benefit of consumption. This is not only bad for the
environment, but also for its inhabitants. When Malibu was home to the Chumash
Indians, old vegetation was periodically burned to foster growth of new vegetation. The
Chumash, who were more closely connected to nature than we are now, learned how and
when to cause fires. "A long time ago the Chumash were here and they used to burn the brush every once in a while. It did wonders for the vegetation. The flowers were so beautiful. Then we built houses in their way. We really should not be here (Resident of
Malibu)." Perhaps we should learn from their techniques: rather than allowing the
chaparral to dry out and die (causing a high fire risk), we should clear out old vegetation
to prevent massive fires and learn to respect the environment in which we live, not
abuse it. Nature is not man's enemy, but should be seen as an ally. Humans need to
learn about their environment in hopes that a better understanding of natural processes
will help humans to peacefully coexist with it.
f:\12000 essays\sciences (985)\Enviromental\Management and The Body Shop.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Management and The Body Shop
In this paper I will take a look at basic management functions, the approaches to them, and the synthesis of two views of management. I will also attempt an overview of culture and its effect on a company.
In today's changing global environment, many companies have embraced open trade policies and the foreign opportunities available to growing companies with positive views and socially responsible attitudes.
It all sounds like a lot to cover in a short essay, so I will introduce a company that has, in its short yet very successful existence, transformed through all the levels and practices mentioned above. The company is called "The Body Shop"; I hope you have heard of it, for that would make our journey through its development even more enjoyable.
Management is described as the process of getting activities done with and through other people. This philosophy has been so widely examined that there are literally millions of opinions and differing views on the subject. We will only examine the functions of management where the basics of planning, organizing, leading, and controlling apply to The Body Shop. In 1976 an inexperienced Anita Roddick got tired of the unsubstantiated
claims of the cosmetics industry, whose products couldn't deliver what they promised. She made a decision that would change her life forever: Anita became the manager of her own small business in Brighton, England. Selling the natural secrets found throughout the world, learned from extensive travel while employed as a teacher with the U.N., she created a cottage industry of exotic personal body care products.
Planning proved to be the first big obstacle on the road to efficient management. Handling purchasing from around the world for her special products plunged Anita into a frightening and difficult role that she needed help with. Anita eased her financial burdens by taking on an investor, Ian McGlinn, in turn giving him a 50 percent stake in the business.
Furthermore, she sold the name The Body Shop to personally selected recruits, carefully led and guided by her own philosophies and ideals. Anita became an ideal example of the classic top-level manager, taking on the responsibilities of decision-making, communication, and information needed to position her company as a serious competitor, ready for today's global market.
The Body Shop follows the original general administrative theory of Henri Fayol. That is, a sort of utopian environment in which everyone involved in the company shares the same opinions as Anita Roddick and tries to achieve harmony, one might say, within their own franchise of The Body Shop. Achieving this was done by personal interviews of potential franchise owners and continual monitoring of the application bureaucracies intended to find only the right people for the job. Establishing these bureaucratic procedures meant asking questions such as: "What kind of car do you drive?", "What kind do you want?", "How would you like to die?", and "Who are your favorite literary heroines?". Based on the answers to questions like these, she would either deny franchise rights or move the applicant along the way toward a business of their own with The Body Shop. The applicants are backed up due to the bureaucracies in place, as is often the case in any bureaucratic environment.
The Body Shop has successfully combined the two traditional views of management, one of which is the omnipotent view: the view that management must take charge, as in the executive rule of accountability and direct responsibility. She, as coordinating manager of
The Body Shop, has total control over the promotional material and merchandise to be displayed and sold within all the shops that carry the name The Body Shop.
As well as maintaining the omnipotent role, she has synthesized it with the symbolic view that her management has only a limited effect on the individual sales of each separate store. Anita has faith that her franchise owners will maintain the integrity of the name and conduct business according to the philosophy determined at the beginning of development and the traditions of the original store. In keeping with the symbolic view of management, owners are encouraged to create systems that make employees feel more important and effective. Such systems include daycare for employees with children and even paid hours of work for local charities and community projects.
Anita does keep abreast of the changing attitudes and social conditions of each culture that affects individual stores around the world. Through addressing globally sensitive environmental concerns in each store and through management training programs, Anita alone has kept the public image of The Body Shop one of a socially responsive and socially responsible
company. This has proven to be a very successful style of managerial ethics within her employees' ranks and also with her customers' needs and wants. The Save The Whales campaign and her addressing of AIDS are only a couple of examples of the company philosophy and ethical position. These are areas in which all major transnational and multinational companies could take notes from the programs of The Body Shop.
The Body Shop operates, in my opinion, as a multinational company only because the major changes are dictated from England and Anita Roddick has control of each promotion and product that each store carries. All the bureaucratic systems are formed to send information and data regarding new owner applications and informational material directly to Anita Roddick herself; she operates in England and makes decisions from that one country.
This form of company relies heavily on the accurate transfer of communication, which has so far proven effective in this case. Who knows where the future will take this organization, but it seems to be always one step ahead of change.
Through Anita's passion and her belief that business can be fun, can be conducted with love, and can be a powerful force for good, she had already grasped the basic fundamentals of today's business ethics. Working within today's rapidly changing business environments, it takes passion and ingenuity to stay competitive.
When dealing with other countries in trade, The Body Shop has broken new ground above and beyond the trade agreements established by governments, such as N.A.F.T.A., the U.S.-Canada free trade agreement, and the maquiladoras of Mexico. Anita acted on her belief that trade, not aid, is the positive solution to economic hardship in the developing world. For instance, she pays above-average wages to orphaned boys and girls to make custom footrollers in India. This is only one of many examples of the specially designed trade agreements she has made with developing countries.
The Body Shop is also on the Internet now. In today's shrinking global community, it is essential to be available to whoever needs your services. Having The Body Shop on the Internet gives anyone on earth access to its products, services and ideas. I have looked through
the Body Shop website and have printed out a few examples of what they have there. The first page is the Body Shop Home Page. You can go to their "Ban against animal testing", products, Skin Care, and News sections just by clicking on the appropriate picture. The second page is what appeared when I clicked on the NEWS button on the Home Page. They issue articles such as this one whenever they need to keep the public aware of what they are doing, which is a great idea. The final two pages I have included are very interesting and handy. They have a built-in Store Locator. You first select which country you want to go to, and then, if applicable, your province or state. I have printed the locations for Canada - BC. If interested, the Body Shop Web Site can be located at:
http://www.the-body-shop.com/contents.html
In conclusion, The Body Shop has a very effective style of management, with Anita Roddick still in control of the planning, leading, organizing, and decision-making for all the franchise stores. The general management views and culture are responsive to the needs of their employees and their customers. The Body Shop is a multinational company that is a pioneer in foreign trade.
Bibliography
Stephen P. Robbins and Robbin Stuart-Kotze
Management Canadian Fourth Edition (Prentice - Hall INC., ONT., 1994) pg. 15-142
Keegan, Moriarty, Duncan, Paliwoda
Marketing Canadian Edition (Prentice - Hall INC., ONT., 1995) pg. 738-48
William G. Nickels, James M. McHugh, Susan M. McHugh, Paul D. Berman
Understanding Canadian Business (Richard D. Irwin, INC., 1994) pg. 199-411
Dr. Kent E. Curran [kecurran@unccvm.uncc.edu]
MGMT 3140 - Management Concepts and Practices (http://unccvm.uncc.edu/~kecurran/lect-02.htm; August 25, 1996.)
f:\12000 essays\sciences (985)\Enviromental\management techniques for the redcocked woodpecker.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MANAGEMENT TECHNIQUES FOR THE
RED-COCKADED WOODPECKER ON FEDERAL LANDS
Sean Fraser
NRM 304
ABSTRACT
The red-cockaded woodpecker (Picoides borealis) has been listed as an endangered species since October, 1970. This species inhabits pine forests in the southeastern United States where the majority of prime timberland is privately owned. Private ownership of preferred habitat and historically destructive silvicultural practices create unique problems for federal wildlife managers. This report analyzes three management techniques being used to assess and augment red-cockaded woodpecker populations on federal lands in the region, primarily military installations. Seeking cooperation between diverse government agencies, wildlife managers attempt to accurately assess species abundance, alter woodpecker nesting cavities, and construct nest sites in an effort to enhance red-cockaded woodpecker habitat on limited federal holdings in the American southeast.
Key words: Picoides borealis, Global Positioning System, Geographic Information System, cavity trees, cavity restrictors
The red-cockaded woodpecker (Picoides borealis) is an endangered species that inhabits pine forests in an historical range from Texas to the Atlantic coast (Jackson, 1986; Reed et al., 1988). Picoides borealis nest in clans or family groups that usually consist of one breeding pair and 2 non-breeding male helpers (Jackson, 1986 ). This group establishes and defends a territory that includes foraging habitat and nesting "cavity trees" (Copeyon et al., 1991; Jackson et al., 1986; Rossell and Gorsira, 1996). Red-cockaded woodpecker clans excavate cavities in living pines, and have established a living and foraging routine in conjunction with the southeastern pine forests and the historical occurrence of fire, which reduces hardwood understory while sparing fire-resistant pines (Jackson, 1986). Much of the prime nesting and foraging habitat for this species has been systematically eliminated due to development, timber harvest and intensive fire suppression (Jackson, 1986). The emergence of dense hardwood understory and midstory as a result of fire suppression in red-cockaded woodpecker habitat has resulted in the abandonment of many otherwise undisturbed areas (Jackson, 1986; Kelly et al., 1993).
The red-cockaded woodpecker has been listed as endangered since 1970 (Federal Register, 1970 as cited by Ertep and Lee, 1994). Four requirements for sustained red-cockaded woodpecker populations that are lacking in the species' historical range are identified as critical to species stabilization and recovery: 1.) Open pine forests with shade-tolerant understory controlled by cyclical fire seasons; 2.) Old-growth Pinus palustris aged > 95 years and Pinus taeda aged > 75 years; 3.) Approximately 200 acres per nesting group or clan; 4.) Multiple clans per area to maintain genetic stability and variability (Jackson, 1986). The opportunity to establish or preserve these habitat qualities on private timberland is largely lost due to historical harvest practices and development, and research on expanding populations on federal holdings is the most vital component in red-cockaded woodpecker stabilization and recovery (Jackson et al., 1979a; Jackson, 1986).
Exacerbating the problem of habitat loss due to encroachment and fire-suppression are natural hazards such as hurricanes, pine-beetle infestations and usurpation of red-cockaded woodpecker cavities by other species (Carter et al., 1989; Rossell and Gorsira, 1996). Effects of historically natural hazards are multiplied in the context of a diminished species abundance (Carter et al., 1989; Jackson, 1986).
Land management for wildlife is subject to unique difficulties in the Southeast, as the majority of forested land is privately owned (Jackson, 1986). In western states, approximately 2/3 of undeveloped land is federally administered, making the enactment of widespread management policies feasible, and controversies are apt to center around questions of access and use, rather than the more difficult problems concerned with private property rights.
MATERIALS AND METHODS
This report will focus on the current techniques being explored and enacted to stabilize and increase red-cockaded woodpecker populations on federal lands throughout the species' previous range. Three areas of concern regarding the red-cockaded woodpecker populations on federal lands interact to define current management practices (Jackson, 1986). Wildlife biologists, foresters, and the military have tested and combined specific techniques involving habitat assessment and identification, cavity alteration, and cavity construction to manage limited habitat for the red-cockaded woodpecker on federally administered land (Carter et al., 1989; Copeyon, 1990; Ertep and Lee, 1994). Analysis of specific studies and practices in these three areas serves as a description of the technique for managing limited federal lands for the enhancement and stabilization of red-cockaded woodpecker populations.
DISCUSSION
HABITAT ASSESSMENT AND IDENTIFICATION
A significant problem associated with the management of red-cockaded woodpecker populations is obtaining an accurate assessment of habitat availability and home range estimates (Ertep and Lee, 1994; Reed et al., 1988). Differences in habitat quality and availability throughout the range of the red-cockaded woodpecker affect population density and the range of foraging and nesting activities within colonies, making general application of population estimators difficult (Reed et al., 1988). This issue was addressed in 1988 during a study to evaluate red-cockaded woodpecker population indices.
Reed et al. (1988) set out to evaluate studies concerning red-cockaded woodpecker population indices and, if necessary, develop a new technique to more accurately estimate adult population size. Reed et al. (1988) researched the circular scale technique (CST) as described by Harlow et al. (1983) and found that application of this method of population estimation is limited. CST utilizes aerial identification of active cavity tree groups, and encompasses said groups in a 460-m diameter circle that contains as many of the active cavity trees as possible (Harlow et al., 1983 as cited by Reed et al., 1988). While Harlow et al. (1983) and Lennartz and Matteaur (1986) used CST with great accuracy in their study areas, estimating population sizes to between 92 and 95% of the true number, the 1988 study by Reed et al. determined that the technique cannot be used throughout the red-cockaded woodpecker range. Using CST in the Sandhills region of North Carolina underestimated the number of groups in the Reed et al. study population (Reed et al., 1988). In the Reed et al. (1988) study area, red-cockaded woodpecker population density and the spatial arrangement of colonies were frequently influenced by habitat fragmentation, which led to the violation of assumptions held necessary in the CST method of population estimation (Reed et al., 1988). Conclusions in the Reed et al. (1988) study indicate that CST may be generally used as an index, but further research is necessary to establish a universal technique to estimate red-cockaded woodpecker populations.
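To make the circular scale idea concrete, the short Python sketch below takes a set of hypothetical cavity-tree coordinates (in meters) and reports how many trees a 460-m diameter circle can enclose when candidate circles are centered on each tree. The coordinates and the tree-centered search are illustrative assumptions only; the Harlow et al. (1983) procedure works from aerial identification of cavity-tree groups, not from code like this.

    # Illustrative sketch of the circular scale technique (CST) described above:
    # enclose as many active cavity trees as possible in a 460-m diameter circle.
    # Coordinates are hypothetical; limiting candidate circle centers to the
    # trees themselves is a crude simplification of the published method.
    import math

    CIRCLE_DIAMETER_M = 460.0

    def trees_enclosed(center, trees, diameter=CIRCLE_DIAMETER_M):
        """Count trees lying within a circle of the given diameter centered at `center`."""
        radius = diameter / 2.0
        return sum(1 for tree in trees if math.dist(center, tree) <= radius)

    def best_circle(trees):
        """Return (center, count) for the tree-centered circle enclosing the most trees."""
        return max(((tree, trees_enclosed(tree, trees)) for tree in trees),
                   key=lambda pair: pair[1])

    if __name__ == "__main__":
        cavity_trees = [(0, 0), (120, 80), (200, 150), (400, 90), (650, 400)]
        center, count = best_circle(cavity_trees)
        print(f"Best circle centered at {center} encloses {count} of {len(cavity_trees)} trees")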
The development of sophisticated computer programs and topographical analysis techniques may make assessment of red-cockaded woodpecker habitat and species abundance more accurate and less time consuming (Ertep and Lee, 1994; Reed et al., 1988). These advancements in geographic analysis and terrain assessment technology have provided for an unlikely union between wildlife managers and natural resource agencies on US military installations throughout the southeast (Ertep and Lee, 1994; USMC, 1995). The coordination of Geographic Information System programs (GIS) and Digital Multispectral Videography (DMSV) at Fort Benning , Georgia adds a new technological advantage in the search for red-cockaded woodpecker colonies and habitat by accurately identifying longleaf pine stands (USACE, 1996). Image analysis and confirming Global Positioning System information has been validated in initial tests by the confirmation of three GIS and DMSV-identified red-cockaded woodpecker sites through direct ground observation in the areas (USACE, 1996). Research is ongoing to examine the initial findings associated with these new and highly technical habitat assessment techniques (Ertep and Lee, 1996).
CAVITY ALTERATION
A significant problem in the recovery of red-cockaded woodpecker populations involves the usurpation of nesting cavities by other species, primarily southern flying squirrels (Glaucomys volans), northern flickers (Colaptes auratus), European starlings (Sturnus vulgaris), and other species of woodpeckers (Carter et al., 1989; Rossell and Gorsira, 1996). Invasive species occupy or significantly alter cavities, preventing their continued use by red-cockaded woodpeckers (Carter et al., 1989). Many nesting locations take months or years to construct, and adequate old-growth pines are now less frequent in the red-cockaded woodpecker range (Walters, 1986). Wildlife managers and foresters have experimented with altering or reinforcing red-cockaded woodpecker nesting cavities to discourage these invaders. Carter et al. (1989) describe specific techniques for cavity alteration. Three types of cavity restrictors alter the character of the cavity entranceway, acting as a deterrent to enlargement or access by other species (Figure 1). Cavity restrictors generally consist of a camouflaged metal plate fastened over the cavity entrance (Carter et al., 1989).
A study by Rossell and Gorsira (1996) demonstrates the importance of specific cavity parameters in assessing the availability of nesting and roosting cavities for red-cockaded woodpeckers. The results of their study showed that red-cockaded woodpeckers nested only in cavities with normal entrances (Rossell and Gorsira, 1996). Even if cavities with enlarged entrances contained normal chambers and were not occupied by competing species, red-cockaded woodpeckers avoided them (Table 1).
TABLE 1. Diurnal occupants versus entrance (ent.) and chamber (ch.) characteristics of active red-cockaded woodpecker (Picoides borealis) cavities in the Northeast Management Area, Fort Bragg, North Carolina, May, 1993 (Rossell and Gorsira, 1996).

                                     Use among entrance and chamber categories
                                     Normal ent.             Enlarged ent.
Occupant or contents               Norm. ch.  Enl. ch.     Norm. ch.  Enl. ch.
Red-cockaded woodpecker                21         0            0          0
Southern flying squirrel                7         4            1          7
Red-bellied woodpecker                  0         0            1          0
Red-headed woodpecker                   2         0            0          0
Screech owl                             0         0            0          2
Northern flicker                        0         0            0          2
Nest material                           1         1            1          1
Unoccupied                             29         3            3          4
CAVITY CONSTRUCTION
Techniques to artificially create red-cockaded woodpecker cavities have been initially successful on federal holdings such as Fort Bragg, North Carolina, which holds one of the largest red-cockaded woodpecker populations on federally administered lands (Copeyon et al., 1991; Rossell and Gorsira, 1996). The technique and effectiveness of artificial cavity construction is best examined by analyzing the physical characteristics of artificial red-cockaded woodpecker cavities, and reviewing studies wherein the cavities are used as a management tool (Copeyon, 1990; Copeyon, et al., 1991; Rossell and Gorsira, 1996).
Perhaps the most comprehensive study concerning artificial cavity construction for the benefit of the red-cockaded woodpecker was conducted by Copeyon, Walters and Carter as part of a ten year study of red-cockaded woodpecker populations in the Sandhills region of North Carolina (1991). Their work, Induction of Red-Cockaded Woodpecker Group Formation by Artificial Cavity Construction, (Copeyon et al., 1991) represents the most practical and valuable guide to red-cockaded woodpecker population enhancement techniques to date (Conner and Rudolph, 1995).
In 1990, Carole Copeyon published an article describing a technique for constructing artificial cavities for red-cockaded woodpeckers. Explaining that excavation of suitable living
cavities takes a minimum of ten months and normally much longer to complete, Copeyon (1990) surmised that construction of artificial cavities may be an effective management tool that would encourage colonization of abandoned areas and reduce energy expenditure associated with nesting cavity construction.
After making the decision to use artificial nesting cavities as a management tool, wildlife managers should attempt to select older trees in their respective areas of responsibility (Copeyon, 1990; Copeyon et al., 1991). Selection of older trees mimics the natural inclination of the red-cockaded woodpecker, and older trees have sufficient heartwood development to support large nesting and roosting cavities without sustaining damage (Copeyon, 1990). As indicated previously, red-cockaded woodpeckers generally select trees between 80 and 100 years old depending on species availability.
Copeyon (1990) reveals that an adequate artificial nesting cavity requires an entrance approximately 4.4-6.4 cm in diameter placed 1-24 meters above ground level. An entrance tunnel should be excavated into the heartwood, with the nesting chamber extending down at a right angle to the entrance tunnel to a depth of between 20.3 and 27.3 cm (Figure 2) (Copeyon, 1990). Small resin wells are drilled around the tree above and below the entrance site (Copeyon, 1990; Rossell and Gorsira, 1996). Seepage from these wells acts to discourage competitors and predators (Copeyon, 1990).
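As a rough illustration of how a manager might apply these published dimensions in the field, the short Python sketch below checks a proposed cavity against the ranges Copeyon (1990) gives above. The parameter names and the example values are hypothetical additions for illustration, not part of the published technique.

    # Minimal sketch: compare a proposed artificial cavity against the ranges
    # reported by Copeyon (1990). Field names and example values are hypothetical
    # illustrations, not part of the published technique.
    COPEYON_1990_RANGES = {
        "entrance_diameter_cm": (4.4, 6.4),
        "height_above_ground_m": (1.0, 24.0),
        "chamber_depth_cm": (20.3, 27.3),
    }

    def out_of_range(spec):
        """Return a list of parameters that fall outside the published ranges."""
        problems = []
        for name, (low, high) in COPEYON_1990_RANGES.items():
            value = spec.get(name)
            if value is None or not (low <= value <= high):
                problems.append(f"{name}={value} outside {low}-{high}")
        return problems

    if __name__ == "__main__":
        proposed = {"entrance_diameter_cm": 5.0,
                    "height_above_ground_m": 8.0,
                    "chamber_depth_cm": 30.0}
        print(out_of_range(proposed) or "proposed cavity is within published ranges")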
The results of Copeyon's initial study concerning red-cockaded woodpecker cavity construction are contained in Table 2.
TABLE 2. Use of artificial cavities by red-cockaded woodpeckers (Picoides borealis) in the Sandhills region of North Carolina (Copeyon, 1990).
Species    Age        # Constructed   # Active
Longleaf   Old              29            25
           Moderate          7             4
           Young             2             2
           Total            38            31
Loblolly   Old               4             3
           Young             2             1
           Total             6             4
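To summarize Table 2, the activation rates implied by the counts above can be computed directly; the brief Python sketch below copies the table's numbers and adds only the percentage calculation.

    # Activation rates implied by Table 2 (Copeyon, 1990). The counts are copied
    # from the table above; the percentage calculation is added only as a summary.
    table2 = {
        ("Longleaf", "Old"): (29, 25),
        ("Longleaf", "Moderate"): (7, 4),
        ("Longleaf", "Young"): (2, 2),
        ("Loblolly", "Old"): (4, 3),
        ("Loblolly", "Young"): (2, 1),
    }

    for (species, age), (constructed, active) in table2.items():
        rate = 100.0 * active / constructed
        print(f"{species:<9} {age:<9} {active}/{constructed} active ({rate:.0f}%)")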
Cavity construction for red-cockaded woodpecker management is an effective tool for inducing the formation of new colonies in the species' historical range, and may prove to increase reproductive success in already established colonies (Copeyon et al., 1991).
RESULTS
Further research is necessary to establish the impact of management for the red-cockaded woodpecker on other species (Masters et al., 1996). Initial studies indicate that management practices involving the clearance of hardwood understory and the initiation of prescribed burns in red-cockaded woodpecker habitat increase forage for white-tailed deer (Odocoileus virginianus) (Masters et al., 1996). Studies continue to examine concerns about possible negative effects of single-species management practices in association with red-cockaded woodpecker recovery efforts (Masters et al., 1996).
In the 25 years since the identification of the red-cockaded woodpecker as an endangered species, establishing a unified recovery program among the diverse federal agencies responsible for the administration of lands within the species' range has been difficult (Jackson, 1986). In the first 15 years of listing, no programs existed to effectively manage habitat for the red-cockaded woodpecker. Jackson (1986) described the situation as especially urgent, as the red-cockaded woodpecker was becoming dependent on widely dispersed islands of habitat, isolating colonies and creating the potential for catastrophic losses due to natural occurrences and inter-species competition for roosting and nesting sites. Since 1986, the habitat requirements for successful red-cockaded woodpecker colonies have been identified through research (Copeyon et al., 1991; Jackson, 1986). Improvements in identifying suitable habitat, altering existing cavities to decrease competition for roosting and nesting sites, and initiating formation of red-cockaded woodpecker colonies through construction of artificial cavities have been synthesized into a specific technique of managing federal lands for the red-cockaded woodpecker (Copeyon et al., 1991; Ertep and Lee, 1994; Rossell and Gorsira, 1996).
f:\12000 essays\sciences (985)\Enviromental\Manatees In Danger.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Manatees:
Quietly but swiftly, the plump, dark animal glided across the water while making sounds comparable to the squeaks and squeals of a whale ("Florida Manatee" 1). Some would say these aquatic mammals are the ugliest things below the surface; others would say that these animals are beautiful and resemble portly mermaids, but no matter what anybody says about the manatees, they are unique creatures (Ray and Ciampi 315). They are completely harmless mammals; they feed mostly on sea grass and sometimes on small underwater creatures like shrimp (Berrill 212). It is a shame for these creatures to be on the endangered species list.
Looking at the physical aspect, these animals are incredibly uncommon, and like no other creature on earth. These majestic beasts can float across the water amazingly fast for their size ("Florida Manatee" 1). They can weigh up to a ton, and get as long as fifteen feet. They are almost devoid of hair, except for some whiskers on their faces, and they have internal ears on the sides of their heads. Their nostrils are closed by valves, so they can accomplish such feats as flips and quick turns without losing any air. Manatees have no hind legs, but instead one big, flat, spatula-like tail (Sentman 327). This feature caused people to confuse manatees with mermaids for nearly four centuries (O'Shea 66).
Many biologists say that manatees possibly originated or evolved from ungulates such as elephants and cows because of the way they are built and certain features they have in common. Like elephants, manatees have peculiar half-moon shaped fingernails and thick, wrinkled skin. Manatees also share some traits with cows. The way manatees spend all day lazily grazing on the ocean floor is incredibly similar to the behavior of cows at pasture (Breeden 58).
Manatees eat an outrageous amount of food; they consume approximately ten percent of their body weight daily. The large quantities that manatees eat are another one of their unique qualities ("Florida Manatee" 1). People use manatees as natural "underwater lawn mowers", setting them free in lakes that have too much sea grass or too many plants. The manatees consequently eat up the vegetation, which frees up space to allow other wildlife to inhabit the lake. Manatees are also used to clear up canals and irrigation rivers that are clogged with an extreme amount of aquatic plants ("Manatee Facts" 1). The large diet can also be a disadvantage. With the amount of vegetation in manatee habitats decreasing tremendously, the manatees are in danger of starving to extinction. The underwater plants do not survive because of man's harmful deeds such as pollution, erosion caused by deforestation, and the draining of wetlands for the building of coastal homes. Since the 1970's, in Tampa Bay alone, eighty percent of sea-grass beds have vanished due to these causes (O'Shea 68).
Manatees can also be silly and clumsy at times; they have very bad eyesight and do not have the sonar or echolocation abilities that some underwater mammals have. This causes them to occasionally bump into large underwater rocks and other submerged objects. The poor navigational abilities of the manatee are an obvious disadvantage. A fast oncoming boat may not be seen by a manatee until it is too late ("Manatee Facts" 1).
Manatees are mainly solitary animals; they graze alone and do not travel in groups. Sometimes, though, manatees may be seen in temporary groups in which they socialize and which they may leave at any time. They communicate mostly using faint whistles and squeaks, but some biologists speculate that they use scent marks to mark their location like some land mammals. Newborn manatees will also stay with their mothers for at least a year, and will recognize them for the rest of their lives. If needed, nursing females will adopt a manatee calf that is not their own (O'Shea 70). This type of social behavior shows that manatees are extremely peaceful and very friendly.
They are also very agile animals, moving at a normal pace of five miles per hour. When provoked, they can burst to speeds exceeding fifteen miles per hour. They also can perform various feats such as barrel rolls, somersaults, head stands, and gliding upside-down ("Florida Manatee" 1). For the most part, manatees can be found pasturing on the bottom of the ocean. They drift around very slowly when doing this activity, and are usually unaware of anything else taking place around them. This can leave them greatly vulnerable to poachers and irresponsible boatmen (Berrill 212).
There are three different types of manatees: the West African, the Amazonian, and the Caribbean. The differences between the three are slight physical differences and habitat. The largest and most recognized of the three is the Caribbean or West Indian manatee, which lives off the southeastern coast of the United States. All three manatee species live in tropical or sub-tropical climates, and all three species have legends or myths linked to them.
The West African manatee is noted by a tribe in Mali, who thought that killing a manatee without permission from the gods would bring a curse, and that only trained wise-men could perform this task. The Caribbean manatee was recognized when Christopher Columbus sailed to the Indies and described them as mermaids in his journal. Lastly, the Amazonian manatee is noted by the South American Siona Indians in a very unusual story. The Siona Indians believed that an ancient god was deceived and trapped by a tapir, a horse-like animal. The tapir then subjected the god to attack by piranhas. In revenge, the god condemned one of the tapir's daughters to live forever in the water as a manatee (O'Shea 68).
The manatees' heritage can also be traced through their names. For instance, their mammalian order, Sirenia, is given that name because of the sounds that they make ("Florida Manatee" 1). Sailors mistook their sounds for the sounds of Sirens, characters in Greek mythology who had the bodies of birds and the heads of women. In the myth, the Sirens had such sweet voices that they lured sailors into driving their boats onto rocky shores ("Sirens" 1). Their name, manatee, comes from a Carib Indian word for a woman's breast. This is because the nipples of a female manatee are very prominent. They are located on the sides of the manatee, and they can be clearly seen from the surface (McClintock 45). Their common nickname, the sea cow, originates from an extinct species called the Steller's Sea Cow. The Steller's Sea Cow is in the same family as the manatee, and used to inhabit the frigid waters of the Bering Sea. The sea cow name lives on, while the original sea cow does not ("Sea Cow" 1). The nickname was passed to the manatees because of their relation to real cows.
Unfortunately, manatees presently face many problems, even with protective laws passed by the US government. Careless boaters are the manatees' worst enemy; on countless occasions boat propellers have sliced through the flesh of a manatee, and death usually followed. If the victim manatee does not die, it is left with lifetime propeller scars on its back. This is a shame because it can be avoided very easily, and it happens to helpless animals like the manatee. Other things kill manatees also, like herbicidal spray, flood control dams, and worst of all, illegal hunters. These present-day killers kill approximately 100 manatees a year ("Manatee Facts" 1).
However, these numbers are minuscule compared to the commercialized hunting of the manatees back in the late 1950's. As many as 7,000 manatees were killed in a year because of this commercial hunting. Fortunately the hunting slowed to a halt in the 1970's because humans had begun to realize the impact that they were having on the manatee population (O'Shea 68).
Not considering humans, manatees have almost no natural predators, but sometimes manatees may be killed by what they eat. Manatees consume a wide range of aquatic plants, including algae, which may contain brevetoxin. Brevetoxin is a toxin produced by red tide algae that kills many aquatic animals, including fish and apparently manatees. Brevetoxin is usually found in a type of reddish-brown algae called the red tide. Last July, the toxin alone killed 304 manatees, creating a new official record for the most manatees killed in a year ("Toxin Killed Manatees" A18). Aside from brevetoxin, the manatee's only other natural threat is its unawareness; they sometimes drift too far north and get killed by the cold sea water. This is a problem that whales and other large sea mammals also have to face (O'Shea 68).
Having been studied seriously only since the mid 1900's, manatees are fairly new creatures to the scientific community. This is probably because manatees are very timid creatures, which makes them hard to study. Still, not much is known about manatees to this present day. We do not know basic fundamental facts such as where they go in the warmer climates, exactly how long they live, and most importantly, precisely how many manatees are in existence today (Breeden 58). The lack of knowledge does not mean that steps are not being taken to study these animals. Recently, researchers attached satellite transmitters to manatees so that scientists can study their movements and speed. They have learned many new things from this study, such as that manatees can travel up to fifty kilometers a day and return to a designated location every season.
Further developments in manatee research will help in preventing the accidental deaths of many of these animals. What scientists have learned from the transmitters will help in regulating boat speeds in certain areas to avoid the propeller deaths of many manatees, thus decreasing the death toll. The research will also help designate specialized places to guard manatees; these areas will be watched very carefully by the US Fish and Wildlife Service (O'Shea 71).
Scientists have no clue as to the manatee population before the commercialized hunting of the 1900's; therefore, people do not know how large an impact man has made on the manatees. Even without the statistics, or the exact numbers of manatees killed by humans in the past, we still know that man has caused most of these deaths (Breeden 58). Whether it be by hunting or accidental incidents, man is the manatees' worst enemy. To the average person, manatees may not seem important, but they are essential to many living things, including humans. Manatees have changed history, and are important to the heritage of many cultures. Manatees are also important to the ecosystem, getting rid of unwanted or overgrown aquatic plants. If manatees were to become extinct, humanity would lose an old friend, and a very uncommon species.
Works Cited
Berrill, N.J. The Life Of the Ocean. New York: McGraw-Hill, 1966.
Breeden, Robert L., ed. The Ocean Realm. Washington D.C.: National Geographic
Society, 1978.
"Florida Manatee." Com1.med.usf.edu. Online. Netcom. 24 October 1996.
"Manatee Facts." World Wildlife Fund Canada. Online. Netcom. 24 October 1996.
McClintock, Jack. "Too Nice to Live." Time November 1990: 42+.
O'Shea, Thomas J. "Manatees." Scientific American July 1994: 66+.
Ray, Carleton, and Elgin Ciampi. The Underwater Guide to... Marine Life. New York:
A.S. Barnes and Company, 1956.
"Scientists Say Toxin in Red Tide Killed Scores of Manatees." New York Times 5 July
1996: A18
"Sea Cow." 1996. Microsoft Encarta 96 Encyclopedia. CD-ROM. Funk &Wagnall
Corporation. 1996.
Sentman, Everett. "Sirenia." Academic American Encyclopedia. 1993 ed.
"Sirens." 1996. Microsoft Encarta 96 Encyclopedia. CD-ROM. Funk &Wagnall
Corporation. 1996.
f:\12000 essays\sciences (985)\Enviromental\Natural Resources and managment.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Natural Resources and Management
Question one
Cultural resources are the traces of all past activities and accomplishments of people
that include designated historic districts, archeological sites, buildings, structures, and
objects. These also include less tangible forms like aspects of folklife, traditional or religious
practices, and landscapes. These nonrenewable resources often yield unique information about
past societies and environments, and can provide answers for modern day social and
conservation problems.
a ship wreck
an arrowhead
a cannon
an Indian campsite
Indian rock art
a tin can
a Victorian house
an historic mining town
an irrigation canal
a dam
All of these can be cultural resources. Cultural resources are the physical remains of a
people's way of life that archaeologists and historians study to try to interpret how people
lived. Cultural resources are important because they help us to learn about our past. These
tangible remains help us understand other cultures, appreciate architecture and engineering,
and learn about past accomplishments. Furthermore, they offer educational and recreational
opportunities and provide links to our past.
People have lived in North America for at least 12,000 years. Archaeologists and
historians have divided this time span into prehistoric and historic periods. The prehistoric
period extends from the earliest arrival of humans in North America to the coming of the
European explorers. The historic period begins with the arrival of these explorers and continues
up to the present.
As you walk across public land, something on the ground catches your eye. You pick up a
piece of pottery or an arrowhead, wondering about the people who made this artifact. Who were
they? When did they live? How did they live?
If you return the artifact to where you found it, you have left in place a clue that could
help us answer these questions. If you take the artifact home with you, or just move it to a
different spot, you may have destroyed a clue to the past. Each artifact is not merely
something to be held and examined; it is also a bit of information which, when taken together
with other bits, allows us to unravel the mysteries of the past.
The past belongs to all of us. It is part of our heritage as Americans and human
beings. People who loot or vandalize archaeological or historic sites are stealing not only
artifacts, but irreplaceable information; they are stealing our past.
People who deface or loot historical sites, disturb Indian burials, or buy or sell grave
goods can be fined or imprisoned under the Archaeological Resources Protection Act, Native
American Graves Protection and Repatriation Act, and Department of the Interior regulations.
You can help protect America's precious cultural resources: Treat historic and
archaeological sites with care and respect when you visit. Take only with your eyes and heart;
leave them intact for your children's children. If you see someone vandalizing or looting a site,
notify the regional archaeologist as soon as possible. Do not attempt to confront the vandal
yourself. Join your local or state archaeological or historical society. You will learn more about
the archaeology and history of your part of the country. Many states have volunteer programs
that allow people to be trained and work on archaeological excavations.
f:\12000 essays\sciences (985)\Enviromental\No You Cannot Come In.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No! You Cannot Come in
Garrett Hardin writes about saving the poor in
his essay"Lifeboat Ethics: The Case Against the
Poor" found in The Blair Reader. Hardin writes about
how the rich countries are in the lifeboat and the
poor countries are swimming in the ocean. He also
writes about how the United States helps other
countries. Hardin feels that if the government keeps
helping other countries and letting people in then
America will also drown. "We must convince them if
we wish to save at least part of the world from
environmental ruin"(page 765).
Why should I help the poor countries? Why
should I let the immigrants in? I see no reason for
helping someone who is not an American. These
non-Americans are taking my hard-earned money that
they did not earn.
I am tired of the United States of America giving
my money to the poor countries. The government is
giving these people my money for which I worked
hard. The government does not ask for my
permission to give these people my money. By
letting these people on our lifeboat the government is
drowning us all. "If we do let an extra 10 people in
our lifeboat, we will have lost our 'safety factor,' an
engineering principle of critical importance" (page
757). I cannot take a chance in helping people if it is
going to put me at risk.
Instead of giving the money to non Americans it
should be used only in America. The money used to
help the poorer countries can be very useful in the
United States. The middle class people in America
get no help. More of that money can go toward the
middle class families. The middle class families work
hard for their money. The government helps poor
families with food, housing, education and many
more things. The rich have more money than they
need, but the middle class is left struggling. The
middle class people cannot move up. The middle
class cannot get any help from the government. It
makes me mad that the poor Americans do not take
advantage of some of the opportunities available for
them. The middle class people sometimes work two
or three jobs to pay for their own or their children's
college education. The government should use the
money they are sending to other countries to help
the taxpayers.
In my family we have just enough money to get
by. I do not see the government knocking on my
door handing me food or money. Why should they
give my money to other countries? The government
will not help my family because our gross income is
too high to receive any help. Well, true, maybe our
gross income is high, but we do not take all that
money home. The government is taking money out
of my paycheck, money that I struggle for, and giving
it to other countries. I have my own dependents to
take care of. I should not have to take care of other
people. It was my decision to have my children. If I
wanted more dependents, I would have them. The
government also tells me that if I had gotten a used
car with cheap payments then I would have extra
money left over. Why should I have to get a used
car? If I get a used car, I will only be spending more
money getting it fixed. I am tired of the government
giving me excuses on why I do not need any help.
The only reason the government cannot help me is
because they do not have the money. The only
reason the government does not have the money is
because they are giving it to other countries.
I am tired of the immigrants coming into this
country. Hardin says "But aren't we all immigrants, or
the descendants of immigrants?"(page 764). Well
yes, I am a descendant of immigrants and I
understand that. Nevertheless, what gives the
immigrants the right to come in my boat and take my
money? I have no objections to those immigrants
who come here to work or bring work. What I do
have a problem with is those who come here and
take advantage of our welfare system. These new
immigrants did not deposit money into the welfare
system, so why should they be allowed to withdraw?
If the immigrants were not taking our tax money then
there would be more money for our people.
Who is going to help me? Nobody is going to
help me, not even my own country. The chances of
me getting social security when I am old are slim.
Meanwhile, these immigrants are getting my money.
While my money is being spent on non Americans, I
might be living in a shack going through Dumpsters
to get by, when I am old.
Garrett Hardin in "Lifeboat Ethics: The Case
Against the Poor" writes that if America keeps trying
to save other people from different countries then
America will also drown. I do not want these people
in my lifeboat. America is already starting to drown. I
want my money to be used to help only American
families. Leave American money in America!
f:\12000 essays\sciences (985)\Enviromental\Nuclear Smuggling.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Smuggling of Nuclear Material
Over the past five years the former states of the Soviet Union haven't been able to prevent the leakage of nuclear material. Nuclear materials and technologies are more accessible now than at any other time in history, due to the breakup of the Soviet Union and the worsening of economic conditions. No longer do the Soviet KGB, the Soviet military and the Soviet border guards have the control to stop the smuggling of nuclear materials. With the Cold War over, there are huge stockpiles spread across over 100 nuclear sites (see Appendix A). Russia alone has an inventory of 1,300 tons of highly enriched uranium (HEU) and 165 tons of weapon-usable plutonium. Such material is coming into high demand on the market. Terrorists, organized crime and countries with nuclear ambitions are high-bidding contenders for the material. The United States is also becoming involved in order to prevent a nuclear disaster. The U.S. has just begun its large task, and with Russia's worsening economy, the smuggling of nuclear material will continue.
During the Cold War the security of Soviet nuclear weapons and missile materials was based on a highly centralized military system operating within a strong political authority. The workers back then were well disciplined and each individual knew his or her role. The workers were among the best treated and were loyal to the Russian military. They are now suffering hardships and are forced to scavenge anything to pay for their food, rent and social services.
A new trend is already occurring among some of the workers. There are those who will seek employment outside the nuclear field, in the commercial sector, where salaries are higher. Then there are the unfortunate ones who lose their jobs and find no work. The scarier thought is that discontented people in Russia's nuclear complex with access to nuclear materials will sell themselves to make a quick buck. Most suppliers of nuclear material were insiders who had worked or were then working at nuclear research institutes or naval bases. Most perpetrators had no customers in hand but knew that a quick profit existed (see Appendix B).
The first confirmed case involving the diversion of HEU occurred at the Luch Scientific Production Association in Podolsk. Between May and September of 1992, Leonid Smirnov, a chemical engineer, stole approximately 1.5 kg of weapon-grade HEU. He recovered the enriched uranium in the form of a uranium dioxide powder and stored it on his apartment balcony. He was apparently motivated by an article on the fortunes to be made selling HEU. On October 9, 1992 he was apprehended at a Podolsk railroad station. Under Article 223 he was sentenced to three years.
The largest quantity of weapon-usable nuclear material smuggled outside Russia was found in Prague, Czech Republic on Dec. 14, 1994. Two canisters of HEU enriched to 87.79% U-235 were found in the car of Jaroslav Vagner. Vagner had worked at several power stations at Dukovany and Temelin, and he left due to poor wages. His arrest came due to an anonymous telephone tip.
The list of potential proliferators includes states, separatist and terrorist groups, and organized crime. Many countries are looking for critical components for their nuclear weapons programs. Acquiring such material would shorten the time needed to produce a nuclear weapon. For instance, Iran is actively pursuing nuclear weapon capability. They're attempting to develop both plutonium and HEU. In 1992 Iran unsuccessfully approached the Ulba Metallurgical Plant, and in 1993 three Iranians belonging to the intelligence service were arrested in Turkey while seeking to acquire smuggled nuclear material. The CIA's John Deutch estimates: "Iran is a couple of years away from producing a nuclear weapon."
Iran's neighbor Iraq continues its nuclear program despite being significantly damaged by Operation Desert Storm and continuing U.N. sanctions. The CIA assesses that Iraq would take any opportunity to buy nuclear weapons materials. They have already tried and failed in one incident. In 1994, Jordanian authorities intercepted a shipment of sophisticated Russian-produced missile guidance instruments bound for Iraq.
Another country with nuclear weapons ambitions, Libya, currently operates a Soviet-supplied research center near Tripoli. President Qadhafi is said to be recruiting nuclear scientists to aid in developing nuclear weapons.
Terrorists are also potential buyers of radioactive materials. They could use them to contaminate water supplies, business centers or government facilities. Terrorists can be categorized into traditional terrorists, who would remain hesitant to use nuclear weapons for fear of a crackdown on their supporters, and multinational terrorists, who are motivated by revenge, religious fervor and hatred for the West. On March 20, 1995 the Japanese cult Aum Shinrikyo attacked Japanese civilians with deadly gas. The terrorist group had also tried to mine its own uranium in Australia.
Organized crime is a powerful and pervasive force in Russia. The CIA estimates there are 200 large, sophisticated organized crime operations. They've established international smuggling networks and have connections to government officials. Access to nuclear weapons is readily available because of the connections they've established.
The smuggling of nuclear material out of Russia has become profitable for the low-income worker as well as beneficial to terrorists, organized crime and nuclear research centers. There is little security at most of the nuclear facilities, so obtaining the material is quite easy. More cases of unsuccessful attempts are becoming known to the media, but both the Russian and U.S. governments have declined to comment on any successful attempts. If left unchecked it may even escalate to the complex level of drug smuggling.
Work Cited List
Thomas B. Cochran and Robert S. Norris, Making the Russian Bomb: From Stalin
to Yeltsin, Boulder, CO: Westview Press, 1995.
John Deutch, "The Threat of Nuclear Diversion," Statement for the Record, CIA,
March 20, 1996.
Alexei Lebedev, in "Russian Weapons Plutonium Storage Termed Unsafe
by Minatom Official," Nucleonics Week, April 28, 1994.
William C. Potter, "Arms Control Today," Monterey Institute of International
Studies, October 1995.
Paul Woessner, "Nuclear Material Trafficking: An Interim Assessment," Bridgeway
f:\12000 essays\sciences (985)\Enviromental\Nuclear Waste Management.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nuclear Waste Management
Nuclear energy harnesses the energy released during the splitting or fusing of atomic nuclei. This heat energy is most often used to convert water to steam, which turns turbines and generates electricity.
However, nuclear energy also has many disadvantages. An event that demonstrated this was the terrible incident at Chernobyl'. Here on April 26, 1986, one of the reactors of a nuclear power plant went out of control and caused the world's worst known reactor disaster to date. An experiment that was not properly supervised was conducted with the water-cooling system turned off. This led to the uncontrolled reaction, which in turn caused a steam explosion. The reactor's protective covering was blown off, and approximately 100 million curies of radionuclides were released into the atmosphere. Some of the radiation spread across northern Europe and into Great Britain. Soviet statements indicated that 31 people died because of the accident, but the number of radiation-caused deaths is still unknown.
The same deadly radiation that was present in this explosion is also present in spent fuels. This presents special problems in the handling, storage, and disposal of the depleted uranium. When nuclear fuel is first loaded into a reactor, 238U and 235U are present. When in the reactor, the 235U is gradually depleted and gives rise to fission products, generally, cesium (137Cs) and strontium (90Sr). These waste materials are very unstable and have to undergo radioactive disintegration before they can be transformed into stable isotopes. Each radioactive isotope in this waste material decays at its characteristic rate. A half-life can be less than a second or can be thousands of years long. The isotopes also emit characteristic radiation: it can be electromagnetic (X-ray or gamma radiation) or it can consist of particles (alpha, beta, or neutron radiation).
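The practical meaning of a half-life can be made concrete with a short calculation: after each half-life, half of the remaining isotope is left, so the surviving fraction after a time t is (1/2)^(t / half-life). The Python sketch below applies this to commonly cited half-lives of roughly 30 years for 137Cs and 29 years for 90Sr; those figures come from outside this essay and are used only for illustration.

    # Surviving fraction of a radioactive isotope after time t, given its half-life.
    # The half-lives below (about 30 years for Cs-137 and 29 years for Sr-90) are
    # commonly cited values used only for illustration; they are not from this essay.
    def remaining_fraction(t_years, half_life_years):
        """Fraction of the original isotope still present after t_years."""
        return 0.5 ** (t_years / half_life_years)

    if __name__ == "__main__":
        for name, half_life in [("Cs-137", 30.1), ("Sr-90", 28.8)]:
            for t in (30, 100, 300):
                pct = 100.0 * remaining_fraction(t, half_life)
                print(f"{name}: after {t:>3} years, about {pct:.1f}% remains")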
Exposure to large doses of ionizing radiation causes characteristic patterns of injury. Doses are measured in rads (1 rad is equal to an amount of radiation that releases 100 ergs of energy per gram of matter). Doses of more than 4000 rads severely damage the human vascular system, causing cerebral edema (excess fluid), which leads to extreme shock and neurological disturbances, causing death within 48 hours. Whole-body doses of 1000 to 4000 rads cause less severe vascular damage, but they can lead to a loss of fluids and electrolytes into the intercellular spaces and the gastrointestinal tract, causing death within ten days because of a fluid and electrolyte imbalance, severe bone-marrow damage, and terminal infection. Absorbed doses of 150 to 1000 rads cause destruction of human bone marrow, leading to infection and hemorrhage; death may occur four to five weeks after the date of exposure. Currently only the effects of these lower doses can be treated effectively, but if untreated, half the persons receiving as little as 300 to 325 rads to the bone marrow will die.
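The dose bands just described can be read as a simple threshold table. The short Python sketch below encodes those bands only to restate them compactly; the one-line labels are shorthand for the effects described above, not clinical terminology.

    # Whole-body dose bands as described in the paragraph above (in rads).
    # The thresholds come from the text; the labels are compact shorthand.
    def classify_dose(rads):
        if rads > 4000:
            return "severe vascular damage and cerebral edema; death within ~48 hours"
        if rads >= 1000:
            return "fluid/electrolyte loss and marrow damage; death within ~10 days"
        if rads >= 150:
            return "bone-marrow destruction; death possible 4-5 weeks after exposure"
        return "below the injury bands discussed here"

    if __name__ == "__main__":
        for dose in (100, 300, 1500, 5000):
            print(f"{dose:>5} rads -> {classify_dose(dose)}")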
To store the nuclear waste products that give off this deadly radiation, many precautions must be taken. Spent fuel may be stored or solidified.
The primary method is underwater storage. Since spent fuel continues to be a source of heat and radiation after it is taken from the reactor, it can be stored underwater in a deep pool at the reactor site. The water keeps the fuel assemblies cool and acts as a shield to protect workers from gamma radiation. The water is kept free of minerals that would corrode the fuel tubes.
Fuel assemblies are kept separated in the pool by metal racks that leave one foot between centers. This grid structure is made with metal containing boron, which helps to absorb neutrons and prevents their multiplication.
A problem with this type of storage is that in 1977, a federal moratorium on reprocessing was instituted. This required the utility companies to keep used fuel at the reactor site. This requirement was met by building closer-packed racks to store more fuel in the same amount of space.
An alternative way of storing spent fuel is through solidification. Federal regulations require that liquid reprocessing waste be solidified for disposal within five years of production. There are different approaches to solidification. These include calcination, vitrification, and incorporation of waste into ceramics and synthetic materials.
Calcination is a process in which the liquid waste is sprayed through an atomizer and then dried at a high temperature. This results in "calcine", which is highly radioactive and is temporarily stored in bins for further processing.
Vitrification consists of the mixing of calcined waste with borosilicate glass grit. This is melted in a specialized furnace and cast into a mold. Borosilicate glass is considered a suitable matrix for nuclear waste because the glass has strong interatomic bonding but not a strict atomic structure. Because of this, it is able to contain a variety of different elements. Under running or standing water, radioactive products leak out at a very slow rate. In addition, the glass is resistant to structural damage from radiation.
Another way to encapsulate the waste is through crystalline ceramics. The ceramic matrix is a substance that crystallizes into an ordered atomic structure that can be altered to suit specific types of wastes and geochemical conditions. Radioactive products leak very slowly from this type of structure as well, and the crystalline structure continues to exist even if the ceramics break down.
Dry storage of spent fuel has the advantage of avoiding the need for water pools. Containers are easily made, and very little maintenance is required. Design and safety considerations for these containers include radiation levels, effects of temperature, wind, tornado, fire, lightning, snow and ice, earthquake, and aircraft crash.
One of these containers is called the CASTOR V/21. This is a cylindrical cast-iron container 16 feet tall, about 8 feet in diameter, with 15-inch-thick walls. It has fins on its outside to help dissipate decay heat. This container holds 21 fuel assemblies. These types of containers are relatively low in cost compared to storage in a pool of water and can be moved around if necessary.
Another way to dispose of radioactive wastes is through geologic isolation. This is the disposal of wastes deep within the crust of the earth. This form of disposal is attractive because it appears that wastes can be safely isolated from the biosphere for thousands of years or longer. Disposal in mined vaults does not require the use of advanced technologies, rather the application of what we know today. It is possible to locate mineral, rock, or other bodies beneath the surface of the earth that will not be subject to groundwater intrusion. A preferred location would be at least 1,500 feet below the earth's surface, so that the waste may avoid erosion for the specified period of time.
None of the preceding methods offers a complete solution to the problem of nuclear waste. They only bury it, temporarily shoving it out of our current view for a later generation to solve. Maybe the future inhabitants of this world will find a solution to this problem, for as we choose to continue the use of nuclear power, more and more waste will be accumulated, emitting deadly radiation long after we pass away.
f:\12000 essays\sciences (985)\Enviromental\Nuclear Weapons.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Eric Sajo
Research and Writing
12-2
10/21/96
Mrs. Krantz
Nuclear Weapons
A nuclear weapon is any weapon that gets its destructive power from the transformation of matter in atoms into energy. Nuclear weapons include missiles, bombs, artillery shells, mines and torpedoes. Other names for nuclear weapons are atomic bombs and hydrogen bombs. The United States was the first country ever to use a nuclear weapon in battle, against Japan.
The major arguments for a test ban were first proposed in the 1950's. Today, however, the stopping of radioactive fallout and of the superpower arms race is still under negotiation. Nations have sought to limit the testing of nuclear weapons to protect people and the environment from nuclear radiation and to slow the development of nuclear weapons. In 1963, Great Britain, the Soviet Union, and the United States negotiated the first test limitation treaty, the Limited Test Ban Treaty. The Treaty's signers agreed not to test nuclear weapons in the atmosphere, in outer space, or underwater. The only testing that was allowed was underground testing.
Attempts to control the number of nuclear weapons in the world began about 1970. The Strategic Arms Limitation Talks (SALT) were a convention held by the United States and the Soviet Union to limit the number of nuclear weapons. In 1982, the United States and the Soviet Union began the Strategic Arms Reduction Talks (START); unlike the SALT talks, these were aimed at reducing the number of nuclear weapons each country could hold. Another treaty, signed in 1987, was the Intermediate-Range Nuclear Forces (INF) Treaty, which called for the dismantling of ground-launched nuclear missiles.
A major obstacle to controlling nuclear weapons has been a lack of trust between the two principal powers: the United States and the Soviet Union. The relationship improved in the late 1980s, though, after President Gorbachev introduced the principles of glasnost and perestroika to the Soviet political system. In 1989 and 1990, democratic reforms spread across Eastern Europe. These reforms have greatly reduced tensions.
The country of China still wants to test its nuclear explosives for mining and for some construction. For two years China has successfully held up the 38-nation Geneva negotiations on a comprehensive test ban treaty. No other nation has supported the Chinese position; the others regard China's reason as a lame excuse to start setting off explosions again. The treaty plays a very important role in creating a barrier to the spread of nuclear weapons. The two biggest problems with nuclear weapons nowadays are that testing isn't necessary to develop a workable, Hiroshima-type fission bomb in this age of computers and widespread access to nuclear data, and that neither India nor Pakistan, the two most worrisome nuclear powers, is likely to sign any deal at all.
The United States, Great Britain, Russia, and France have joined a moratorium on all testing. Only China continues to develop lightweight, multiple warheads that could be deployed on submarine-based missiles. Claiming discrimination, India insists it will not accede to a test ban unless the declared nuclear states agree to give up their nuclear arsenals by a certain date. Pakistan says that if India does not sign, it won't either. One frequently mentioned scenario is for India to conduct a quick series of tests to develop a thermonuclear weapon and only then give in to international pressure to sign the treaty.
The Comprehensive Test Ban Treaty, if eventually agreed to, might not be so comprehensive after all. Meeting in Geneva, the 61-nation Conference on Disarmament again failed to produce an agreed treaty before breaking up. Negotiators will return to make another, final effort to produce a test ban treaty in 1996.
After 18 months of talks, the proposed treaty text bans all nuclear tests, no matter the size or purpose. Still unresolved is whether ratification by the three nuclear powers India, Pakistan, and Israel should be required before the treaty enters into force. India has declared that it will not ratify a test ban without a timetable for disarmament by the United States, Russia, France, Britain and China. China and Britain are reluctant to accept restrictions on their programs unless India joins in.
Many people believe the political developments of the late 1980s and of 1990 marked the end of the Cold War. Military analysts expect that nuclear arsenals will be reduced in size. At the same time, most weapons specialists expect that nuclear weapons will continue to help keep political tensions, in Europe or elsewhere, in check. They believe that the key issue will be to define the role of nuclear weapons in whatever military forces are considered necessary.
In conclusion, nuclear weapons aren't safe for any country, no matter its standing among others. Testing nuclear weapons destroys the well-being of our Earth. So many treaties have been passed, yet it still seems that a nuclear war is possible. The United States has conducted the most known nuclear tests, with a record of 1,030; the closest country to us is Russia, with 715 tests. As you can clearly see, it will take a lot more than treaties to negotiate. Let's just hope this will all end before that ozone layer breaks down on us!
BIBLIOGRAPHY
1. Budanski, Stephen. "Ban the bomb? Not quite." U.S. News and World Report, 17 June 1996: 30.
2. Cohen, James. Nuclear Arms. Pittsburgh, 1979.
3. Mitchell, Alison. "Clinton and Yeltsin Accentuate the Positive at Summit Meeting." Newsweek, 22 May 1990: A7, column 1.
4. Von Hippel, Frank. "Bombs Away." Discover, April 1992: 32-35.
5. Zimmerman, Tim. "Nuclear Fiction." U.S. News and World Report, 24 August 1996: 20.
f:\12000 essays\sciences (985)\Enviromental\Oceans.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The ocean covers seventy-one percent of our planet's surface. Life is concentrated, however, in about four percent of it, and it is this four percent that is being polluted by the ton every day. Everyone needs to understand that the oceans are not endless and not bottomless. They must also see that the ocean contains much marine life that is essential to our ecosystem. In order to preserve this other world of life, we must stop polluting the oceans and begin to clean them up. Although using the ocean as a toxic waste dump may provide a cheap alternative, we must not succumb to these barbaric urges. If we neglect to deal with these issues, then the world as we know it may not be as great a world for our children as it was for us.
First, we need to understand that the oceans are not the vast resources we believe them to be, but vulnerable natural resources. Before Columbus' day, the oceans were thought to be boundless. Although Columbus proved this theory incorrect, the thought still remains in today's societies. "For we of the 20th century still treat the ocean as the endless, bottomless pit it was considered to be in medieval times."(Heyerdahl) The majority of the world's population still lives under the misconception that the ocean is a hungry abyss, eager to devour all their waste. These beliefs, however, are untrue. The average depth of the oceans is only a little more than a mile, and some lakes rival this depth. Although the size of the ocean is often pondered, the thought that it may one day be gone is never even considered.
The vast majority of all life in the ocean inhabits only 1/25 of these waters, but it is these surroundings that are in the most danger. In the beginning of the world, marine plankton was vital to the evolution of man. Today, it is even more important to us, since it provides a great percentage of the oxygen we receive. "These minute plant species manufactured so much oxygen that it rose above the surface to help form the atmosphere we have today."(Heyerdahl) With the disappearance of the plankton through increased pollution, the obvious result will be a severe loss of our oxygen supply, which in turn places hard limits on all people. And with urban expansion leading to deforestation, our dependence upon marine life becomes heightened. The importance of marine plankton cannot be emphasized enough, yet most people fail to recognize it as the vital life supply it is.
Further, since the turn of the century, humans have continually polluted the waters of the ocean. The trend has not lessened but has increased as time has passed. "Most of our new chemical products are not only toxic: they are in fact created to sterilize and kill. And they keep on displaying these same inherent abilities wherever they end up."(Heyerdahl) Although pollution reforms are in place, the clean-up efforts cannot keep up with the constant pollution. These wastes are not degradable; they remain in the ocean causing more death until they wash up on a distant shore. "Through sewers and seepage they all head for the ocean, where they remain to accumulate as undesired nuts and bolts in between the cogwheels of a so far smoothly running machine."(Heyerdahl) Every day, over 40,000 tons of garbage from the major cities of America alone are taken on a one-way excursion. Where to? The ocean, to sit like the many generations of waste before them. This constant abuse of our natural resources will not be endured for long, for even the ocean has limits.
In order to survive longer as a species on this planet, we must stop polluting the oceans, save the fragile marine ecosystem, and understand that even the ocean has limitations and that they are being pushed too far. "Can man survive with a dead ocean?"(Heyerdahl) The answer is clear and obvious: no. We cannot conceivably survive without the immeasurable subsistence the ocean provides us. Currently, we are on the path to self-destruction. We need to get off this well-traveled path and blaze a new one for ourselves. These ideas have been around since the beginning; now it's time to adhere to them.
f:\12000 essays\sciences (985)\Enviromental\our radiant planet depletion of the ozone layer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Our Radiant Planet- Depletion of the Ozone Layer
Ozone is a relatively unstable form of molecular oxygen containing three oxygen atoms, produced when upper-atmosphere oxygen molecules are split by ultraviolet light. Stratospheric ozone is found in a broad band, extending generally from 15 to 35 km above the earth. Although the ozone layer is surprisingly thin, it acts as a protective shield for the earth, as it filters out most of the harmful solar ultraviolet radiation (in particular UV-B) that would otherwise reach our planet's surface.
Humans have damaged the ozone layer by adding molecules containing chlorine and/or bromine that lead to ozone destruction. The largest group among these are the chlorofluorocarbons (CFCs). At ground level, these molecules are very stable and have many uses in industrial and domestic applications, such as in spray cans, industrial solvents, degreasing compounds, and cooling in fridges. However, when released into the stratosphere, such molecules can be broken down by energetic light rays (UV-C radiation) in a reaction that liberates an atom of chlorine, which destroys ozone by reacting with ozone molecules, forming ClO and oxygen. One atom of chlorine can destroy 10,000 ozone molecules! Molecules containing bromine, as well as nitrous oxide and hydrogen oxide radicals, are also particularly dangerous.
As a result, the ozone in the stratosphere has been reduced to such an extent that ozone holes are appearing around the globe, in particular one over Antarctica that in 1995 measured 8.2 million square miles. This depletion has allowed more dangerous UV-B radiation to reach the earth's surface.
So what effects will ozone depletion have on us? Although, at present, the ozone layer blocks out most of the damaging UVB radiation received from the Sun, a small amount slips by, damaging our skin in the form of sunburns and suntans. UVB radiation is strongly absorbed in the skin and in the outer layers of the eye. Human skin has developed various defence mechanisms against the damaging effects of UV radiation. The skin adapts to increased UV exposure by thickening its outer layer and by developing pigmentation that serves to shade the more vulnerable, deeper-residing dividing cells. Overly damaged cells will normally self-destruct through a process called apoptosis, and if this fails, the immune system should get rid of any resulting aberrant cells. It is when these natural safeguards fail or are overcome by UVB that real trouble can ensue.
The best-known impacts on human health from exposure to UV radiation are skin aging, sunburning and skin cancer, although recent research has expanded the list to include eye damage such as cataracts and the suppression of the body's immune response to both infectious disease and chemical sensitivities.
Sunburning is caused by UVB exposure. It causes a reddening of the skin and over time can cause a dramatic aging effect on the skin.
The three types of skin cancer induced by UV radiation are basal cell, squamous cell and melanoma. Basal cell and squamous cell cancers, also called non-melanomas, are caused by UVB irradiation and account for 93% of skin cancers. They can be easily removed and are rarely fatal. The rarest skin cancer caused by UVB is melanoma, which is also the most deadly; it spreads quickly to the blood, lymphatic system and other organs.
The damage caused by UV radiation to the eyes ranges from acute sunblindness to chronic damage such as cataracts and eye tumours. The principal form of chronic damage linked to UV radiation is cataracts.
Exposure to UVB has been shown to suppress a portion of the human immune system that is important to the skin's defence against infectious agents such as bacteria, viruses and chemicals. The impact of UVB on human disease works on three levels: the first is an increase in the activation of viruses, the second is a decrease in the immune system's ability to respond to viral and bacterial infection, and the third is a decrease in tolerance to chemical exposure.
So what about the ecosystem? All animals and plants that are exposed to the Sun, though well shielded by the ozone layer, have developed ways to cope with the UVB radiation that reaches the earth's surface, but again, when their tolerance levels are exceeded, many species may be clearly limited in their growth.
In addition to these direct growth effects, more subtle changes may occur, such as delays in flowering and shifts in leaf distribution in plants, causing dramatic shifts in plant populations and in biodiversity. Depletion of plants that serve as a sink for carbon dioxide could lead to major problems and enhance climate change, and changes in the food web could have a domino effect that could affect mankind.
Similar processes can occur in the marine ecosystem, decreasing populations of phytoplankton in the world's oceans. Disruption of the marine food webs will have the same effect as the changes in the terrestrial webs mentioned above.
Ecosystems may be further disturbed by effects of UV radiation on animals, especially in vulnerable, early stages of life such as larvae or eggs.
Scientific research has only begun to scratch the surface of the impacts of UV-B radiation, but what is known is cause for concern. Despite the ever-increasing list of negative UVB effects, ozone-depleting chemicals are continually released into our atmosphere. Despite the availability of safer alternatives, technologies which are only slightly safer than the ones they replace continue to be promoted. Despite the wisdom of global science, we still drag our feet. How far will the burning go? It is already deep into our seas and forests, and deep into our skin. We need to act now, and protect our planet. Our radiant earth.
Bibliography
The Electric Library
Encarta 95
Ozone Depletion book at http://www.now.edu.au/arts/sts/sberer/h.html
Beder, Sharon. The Hole Story. Sydney, 6/1992.
Johnston, H.S. Atmospheric Ozone. Annu. Rev., London, 1994.
Wallace, J.M., and P.V. Hobbs. Atmospheric Science: An Introductory Survey. Academic Press, USA, 1993.
f:\12000 essays\sciences (985)\Enviromental\Ozone Acid rain and Solid Waste polution in Newfoundland C.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ozone (O3) is a molecule consisting of three oxygen atoms, similar to the oxygen we breathe (O2); however, ordinary oxygen consists of only two oxygen atoms. In the stratosphere, a region high in the upper atmosphere, ultraviolet light is responsible for breaking breathable oxygen (O2) down into its two separate oxygen atoms. Lone oxygen atoms are markedly reactive. When a lone oxygen atom comes into contact with a breathable oxygen molecule (O2), it combines with it to form ozone (O3). The ozone layer is a small residual amount of ozone concentrated in a band in the upper atmosphere. This band of concentrated ozone resides approximately between twenty and forty kilometers up in the stratosphere. The reactions that both create and destroy ozone have come into a dynamic equilibrium. This dynamic equilibrium is very delicate and arose during atmospheric formation (Environment Canada, 1996).
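The formation steps just described can be written out as a simple two-step scheme (a standard textbook summary of this chemistry, added here for clarity rather than taken from the Environment Canada report cited above):

    O2 + ultraviolet light -> O + O
    O + O2 -> O3

Together with the destruction reactions discussed below, these reactions maintain the dynamic equilibrium referred to above.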
Ozone, however, is very rare even in the ozone layer. Oxygen makes up approximately twenty percent of air, while ozone makes up only 3 x 10^-5 percent of air. Nevertheless, this minuscule amount of ozone is enough to protect the earth from most ultraviolet light; it prevents most UV-B radiation from reaching the surface of the earth (Environment Canada, 1996).
Ozone is very important to life on earth because the harmfulness of UV-B radiation stems from the high energy of these light rays, which enables them to penetrate deeply into water, plant tissue and the epidermal tissue of animals. Increased UV-B radiation harms the metabolic system of cells and ultimately damages the genetic material present in affected cells. Living organisms on the surface of the earth have always been exposed to some, and only slightly differing, levels of UV-B radiation depending on geographic location and season. Through evolution, cellular repair mechanisms have evolved to safeguard cells against damage done by UV-B radiation. With an increase in UV-B radiation, more damage is done to cellular functions than the natural protection system can deal with (Environment Canada, 1996). Life on earth would more or less be void if not for the formation of the ozone layer during atmospheric formation (Porter, 1996). Without the ozone layer, the harmful UV-B radiation would not allow the growth of autotrophic plants, resulting in a reduction in oxygen production; ultimately, the destruction of most living organisms on the earth's surface would result. Increased UV-B radiation has been linked to many instances of increased health problems among humans. UV-B radiation leads to increased skin cancer, eye damage, and possible inhibition of the immune system (Health Canada). These effects have been noticed in humans, and it is presumed that these problems will occur in other animals as well. Terrestrial plant life is highly vulnerable to increased UV-B radiation, which can cause the destruction of chlorophyll in plant leaves, resulting in less growth and ultimately in reduced crop yields, reduced forest annual increments and a general decline in forest ecosystem health. UV-B radiation also has the potential to decrease the populations of phytoplankton in the world's oceans, causing yet more problems when one considers the place of phytoplankton in the ocean food chain (Clair, 1996).
Humans are responsible for almost all activities and pollutants that deplete the ozone layer. Humanity has damaged the ozone layer by adding synthetically made molecules containing chlorine and/or bromine to the atmosphere; both chlorine and bromine contribute to ozone destruction. The most commonly known group of these are the CFCs, or chlorofluorocarbons. Chlorofluorocarbons are used in many industrial and domestic applications. At the earth's surface, these molecules remain stable. However, once released into the atmosphere, they are subject to global air currents, winds aloft and atmospheric mixing, causing them to drift up into the stratosphere. Other chemicals, such as halons, carbon tetrachloride and methyl chloroform, also contribute to ozone depletion. Some naturally occurring molecules in the stratosphere, such as nitrous oxide, which is also a by-product of the burning of fossil fuels, contribute to the breakdown of ozone (O3) as well. Natural factors include the quasi-biennial oscillation of stratospheric winds, which occurs approximately once every 2.3 years, and the 11-year sunspot cycle. However, observation of the sunspot cycle reveals that it should not decrease total global ozone levels by more than one to two percent (Environment Canada, 1996).
In the stratosphere such molecules are affected by energetic UV-C radiation. UV-C radiation breaks the molecules down, freeing an atom of chlorine (Cl). A chlorine atom reacts with ozone (O3) by splitting off one oxygen atom to form chlorine monoxide (ClO) and oxygen (O2). The chlorine monoxide, however, is itself broken down into chlorine and a free oxygen atom, allowing the chlorine to continue destroying ozone. One chlorine atom (Cl) can destroy ten thousand ozone molecules (Environment Canada, 1996).
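Written out, the catalytic cycle just described looks like this (a standard summary of the chlorine-ozone chemistry; the net reaction is implied by, though not spelled out in, the paragraph above):

    Cl + O3 -> ClO + O2
    ClO + O -> Cl + O2
    Net: O3 + O -> 2 O2

Because the chlorine atom is regenerated at the end of each cycle, it can repeat the loop thousands of times before being removed from the stratosphere, which is why a single atom can destroy on the order of ten thousand ozone molecules.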
With the identification of the human-produced chemicals that have led to the destruction of the ozone layer, the extent of the threat to stratospheric ozone has been realized. With the emergence of the scientific evidence on the ozone depletion threat, the international community agreed to regulate ozone-destructive chemicals and set up a timetable for their complete phase-out.
The 1987 Montreal Protocol, together with its subsequent London (1990) and Copenhagen (1992) amendments, was the agreement that stipulated this timetable. The Montreal Protocol was a monumental achievement in international environmental cooperation and protection. The Protocol allowed the timetable to be refined: as the ongoing process of scientific understanding of ozone depletion improved, the phase-outs could be expedited. In the spring of 1989, eighty countries met in Helsinki, Finland to assess new information, and agreed unanimously to a five-point "Helsinki Declaration". The Declaration stipulated that all countries join both the Vienna Convention for the Protection of the Ozone Layer and the Montreal Protocol, phase out CFCs by 2000, phase out halons as soon as feasible, commit to the development of alternative, environmentally acceptable chemicals and technologies, and make information accessible to developing countries. By 1995, over one hundred and fifty countries had ratified the Montreal Protocol. In compliance, chlorofluorocarbon, carbon tetrachloride and methyl chloroform production was to be phased out at the end of 1995; methyl bromide is currently scheduled for United States phase-out by 2001; and all hydrochlorofluorocarbons will be phased out by 2030 (Environment Canada, 1993).
Environment Canada has implemented a UV index to provide the general public with daily information on specific UV hazards. Constant monitoring, global awareness and the eventual phase-out of all ozone depleting substances are all part of Canada's measures for the protection of the ozone layer. Environment Canada highlights five measures being taken to control Canada's ozone depleting substances:
- Canada's ozone depleting substance phase-out plan, developed as a result of the Montreal Protocol, has accomplished many of its goals already.
- Most new cars with air conditioning manufactured in Canada are now fitted with hydrofluorocarbon air conditioning systems that use HFC-134a (hydrofluorocarbon 134-a). HCFCs and HFCs have been introduced to replace CFCs. On average, HCFCs have about 5% of the ozone-depleting potential of CFCs.
- Recovery and recycling regulations for ozone depleting substances (not including methyl bromide) are in place in 9 out of the 10 provinces, while Newfoundland and Yukon are in the process of drafting regulations. Guidelines are being prepared in the Northwest Territories.
- On August 10, 1995, the Zer-O-Zone project was launched at Winnipeg City Hall. The project, which is an initiative of the Sierra Club, is intended to foster public awareness of and support for Manitoba's Ozone Protection Regulation.
- Canada has established bilateral agreements for ozone depleting substance technology and information transfer with China, Brazil and Venezuela.
- A Multilateral Fund has been set up by industrialized countries under the Montreal Protocol to assist developing countries in the phase-out of controlled substances.
(Environment Canada, 1996)
Acid rain, the widely used term for precipitation acidified by atmospheric pollutants, may occur as either dry or wet deposition. Acid rain is caused by pollutants such as sulphur dioxide (SO2) and nitrogen oxides (NOx); these pollutants originate from fossil-fuel-burning utilities and from industrial and automotive sources. In the atmosphere, sulphur dioxide (SO2) and nitrogen oxides (NOx) are converted chemically to sulphuric acid and nitric acid respectively. Diluted forms of these acids fall to the earth's surface as rain, hail, drizzle, freezing rain, snow or fog (wet deposition); they are also deposited as acid gas or dust (dry deposition). Normal rain (pH 5.6) is slightly acidic, but acid rain can be as much as 100 times more acidic (Watt, 1987).
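The "100 times more acidic" figure follows directly from the definition of pH as the negative base-10 logarithm of the hydrogen-ion concentration: each unit of pH is a tenfold change in acidity. A minimal Python sketch of that arithmetic (only the pH 5.6 value comes from the text above; the acid-rain pH of 3.6 is an illustrative assumption):

    # pH = -log10([H+]), so [H+] = 10 ** (-pH)
    def hydrogen_ion_concentration(ph):
        """Return the hydrogen-ion concentration (mol/L) for a given pH."""
        return 10 ** (-ph)

    normal_rain_ph = 5.6   # slightly acidic (figure from the text)
    acid_rain_ph = 3.6     # illustrative acid-rain value, two pH units lower

    ratio = hydrogen_ion_concentration(acid_rain_ph) / hydrogen_ion_concentration(normal_rain_ph)
    print(f"Acid rain at pH {acid_rain_ph} is about {ratio:.0f} times more acidic than normal rain.")
    # prints: about 100 times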
When fossil fuels are burned, these chemicals are released into the atmosphere; the acidic pollutants may then be transported great distances by the prevailing winds, winds aloft and weather systems before being deposited. It is estimated that more than 50% of the acid rain that falls in eastern Canada comes from U.S. sources (U.S. Environmental Protection Agency, 1991).
Natural sources of SO2 and NOx do exist; in comparison, though, more than 90% of the SO2 and NOx emissions occurring in North America are from human activity. In Canada, the largest sources of SO2 are the smelting or refining of sulphur-bearing metal ores and the burning of fossil fuels for energy. NOx pollutants are formed during the combustion of fossil fuels in transportation (responsible for 35% of total emissions), industrial processes/fuel combustion (23%), power generation (12%) and other sources (30%) (River Road Environmental Technology Centre, 1991).
Of Canada's total land area, about 4 million km2, or 43%, is highly sensitive to acid rain (Hughs, 1991). With little ability to neutralize acidic pollutants, eastern Canada is more seriously affected by acid deposition. Eastern Canada, composed of thin, coarsely textured soil (glacial till) and granite bedrock (characteristic of the Canadian Shield), does not have the buffering ability found in the deeper organic soils of western Canada. Further, eastern Canada receives more acidic deposition than any other region in Canada. Acid rain is a less serious problem in western Canada because of lower overall exposure to acidic pollutants and a generally less acid-sensitive environment. However, the northern parts of Manitoba and Saskatchewan, along with the northeastern corner of Alberta, lie within the Canadian Shield region and are more affected by acid deposition.
Acid rain may contribute to declining growth rates and increased death rates in trees. For example, instances of dieback and deterioration have been noted in white birch in southeastern New Brunswick, caused by acid fog and acidic cloud precipitation (Hughs, 1991). High levels of acidic deposition result in the acidification of acid-sensitive lakes, rivers and streams and cause metals to leach from surrounding soils into the water system. High acidity and elevated levels of metals (notably aluminum) can seriously impair the ability of water bodies to support aquatic life, resulting in a decline in species diversity. Lakes and streams in areas that receive high levels of acidic deposition are currently being monitored to check their acidification status. Over the past decade, 33% of the monitored Canadian lakes showed evidence of improvement, 11% continued to acidify and the rest remained unchanged in acidity (Environment Canada, 1996). With respect to aquatic sensitivity classes (high, moderate, low), New Brunswick, Nova Scotia, Prince Edward Island, and Newfoundland are among the top six provinces, with eighty-plus percent of their lakes in the moderate to high sensitivity classes.
Table 1.0
AQUATIC SENSITIVITY, BY PROVINCE
Freshwater areas in aquatic sensitivity classes
Province                High   Mod.   Low   % High/Mod.
British Columbia          32     44    18       73%
Alberta                    6     21    70       28%
Saskatchewan              37      3    56       42%
Manitoba                  30      2    38       46%
Ontario                   34     20    20       73%
Quebec                    32      8     7       94%
New Brunswick             31     49    12       87%
Nova Scotia               54     33    19       82%
Prince Edward Island      26     56    <1       99%
Newfoundland              56     30     4       96%
(Union of Concerned Scientists, 1996)
In Newfoundland, the lack of water treatment in some rural communities has resulted in an increase in the acidity of potable water. With the use of copper piping for water mains in Newfoundland, acidic water can cause serious problems: the acidity leaches copper away from the water main pipe and into the water system, causing increased copper content in the water as well as problems with water main leaks and breakages. The same problem is evident with the use of asbestos cement pipe; here, however, the leaching of cement away from the pipe releases asbestos fibers into the water system. Asbestos is carcinogenic, so this problem raises serious health concerns. Humans are also exposed to particulate matter, including sulphate and acidic aerosols, which penetrates deep into the lungs and leads to increased respiratory problems. Recent research indicates a relationship between decreased lung function, increased cardio-respiratory mortality and long-term exposure to ambient acidic aerosols.
SO2 and its by-products have been linked with rates of deterioration in building materials such as cement, limestone and sandstone. Some of the Atlantic Provinces' significant historic structures (for example, the Basilica in St. John's) are slowly being eroded by acidic pollutants. A Canadian Acid Rain Control Program was formalized in 1985 by establishing federal-provincial agreements with the seven provinces east of Saskatchewan. Participating provinces agreed to reduce their combined SO2 emissions to 2.3 million tonnes per year by 1994. This target was surpassed in 1993. Total eastern Canadian SO2 emissions were 1.7 million tonnes in 1994, representing a 56% reduction from 1980 levels. In 1991, Canada signed an agreement with the United States for the reduction of SO2 and NOx emissions. Canada's obligations under this agreement include the establishment of a permanent national limit on SO2 of 3.2 million tonnes by the year 2000 and a 10% reduction in projected NOx emissions from stationary sources by the same year (NB., NF., NS., Departments of Environment, 1991).
In 1995, Canada began to develop a national strategy dealing with acidic deposition and acidifying emissions, including the formulation of new deposition objectives for beyond the year 2000. The aim is to protect acid-sensitive ecosystems, human health and air visibility in Canada and to ensure the achievement of its international commitments. This strategy will be considered by federal and provincial/territorial Ministers of Energy and Environment in 1997 (Ryan, 1996).
The five major environmental pollution sources in Newfoundland and Labrador are:
- Municipal Sewage
- Vehicle Emissions
- Municipal Solid Waste
- Total Carbon Dioxide (CO2) output
- Primary Natural Resource Processing
- Municipal sewage is a problem affecting all the Atlantic provinces. 150,000 m3 of untreated sewage is discharged daily into Halifax harbour (Whelan, 1996). With a common lack of waste treatment in the Atlantic provinces, except PEI, action should be taken throughout the region. The St. John's harbour is in a similar situation to the Halifax harbour; although St. John's has a smaller population, the narrows at the harbour entrance pose problems as well, because the tidal current is impeded by the narrows and the waste products are not totally flushed from the harbour. The rural areas of Newfoundland, although much smaller, likewise have no waste treatment facilities. Sediments are contaminated with organic matter, heavy metals, and organic chemicals such as PAHs and PCBs (Whelan, 1996). Primary treatment plants should be built in major population centers around Newfoundland, and secondary treatment should be explored as well. Many small rural communities could maintain their present waste disposal into the Atlantic pending proper environmental study to determine whether the area can handle the small volume of decomposing waste. However, with population increase, sewage treatment plants should be built in these areas, as they should have been many years ago in St. John's and other major centers in Newfoundland.
- Vehicle emissions are not only a Newfoundland problem but a major global problem. The demand for personal transportation is not likely to change in Newfoundland in the future, and national trends show an increase in the number of vehicles on the roads in Canada (Environment Canada, 1996). The main action that must be taken to minimize vehicle emissions is the adoption of a vehicle emissions control program in Newfoundland, which would require all vehicles on the road to meet a minimum emissions standard. High-occupancy vehicle lanes and other similar incentives could be implemented; testing is presently going on in some Canadian cities to encourage ride-sharing and improve fuel efficiency per passenger-kilometer (Maddocks, 1996). Ultimately, research into alternative fuels, electric vehicles, hydrogen fuel cells, and radically redesigned, light-weight, super fuel-efficient automobiles suggests that there is significant potential for improving energy efficiency and reducing vehicle emissions. Although this last point goes beyond the Newfoundland vehicle emission problem, the problem is global, and therefore global cooperation in research is vital to minimizing it.
- Municipal solid waste has been on the increase over past decades. In New Brunswick, if trends continue, the average waste generated per person will be approximately 550 kg by 1997, up from approximately 350 kg in 1967 (Maddocks, 1996). Solid waste is disposed of in small dump sites, in large landfills and by incineration. Newfoundland has a relatively large number of screened incinerators; however, with the global push to lower atmospheric air pollutants, incineration, although it saves space, has its own problems. All three forms of waste management have their problems. With better landfill design and site selection, landfills are becoming better managed and better contained. In the long term, the only way to curb the growth in solid waste is to bring about a reduction in the waste produced. Composting is useful for disposing of organic waste; however, with regional composting you again run into the problem of site selection, due to public opposition. Persistent litter, on both land and sea, is also a problem in Newfoundland, a hazard to both aesthetics and marine animals (through entanglement and ingestion). A reduction in garbage produced per person is ultimately the best way to solve the problem. The National Packaging Protocol calls for a reduction in packaging of 50%, over the 1988 levels, by the year 2000 (Maddocks, 1996). These are the kinds of reductions in produced waste that benefit solid waste management.
- Total carbon dioxide (CO2) output is a combination of the home burning of oil and wood for heating, the refining of fossil fuels for use in heating and in powering gasoline engines, and the production of electrical power (Maddocks, 1996). Although Newfoundland Power encourages people to use electricity for heat, many people are still using oil in their homes. Surprisingly enough, an oil-fuelled furnace is more fuel efficient than the oil-burning electrical power station supplying St. John's with its electricity (Dawne, 1996). In rural areas of Newfoundland many people heat their homes with wood, which produces a very high amount of carbon dioxide relative to the heat delivered in BTUs. With the extreme need for a fuel-efficient source of heat during the long Newfoundland winter, it is evident that other fuel sources must be explored. With fossil fuels being the cheapest form of heat, economics will play a major role in the choices available. There is still room, however, for better fuel efficiency and reduced carbon dioxide emissions. The use of less polluting fuels, such as natural gas, should be examined. The economic benefit of finding a cleaner and cheaper source of heat is extremely important. The full range of environmental and economic impacts over a fuel's life cycle (extraction, refinement, and use) needs to be considered, whatever fuel is used.
- Primary natural resource processing can be split into two groups, pulp and paper mills and fish and food processing plants; the Newfoundland Department of the Environment does not include mining and smelting in this group of polluters (Whelan, 1996). Pulp and paper mills are responsible for the discharge of effluents containing organic wastes and suspended solids to fresh and coastal waters. Effluents from the plants in Newfoundland produce a variety of toxic organo-chlorine compounds, including dioxins and furans. The formation of organic acids, due to the decomposition of wood, and particulate matter also pose problems. The volume of wood waste dumped into rivers and bays in Newfoundland has caused the formation of toxic, carcinogenic fish habitat environments (Whelan, 1996). With new regulatory measures in place, the environmental stress on the waterways will be reduced; however, even though sulphur dioxide air emissions have been reduced, noxious odours continue to be an aesthetic problem. The technology has become available in recent years for the use of a closed water system in pulp and paper plants. This system, however, is not widely used because of the setup cost. Closed water systems would almost entirely eliminate the noxious odour problem and largely remove the need to dump effluents into fresh and coastal waters. The technology is available; once again, the economics of production is the main concern. Fish processing plants operating in Newfoundland, though drastically reduced in number since 1992, primarily stress the environment by releasing high-strength, oxygen-demanding wastes to the coastal environment. Harmful bacteria in plant effluents and nuisance odours are also potential concerns. With the moratorium on the cod fishery in 1992, the closure of many fish plants was actually a multifaceted benefit to the environment, both to the areas surrounding the plants and to the cod fishery.
References :
Clair, T. 1996. Personal communication. Atmospheric Environment Service. Newfoundland Region, St. John's, Newfoundland.
Dawne, L. 1996. Personal communication. Jacques Whitford Environmental Consultants. St. John's Newfoundland.
Hughs, R.N. 1991. Acid Deposition in New Brunswick 1988-1990. New Brunswick Department of the Environment. Technical Report T-9001. April 1991.
Maddocks, D. 1996. Personal communication. Newfoundland Environmental Management, Newfoundland Department of Environment and Lands. Newfoundland Region, St. John's, Newfoundland.
New Brunswick Department of the Environment. 1991. Report Relating to the Canada/New Brunswick Agreement Respecting a Sulphur Dioxide Emission Reduction Program for the Calendar Year 1990. Fredericton, NB. March 1991.
Newfoundland Department of Environment and Lands. 1991. Canada/Newfoundland Agreement Respecting a Sulphur Dioxide Emission Reduction Program Report. St. John's, Newfoundland. March 1991.
Nova Scotia Department of the Environment. 1991. Canada/Nova Scotia Acid Rain Reduction Agreement Report on the Year ending 31 March 1991. Halifax, NS. March 1991.
Porter, K. 1996. Personal communication. Atmospheric Environment Service. Newfoundland Region, St. John's, Newfoundland.
Power, K. 1996. Personal Communication. Environmental Protection, Environment Canada, Atlantic Region. St. John's, Newfoundland.
Ryan, P. 1996. Personal communication. Department of Fisheries and Oceans (DFO). Newfoundland Region, St. John's, Newfoundland.
River Road Environmental Technology Centre. March 1991. Update and summary report: measurement program for toxic air contaminants in Canadian urban air. Environmental Protection, Environment Canada. Ottawa.
Environment Canada. 1996. State of the Environment Report Overview, August 1996. Atmospheric Environment Service, Environmental Protection, Ottawa, Canada. 1996.
Environment Canada. 1993. Montreal Protocol with 1990, 1992 Amendments. Atmospheric Environment Service, Ottawa, 1993.
U.S. Environmental Protection Agency. 1991. National Air Pollutant Emission Estimates 1940-1989. Office of Air Quality, Planning and Standards. EPA-450/4-91-004. March 1991.
Watt, W.D. 1987. A summary of the impact of acid rain on Atlantic salmon (Salmo salar) in Canada. Water, Air & Soil Pollution. Vol. 35: 27-35.
Whelan, R. 1996. Personal communication. Newfoundland Department of Environment and Lands. Newfoundland Region, St. John's, Newfoundland.
Acknowledgments:
General information and advice provided by the following agencies, in personal communication or from the world wide web are gratefully acknowledged:
E. I. Du Pont de Nemours, Wilmington, DE
Environment Canada
Environmental Protection Service
National Water Research Institute
Ontario Ministry of Environment and Energy
Global Resources, Union of Concerned Scientists
Health Canada
Health Protection Branch.
National Aeronautics and Space Administration (NASA)
Greenbelt, Maryland, USA
National Oceanic and Atmospheric Administration (NOAA)
Climate Monitoring and Diagnostics Laboratory
Boulder, Colorado, USA
United Nations Environment Programme
World Meteorological Organization
Worldwatch Institute
Washington, D. C., USA
f:\12000 essays\sciences (985)\Enviromental\Ozone Depletion.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ozone Depletion
In this world of rapid change, it's extremely difficult for a company to stay ahead of the game even using all the resources available to it. So it's difficult to imagine the problems a company would run into when a group of environmentalists decides to boycott a substance that is the foundation of its business. These chemicals, although very useful, cause consequences that need to be dealt with now in order to prevent further damage.
The chemicals in question are numerous, but the two gaining the most attention are chlorofluorocarbons (CFCs) and carbon tetrachloride. CFCs have a wide range of uses, but are popularly used in aerosol propellants and air conditioning for homes and cars (Singer and Crandall npg). Carbon tetrachloride is one of the major components in making CFCs, so their damage is similar. When they enter the outer atmosphere, they react with ozone to release chlorine and bromine, which in turn deteriorate the ozone and form "thinning" or "holes." This is catastrophic because these chemicals are bonded very strongly together and cannot be broken down by water. This means they travel into the atmosphere virtually unharmed by rain or decomposition (Goldfarb 282).
The reason these are causing such a commotion is the damage they cause to living things on Earth. When the ozone depletes, it causes more ultraviolet (UV) rays to hit the Earth's surface than is healthy (Singer and Crandall npg). UV rays affect the DNA of every living cell, altering the protein make-up of that cell (Goldfarb 288). Most importantly, it affects "microscopic phytoplankton," which rest at the bottom of the food chain, placing us in extreme danger (Goldfarb 288). Henry Lee, leading researcher on ozone depletion for the Environmental Protection Agency (EPA), says that UV rays will only have a slight effect on oceans, though. He says the problem lies in the fact that 70 percent of the Earth's surface is covered with water, making it a widespread problem. In addition to that, humans exposed to excess UV rays over a period of time are likely to develop some form of cancer (Singer and Crandall npg). The EPA released a report stating that if CFCs weren't controlled, in the future there would be approximately "40 million additional cases of non-melanoma skin cancer found and 800,000 additional skin cancer deaths" (Singer and Crandall npg).
Now that scientists know what these and other "culprits" do, they're trying to find solutions to this world-wide problem. When they found these chemicals to be harmful, environmentalists didn't hesitate to take action. They placed a boycott on the use of aerosol spray cans. The U.S. and Canada responded by banning "CFC powered spray cans," and that, along with Europe agreeing to cut back by 35 percent, caused the rate of damage to fall drastically (Singer and Crandall npg). Therefore, manufacturers have to stop using these chemicals. The only other alternative is to find replacements for these deadly compounds. This is easier and more practical than stopping production altogether: it costs millions of dollars to re-tool manufacturers' machines, while losing money in the process. DuPont is the largest producer of CFCs and stands to lose the most if and when a ban is placed on CFCs. Because of this position, it is stepping up research on chemicals that get the job done but cause less damage (Singer and Crandall npg). Hydrofluorocarbons (HFCs) are "made with hydrogen instead of chlorine," which means they do not contribute to the ozone problem, although they are a factor in the greenhouse effect (Goldfarb 290). Hydrochlorofluorocarbons (HCFCs), like HFCs, have hydrogen in place of the deadly chlorine, but still contribute to ozone depletion; the only difference is that HCFCs deplete ozone at a much slower rate (Goldfarb 290). The major breakthrough is the discovery of HFC-134a. It is also chlorine-free but deteriorates before it reaches the outer atmosphere, so no damage is done. It will take the place of CFC-11 and CFC-12 in some refrigeration and coolant products (Singer and Crandall npg).
Many people are surprised to see the government moving so quickly in regard to this major problem. One writer said it was hard enough to get lawmakers to agree on anything, but in this situation they're "moving with unusual speed and resolve" ("Ozone Defense" 63). That does not excuse the fact that, once they knew CFCs and carbon tetrachloride were harmful, they should have put an immediate freeze on production of them. One scientist commented, "...absolute proof is not needed when we are conducting an experiment on our own planet" (Goldfarb 291). Regardless of what happened in the past, we should be thankful they moved as quickly as they did. In doing so, they bought us some much-needed time in this race for worldwide safety.
f:\12000 essays\sciences (985)\Enviromental\plastic not paper.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PLASTIC NOT PAPER
Walking through the grocery store I always try to look for the best buy. I always buy what's on sale; I guess you could say I'm cheap. Then I get to the checkout lane, preferably the one with fewer people. I empty my wallet and pay. Then I wait. I think it's going to happen but I am not sure. Then it does: the bagger says, "Would you like paper or plastic?" I look that person right in the eye and I tell him, "I want the one that's better for the environment, I want the one that will help prevent pollution, I want the one that costs less, I want plastic." Plastic bags save money, they conserve energy, they are practical, and they are better for the environment. That's why plastic bags are the best choice at the checkout line.
Of course you're wondering how plastic bags save money. Well, just think: 2,000 plastic bags stacked on top of each other reach a height of about 7.25 inches, while 2,000 paper bags reach a soaring height of 7.5 feet. This means it takes seven trucks to deliver the same number of paper bags as one truck delivering plastic. Talk about a big waste of gas. Plastic bags cost about 1/4 of a cent to make, while paper bags cost close to 3 cents. This is money we save, as well as the store owner. That is a lot of money going to waste, especially considering that plastic bags are so much more practical than paper.
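As a rough check on the savings claim, here is a back-of-the-envelope calculation in Python using the per-bag costs quoted above (the yearly bag count is a made-up illustrative number, not a figure from this essay):

    # Cost comparison using the per-bag figures quoted above.
    plastic_cost_per_bag = 0.25 / 100   # about 1/4 of a cent, in dollars
    paper_cost_per_bag = 3.0 / 100      # close to 3 cents, in dollars

    bags_per_year = 500_000             # hypothetical yearly bag use for one store

    plastic_total = plastic_cost_per_bag * bags_per_year
    paper_total = paper_cost_per_bag * bags_per_year

    print(f"Plastic: ${plastic_total:,.2f} per year")
    print(f"Paper:   ${paper_total:,.2f} per year")
    print(f"Choosing plastic saves about ${paper_total - plastic_total:,.2f} per year.")

For half a million bags a year, the hypothetical store would spend about $1,250 on plastic versus $15,000 on paper.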
You can use them for lots of other things. You can take them on trips to the grocery store; you can protect dry clothing from wet towels in an exercise bag; you can line your household wastebasket with them; you can tote dry shoes to work on rainy days; and you can hold plastic, aluminum and glass for recycling. Plastic bags are also very practical to carry. You can carry five of them compared with two paper bags, and plastic bags hold just as many items as paper. This is also very practical because you can get your groceries out of your car a lot faster; after all, grocery shopping is not the most fun thing to do.
A study by Franklin Associates Ltd. analyzed the environmental impacts of plastic and unbleached paper bags throughout their life cycles. They found that, compared with paper, plastic grocery bags use 40% less energy, produce 80% less solid waste and 70% fewer atmospheric emissions, and release up to 95% fewer waterborne wastes. All of these savings involve natural resources that we have to conserve and cannot afford to lose. The brown paper bags used in most grocery stores are made from virgin paper without any contribution from recycled materials. Paper making pollutes the water, releases dioxins, contributes to acid rain, and costs trees their lives. Strange as it may sound, some virgin paper can be more damaging to wildlife than plastic items like six-pack rings. If you choose paper over plastic, you are supporting higher levels of pollution.
The choice of plastic over paper doesn't seem to be that big a decision, but as you now know, it does affect the environment. If you don't care about the environment, then use plastic bags for their other great practical uses. So the next time you are at the checkout lane and you get asked the question, say plastic with pride.
f:\12000 essays\sciences (985)\Enviromental\Pollution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE CHINA SYNDROME
The China Syndrome is about a nuclear power plant in Los Angeles, California.
The Ventanna Nuclear Power Plant came close to the China Syndrome! A Channel 3 news
reporter, Kimberly Wells, and her camera man, Richard Adams, captured an accident on
film at the nuclear power plant that could have caused the China Syndrome. The China
Syndrome could have devastated an area about the size of Pennsylvania. One of the head
operators of the company, Jack Godell, talked to Kimberly Wells at a company gathering.
Jack told Kimberly that there was just a turbine trip. Kimberly and her camera man went
to Jack Godell's house and confronted him about the evidence. The camera man asked
Jack to speak publicly about the accident. On the way to a Nuclear Power convention
there was a car following Jack. Jack went to the Ventanna Nuclear Power Plant to hide
from the people following him. After arriving, Jack went to the control room to find that
the people running the plant were making a big mistake. He saw the people raising the
power back up to 100%. He tried to explain that there could be another accident if they
raised the power all the way because of a problem with the pumps. The people didn't
believe Jack and were starting to raise the power up again. When Jack saw what they were
doing he grabbed the security officer's gun and forced everyone out of the control room.
After he locked the door he lowered the power down to 75% so the pumps wouldn't
break.
Jack agreed to have a one on one interview with Kimberly so the public would be
warned. While the camera crew was on their way to do the live interview so was the
S.W.A.T. team to get Jack out of the room. Also, the people running the plant who
didn't believe Jack were up to something too. The operators were rerunning the wires to
make a false accident that would distract Jack. The distraction would make it easier for
the S.W.A.T. team to get inside the control room. The camera crew arrived and Kimberly
went into the room where Jack was to do the interview. As soon as the interview started
the operators tripped the alarm. Jack started to panic. The S.W.A.T. team broke in and
shot Jack because he had a gun in his hand. After everything was over Jack said he could
feel another accident coming. About five seconds later the room began to shake and the
second accident started. The back up systems held up and there wasn't a China Syndrome.
On the way out some of the people inside were saying nothing happened. Kimberly got in
front of the camera and explained what happened. One of the head operators was stopped
by Kimberly on his way out. He said Jack was not crazy, like everyone said, and there will
be an investigation on the Ventanna Nuclear Power Plant.
f:\12000 essays\sciences (985)\Enviromental\Pollutions Deleterious Effects .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Paul Cordova
L. Lehr
December 11, 1995
"An Ecosystem's Disturbance by a Pollutant
Freedman defines a pollutant as "the occurrence of toxic substances or energy in a larger quantity than the ecological communities or particular species can tolerate without suffering measurable detriment" (Freedman, 562). The effects of a pollutant on an organism vary depending on the dose and the duration (how long it is administered); the impact can range from sublethal to lethal, depending on the factors involved. These factors need to be looked at when determining an ecosystem's disturbance by a pollutant.
Some of the most frequent pollutants in our ecosystem include gases such as sulphur dioxide, elements such as mercury and arsenic, and even pollution by nutrients, which is referred to as eutrophication. Each of these pollutants has a different effect on the ecosystem at different doses. This variation is what the terms dose and duration refer to: the amount of the pollutant administered, over a given period of time, greatly affects the impact the pollutant will have on an ecosystem and a population.
Pollutants can affect both a population and an ecosystem. At the population level, a pollutant's effects can be either target or non-target. Target effects are those that can kill off the entire population. Non-target effects are those that affect a significant number of individuals and spread over to other individuals, as is the case when crop dusters spread herbicides and insecticides. Population damage by a pollutant, in turn, has a detrimental effect on the ecosystem in several ways. First, when a pollutant kills an entire population, it upsets the food chain and potentially kills off other species that depended on that organism for food; such is the case when a keystone species is killed. If a predator was the dominant species high on the food chain, the organisms the predator kept to a minimum could massively overproduce, creating a disturbance in the delicate balance of carrying capacity in the ecosystem. Along with this imbalance, another potential problem in an ecosystem is the possibility of the pollutant accumulating in (lipophilic) fat cells. As the pollutant makes its way through the food chain, it increases with the increasing body mass of the organisms. These potential problems are referred to as bioconcentration and biomagnification, respectively. Both are of great concern to humans because of our position in the food chain. These are only a few of the impacts that a pollutant can have on a population and an ecosystem.
Another factor to consider when evaluating the effects of a pollutant on an ecosystem is the carrying capacity. A carrying capacity curve describes the number of individuals that a specific ecosystem can sustain. Available resources (food, water, etc.), the number of other members of the species of reproductive age, and abiotic factors such as climate and terrain are all determinants of carrying capacity. This curve is drawn below:
(Figure: carrying capacity curve, with number of individuals plotted against years.)
If a pollutant is introduced into an ecosystem, it can affect the carrying capacity curves of several organisms (Chiras, 127). This effect on the curve is caused by the killing off of intolerant organisms, making more room for both the resistant strain and new organisms. In some cases the pollutant will create unsuitable habitats, causing migration.
Another important part of the idea of a carrying capacity is the Verhulst (logistic) equation: the actual growth rate is equal to the potential growth rate multiplied by the fraction of the carrying capacity that remains unused. Three major characteristics follow from this equation. First, the rate of growth is density dependent: the larger the population, the slower it grows. Second, population growth is limited and will reach a stable maximum. Lastly, the speed at which a population approaches its maximum value is solely determined by the rate of increase (r). In a population with a stable age structure this would be the birth rate minus the death rate, but such stability is almost never achieved. If any of the variables in this equation are affected by a pollutant, then the growth rate of an organism can be seriously affected, which can in turn affect the entire ecosystem (Freedman, 122).
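A minimal numerical sketch of the Verhulst (logistic) equation, dN/dt = r N (1 - N/K), may make these three characteristics concrete. The values of r, K and the starting population below are made up purely for illustration:

    # Simple Euler integration of logistic growth, dN/dt = r * N * (1 - N / K)
    r = 0.5        # intrinsic rate of increase (per year), illustrative value
    K = 1000.0     # carrying capacity (individuals), illustrative value
    N = 10.0       # starting population
    dt = 0.1       # time step in years

    for year in range(30):
        for _ in range(int(1 / dt)):
            N += r * N * (1 - N / K) * dt
        if year % 5 == 0:
            print(f"After year {year + 1}: about {N:.0f} individuals")
    # The population grows quickly at first, then levels off near K,
    # the stable maximum described in the paragraph above.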
Using the approach of classical toxicology, we study the poisoning of individual animals by chemicals, resulting in lethal or sublethal effects. Effects on individuals may range from rapid death (lethal) through sublethal effects to no effect at all. The most obvious effect of exposure to a pollutant is rapid death, and it is common practice to assess this type of toxicity by the LD50, the dose lethal to 50% of test animals; from LD50 values, scientists can judge the relative toxicity of two chemicals. For example, a chemical with an LD50 of 200 milligrams per kilogram of body weight is half as toxic as one with an LD50 of 100 milligrams per kilogram: the lower the LD50, the more toxic the chemical. Death is rarely instantaneous; even cyanide takes at least some tens of seconds to kill a human being. Moreover, an LD50 value refers to only one set of conditions, often ill defined, with one type of exposure and no indication of the influence of other environmental variables.
Perkins (1979) suggests that a sublethal exposure kills at most only a small proportion of a population, but the possibility that a sublethal exposure could cause even a small proportion of individuals to die from acute toxicity seems self-contradictory (Freedman, 126). For the sake of this assignment, and for practical purposes, it would be incautious to suppose that a sublethal exposure which adversely affects individual organisms is not close to one that will affect the population. There is no good reason to suppose that there is a constant relationship, for different pollutants or different species, between the dose needed to kill an organism and that needed to impair it. Therefore, given the difficulties of studying an ecosystem, the most effective way to predict biological effects is likely to be to discern the least exposure that produces a deleterious response in individual organisms (Moriarty, 1960) and then to examine the extent to which different environmental conditions alter this minimum exposure.
Adding further to the complexity, several additional factors come into play in an organism's response to a pollutant. One such factor is age: although we think of the young of all species as resilient creatures, young, growing organisms are generally more susceptible to toxic chemicals than adults (Chiras, 127). Health status is another; it is determined by many factors, among them nutrition, level of stress, and personal habits such as smoking, and as a rule the poorer one's health, the more susceptible he or she is to a toxin (Freedman, 214). Toxins may also interact with each other, producing several different responses. Some chemical substances, for example, team up to produce an additive response, an effect that is simply the sum of the individual responses. Others produce a synergistic response, one stronger than the sum of the two individual responses: for instance, sulphur dioxide gas and particulates (minute airborne particles) inhaled together reduce air flow through the lungs' tiny passages far more than either does alone.
Grime suggested that plants have three strategies in response to disturbance and stress. These strategies are:
C - selection - having high competitive ability
S - selection - having a high endurance for stress
R - selection - having a good ability to colonize disturbed areas.
Connell and Slatyer (1977) proposed three models of plant response to disturbance. Model I (the "facilitation" model) assumes that only certain species that come early in the succession are capable of colonizing the site. In contrast, the other two models both assume that any individual of any species that happens to arrive at the site is capable of colonizing it, although all three models accept that certain species will tend to appear first because of their colonizing abilities. All three models also suppose that the first colonists will so modify the site that it becomes unsuitable for those species that normally occur early in the succession. The three hypotheses then suggest three different ways in which other species will appear. Model I suggests that early occupants modify the environment so that it becomes more suitable for species that come later in the succession. Model II (the "tolerance" model) suggests that the sequence in which species appear depends solely on their speeds of dispersal and growth. Model III (the "inhibition" model) suggests that the species already present make the environment less suitable for subsequent recruitment of later species. None of these hypotheses relies on the idea of a community as a supra-organism; rather, they treat succession as a process that depends on two factors: 1) the probabilities that propagules of different species will be present, and
2) the ability of these propagules to survive, develop and reproduce.
Now, to look at the whole picture, we ask ourselves: "How do we predict the response of a community to a pollutant?" Should we look at one population at a time, or take some holistic approach? Moriarty suggests that some of the currently favored approaches rest on the assumption, often implicit rather than explicit, that communities are supra-organisms. He suggests two topics that should be discussed when dealing with the idea of community response: 1) indicator species, and
2) the concept of biological or environmental health, which may be misleading.
The term indicator species, which is used in the classification of communities (p. 62), is also used in ecotoxicology with a variety of meanings. At times it carries the idea that knowledge of one species within a community will indicate the well-being or biological health of the whole community. Moriarty suggests that this seems a reasonable proposition if one accepts the traditional view of the community as a supra-organism, but that it is in fact misleading. He adds that there is no fundamental reason, from community structure, to suppose that any particular species within the community will give a better measure of the impact of pollutants than will another. Pollutants affect populations of particular species, and which species are affected first will depend on their relative degrees of exposure and susceptibility; these are functions much more of the particular pollutant and of the individual species than of the community. An indicator species can only be used to assess the impact of pollution on a community if quite a lot is known about both the pollution and the community (Moriarty, 69).
Concerning the idea that the concept of biological or environmental health is misleading: one may properly refer to the health of a community, but a community affected by a pollutant can change markedly and simply become a different community, one that is neither more nor less "healthy", just different (Moriarty, 69). It may be a less desirable community for economic, social, scientific or aesthetic reasons, but that is quite a different matter. Effects of pollution are sometimes described as a retrogression, a decrease in diversity, productivity, biomass and structural complexity; Moriarty argues that while there may be the appearance of a retrogression, it should not be taken as a generality. In conclusion, on the effect of a pollutant on organisms and their response, the most appropriate emphasis is on populations. The effect of pollutants on populations within a community can be complex, and apart from the reduction or elimination of populations, resurgence, population increase, introduction of rarer species, sublethal effects and genetic changes may all be part of the changes that occur.
Another very important characteristic of populations that we cannot overlook is their genetic composition. Much of the variation between individuals is inherited from their parents, and relatively few offspring of any species survive to reproduce. Charles Darwin (1859) formulated the idea of natural selection: some individuals have a higher probability of survival than others, and on average such individuals leave more descendants than other, less well adapted individuals. We will use the work of Darwin, Mendel, Watson and Crick, and others to investigate our concern, the role of pollutants in natural selection. It has been shown many times that pollutants can exert powerful selective forces, and we therefore need to understand something of the mechanisms of inheritance and of how natural selection acts on populations.
For the purpose of this assignment I will outline the general findings of the works that proved most significant in understanding the concepts of genetics. A good place to start is with an outline of some of Mendel's results obtained when breeding peas (Pisum sativum), where "A" indicates the dominant allele for yellow seed and "a" the recessive allele for green seed; the cross implied by this notation is sketched below.
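The original table is not reproduced here; the following is a minimal sketch of the monohybrid cross that Mendel's notation implies. Crossing pure-breeding yellow (AA) with pure-breeding green (aa) plants gives an F1 generation that is all Aa and therefore all yellow, since A is dominant. Crossing two F1 plants (Aa x Aa) gives:

            A        a
     A     AA       Aa
     a     Aa       aa

that is, genotypes in the ratio 1 AA : 2 Aa : 1 aa, and phenotypes in the ratio 3 yellow : 1 green.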
However, genes do not always fall into this simple dominant/recessive pattern. Some may be incompletely dominant in the heterozygote, giving a phenotype intermediate between those of the homozygous dominant and recessive conditions. Later workers also found that there are often more than two alternative forms (alleles) of a gene. Avery (1944) showed that the genetic material in a bacterium consists of the nucleic acid DNA (deoxyribonucleic acid), and in 1953 Watson and Crick first proposed the three-dimensional structure of DNA, from which all the subsequent work on the genetic code has developed. The essential features are that genes are arranged along chromosomes, which in essence may be regarded as giant molecules of DNA, and that the DNA molecule consists of two intertwined helical chains of many nucleotides, with about ten nucleotide pairs for each complete turn of the helix (Watson, 1965).
[Diagram: the double helix of DNA, with the two polynucleotide chains linked by complementary base pairs (adenine (A) with thymine (T), and guanine (G) with cytosine (C)). Replication occurs when the two strands separate and each acts as a template on which a new complementary strand is formed (Moriarty, 62).]
Occasionally something goes wrong with the replication process, and one or more genes may be altered, lost or gained. These changes, or mutations, are usually less favorable to the organism than the original gene, and are often sufficiently unfavorable to be lethal. Nevertheless, mutations in the reproductive cells are of crucial importance: they are the source of new genetic variation in subsequent generations.
This knowledge about gene structure and function modifies the Mendelian view of inheritance.
Now, after this brief introduction to the history of genetics, it is time to consider the relevance of ecological genetics to pollution. Most current problems of pollution occur on a much shorter time-scale than that required for the evolution of new species. The critical difference between ordinary evolutionary change and the change wrought by pollution is speed: populations can disappear very rapidly as a result of pollution, and if pollution went unchecked we would be left with a very impoverished fauna and flora (Moriarty, 81).
One very popular example of the effects of pollution on wildlife, and perhaps the most striking evolutionary change ever actually witnessed, is the occurrence of melanism in moths, an effect commonly associated with industrial development. Pale peppered moths (the form typica) would rest on pale, lichen-covered tree trunks, where they were well-nigh invisible. But with industrial pollution from the mid-nineteenth century onward, soot darkened the trunks and killed the lichens, exposing the pale moths and making them easy prey for birds. Birds thus exerted a selective pressure against the pale form, and the black (melanic) moths were now favored by selection, an example of how a previously rare allele can spread rapidly when the environment changes. The biological significance of melanism was a matter of debate for some decades, and although it is now generally accepted that melanism in the peppered moth is associated with atmospheric pollution, some of the details are still unclear. Several points are worth emphasizing. Pollution in this instance is not having a direct effect on the moth populations, nor indeed on their predators; rather, an alteration to the habitat has greatly altered the relative fitness of different genotypes. Melanism also illustrates the difficulty of producing adequate proof, or disproof, of cause and effect when pollutants are thought to be causing major biological effects. In conclusion, with regard to genetics, it is important to appreciate that the effects of pollutants can be modified by an organism's genetic constitution, and that pollutants can alter a population's gene pool (Freedman, 128). The interactions between pollutants and genes are relevant both to understanding and to predicting effects, and are potentially of great value for monitoring (Moriarty, 102).
In summary, as stated throughout this school year in my 2375 Pollution class, the effects of pollutants on populations are mediated via their effects, direct or indirect, on individuals, and the likelihood of these effects depends on the dose. Sublethal effects can be unravelled from knowledge of the mode of action; alternatively, emphasis in the study of sublethal effects can be placed on the health of the individual organism. With both approaches, the effect of other environmental variables needs to be given much more prominence than heretofore, and this could profitably be linked with studies of the amounts of pollutant within organisms (Moriarty, 176). It is from this basis that Moriarty states that we have to consider how best to predict and to monitor the ecological effects of potential pollutants.
In my opinion, and Moriarty's, one should relate pollution to the wider aspects of man's impact on his environment. We can, to a considerable extent, control and mitigate our negative impacts upon this planet, because as we have learned from past experience, the planet has a finite carrying capacity for our own species as well as for all others.
References
Campbell, N.A. Biology, 3rd ed. Benjamin/Cummings Publishing Company, 1993.
Chiras, Daniel D. Environmental Science: Action for a Sustainable Future, 4th ed. Benjamin/Cummings Publishing Company, 1994.
Freedman, Bill. Environmental Ecology: The Ecological Effects of Pollution, Disturbances and Other Stresses, 2nd ed., 1995.
Moriarty, F. Ecotoxicology, 2nd ed. Academic Press Limited, 1993.
Paul Cordova
L. Lehr
December 11, 1995
"An Ecosystem's Disturbance by a Pollutant
Freedman defines a pollutant as "the occurrence of toxic substances or energy in a larger quality then the ecological communities or particular species can tolerate without suffering measurable detriment" (Freeman, 562). Although the effects of a pollutant on an organism vary depending on the dose and duration (how long administered). The impact can be one of sublethality to lethality, all dependent upon the factors involved. These factors need to be looked at when determining an ecosystem's disturbance by a pollutant.
Some of the most frequent pollutants in our ecosystem include: gases such as sulphur dioxide, elements such as mercury and arsenic, and even pollution by nutrients which is referred to as eutrophication. Each of these pollutants pose a different effect on the ecosystem at different doses. This varied effect is what is referred to as dose and duration. The amount of the pollutant administered over what period of time greatly affects the impact that the pollutant will have on an ecosystem and population.
Pollutants can affect both a population and an ecosystem. A pollutant on a population level can be either non-target or target. Target effects are those that can kill off the entire population. Non-target effects are those that effects a significant number of individuals and spreads over to other individuals, such is the case when crop dusters spread herbicides, insecticides. Next we look at population damage by a pollutant, which in turn has a detrimental effect on the ecosystem in several ways. First, by the killing of an entire population by a pollutant, it offsets the food chain and potentially kills off other species that depended on that organism for food. Such is the case when a keystone species is killed. If predators were the dominant species high on the food chain, the organisms that the predator keep to a minimum could massively over produce creating a disturbance in the delicate balance of carrying capacity in the ecosystem. Along with this imbalance another potential problem in an ecosystem is the possibility of the pollutant accumulating in the (lipophilic) fat cells. As the pollutant makes it way through the food chain it increases with the increasing body mass of the organism. These potential problems are referred to as bioconcentration and biomagnificaiton, respectively. Both of these problems being a great concern of humans because of their location on the food chain. These are only a few of the impacts that a pollutant can have on a population and ecosystem.
Another factor to consider is the carrying capacity when evaluating the effects of a pollutant on an ecosystem. A carrying capacity curve describes the number of individuals that a specific ecosystem can sustain. Factors involved include available resources (food, water, etc.), other members of the species of reproductive age and abiotic factors such as climate, terrain are all determinants of carrying capacity. This curve is drawn below:
# of individuals
Years
If a pollutant is introduced into an ecosystem , it can affect the carrying capacity curve of several organisms (Chiras, 127). This effect on the curve is caused by the killing off of the intolerant and allowing more room for both the resistant strain and new organisms. In some cases the pollutant will create unsuitable habitats causing migration.
Another important part of the idea of a carrying capacity is the Verholst (logistic) equation: The actual growth rate is equal to the potential growth rate multiplied by the carrying capacity level. Three major characteristics exist for this equation. First, that the rate of growth is density dependent, the larger the population, the slower it will grow. Secondly, the population growth is not limited and will reach a stable maximum. Lastly, the speed at which a population approaches its maximum value is solely determined by the rate of increase (r). In a population with a stable age structure this would be the birth rate minus the death rate, but this is almost impossible. If any of the variables in this equation are affected by a pollutant then the growth rate of an organism can be seriously affected which can in turn affect the entire ecosystem (Freeman, 122).
Now using the approach of classical toxicology we study the poisoning effects of chemicals on individual animals resulting in lethal or sublethal effects. Effects on individuals may range from rapid death (lethal) through sublethal effects to no effects at all. The most obvious effect of exposure to a pollutant is rapid death and it is common practice to assess this type of toxicity by the LD50 (the lethal dose for 50% of test animals) values, scientist can judge the relative toxicity of two chemicals. For example, a chemical with an LD50 of 200 milligrams per kilogram of body weight is half as toxic as one with an LD50 the more toxic a chemical. Death is rarely instantaneous, and even cyanide takes at least some tens of seconds to kill a human being. Death is alwaBAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD one set of conditions, often ill defined, with one type of exposure, and with no indication of the influence of other environmental variables.
Perkins (1979) suggests that a sublethal exposure kills at most only a small proportion of a population, but the possibility that a sublethal exposure could cause even a small proportion of individuals to die from acute toxicity seems self-contradictory (Freedman, 126). Both for the sake of this assignment and for practical purposes, it would be incautious to assume that a sublethal exposure which adversely affects individual organisms is not close to the exposure that will affect the population. There is no good reason to suppose that there is a constant relationship, across different pollutants or different species, between the dose needed to kill an organism and that needed to impair it. Therefore, given the difficulties of studying a whole ecosystem, the most effective way to predict biological effects is likely to be to discern the least exposure that produces a deleterious response in individual organisms (Moriarty, 1960) and then to examine the extent to which different environmental conditions alter this minimum exposure.
Several additional factors add to the complexity of an organism's response to a pollutant. One such factor is age: although we think of the young of all species as resilient creatures, young, growing organisms are generally more susceptible to toxic chemicals than adults (Chiras, 127). Health status is another; it is determined by many factors, among them nutrition, level of stress, and personal habits such as smoking, and as a rule, the poorer one's health, the more susceptible he or she is to a toxin (Freeman, 214). Toxins may also interact with one another, producing several different kinds of response. Some chemical substances team up to produce an additive response, that is, an effect that is simply the sum of the individual responses. Others produce a synergistic response, one stronger than the sum of the two individual responses. For instance, sulphur dioxide gas and particulates (minute airborne particles) inhaled together can reduce air flow through the lungs' tiny passages far more than either does alone; the combined response is much greater than the sum of the individual responses.
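The difference between additive and synergistic interaction can be stated numerically. The toy figures below (Python) simply contrast the two cases; the response values and the synergy factor are hypothetical and are not measurements from the cited sources.

    # Toy contrast of additive vs. synergistic toxin interaction.
    # "Response" is an arbitrary index of effect; all values are hypothetical.

    response_so2 = 10           # effect of sulphur dioxide alone
    response_particulates = 15  # effect of particulates alone

    additive = response_so2 + response_particulates               # simple sum of effects
    synergy_factor = 2.5                                          # hypothetical amplification
    synergistic = synergy_factor * (response_so2 + response_particulates)

    print(f"Additive response:    {additive}")
    print(f"Synergistic response: {synergistic}  (greater than the sum of the parts)")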
Grime suggested that plants have three strategies in response to a disturbance. These strategies are:
C-selection: high competitive ability
S-selection: high endurance of stress
R-selection: good ability to colonize disturbed areas.
Connell and Slatyer (1977) proposed three models of plant response to a disturbance. Model I (the "facilitation" model) assumes that only certain species that come early in the succession are capable of colonizing the site. In contrast, the other two models both assume that any individual of any species that happens to arrive at the site is capable of colonizing it, although all three accept that certain species will tend to appear first because of their colonizing abilities. All three models also suppose that the first colonists will so modify the site that it becomes unsuitable for those species that normally occur early in the succession. The three hypotheses then suggest three different ways in which other species will appear. Model I suggests that early occupants modify the environment so that it becomes more suitable for species that come later in the succession. Model II (the "tolerance" model) suggests that the sequence in which species appear depends solely on their speeds of dispersal and growth. Model III (the "inhibition" model) suggests that the species already present make the environment less suitable for the subsequent recruitment of later species. None of these hypotheses relies on the idea of the community as a super-organism; rather, they treat succession as a process that depends on two factors (illustrated in the short sketch after this list): 1) the probability that propagules of different species will be present, and
2) the ability of these propagules to survive, develop and reproduce.
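The following sketch (Python) just multiplies these two factors to give an expected chance of colonization for a few species; the species names and probabilities are hypothetical and serve only to show how the two factors combine.

    # Toy sketch of the two factors the Connell and Slatyer models rest on:
    # (1) the probability that a species' propagules reach the site, and
    # (2) the probability that those propagules survive, develop and reproduce.
    # Species names and probabilities are hypothetical.

    species = {
        #                       P(propagule present), P(establish and reproduce)
        "pioneer herb":         (0.9, 0.6),
        "shrub":                (0.5, 0.4),
        "late-succession tree": (0.2, 0.7),
    }

    for name, (p_arrive, p_establish) in species.items():
        p_colonize = p_arrive * p_establish
        print(f"{name:22s} P(colonize) = {p_arrive} * {p_establish} = {p_colonize:.2f}")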
Now, to look at the whole picture, we ask ourselves: how do we predict the response of a community to a pollutant? Should we look at one population at a time, or take some holistic approach? Moriarty suggests that some of the currently favored approaches rest on the assumption, often implicit rather than explicit, that communities are super-organisms. He identifies two topics that should be discussed when dealing with the idea of community response: 1) indicator species, and
2) the concept of biological or environmental health, which may be misleading.
The term indicator species, which is used in the classification of communities (p. 62), is also used in ecotoxicology with a variety of meanings. At times it expresses the idea that knowledge of one species within a community will indicate the well-being, or biological health, of the whole community. Moriarty suggests that this seems a reasonable proposition if one accepts the traditional view of the community as a super-organism, but that it is in fact misleading. He adds that there is no fundamental reason, arising from community structure, to suppose that any particular species within the community will give a better measure of the impact of pollutants than will another. Pollutants affect populations of particular species, and which species are affected first depends on their relative degrees of exposure and susceptibility; these are functions much more of the particular pollutant and of the individual species than of the community. An indicator species can therefore only be used to assess the impact of pollution on a community if quite a lot is known about both the pollution and the community (Moriarty, 69).
Concerning the concept of biological or environmental health being misleading: one may properly refer to the health of a community, but a community that changes markedly when affected by a pollutant simply becomes a different community, one that is neither more nor less "healthy", just different (Moriarty, 69). It may be a less desirable community for economic, social, scientific or aesthetic reasons, but that is quite a different matter. The effects of pollution are sometimes described as a retrogression, a decrease in diversity, productivity, biomass and structural complexity; Moriarty argues that while there may be the appearance of such a process, it should not be taken as a generality. In conclusion, when considering the effect of and response to a pollutant, the most appropriate emphasis is on populations. The effect of pollutants on populations within a community can be complex: apart from the reduction or elimination of populations, resurgence, population increase, the introduction of rarer species, sublethal effects and genetic changes may all be part of the changes that occur.
Another very important characteristic of populations that we cannot overlook is their genetic composition. Much of the variation between individuals is inherited from their parents, and relatively few offspring of any species survive to reproduce. Charles Darwin (1859) formed the idea of natural selection from these observations: some individuals have a higher probability of survival than others, and on average such individuals leave more descendants than other, less well adapted individuals. We will use the work of Darwin, Mendel, Watson and Crick, and others to investigate our concern here, the role of pollutants in natural selection. It has been shown many times that pollutants can exert powerful selective forces, and we therefore need to understand something of the mechanisms of inheritance and how natural selection acts on populations.
For the purpose of this assignment I will outline the general findings of works that proved significant in understanding the concepts of genetics. A good place to start is with some of Mendel's results obtained when breeding peas (Pisum sativum), where "A" indicates the dominant allele for yellow seed and "a" the recessive allele for green seed.
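The sketch below (Python) works through the classic monohybrid cross of two heterozygous (Aa) plants; the 3:1 yellow-to-green ratio is Mendel's standard result, while the code itself is only an illustrative reconstruction of that cross.

    # Monohybrid cross of two heterozygous (Aa) pea plants.
    # "A" = dominant allele (yellow seed), "a" = recessive allele (green seed).

    from itertools import product
    from collections import Counter

    parent1 = ["A", "a"]
    parent2 = ["A", "a"]

    # Every combination of one allele from each parent (the Punnett square).
    offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
    genotypes = Counter(offspring)                      # 1 AA : 2 Aa : 1 aa

    phenotypes = Counter(
        "yellow" if "A" in genotype else "green" for genotype in offspring
    )

    print("Genotype ratio: ", dict(genotypes))   # {'AA': 1, 'Aa': 2, 'aa': 1}
    print("Phenotype ratio:", dict(phenotypes))  # {'yellow': 3, 'green': 1}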
However, genes do not always follow this simple dominant/recessive pattern. Some may be incompletely dominant in the heterozygote, showing an intermediate state between the phenotypes of the homozygous dominant and recessive conditions. Later workers also found that there are often more than two alternative forms (alleles) of a gene. Avery (1944) showed that the genetic material in a bacterium consists of the nucleic acid DNA (deoxyribonucleic acid), and in 1953 Watson and Crick first suggested the three-dimensional structure of DNA, from which all the subsequent work on the genetic code has developed. The essential features are these: genes are arranged along chromosomes, which in essence may be regarded as giant molecules of DNA, and the DNA molecule consists of two intertwined helical chains of many nucleotides, with ten nucleotides in each chain for each complete turn of the helix (Watson, 1965).
Diagram to illustrate the double helix of DNA, with the two polynucleotide chains linked by complementary base pairs (adenine (A) with thymine (T), and guanine (G) with cytosine (C)). Replication occurs when the two strands separate and each acts as a template on which a new complementary strand is formed (Moriarty, 62).
Occasionally something goes wrong with the replication process, and one or more genes may be altered, lost or gained. These changes, or mutations, are usually less favorable to the organism than the original gene, and are often sufficiently unfavorable to be lethal. Nevertheless, mutations in the reproductive cells are of crucial importance: when they are favorable, they are the source of new genetic variation in subsequent generations.
This knowledge about gene structure and function modifies the Mendelian view of inheritance.
Now, after this brief introduction to and history of genetics, it is time to consider the relevance of ecological genetics to pollution. Most current problems of pollution occur on a much shorter time-scale than that required for the evolution of new species. The critical difference between ordinary evolutionary change and that wrought by pollution is speed: populations can disappear very rapidly under pollution, and if this went unchecked we would be left with a very impoverished fauna and flora (Moriarty, 81).
One very popular example of the effects of pollution on wildlife, and perhaps the most striking evolutionary change ever actually witnessed, is industrial melanism in moths. Pale moths of the typical form (f. typica) resting on pale, lichen-covered tree trunks were well-nigh invisible. With industrial pollution (from about 1848 onwards), the lichens darkened or died, exposing the pale moths to predation by birds; bird predation thus became a selective pressure against the pale form, and the black (melanic) moths were now favored by selection. This is an example of a change in the environment shifting which alleles in a population's existing gene pool are favored. The biological significance of melanism was a matter of debate for some decades, and although it is now generally accepted that melanism in these moths is associated with atmospheric pollution, some of the details are still unclear. Several points are nevertheless worth emphasizing. Pollution in this instance is not having a direct effect on the moth populations, nor indeed on their predators; rather, an alteration to the habitat has greatly altered the relative fitness of different genotypes. Melanism also illustrates the difficulty of producing adequate proof, or disproof, of cause and effect when pollutants are thought to be causing major biological effects. In conclusion, with regard to genetics, it is important to appreciate that the effects of pollutants can be modified by an organism's genetic constitution, and that pollutants can alter a population's gene pool (Freeman, 128). The interactions between pollutants and genes are relevant both to understanding and to predicting effects, and are potentially of great value for monitoring (Moriarty, 102).
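A minimal sketch of this kind of selection follows (Python); the starting allele frequency and the fitness values are hypothetical and purely illustrative, not measurements from the moth studies, but they show how quickly a rare allele can spread once the environment changes the relative fitness of the genotypes.

    # One-locus selection sketch: a melanic allele M (treated as dominant)
    # vs. the pale allele m. Starting frequency and relative fitnesses
    # are hypothetical, for illustration only.

    p = 0.01          # initial frequency of the melanic allele M
    w_mel = 1.0       # relative fitness of melanic phenotypes (MM and Mm) on sooty bark
    w_pale = 0.7      # relative fitness of the pale phenotype (mm)

    for generation in range(1, 31):
        q = 1 - p
        # Mean fitness, then the allele frequency after one round of selection.
        mean_w = (p * p + 2 * p * q) * w_mel + q * q * w_pale
        p = (p * p * w_mel + p * q * w_mel) / mean_w
        if generation % 5 == 0:
            print(f"Generation {generation:2d}: frequency of M = {p:.3f}")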
In summary, as stated throughout this school year in my 2375 Pollution class, the effects of pollutants on populations are mediated by their effects, direct or indirect, on individuals, and the likelihood of these effects depends on the dose. Sublethal effects can be unravelled from knowledge of a pollutant's mode of action; alternatively, the study of sublethal effects can place its emphasis on the health of the individual organism. With both approaches, the effect of other environmental variables needs to be given much more prominence than heretofore, and this could profitably be linked with studies of the amounts of pollutant within organisms (Moriarty, 176). It is on this basis that Moriarty states that we have to consider how best to predict and to monitor the ecological effects of potential pollutants.
I feel, as does Moriarty, that one should relate pollution to the wider aspects of man's impact on his environment. We can, to a considerable extent, control and mitigate our negative impacts upon this planet, because, as we have learned from past experience, the planet has a finite carrying capacity for our own species as well as for all others.
References
Campbell, N.A. Biology, 3rd ed. Benjamin/Cummings Publishing Company, 1993.
Chiras, Daniel D. Environmental Science: Action for a Sustainable Future, 4th ed. Benjamin/Cummings Publishing Company, 1994.
Freedman, Bill. Environmental Ecology: The Ecological Effects of Pollution, Disturbances and Other Stresses, 2nd ed., 1995.
Moriarty, F. Ecotoxicology, 2nd ed. Academic Press Limited, 1993.
Paul Cordova
L. Lehr
December 11, 1995
"An Ecosystem's Disturbance by a Pollutant
Freedman defines a pollutant as "the occurrence of toxic substances or energy in a larger quality then the ecological communities or particular species can tolerate without suffering measurable detriment" (Freeman, 562). Although the effects of a pollutant on an organism vary depending on the dose and duration (how long administered). The impact can be one of sublethality to lethality, all dependent upon the factors involved. These factors need to be looked at when determining an ecosystem's disturbance by a pollutant.
Some of the most frequent pollutants in our ecosystem include: gases such as sulphur dioxide, elements such as mercury and arsenic, and even pollution by nutrients which is referred to as eutrophication. Each of these pollutants pose a different effect on the ecosystem at different doses. This varied effect is what is referred to as dose and duration. The amount of the pollutant administered over what period of time greatly affects the impact that the pollutant will have on an ecosystem and population.
Pollutants can affect both a population and an ecosystem. A pollutant on a population level can be either non-target or target. Target effects are those that can kill off the entire population. Non-target effects are those that effects a significant number of individuals and spreads over to other individuals, such is the case when crop dusters spread herbicides, insecticides. Next we look at population damage by a pollutant, which in turn has a detrimental effect on the ecosystem in several ways. First, by the killing of an entire population by a pollutant, it offsets the food chain and potentially kills off other species that depended on that organism for food. Such is the case when a keystone species is killed. If predators were the dominant species high on the food chain, the organisms that the predator keep to a minimum could massively over produce creating a disturbance in the delicate balance of carrying capacity in the ecosystem. Along with this imbalance another potential problem in an ecosystem is the possibility of the pollutant accumulating in the (lipophilic) fat cells. As the pollutant makes it way through the food chain it increases with the increasing body mass of the organism. These potential problems are referred to as bioconcentration and biomagnificaiton, respectively. Both of these problems being a great concern of humans because of their location on the food chain. These are only a few of the impacts that a pollutant can have on a population and ecosystem.
Another factor to consider is the carrying capacity when evaluating the effects of a pollutant on an ecosystem. A carrying capacity curve describes the number of individuals that a specific ecosystem can sustain. Factors involved include available resources (food, water, etc.), other members of the species of reproductive age and abiotic factors such as climate, terrain are all determinants of carrying capacity. This curve is drawn below:
# of individuals
Years
If a pollutant is introduced into an ecosystem , it can affect the carrying capacity curve of several organisms (Chiras, 127). This effect on the curve is caused by the killing off of the intolerant and allowing more room for both the resistant strain and new organisms. In some cases the pollutant will create unsuitable habitats causing migration.
Another important part of the idea of a carrying capacity is the Verholst (logistic) equation: The actual growth rate is equal to the potential growth rate multiplied by the carrying capacity level. Three major characteristics exist for this equation. First, that the rate of growth is density dependent, the larger the population, the slower it will grow. Secondly, the population growth is not limited and will reach a stable maximum. Lastly, the speed at which a population approaches its maximum value is solely determined by the rate of increase (r). In a population with a stable age structure this would be the birth rate minus the death rate, but this is almost impossible. If any of the variables in this equation are affected by a pollutant then the growth rate of an organism can be seriously affected which can in turn affect the entire ecosystem (Freeman, 122).
Now using the approach of classical toxicology we study the poisoning effects of chemicals on individual animals resulting in lethal or sublethal effects. Effects on individuals may range from rapid death (lethal) through sublethal effects to no effects at all. The most obvious effect of exposure to a pollutant is rapid death and it is common practice to assess this type of toxicity by the LD50 (the lethal dose for 50% of test animals) values, scientist can judge the relative toxicity of two chemicals. For example, a chemical with an LD50 of 200 milligrams per kilogram of body weight is half as toxic as one with an LD50 the more toxic a chemical. Death is rarely instantaneous, and even cyanide takes at least some tens of seconds to kill a human being. Death is alwaBAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD one set of conditions, often ill defined, with one type of exposure, and with no indication of the influence of other environmental variables.
Perkins (1979) suggests that a sublethal exposure kills at most only a small proportion of a population, but the possibility that s sublethal exposure could cause a small proportion of individuals to die from acute toxicity seems self contradictory (Freedman, 126). For both the sake of this assignment and for practical purposes, it would be incautious to suppose that a sublethal exposure that affects individual organisms adversely is not close to that which will affect the population. There is no good reason to suppose that there is a constant relationship for different pollutants or different species, between the dose needed to kill and that needed to impair an organism. Therefore, given the difficulties of studying an ecosystem, the most effective way to predict biological effects is likely to be by discerning the least exposure that produces a deleterious response in individual organisms (Moriarty, 1960) and then examining the extent to which different environmental conditions alter this minimum exposure.
Further adding to the complexity several additional factors come into play with the effect and response of an organism from a pollutant. One such factor is age. Although we think of youngsters of all species as resilient creatures, young, growing organisms are generally more susceptible to toxic chemicals than adults (Chiras, 127). Health Status is determined by many factors, among them one's nutrition, level of stress, and personal habits such as smoking. As a rule, the poorer one's health, the more susceptible he or she is to a toxin (Freeman, 214). Toxins may also interact with each other producing several different responses. Some chemical substances for example, team up to produce an additive response that is, an effect that is simply the sum of the individual responses. Others may produce a synergistic response that is, a response stronger than the sum of the two individual ones. A pollutant can also synergize for instance, sulphur dioxide gas and particulates (minute airborne particles) inhaled together can reduce air flow through the lungs' tiny passages. The combined response is much greater than the sum of the individual responses.
Plants have three strategies in response to a disturbance - this was suggested by Grimes. These strategies are:
C - selection - having high competitive ability
S - selection - having a high endurance for stress
R - selection - having a good ability to colonize disturbed areas.
Plant response to a disturbance was suggested by Connell and Slatyer (1977) using models. Model I (the "facilitation" model assumes that only certain species that come early in the succession are capable of colonizing the site. In contrast the other two models both assume that any individual of any species that happens to arrive at the site is capable of colonizing it, although all models accept that certain species will tend to appear first because of their colonizing abilities. All models also suppose that the first colonist will so modify the site that it becomes unsuitable for those species that normally occur early in the succession. The three hypotheses then suggest three different ways in which other species will appear. Model I suggests that early occupants modify the environment so that it becomes more suitable for species that come later in the succession. Model II (the "tolerance" model) suggests that the sequence in which species appear depends solely on their speeds of dispersal and growth. Model III (inhibition) - the species already present makes the environment less suitable for subsequent recruitment of later species. All these hypothesis do not rely on the idea of a community as a sugra-organism but on succession as a sugra-organism but on succession as a process that relies on two factors: 1) the probabilities that propagules of different species will be present and
2) the ability of these propagules to survive, develop and reproduce.
Now to look at the whole picture, we ask ourselves: "How do we predict the response of a community from a pollutant?" Should we look at one population at a time, or in some holistic approach. Moriarty suggests that some of the currently favored approaches rest ont he assumption, often implicit rather than explicit, that communities are sugra-organisms. He (Moriarty) suggests that two topics that should be discussed when dealing with the idea of community response: 1) indicator species
2) biological or environmental health may be misleading.
The term indicator species, which is used in the classification of communities (p. 62) is also used in ecotoxicology, with a variety of meanings. At times it indicates the idea that knowledge of one species within a community will indicate the well-being or biological health of the whole community. Moriarty suggests that this seems a reasonable proposition if one accepts the traditional view of community as sugra-organism, but suggests that it is in fact misleading. He (Moriarty) adds that there is no fundamental reason from community structure to suppose that any particular species within the community will give a better measure of impact from pollutants than will another. Pollutants will affect populations of particular species, and which species are first affected will depend on the relative degrees of exposure and susceptibility and these are functions much more of the particular pollutant and of the individual species than of the community. An indicator species can only be used to assess the impact of pollution on a community if quite a lot is known about both the pollution and the community (Moriarty, 69). Concerning the idea of the concept of biological or environmental health being misleading: one may properly refer to the health of a community. A community can change "markedly" if affected by a pollutant, but it will just become a different community that is neither more nor less "healthy" just different (Moriarty, 69). It may be a less desirable community, for economic, social, scientific or aesthetic reasons, but that is quite a different matter. Effects of pollution may be described as a retrogression - a decrease in diversity, productivity, biomass and structural complexity. Moriarty argues that while there may be the appearance of a retrogression process it should not be taken as a generality. In conclusion, on the effect and response of an organism from a pollutant, the most appropriate emphasis is on populations. The effect of pollutants on populations within a community can be complex and apart from reduction or elimination of populations - resurgence, population increase or introduction of rarer species, sublethal effects and genetic changes may all be part of the changes that occur.
Another very important characteristic of populations that we cannot overlook is their emetic composition. Much of the variation between individuals is inherited from their parents. It is common knowledge that relatively few offspring of any species survive to reproduce. Charles Darwin (biologist, 1859) formed the idea of natural selection: the idea that some individuals will have a higher probability of survival than others, and on average such individuals will then leave more descendants than other less well adapted individuals. We will use Darwin's, Mendel's and Watson and Crick's and other information to investigate our concern - the role of pollutants in natural selection. It has been shown many times that pollutants can exert powerful selective forces, and we need therefore to understand something of the mechanisms of inheritance and how natural selection acts on populations.
For the purpose of this assignment I will outline/review all the general findings of important works that proved significant in understanding the concepts of genetics. A good place to start would be with an outline of some of Mendel's results obtained when breeding peas (Pisum sativum). "A" indicates the dominate gene for yellow seed, "a" the recessive gene for green seed.
However, genes do not always fall into this simple dominant/recessive pattern. Some may be incompletely dominant in the heterozygote, showing a transition stage between the phenotypes of the homozygous dominant and recessive conditions. Later workers also found that there are often more than two alternative forms alleles) of a gene. One such worker was Avery (1944) who showed that the genetic material in a bacterium consists of the nucleic acid DNA (deoxyribonucleic acid), and in 1953 Watson and Crick first suggested the three-dimensional structure of DNA from which has developed all the subsequent work on the genetic code. The essential feature of this code is that: genes are arranged along chromosomes, which in essence may be regarded as giant molecules of DNA. The DNA molecule consists of two intertwined helical chains of many nucleotides, with ten nucleotides in both chains for each complete turn of the helix (Watson, 1965).
Diagram to illustrate the double helix of DNA with the two polynucleotide chains linked by complementary base-pairs (Adenine (A) with Thymine (T), and Guanine (G) with Cytosive (C). Replication occurs when the two strands separate and both act as templates on which new complementary strands are formed (Moriarty, 62).
Occasionally, something goes wrong with the replication process and one or more genes may be altered, lost or gained. These changes, or mutations are usually less favorable to the organism than the original gene, and are often sufficiently unfavorable to be lethal. Nevertheless, mutations in the reproductive cells are of crucial importance: these are in favorable, the source of new genetic variation in subsequent generations.
This knowledge about gene structure and function modifies the Mendelian view of inheritance.
Now, after the brief introduction and history of genetics it is time to consider the relevance of ecological genetics to pollution. Most current problems of pollution occur on a much shorter time-scale than that required for the evolution of new species. The critical difference between evolutionary change and that wrought by pollution is the speed: populations can disappear very rapidly from pollution and if unchecked, we would have a very impoverished fauna and flora (Moriarty, 81).
One very popular example of the effects of pollution on wildlife, and perhaps the most striking evolutionary change over to be actually witnessed was the occurrence of melanism in moths. This effect is commonly associated with industrial development. White moths would rest on white lichen on trees and were well-nigh visible on them. But with industrial pollution (between 1848 and 1990) lichen turned a black color exposing and making the white moth (f. typica) prey to birds. Birds posed a selective pressure against the white moths. Now black moths were favored evolutionary. This is known as the heterozygous advantage, in which a bank of recessive alleles becomes favored due to a change in the environment. The biological significance of melanism was a matter for debate for some decades, and although it is now generally accepted that melanism in (f. typica) is associated with atmospheric pollution, some of the details are still unclear. Although several points are worth emphasizing. Pollution in this instance is not having a direct effect on the moth populations, nor indeed on their predators, but an alteration to the habitat has altered greatly the relative fitness of different genotypes. Melanism also illustrates the difficulty of producing adequate proof, or disproof, of cause and effect when pollutants are thought to be causing major biological effects. In conclusion, with regards to genetics, it is important to appreciate that the effects of pollutants can be modified by an organisms genetic constitution, and that pollutants can alter a population's gene pool (Freeman, 128). The interactions between pollutants and genes can be relevant both to understanding and to predicting effects and are potentially of great value for monitoring (Moriarty, 102).
In summary, as stated throughout this school year in my 2375 Pollution class, the effects of pollutants on populations are mediated via their effects, direct or indirect on individuals and the likelihood of these effects depends on the dose. Sublethal effects can be unravelled from knowledge of the mode of action. Alternatively, emphasis in the study of sublethal effects can be placed on the health of the individual organism. With both approaches, the effect of other environmental variables needs to be given much more prominence than heretofore and this could profitably be linked with studies on amounts of pollutant within organisms (Moriarty, 176). It is from this basis that Moriarty states that we have to consider how best to predict and to monitor the ecological effects of potential pollutants.
In my opinion, I feel that (as does Moriarty) one should relate pollution to the wider aspects of man's impact on his environment. We can, to a considerable extent, control and mitigate our negative impacts upon this planet because as we have learned from our past experiences, this planet does have a finite carrying capacity for our own as well as for all other species.
References
Campbell, N.A. Biology (3rd ed) 1993. Benjamin/Cummings Publishing Company.
Chiras, Daniel D. Environmental Science: Action for a Sustainable Future. (4th ed), 1994. The Benjamin/Cummings Publishing Company.
Freedman, Bill. Environmental Ecology: The Ecological Effects of pollution, disturbances and other stresses / Bill Freedman (2nd ed.), 1995.
Moriarty, F. Ecotoxicology (2nd ed), 1993. Academic Press Limited.
Paul Cordova
L. Lehr
December 11, 1995
"An Ecosystem's Disturbance by a Pollutant
Freedman defines a pollutant as "the occurrence of toxic substances or energy in a larger quality then the ecological communities or particular species can tolerate without suffering measurable detriment" (Freeman, 562). Although the effects of a pollutant on an organism vary depending on the dose and duration (how long administered). The impact can be one of sublethality to lethality, all dependent upon the factors involved. These factors need to be looked at when determining an ecosystem's disturbance by a pollutant.
Some of the most frequent pollutants in our ecosystem include: gases such as sulphur dioxide, elements such as mercury and arsenic, and even pollution by nutrients which is referred to as eutrophication. Each of these pollutants pose a different effect on the ecosystem at different doses. This varied effect is what is referred to as dose and duration. The amount of the pollutant administered over what period of time greatly affects the impact that the pollutant will have on an ecosystem and population.
Pollutants can affect both a population and an ecosystem. A pollutant on a population level can be either non-target or target. Target effects are those that can kill off the entire population. Non-target effects are those that effects a significant number of individuals and spreads over to other individuals, such is the case when crop dusters spread herbicides, insecticides. Next we look at population damage by a pollutant, which in turn has a detrimental effect on the ecosystem in several ways. First, by the killing of an entire population by a pollutant, it offsets the food chain and potentially kills off other species that depended on that organism for food. Such is the case when a keystone species is killed. If predators were the dominant species high on the food chain, the organisms that the predator keep to a minimum could massively over produce creating a disturbance in the delicate balance of carrying capacity in the ecosystem. Along with this imbalance another potential problem in an ecosystem is the possibility of the pollutant accumulating in the (lipophilic) fat cells. As the pollutant makes it way through the food chain it increases with the increasing body mass of the organism. These potential problems are referred to as bioconcentration and biomagnificaiton, respectively. Both of these problems being a great concern of humans because of their location on the food chain. These are only a few of the impacts that a pollutant can have on a population and ecosystem.
Another factor to consider is the carrying capacity when evaluating the effects of a pollutant on an ecosystem. A carrying capacity curve describes the number of individuals that a specific ecosystem can sustain. Factors involved include available resources (food, water, etc.), other members of the species of reproductive age and abiotic factors such as climate, terrain are all determinants of carrying capacity. This curve is drawn below:
# of individuals
Years
If a pollutant is introduced into an ecosystem , it can affect the carrying capacity curve of several organisms (Chiras, 127). This effect on the curve is caused by the killing off of the intolerant and allowing more room for both the resistant strain and new organisms. In some cases the pollutant will create unsuitable habitats causing migration.
Another important part of the idea of a carrying capacity is the Verholst (logistic) equation: The actual growth rate is equal to the potential growth rate multiplied by the carrying capacity level. Three major characteristics exist for this equation. First, that the rate of growth is density dependent, the larger the population, the slower it will grow. Secondly, the population growth is not limited and will reach a stable maximum. Lastly, the speed at which a population approaches its maximum value is solely determined by the rate of increase (r). In a population with a stable age structure this would be the birth rate minus the death rate, but this is almost impossible. If any of the variables in this equation are affected by a pollutant then the growth rate of an organism can be seriously affected which can in turn affect the entire ecosystem (Freeman, 122).
Now using the approach of classical toxicology we study the poisoning effects of chemicals on individual animals resulting in lethal or sublethal effects. Effects on individuals may range from rapid death (lethal) through sublethal effects to no effects at all. The most obvious effect of exposure to a pollutant is rapid death and it is common practice to assess this type of toxicity by the LD50 (the lethal dose for 50% of test animals) values, scientist can judge the relative toxicity of two chemicals. For example, a chemical with an LD50 of 200 milligrams per kilogram of body weight is half as toxic as one with an LD50 the more toxic a chemical. Death is rarely instantaneous, and even cyanide takes at least some tens of seconds to kill a human being. Death is alwaBAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD one set of conditions, often ill defined, with one type of exposure, and with no indication of the influence of other environmental variables.
Perkins (1979) suggests that a sublethal exposure kills at most only a small proportion of a population, but the possibility that s sublethal exposure could cause a small proportion of individuals to die from acute toxicity seems self contradictory (Freedman, 126). For both the sake of this assignment and for practical purposes, it would be incautious to suppose that a sublethal exposure that affects individual organisms adversely is not close to that which will affect the population. There is no good reason to suppose that there is a constant relationship for different pollutants or different species, between the dose needed to kill and that needed to impair an organism. Therefore, given the difficulties of studying an ecosystem, the most effective way to predict biological effects is likely to be by discerning the least exposure that produces a deleterious response in individual organisms (Moriarty, 1960) and then examining the extent to which different environmental conditions alter this minimum exposure.
Further adding to the complexity several additional factors come into play with the effect and response of an organism from a pollutant. One such factor is age. Although we think of youngsters of all species as resilient creatures, young, growing organisms are generally more susceptible to toxic chemicals than adults (Chiras, 127). Health Status is determined by many factors, among them one's nutrition, level of stress, and personal habits such as smoking. As a rule, the poorer one's health, the more susceptible he or she is to a toxin (Freeman, 214). Toxins may also interact with each other producing several different responses. Some chemical substances for example, team up to produce an additive response that is, an effect that is simply the sum of the individual responses. Others may produce a synergistic response that is, a response stronger than the sum of the two individual ones. A pollutant can also synergize for instance, sulphur dioxide gas and particulates (minute airborne particles) inhaled together can reduce air flow through the lungs' tiny passages. The combined response is much greater than the sum of the individual responses.
Plants have three strategies in response to a disturbance - this was suggested by Grimes. These strategies are:
C - selection - having high competitive ability
S - selection - having a high endurance for stress
R - selection - having a good ability to colonize disturbed areas.
Plant response to a disturbance was suggested by Connell and Slatyer (1977) using models. Model I (the "facilitation" model assumes that only certain species that come early in the succession are capable of colonizing the site. In contrast the other two models both assume that any individual of any species that happens to arrive at the site is capable of colonizing it, although all models accept that certain species will tend to appear first because of their colonizing abilities. All models also suppose that the first colonist will so modify the site that it becomes unsuitable for those species that normally occur early in the succession. The three hypotheses then suggest three different ways in which other species will appear. Model I suggests that early occupants modify the environment so that it becomes more suitable for species that come later in the succession. Model II (the "tolerance" model) suggests that the sequence in which species appear depends solely on their speeds of dispersal and growth. Model III (inhibition) - the species already present makes the environment less suitable for subsequent recruitment of later species. All these hypothesis do not rely on the idea of a community as a sugra-organism but on succession as a sugra-organism but on succession as a process that relies on two factors: 1) the probabilities that propagules of different species will be present and
2) the ability of these propagules to survive, develop and reproduce.
Now to look at the whole picture, we ask ourselves: "How do we predict the response of a community from a pollutant?" Should we look at one population at a time, or in some holistic approach. Moriarty suggests that some of the currently favored approaches rest ont he assumption, often implicit rather than explicit, that communities are sugra-organisms. He (Moriarty) suggests that two topics that should be discussed when dealing with the idea of community response: 1) indicator species
2) biological or environmental health may be misleading.
The term indicator species, which is used in the classification of communities (p. 62) is also used in ecotoxicology, with a variety of meanings. At times it indicates the idea that knowledge of one species within a community will indicate the well-being or biological health of the whole community. Moriarty suggests that this seems a reasonable proposition if one accepts the traditional view of community as sugra-organism, but suggests that it is in fact misleading. He (Moriarty) adds that there is no fundamental reason from community structure to suppose that any particular species within the community will give a better measure of impact from pollutants than will another. Pollutants will affect populations of particular species, and which species are first affected will depend on the relative degrees of exposure and susceptibility and these are functions much more of the particular pollutant and of the individual species than of the community. An indicator species can only be used to assess the impact of pollution on a community if quite a lot is known about both the pollution and the community (Moriarty, 69). Concerning the idea of the concept of biological or environmental health being misleading: one may properly refer to the health of a community. A community can change "markedly" if affected by a pollutant, but it will just become a different community that is neither more nor less "healthy" just different (Moriarty, 69). It may be a less desirable community, for economic, social, scientific or aesthetic reasons, but that is quite a different matter. Effects of pollution may be described as a retrogression - a decrease in diversity, productivity, biomass and structural complexity. Moriarty argues that while there may be the appearance of a retrogression process it should not be taken as a generality. In conclusion, on the effect and response of an organism from a pollutant, the most appropriate emphasis is on populations. The effect of pollutants on populations within a community can be complex and apart from reduction or elimination of populations - resurgence, population increase or introduction of rarer species, sublethal effects and genetic changes may all be part of the changes that occur.
Another very important characteristic of populations that we cannot overlook is their emetic composition. Much of the variation between individuals is inherited from their parents. It is common knowledge that relatively few offspring of any species survive to reproduce. Charles Darwin (biologist, 1859) formed the idea of natural selection: the idea that some individuals will have a higher probability of survival than others, and on average such individuals will then leave more descendants than other less well adapted individuals. We will use Darwin's, Mendel's and Watson and Crick's and other information to investigate our concern - the role of pollutants in natural selection. It has been shown many times that pollutants can exert powerful selective forces, and we need therefore to understand something of the mechanisms of inheritance and how natural selection acts on populations.
For the purpose of this assignment I will outline/review all the general findings of important works that proved significant in understanding the concepts of genetics. A good place to start would be with an outline of some of Mendel's results obtained when breeding peas (Pisum sativum). "A" indicates the dominate gene for yellow seed, "a" the recessive gene for green seed.
However, genes do not always fall into this simple dominant/recessive pattern. Some may be incompletely dominant in the heterozygote, showing a transition stage between the phenotypes of the homozygous dominant and recessive conditions. Later workers also found that there are often more than two alternative forms alleles) of a gene. One such worker was Avery (1944) who showed that the genetic material in a bacterium consists of the nucleic acid DNA (deoxyribonucleic acid), and in 1953 Watson and Crick first suggested the three-dimensional structure of DNA from which has developed all the subsequent work on the genetic code. The essential feature of this code is that: genes are arranged along chromosomes, which in essence may be regarded as giant molecules of DNA. The DNA molecule consists of two intertwined helical chains of many nucleotides, with ten nucleotides in both chains for each complete turn of the helix (Watson, 1965).
Diagram to illustrate the double helix of DNA with the two polynucleotide chains linked by complementary base-pairs (Adenine (A) with Thymine (T), and Guanine (G) with Cytosive (C). Replication occurs when the two strands separate and both act as templates on which new complementary strands are formed (Moriarty, 62).
Occasionally, something goes wrong with the replication process and one or more genes may be altered, lost or gained. These changes, or mutations are usually less favorable to the organism than the original gene, and are often sufficiently unfavorable to be lethal. Nevertheless, mutations in the reproductive cells are of crucial importance: these are in favorable, the source of new genetic variation in subsequent generations.
This knowledge about gene structure and function modifies the Mendelian view of inheritance.
Now, after the brief introduction and history of genetics it is time to consider the relevance of ecological genetics to pollution. Most current problems of pollution occur on a much shorter time-scale than that required for the evolution of new species. The critical difference between evolutionary change and that wrought by pollution is the speed: populations can disappear very rapidly from pollution and if unchecked, we would have a very impoverished fauna and flora (Moriarty, 81).
One very popular example of the effects of pollution on wildlife, and perhaps the most striking evolutionary change over to be actually witnessed was the occurrence of melanism in moths. This effect is commonly associated with industrial development. White moths would rest on white lichen on trees and were well-nigh visible on them. But with industrial pollution (between 1848 and 1990) lichen turned a black color exposing and making the white moth (f. typica) prey to birds. Birds posed a selective pressure against the white moths. Now black moths were favored evolutionary. This is known as the heterozygous advantage, in which a bank of recessive alleles becomes favored due to a change in the environment. The biological significance of melanism was a matter for debate for some decades, and although it is now generally accepted that melanism in (f. typica) is associated with atmospheric pollution, some of the details are still unclear. Although several points are worth emphasizing. Pollution in this instance is not having a direct effect on the moth populations, nor indeed on their predators, but an alteration to the habitat has altered greatly the relative fitness of different genotypes. Melanism also illustrates the difficulty of producing adequate proof, or disproof, of cause and effect when pollutants are thought to be causing major biological effects. In conclusion, with regards to genetics, it is important to appreciate that the effects of pollutants can be modified by an organisms genetic constitution, and that pollutants can alter a population's gene pool (Freeman, 128). The interactions between pollutants and genes can be relevant both to understanding and to predicting effects and are potentially of great value for monitoring (Moriarty, 102).
In summary, as stated throughout this school year in my 2375 Pollution class, the effects of pollutants on populations are mediated via their effects, direct or indirect on individuals and the likelihood of these effects depends on the dose. Sublethal effects can be unravelled from knowledge of the mode of action. Alternatively, emphasis in the study of sublethal effects can be placed on the health of the individual organism. With both approaches, the effect of other environmental variables needs to be given much more prominence than heretofore and this could profitably be linked with studies on amounts of pollutant within organisms (Moriarty, 176). It is from this basis that Moriarty states that we have to consider how best to predict and to monitor the ecological effects of potential pollutants.
In my opinion, I feel that (as does Moriarty) one should relate pollution to the wider aspects of man's impact on his environment. We can, to a considerable extent, control and mitigate our negative impacts upon this planet because as we have learned from our past experiences, this planet does have a finite carrying capacity for our own as well as for all other species.
References
Campbell, N.A. Biology (3rd ed) 1993. Benjamin/Cummings Publishing Company.
Chiras, Daniel D. Environmental Science: Action for a Sustainable Future. (4th ed), 1994. The Benjamin/Cummings Publishing Company.
Freedman, Bill. Environmental Ecology: The Ecological Effects of pollution, disturbances and other stresses / Bill Freedman (2nd ed.), 1995.
Moriarty, F. Ecotoxicology (2nd ed), 1993. Academic Press Limited.
Paul Cordova
L. Lehr
December 11, 1995
"An Ecosystem's Disturbance by a Pollutant
Freedman defines a pollutant as "the occurrence of toxic substances or energy in a larger quality then the ecological communities or particular species can tolerate without suffering measurable detriment" (Freeman, 562). Although the effects of a pollutant on an organism vary depending on the dose and duration (how long administered). The impact can be one of sublethality to lethality, all dependent upon the factors involved. These factors need to be looked at when determining an ecosystem's disturbance by a pollutant.
Some of the most frequent pollutants in our ecosystem include: gases such as sulphur dioxide, elements such as mercury and arsenic, and even pollution by nutrients which is referred to as eutrophication. Each of these pollutants pose a different effect on the ecosystem at different doses. This varied effect is what is referred to as dose and duration. The amount of the pollutant administered over what period of time greatly affects the impact that the pollutant will have on an ecosystem and population.
Pollutants can affect both a population and an ecosystem. A pollutant on a population level can be either non-target or target. Target effects are those that can kill off the entire population. Non-target effects are those that effects a significant number of individuals and spreads over to other individuals, such is the case when crop dusters spread herbicides, insecticides. Next we look at population damage by a pollutant, which in turn has a detrimental effect on the ecosystem in several ways. First, by the killing of an entire population by a pollutant, it offsets the food chain and potentially kills off other species that depended on that organism for food. Such is the case when a keystone species is killed. If predators were the dominant species high on the food chain, the organisms that the predator keep to a minimum could massively over produce creating a disturbance in the delicate balance of carrying capacity in the ecosystem. Along with this imbalance another potential problem in an ecosystem is the possibility of the pollutant accumulating in the (lipophilic) fat cells. As the pollutant makes it way through the food chain it increases with the increasing body mass of the organism. These potential problems are referred to as bioconcentration and biomagnificaiton, respectively. Both of these problems being a great concern of humans because of their location on the food chain. These are only a few of the impacts that a pollutant can have on a population and ecosystem.
Another factor to consider is the carrying capacity when evaluating the effects of a pollutant on an ecosystem. A carrying capacity curve describes the number of individuals that a specific ecosystem can sustain. Factors involved include available resources (food, water, etc.), other members of the species of reproductive age and abiotic factors such as climate, terrain are all determinants of carrying capacity. This curve is drawn below:
# of individuals
Years
If a pollutant is introduced into an ecosystem , it can affect the carrying capacity curve of several organisms (Chiras, 127). This effect on the curve is caused by the killing off of the intolerant and allowing more room for both the resistant strain and new organisms. In some cases the pollutant will create unsuitable habitats causing migration.
Another important part of the idea of a carrying capacity is the Verholst (logistic) equation: The actual growth rate is equal to the potential growth rate multiplied by the carrying capacity level. Three major characteristics exist for this equation. First, that the rate of growth is density dependent, the larger the population, the slower it will grow. Secondly, the population growth is not limited and will reach a stable maximum. Lastly, the speed at which a population approaches its maximum value is solely determined by the rate of increase (r). In a population with a stable age structure this would be the birth rate minus the death rate, but this is almost impossible. If any of the variables in this equation are affected by a pollutant then the growth rate of an organism can be seriously affected which can in turn affect the entire ecosystem (Freeman, 122).
Now using the approach of classical toxicology we study the poisoning effects of chemicals on individual animals resulting in lethal or sublethal effects. Effects on individuals may range from rapid death (lethal) through sublethal effects to no effects at all. The most obvious effect of exposure to a pollutant is rapid death and it is common practice to assess this type of toxicity by the LD50 (the lethal dose for 50% of test animals) values, scientist can judge the relative toxicity of two chemicals. For example, a chemical with an LD50 of 200 milligrams per kilogram of body weight is half as toxic as one with an LD50 the more toxic a chemical. Death is rarely instantaneous, and even cyanide takes at least some tens of seconds to kill a human being. Death is alwaBAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD one set of conditions, often ill defined, with one type of exposure, and with no indication of the influence of other environmental variables.
Perkins (1979) suggests that a sublethal exposure kills at most only a small proportion of a population, but the possibility that s sublethal exposure could cause a small proportion of individuals to die from acute toxicity seems self contradictory (Freedman, 126). For both the sake of this assignment and for practical purposes, it would be incautious to suppose that a sublethal exposure that affects individual organisms adversely is not close to that which will affect the population. There is no good reason to suppose that there is a constant relationship for different pollutants or different species, between the dose needed to kill and that needed to impair an organism. Therefore, given the difficulties of studying an ecosystem, the most effective way to predict biological effects is likely to be by discerning the least exposure that produces a deleterious response in individual organisms (Moriarty, 1960) and then examining the extent to which different environmental conditions alter this minimum exposure.
Further adding to the complexity, several additional factors come into play in an organism's response to a pollutant. One such factor is age. Although we think of youngsters of all species as resilient creatures, young, growing organisms are generally more susceptible to toxic chemicals than adults (Chiras, 127). Health status is determined by many factors, among them nutrition, level of stress, and personal habits such as smoking. As a rule, the poorer one's health, the more susceptible he or she is to a toxin (Freedman, 214). Toxins may also interact with each other, producing several different responses. Some chemical substances, for example, team up to produce an additive response, that is, an effect that is simply the sum of the individual responses. Others may produce a synergistic response, that is, a response stronger than the sum of the two individual ones. For instance, sulphur dioxide gas and particulates (minute airborne particles) inhaled together can reduce air flow through the lungs' tiny passages; the combined response is much greater than the sum of the individual responses.
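The distinction between additive and synergistic responses can be sketched numerically; the percentage figures below are invented for illustration only and are not measurements from the cited sources.

# Hypothetical percentage reductions in airflow caused by each pollutant alone.
effect_sulphur_dioxide = 10.0
effect_particulates = 15.0

def classify_interaction(effect_a, effect_b, combined):
    # Additive: the combined effect is simply the sum of the individual ones.
    # Synergistic: the combined effect exceeds that sum.
    expected = effect_a + effect_b
    if combined > expected:
        return "synergistic"
    if combined < expected:
        return "antagonistic"
    return "additive"

# A hypothetical measured combined effect larger than 10 + 15 = 25.
print(classify_interaction(effect_sulphur_dioxide, effect_particulates, 40.0))
# -> "synergistic"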
Grime suggested that plants have three strategies in response to a disturbance. These strategies are:
C - selection - having high competitive ability
S - selection - having a high endurance for stress
R - selection - having a good ability to colonize disturbed areas.
Plant response to a disturbance was modelled by Connell and Slatyer (1977). Model I (the "facilitation" model) assumes that only certain species that come early in the succession are capable of colonizing the site. In contrast, the other two models both assume that any individual of any species that happens to arrive at the site is capable of colonizing it, although all models accept that certain species will tend to appear first because of their colonizing abilities. All models also suppose that the first colonists will so modify the site that it becomes unsuitable for those species that normally occur early in the succession. The three hypotheses then suggest three different ways in which other species will appear. Model I suggests that early occupants modify the environment so that it becomes more suitable for species that come later in the succession. Model II (the "tolerance" model) suggests that the sequence in which species appear depends solely on their speeds of dispersal and growth. Model III (the "inhibition" model) suggests that the species already present make the environment less suitable for subsequent recruitment of later species. None of these hypotheses relies on the idea of a community as a supra-organism; rather, they treat succession as a process that depends on two factors: 1) the probabilities that propagules of different species will be present and
2) the ability of these propagules to survive, develop and reproduce.
Now, to look at the whole picture, we ask ourselves: "How do we predict the response of a community to a pollutant?" Should we look at one population at a time, or take some holistic approach? Moriarty suggests that some of the currently favored approaches rest on the assumption, often implicit rather than explicit, that communities are supra-organisms. He suggests two topics that should be discussed when dealing with the idea of community response: 1) indicator species, and
2) the concept of biological or environmental health, which may be misleading.
The term indicator species, which is used in the classification of communities (p. 62), is also used in ecotoxicology with a variety of meanings. At times it carries the idea that knowledge of one species within a community will indicate the well-being or biological health of the whole community. Moriarty suggests that this seems a reasonable proposition if one accepts the traditional view of the community as a supra-organism, but argues that it is in fact misleading. He adds that there is no fundamental reason, based on community structure, to suppose that any particular species within the community will give a better measure of impact from pollutants than will another. Pollutants will affect populations of particular species, and which species are first affected will depend on the relative degrees of exposure and susceptibility, and these are functions much more of the particular pollutant and of the individual species than of the community. An indicator species can only be used to assess the impact of pollution on a community if quite a lot is known about both the pollution and the community (Moriarty, 69). Concerning the concept of biological or environmental health being misleading: one may properly refer to the health of a community, and a community can change markedly if affected by a pollutant, but it will just become a different community that is neither more nor less "healthy", just different (Moriarty, 69). It may be a less desirable community, for economic, social, scientific or aesthetic reasons, but that is quite a different matter. Effects of pollution may be described as a retrogression - a decrease in diversity, productivity, biomass and structural complexity. Moriarty argues that while there may be the appearance of a retrogression process, it should not be taken as a generality. In conclusion, on the effect on and response of an organism to a pollutant, the most appropriate emphasis is on populations. The effect of pollutants on populations within a community can be complex, and apart from reduction or elimination of populations - resurgence, population increase or introduction of rarer species, sublethal effects and genetic changes may all be part of the changes that occur.
Another very important characteristic of populations that we cannot overlook is their genetic composition. Much of the variation between individuals is inherited from their parents. It is common knowledge that relatively few offspring of any species survive to reproduce. Charles Darwin (1859) formed the idea of natural selection: the idea that some individuals will have a higher probability of survival than others, and on average such individuals will then leave more descendants than other, less well adapted individuals. We will use the work of Darwin, Mendel, Watson and Crick, and others to investigate our concern - the role of pollutants in natural selection. It has been shown many times that pollutants can exert powerful selective forces, and we need therefore to understand something of the mechanisms of inheritance and how natural selection acts on populations.
For the purpose of this assignment I will outline and review the general findings of important works that proved significant in understanding the concepts of genetics. A good place to start is with an outline of some of Mendel's results obtained when breeding peas (Pisum sativum). "A" indicates the dominant allele for yellow seed, "a" the recessive allele for green seed.
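As a small illustrative sketch using the A/a notation just defined, the classic monohybrid cross of two heterozygous plants can be enumerated directly, giving Mendel's familiar 3 yellow : 1 green ratio of phenotypes (the code is a sketch of that well-known result, not a reproduction of Mendel's own tables).

from collections import Counter
from itertools import product

# Cross two heterozygous (Aa) pea plants; A = yellow seed (dominant),
# a = green seed (recessive), as defined in the text above.
parent1 = ["A", "a"]
parent2 = ["A", "a"]

# Every combination of one gamete from each parent (the Punnett square).
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
genotypes = Counter(offspring)
phenotypes = Counter(
    "yellow" if "A" in genotype else "green" for genotype in offspring
)
print(genotypes)   # 1 AA : 2 Aa : 1 aa
print(phenotypes)  # 3 yellow : 1 green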
However, genes do not always fall into this simple dominant/recessive pattern. Some may be incompletely dominant in the heterozygote, showing a transition stage between the phenotypes of the homozygous dominant and recessive conditions. Later workers also found that there are often more than two alternative forms (alleles) of a gene. Another such worker was Avery (1944), who showed that the genetic material in a bacterium consists of the nucleic acid DNA (deoxyribonucleic acid), and in 1953 Watson and Crick first suggested the three-dimensional structure of DNA, from which has developed all the subsequent work on the genetic code. The essential feature of this code is that genes are arranged along chromosomes, which in essence may be regarded as giant molecules of DNA. The DNA molecule consists of two intertwined helical chains of many nucleotides, with ten nucleotides in each chain for each complete turn of the helix (Watson, 1965).
Diagram to illustrate the double helix of DNA, with the two polynucleotide chains linked by complementary base-pairs (Adenine (A) with Thymine (T), and Guanine (G) with Cytosine (C)). Replication occurs when the two strands separate and both act as templates on which new complementary strands are formed (Moriarty, 62).
Occasionally, something goes wrong with the replication process and one or more genes may be altered, lost or gained. These changes, or mutations, are usually less favorable to the organism than the original gene, and are often sufficiently unfavorable to be lethal. Nevertheless, mutations in the reproductive cells are of crucial importance: when favorable, these are the source of new genetic variation in subsequent generations.
This knowledge about gene structure and function modifies the Mendelian view of inheritance.
Now, after the brief introduction and history of genetics it is time to consider the relevance of ecological genetics to pollution. Most current problems of pollution occur on a much shorter time-scale than that required for the evolution of new species. The critical difference between evolutionary change and that wrought by pollution is the speed: populations can disappear very rapidly from pollution and if unchecked, we would have a very impoverished fauna and flora (Moriarty, 81).
One very popular example of the effects of pollution on wildlife, and perhaps the most striking evolutionary change ever to be actually witnessed, was the occurrence of melanism in moths. This effect is commonly associated with industrial development. Pale moths of the typical form (f. typica) would rest on white lichen on trees and were well-nigh invisible on them. But with industrial pollution (between 1848 and 1898) the lichen-covered bark turned black, exposing the pale moths and making them prey to birds. Birds thus posed a selective pressure against the pale moths, and black moths were now favored evolutionarily: an allele that had previously been rare became advantageous because of a change in the environment. The biological significance of melanism was a matter for debate for some decades, and although it is now generally accepted that melanism in these moths is associated with atmospheric pollution, some of the details are still unclear. Several points are worth emphasizing, however. Pollution in this instance is not having a direct effect on the moth populations, nor indeed on their predators, but an alteration to the habitat has greatly altered the relative fitness of different genotypes. Melanism also illustrates the difficulty of producing adequate proof, or disproof, of cause and effect when pollutants are thought to be causing major biological effects. In conclusion, with regard to genetics, it is important to appreciate that the effects of pollutants can be modified by an organism's genetic constitution, and that pollutants can alter a population's gene pool (Freedman, 128). The interactions between pollutants and genes can be relevant both to understanding and to predicting effects and are potentially of great value for monitoring (Moriarty, 102).
In summary, as stated throughout this school year in my 2375 Pollution class, the effects of pollutants on populations are mediated via their effects, direct or indirect, on individuals, and the likelihood of these effects depends on the dose. Sublethal effects can be unravelled from knowledge of the mode of action. Alternatively, emphasis in the study of sublethal effects can be placed on the health of the individual organism. With both approaches, the effect of other environmental variables needs to be given much more prominence than heretofore, and this could profitably be linked with studies on amounts of pollutant within organisms (Moriarty, 176). It is from this basis that Moriarty states that we have to consider how best to predict and to monitor the ecological effects of potential pollutants.
In my opinion (and Moriarty's), one should relate pollution to the wider aspects of man's impact on his environment. We can, to a considerable extent, control and mitigate our negative impacts upon this planet, because as we have learned from our past experiences, this planet does have a finite carrying capacity for our own as well as for all other species.
References
Campbell, N.A. Biology (3rd ed) 1993. Benjamin/Cummings Publishing Company.
Chiras, Daniel D. Environmental Science: Action for a Sustainable Future. (4th ed), 1994. The Benjamin/Cummings Publishing Company.
Freedman, Bill. Environmental Ecology: The Ecological Effects of Pollution, Disturbances and Other Stresses (2nd ed.), 1995.
Moriarty, F. Ecotoxicology (2nd ed), 1993. Academic Press Limited.
f:\12000 essays\sciences (985)\Enviromental\Population Bomb.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"People are realizing that we cannot forever continue to multiply and
subdue the earth without losing our standard of life and the natural beauty
that must be part of it. These are the years of decision - the decision of men
to stay the flood of man." Ehrlich here explains one of the most pressing
problems facing man in the 20th century. In The Population Bomb Ehrlich
explains that pollution, shortages, and an overall deterioration of the
standard of living are all due to overpopulation.
In chapter one Ehrlich explains the pressing problems facing modern
civilization and how these problems are directly or indirectly linked to
overpopulation. Ehrlich explains the situation using various examples of how
mass starvation is inevitable if population continues to increase the way it is
currently. In third world countries their food supplies are becoming
increasingly scarce because of their increasing populations. In these third
world countries the rich-poor gap is increasing, creating the potential for
large parts of the population to starve. Paraphrasing Ehrlich's ideas in this
chapter: there are only so many resources, and as
population increases those resources will soon be depleted. Ehrlich uses
historical population research to lead to the conclusion that in 90 years the
population could be well over the earth's carrying capacity. In third world
countries where population control is rarely used, population, pollution, and
scarcity are becoming ever increasing problems. Roughly 40% of the
population in third world countries are children 15 years or younger. Ehrlich
explains that if population growth continues at this rate older generations
will find themselves without adequate food and medicine. Near the end of
the chapter Ehrlich explains the cause of the massive increase in population
growth; as he explains that science and medicine have decreased the death
rate exponentially while the birth rate has not decreased. In "Too Little
Food" Ehrlich starts off with the assumption that about 50% of the people in
the world are in some degree malnourished. He uses statistics from "New
Republic" and the Population Crisis Committee to put the number of deaths
to around four million people dying each year of starvation alone, not disease
caused by starvation. Ehrlich explains that sometime around 1958
population growth exceeded the available food supply. When this happened
the laws of supply and demand took over and caused massive inflation in
food costs and caused marginal farmland to be put into production. All of
these signs marked a period of severe food shortages. In 1966
alone the world population increased by 70 million while food production
remained relatively the same as in 1965. Ehrlich shows that the increasing
food shortages in underdeveloped countries are putting an extra strain on
the US to produce more food to keep them from starving. Another problem arises
from these food shipments to third world countries; third world countries are
becoming dependent on aid shipments, and because of this their own food
production has declined. Ehrlich says, " Most of these countries now rely
heavily on imports. As the crisis deepens, where will the imports come from?
Not from Russia. Not from Canada, Argentina, or Australia. They need
money and will be busy selling to food-short countries such as Russia, who
can afford to buy. From the US then? They will get some, perhaps, but not
anywhere near enough. Our vast agricultural surpluses are gone. Our
agriculture is already highly efficient so that the prospects of massively
increasing production are dim. And the problems of food transports are vast.
No responsible person thinks that the US can save the world from famine
with food exports, although there is considerable debate as to how long we
can put off the day of reckoning." In the final part of chapter one Ehrlich
states all the problems that overpopulation has created. One of the first
problems is the environmental consequences of agriculture. Even the US is
facing problems maintaining our massive food production; erosion, strip-
mining, and gullying have become pressing problems facing the US. Ehrlich
presents a paradox by explaining that as food production is increased, the
quantity and quality of the farmlands are being destroyed; man is faced with
a complicated problem. One of these problems is pesticides. The pesticide
industry has actually created "super pests". These pests are immune to
pesticides. Ehrlich uses DDT as an example of how pesticides have
actually come back to damage the ecosystem they were meant to protect.
DDT, a pesticide used frequently in the middle part of the century to control
mosquitoes and other like pests, has been found to be a carcinogen and very
dangerous to human life. Traces of this chemical have been found at such
bizarre places as in penguins in Antarctica and Eskimos in Alaska. Another
problem Ehrlich denotes is the "Greenhouse Effect." All of the carbon dioxide
from industry and air pollution has affected how much heat has been
radiated back to space. Ehrlich surmises that if we continue to tamper with
the atmosphere and alter the temperature a few degrees in one way or the
other, we could possibly risk another ice age, or the melting of the polar ice
caps. Ehrlich closes chapter one with the basic theory of, "Too many cars, too
many factories, too much detergent, too much pesticide, multiplying
contrails, inadequate sewage treatment plants, too little water, too much
carbon dioxide, all can be traced easily to too many people."
Chapter three outlines what is being done to combat the problems of
overpopulation. The first solution that Ehrlich critiques is Family Planning.
Ehrlich denotes several flaws in family planning. He first notes how the
Rhythm Method used by many Catholic nations is only 15% effective in the
prevention of pregnancy. He also notes that by the time many women come
into family planning practices they already have six or seven children.
Ehrlich also uses India as an empirical example of how family planning
failed. India at the start of the program had a population of 370 million
people and a growth rate of around 1.3%. After 16 years of effort by the
program the population of India soared to over 500 million and the growth
rate more than doubled to 3%. Ehrlich states quite emphatically, "In fact, I
know of no country in the world that has achieved true population control
through family planning." The other solution Ehrlich examines the
probability of the producing more food and other materials to maintain a
larger population. Ehrlich starts by saying that this is basically non-sense,
the world will reach its carrying capacity and nothing can be done about it.
He says, " Can we expect great increases in food production to occur through
the placing of more land under cultivation? The answer is a most definite
NO." If more land can not be put under cultivation then the production curve
must some how be shifted to maximize output under the same status-quo
situations. Ehrlich really see no way to increase production enough to
counteract the effects of overpopulation. In the final section on what is being
done, Ehrlich looks at the current solutions for the environment as either
impractical or borderline absurd. Ehrlich examines how industry is polluting
the atmosphere and yet there are no substantial regulations placed upon
them. Ehrlich mentions several types of pollution such as: pesticides, carbon
dioxide, detergent, and even noise pollution. Ehrlich closes the chapter with
the analogy," What then, is being done overall to nurse our sick environment
back to health? How well are we treating these symptoms of the Earth's
disease of overpopulation? Are we getting ahead of the filth, corruption, and
noise? Are we guarding the natural cycles on which our lives depend? Are
we protecting ourselves from the subtle and chronic poisoning? The answer is
obvious: the palliatives are too few and too weak. The patient continues to get
sicker."
In the final chapters 4 and 5 Ehrlich looks at what solutions are
possible and what man can do to help out in the battle on overpopulation.
Ehrlich's solution to overpopulation is explained quite simply: "A general
answer to the question `what needs to be done?' is simple. We must bring the
world population under control, bringing the growth rate to zero or making it
go negative."
f:\12000 essays\sciences (985)\Enviromental\Poverty and How it Effects are Everyday Lives .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Another project that is trying to make headway in the war on poverty is Project South (PS), a program whose presence is strongly felt throughout the southeastern United States.
Project South (PS) is a community-based, membership institute that develops popular political and economic education and action research for organizing and liberation. They contribute to the development of a strategic vision for the movement emerging in these new times - bringing together grassroots, scholarly and cultural activists and youth on the basis of equality to join in the process of understanding and transforming our society. The work of Project South is funded by dues and contributions, in-kind work, and grants from the Atlanta Black United Fund, the Center for Responsive Politics, the Schumann Foundation, the Fund for Southern Communities, and the Mayer-Katz Foundation (South).
The following are statistics from the 1995 fiscal year and the consequences of budget cuts.
While the poverty rate of 20.8 percent for children under 18 years old in 1995 was significantly lower than the 1994 rate of 21.8 percent, it remained higher than those of other age groups. There was a significant drop in the number of people living below the official government poverty level between 1994 and 1995. In 1995, there were 36.4 million poor, a figure 1.6 million lower than the 38.1 million poor in 1994.
The 1995 federal budget set in motion a program that will see billions and billions of dollars cut from social programs. At a time of greatest need, the government is cutting support programs and making it more difficult for those who need the programs to qualify. So what is the government doing about poverty? Nothing! It is abandoning its responsibility and blaming the victims of its policies for the situation they find themselves in.
f:\12000 essays\sciences (985)\Enviromental\Predators.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Natural Resources Management
01/29/97
Predators and Ecosystem Management
Predators have a lasting effect on many kinds of ecosystems. They influence their ecosystems by controlling the abundance of lower species in certain habitats. In the article "Predators and Ecosystem Management", James A. Estes explains the results of case studies that indicate important ecological roles for predators in a large coastal ecosystem. The main challenge raised in the article is to determine whether there are recurrent patterns elsewhere in nature and to understand when and where they occur.
The author gives his perspective on predators and coastal ecosystems by giving us a living example, the sea otter and the kelp forests. The relation between sea otters and kelp forests could be studied because of an accident of history, the over-exploitation of sea otters in the Pacific maritime fur trade. The study compared areas where sea otters were abundant with nearby areas where they were almost extinct. By doing this comparison of the sea otter's coastal system it was possible to gain much insight into the sea otter's ecological role in the kelp-forest ecosystem.
Over the years it has been possible for us to observe the kelp-forest ecosystem over time; thanks to the massive growth of the sea otter population, we have observed the change from otter-free to otter-dominated.
This article relates to many aspects of our textbook. On page 89 in chapter 5 the text explains what an ecosystem is; as defined by the book, it is a community of species interacting with one another and with a non-living environment. In this case the otter and kelp-forest ecosystem is a coastal ecosystem. As mentioned in the book, the food chain describes the sequence of events linking organisms to the organisms that are the source of their food. Surveys of coastal habitats in many areas of the North Pacific Ocean have revealed that kelp forests usually are extensively deforested where sea otters are absent, whereas this condition is rare where they occur (Estes and Duggins 1995).
Ecosystem management has recently emerged as the main approach to conservation in wildlife biology and as an alternative to the traditional approach of species-level management. This kind of approach (1) has involved many resource-management agencies because of the change or disappearance of habitats, and (2) reflects the fact that the number of species at risk is great and the time is too short to conserve these species in any other way.
Bibliography
1) Estes, James A., 1996. Predator and ecosystem management.
Wildlife Society Bulletin, 24: 390-396.
2) Miller, Tyler G., 1996. Living In The Environment.
Wadsworth Publishing Company, Ca. pg. 122, 105-107.
f:\12000 essays\sciences (985)\Enviromental\Privately Owned Gasoline Powered Vehicles Should be Limited.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
February 25, 1995
Social Studies 10H
Privately Owned Gasoline Powered
Vehicles Should Be Limited
The automobile has become a very important part of today's society. It is a necessity to own or to have access to a car in order to keep up with all of the competition of the business world, and also one's social demands. Most people would not be able to travel around a country or the world without this incredible machine, for it provides freedom and mobility, even for people who do not own a car. Unfortunately, the car has a very destructive nature. Automobiles make a major contribution to air and noise pollution, depletion of fossil fuels, and to the abnormalities in children and adults due to lead poisoning. In order to stop this devastation, the use of gas powered automobiles must be limited by replacing them with alternative modes of transportation, or by finding a way to ease them out of utilization.
There are many reasons why the number of privately owned gasoline powered cars on the road should be limited. First of all, and most importantly, automobiles are harmful to our environment. Automobiles run on gasoline, which is a mixture derived from petroleum. Gasoline contains hundreds of different hydrocarbons, or compounds containing the chemical elements carbon and hydrogen(Gasoline). When the gas is burned in the engine of the car, several byproducts result. These exhausts include hydrocarbons and oxides of three elements: Carbon, nitrogen, and sulfur(Emissions). Tiny amounts of poisonous trace elements such as lead, cadmium, and nickel also are present. Everything contained in the exhaust affects the environment intensely. Auto engine exhaust contributes about 50% of today's atmospheric pollution, and in highly populated and industrialized cities, air pollution consists of up to 80% car exhaust.
Because of all of the gasoline powered cars on the road, the earth's outermost protective shell, the ozone layer, is being destroyed. The ozone layer guards against, among other things, global warming and skin cancer(Fisher 14). If it is annihilated, the whole planet, including the human race, will be erased along with it. This is one reason gasoline powered automobiles should be limited.
The automobile also contributes to noise pollution. Cities around the world are constantly packed with cars, and, as a result, there are traffic jams. Patience, as a virtue, is not always bestowed on everyone, and, therefore, people start honking their horns and yelling at others. This produces a polyphonic sound that is not very pleasing to anyone, especially those in the traffic jam who have already had a stressful day at work. Obviously, this is not the fault of the automobile itself, but the fault of the owners. If there were a limit on the number of cars allowed on such public roads as Fifth Avenue or the Henry Hudson Parkway in New York City, noise pollution, and air pollution, for that matter, would not be a major factor of environmental degradation.
Another reason privately owned gasoline powered vehicles should be limited is the depletion of the fossil fuel supply. People all over the world need petroleum, a fossil fuel, to fill their cars in order to get around. However, petroleum, like many other natural resources on this earth, is in short supply. The continued use of petroleum at the current rate will cause the limited supply to dwindle. Our society does not seem to realize this point, though, and, as a result, petroleum is wasted in many ways while en route to an automobile's gas tank.
Oil companies transport petroleum all over the world by many means. Over the years, some methods have proved to be dangerous, such as the truck, train, tanker, or boat. A clear example of this danger occurred when the Exxon Valdez tanker ran aground in March of 1989(Nadis 16). The Valdez was carrying 11 million gallons of oil, and a drunk captain, across the Prince William Sound at the time of the disaster. All 11 million gallons poured out, thereafter seen only upon the thousands of species of animals that this accident destroyed. A total ecological system was wiped out from a shipment of oil meant for automobiles.
Oil is not only lost in transport, though. Storage tanks can waste quite a lot of petroleum without anyone knowing about it, but, at the same time, polluting the environment. Seventeen million gallons of oil have leaked from a storage tank of a service station in Brooklyn, New York(Nadis 17). A similar situation has occurred in El Segundo, California, but on a much grander scale. A two hundred million gallon pool lies underneath a service station there, and twenty eight million gallons of that has oozed closer to the San Francisco Bay, endangering water supplies(17). Among the nearly six million underground oil tanks that exist in this country, five hundred thousand are believed to be leaking at the moment, wasting millions of gallons of petroleum that could be used to heat houses and fuel industries(17). However, this natural resource sits under gas stations, waiting to be pumped into a car. Instead of oil helping humanity, the loss of oil hurts it.
A third reason privately owned gasoline powered vehicles should be limited is because they contribute to an enormous source of lead in the air, which is dangerous to the body. When gasoline is burned in the engine of an automobile, it can release many things, depending upon what type of gasoline it is. There are two main types of gasoline, leaded and unleaded. Leaded gasoline contains lead, while unleaded contains far less. Fortunately, most cars today require gasoline of the unleaded type. However, some old cars still in use need leaded fuel(52). This poses a threat to every person in the world, for every one of us could die of lead poisoning.
Lead was first added to gasoline in the 1920's to improve car mileage and prevent engine knock, or an explosion that occurs when the gas is compressed in the engine(Applebee 2). Lead levels in human blood rose with the proliferation of cars and trucks on the highway(2). It has since been proved that auto emissions are the single largest source of lead in our environment, and that high levels of lead in young children can cause brain damage, mental retardation, kidney disorders, and interfere with the processing of Vitamin D(Applebee 2; Gurman 2).
Because of the preponderance of unleaded fuel on the market, the amount of lead in the air has decreased. But does this mean that the chance of lead poisoning from car exhaust has decreased dramatically? Not at all. Over twenty percent of lead poisoning cases in children reported in 1990 were caused by car exhaust, dropping only five percent from 1985 (Nadis 55). This is evidence that many, if not all, of the ways to reduce lead in the air that is harmful to humans have failed.
All of these matters indicate one thing: The automobile hurts the earth and its people environmentally and physically. In order to stop these things from occurring, we, the entire population as a whole, must consolidate our opinions and come up with alternatives to these harmful activities.
One such alternative is the electrically powered automobile, which runs on a battery much the same as the one underneath the hood of the car now(72). It even looks like a regular car. However, there are differences. The one major difference of the electric car from the gasoline powered car is that the electric car is emission free(73). However, the electric car cannot be implemented into our society because it does not run as long as the gasoline powered vehicles do. Despite this fact, if this type of car were substituted for gasoline powered cars, the environment would be on its way to becoming healthier.
If the electric car is not utilized, some measures must be taken in order to cut down air pollution. One way to do this would be to use cleaner burning fuels than what is used now, which is a mixture of many hydrocarbons(Gasoline). The highest quality fuel that can be obtained is iso-octane, which is given a rating of 100. The lowest quality fuel is heptane, which is graded 0. The gasoline that we pump into our cars is a mixture that is compared to the performance of both of these fuels. For example, an octane number of 89 means that it compares to a mixture of 89 percent iso-octane and 11 percent heptane. In order to cut down on air pollution, all gas that is pumped out of a station should be graded 95 or higher.
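As a small illustrative sketch (not drawn from the cited encyclopedia entries), the meaning of an octane number as a comparison mixture can be expressed directly:

# A minimal sketch: an octane rating compares a fuel's knock resistance to a
# reference blend of iso-octane (rated 100) and heptane (rated 0).
def reference_blend(octane_number):
    # e.g. a rating of 89 corresponds to 89% iso-octane and 11% heptane.
    return {"iso-octane %": octane_number, "heptane %": 100 - octane_number}

print(reference_blend(89))   # {'iso-octane %': 89, 'heptane %': 11}
print(reference_blend(95))   # the essay's suggested minimum grade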
However, gasoline is never completely burned in the engine of a car, no matter how high the quality(Air Pollution). This is why alternatives must be considered in order to maintain a healthy environment. These alternatives include hydrocarbons like ethanol and methanol, solar power, and steam power.
These solutions are aimed at the future, though. What can we do now to cut down on the amount of automobiles on the road? One thing that could be done is to limit the use of privately owned gasoline powered cars to specific days of the week, as was done in the 1970's. Still another way would be to restrict each family to one automobile. It will be tough to inject these solutions into our society, but if enough support is extended from the population, the plans could work.
Gottlieb Daimler invented the automobile in 1885 on the principle that it would be of help to everyone. It was seen as that for almost half a century, and then, all of a sudden, people started realizing its harmful effects. People felt the heaviness of the air, heard the noise on the streets, and discovered the harmful effects of lead. All of these things were rightfully blamed on the automobile, a machine that had transformed a relatively unpolluted earth into a contaminated sphere. It will be hard to mend all of these problems, though, because many of them are interrelated. However, a good start is to limit the ownership and use of privately owned gasoline powered automobiles. This will make difficulties like air pollution, noise pollution, natural resource depletion, and lead poisoning much easier to control and eventually do away with in the future.
Works Cited
Alternative Fuels. Compton's Encyclopedia, Online Edition. Downloaded from America Online, February 6, 1995.
Applebee, Liana. The Car-Friend and Foe. Social Issues Resources Series, 1980.
Automobile History: Alternatives to Gasoline. Compton's Encyclopedia, Online Edition. Downloaded from America Online, February 6, 1995.
Automobile Industry Model Design: Emissions. Compton's Encyclopedia, Online Edition. Downloaded from America Online, February 6, 1995.
Automobile Power Plant: Exhaust System. Compton's Encyclopedia, Online Edition. Downloaded from America Online, February 6, 1995.
Environmental Pollution: Air Pollution.
f:\12000 essays\sciences (985)\Enviromental\Purple Loosestrife.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
PURPLE LOOSESTRIFE
The scene is breathtakingly beautiful: a thick brush of purple flowers blankets Canada's wetlands. This blanket silences the expected sounds of the wetland environment - birds chirping, ducks splashing, insects buzzing and animals thriving. This unnatural silence is disturbing; the favourite flowers that used to litter this landscape are no longer visible, and the water that used to ripple continuously is perfectly still. The wetland is dead, except for this overpowering, hardy purple flower that has choked out all other vegetation and species. Purple loosestrife now controls this landscape.
Purple loosestrife is an exotic species that was introduced to North America from Europe during the early 1800's. Europeans sailing to North America would fill their ships' ballast with wet sand taken from the shores of Europe, a habitat where purple loosestrife thrived. Upon arrival in North America the ballast would be dumped overboard on the shoreline. By 1830 the plant was well established along the New England seaboard. Purple loosestrife seeds were also found in sheep and livestock feed that was imported from Europe during this period. This new organism was introduced to a new habitat free from its traditional parasites, predators and competitors; purple loosestrife thrived in the environmental conditions and by 1880 was rapidly spreading north and west through the canal and marine routes. Purple loosestrife stands also increased due to the importation of seeds and root stalks by horticulturists. It was introduced to many communities as an herb, an ornamental garden flower and a desirable honey plant.
One of the earliest reported studies of purple loosestrife as a problem in Canada was documented by Louis-Marie in 1944. He stated that purple loosestrife was invading the St. Lawrence flood plain pastures between Montreal and Quebec. At that time Louis-Marie conducted a study to find suitable control methods for purple loosestrife. His results indicated that repeated mowing, continuous grazing, deep discing and harrowing were effective in keeping the spread of purple loosestrife controlled on agricultural land. Since the 1940's purple loosestrife infestations have increased greatly and the plant is now a major problem threatening many wetland ecosystems across North America.
Figure 1 - Purple loosestrife flowers.
(Parker 1993)
Lythrum salicaria, commonly known as purple loosestrife, belongs to the Lythraceae family, which consists of 25 genera and 550 species worldwide. The genus Lythrum consists of thirty-five species, two of which are located in North America: Lythrum Purish, which is native to the continent, and the invasive purple loosestrife. Through cross breeding, purple loosestrife is quickly overtaking Lythrum Purish and causing a decrease in the native species. "The generic name comes from the Greek luthrum, blood, possibly in reference to the colour of the flowers or to one of it's herbal uses, as an astringent to stop the flow of blood." (Canadian Wildlife Federation 1993, 38) Purple loosestrife, an aggressive, competitive, invasive weed, often grows to the height of a human and when it is mature can be 1.5 metres in width. The stalk of the plant is square and woody and may grow to 50 centimeters in diameter. The perennial rootstock can give rise to 50 stems annually which produce smooth edged leaves on opposite sides of the stalk. Purple loosestrife flowers are long pink and purple spikes which bloom from June to September (Figure 1). One purple loosestrife plant alone is solid and hardy, but when this plant invades an area it creates "dense, impermeable stands which are unsuitable as cover, food or resting sites for a wide range of native wetland animals..." (Michigan Department of Natural Resources 1994). Due to the lack of predators which feed upon purple loosestrife, this dominant plant has an advantage when competing against most other native wetland species for food, sunlight and space. These advantages allow purple loosestrife to create dense, monotypic stands which reduce the size and diversity of native plant populations. Purple loosestrife can also grow on a range of substrates and under nutrient deficient conditions. It has the ability to regenerate quickly after cutting or damage and can withstand flooding once adult plants have been established. There are no native species that are as hardy as purple loosestrife; therefore, without competition and predators, the wetland ecosystem cannot control the spread of purple loosestrife.
Figure 2 - Purple loosestrife growing in a typical habitat.
(Parker 1993)
Purple loosestrife is now found worldwide in wet, marshy places, coastal areas, ditches and stream banks (see Figure 2). It is prevalent in most of Europe and Asia, the former USSR, the Middle East, North Africa, Tasmania, Australia and North America. It has not been found in cold Arctic regions. In North America purple loosestrife is located between the Canadian territories and north of the 35th parallel, with the exception of Montana. The most serious infestations are found in the wetlands of Southern Quebec and Ontario and along the Red and Assiniboine Rivers in Manitoba. Second to Manitoba, British Columbia has the next largest purple loosestrife infestation; weed populations are reported from Vancouver Island to the lower Fraser and Okanagan Rivers south of Penticton. In Saskatchewan and Alberta, small, isolated stands of purple loosestrife are reported, and the Atlantic Provinces are quickly being invaded. Currently the areas that are sensitive to new invasions are the salt and freshwater marshes in the Maritimes. (See Figure 3)
Figure 4 - Purple loosestrife seedling.
(Parker 1993)
Regardless of the location, one of the main reasons for the rapid infestations is the plant's prolific seed production and reproduction cycle. "It has been estimated that a mature plant can produce 2.7 million seeds per growing season" (DeClerck-Float 1992, 15). Purple loosestrife seeds are small and easily transported by water or by mud that attaches to the feet of birds or to off-road vehicles. The seeds remain dormant over the winter and germinate in late spring or early summer. They are capable of germinating either in mud or when submerged under water, providing the water temperature is between 15 and 20 °C and there are adequate light levels (see Figure 4). Through experiments performed by S.R.A. Shamsi and F.H. Whitehead, it has been determined that "prolonged seed dormancy may be possible, since seeds stored for three years in a refrigerator at 3 - 4 oC were still useable and had a germination rate of 80%" (DeClerck-Float 1992, 15). The production of purple loosestrife seeds and their exclusive characteristics allow the plants to develop large seed banks at a site, which is a factor that makes purple loosestrife so difficult to control. The plant has the ability to reproduce from the seed bank. Purple loosestrife can also spread vegetatively by adventitious shoots and roots from clipped or trampled plants. Any part of the plant that falls to the ground, even from a wheelbarrow, can develop into a plant. This shows the plant's ability to survive no matter what obstacles it faces, again making it very difficult to control. Purple loosestrife plants have three style lengths (short, mid, long) and three stamen lengths (short, mid, long). Pollination occurs between plants with the same style and stamen length. A purple loosestrife flower has one style length and two sets of stamens of different lengths, and therefore a plant is technically self-incompatible. "However, Ottenbriet (1991), found that the self - incompatibility system is not strict, as mid - styled plants showed a high degree of self fertility with themselves and other mid - styled plants." (DeClerck - Float 1992, 16) This shows that it is not safe to plant supposedly self-incompatible purple loosestrife; there is a risk of pollination which will lead to further distribution of the plant. This misconception is a problem because nurseries are selling these plants as garden flowers which reproduce with themselves or with other species from the loosestrife family, creating more invasive stands.
Purple loosestrife's hardy, competitive and reproductive characteristics make it a large environmental concern. The plant is threatening wetlands, decreasing waterfowl populations, clogging irrigation systems and becoming a threat to the fisheries. "Mosquin and Whiting (1992) regard purple loosestrife to be one of the five invasive alien plants that have had a major impact on natural ecosystems in Canada." (Canadian Wildlife Federation 1993, 41) Canadian wetlands are rapidly being overtaken by purple loosestrife; large stands of the plant displace native species that can't compete against this exotic species. The loss of native flora and fauna means the loss of habitat and food for wetland animals, and this destroys the well balanced wetland ecosystem. Across the prairies, sloughs are becoming increasingly infested with purple loosestrife, thus destroying the breeding ground of many North American waterfowl. This additional stress, combined with urbanization and pollution, could cause the extinction of North America's waterfowl population. The invasion of purple loosestrife across the Maritimes is causing extra labor for farmers as well as an increased cost because the plants are clogging the irrigation systems. In B.C. purple loosestrife is invading the salt water shores and is becoming a threat to the fisheries. The overpowering stands of purple loosestrife are increasing costs and frustrations for many industries across Canada.
On the contrary, bee keepers and horticulturists have found economic uses for purple loosestrife. Bee keepers favour purple loosestrife because the plant forms dense stands and produces large quantities of pollen in July and August. Purple loosestrife is one of the few plants producing large amounts of nectar during the late summer. The downfall to purple loosestrife honey is that it is ill tasting and greenish, although this can be diluted by the good nectar from other flowers. Canadian bee keepers do not want purple loosestrife to spread for fear of losing the nectar from the good flowers, but they also don't want to lose the large quantities of nectar obtained from purple loosestrife. Horticulturists favour purple loosestrife as a garden perennial in the prairie provinces. It is favoured because it is both showy and hardy and able to withstand the fluctuating climate. Horticulturists are finally realizing that the pros of purple loosestrife as a garden perennial are far outweighed by the cons of purple loosestrife as an exotic invader.
The most pressing question with regard to purple loosestrife right now is: how can we control it? Studies have been conducted since 1941 with the aim of finding effective control processes - one has still not been found. To gain control over purple loosestrife and to reduce its impact on the environment, three goals must be attained:
1) Eliminate the species from highly significant sites where a low infestation is present.
2) Eliminate the species in geographical areas where it is just beginning to establish itself.
3) Contain the plant in large sites in order to slow down its spread.
By achieving these goals the impact of purple loosestrife across Canada will be stabilized until an effective biological control agent is found. (Canadian Wildlife Federation 1993, 41)
There are three forms of control used on plant species: cultural control, chemical control and biological control. Cultural control involves manual labor such as mowing, cultivating, inundation, hand pulling, shearing, fire and flooding. Each method is moderately successful depending on the specific situation. Mowing, cultivating and inundation are not suitable control mechanisms for purple loosestrife in many natural areas because by destroying the exotic plant you also kill the struggling native species. In private areas which are overrun with purple loosestrife these methods will reduce the spread of seeds but will not kill the plants, and therefore they will return the following year. Hand pulling and shearing are only suitable for very limited infestations due to their labor intensive nature. For these methods to be effective all roots, stems, leaves and flowers must be removed and destroyed. Fire has proven to be an ineffective method of control because the purple loosestrife root crown is well protected below the surface, and the hot fire that is necessary to kill the crown cannot be created. Flooding as a method of control has proven ineffective against mature plants. Adult purple loosestrife plants can survive in water levels of 90 cm. Flooding does, however, affect immature plants, but the water levels must be extremely high and it appears to take several years to have an appreciable effect on the reduction. Unfortunately flooding will also have a serious effect on native flora and fauna. Cultural control is both labor intensive and not very productive.
Chemical controls for purple loosestrife have been tested in both Canada and the USA, but no herbicides have been accepted for use in Canada. In the USA, Rodeo and 2,4-D have been registered for use, but there is limited benefit compared to the high cost and temporary effectiveness. Canada has been testing triclopyr amine, a broad leaf herbicide that can be used for control of purple loosestrife. Researchers feel that it is an effective and safe product that can be used to keep purple loosestrife in check. The largest problem when using chemical controls is ensuring that the herbicide will not negatively affect the native species as well as purple loosestrife.
The final method of purple loosestrife control, and the most promising for the future, is biological control. This involves the introduction and management of selected natural enemies of purple loosestrife. It is a slow process and is not always efficient, depending on the circumstances. The results are often long term and the infested sites must be monitored for several years. Biological control agents affect weed populations indirectly by increasing the stress on the weeds, which may reduce their ability to compete with the native plants. Biological control of purple loosestrife was initially investigated by the International Institute of Biological Control (IIBC) in Europe. The USA contracted the institute to conduct a study of possible biological control agents that could be used to control purple loosestrife. (Canadian Wildlife Federation 1993, 42) As a result of this study three insects were approved for release in the USA in June of 1992, and at this time the insects were also released into field trials in Canada. These three insects are Hylobius transversovittatus, Galerucella calmariensis and Galerucella pusilla.
Hylobius transversovittatus is a root feeding weevil that is a parasite of purple loosestrife. The climate in Europe, where this insect is native, is very similar to the Canadian climate, making it easy for the weevil to adapt. The H. transversovittatus larvae mine the roots and damage the vascular system, which reduces seed production and germination. The adult weevils emerge in May or June and begin laying their eggs in the roots. The females continue laying their eggs until September, thus covering 2/3 of the growing season. Over a period of time the effect of the weevil will drastically reduce a purple loosestrife stand. "The damage caused by the feeding of seven larvae per plant was found to reduce seed germination by 50%." (DeClerk - Float 1992, 10) Similar to purple loosestrife, H. transversovittatus is easily adapted and can withstand prolonged periods of flooding. The larvae do not feed off the roots when the water levels are high; they go into diapause until the roots dry out and then resume feeding. This weevil has only one natural enemy, a mymarid egg parasite, but this enemy has little impact on the population. H. transversovittatus has been tested and results show that the insect will not have an impact on native species growing in Canada but will have a large impact on purple loosestrife. Feeding by the insects in high densities causes defoliation in mature plants, kills seedlings and destroys or prevents the formation of flower spikes. H. transversovittatus appears to be a very likely candidate as a biological control agent for purple loosestrife, but several years of trials will be necessary to determine its effectiveness. It could take up to ten years to show its full potential.
Galerucella calmariensis and Galerucella pusilla can be classified together because they are both leaf feeding beetles that have similar life histories, occupy the same habitat and affect purple loosestrife in the same manner. These two species are often found together in Northern Europe, with one of the species dominating destruction of the stand. G. calmariensis extends farther north than G. pusilla and will be better suited for Canada's northern sites of purple loosestrife. Both species are parasites which have good host finding capabilities. Females will move from one host to the next once a certain level of feeding damage has been reached; this guarantees the spread of the attack in large purple loosestrife stands. After being put through the same tests as H. transversovittatus, Galerucella calmariensis and Galerucella pusilla were found to be extremely host specific and do not pose a threat to native species in Canada. In Europe these beetles are more commonly found than H. transversovittatus. All three of these insects appear to be very promising in their control over purple loosestrife stands but, as mentioned earlier, it could take a few years to notice any progress. The idea of introducing another species to Canada's wetland ecosystem is not approved by all, given the purple loosestrife infestation incident itself. Many believe that tampering with nature is what caused the problems in the first place and that by letting nature run its course all will turn out for the best. Unfortunately this viewpoint cannot be supported for long. Canada is at a point right now where, without the biological control agents, purple loosestrife will destroy a lot of wetland and farmland. With biological control we can only hope that the ecosystems can be brought back under control.
Purple loosestrife is a very serious problem. Its rapid invasion is threatening wetlands, waterfowl and fisheries as well as the diversity of Canada's flora and fauna. If this plant is not brought under control quickly then the result of this exotic species being brought to Canada could be disastrous. The use of cultural and chemical control has not been effective, so we now rely on the success of biological control to stop the spread of this hardy invasive plant and to replenish the diversity of Canada's wetland ecosystem. As a country we must do everything we can to reduce the spread and growth of purple loosestrife. As a concerned Canadian you can report any local purple loosestrife stands, spread your knowledge about the problem, strongly discourage the planting of any new plants or the selling of the weed in nurseries, and join the Ontario Federation of Anglers and Hunters. By doing this you are donating money and supporting the tests that are being conducted. We must work together to remove the purple blanket that silences our wetlands.
f:\12000 essays\sciences (985)\Enviromental\Rabies.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What is rabies? Who gets rabies? Rabies is a viral disease of humans and other mammals. It is most common in carnivores. Rabies is sometimes called "hydrophobia", fear of water, because of the throat spasms it produces. Rabies is a potentially deadly disease.
There are many things you can do to protect yourself from rabies. The most important is to be certain your pets have up-to-date vaccinations. Pets can receive their first vaccination when they are three months old. After that, booster vaccinations must be given every one to three years, depending on your state and city laws and on the type of vaccine.
Most people associate rabies with dogs, cats, raccoons, skunks, wolves, and similar animals. The most common animals to have rabies are dogs, cats, and raccoons. Rabies cases in cats have outnumbered those in all other domestic animals every year since 1988, and there was a fifty-three percent increase in cat rabies between 1991 and 1992. Most of the feline cases have involved unvaccinated strays.
Even if your pets do not go outside, they should still be vaccinated. You cannot be sure your pet will never accidentally get out or that an infected animal will never get in. Avoid close contact with any wild animal; never feed, handle, pet, or take in wild animals. Rabid animals usually act abnormally, have foamy saliva around the mouth, and show a loss of hair or fur. A nocturnal animal may be out during the day. Rabid animals are often unusually bold and aggressive.
To keep wildlife away from your home, avoid leaving pet food outside and keep the lids on trash cans secure, or store the cans in a garage or shed. You can prevent wildlife from entering your home by sealing holes and screening chimneys. If a wild animal does get in, do not touch it; call your local animal-control officer or humane society and let them remove it.
The rabies virus can be transmitted in three ways: through the saliva of an infected animal, through its bite, and through contact with mucous membranes or breaks in the skin.
Symptoms develop ten to fifty days after exposure to the virus. In humans they usually begin with depression, restlessness, fatigue, and a fever. This is followed by a period of excitability, excessive salivation, and convulsions, especially painful throat spasms. The victim is unable to drink even though he or she is extremely thirsty. Death from paralysis and suffocation follows within about ten days. Once the symptoms of rabies have appeared, there is no effective treatment for the disease.
The first vaccine against rabies was developed in France during the 1880s by Louis Pasteur. Rabies cases in humans have since become rare in the United States and other developed countries, largely because of vaccination programs for domestic animals. People in high-risk occupations, such as veterinarians, forest-service workers, and health workers in developing countries, are also often vaccinated against the disease. In 1987 a less expensive, low-dose vaccine was introduced for wider use by campers, travelers, and others. It is given as a series of shots that is painful but works very well, and it is the latest type of vaccine available.
There are four things you can do if you are bitten by an animal that might have rabies. Wash any wounds thoroughly with warm, soapy water. Immediately afterward, call your doctor or go to an emergency room. Collect as much information about the animal as possible; if it is someone else's pet, find out whether its rabies vaccinations are up to date. Then report the incident to your local animal-control officer or health board. Very few rabies cases are reported each year, and most of those involve contact with wild animals. The wild animals most frequently associated with the spread of rabies are skunks, foxes, coyotes, raccoons, rabbits, bats, stray cats, squirrels, rats, and other small rodents.
Even so, people and animals still die of rabies every year. Fortunately, there are vaccines to help keep the virus from spreading and taking animal or human lives.
f:\12000 essays\sciences (985)\Enviromental\radon.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Radon
Radon is a naturally occurring radioactive element that can be found in soil, underground water, and outdoor air. The gas is odorless, tasteless, and colorless. Concentrations vary across the country depending on the types of rock found in the soil. Exposure to radon decay products over prolonged periods has been associated with an increased risk of lung cancer.(3) The EPA describes an elevated concentration as one at or above its suggested guideline of 4 pCi/L (picocuries per liter, the unit of radiation measure used for radon). Exposures below this level may still carry some risk of lung cancer, but reductions to much lower levels may be too difficult or even impossible to achieve.(4)
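To make the guideline concrete, the short sketch below (illustrative only; the function name and example readings are not taken from the cited guides) checks a measured concentration against the 4 pCi/L guideline and converts it to SI units, using the standard relation 1 pCi/L = 37 Bq/m3.

    # Illustrative check of a radon reading against the EPA guideline (example values assumed).
    EPA_GUIDELINE_PCI_L = 4.0          # EPA suggested action guideline, pCi/L
    BQ_PER_M3_PER_PCI_L = 37.0         # standard unit conversion: 1 pCi/L = 37 Bq/m3

    def assess_radon(measured_pci_l):
        bq_per_m3 = measured_pci_l * BQ_PER_M3_PER_PCI_L
        status = "at or above" if measured_pci_l >= EPA_GUIDELINE_PCI_L else "below"
        return ("%.1f pCi/L (%.0f Bq/m3) is %s the 4 pCi/L guideline"
                % (measured_pci_l, bq_per_m3, status))

    print(assess_radon(6.2))   # hypothetical reading above the guideline
    print(assess_radon(1.5))   # hypothetical reading below the guideline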
Radon enters buildings through exposed soil in crawl spaces, through cracks and openings in floors, and through below-grade walls and floors; soil gas is the primary source of elevated radon levels in buildings.(5) Outdoor air contains radon, but at extremely low concentrations, so it is not a health hazard. Some wells contain water with dissolved radon, which can be a hazard if the water is agitated or heated, allowing the gas to escape and raise the levels inside the building.(6)
Health Risk
The Surgeon General's office reports that indoor radon gas is a national health problem. The gas causes thousands of deaths every year.(7) These deaths result from lung cancer, which is caused by the radioactive particles that make up the gas.(8) The likelihood of getting lung cancer from radon depends on the concentration you are exposed to, the length of the exposure, and whether or not you smoke. The radioactive particles are inhaled when we breathe and become trapped in the lungs. Once in the lungs, they release small amounts of energy that can damage lung tissue, which in turn can cause cancer.(9)
According to the Surgeon General's office, radon is the second leading cause of lung cancer, with smoking being the first.(10) Smoking greatly increases the risk of getting lung cancer, and nonsmokers are far less likely than smokers to get lung cancer from radon.(11) Radon is a serious problem because a majority of the population spends most of its time indoors, which increases the length of exposure and the likelihood of lung cancer.(12)
Where Radon Originates
Radon is created by the radioactive decay of uranium found in rocks, soil, and water. Uranium and its decay products, including radon, are abundant and are constantly being generated.(13) Radon travels easily through rock and soil.(14) The gas is also found dissolved in water, due to decay in the soil or rock below.(15)
Radon in Water
The risk from radon in water is much lower than the risk from radon in air, because the water must be heated or agitated to release the gas. This can happen in a shower, while boiling water on a stove, or in a washing machine. Most public water supplies do not present a radon risk, because the water is aerated at the treatment site and the gas escapes into the atmosphere. Most water that contains hazardous amounts of radon comes from wells. Wells should be tested for radon if the building they supply contains hazardous amounts in the air. The testing procedures for water differ from those used for air.(16)
Water containing radon can usually be treated. The most effective approach is to remove radon from the water before it enters the home; this is called point-of-entry treatment. Water can also be treated at the tap, known as point-of-use treatment, but this is much less effective at removing the risk.(17)
Radon Entry
Radon travels through the ground and into the air, which allows the gas to enter buildings and homes easily, and it can enter in many ways. Cracks in concrete slabs allow the gas to enter through the floor, and it also enters through pores and cracks in concrete foundations. Faulty wall-to-floor joints allow entry as well. Exposed soil produces more radon as uranium decays within it. A weeping drain tile that empties into an open sump lets radon enter the home more easily. Loose pipe fittings leave enough of an opening for the gas to pass through. Open tops of block walls let the gas move up from the foundation and escape into the living space. Certain building materials, such as rock used in the interior construction of fireplaces, also release the gas. Finally, domestic use of well water allows the gas to enter through showers and other processes that agitate the water.
Testing
The EPA reports that radon has been found in homes all across the United States.(18) Testing is the essential key to knowing whether a home is at risk from radon.(19) Special equipment must be used to test for radon,(20) and a number of different testing devices are on the market today. Passive devices require no power to operate; they include charcoal canisters, alpha-track detectors, and charcoal liquid scintillation devices. All of these are relatively simple and can be purchased at hardware stores. They are exposed to the air in the building for a specified length of time and then sent to a processing laboratory for analysis.(21) Active devices are test equipment that requires power to operate. These devices monitor continuously, recording the amount of radon decaying in the building's air. This type of testing is more costly because it requires a professional as well as expensive equipment.(22) Testing can be either long term or short term. Long-term tests run for more than ninety days, and alpha-track detectors are most commonly used for them. The most common short-term tests use charcoal canisters and continuous monitors.(23)
Reducing Radon Levels
There are a number of methods that can be used to reduce the amount of radon entering a building. One is soil suction, which draws the radon from below the building and vents it to the atmosphere, where it is quickly diluted. The most common method is active subslab suction, which uses suction pipes inserted through the floor slab into the soil beneath it; a fan pulls the gas out from below the house and vents it to the atmosphere. Passive subslab suction works the same way except that it relies on natural air currents in place of the fan. Drain tiles can be used to direct water away from the foundation. Another method is sump-hole suction, used in basements that have a sump pump: by capping the sump, the pump can continue to drain water while the sump serves as the location for a radon suction device. Ventilation is another popular method of removing the gas; sometimes just opening the basement windows is enough, and other times a fan may be required. Sealing cracks in the foundation also helps keep some of the gas out, and it reduces the loss of heated or cooled air. Finally, heat-recovery ventilation increases air circulation while using the heated or cooled air being exhausted to warm or cool the incoming fresh air.(24)
Conclusion
In conclusion, radon causes many problems. According to the Surgeon General's office, it is the second leading cause of lung cancer.(25) This is due to radioactive particles decaying in the lungs and releasing energy that can destroy tissue and lead to cancer. Radon is found almost everywhere, so it must be dealt with. Common ways to reduce the amount of gas entering a home are sealing cracks and ventilating the building. Because the gas is colorless and odorless, special testing equipment has been designed to monitor it. Testing should be done by homeowners and business owners who are concerned about the safety of the occupants. Through testing and corrective measures, radon can be dealt with effectively.
Citations
1. Radon Reduction in New Construction. Washington: GPO, March 1993.
2. Home Buyer's and Sellers Guide to Radon. Washington: GPO, March 1993.
3. Murphy, James. "The Colorless, Odorless Killer." Time, July 1985, p. 72.
4. Ibid., p. 21.
5. Consumers Guide to Radon Reduction. Washington: GPO, August 1992, p. 4.
6. Ibid., p. 5.
7. A Guide to Radon. Washington: GPO, September 1993, p. 14.
8. Ibid., p. 9.
9. Ibid., p. 15.
10. Ibid., p. 3.
11. Ibid., p. 3.
12. Ibid., p. 5.
13. Ibid., p. 6.
14. Ibid., p. 13.
15. Ibid., p. 7.
16. Ibid., p. 2.
17. Ibid., p. 2.
18. Murphy, James. "The Colorless, Odorless Killer." Time, July 1985, p. 72.
19. A Guide to Radon. Washington: GPO, September 1993, p. 14.
20. Ibid., p. 9.
21. Ibid., p. 19.
22. Ibid., p. 19.
23. Ibid., p. 6.
24. Ibid., p. 17.
25. Ibid., p. 2.
Bibliography
1. A Guide to Radon. Washington: GPO, September 1993.
2. Consumers Guide to Radon Reduction. Washington: GPO, August 1992.
3. Home Buyer's and Sellers Guide to Radon. Washington: GPO, March 1993.
4. Murphy, James. "The Colorless, Odorless Killer." Time, July 1985.
5. Radon Reduction in New Construction. Washington: GPO, March 1993.
f:\12000 essays\sciences (985)\Enviromental\Rainforest.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The destruction of the rainforest is a problem that the people of the world cannot continue to ignore. Rainforests once covered 14 percent of the Earth's land, yet that figure has dropped to only about 6 percent (http://www.ran.org/ran/info_center/index.html). Rainforests provide the people of the world with many necessities, some of which would no longer be available if rainforests did not exist. In the last 50 years, rainforests have declined at the terrifying rate of 150 acres per minute, or roughly 75 million acres per year (http://www.ran.org/ran/info_center/index.html). People must open their eyes to the tragedy that will inevitably occur if the citizens of the world do not recognize the seriousness of this problem.
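As a rough check on the quoted rate (an illustrative calculation, not part of the cited source), converting the per-minute figure to an annual one gives a number in the same range as the 75 million acres cited above:

    # Illustrative arithmetic: convert a per-minute loss rate into an annual total.
    ACRES_PER_MINUTE = 150
    MINUTES_PER_YEAR = 60 * 24 * 365          # 525,600 minutes in a year

    acres_per_year = ACRES_PER_MINUTE * MINUTES_PER_YEAR
    print("%d acres/minute is about %.1f million acres/year"
          % (ACRES_PER_MINUTE, acres_per_year / 1e6))
    # prints roughly 78.8 million acres/year, on the order of the 75 million quoted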
To better understand the importance of the rainforest, one must know what a rainforest actually is. The two main types are temperate and tropical. Tropical rainforests are located in Central and South America, Africa, Southeast Asia, and other areas where temperatures stay above 80 degrees Fahrenheit year round. They can be found in 85 countries around the world; however, 90 percent of them are concentrated in fifteen countries, each containing over ten million hectares. Tropical rainforests receive 160 to 400 inches of rain each year. Although these dense, damp forests cover just 5 percent of the Earth's surface, they provide homes for between 50 and 90 percent of the Earth's plants and animals (http://www.davesite.com/rainforests/review1.shtml).
Tropical rainforests consist of three distinct layers: the forest floor, the understory, and the canopy. The forest floor has very poor soil, mainly because the trees do not allow ample sunlight to reach the ground. Since only one to two percent of the light at the top of the canopy reaches the floor below, almost no photosynthesis occurs there. On top of the soil lies a thin layer of the remains of millions of dead trees, plants, and animals, which are quickly broken down by the countless organisms on the floor (Nichol 45). The forest floor is home to a variety of insects as well as larger mammals such as gorillas and jaguars. The understory shelters smaller mammals such as anteaters, lemurs, and tree kangaroos, along with small trees and numerous shrubs. The top layer, the canopy, is made up of the crowns of trees that can grow to over 200 feet in height. Here the trees receive the sunlight needed for photosynthesis, which is crucial for the survival of the forest as a whole. Many tropical birds, monkeys, apes, snakes, and other animals live in the canopy (http://www.davesite.com/rainforests/review1.shtml).
Temperate rainforests are located along the Pacific coast of Canada and the United States, and in New Zealand, Tasmania, Chile, Ireland, Scotland, and Norway. Most temperate rainforests are much younger than tropical rainforests, being less than 10,000 years old. They differ from tropical rainforests in that their soil is much richer in nutrients. Temperate rainforests are also much scarcer than tropical rainforests (http://www.davesite.com/rainforests/review1.shtml).
The rainforests of the world are home to just about every group of animals known, and it would be impossible to give recognition to them all. The only animals that appear to be few in number are large mammals. The largest animal of the rainforest is thought to be the okapi, "a shy, elusive beast from west Africa" (Nichol 56). Gorillas, apes, the orang-utan of the Far East, gibbons, and chimpanzees, which can grow to the size of a human, are also among the larger animals in the forest. A wide variety of monkeys, including the tiniest monkey in the world, the pygmy marmoset, live among the trees of the South American rainforests (Nichol 61).
One of the rarest primates in the world, the golden lion tamarin, lives in a very small portion of the rainforest in Brazil. These breathtakingly beautiful little monkeys resemble golden toys and it is believed that only 150 survive in the wild. Without the rainforest, these precious treasures would be lost forever (Nichol 61).
Over 100 types of birds, including Spix's macaw, the hoatzin, and numerous varieties of parrots, would go extinct if the rainforests disappeared. Many birds of the rainforest appear seasonally, when the trees begin to bud. Other rare animals of the rainforest include the Javan rhino, the capybara, and the giraffe stag beetle (Nichol 71).
The rainforest has a larger diversity of plants than any other area on Earth. For example, "a single hectare in Kenya's Kakamega Forest may host between 100 and 150 different tree species, compared to only about 10 different species in a hectare of the forest of North America" (http://www.davesite.com/rainforests/review3.shtml). Many of these plants appear nowhere else in the world. A small sample of these species includes the passion flower, the rambutan, the heliconia flower, and an abundance of hardwood trees.
For thousands of years, indigenous peoples have called the rainforest home. They are very knowledgeable about the rainforest and the secrets it holds. They have taught the people of the world how to find and use wild plants and how to farm small crops on the poor soil of the rainforest floor. There are said to be more than a thousand such groups throughout the world, many of which are close to extinction. If these peoples disappear, the secrets of the rainforests may remain a mystery forever (http://www.stevensonpress.com/intro.html).
Many of the plants in tropical rainforests are used as medicines, both by people in the forest and by hospitals throughout the world. One-fourth of the drugs sold in the United States contain ingredients that come from rainforests (http://www.ran.org/ran/). From something as important as a treatment to help fight heart disease to an over-the-counter drug such as aspirin, every medicine that comes from the rainforest serves a significant purpose for the people of the world.
One of the best-known medicines that comes from the rainforest is quinine. For many years, quinine was the only treatment for malaria. Another plant that aided in the fight against a deadly disease is the Madagascar periwinkle. Two compounds from this plant were found to be useful in the treatment of leukemia, and as a result the survival rate of leukemia victims has risen from one in five to four in five (Nichol 78-79).
On a global scale, the rainforests are extremely important because they help control the Earth's climate. The plants of the forest store carbon in their roots, stems, branches, and leaves, which lessens the greenhouse effect and, consequently, global warming. When rain falls in the rainforest, the high temperatures make the water evaporate back into the air, recycling the water. In addition, the clouds that cover the rainforests around the equator reflect sunlight, which keeps the rainforest from getting too hot (http://www.stevensonpress.com/intro.html).
Destroying the rainforest could have devastating results. The people who live in the rainforests would be forced to move into camps or cities, where many would ultimately die from the new diseases that city life would bring, diseases not found in the rainforest. If these peoples ceased to exist, their cultures could be lost forever (http://www.ran.org/ran/).
The destruction of the rainforest could also cause an increase in the greenhouse effect. The carbon dioxide that the plants of the rainforest had been storing would be released and cause the temperature of the Earth to rise and the ice caps to melt. This would cause major flooding around the world.
Another serious consequence of cutting down the rainforest is the effect on the forest floor. About 80 percent of the rainforest's nutrients are held in its trees and plants, which means only about 20 percent remains in the soil. When leaves fall to the forest floor, their nutrients are quickly recycled back into the plants and trees. When a rainforest is clear-cut, this process is dramatically disrupted: with no trees to block it, the sun dries up the soil, which is then blown away by the wind, making it nearly impossible for the rainforest to grow back (http://www.stevensonpress.com/intro.html).
One of the most devastating effects of cutting down the rainforests would be the extinction of a tremendous number of the plants and animals that live there. The remedies that have prevented many deaths over the years would also no longer exist, because the plants from which they originated would be gone.
Although it should be obvious that the rainforest is better left alone, some people insist on destroying it. The Forest Alliance of British Columbia accounted for this by saying, "The global population has more than tripled this century, and will continue to grow for the next 50 years, particularly in developing countries. World population is expected to reach ten billion by 2050. Because the number of people living on the planet increases every year, the number of forest products needed also increases, forcing temperate and tropical rainforests to be cut down" (http://www.davesite.com/rainforests/review4.shtml).
Farming in the rainforest is very hard because of the poor soil, but it is still done because the land is cheap. Because of the lack of nutrients, farmers cannot use the same piece of land year after year; many simply move on to a new piece of land, destroying the forest little by little. Ranchers follow the same pattern, using one piece of land to raise cattle and then clearing another. "During the 1980s, about 16.9 million hectares of tropical rainforest was cut down and replaced with farms and grazing land for cattle" (http://www.mtc.com.my/lib/formal/fact4/overview.htm).
Another reason the rainforests are being destroyed is the logging industry. Trees from the rainforest are used for building houses, making furniture, and providing pulp for paper products. Many corporations have convinced countries that contain rainforests that allowing logging would improve their economies, and many of those economies now depend on the logging industry (http://www.davesite.com/rainforests/review4.shtml).
Some companies, such as Occidental Petroleum, try to bribe and trick the natives of the rainforest into giving up their land. This oil company tried, unsuccessfully, to force the people of the rainforest to sign away rights to their land, which would have violated Ecuadorian and international law protecting indigenous people. This will hopefully set an example for companies around the world that want to cut down the precious rainforest (http://www.davesite.com/rainforests/review4.shtml).
Although the destruction of the rainforest may seem like a problem that only world leaders can attack, it is definitely something an individual can protest. Many people have boycotted fast-food restaurants that serve hamburgers made from cattle raised on rainforest land; if there is no demand, companies will stop raising cattle on land cleared from a rainforest. An individual can also help by not buying furniture made from rosewood, mahogany, ebony, or teak, materials that most likely come from the rainforest. In many cases, people have taken it upon themselves to adopt acres of the rainforest. In 1996 the Tropical Rainforest Coalition stated that it would cost only forty-five dollars to "adopt" one acre of the rainforest; this money funds land acquisition, legal fees, and security costs, ensuring that the adopted land is protected (http://www.davesite.com/rainforests/review5.shtml).
In short, the destruction of the rainforest is a problem that the people of the world cannot continue to ignore. Land that was once 14 percent covered by rainforest is now only about 6 percent covered, and the forests continue to disappear at roughly 150 acres per minute (http://www.ran.org/ran/info_center/index.html). Rainforests provide the people of the world with many necessities, some of which would no longer be available if the forests were gone. People must open their eyes to the tragedy that will inevitably occur if the citizens of the world do not recognize the seriousness of this problem.
THE RAINFOREST
by
Marisa Rauchway
Honors Earth Science
Mr. Preziosi
February 3, 1997
BIBLIOGRAPHY
http://www.ran.org/ran/info_center/index.html
http://www.davesite.com/rainforests/review1.shtml
http://www.davesite.com/rainforests/review2.shtml
http://www.davesite.com/rainforests/review3.shtml
http://www.davesite.com/rainforests/review4.shtml
http://www.davesite.com/rainforests/review5.shtml
http://www.stevensonpress.com/intro.html
http://www.ran.org/ran
http://www.mtc.com.my/lib/formal/fact4/overview.htm
Nichol, John. The Mighty Rainforest. The Netherlands: David and Charles Printing, 1990.
f:\12000 essays\sciences (985)\Enviromental\recycled water.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Recommendation For Recycling Water in Florida
Prepared for: Tom Petty, Chairman Of The Board
Department Of Environmental Regulation Board
by:
Environmental Specialist, Pasco County Florida
November 29, 1996
Contents
Abstract
Executive Summary
Introduction
Methods
Results
Basic background information on water reuse in Florida
Reclaiming Waste Water in Florida
Uses for reclaimed or reused water
Conclusions
Recommendations
References
Abstract
"Recommendation for Recycling Water in a Florida Pilot Plant"
by
The water shortage problem has affected all of us in one way or another. Whether through mandatory restrictions, the increased price of water, or the ever more frequent occurrence of sinkholes, the evidence of a water shortage is everywhere. Since we need water to survive, and there are no alternatives to support life on this planet, we must find a way to keep up with our ever-increasing water demand.
This report presents the water shortage problem that is occurring in Florida. It will familiarize you with the problem and describe the reuse practices currently employed in Florida. It also explains the proposed procedure, gives a short background on it, and makes a recommendation that includes the site and the costs involved. I recommend that the recycling project be funded so that the pilot plant can help meet the ever-increasing demand for water in Florida.
Executive Summary
The water shortage problem affects us all in one way or another. Whether through mandatory restrictions, the increased price of water, or the ever more frequent occurrence of sinkholes, the evidence of a water shortage is everywhere. Since we need water to survive, and there are no alternatives to support life on this planet, we must find a way to keep up with our expanding water demand.
I feel that the only viable option is to recycle the water we are using. By recycling the water, we will be able to lower the price of water, stop new sinkholes from forming, and ease the mandatory restrictions imposed by the water shortage.
The research I completed and the information I gathered show that $50,000 would cover all the expenses needed to set up a pilot plant, including the labor, which will be done in-house.
The $50,000 required will be recovered in less than a year, and since the project will also satisfy our voracious appetite for water, I feel it is a viable option. The plant could be operational within three months of approval of the funds. I feel this option is both economically and environmentally feasible, and I would like to get started as soon as possible.
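As a rough illustration of the payback claim (the monthly savings figure below is purely an assumption made for the sake of the example; this report does not state one), a simple payback calculation would look like this:

    # Hypothetical payback sketch -- the savings figure is an assumption, not from this report.
    CAPITAL_COST = 50_000        # one-time cost of modifying the plant, in dollars
    MONTHLY_SAVINGS = 4_500      # assumed monthly reduction in pumping/treatment costs, in dollars

    payback_months = CAPITAL_COST / MONTHLY_SAVINGS
    print("Payback period: %.1f months" % payback_months)
    # any assumed savings above roughly $4,170/month keeps the payback under one year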
Introduction
Water, our most precious resource, is in increasingly short supply. With water use increasing every day here in Florida, will there be enough water for everybody? We live in a state into which people migrate every day, drawn by the desirable climate and recreation options. With this influx increasing at an alarming rate, where will we get the water to supply the demand? Clearly, at the present rate of use the water table is dropping. As we see more and more sinkholes caused by the overpumping of the water table, we realize that another alternative must be developed. This completion report will update you on the progress of the water recycling option at our Pasco County test facility, the Moon Lake plant.
We use water every day and in many ways: to take a shower, brush our teeth, water our lawns, wash our laundry and cars, or simply to support our very existence. Clearly we cannot do without water, and there simply is not enough to go around. One alternative is to recycle the water. We already treat our wastewater with processes that produce water that is 99.5% pure. If this water were sent to a water treatment plant to be processed along with the water already being treated, there would be plenty of water available. It could be used as potable water for drinking or cooking, or for laundry or irrigation. The reclaimed water could also be reinjected (by deep-well injection) into the aquifer to offset the amount being pumped every day.
Enclosed is a flow chart through a wastewater plant and a water plant already in use. Little or no modification is required to accomplish water recycling. Once the water completes treatment at the wastewater facility, it would be rerouted to the head, or beginning, of the water treatment plant. At this point, we have completed a flow chart designed for your plant and a brief estimate of the costs involved.
The facilities already used to process the water we drink could be reused with little or, in some cases, no modification. This would alleviate our water shortage problems both now and for future generations. With reclaimed water we would not only conserve existing supplies but probably drop the cost of water below its current level. According to our estimates, the changes to the Moon Lake Water Treatment Plant will cost approximately $50,000; this includes labor, which will be done in-house.
The scenario is that the effluent leaving the wastewater plant will be sent to the headworks of the water plant, complete the journey through the water treatment plant, and be sent out with the other potable water. At present, the water leaving the wastewater plant is simply used for irrigation or dumped into drying ponds. With this new technology that wasted water can be used for drinking water, saving both our resources and the money now being spent pumping water out of the ground. This approach has already been in use for some time in New York; we observed excellent results at the Westbury plant we inspected and expect to achieve equally successful results at the Moon Lake plant. This should alleviate the water shortage and also bring down the cost of producing potable water in the future.
Methods
To carry out this project, I performed the following tasks:
1. Estimated the approximate cost of recycling wastewater. The estimate includes labor and materials and, since no additional land is required, the $50,000 figure should cover all expenses.
2. Picked out the site for the project and prepared a flow chart, which is attached, to give you an overall idea of what to expect.
3. Solicited and received prices of the materials required.
4. Upon your approval of the recycling option, we will draw up blueprints and lay out the floor plans for the expansion required to recycle water. Once the funds have been made available, this will be carried out immediately, and we can go over the blueprints to see whether they meet your approval.
Results
First I will provide basic background on the feasibility of water recycling and the progress already made in the state of Florida. Then I will propose the next step: instead of using the recycled water for irrigation only, I propose that the water be used for drinking as well.
Basic background information on water reuse in Florida
Reclaiming Waste Water in Florida
As recently as the mid-1960s, secondary treatment and surface water discharges were considered the norm for Florida's wastewater treatment plants. As the population doubled between 1950 and 1960, and again between 1960 and 1980, Florida built more treatment plants to keep up. In 1966 there were nearly 600 treatment plants in Florida; by 1986 this had increased to 4,250, and by 1993 it had stabilized back down to about 3,500. The vast majority are small, with about 80% having a capacity of less than 0.1 MGD (million gallons per day). Collectively, they represent only about 3% of the total permitted capacity of all domestic wastewater facilities in the state. This can be a problem, since it is usually economically unfeasible for these small plants to provide any sort of water reuse. Another problem is that Florida's warm, slow-moving streams and sensitive lake and estuarine systems require tighter treatment standards. This has led to increased interest in land application of treated wastewater and in reuse technologies, both to clean up the wastewater effluent and to find an economically suitable use for it.
The first reuse projects were created for Tallahassee and St. Petersburg. These have significantly influenced reuse in Florida and paved the way for today's multitude of reuse projects. Tallahassee began testing spray irrigation systems in 1961; this has evolved into a 2,000-acre system for farmland. St. Petersburg implemented an urban reuse system in the late 1970s, in which reclaimed water was used to irrigate residential properties, golf courses, parks, schools, and other landscaped areas. The experimental work conducted by the State Virologist for the St. Petersburg project serves as the basis for Florida's high-level reuse disinfection criteria. In the 1980s, the CONSERV II citrus irrigation project was implemented in portions of Orlando and Orange County. Project APRICOT, an urban and residential irrigation project in Altamonte Springs (near Orlando), and the Orlando wetlands project are among the more recent water reuse projects.
In 1987, the five Water Management Districts (WMDs) of Florida established the Water Resource Caution Areas (WRCAs). These are areas that have existing or projected (20-year) water resource problems. They collectively cover the eastern and southern halves of Florida (extending well north of Tampa), about two-thirds of the state in all. State legislation now requires the preparation of reuse feasibility studies for treatment facilities, and the "Water Policy" requires the use of reclaimed water within the WRCAs unless the use of reclaimed water is not economically, environmentally, or technically feasible.
Florida's antidegradation policy, which is contained in permitting and surface water quality rules, applies to all proposed new or expanding surface water discharges. It requires a demonstration that the proposed discharge is clearly in the public interest. As part of the public interest test, the applicant must evaluate the feasibility of reuse. If reuse is determined to be feasible, it is preferred over surface water discharge or other means of disposal.
Chapter 62-610 of the Florida Administrative Code (FAC) contains the reuse program's detailed rules for the reuse of reclaimed water. It regulates slow-rate land application (irrigation), rapid-rate land application systems (rapid infiltration basins), absorption fields (a form of rapid-rate system involving subsurface placement of reclaimed water), and other land application systems. Part III of the chapter deals with irrigation of public access areas (golf courses, parks, schools, and other landscaped areas), residential properties, and edible food crops. Other urban uses of reclaimed water, such as toilet flushing, aesthetic uses, fire protection, and construction dust control, are also regulated by Part III.
The WMD for the southern region of Florida stated that in 1995, six percent of the 243 individual water use permits issued included reuse. All of the water use applicants were required to evaluate the feasibility of reuse. Nearly 75% of the 163 wastewater treatment plants with a capacity greater than 100,000 gpd practiced reuse for all or part of their disposition of reclaimed water. They collectively treated 772 MGD of domestic wastewater, of which 112 MGD (about 15%) was reused. The number could have been higher, but 35% of the total wastewater treated contained excessive amounts of salts and was unsuitable for reuse. Most of this is due to infiltration of the sewers by saltwater canals, and repairs do not appear to be planned for the near future.
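The roughly 15% reuse share quoted above follows directly from the reported volumes, as this small check shows (illustrative only):

    # Check the reported reuse share: 112 MGD reused out of 772 MGD treated.
    treated_mgd = 772
    reused_mgd = 112
    print("Reuse share: %.1f%%" % (100 * reused_mgd / treated_mgd))
    # prints about 14.5%, i.e. the "about 15%" figure cited above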
The Southwest Florida Water Management District (SWFWMD) runs two funding assistance programs. The Cooperative Funding Program will fund up to 50% of the cost of design and construction, including pumping, storage, and transmission facilities, as well as reuse master plans; a total of 90 of these projects have been budgeted through Fiscal Year 1996. The New Water Sources Initiative Program provides funding for alternative water supply projects; nine of the sixteen current projects use reclaimed wastewater or stormwater.
In the SWFWMD region, over half of the 180 largest wastewater plants supplied 104 MGD of reclaimed water, 33% of the total volume of wastewater generated in the district. In some areas of the SWFWMD, the demand for reclaimed water now exceeds the available supply.
With the Department of Environmental Protection (DEP), the five WMDs, and the Florida Public Service Commission (PSC) all playing roles in the reuse program, some coordination is needed. This is provided by the Reuse Coordinating Committee, which is chaired by DEP's Reuse Coordinator and consists of representatives from the DEP, the WMDs, and the PSC. The committee meets regularly to coordinate the many reuse activities. In 1993, the committee published "Reuse Conventions," which included an overview of the reuse program, made recommendations for increasing program effectiveness, and established standard terminology and procedures to be used by the members in their efforts to encourage and promote reuse.
Wastewater reuse is becoming very popular in Florida. The reuse capacity of the state's wastewater plants is projected to increase to about 1,390 MGD, up from the roughly 850 MGD reported in 1995. What appears to be missing is the infrastructure needed to transport the reclaimed water. Building it will cost a great deal of money, but it will be a necessity in the future, especially for South Florida (Florida Water Resource Journal 32-35).
Uses for reclaimed or reused water
As you can see, reused water has many irrigation and aesthetic uses. I would like to take these uses one step further, to a potable drinking source. By taking the effluent from the wastewater plant and recycling it to the headworks of the water treatment plant already in use, we can reuse the water we have been discarding as non-drinkable. The water treatment plants already in use are capable of producing drinking water from the waste effluent with little or no modification. The wastewater is already being reused elsewhere, and now it is time to look to this vast supply of usable water as a new source of drinking water.
Conclusions
Obviously, we do not have enough water available to meet the ever-increasing demand. The most economically and environmentally sound choice, therefore, is to reuse the water readily available to us. We have the technology to make this a viable option, and I feel we should pursue it. This would almost completely alleviate any water shortage we have, since the water we use would be recycled back into drinking water, relieving the pressure to pump more and more water from an already overused aquifer.
Recommendation
I recommend that funds be made available to put the pilot plant into effect and allow us to take the next step in water reuse in Florida. The new plant will drastically reduce the amount of water now being pumped from the ground, thereby reducing sinkholes and alleviating the water shortage problem. I feel the small investment is more than worthwhile and will be recouped within a year. I would like to start this project, bring this new technology to light, and begin a new generation of water treatment.
References
Young, Harley and David York (1996, November). "Reclaimed Water Reuse in Florida
and the South Gulf Coast." Florida Water Resource Journal, pp. 32-35.
f:\12000 essays\sciences (985)\Enviromental\Reforestation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The purpose of this report is to inform the reader about the concerns and facts involved in
reforestation. Reforestation began in Ontario after World War II, when professional foresters
were assigned to areas and became responsible for their well-being. Under the Crown Timber Act,
long-term management plans were prepared, and then the many steps needed to rebuild a forest began.
Included in this report is information on the effects of cutting and replanting, such as carbon
dioxide release and global warming. Following this are methods for planning a forest and how they
are carried out before planting begins.
There are many reasons why forests are cut down. One is economic benefit, for furniture and home
building. Another argument holds that "the United States could help slow the atmospheric
accumulation of carbon dioxide by replacing old-growth forests with faster-growing young trees".
A new study of young and old forests shows that this is in fact not true. Loggers have claimed
that young trees take up carbon dioxide faster than old trees, and this may seem true, but one
point is being overlooked: the older, larger trees store far more carbon than a young tree could.
By cutting and burning these seasoned trees, that carbon is released back into the atmosphere as
CO2. These releases add up in our surroundings, only intensifying global warming. Although this
shows the harm of cutting and burning old forests, new trees must still be planted for the long
term, not grown for a few years only to be cut down again.
There are many methods for planning a forest. The simplest method of replanting a forest is to
leave it to nature. A suitable seed bed in which trees will readily take root is integral to
successful regeneration. Reducing competition by eliminating grass, weeds, or shrubs is another
requirement in securing a new crop of trees, which will then sprout to produce seedlings. Even
though the weeds are eliminated at first, they grow back, and because of this poor-quality trees
may result. Another method is to create a planned forest, in which new conifers are grown from
seed in a special nursery. Seeding is a reforestation technique used mainly in the Boreal forest
area, where fire or logging tends to leave few or no seeds for regrowth; in specific cases,
Ministry staff seed the area with treated tree seeds. Planting follows. In many cases, planting
is the only means of initiating a new forest, and up to 80,000,000 trees are planted annually in
Ontario on Crown and private land. Immature forests usually have to be tended: once established,
a new crop needs intermittent care for the next 60 to 100 years. This means continuing protection
from fires, disease, and insects, and routine thinning to focus growth on selected crop trees.
Before a forest can be grown, certain procedures must take place. Collecting and processing seeds
is one of them. Tree flowers fertilized by wind or insects produce seed within about one to two
years. Seed collecting in the woods must be timed to the periodically occurring good seed years.
Angus, near Barrie, is where all forest tree seed collection is co-ordinated. The seed stock can
be valued at up to $500,000, usually around 3 billion seeds from 59 tree classes.
In summary, trees are very valuable to the human race, both economically and for our health.
Without trees the environment could worsen to the point where we would be living in one large
desert. We must remember that forests do not grow back as easily as they once did because of
fires and other disasters. This is why many forests are planned and cared for. Most of us will
never know how they turn out, because a forest needs anywhere from 60 to 100 years or more to
grow completely.
There are many reasons why we should pursue reforestation, the main one being that we need forests
to live. Without forests, or plants of any kind, the carbon cycle cannot function. There are not
many arguments against reforestation, but there can be disputes over land use between large
business companies and the Ministry.
I feel the replanting of forests is crucial to the human race. The Earth depends on many cycles in
which one organism depends on another. We exhale carbon dioxide, which the trees take in, while
they give off the vital oxygen we need.
In closing, we live in an age of technology that leaves the past behind, and with the past we risk
forgetting our forests; we must make sure this does not happen.
f:\12000 essays\sciences (985)\Enviromental\Role of Government Intervention in Environmental Issues.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Paper #1: Role of Government Intervention in Environmental Issues
In environmental cases, a policy framework is sometimes more
effective when there is less government intervention. As the level of
government intervention diminishes, corporations gain more flexibility
to achieve efficiency. Furthermore, the traditional command-and-control
approach has proven to be costly, bureaucratic, and often inefficient.
It is important to recognize that there are numerous benefits to be
gained by both policy makers and industries if a policy framework is
based on market forces. Some government intervention is still needed,
but it should be as minimal as possible.
I have chosen to examine the article from the New York Times
entitled "U.S. Seeking Options of Pollution Rules". Although pollution is
detrimental to our environment, one has to take into account that it is
almost impossible to prevent pollution entirely. Doing so is scientifically
impossible, and it would have a severely negative economic impact on
industry. So the core issue becomes that, no matter what, there will
always be pollution as long as these industries exist. We should therefore
focus on how to minimize pollution while still maintaining an efficient
market system, and on how to accomplish this so that sustainable growth
and development can take place. There is definitely a need for some form
of government intervention to enforce and monitor this, because there is
always an element of equality that has to be enforced in cases such as
this. For instance, larger corporations may have an advantage over smaller
ones, since they have stronger influence on politicians and lobbyists. So
the government's role should be to ensure that all industries, regardless
of size or power, have an equal opportunity to benefit from this type of
approach. In other words, the government should simply be a "watchdog":
it should monitor the distribution and transaction of the permits so that
they are carried out in an appropriate manner.
The case of the Minnesota Mining & Manufacturing Corporation (3M) is a
classic example of the tradable-permit approach. Under this model,
corporations are able to buy, sell, and trade permits that legally allow
emissions. Many economists favor this approach because it also provides
incentives for technical improvement. The aggregate effect is that most
industries try to maximize their profits by developing new techniques to
reduce their level of emissions, which in turn reduces the cost they would
have to pay for polluting. Norm Miller also endorses this approach,
stating that "performance-based approaches are more efficient, both for
industry and for government".
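To make the cost-saving incentive concrete, the sketch below (purely illustrative; the firms, per-ton costs, and reduction target are hypothetical, and constant per-ton abatement costs are assumed for simplicity) compares a uniform mandated cut with the same total reduction reallocated to the cheaper abater, which is what permit trading accomplishes:

    # Illustrative cap-and-trade arithmetic with hypothetical numbers.
    abatement_cost = {"firm_A": 50.0, "firm_B": 20.0}   # assumed cost per ton abated ($)
    total_reduction = 100                                # tons of emissions to cut overall

    # Command and control: each firm cuts half the total, regardless of its cost.
    uniform_cost = sum(cost * total_reduction / 2 for cost in abatement_cost.values())

    # Tradable permits: the cheaper abater sells permits and does all of the cutting.
    cheapest = min(abatement_cost, key=abatement_cost.get)
    trading_cost = abatement_cost[cheapest] * total_reduction

    print("Uniform cuts: $%.0f   With trading: $%.0f" % (uniform_cost, trading_cost))
    # same 100-ton reduction, but $2,000 instead of $3,500 in total abatement cost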
Allowing a company to devise and manage its own pollution control
plan is another effective (and "de-regulative") approach. In the article,
this was exemplified by Intel's operation in Arizona. Individual
companies such as Intel know what is best for themselves: each company
knows what the best equipment is and what the best procedures are to
achieve the established standards. Rather than having the government
tell them what to do, the people at Intel were able to devise their own
plan. This saved them a great amount of time, without the usual
cumbersome, bureaucratic procedures. Intel, in this case, bought the
effluent from the city's wastewater treatment plant. This kind of
arrangement allows corporations to work more closely with the local
communities; usually both parties benefit and may even achieve a common goal.
There are, however, potential problems that may arise from this.
Although we can presume that market forces will allow everything to work
itself out, the approach may still promote degradation, because under
this model there is still a notion of "you can pollute as long as you can
pay for it". If a great number of corporations are financially able to
pay for their level of emissions, the aggregate effect on our environment
could be devastating. Under this model it is also difficult to penalize
polluters, whereas under the command-and-control approach severe fines or
even imprisonment can be imposed to prevent pollution. There is also a
possibility that this may lead to an individualistic attitude: in a
competitive market, every corporation tries to maximize its gain by
acting in an individualistic manner, and that type of behavior has been
known to lead to greater environmental degradation.
The market-based approach is definitely an economically liberal (and
"Reagan-esque") approach, since it carries a "hands-off" notion. But if
the initial framework is implemented in an appropriate manner, it can turn
out to be a very flexible, user-friendly, and environmentally friendly
approach. In my opinion, the government's role should simply be initiating,
implementing, and monitoring, with minimal regulatory intervention.
However, the government should set some sort of environmental goal before
implementing a regulation. If this prevails, it will allow growth in a
sustainable manner.
Paper #2: Global Economy and the Environment
As the global economy becomes integrated, national or local
corporations gradually transform into multinational corporations (MNCs).
When this type of development occurs, the host countries are usually
the immediate stakeholders, because when an MNC sets foot in a host
country there are economic, political, social, and environmental impacts
that result from its corporate actions. In many cases it is certainly
possible for this to end in a win-win situation, if the host country and
the MNCs work together. However, there have been unfortunate examples
where this has not been the case.
In general, international agreements have advantages, because they
allow international standards to be harmonized. Environmental concern is
therefore one of the key issues on which policy makers and MNCs should
place a high priority, because growth and development are strongly
correlated with environmental degradation. Furthermore, it is fair to say
that MNCs are more likely than local corporations to have a harmful
environmental impact from growth and development, since MNCs may not be
as knowledgeable as local corporations in resource utilization and land
management. This reflects the notion that "the locals know their land
better than anybody else". The tropical rain forest of Brazil is a good
example: the "indigenous" or local people have a good understanding of
how to extract and utilize its resources in a sustainable manner, whereas
when a multinational timber company comes into Brazil, the result of its
actions will probably be more harmful, because it does not follow the
"traditional" methods.
Another important aspect is that in any international trade
agreement, an MNC is most likely to shift its production to a lesser
developed country, because LDCs are a good target for cheap labor and low
start-up costs. In Robert Pastor's essay, he mentions the term
maquiladoras: "cheaper labor that allows them (Mexicans) to assemble
parts, import from the U.S. and then reexport the assembled products". In
places such as the maquiladoras, safety standards are not as rigid, and
this puts the local workers at serious health risk. The "black lung"
cases, in which miners in Latin America contracted respiratory diseases
from working in poorly regulated coal mines, are one example; because
this occurred in lesser developed countries, occupational health standards
were lower than usual. The Union Carbide incident in Bhopal, India, is
another example, where the release of toxic gas took place because of a
lack of safety and precautionary measures. Many experts have commented
that the Union Carbide incident could have been completely avoided if the
plant had been located in a more developed country with stricter
standards. So there is a need for universal standards on these types of
issues. Unless this is achieved, the LDCs will be placed in a vulnerable
situation as more and more MNCs take advantage of it.
When MNCs come into a host country, they increase its revenue
and its GDP. However, this does not necessarily mean that everyone
benefits, especially in most third-world countries. The benefits usually
go to the elites, or sometimes to those living in more urbanized areas.
This disrupts equality, as the few rich individuals get richer and a
great number of poverty-stricken individuals get poorer. It also
increases political corruption. A good example is the case of Brazil
after the discovery of oil in the late 1960s. The level of corruption
resulted in an unprecedented amount of national debt, leaving the country
worse off than before. In addition, Brazil suffered a great deal of
environmental and resource degradation as a result of environmentally
unsound activities by the MNCs. As Walter Reid puts it, there is a need
for "governments to have a responsibility to invest a share of the
national benefits in rural development".
Free trade also makes it more difficult to push a political
agenda. Major powers such as the U.S. use economic sanctions on other
countries to enforce their political agendas. Not too long ago, the French
government was funding nuclear testing. Most U.N. officials
as well as the U.S. were outraged that France was not complying
with the international arms agreement. As we know, nuclear testing not only
encourages the international arms race, it also has a detrimental effect on
the global environment. However, because the U.S. was engaged in heavy free
trade with France, this made it more difficult to impose economic
sanctions. So there is also a need for more serious political considerations
when engaging in free trade. In this case, the Department of Commerce
should have carefully reexamined the political and military criteria before
a high level of free trade took place between the U.S. and France.
But as the world becomes more integrated socially and economically,
expanding international trade will have numerous benefits, if
it is carried out in an "appropriate" manner. After all, free trade
promotes a better standard of living in LDCs as well as improving economic
efficiency. It also allows more efficient use of natural resources,
which can have numerous environmental benefits. NAFTA is a good example of
an environmental success, where the U.S. EPA and Mexico's SEDESOL worked
closely together to achieve common environmental goals. Free trade can
serve as an instrument that increases international cooperation. However,
it can have enormous unintended consequences. Therefore there is a need
for more scrutiny in the decision-making process.
f:\12000 essays\sciences (985)\Enviromental\Should the Harris Superquarry ga ahead .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Rural Economic Development
SHOULD THE HARRIS SUPERQUARRY
GO AHEAD ?
Kenneth Mercer
BSc Rural Resources III
16th December 1994
CONTENTS
TABLE OF FIGURES
1 SUMMARY
2 INTRODUCTION
FIGURE 1 LOCATION OF THE SUPERQUARRY
3 THE ISSUES SURROUNDING THE DEBATE
3.1 History
3.2 The reasons for the selection of Lingerbay
3.3 The need for economic development
3.4 Other incentives
3.5 The environmental concerns
3.6 Making the quarry more palatable
4 CONCLUSIONS
4.1 The case for development
4.2 The case against development
4.3 The probable outcome
4.4 A suitable compromise
REFERENCES
TABLE OF FIGURES
FIGURE 1 LOCATION OF THE SUPERQUARRY
1 SUMMARY
There is considerable environmental opposition to the development of the Harris
superquarry. This is unlikely to stop the development on its own, but if the Scottish
Office decides that the project can go ahead, environmental restrictions are likely to be
imposed on the operation to minimise, as far as possible, the impact. The reasons for the
development centre round the need for economic development to bring jobs and
prosperity to this remote area. The life of the quarry is expected to be around 60 years,
providing an initial 30 jobs and rising to 80 as the quarry reaches peak production. The
question is whether a superquarry is the best solution to the problems of a remote rural area.
What will happen when the jobs come to an end, and would another form of investment
not be more appropriate to the area's needs? Would the presence of a quarry restrict the
choice for further development? Could an integrated approach be adopted and a 2nd
generation quarry planned? The decision of whether or not to go ahead cannot be
delayed indefinitely, as Norway and Spain are looking at developing their own sites. If it is to
go ahead then an early start will give Harris a stronger position in the market.
2 INTRODUCTION
This report examines the controversy and key issues surrounding the proposed superquarry at
Rodel, Lingerbay on the southern coast of the Isle of Harris (Figure 1) and attempts to
find an acceptable solution. The quarry would hollow out the heart of the mountain but
leave enough of a shell to keep the skyline largely unaffected. The whole question of
whether or not it should go ahead is the subject of the current public enquiry in
Stornoway. A decision must be made soon. The market for aggregates is limited, and
Norway and Spain (Section 3.1, 1991) have their own sites and are also looking at the
potential for developing them.
FIGURE 1 LOCATION OF THE SUPERQUARRY
(Glasgow Herald, 20/10/94)
3 THE ISSUES SURROUNDING THE DEBATE
3.1 History
1927 A detailed geological survey identified the deposit of anorthosite.
1965 Planning permission was given in principle to quarry the rock. The remit
covered a larger site than is planned today.
1966 Some small scale quarrying took place, but it was found that an on-site rock
crushing plant and a deep harbour were necessary for economic viability.
74-76 Outline planning permission was given for quarrying, shipping and loading
facilities but this was never acted on.
1977 The Scottish Office issued National Planning Guidelines. Harris was
identified as one of 9 potential sites. (The Scotsman 18/7/93)
1980 Ian Wilson, a Scottish entrepreneur specialising in minerals, persuaded
Ralph Verney, the advisor to the environmental secretary, to recommend
a large scale study on the potential of superquarries in Scotland. The
Scottish Office commissioned Dalradian Mineral Services - Wilson and
Colin Gribble - to write a report on the prospects. It was published in
1980 and listed 16 potential sites including 5 key sites, one of which was
Rodel. Many of the mineral rights were bought by Wilson before he
published the report; the rest he acquired later. He sold his idea for the
Harris superquarry at Rodel (Figure 1) to Redland Aggregates, and if the
quarry goes ahead, he will receive a royalty for each tonne of rock
removed. (New Scientist 1994)
1981 Outline planning permission was given for quarrying but it was not on a
large enough scale to be economically viable.
1988 The Scottish Office asked the Western Isles Island Council to develop a
policy on mineral extraction. This has still not been done.
1989 Government Planning Guidance Notes predicted a demand for crushed
rock.
1991 Consultants Ove Arup surveyed the potential for sites and identified 12 in
Norway, 1 - 2 in the north of Spain and less than 4 in Scotland.
Redland Aggregates submitted a new planning application to the
Western Isles Island Council.
1992 The Scottish Office issued a draft report which recognised the potential
for Rodel but found that socio-economic benefits needed to be balanced
with environmental consequences. (The Scotsman 18/7/93)
1993 A poll was sent out to 1822 islanders asking them to vote on the issue.
1109 replied, which amounted to a 60.9% response. The results showed
that the majority of the islanders were in favour of the quarry. The votes
cast were as follows: For, 682 (62.1%) and Against, 417 (37.9%). There
was a strong regional variation though: the further from the site the
people were, the more in favour they tended to be. (Glasgow Herald
17/6/93) A week later this poll resulted in the Western Isles Island Council
voting in favour of the planning application by 24 votes to 3. (Glasgow
Herald 25/6/93) The Western Isles Island Council held a special meeting in
Tarbert. (The Scotsman 18/7/93) The Department of the Environment
concluded that England could not meet its own demands for aggregates.
(New Scientist 1994)
1994 A Royal Commission report concluded that the demand for aggregates for
road construction would be considerably cut by reducing our current
dependence on road transport. It recommended that if coastal
superquarries were to be granted planning permission then it should be a
legal requirement that the quarried rock should be transported by sea. It
further concluded that the recycling of construction materials would
remove the need for superquarries and reduce the distance over which
aggregates would need to be transported. (Royal Commission 1994) By
September the Highlands and Islands Enterprise had given its general
support to the project and the Highlands and Islands Development Board
had approved a grant and loan totalling £250,000 to the company set up
by Ian Wilson, Harris Minerals Ltd. (Glasgow Herald 30/9/94)
3.2 The reasons for the selection of Lingerbay
The reasons for the selection of the site were mainly economic:
* The mountain contains an estimated 6 million tonnes of anorthosite.
As far as the aggregate industry is concerned this rock is a top quality product,
suitable for producing a wide range of aggregates, gravels and sands.
* The mountain is situated by a deep glacial sea loch, which is required for
access by the 30,000 tonne ships which will remove the rock. Unless the rock can be
directly loaded from the site onto the ships the quarry will not be economically viable. The
loch is deep enough to accommodate the deep harbour (24 metres) required.
3.3 The need for economic development
Lack of employment drives people out of the countryside. This creates problems as it
results in an ageing population and a higher dependant-to-worker ratio. This has a
dramatic effect on the cash flow of the area - as pensioners have less to spend than
paid workers, there is less money spent in the local shops and pubs. This means a cut
in services - less profit results in less provision. This is the downward spiral of rural
depopulation and deprivation. Deprivation exists if welfare drops below an agreed
standard. This definition goes further than the problem of finance. Education, public
transport, healthcare, housing and recreational services are all covered by the above
definition. In remote rural areas the general level of these services is clearly lower than
the national average. (Midwinter, A and Monaghan 1990) Harris now has a population
of 2,200, which represents a decline of 41% over the last 40 years; of those who remain,
33% of households have no adult in work. (The Guardian 8/11/94) Ian Wilson claims
that the creation of the superquarry will bring prosperity to the dying corners of the
Highlands and Islands and is the economic development necessary to reverse this decline.
3.4 Other incentives
Redland Aggregates has committed to annual donations to a local trust fund if the quarry
goes ahead. These would rise to a sum of £100,000 as the quarry reached full production.
(Glasgow Herald 16/6/94)
Ships could provide a cheap piggyback for distributing local produce. (New Scientist
1994)
3.5 The environmental concerns
* Ships' ballast water could introduce foreign species of sea life. This is a concern
because without predatory biological control any introduced species could multiply
rapidly and put the local marine ecosystem at risk. (New Scientist 1994) There is
particular concern over the introduction of toxic phytoplankton species. (SNH 1994)
* The area is home to otters. They are protected by the 1981 Wildlife and Countryside
Act and some would be displaced by the development. (Scottish Field 1993)
* The potential for a collision with oil tankers would be greatly increased due to the extra
traffic involved. (Friends of the Earth)
* Although not an SSSI, the site beats the qualifying mark of 300 points and is home
to 149 species of bryophyte (mosses and liverworts), 7 of which are rare. (The
Scotsman 10/10/94) These are particularly vulnerable to dust. Heather and bog
mosses, an integral part of the ecosystem, could be sensitive to increases in calcium
and soil pH levels. (SNH 1994)
* Harris is designated as a National Scenic Area and should be preserved. (The
Scotsman 10/10/94)
* Development of a quarry could also restrict other types of development. Harris
has the exceptional asset of a pollution-free environment. This is recognised by Scotia
Pharmaceuticals, who plan the development of a micro-algae farm on Harris. This
development is under threat because they could not risk any chance of contamination
to a product destined for the medical industry. (The Scotsman 3/10/94)
3.6 Making the quarry more palatable
Redland Aggregates has indicated that non-resident workers would have to leave the
island at weekends to minimise any conflict with the locals. This would be written into
their contracts of employment. (The Scotsman 13/10/94)
A 2nd generation superquarry would have a dual purpose: it would provide rock for
quarrying, but this would be part of a construction programme. The end result would not
be just a hole in the ground but could be designed to fulfil some other use, for example
producing hydroelectric power (HEP).
4 CONCLUSIONS
4.1 The case for development
The Scottish Office approves. (Section 3.1) Rodel is the best site in geological terms.
(Section 3.2) The quarrying and shipping would be badly needed economic catalysts for
the area. (Sections 3.3 and 3.4) There is a limited demand for aggregates, and Spain and
Norway are developing their own plans. If the Harris quarry is delayed too long then it
will have to face this extra competition.
4.2 The case against development
The area is an NSA and development would cause environmental concerns. (Section
3.5) There are other alternatives - especially the recycling of construction materials.
(Section 3.1)
4.3 The probable outcome
There is no doubt that Harris could benefit from economic development, but what would
become of it when the rock runs out or if demand falls? My personal feeling is that the
rock should be left alone. The contamination of a pristine environment is too high a cost
to pay. Clean industry which could benefit from this resource would be a more
appropriate development, but given the support of both central and local government,
the islanders and Ian Wilson, I feel planning permission will most likely be given.
4.4 A suitable compromise
If the development is to go ahead then I would like to see a second generation
development. (Section 3.6) This would give the quarry a secondary use and could
provide long term benefit to the community when it has reached the end of its productive
life. The operation should also have strict regulations on extraction procedure to reduce,
as far as possible, any environmental impact. The Western Isles Island Council should
be required to develop a policy on mineral extraction and include plans to phase in other
development as the quarry nears the end of its life. The last thing Harris needs is to be
left in an economic vacuum when the rock runs out.
REFERENCES
Friends of the Earth, Superquarries versus sustainability, Recruitment leaflet
Glasgow Herald, (17/6/93), Harris majority backs superquarry
Glasgow Herald, (25/6/93), Isles' £50 Million quarry finally given go ahead
Glasgow Herald, (16/6/94), Quarry firm to pledge £100,000 to Island trust
Glasgow Herald, (30/9/94), Enterprise at odds with heritage
Glasgow Herald, (20/10/94), First shots fired in quarry inquiry
The Guardian, (8/11/94) Native chieftain brings magic of the stones across the Atlantic
to help Hebrides see off threat to mince mountain into chippings, Page 6
Midwinter, A and Monaghan, (1990), The measurement and analysis of rural
deprivation, Report for COSLA, February 1990
New Scientist, (1994), Rush for rock in the Highlands, 8/1/94
Royal Commission, (1994), Transport and the environment-18th Report, HMSO,
London
The Scotsman, (18/7/93), Moving mountains to see how the land lies
The Scotsman, (3/10/94), Drug firm says quarry could hit expansion
The Scotsman, (10/10/94), The cruel dilemma for the people of Harris
The Scotsman, (13/10/94), Island curbs on superquarry contract staff
Scottish Field, (1993), Otter disruption, October 1993
SNH, (1994), Lingerbay press pack
f:\12000 essays\sciences (985)\Enviromental\Should there be a nuclear power plant in saskatchewan.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Should there be a nuclear power plant in Saskatchewan
I think there should be a nuclear power plant built in Saskatchewan
because I believe it would contribute a great deal to the province. There is a
growing need for power in Saskatchewan.
Right now in Saskatchewan there is a need for more power. There has been
some question as to whether a nuclear plant should be put in Saskatchewan. I
think this is the ideal choice of power plant because one bundle of uranium is
equal to the power output of 400 tonnes or 1900 barrels of oil. This is more
than adequate to cope with our need for power. One good example of our need
is that during winter Saskatchewan has to buy power from other provinces just
to get by; that is how serious the shortage is.
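As a rough check on that comparison (a minimal sketch: it assumes the 400
tonnes refers to coal, and the heat contents used, roughly 29 GJ per tonne of
coal and 6.1 GJ per barrel of oil, are approximate values I am supplying, not
figures given earlier):

    # Sanity check: are "400 tonnes (of coal, assumed)" and "1900 barrels of
    # oil" comparable amounts of energy? Assumed heat contents only.
    GJ_PER_TONNE_COAL = 29.0   # assumed typical value
    GJ_PER_BARREL_OIL = 6.1    # assumed typical value

    coal_energy_gj = 400 * GJ_PER_TONNE_COAL    # ~11,600 GJ
    oil_energy_gj = 1900 * GJ_PER_BARREL_OIL    # ~11,590 GJ

    print(f"400 t of coal   ~ {coal_energy_gj:,.0f} GJ")
    print(f"1900 bbl of oil ~ {oil_energy_gj:,.0f} GJ")
    # Both come out to roughly 11.6 TJ, so the two reference quantities
    # describe about the same amount of energy.

So, whatever the exact output of one fuel bundle, the two reference
quantities quoted above are at least consistent with each other.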
In Saskatchewan there is a lot of unemployment. Building a nuclear
power plant would create more jobs. This would also benefit the government
because fewer people would be collecting unemployment insurance and
welfare, adding to the amount the government could be spending on
other things such as fixing highways, better healthcare, and more funding for
schools.
Nuclear power also has environmental advantages. Nuclear power requires
a mere fraction of the space that is required to set up a solar, wind, or
hydroelectric generating station. This will leave more space for private
landowners and will also keep land prices lower. Nuclear power is
also a much cleaner operating type of fuel. The amount of waste produced
by a nuclear power plant is only a fraction of the amount of sulfur dioxide,
carbon monoxide, and nitrogen oxides produced by a coal plant. By building a
nuclear power plant we will reduce acid rain and not add to global warming.
Hydro stations promote algae growth in lakes, which reduces the amount of
oxygen in the water, making it harder for aquatic life to survive. Although
the damage nuclear accidents cause is very bad, an accident is not very
probable, so in the long run the damage caused by a nuclear plant is very
small compared to other generating stations.
Also, let's look at the economy. Any new industry or company brought to
the province brings income to the government, which will again enable the
government to improve other important things.
Nuclear power is also a cheaper fuel. Since we have such large
deposits of uranium in Saskatchewan, it will cost barely anything to fuel the
reactors.
So you see, it only makes sense to place a nuclear reactor in
Saskatchewan because of the smaller amount of pollution and the lower cost of
running a nuclear reactor.
f:\12000 essays\sciences (985)\Enviromental\Solar Energy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Imagine a perfect source of energy, one with which no pollution
whatsoever is associated. No poisonous gases or destruction
of rain forests. This abundant source of energy comes from the sun.
Solar energy is the radiant energy produced in the sun as a result
of the constant nuclear fusion reaction taking place there.
The solar constant, measured at the outer edge of the earth's atmosphere,
is about two calories per minute per square centimeter.
A calorie is the amount of energy needed to raise one gram of water
one degree Celsius. If we could efficiently harness the energy bombarding
the earth for twenty-four hours, we could power New York for a year.
Unfortunately, the photovoltaic cells that change this energy into
electricity are so inefficient that a system would take twenty-five years
to pay for itself in output.
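As a rough illustration of the scale involved (a minimal sketch; the Earth's
radius and the calorie-to-joule conversion are standard values I am
supplying, and the New York comparison itself is left unverified):

    import math

    # Solar constant as stated above: 2 calories per minute per cm^2.
    CAL_TO_JOULE = 4.184
    solar_constant_w_m2 = 2 * CAL_TO_JOULE / 60 * 1e4   # ~1400 W/m^2

    # Sunlight is intercepted over the Earth's cross-sectional disc.
    earth_radius_m = 6.371e6
    cross_section_m2 = math.pi * earth_radius_m ** 2

    power_w = solar_constant_w_m2 * cross_section_m2
    energy_24h_j = power_w * 24 * 3600

    print(f"Intercepted power : {power_w:.2e} W")       # ~1.8e17 W
    print(f"Energy in 24 hours: {energy_24h_j:.2e} J")  # ~1.5e22 J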
There are many uses of solar energy. Some homes rely fully on the power
of the sun to heat their water. Other houses have flat plate
collectors which aid in the heating of the house and its water. Solar
energy also plays a vital role in the absorption cooling cycle, in a process
called solar cooling.
Since wind is caused by the rising and sinking of hot and cold air,
wind energy can be considered a branch of solar energy. The same can be
said of tidal energy. And since the sun plays a vital role in the water
cycle, hydroelectric energy can also be attributed to solar energy.
Solar energy has great potential to become a main source of energy in the future.
f:\12000 essays\sciences (985)\Enviromental\Speach On Nuclear Weapons.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In our society, nuclear energy has become one of the forms of energy most
criticized by environmentalists. Thus, this is a look at nuclear energy
and the environment, and its impact on economic growth.
Lewis Mumford, an analyst, once wrote, "Too much energy is as fatal as
too little, hence the regulation of energy input and output, not its
unlimited expansion, is in fact one of the main laws of life." This is
true when dealing with nuclear power. Because our society's structure and
processes both depend upon energy, man is searching for the most efficient
and cheapest form of energy that can be used on a long term basis. And
because we equate power with growth, the more energy that a country uses,
the greater its expected economic growth. The problem is that energy is
considered to have two facets or parts: it is a major source of man-made
repercussions as well as being the basis of life support systems.
Therefore, we are caught between two concerns: one is
"resource availability and waste", and the other "the continuity of life
support systems pertinent to survival."
Thus, the environmentalists believe that nuclear energy should not be
used, for various reasons. First of all, the waste product, i.e. plutonium,
is extremely radioactive, which may cause the people who work or
live in or around the area of storage or use to acquire leukemia and
other cancers. They also show how billions of dollars are spent yearly on
safety devices for a single reactor, and this still doesn't ensure the
impossibility of a "meltdown." Two examples then given are Chernobyl
and Three Mile Island (1979), when thousands of people were said to have been
killed and incapacitated. Finally, the environmentalists claim that if society
wastes less energy, and develops the means to use energy more efficiently, then
there would be a definite decrease in the requirement for more energy
producing plants.
On the other hand, some businessmen and economists say that the
present conditions should be kept intact, as the other forms of energy,
e.g. oil, natural gas and coal, are only temporary means of dealing with
the surplus, and give off more pollution with less economic growth.
Concurrently, countries wanted a more reliable, smokeless form of energy
not controlled by OPEC, and very little uranium is required to produce
such a high amount of resultant energy. Lastly, they said that renewable
energy is (a) unreliable, in that the wind, for example, cannot be
depended upon to blow, nor the sun to shine, and (b) land-intensive, in
that a 1,000 megawatt solar farm may occupy about 5,000 acres of land,
compared with less than 150 acres of land for a nuclear power generating
station of similar capacity.
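To put those land figures in perspective (a minimal sketch using only the
acreages quoted above, plus the standard acre-to-square-metre conversion as
my own added assumption):

    # Power per unit of land area implied by the figures quoted above:
    # a 1,000 MW solar farm on ~5,000 acres versus a nuclear station of
    # similar capacity on ~150 acres.
    ACRE_TO_M2 = 4046.86

    solar_mw_per_acre = 1000 / 5000     # 0.2 MW per acre
    nuclear_mw_per_acre = 1000 / 150    # ~6.7 MW per acre

    print(f"Solar  : {solar_mw_per_acre:.2f} MW/acre "
          f"(~{solar_mw_per_acre * 1e6 / ACRE_TO_M2:.0f} W per m^2)")
    print(f"Nuclear: {nuclear_mw_per_acre:.2f} MW/acre "
          f"(~{nuclear_mw_per_acre * 1e6 / ACRE_TO_M2:.0f} W per m^2)")
    print(f"Land-use ratio: ~{nuclear_mw_per_acre / solar_mw_per_acre:.0f}x")

On those numbers the nuclear station delivers roughly thirty times more
power per acre of land, which is the point this argument rests on.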
Because the energy technology that society employs directly influences
the quantity and quality of life, the energy option that is chosen should
have the greatest cost-benefit effectiveness as well as maximizing
flexibility and purchases. However, those who believe in continuous energy
consumption growth seem to forget that there is only a limited supply of
energy in every energy system, and that to "overdo" any resource may have
an unacceptable impact upon global and regional ecology.
Thus, if the business world pushes the environment as far as it can
go, ceteris paribus, please refer to figure 1. Thus, to use petroleum as a
substitute for uranium, which is needed to power the nuclear system, would
not be economically or environmentally sensible. I say this because, first
of all, there is a major supply of uranium, considering it was one of the
last energy sources to be found, and only a small amount of it is
required to produce a lot of energy. Secondly, petroleum gives off carbon
monoxide, which is one of the reasons for ozone depletion; whereas
uranium does not give off pollution, except that it produces plutonium, which
needs to be buried for more than fifty years to get rid of its radiation.
Finally, because so much petroleum would be required to cover the
vast area that nuclear energy can cover, the cost to us as consumers
would be massive! This would mean slower economic growth and/or expansion,
especially when compared to nuclear energy. Therefore, ceteris paribus:
(a) if the cost decreases, the demand increases, and (b) if the cost
increases, the demand decreases. Please refer to figures #2 and #3
respectively.
Nuclear plants are now replacing coal burning plants. This will cost
the taxpayers far more than they are currently paying for electricity.
However, industry officials claim that since the plants have long useful
lifetimes, they will save the consumers money in the long run. The problem
with this is that it depends on hard-to-predict factors, such as the
future price of oil and the national demand for electricity. It should also
be noted that there is a sharp jump in consumer costs when the plants
are turned on, to pay for the construction costs owed to plant manufacturers
or other loan sources, plus interest.
Thus, the cost of electricity may go up three-fold. New plants
usually supply substantially more energy than the area requires, meaning
that the consumer will be paying for this waste of energy through the cost
per kilowatt hour. It should also be noted that some plants are canceled
during construction, which can raise the cost by up to several billion
dollars. This is absorbed by the government through tax laws, by
shareholders, and by rate payers; and considering that there is a
continual rise in construction prices and a decrease in the costs of
alternative fuels, many utilities cancel plants when they are almost half
completed. (The cost of late cancellation increases in proportion to the
amount that has been invested.)
Albert Schweitzer, an ecologist, wrote that nuclear power "threatens the
present and forecloses the future. It is unethical, and inferior to
non-fission futures that enhance survival for humans, alive and yet to be
born, and nature, with all its living entities." Therefore, in conclusion,
it is clearly evident why nuclear energy should be abandoned, even though
it may be considered economically sound, and why we should concentrate
more on conservation and quality rather than expansion as we have done in
the past.
f:\12000 essays\sciences (985)\Enviromental\Technological Development and the Third World.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TECHNOLOGICAL DEVELOPMENT & THE THIRD WORLD
I wonder if people in Third World countries know that they are considered the "Third World"? Do they use that term in reference to themselves? Do they have any perception of the comparison, judgment and bias that goes into that statement? I'd like to think that they don't. In the film about the Ladakh people that we watched in class, it was mentioned that they didn't have a word for poverty. No such word even existed in their language. But that was before. It was before the invasion of other cultures, and it was before they had anything to compare themselves to. And in comparison, they saw that, materially, they had less. And in that knowledge, they believed that they, as a people, were less.
In this essay, I will examine third world communities and the relationship between technological development and environmental degradation. I will look first at the way in which development occurred in the South, and the reason it happened the way that it did. From there, I will show how these methods of development proceeded to eventually cause widespread environmental damage, and its effect on the local people.
DEVELOPMENT: "WESTERN" STYLE
When I refer to "the environment", I mean not only the habitat that humans, plants and animals inhabit, but also the physical, emotional and psychological attitudes that are encompassed by these in their daily existence. Development, by my definition, will consequently refer to the technological advancement of a community as well as the improved status of humans and other species. This is my definition, and one that others employ frequently now. However, the model I will be examining first is the development theory based on the economic - political system. "A
typical western (read: economic) definition of development would be ' an ambiguous term for a multidimensional process involving material, social and organizational change, accelerated economic growth, [and] the reduction of absolute poverty and inequality.'" (1) The key emphasis in this statement is the phrase "economic growth." In Europe and North America, development politics has revolved around the economic aspect of producing surplus, and gaining capital. Because of our relatively rich land resource base, our method of technological development has been quite successful. Statistics show us as high wage earners, wealthy in public services such as health care and education, with a low infant mortality rate, long lifespan, and high GNP per person. Because of the comfort that our economic development has brought us, we have omitted the aspect of development in regard to human psychological well-being and the preservation of our natural surroundings that should be concurrent with technological development. With ours as the only current model of successful development, newly industrializing regions such as South and Central America, and Africa (and up until quite recently many Asian countries) attempted to achieve results in the same way. The problem that ensued for these countries was that instead of working slowly towards their goals, they sold themselves to get ahead economically. Instead of recognizing the problems that this method was causing and stopping them, governments and the wealthy private sector took control of the industry and continued to exploit it. With the rich in control, the poorer classes had little choice but to follow, and the downward spiral of poverty and instability began.
HOW IT HAPPENED
As the Third World nations struggled to become "developed," the rich countries became involved in their affairs. Interest in the countries arose primarily because of the trade resources that these lands provided. The potential for profit became evident because the new countries were struggling with their economy. They were experiencing internal unrest between their members and they needed money and resources to get started. Before they had a stable internal economy, they were bounding into the international market and selling their resources for a quick profit. Cash-cropping became a way to enter the international arena of market and trade, but the damage to the land took only a few short years to be discovered, and by that time luxuries had become "necessities." People wanted the cash flow to continue and instead of finding ways to use their land sustainably, they continued poor resource management regardless of the consequences. Deforestation became another common practice because of the demand for wood overseas. Export, although a seemingly beneficial development strategy, became detrimental to third world countries because it catered to the demand for certain items. Coffee beans are a large export item in South and Central America. With the rising demand for coffee in North America, land that was previously used for agriculture was taken over and used for growing coffee beans. The consequences of this were twofold: local people were suffering from lack of land to use for food production, and the potential land was useless because of the cash-crops.
ENVIRONMENTAL RESULTS OF TECHNOLOGY :TODAY
A more current example of the technological development that is resulting in environmental degradation is the misuse of resources. In Africa, industrial water pollution has become a widespread problem. Third World communities don't often have the awareness that the
North has about sustainable techniques and the importance of employing them. Most people in North America live in cities and have their water purified to a certain health standard and brought to them. People in the Third World use the river for washing, drinking and bathing. Unclean water leads not only to damage to the ecosystems but also to damage to the health of those who use it. Another problem is that companies from the North have based their industry in developing countries because they have lower environmental standards. With the benefits of jobs and money that these companies bring, the host country will rarely challenge the damaging techniques that they use. "Pollution forms another major set of environmental problems in the region. It used to be said that pollution is a problem of the rich countries, and that for the developing countries, development must come first and we can worry about the environment later. Pollution and the deteriorating quality of life caused by environmental degradation in our region has shown how fallacious this argument is." (2) We no longer have a choice but to address the problems that man is creating in nature and the environment. The excuse of development will no longer hold.
"(we, the) people.. in Latin America are using our best resources for the benefit of the rich countries - exporting to them our energy, our fish, our raw materials and using our labor resources to extract and export these materials and all at low prices and poor terms of trade." (3) While our technology is helping the third world countries in areas such as health and education, our own desire for goods and profit prevent us from allowing them their full potential. We create an economy where we will do whatever it takes to get what we want. As an example, we of the developed nations tell the third world that they should stop environmental damage, while it is our companies that are taking advantage of their low standards. We tell them to stop cash-cropping,
but we buy their coffee beans at any price. With these hypocritical standards, we will never influence them to turn their economy around. As we our economically motivated in our own interest, they too need economic motivation to change their destructive habits. Especially since with us, their products are primarily "extras," while for them, their trade of the product is negatively influencing their economy and affecting their people.
In Asia and the Pacific, urbanization, modernization, and technology are creating different environmental problems. It is the problem of human need. Thousands of people have been displaced from farms because the government or the private sector expropriates them for industrial use. Rich foodlands are being destroyed and turned into highways, airports or dams. With nowhere to go and no jobs, the people are migrating to the city in search of homes and employment. Slums and squatter dwellings result, with problems of rising crime and unhygienic living conditions. This puts terrible strain on both the human and physical environment, creating a situation with little hope for a successful future.
SOLUTIONS
To combat these crises, we must adopt some new behaviors. Our current model of development is showing some obvious flaws and it is evident that it is the impact of technology that has resulted in environmental damage. But technology is not the only factor at fault. It is the influence of technology combined with human greed that has presented these complex human and environmental problems. Laws monitoring pollution of the environment must be enforced, and followed equally in all countries. With the knowledge that we now possess of the global chaos that is at hand, we have no excuse but to do so.
The hypocrisy that exists between the systems must also be stopped. Considering not only ourselves, but the endangered lives of others is essential to the continuation of our species as a whole. Our fortunate position in a developed nation does not give us the right to create a hierarchy of our existence as more important than the life of another.
Possibly, the only way that we are going to combat any of these problems is by education. It will take more than a few dedicated people to change the world, but with the influence of many, anything is possible.
f:\12000 essays\sciences (985)\Enviromental\Temagami.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Table of Contents
Introduction
The History of the Forest
The Forests of Canada
Part One: The History of the Logger
The Canadian Forestry Industry
The Ontario Forestry Industry
Part Two: Forest Conservation in Ontario
Political Activity
Temagami
Part Three: The Temagami Debate
The Forester
The Environmentalist
Part Four: The Law of the Land
Civil Disobedience
Government Legislation / Wildlands League Lawsuit
Natural vs. Positive Law
Conclusion
Summation
Future Outlook
Bibliography and Suggested Reading
Appendix
Introduction
"Our understanding of the way the natural world works - and how our actions affect it - is often incomplete. This means that we [must] exercise caution, and special concern for natural values in the face of such uncertainty and respect the 'precautionary principal'." - Ontario Minister of Natural Resources, 1991
The History of the Forest
Forests have long been recognized as having vast power, both through their potential and how it has been viewed by humans, as well as through their effect on humans in sometimes subtle ways. The inherent properties of wood have always made it attractive as a versatile resource, but there are other, more subtle ways in which it affects people. The tropical rainforests, responsible for producing most of the earth's breathable air, have been given the lofty title of "lungs of the Earth," and as stated by the Canadian Encyclopedia Plus '93, "forests provide an additional, although intangible, benefit: the opportunity for renewal of the human spirit" (CAN ENCYC). Once humanity accepts these facts, we open ourselves up to profound responsibilities regarding their protection. Unfortunately for both ourselves and our environment, we have long declined to shoulder these responsibilities, seeing only the obvious potential of the end product of wood; overall, humanity has always managed the forests very poorly, even before forest management became an issue.
Since earliest civilized times, wood has been coveted as a resource for its ability to burn, as well as its pliable nature. With the discovery of fire came, hand in hand, the need for fuel. Fortunately, trees have always been abundant in all reaches of the earth, which has made living in harsh climates easier, greatly increasing our already rapid rate of expansion. Eventually electricity replaced wood as a source of energy, but the uses for wood have expanded over time to include building material and paper, and to the present day trees remain important to industry on a global scale. Unfortunately, humans have always had a poor reputation in regards to their environment, the forests being no exception. We have always looked upon resources as something to be exploited - used to the fullest, then forgotten. Therefore it should come as no surprise to learn that clear-cutting of forests has become the norm, even knowing that the forest will likely not recover fully for generations after clear cutting and countless animals will die in the process. It should come as no surprise to learn of the appallingly large quantities of tropical rainforests destroyed each day merely to make room for resorts or temporary farmland that will eventually become desert land. It is not highly surprising to learn of these and other such facts, yet they are still terrible to behold, especially knowing that they continue to be true today and will most likely continue to be true in the future.
The Forests of Canada
The forestry industry has always adopted a "cut and get out" philosophy, which has been accepted and most often encouraged by land-hungry industrialized populations who view trees as little more than an obstruction to growth. (ENCARTA) Such philosophies mean in simple terms clear-cutting large tracts of land and running as quickly as possible, leaving behind nothing but slash, a slowly eroding landscape and animals searching for lost habitat. For a long time forestry was no more than trying to reap maximum profits, clear maximum land in minimum time and get out quickly. We have indeed come far since those times. Clear-cutting is now a thing of the past and strict measures are in place to ensure that logging is done in a sustainable manner. That can be assured . . . can't it? No, not so readily as it may seem; that we have come a long way already is evident, but in which direction? Clear cutting, as will be shown, is not a thing of the past and as to the regulations in place... we shall see. These questions, and many others besides, can be answered by looking at the case study of Temagami.
The word Temagami has become inextricably associated with terms like "old-growth", "protest", "forestry", "environment" and many more. However, the actual Temagami issue has always been shrouded in an impenetrable fog which has only lifted at two times in its history as a potential logging and mining site. Behind the fog, a great many things were going on, but the focus on Temagami herein will be the two times it surfaced as a genuine concern. "Red Squirrel Road" and "Owain Lake" have become commonly heard phrases, but the questions, those which will be examined herein, are apparent: what do these key phrases mean? And more importantly, what have they to do with the law? Temagami is a prime example in determining the relationship between the environment and the law - both natural and positive.
Forestry is a major issue in Canadian society. There are many fundamental problems with the industry and associated attitudes as it stands today, but how can the situation be changed for the good of all concerned? This difficult question will be answered herein to a great extent and perhaps some light will be shed on a murky but important issue. Although not all aspects of the issue can be covered, this essay will, through the case study of Temagami, focus on the legal perspective of forestry - the laws which are in place, those which have been changed or should be changed, as well as those laws which are being broken by either side of the controversy - and outline some methods by which conservation can be achieved through our legal system.
Part One: The History of the Logger
"What are the roots that clutch, what branches grow out of this stony rubbish? Son of man you cannot say, or guess, for you know only a heap of broken images, where the sun beats, and the dead tree gives no shelter, the cricket no relief,
and the dry stone no sound of water."--T.S. Eliot
The Canadian Forestry Industry
Forestry has been longstanding as an industry in Canada; in some ways it was the first real industry - as European settlers found a land of endless forest, they realized that lumber would be the prime resource. Today, approximately 300 000 Canadians are directly employed in the forestry industry - almost 15 percent. (Can Encyc. "Forestry") In practice, forestry means much more than merely cutting trees. Forestry is defined by Encarta '95 as "the management of forestlands for maximum sustained yield of forest resources and benefits." This may seem a simple definition; however, the wording of it deserves further attention. First, forestry means management; management means looking after the forests rather than adopting a 'slash and burn' attitude. Second, forestry attempts to attain maximum yields; this appears to support the 'slash and burn' attitude, rather than a conservationist approach. However, the word 'sustained' is the catch; when added it means that this maximum yield must be available year after year. Therefore, in theory, forestry is sustainable management, as the definition states.
Past practices have strayed greatly from this definition. In North America, the first foresters were interested only in exploiting forests, worrying little about management and even less about sustainability. This view, which has persisted well into the 20th century, has always been supported by settlers who have viewed the immeasurable number of trees as an inconvenience which had to be removed before farms, houses, towns and roads could be built. (ENCARTA) As more and more settlers came to North America, agriculture began to expand, roads were built, and trees were cut and burnt more for room than for use as a resource.
Such activity became common throughout the United States, as well as the lowlands of Canada where early settlers found the best soil for farmland. Unfortunately, once the majority of trees had been cut down, previously lush soil would begin to erode as rain and wind pounded on the unprotected earth. Under reasonable, small scale farming, such would be of little consequence; however, when huge tracts of forest are removed at once, it becomes almost impossible to keep the farmland from turning to wasteland - one has only to look at ancient nations such as Mesopotamia, once a heavy agricultural area and now a vast desert, or the ever expanding Sahara desert to see the devastating effect of soil erosion. (CAN ENCYC) After a time, people began to understand this, at least in a crude sense. Forestry, it seemed, must be more than simply cutting down trees. The forests must also be managed to ensure more cutting in the future.
It was not until the second half of the nineteenth century, with the signing of the British North America Act in 1867, that forestry was considered important under Canadian law. It was written into the act that "The Management and Sale of the Public Lands belonging to the Province and of the Timber and Wood thereon" would be assigned to the jurisdiction of the individual provinces. (CAN ENCYC) Although this gave the forests some protection under the law in regards to supposed 'sustainability', there remained - as there still remains to an extent to this day - a greed which, for the most part, overpowered any thoughts of conserving for the future.
The Ontario Forestry Industry
The year 1893 marked the beginning of a somewhat dubious ecological protection program in Ontario with the establishment of the Algonquin National Park as a "public park and forest reservation, fish and game preserve, health resort and pleasure ground for the benefit, advantage and enjoyment of the people of the Province." (GRAY 92) The purpose of the park was the logging of the tall pines rather than any conservationist motive. Scattered parks were established on a purely ad hoc basis throughout Ontario for almost eighty years, during which exploitative logging grew and forests were destroyed.
Eventually, starting in the 1960s and spreading in the 70s, people began to notice the forests disappearing, began to see parks as more than merely recreational; more and more concerns were being voiced regarding "uncontrolled development, uncoordinated land-use planning, and the corresponding loss of wilderness." (GRAY 91) One of the outcomes of these protests was that the Ministry of Natural Resources developed the Ontario Provincial Park Planning and Management Policies - titled "The Blue Book". (GRAY) The blue book, which is still in use today, is perhaps the closest thing to forest protection in Ontario. It allowed a comprehensive park system to be created with six classes of park which could ensure some measure of protection to these areas. More parks were created but it was becoming apparent that these parks were doing little to stop the great change being forced on the landscape of Ontario. Writers from the World Wildlife Fund (WWF) state that "over the past 200 years Ontario's natural landscape has been changed on a scale greater than any other since glaciation." (GRAY 92) Most old growth (120+ yrs) pine forests have been cut and replaced with alien monocultural trees - to make future harvesting easier; the land of the Teme-Augama would come under dispute due to fear of such.
Part Two: Forest Conservation in Ontario
Political Activity
In 1990, the election of the provincial NDP under Bob Rae appeared to herald a new beginning for forestry conservation. Rae had been arrested a year previously in the protest over the Temagami Red Squirrel Road extension - which will be discussed further in part two - and appeared to place the environment high on his agenda. Promises were made to protect five previously unrepresented natural regions by 1994, to be added to the thirty-two already protected out of sixty-five [see appendix, map 2]. (GRAY 95) However, little ever came of the promises; by the end of 1993 only one old growth area, inside Algonquin Park itself, was to be protected from logging and road building. Meanwhile, Howard Hampton, the new minister of natural resources, declared that forest harvest across the province was to be increased by up to 50 per cent as a result of recommendations by a committee made up entirely of foresters, labour, and the government. (GRAY 94) Public interest groups were outraged; as a means of appeasing them, the government announced a "Keep it Wild" program. The program was said to be a means of protecting the old growth forests in a meaningful way, but in the end it became more about public relations than anything. Bits and pieces of forest throughout the province were protected but the outcome was by absolutely no means sufficient for sustainability. One good thing did come out of the NDP government: a piece of legislation which seemed minimal at the time but would have resounding influence from a legal perspective in the future, the Crown Forest Sustainability Act. The act requires that certain guidelines be followed by the MNR when approving any logging plans. (WILDLANDS) However, for the time being, it appeared that the NDP was as hurtful through their inaction as any past government. And today the PC government appears to be doing nothing to keep the out of control lumber industry in check. Logging practices continue to decimate the landscape, replacing it with rows of arrow-straight man-made trees. It appears that each successive government is more willing to promise to support the environment but less willing in actuality to make any meaningful progress. In order to explain this in a meaningful way, the issue of the Temagami old growth forests should be examined; it is a perfect example of Ontario's battle between industry and the environment.
Temagami
Temagami is named as the land of the Teme-Augama. It is known as one of the most diverse ecosystems in Ontario, if not Canada; known for clean, clear lakes and "one of the highest quality lake trout fisheries remaining in Ontario"; (TEMAGAMI 1) for the 2,400 km of canoeable river systems; for one of the last remaining old-growth forests in the province. Temagami has been glorified by painters Archibald Lampman and David Brown Milne, as well as writer Archibald Belaney - known as the Grey Owl. (CAN ENCYC) Also, Temagami is known for the controversy between industrialists and environmentalists over the wildlands it contains. In the course of the past century, loggers and miners have slowly eaten away at the Temagami wilderness while successive governments have sat idly by, and finally this became too much to bear. In the early seventies, the Teme-Augama Anishnaibi decided they must speak out; the method they chose was the launching of a formal challenge against the government's right to allow industry into their homeland. (TEMAGAMI 1) As word of the challenge spread, others joined the call and the opening stage was set for what would later become the first protest to be looked at herein: the Red Squirrel Road blockade.
The Red Squirrel Road extension was perhaps the most expensive fifteen kilometres of road laid down by the Ontario government. The bill ended up at six million dollars - half of which was for security against the protesters. (MAITHERS) The Teme-Augama banded together with other concerned protesters, chaining themselves to bulldozers, blocking roads by sitting in the path of loggers, and destroying machinery; all in all, performing a great many acts of civil disobedience which will be discussed later. The outcome, besides the spending of copious amounts of money by both sides, was the setting up of the Comprehensive Planning Council (CPC) by the NDP, meant to "strengthen the role of local communities in the management of natural resources in the Temagami area." (MNR 1) Many protected areas within Temagami were proposed; however, despite making many protective recommendations, it eventually became clear that the CPC did not intend to recommend any sort of substantial protection.
This brings the issue to where it stands today. "Red Squirrel Road" has been replaced with "Owain Lake" but from a legal perspective the concerns are the same. The provincial government appears to be even less environmentally friendly than the CPC. In fact, according to Northwatch, an independent environmental group, "seventeen of the thirty-nine recommendations of the CPC were not accepted beyond an ambiguous 'agreement in principle' (i.e. not in practice)." [see appendix] (NWNEWS) The Ministry, however, boasts that they have "increased environmental protection in the Temagami area, protected old-growth red and white pine and resolved long-standing land use issues." (MNR) The debate, which will be discussed in the next section, remains relatively the same, with a few twists. Industrialists still battle for the right to carry on with their jobs while environmentalists and Anishnaibi fight to protect the diverse wilderness. In order that a better background of the debate be presented, the concerns of each must be presented individually; only then can the actual legal conflict be truly appreciated.
Part Three: The Temagami Debate
"If Greenpeace devoted all the energy to northern forests as it did to tropical forests, we'd be in trouble"
-- Tony Shebbeare, director of the Brussels office of the Canadian Pulp and Paper Association
The Forester
Almost fifteen percent of Canadians were employed directly in the forestry industry in 1989; (C.E.) since then, little has changed. This type of fact is the basis for what is, and always has been, the industrialist response to environmentalist concerns: you can't criticise industry because it creates jobs. And clearly most people accept it, especially today as jobs are becoming more and more scarce. The forest industry has arisen, as was stated earlier, from an attitude of exploitation fostered by greed, expansion, and industrialization. Since early Europeans first came to Canada, logging trees has been second nature, a part of the conquering of the country. Only today is there any apparent feeling of conservation; people are perhaps admitting, if somewhat reluctantly, that such practices as clear cutting might be wrong. However, though foresters may be beginning to reconcile a small amount of what has been long ingrained into the industry, the mentality remains today that industry cannot be impeded no matter the cost, as long as jobs are at stake. Basically, forestry today is just like any other industry; a means of raping wilderness such as Temagami in order to make a quick buck. Can they be blamed for wanting to earn a living?
In the Temagami case, the MNR has been responsible for most of the logging facilities already set up in Temagami; however, according to the Wildlands League, a Toronto-based environmental organization, they have largely withdrawn from the area and will probably seek to hand management over to a large forest company. (WILDLANDS) As of yet, no such company has stepped forward; however, several small companies have begun logging already. What these companies, along with the MNR, want is the ability to conduct their industry as it has always been conducted; the adage "if it's not broken, don't fix it" seems to apply perfectly to them as they vehemently deny myths like global warming and animal extinction. They feel that the concessions allowed by the MNR in this case are more than fair, and there is the suspicion that environmentalists won't be happy until all forestry activity has been eliminated.
The Environmentalist
The environmentalists do not have the same long-standing base that foresters do. The environmentalist movement itself is a recent thing, beginning in the 1960s and 70s with the Green Revolution. Since that time, such individuals and groups have sprung up all around the globe; in the beginning no more than a minor annoyance to industrialists, farmers and average citizens, they eventually became a major factor to be considered by industrialists whenever they attempt anything affecting the environment in any way. Today, environmental concerns are bringing many people to believe that resources are not as 'unlimited' as everyone has believed for so long, and the industrial movement is finding it more and more difficult to accomplish the same goals it would have easily accomplished as recently as ten years ago. In response to the Temagami issue, four prominent environmental groups have risen to stand against the industrialists. They are the Wildlands League - headed by Tim Gray in Toronto, Northwatch - the Northeastern Ontario environmental coalition, the Temagami Lakes Association - a powerful cottage owners' organization, and Friends of Temagami - a coalition created for the specific purpose of fighting against Temagami loggers and miners. What they want, as outlined in the Wildlands League's Future of Temagami Plan, is a Wildland Reserve established to protect important watershed areas, as well as several other sites of ecological value, and the Red Squirrel as well as two other roads permanently closed where they enter the Reserve. (TEM. 3) They feel that these measures are the only way to preserve the ecological diversity found in the Temagami wilderness; their feeling is that the MNR and the forestry industry simply do not care about ecological stability.
From a legal perspective, there is much to discuss in the Temagami case. Some laws have already been hinted at, but little has been said yet about specific legal issues. There are three different aspects of the law brought into play in this issue: the purely criminal aspect of civil disobedience, the environmental laws and regulations (or lack thereof), and the ever-pressing conflict between positive and natural law. These will all be dealt with individually in the next section, then weighed together to come up with some definite conclusions.
Part Four: The Law of the Land
"What gives us the right to take the law into our own hands? The answer is simple. Our birthright as natural creatures, citizens of the earth, gives us the right to uphold and defend the laws of nature." ---Watson (TALOS 23)
Civil Disobedience
According to Abbie Hoffman, "the best way to get heard is to get arrested, and the more times the better." Deemed troublemakers by some and revolutionaries by others, the Red Squirrel Road and Owain Lake protesters did just that. Scores of people sat in the dead center of the road and refused to move regardless of threats or coercion, destroyed bulldozers or chained themselves to them, perched on platforms high atop the trees, and hammered metal stakes into trees to destroy chainsaws - and called it all civil disobedience. The end result? Many arrests were made, yet few were ever charged for the acts of mischief (mischief being the most likely charge) - most were held for a night or simply dropped off in North Bay, and those who actually caused damage were never caught or pursued. The government was forced to pay three million dollars for security measures or damages caused by the protesters in the Red Squirrel Road building alone, and the builders lost a great deal of time and money.
The legal battle over the civil disobedience is of two views. Some people view the acts as a waste of time and taxpayers' money, holding the belief that if there is a legal way to protest, it should be used rather than resorting to illegal practices. Clearly, such reasoning is sound; there are many legal methods of protesting, and governments are always more willing to listen to legal protesters than to lawbreaking troublemakers. Knowing this, some might wonder about the reasoning behind such a clearly premeditated group crime.
The answers are varied; however, looking at the effects of the disobedience, one comes to mind: media. Those of the second view toward civil disobedience see it as a means of voicing their concerns to the public effectively and quickly. The fact that their actions are illegal serves only to attract media attention. To them it is a last-ditch effort at raising public concern and perhaps forcing the government into action. To a large extent they have succeeded; the only times Temagami has really made headline news were during the two large-scale protests. The environmentalists also believe, as a justification for the laws being broken, that natural law must prevail over positive law; this will be dealt with later. First, the issue of environmental law must be addressed.
Government Legislation / Wildlands League Lawsuit
Environmental legislation is one of the big issues under contention. Environmentalists say that under current legislation the old-growth forest cannot sustain itself, provided that loggers take full advantage of the lack of any real legislation. The industrialists, backed by the government, believe that they are just trying to do their job and that the current legislation is strict enough, protecting over fifty per cent of the remaining old-growth pines. The areas actually protected fall under the Ontario Provincial Park Planning and Management Policies, but what is under contention today is the Crown Forest Sustainability Act. This past September, the Wildlands League and Friends of Temagami, represented by the Sierra Club Legal Defence Fund, filed a lawsuit against the government under the CFSA, claiming that the MNR had "failed to ensure that logging will protect wildlife, ecosystems or the public interest". (SIERRA) This lawsuit is in itself a landmark, being the first attack on Ontario forestry from a legal point of view. As simply stated by Tim Gray of the Wildlands League, "we are seeking to have the Ontario Court order the Minister to obey the law . . . we had to act now to draw the public's attention to the MNR's plans to rid themselves of even these minimal laws to protect the public interest." (TEM. UPDATE) As such, the earlier government's weak legislation has become an unlikely hero in the eyes of the environmentalists. The two groups sought an injunction forcing a 'stay' of the logging in the Owain Lake forest area until the case was completed; their feeling was that "we will lose the forest by the time our case is heard." (TEM. UPDATE) After three days of testimony and four days of deliberation by an Ontario Divisional Court judge, the request was denied. However, the case will proceed to full trial this winter and the outlook is optimistic for the environmentalists. If the case succeeds, the industrialists will be forced to cease all activity in the area until the MNR develops the necessary environmental guidelines.
There are few other pieces of legislation dealing with forestry conservation - it is mainly left up to the individual regional MNR to establish guidelines for its area. The Environmental Assessment Act requires that an assessment be carried out before logging of an area is allowed, but the Environmental Protection Act does not even mention forestry. That there is no real forestry or even habitat protection in any current Canadian legislation is perhaps an indication that governments still don't realize the full consequences of our present practices. That thought raises the issue of whether the dire circumstances environmentalists see us to be in - with no legislation to back their claims up - warrant breaking the laws set down by governments in order to enforce those made by nature.
Natural vs. Positive Law
Early philosophers believed that those laws created by humans (positive laws) should stem from and reflect those created by nature (natural laws). Cicero is credited as saying that "civil or human laws should be set aside or disobeyed if, in the minds of 'wise and intelligent men,' the laws were deemed in conflict with those of nature." (TALOS 17) Somewhere along the way, however, humanity has either failed to see the connection or allowed it to be severed. Environmental resources have always caused some controversy in this regard; human greed sometimes has an insidious way of overriding care for nature. People are unwilling to compromise their ability to make money, even though it might mean that nature is severely damaged in the process. The desire to make money cannot, in itself, however, be seen as greed; in that respect we must acknowledge that loggers are not to blame for the destruction they wreak. It is the lawmakers themselves who are perpetuating the constant rate of natural destruction, both through inaction and through harmful action. The question then arises: are environmentalists justified in disobeying positive law in order to bring about what they see as obedience to a higher law?
The question brought up in this case is highly disturbing; clearly, the activists acted in disobedience to the law as defined by our government. Yet, just as clearly, there was a cause for their actions - to save ancient forests and the ecological diversity they hold from annihilation and replacement by tree farms. The question in the case is highly sim
f:\12000 essays\sciences (985)\Enviromental\The Atmospheric Ozone Layer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The stratospheric ozone layer exists at altitudes between about 10 and 40km depending on latitude, just above the tropopause. Its existence is crucial for life on earth as we know it, because the ozone layer controls the absorption of a portion of the deadly ultraviolet (UV) rays from the sun. UV-A rays, including wavelengths between 320 and 400nm, are not affected by ozone. UV-C rays, between 200 and 280nm, are absorbed by the other atmospheric constituents besides ozone. It is the UV-B rays, between 280 and 320nm, absorbed only by ozone, that are of the greatest concern. Any loss or destruction of the stratospheric ozone layer could mean that a greater amount of UV-B radiation would reach the earth, creating, among other problems, an increase in skin cancer (melanoma) in humans. As UV-B rays increase, the possibility of interference with the normal life cycles of animals and plants would become more of a reality, with the eventual possibility of death.
Stratospheric ozone has been used for several decades as a tracer for stratospheric circulation. Initial measurements were made by ozonesondes attached to high-altitude balloons - chemical sondes or optical devices which measured ozone concentrations through the depletion of UV light.
However, the need to measure ozone concentrations from the surface at regular intervals led to the development of the Dobson spectrophotometer in the 1920s. The British Antarctic Survey has the responsibility of routinely monitoring stratospheric ozone levels over the Antarctic stations at Halley Bay (76°S 27°W) and at Argentine Islands (65°S 64°W). Analysis of ozone measurements in 1984 by a team led by John Farnam produced the startling discovery that spring values of total ozone during the 1980-1984 period had fallen dramatically compared to the earlier period between 1957 and 1973. This decrease had occurred for only about six weeks in the Southern Hemisphere spring and had begun in the spring of 1979. This discovery placed the British scientists in the limelight of world publicity, for it revived a somewhat sagging public interest in the potential destruction of the stratospheric ozone layer by anthropogenic trace gases, particularly nitrogen species and chlorofluorocarbons.
Ozone concentrations peak around an altitude of 30km in the tropics and around 15-20km over the polar regions. The ozone formed over the tropics is distributed poleward through the stratospheric circulation, particularly in the upper stratosphere where the airflow is strongest and most meridional. Since the level of peak ozone is considerably higher in altitude in the tropics, ozone descends as it moves toward the poles, where, because of very low photochemical destruction, it accumulates, particularly in the winter hemisphere (see fig.1). Some ozone eventually enters the troposphere over the poles.
Seasonal variations are much stronger in the polar regions, reaching 50% of the annual mean in the Arctic. In spring, Northern Hemisphere transport of ozone toward the poles builds to a maximum (40-80°N), associated with the maximum altitude difference between the major ozone regions of the tropics and the poles. The poleward flux of ozone ceases as the westerly circulation dominant in winter is replaced by easterlies over the tropics. In the Southern Hemisphere the spring maximum occurs near 60°S, one to two months after the maximum in the subtropics. Throughout the summer, photochemical reactions reach a maximum in the lower tropical stratosphere and ozone concentrations fall. Autumn circulations are the weakest, with the latitudinal gradient between the poles and the equator virtually disappearing, and ozone concentrations throughout most of the stratosphere reach a minimum. As the circumpolar vortex expands for winter, the strength of circulation increases rapidly, ozone transport from the tropics also increases strongly, and meridional circulation and variability peak in the winter months.
Anthropogenic influences on the stratospheric ozone layer
Figure 2 establishes the basic natural formation and destruction processes associated with stratospheric ozone. However, several other gases which have long lifetimes in the troposphere eventually arrive in the stratosphere through normal atmospheric circulation patterns and may interfere with or destroy the natural ozone cycle. The trace gases of most importance are hydrogen species (particularly OH and CH4), nitrogen species (NO, N2O and NO2) and chlorine species. These gases not only react directly with ozone or odd oxygen atoms, but may also combine in several different ways in chain processes to interfere with the ozone cycle. Figure 2 presents examples of these reactions. The lifetime of these trace gases is crucial to the chemistry of the stratospheric ozone layer. Figure 3 illustrates the photochemical lifetime of the major trace gases affecting the ozone layer according to altitude. Many of these major gases have lifetimes of less than a month in the stratosphere compared to more than 100 years in the troposphere.
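Since Figure 2 is not reproduced here, the following is a minimal sketch, drawn from standard textbook chemistry rather than from the essay's own sources, of the natural formation and destruction reactions (the Chapman cycle) together with one example of the kind of catalytic nitrogen-species chain the figure summarizes:
\[ \mathrm{O_2} + h\nu \rightarrow \mathrm{O} + \mathrm{O}, \qquad \mathrm{O} + \mathrm{O_2} + M \rightarrow \mathrm{O_3} + M \]
\[ \mathrm{O_3} + h\nu \rightarrow \mathrm{O_2} + \mathrm{O}, \qquad \mathrm{O} + \mathrm{O_3} \rightarrow 2\,\mathrm{O_2} \]
\[ \mathrm{NO} + \mathrm{O_3} \rightarrow \mathrm{NO_2} + \mathrm{O_2}, \qquad \mathrm{NO_2} + \mathrm{O} \rightarrow \mathrm{NO} + \mathrm{O_2} \]
The last pair has the net effect of converting O and O3 into two O2 molecules while regenerating NO, which is what makes such chains catalytic.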
Hydrogen species
The influence of OH, HO2 and CH4 on the stratospheric ozone layer tends to be less important than that of the other major trace gases, except in the upper stratosphere. The major indirect influence of the hydrogen species in the mid to lower stratosphere is through their catalytic properties, enhancing nitrogen and chlorine species reactions.
Nitrogen species
There is not much information available about seasonal and annual NOx species in the stratosphere compared to ozone. NO and NO2 concentrations in winter are considerably lower than in summer in both hemispheres. In the early 1970s there was major concern that NOx emissions from supersonic aircraft would create a major depletion of the ozone layer. Considerable ozone reductions (16%) were expected in the Northern Hemisphere, where most of the supersonic transports would be flying, but stratospheric circulation patterns would ensure at least an 8% reduction in ozone over the Southern Hemisphere. Fortunately for the globe, the massive fleets of supersonic transports never materialized. The Concorde was barred from landing at many airports for noise and other environmental reasons and now flies only limited routes, mainly from Great Britain and France. Concern over NOx emissions has since been overshadowed by the potential problems associated with the chlorofluorocarbons.
Chlorofluorocarbon species
In 1974, Molina and Rowland first suggested that anthropogenic emissions of chlorofluorocarbons (CFCs) could be depleting stratospheric ozone through the removal of odd oxygen by the chlorine atom. CFCs released from aerosol spray cans, refrigerants, foam insulation and foam packaging containers increased concentrations of Cl compounds in the troposphere considerably. CFCs are not soluble in water and thus are not washed out of the troposphere, and there are no biological reactions that will allow their removal. The result is very long tropospheric residence times and the inevitable transport into the stratosphere through normal atmospheric circulation. The chlorine atom, released from a CFC, reacts with ozone to form ClO and O2. Since ClO reacts with ozone six times faster than any of the nitrogen species (Rowland and Molina, 1975), it becomes the dominant mechanism for destroying stratospheric ozone. As a result, a lone Cl atom can be responsible for destroying several hundred thousand ozone molecules. Based on recent results, ozone reductions of 5-9% are possible, with regional changes of 4% in the tropics, 9% in the temperate zones and 14% in the polar regions. Recent discoveries such as that by Farnam (1985) lead most experts to believe that important destruction of the stratospheric ozone layer is not far off.
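As a sketch of the Molina-Rowland mechanism described above (these are the standard catalytic-cycle equations, not taken from the essay's references), the chlorine cycle is:
\[ \mathrm{Cl} + \mathrm{O_3} \rightarrow \mathrm{ClO} + \mathrm{O_2} \]
\[ \mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O_2} \]
with the net result that O3 and O are converted into two O2 molecules. Because the Cl atom is regenerated at the end of each cycle, it can run through the loop over and over until it is finally locked away in a reservoir molecule, which is why a single atom can account for the destruction of so many ozone molecules.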
The Polar "Holes" - The Antarctic
With the help of the Dobson spectrophotometer, Farnam (1985) was able to establish that total ozone concentrations over the bases in Antarctica had been falling during the October-November period since 1979. The trend of ozone loss during this time varied from year to year, but over the six-year period showed an overall decrease. Verification from other bases in Antarctica came soon afterward (Table 4 - Komlyr, 1988). Further verification came from the Nimbus satellite, from which scientists were able to produce graphic colour-enhanced photographs of the depletion of ozone over Antarctica. The media began using the phrase "Antarctic Ozone Hole" to describe this phenomenon, and unfortunately its importance has been expanded out of proportion to the global total ozone situation. By definition, the "hole" represents a depletion of ozone concentrations over Antarctica, not an empty space in the atmosphere.
Atmospheric scientists were at first puzzled about the cause of the ozone hole. Three theories were suggested. The first was that there was a connection with the 11-year sunspot cycle: when a large number of sunspots occur, considerable NOx is produced in the upper atmosphere, which could interact with the ozone through the reactions shown in table 2. The second was that, during the period when the sun was rising over Antarctica, there could be dynamic interactions between the troposphere and the stratosphere, with an upwelling of ozone-poor air into the stratosphere from below; such upwelling should also carry many tropospheric trace gases not normally found in abundance in the stratosphere. Third, the ozone hole could be caused by chemical reactions, particularly reactive Cl somehow released from reservoir molecules which were transported to Antarctica by the stratospheric circulation from source regions much further north.
Detailed investigations of these theories were made by the United States National Academy of Sciences (N.A.S.) in 1988. The theory suggesting sunspot influences was discounted because there was minimal NO2 measured in the upper stratosphere over Antarctica, and in the main area of expected ozone loss, above 25km, ozone concentrations remained relatively high during the lifetime of the hole. The second theory, suggesting convective upwelling from the troposphere, was also eliminated as a possibility, since trace gas concentrations normally found in the troposphere were not measured in the stratospheric ozone hole. This left the third possibility, Cl chemistry, which, the N.A.S. report suggested, occurred under a unique set of meteorological circumstances.
At the end of the Southern Hemisphere winter, as the sun is beginning to appear over Antarctica, the circumpolar vortex circulation in the lower stratosphere is at its strongest. Extremely stable and durable at this time of year (September and October), the vortex blocks any incursions of warmer air from the mid-latitudes and allows an extensive drop in temperature inside, over the continent. Within the depths of the hole, important chemical reactions which deplete the ozone concentration take place. In order for the chemical reaction theory to work, there must be an overabundance of ClO in the Antarctic stratosphere between 12 and 25km and a diminished concentration of NOx species, which might otherwise interfere with Cl attacks on ozone. Concentrations of NOx species do decrease toward the hole centre, and ClO concentrations are 100 to 500 times higher than observed outside the hole.
In 1987, the increases in ClO occurred across a very sharp boundary layer, fluctuating between about 67 and 75°S. Over a latitude span of about 1°, ClO increased from less than 100 pptv to over 200 pptv, depending on altitude, while ozone averaged 256 DU. This area of steep change marked the chemical boundary of the hole. Spatial distributions of ClO and ozone showed a marked negative correlation inside the hole: whereas ozone decreased by about 60% crossing the boundary, ClO increased by greater than a factor of 10. This result provides strong circumstantial evidence that the link between ozone loss and chlorine over Antarctica is real.
There is still much to be learned about what causes the Antarctic ozone hole. Questions regarding changes in ClO at various latitudes, changes in concentrations of molecules from day to night, the progressive deepening of the ozone hole through the 1980s, and several other details remain unanswered. Colder stratospheric temperatures within the hole are likely to create thicker, longer-lasting clouds which enhance processes for ozone removal, but the details are not yet clear. Day-to-day variations in ozone within the hole have not yet been properly explained, and there is some question whether the ozone hole will retain its depth and persistence in future years.
The Arctic
The discovery of the Antarctic ozone hole raised the possibility that a similar hole could exist over the Arctic. Early results from a series of measurements in the winter of 1988-89 suggest that ozone loss over the Arctic exists, but not to the degree of that over the Antarctic.
Trends in global total ozone
The publicity surrounding the discovery of, and research activity in, the Antarctic ozone hole has unfortunately tended to obscure a potentially far greater problem: decreases in total ozone concentrations across the globe. The loss of ozone above the tropics and mid-latitudes, and the resultant increase in harmful UV radiation, could be disastrous to the earth's population if the changes were major. Since the late 1970s, there has been a slow but steady decrease in global total ozone, even if the major losses over Antarctica are not included. The trend is on the order of -2.7% per year in all seasons, with the greater losses occurring in the Northern Hemisphere autumn and winter (greater than 3%) and the least in the Northern Hemisphere summer (1.6%).
Surface impacts and political decisions
The impacts of a depleted ozone layer on surface organisms depend on their exposure to increased UV-B radiation. As a rough estimate, many experts suggest that the percentage increase in UV-B radiation affecting surface organisms would be about twice the percentage loss in stratospheric ozone from anthropogenic causes. The most immediate effect on human beings would be an increase in various skin cancers, which are already on the rise. Increases in the incidence of cataracts and interference with the human immune system are other possible influences. A more serious potential long-term threat is the damage to cell DNA and the genetic structure not only in human beings but in other animals, plants and organisms.
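As a worked illustration of the rough rule of thumb quoted above (the factor of two is the essay's estimate, not a precise radiative constant), the relationship can be written
\[ \frac{\Delta \mathrm{UV_B}}{\mathrm{UV_B}} \approx -2 \times \frac{\Delta \mathrm{O_3}}{\mathrm{O_3}} \]
so that, for example, a 5% loss of stratospheric ozone would translate into roughly a 10% increase in UV-B radiation reaching the surface.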
With the discovery of the Antarctic ozone hole and the resultant world-wide interest, publicity and concern, a historic meeting occurred in Montreal, Canada in September 1987. For the first time ever, 57 countries and organisations met to make a specific decision to limit the emissions of a series of pollutants which were likely to create major environmental problems affecting the globe in the future. The eventual document, adopted on September 16, 1987 and entitled "The Montreal Protocol", was signed immediately by 24 countries and has since been ratified by several more.
REFERENCES:
1. Jonathan Weiner, "Planet Earth", New York: Bantam Books, 1986
2. "Atmospheric Ozone, Global Ozone Research and Monitoring Project", Vol. 16, Geneva: International Organisation of Meteorology, 1985
3. Lydia Dotto and Harold Schiff, "The Ozone War", Garden City, N.Y.: Doubleday, 1978
4. John Gribbin, "The Hole in the Sky", New York: Bantam Books, 1988
5. James G. Titus, "Effect of Changes in Stratospheric Ozone and Global Climate", Vol. 2, United Nations Environmental Programme
6. G. Levi, "Ozone Depletion at the Poles", Physics Today, 1988
7. P. Bowman, "Global Trends in Total Ozone", Science, 1988
8. Hans U. Dutsch, "Vertical Ozone Distribution", International Centre for Atmospheric Research, Boulder, Colorado
f:\12000 essays\sciences (985)\Enviromental\The Beauty of Snow.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Beauty of Snow
Snowflakes have six sides, and each has a different design. Snowflakes are clear but appear white because of the light reflecting off their crystals. Snow is a mystery to many people, but after you look at it closely you will be more enlightened about it. Snow is a form of precipitation that consists of tiny pieces of frozen water bonded together. Snowfall varies tremendously across the earth. It falls all the time in the polar regions, but it occurs more heavily in mountainous regions. Snow even falls near the equator on certain high mountain tops.
The beauty of snow is that you can do many things with it. You can ski on it, sled on it, or just have a snowball fight. The beauty of snow is appreciated by some and hated by a few others. We must remember that snow is only around for a short while, until spring comes and the temperatures rise above thirty-two degrees Fahrenheit. When that spring day comes, all of that wet, beautiful, cold, fun stuff becomes just plain old water again.
f:\12000 essays\sciences (985)\Enviromental\The cause and effects of acid mine drainage.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE CAUSE, EFFECTS AND TREATMENT OF ACID ROCK DRAINAGE
INTRODUCTION
Imagine going fishing on a cool autumn day: the trees are all different shades of orange, brown and red and the birds are singing their beautiful songs, but there is a serious problem, because when you arrive at the river all plant and animal life is gone. This is by no means a recent phenomenon; it is due to the effects of acid rock drainage (ARD). This problem has been occurring since ancient times, but it was not until the fast-growing industrialization and heavy mining of the 1800s that it caught a lot of attention.
Acid rock drainage is the term used to describe leachate, seepage, or drainage that has been affected by the natural oxidation of sulfur minerals contained in rock exposed to air and water. The major components of ARD formation are reactive sulfide minerals, oxygen, and water. Biological activity and chemical reactions are responsible for the production of ARD. These reactions produce low-pH water that has the ability to mobilize heavy metals contained in the geological materials with which it comes in contact. "ARD causes a devastating impact on the quality of the ground or surface water it discharges to. (Ellison & Hutchison)"
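As a minimal sketch of the chemistry involved (these are the standard textbook pyrite-oxidation reactions usually cited for ARD, not equations taken from the essay's sources), the oxidation of pyrite in the presence of air and water can be written:
\[ 2\,\mathrm{FeS_2} + 7\,\mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe^{2+}} + 4\,\mathrm{SO_4^{2-}} + 4\,\mathrm{H^+} \]
\[ 4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 4\,\mathrm{H^+} \rightarrow 4\,\mathrm{Fe^{3+}} + 2\,\mathrm{H_2O} \]
\[ \mathrm{Fe^{3+}} + 3\,\mathrm{H_2O} \rightarrow \mathrm{Fe(OH)_3} + 3\,\mathrm{H^+} \]
The free hydrogen ions account for the acidity, and the ferric hydroxide precipitate is the orange-red deposit commonly seen on rocks and stream beds, as described below.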
ACID MINE DRAINAGE
Within the mining process there are several sources that cause ARD. No matter what activities occur, ARD usually forms when certain conditions are met; these conditions are the factors that limit or accelerate its release. The initial release of ARD can occur anywhere from a few months to many decades after the sulfide-containing material is disturbed or deposited. ARD has been associated with mines since mining began. When ARD occurs due to the effects of mining it is called acid mine drainage, or AMD. The coal mining industry here in the eastern United States has been a major source of AMD for decades. When water comes in contact with pyrite in coal and the rock surrounding it, chemical reactions take place which cause the water to gain acidity and to pick up iron, manganese and aluminum. Water that comes into contact with coal takes on an orange-red, yellow and sometimes white color. The metals stay in solution beneath the earth due to the lack of oxygen. When the water comes out of the mine or the borehole, it reacts with the oxygen in the air or oxygen dissolved in the stream and deposits the iron, manganese and aluminum on the rocks and the stream bed. Each of the chemicals in acid mine drainage is toxic to fish and aquatic insects in moderate concentrations; at very high concentrations all plant life is killed.
"Underground mines that are likely to result in ARD are those where mining is located above the water table. (Kelly 1988)" Most of the mines are also located in mountainous terrain. "Underground workings usually result in a ground water table that has been lowered significantly and permanently. (Kelly 1988)" Mining also helps in the breaking of rock exposing more surface area to oxidation.
OTHER SOURCES OF ARD
ARD is not necessarily confined to these mining activities. "Any process, natural or anthropogenic, that exposes sulfide-bearing rock to air and water will cause it to occur. (Ellison & Hutchison)" There are examples of natural ARD where springs produce acidic water. These are found near outcrops of sulfide-bearing rock, but not all exposed sulfide rock will result in ARD formation. "Acid drainage will not occur if sulfide minerals are nonreactive, the rock contains sufficient alkaline material to neutralize any acid produced, or the climate is arid and there is not adequate rainfall infiltration to cause leakage. (Ellison & Hutchison 1992)"
CHEMISTRY
"The most important factor in determining the extent of the acid mine drainage is not the pH, but the total acidity. (Ellison &Hutchison 1992)" Total acidity is a measure of the excess amount of H+ ions over other ions in the solution. A high acidity is accompanied by a low pH in AMD. This is what separates AMD from acid rain, which has a low pH and a low acidity. These differences are due to the sources of acid in different ecosystems.
A buffer, as we learned in class, "is a compound that tends to maintain the pH of a solution over a narrow range as small amounts of acid or base are added. (Rhankin 1996)" A buffer can be either an acid or a base. A low pH has serious effects on the "bicarbonate buffering system." (Kelly 1988) In low-pH solutions, carbonate and bicarbonate are converted to carbonic acid and then on to water and carbon dioxide. Because of this, the water loses its ability to buffer its pH, and plants in and around the water lose the bicarbonate they use in photosynthesis. Another effect of low pH is an increase in the rate of decomposition of clay minerals and carbonates, releasing toxic metals such as aluminum and silica. Ironically, however, aluminum silicates can aid in the "buffering" of pH.
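A sketch of the carbonate equilibria behind the "bicarbonate buffering system" described above (standard aqueous chemistry, included only to make the mechanism explicit):
\[ \mathrm{CO_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{H_2CO_3} \rightleftharpoons \mathrm{H^+} + \mathrm{HCO_3^-} \rightleftharpoons 2\,\mathrm{H^+} + \mathrm{CO_3^{2-}} \]
Acid added by mine drainage supplies extra H+, which drives these equilibria to the left, converting bicarbonate and carbonate back to carbonic acid and then to carbon dioxide and water; both the buffering capacity and the dissolved bicarbonate available for photosynthesis are lost.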
HEAVY METALS
The presence of high concentrations of heavy metals from acid mine drainage is just as much a threat to the environment as the acidity is. When sulfide is oxidized, heavy metal ions are released into the water. "The key concept in this case is the speciation of the metal, which distinguishes between the 'filterable' and 'particulate' fractions of a metal. (Kelly 1988)" Filterable means that the particles can be trapped by a filter. The particulate fraction of the metal includes solid minerals, crystals, and metals that are taken up into organisms.
The presence of heavy metals in the aquatic environment can have a serious effect on the plants and animals in an ecosystem. Plants take up the metals, and because plants are at the bottom of the food chain, these metals are passed on to animals. The animals become contaminated with the metals through eating and drinking. There are some types of algae that actually thrive in harsh metal environments, because they are not affected by the toxicity and therefore have no competition. These species include the blue-green alga Plectonema and the green algae Mougeoutia, Stigeoclonium, and Holmidium rivular (Kelly 1988). They are the exception, because there are "very few aquatic plants known to be naturally tolerant to heavy metals. (Kelly 1988)"
LAWS AND REGULATIONS
Recently, many laws and regulations have been passed to help treat and control acid mine drainage. The EPA has helped establish new limits and regulations such as no net acidity of drainage (pH between 6 and 9), an average total iron content of discharge of less than 3 mg/L, and an average total manganese content of less than 2 mg/L. Processes used now to prevent acid discharge include proper filtering equipment and drainage ponds that contain acid rock indefinitely. The most common methods of treating acid mine drainage are chemical and biological processes. (Klepper 1989)
The Appalachian Clean Stream Initiative was established by the Office of Surface Mining (OSM) and is trying to clean up acid drainage by combining the efforts of citizen groups, corporations and government agencies. The president of the OSM, Robert Uram, said, "Private organizations both grassroots and national have joined, in addition to government programs at the federal, state, and local levels."
"The most effective way to control acid generation is to prevent its initiation.(Siwik 1989)" The biggest part of the reclamation and restoration is to research into the use of peat/wetland treatment for heavy metal removal from acid mine drainage.(Siwik 1989) According to the EPA standards, many of the mines will have to be designed and operated to meet the standard of "zero discharge" from the mines.
CHEMICAL TREATMENT
Chemical treatment is the most common method used to eliminate acid drainage from abandoned underground mines. There are three major processes that do just this: "complexation, oxidation, and reduction" (Kelly 1989). Neutralization of acid water with lime is a common practice. Chemicals commonly used in neutralization techniques are lime, sodium bicarbonate and "caustic soda." Other substances that have been found to reduce acid mine drainage are bactericides, including antibiotics, detergents, heavy metals and food preservatives. Antibiotics and heavy metals are too costly and too dangerous to the surrounding aquatic life. Alconex, an inexpensive detergent, and sodium lauryl sulfate have both been found to reduce acid in mine drainage.
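As a minimal sketch of the neutralization chemistry mentioned above (sulfuric acid is taken here as the typical acid in AMD, and the equations are standard textbook chemistry rather than drawn from the essay's sources), lime or limestone neutralizes the acid as follows:
\[ \mathrm{Ca(OH)_2} + \mathrm{H_2SO_4} \rightarrow \mathrm{CaSO_4} + 2\,\mathrm{H_2O} \]
\[ \mathrm{CaCO_3} + \mathrm{H_2SO_4} \rightarrow \mathrm{CaSO_4} + \mathrm{H_2O} + \mathrm{CO_2} \]
In both cases the hydrogen ions responsible for the acidity are consumed, raising the pH toward the 6-9 range required for discharge.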
BIOLOGICAL TREATMENT
Some operators choose biological treatment for acid mine drainage. One approach is the biodegradation of a chemical into basic oxidation products such as carbon dioxide, water, and nitrogen. A particularly interesting way of treating acid mine drainage successfully, with high metal removal as well, is wetland treatment. The reason it works is that conditions in the wetland are anaerobic; the rates of decomposition and mineralization of organic matter from the wetland plants are therefore slowed, and organic matter tends to accumulate on the surface of sediments. Wetlands, therefore, can gather and transform organic material and nutrients. (Bastian 1993) Natural and constructed wetlands have been used to treat wastewater; the first constructed wetland was built in 1982, and there are over 200 systems in Appalachia alone. (Bastian 1993)
Even though biological treatment is safer for the ecosystem, it is found that at most sites chemical treatment is still necessary to meet effluent standards; however, the cost of chemical treatment is greatly reduced by the initial biological treatment. Most operators find that the costs of constructing the wetlands are made up within one year due to the money saved on chemicals.
CONCLUSION
In conclusion, acid rock drainage is a big problem throughout the world due to widespread industrialization and mining. It is not only a serious problem around the world; it touches home, especially here in Appalachia, but it seems to be coming under control with all the new regulations and standards the EPA is setting. A low pH and a high acidity level are harmful to us, our wildlife and our plants. With the help of more education and more research, it need not be a problem for our future.
f:\12000 essays\sciences (985)\Enviromental\The Effects of Dam Building.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[Error] - File could not be written...
f:\12000 essays\sciences (985)\Enviromental\The Environmental Effects Associated with Industrialization.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Study of Environmental Issues Associated with Industrialization
Although our industrial ways seem to be a very progressive step into the future, there are many flaws in the way things are done today. Things have definitely changed over the past century, as we can currently do things much more efficiently than before. The cost of this efficiency may seem small, but the cost of these new technologies includes not just money, time and labour; it also costs us our well-being as well as the beauty and comfort of our own home, the earth. Ozone depletion, climate change and the direct effects of chemicals from industrial emissions and fuel combustion are a great threat to our planet, and if nothing is done to resolve this problem soon, the results may be disastrous.
There is a layer of chemicals twenty kilometers up in the stratosphere called the ozone layer. This layer protects the inhabitants of earth by absorbing much of the sun's harmful ultraviolet (UV) rays. Without this layer above us, many living things including humans could not survive. The ozone layer is currently depleting, and this is believed to be caused by a few things. Deforestation, fertilizer use and fuel combustion are minor contributors to the problem, while chemicals such as chlorofluorocarbons (CFCs), halons, carbon tetrachloride, methyl chloroform, methyl bromide and hydrochlorofluorocarbons (HCFCs) are the major contributors to the deterioration of the ozone layer. These industrial halocarbons break up into chlorine and bromine in the upper stratosphere when they react with the sun's rays. Chlorine eats up the ozone layer while bromine acts as a catalyst and speeds up the process. Often found over Antarctica, there are frozen chemical clouds in the upper stratosphere called polar stratospheric clouds. These polar stratospheric clouds destroy the ozone layer at a much faster pace than the industrial halocarbons alone. The depletion of the ozone layer is a great threat to mankind and all other
living things on earth, because without this layer of chemicals we will be exposed to excess UV rays. This excess exposure can lead to many things such as malignant melanoma and non-melanoma skin cancer, and damage to the eyes such as snow blindness and cataracts, the clouding of the eye that can eventually lead to blindness. Above all this, excessive UV exposure can lead to symptoms similar to AIDS, as prolonged exposure could weaken the human immune system. As for plants and animals, plants may die or may not be as healthy as a result of too much UV exposure, and animals will suffer symptoms similar to those of humans. So if the ozone layer that we depend on so much is destroyed, it could be concluded that we as inhabitants of the world are also destroyed.
It is believed, but not yet proven, that we are altering the world climate by releasing chemicals into the atmosphere, through a process called "global warming" or the "greenhouse effect". Some of the chemicals believed to contribute to the greenhouse effect are carbon dioxide (CO2), nitrous oxide, halogen gases and CFCs. These chemicals cause the climate of the world to warm by trapping the sun's heat in the atmosphere, and they can last anywhere from one decade to one century. Although chemicals released by man account for only one third of the greenhouse effect, it is our contribution to the problem that will set the world off balance. It seems now that by the year 2100 carbon dioxide will double, causing global temperatures to rise anywhere between one and a half and four and a half degrees Celsius. Many people may wonder why global warming is such a problem, since humans can easily adapt to their environment. If global warming causes global temperatures to rise, we as humans will be able to cope with the change; however, plants and animals may not be able to adapt, and as a result they may die and become extinct, resulting in a break in the food chain. The ocean levels will also continue to rise, as they have been, at a pace of two to eight centimeters a decade for several more decades. In fact, if
Antarctica melts, the ocean level can rise up to sixty meters. As global temperatures rise, the world will become drier; there will therefore be more droughts and heat waves, possibly causing more fires, which again produce more CO2 and further contribute to the problem. Ocean temperatures, currents and fish habitats will also change with the climate of the world. Chemicals, however, are not only believed to heat up the world in the process of global warming; they are also the probable cause of an unexplained coolness in some parts of the world. Sulfur dioxide is a chemical that reflects sunlight, and because of this it is assumed that sulfur dioxide cools specific areas of the earth that should be warmer.
Chemicals cause a lot of indirect damage to all living things on earth; however, it is also possible, and quite frequent, for chemicals to endanger the lives of living things directly. Unintentionally inhaling chemicals is one way they can harm us directly. Carbon monoxide, when inhaled, binds to the blood's hemoglobin, prevents the necessary oxygen from reaching tissues, and can also dull mental acuity. A deadly chemical cloud at ground level called smog also endangers the health of living things. Volatile organic compounds (VOCs) and nitrogen oxides (NOx) from vehicle exhaust and industrial emissions combine to form ozone at ground level, and when sunlight reacts with this ground-level ozone, it produces smog. Ground-level ozone has been increasing by about one percent every year. As carbon dioxide emissions increase throughout the world, some plants may benefit from the increase, with the excess carbon dioxide acting as a fertilizer; for other plants, however, too much carbon dioxide may be a bad thing, causing the plant to die and possibly become extinct. The St. Clair River is now known as "Chemical Valley" because of an accident that occurred there several years ago. An industrial company near the river spilled sixty different chemicals, mixed together, into the river. This accident sterilized the river and affected much of the
agriculture around it. The Great Lakes are another example of the direct effect of chemicals on living things. There are chemicals in our bodies today that were not present back in the early 90's, and the polluted Great Lakes that we locals depend on are believed to be the cause. Animals that reproduce near the Great Lakes and rely on them are more frequently unsuccessful than before: female birds are growing crossed bills, males either are immune to this or die in the shell, and fish are being feminized because they do not have secondary sex steroids, for which chemicals from the pulp and paper industry are believed to be responsible. For humans, the sperm count in men has decreased fifty percent in the last fifty years, breast cancer has become an epidemic, males experience genital disorders and children have problems learning.
Chemicals released into the atmosphere by industry, vehicles, fertilizer use and so on can harm plant, animal and even human health; therefore, if this problem is not resolved quickly, the world we live in now could soon turn into a world of chaos. If any species of animal becomes extinct, the food chain will collapse; if any species of plant becomes extinct, the food chain will again collapse, and if that species of plant is used for any type of medication, the people who depend on that medication may also die. There are some organizations in the world that are trying to turn things around, but there are not enough people to support these groups. The general public doesn't seem to care much about this problem or is not yet aware of the issue. Even the government of Canada does not want to take action against pollution, more than likely because of budget limitations. It was concluded by Dr. Gordon McBean that "Humans have already radically altered the composition of the atmosphere and hence its radiative properties. In other words, we have quite unintentionally started a long-term, global-scale geophysical experiment with the life-support system of this planet - an experiment that we do not control and, as yet, poorly understand. That, in itself, is cause for concern."
f:\12000 essays\sciences (985)\Enviromental\The Environmental Impact of Eating Beef and Dairy Products 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
There are currently 1.28 billion cattle populating the earth. They occupy nearly 24 percent of the landmass of the planet, and their combined weight exceeds that of the earth's entire human population. Raising cows for beef has been linked to several environmental problems, and eating beef can worsen your health. The dairy industry puts in danger not only your health, through the consumption of its products, but also the lives of the cows that produce them.
There is severe environmental damage brought on by cattle ranching, including the destruction of rainforests and grasslands. Since 1960 more than 25 percent of Central America's forests have been cleared to create pastureland for grazing cattle. By the late 1970's two-thirds of all agricultural land in Central America was occupied by cattle and other livestock. More than half the rural families in Central America - 35 million people - are now landless or own too little land to support themselves. Cattle are also a major cause of desertification around the planet. Today about 1.3 billion cattle are trampling and stripping much of the vegetative cover from the earth's remaining grasslands. Each animal eats its way through 900 pounds of vegetation a month. Without plants to anchor the soil, absorb the water, and recycle the nutrients, the land has become increasingly vulnerable to wind and water erosion. More than 60 percent of the world's rangeland has been damaged by overgrazing during the past half century.
Cattle ranching has also been linked to global warming. The grain-fed-cattle complex is now a significant factor in the emission of three of the gases that cause the greenhouse effect - methane, carbon dioxide, and nitrous oxides - and is likely to play an even larger role in global warming in the coming decades. The burning of fossil fuels accounted for nearly two-thirds of the 815 billion tons of carbon dioxide added to the atmosphere in 1987; the other third came from the increased burning of forests and grasslands. When trees are cleared and burned to make room for cattle pastures, they emit a massive volume of carbon dioxide into the atmosphere. Commercial cattle ranching also contributes to global warming in other ways. With 70 percent of all U.S. grain production now devoted to livestock feed, much of it for cattle, the energy burned by farm machinery and transport vehicles just to produce and ship the feed represents a significant addition to carbon dioxide emissions. It now takes the equivalent of a gallon of gasoline to produce a pound of grain-fed beef in the United States. To sustain the yearly beef requirements of an average family of four requires the use of more than 260 gallons of fossil fuel. Finally, nitrous oxide, which accounts for 6 percent of the global warming effect, is released from fertilizer used in growing the feed, and methane, which makes up 18 percent, is emitted by the cattle.
The final victims of the world cattle complex are the animals themselves. Immediately after birth, male calves are castrated to make them more "docile", and to improve the quality of their meat. To ensure that the animals will not injure each other, they are dehorned with a chemical paste that burns out their horns' roots. Neither of these procedures is done with anesthesia.
There are about 42,000 feedlots in 13 major cattle-feeding states in the United States. The feedlot is generally a fenced-in area with a concrete feed trough along one side. In many of the larger feedlots, thousands of cattle are crowded together side by side in severely cramped quarters. To obtain the optimum weight gain in the minimum time, feedlot managers administer a variety of pharmaceuticals to their cattle, including growth-stimulating hormones and feed additives. Anabolic steroids, in the form of small time-release pellets, are implanted in the animals' ears. Cattle are given estradiol, testosterone, and progesterone. The hormones stimulate the cells to produce additional protein, adding muscle and fat tissue more rapidly. Today 80 percent of all the herbicides used in the United States are sprayed on corn and soybeans. After being consumed by the cattle, these herbicides accumulate in their bodies and are passed along to the consumer in finished cuts of beef. Beef now ranks number one in herbicide contamination and number two in overall pesticide contamination. Some feedlots now experiment with adding cardboard, newspapers, and sawdust to the feed to reduce costs. Other factory farms scrape up the manure from chicken houses and pigpens and add it directly to cattle feed. Food and Drug Administration (FDA) officials say that it is not uncommon for some feedlot operators to mix industrial sewage and oils into the feed to reduce costs and fatten animals more quickly.
Moving beyond beef in our daily diets is a personal decision, but one that has profound and far-reaching consequences. Millions of Americans and Europeans are making personal choices to move beyond beef, or at least to cut down their consumption, and this will have a significant impact on the future of our planet and humanity. Beef consumption in the United States has dropped markedly in the past 20 years, from 83 pounds per person per year in 1975 to less than 68 pounds per person per year in 1990.
Today's dairy cow has been bred to be a milk machine, producing an average of 15,557 pounds of milk a year, almost 40 percent more than her counterpart of just 16 years ago. While the undomesticated cow produced enough milk to feed her one or two calves, a dairy cow on a modern dairy farm produces about twenty times more milk than her calf needs. Excessive production demands, coupled with the trend toward confining cows indoors or in densely populated drylots (enclosures devoid of grass), have resulted in serious welfare and disease problems for the dairy cow.
The modern dairy cow is usually artificially inseminated, pumped full of hormones and growth stimulants, and super-ovulated so she can churn out more calves, faster and faster. Cows are fed a diet geared toward high production. This diet, which is heavy in grain, is fed to a species whose digestive tract is suited to roughage. High-production diets create many health problems, including severe metabolic disorders and painful lameness, which are compounded by confinement. Also, at any given time, half of U.S. dairy cattle have mastitis (a painful udder inflammation, usually caused by infection).
Today's cow is typically burned out (unable to keep up production) and sent to slaughter, for human consumption and other uses, at an average age of four years. Her natural life span would be from twenty to twenty-five years.
A recent analysis by the FDA found that meat from dairy cows and their calves was the source of 60 percent of those drug and other chemical residues found in edible meats in amounts that violated allowable limits (dairy cows are the source of the majority of processed beef and 26 percent of hamburger in the United States). The government's ability to ensure a safe milk supply has also come into question.
Despite a dairy product surplus and with cows already pushed to their limits, recombinant Bovine Growth Hormone (rBGH), a genetically engineered drug injected into dairy cows to increase milk production, has been approved for use by American dairy farmers. Embryo transfer, cloning, the creation of transgenic cows, and the engineering of cows to secrete pharmaceuticals and other substances in their milk are also under way.
Another practice growing in popularity is tail docking, the removal of about two-thirds of an adult dairy cow's tail - without the use of an anesthetic. This procedure, the rationale for which is that it keeps cows cleaner, is completely unnecessary. It also deprives the cow of her natural means of swatting flies.
Newborn dairy calves are typically taken from their mothers at birth or shortly thereafter. Some female calves are kept as replacements for cows in the dairy herd. The other calves are sent to slaughter as babies, to veal farms, or to be raised for beef. Many are sent to stockyards when only one or two days old, even before they can walk. Calves in the sale/slaughter pipeline are often transported long distances, subjected to rough handling, and exposed to numerous diseases and weather extremes. They may be given no opportunity to rest or eat. Calves destined to be slaughtered at sixteen weeks old for "milk-fed" veal spend their lives in crates so narrow that they are unable even to turn around. Denied water and solid food, they are fed a diet consisting solely of an intentionally iron-deficient milk replacement, often containing antibiotics, which they typically lap up from a bucket twice a day. Veal is a by-product of the dairy industry that owes its existence to the surplus calves delivered by ten million dairy cows every year.
Veal consumption has decreased from its peak of 3.5 pounds per capita to under one pound per capita in 1993, owing in large part to the public's refusal to purchase inhumanely produced products such as milk-fed veal.
Another by-product of the dairy industry is the downed animal - an animal who is too weak, ill, or injured to stand or walk without assistance. Burned-out dairy cows and newborn calves make up a large percentage of downed animals, who often suffer brutal treatment at livestock markets. Baby calves that cannot walk are often dragged or thrown and are trampled by other animals. Downed dairy cows are painfully dragged off trucks and across stockyards by chains or ropes tied around one leg. Both downed calves and cows are shocked with electric prods, kicked, and beaten during the transport and auction process in futile attempts to get them to move on their own. They are often left without food, water, or veterinary care, sometimes for days at a time, until they either die or are loaded onto trucks yet again for a trip to slaughter. As many as 90 percent of downed-animal cases could be prevented by simple improvements in management, handling, and transportation practices, including keeping newborn calves on the farm of their birth for a minimum of five days before sending them to market.
There are many health problems linked with eating beef and dairy products. Harvard scientists found that women who had beef, lamb, or pork as a daily main dish ran two and a half times the risk of developing colon cancer as did those who ate the meats less than once a month. The conclusions are drawn from a study of 88,751 nurses that was begun in 1980. Eating beef has also been linked to heart disease, high blood pressure, and strokes. Drinking milk has been linked to asthma, allergies, intestinal bleeding, and juvenile diabetes. Cutting dairy products out of your diet gives you a greater chance of avoiding bronchial, respiratory, and stomach problems.
Eating beef, as well as dairy products, has an extreme impact on the environment. Raising cows for beef has been linked to several environmental problems, such as global warming, and eating beef can worsen your health. The dairy industry puts not only your health in danger through the consumption of dairy products, but that of the cows who produce them as well.
f:\12000 essays\sciences (985)\Enviromental\The Environmental Impact of Eating Beef and Dairy Products.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
There are currently 1.28 billion cattle populating the earth. They occupy nearly 24 percent of the landmass of the planet. Their combined weight exceeds that of the earth's entire human population. Raising cows for beef has been linked to several environmental problems, and eating beef can worsen your health. The dairy industry puts not only your health in danger through the consumption of its products, but also the lives of the cows that produce them.
There is severe environmental damage brought on by cattle ranching, including the destruction of rainforests and grasslands. Since 1960 more than 25 percent of Central America's forests have been cleared to create pastureland for grazing cattle. By the late 1970's two-thirds of all agricultural land in Central America was occupied by cattle and other livestock. More than half the rural families in Central America, 35 million people, are now landless or own too little land to support themselves. Cattle are also a major cause of desertification around the planet. Today about 1.3 billion cattle are trampling and stripping much of the vegetative cover from the earth's remaining grasslands. Each animal eats its way through 900 pounds of vegetation a month. Without plants to anchor the soil, absorb the water, and recycle the nutrients, the land has become increasingly vulnerable to wind and water erosion. More than 60 percent of the world's rangeland has been damaged by overgrazing during the past half century.
Cattle ranching has also been linked to global warming. The grain-fed-cattle complex is now a significant factor in the emission of three of the gases that cause the greenhouse effect (methane, carbon dioxide, and nitrous oxides) and is likely to play an even larger role in global warming in the coming decades. The burning of fossil fuels accounted for nearly two-thirds of the 815 billion tons of carbon dioxide added to the atmosphere in 1987. The other third came from the increased burning of the forests and grasslands. When the trees are cleared and burned to make room for cattle pastures, they emit a massive volume of carbon dioxide into the atmosphere. Commercial cattle ranching also contributes to global warming in other ways. With 70 percent of all U.S. grain production now devoted to livestock feed, much of it for cattle, the energy burned by farm machinery and transport vehicles just to produce and ship the feed represents a significant addition to carbon dioxide emissions. It now takes the equivalent of a gallon of gasoline to produce a pound of grain-fed beef in the United States. To sustain the yearly beef requirements of an average family of four requires the use of more than 260 gallons of fossil fuel. Finally, nitrous oxide, which accounts for 6 percent of the global warming effect, is released from fertilizer used in growing the feed; and methane, which makes up 18 percent, is emitted from the cattle.
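As a rough back-of-the-envelope check of those last two figures, the arithmetic can be laid out as below. This is only a sketch: the one-gallon-per-pound equivalence and the 260-gallon family total come from the paragraph above, while the per-person beef consumption is an assumed value chosen to be consistent with the per-capita figure cited later in this essay.

    # Rough check of the fossil-fuel-per-beef figures cited above.
    # Assumptions (illustrative, not authoritative): about 1 gallon of
    # gasoline-equivalent per pound of grain-fed beef, and roughly
    # 65 pounds of beef per person per year.
    GALLONS_PER_POUND_BEEF = 1.0
    POUNDS_PER_PERSON_PER_YEAR = 65
    FAMILY_SIZE = 4

    family_pounds = POUNDS_PER_PERSON_PER_YEAR * FAMILY_SIZE
    family_gallons = family_pounds * GALLONS_PER_POUND_BEEF
    print(f"Beef for a family of four: {family_pounds} lb per year")
    print(f"Fossil-fuel equivalent: about {family_gallons:.0f} gallons per year")
    # Output: about 260 gallons, in line with the figure quoted above.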
The final victims of the world cattle complex are the animals themselves. Immediately after birth, male calves are castrated to make them more "docile", and to improve the quality of their meat. To ensure that the animals will not injure each other, they are dehorned with a chemical paste that burns out their horns' roots. Neither of these procedures is done with anesthesia.
There are about 42,000 feedlots in 13 major cattle-feeding states in the United States. The feedlot is generally a fenced-in area with a concrete feed trough along one side. In many of the larger feedlots, thousands of cattle are crowded together side by side in severely cramped quarters. To obtain the optimum weight gain in the minimum time, feedlot managers administer a variety of pharmaceuticals to their cattle, including growth-stimulating hormones and feed additives. Anabolic steroids, in the form of small time-release pellets, are implanted in the animals' ears. Cattle are given estradiol, testosterone, and progesterone. The hormones stimulate the cells to produce additional protein, adding muscle and fat tissue more rapidly. Today 80 percent of all the herbicides used in the United States are sprayed on corn and soybeans. After being consumed by the cattle, these herbicides accumulate in their bodies and are passed along to the consumer in finished cuts of beef. Beef now ranks number one in herbicide contamination and number two in overall pesticide contamination. Some feedlots now experiment with adding cardboard, newspapers, and sawdust to the feed to reduce costs. Other factory farms scrape up the manure from chicken houses and pigpens and add it directly to cattle feed. Food and Drug Administration (FDA) officials say that it is not uncommon for some feedlot operators to mix industrial sewage and oils into the feed to reduce costs and fatten animals more quickly.
Moving beyond beef in our daily diets is a personal decision, but one that has profound and far-reaching consequences. Millions of Americans and Europeans are making personal choices to move beyond beef, or at least to cut down their consumption, and this will have a significant impact on the future of our planet and humanity. Beef consumption in the United States has dropped markedly in the past 20 years, from 83 pounds per person per year in 1975 to less than 68 pounds per person per year in 1990.
Today's dairy cow has been bred to be a milk machine, producing an average of 15,557 pounds of milk a year, almost 40 percent more than her counterpart of just 16 years ago. While the undomesticated cow produced enough milk to feed her one or two calves, a dairy cow on a modern dairy farm produces about twenty times more milk than her calf needs. Excessive production demands, coupled with the trend toward confining cows indoors or in densely populated drylots (enclosures devoid of grass), have resulted in serious welfare and disease problems for the dairy cow.
The modern dairy cow is usually artificially inseminated, pumped full of hormones and growth stimulants, and super-ovulated so she can churn out more calves, faster and faster. Cows are fed a diet geared toward high production. This diet, which is heavy in grain, is fed to a species whose digestive tract is suited to roughages. High-production diets create many health problems, including severe metabolic disorders and painful lameness, which are compounded by confinement. Also, at any given time, half of U.S. dairy cattle have mastitis (a painful udder inflammation, usually caused by infection).
Today's cow is typically burned out (unable to keep up production) and sent to slaughter, for human consumption and other uses, at an average age of four years. Her natural life span would be from twenty to twenty-five years.
A recent analysis by the FDA found that meat from dairy cows and their calves was the source of 60 percent of the drug and other chemical residues found in edible meats in amounts that violated allowable limits (dairy cows are the source for the majority of processed beef and 26 percent of hamburger in the United States). The government's ability to ensure a safe milk supply has also come into question.
Despite a dairy product surplus and with cows already pushed to their limits, recombinant Bovine Growth Hormone (rBGH), a genetically engineered drug injected into dairy cows to increase milk production, has been approved for use by American dairy farmers. Embryo transfer, cloning, the creation of transgenic cows, and the engineering of cows to secrete pharmaceuticals and other substances in their milk are also under way.
Another practice growing in popularity is tail docking, the removal of about two-thirds of an adult dairy cow's tail, without the use of an anesthetic. This procedure, the rationale for which is that it keeps cows cleaner, is completely unnecessary. It also deprives the cow of her natural means of swatting flies.
Newborn dairy calves are typically taken from their mothers at birth or shortly thereafter. Some female calves are kept as replacements for cows in the dairy herd. The other calves are sent to slaughter as babies, to veal farms, or to be raised for beef. Many are sent to stockyards when only one or two days old, even before they can walk. Calves in the sale/slaughter pipeline are often transported long distances, subjected to rough handling, and exposed to numerous diseases and weather extremes. They may be given no opportunity to rest or eat. Calves destined to be slaughtered at sixteen weeks old for "milk-fed" veal spend their lives in crates so narrow that they are unable even to turn around. Denied water and solid food, they are fed a diet consisting solely of an intentionally iron-deficient milk replacement, often containing antibiotics, which they typically lap up from a bucket twice a day. Veal is a by-product of the dairy industry that owes its existence to the surplus calves delivered by ten million dairy cows every year.
Veal consumption has decreased from its peak of 3.5 pounds per capita to under one pound per capita in 1993, owing in large part to the public's refusal to purchase inhumanely produced products such as milk-fed veal.
Another by-product of the dairy industry is the downed animal, an animal who is too weak, ill, or injured to stand or walk without assistance. Burned-out dairy cows and newborn calves make up a large percentage of downed animals, who often suffer from brutal treatment at livestock markets. Baby calves that cannot walk are often dragged or thrown and are trampled by other animals. Downed dairy cows are painfully dragged off trucks and across stockyards by chains or ropes tied around one leg. Both downed calves and cows are shocked with electric prods, kicked, and beaten during the transport and auction process in futile attempts to get them to move on their own. They are often left without food, water, or veterinary care, sometimes for days at a time, until they either die or are loaded onto trucks yet again for a trip to slaughter. As many as 90 percent of downed-animal cases could be prevented by simple improvements in management, handling, and transportation practices, including keeping newborn calves on the farm of their birth for a minimum of five days before sending them to market.
There are many health problems linked with eating beef and dairy products. Harvard scientists found that women who had beef, lamb, or pork as a daily main dish ran two and a half times the risk of developing colon cancer as did those who ate the meats less than once a month. The conclusions are drawn from a study of 88,751 nurses that was begun in 1980. Eating beef has also been linked to heart disease, high blood pressure, and strokes. Drinking milk has been linked to asthma, allergies, intestinal bleeding, and juvenile diabetes. Cutting dairy products out of your diet gives you a greater chance of avoiding bronchial, respiratory, and stomach problems.
Eating beef, as well as dairy products, has an extreme impact on the environment. Raising cows for beef has been linked to several environmental problems, such as global warming, and eating beef can worsen your health. The dairy industry puts not only your health in danger from consuming dairy products, but that of the cows who make them as well.
f:\12000 essays\sciences (985)\Enviromental\The EPA Can It Will it Save Our Environment .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Pollution of our environment is an issue that concerns each and
every one of us. "The threat of environmental degradation now looms
greater than the threat of nuclear war." Patrick Henry said, "I know no
way of judging the future but by the past." In the past man has trampled
on the environment.
"The word 'ecology' means 'a study of home.'" It means
discovering what damage man has done, then finding ways to fix it. The Environmental
Protection Agency is trying to fix our home, the planet Earth.
Destruction of forests, land degradation, atmospheric
contamination, and water scarcity are some of the major environmental
problems. In 1970, the EPA was created by President Nixon to protect the
public health and the environment. DDT, a cancer-causing pesticide
found to be accumulating in the food chain, was banned in 1972. The
use of lead in gasoline was phased out in '73, which caused lead
levels to drop 98%. In '74 the agency required drinking water to be
physically and chemically treated. CFCs were banned in '78, and a
nation-wide toxic waste site cleanup program was developed in 1980.
The EPA then evacuated Times Beach, Missouri, because of dangerous
levels of dioxin in the soil, and was criticized for its
heavy-handedness and arrogance. Charges of
mismanagement and undue political influence caused the head of the EPA to
resign in '83. "The deputy director resigns because of charges of making
a 'hit list' of employees to be hired, fired, or promoted because of
political leanings. The former head of the toxic waste cleanup is found
guilty of perjury and obstructing congressional inquiry. A regulation
requiring treatment of hazardous wastes before disposal underground was
made in 1984." The spill of the Exxon Valdez caused the Environmental
Protection Agency to be ctiticized for slow response in '89. Texas
Eastern Gas Pipeline was fined $15 million for the contamination of PCB
at 89 sites in '90. They were also required to pay $750 million in
cleanups. "The EPA then develops the new Clean Air Act which required
states to demonstrate progress toward meeting national air quality
standards for harmful pollutants such as smog and carbon monoxide." The
EPA issued a report in 1990 ranking the most serious threats to the
environment and to human health. The highest-risk problems to human
health are air pollution, exposure to toxic chemicals, and pollution of
drinking water. In '91, Exxon Corporation and Exxon Shipping paid $25
million in fines, the U.S. and Alaskan governments received $100
million, and an estimated $900 million settlement fund was also
established. In '93, the EPA announced that secondhand smoke can cause
cancer, a finding that tobacco industry representatives called
inconclusive. The Clinton administration then doubled the list of
chemicals that must be publicly reported under community
right-to-know laws in 1994. A Republican-controlled Congress then
proposed cutting the agency's budget by $1 billion, to the level it
had been 15 years earlier.
"The Environmental Protection Agency has made the country a
better place for people to live," according to EPA Administrator Carol
Browner. But notice other comments that have been made about the
agency. "The federal EPA is enmeshed in political controversy and a
struggle for its very existence." Since its creation in December 1970,
it has been embroiled in one drama after another, both environmental
and political. "Congress distrusts it, businesses hate it, and even
its friends criticize it."
The EPA has made a commendable effort at trying to protect our
environment. For instance, its work to reduce indoor pollution, seen
in the designation of non-smoking areas and the restriction of
cigarette ads on television, has helped people in general. The
recycling effort, the disposal of toxic wastes, and the passage of
laws to protect our environment have been beneficial; however,
special interest groups and lobbyists have made the agency's job
difficult. The controversy and turmoil in and out of the agency,
though, certainly indicate that the EPA alone is not capable of
solving our environmental problems.
f:\12000 essays\sciences (985)\Enviromental\The Evils of Hunting.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hunting is an unnecessary activity in the modern world. Slob hunting is the way many so-called sportsmen hunt these days. The phrase slob hunting refers to indiscriminate assault on animals, whatever their type. This type of recreation is harmful and undesirable in the United States for three reasons: 1) It reduces the number of areas available for tracking animals, 2) It upsets the natural balance, causing many species to have their ranks drastically reduced, and 3) It can be unnecessarily cruel to animals.
Hunting on private lands is one of the best ways to hunt, because the game is plentiful, and there is challenge to the sport. However, when land-owners encounter a slob hunter on their lands, they are much less likely to let anyone hunt their land in the future. When a person wishes to hunt on a certain person's land, s/he is often denied the privilege, after the proprietor of the land has had one negative experience with some other hunter. This can make hunting a much harder sport to participate in, even for the people who are hunting for their livelihood. (Satchell 30)
Over the years, hunting has reduced the animal population drastically. In the 1970's, the number of ducks making annual flights was approximately 91.5 million. In 1995, the number had been reduced to around 64 million. Within 20 years, in short, the duck population was reduced by almost one third, showing the drastic toll hunting is taking on our wildlife. If we assume that other species have been reduced in number at approximately the same rate in recent years, then what are the larger implications for our ecological balance? If this trend continues, by the year 2055, the members of species which are hunted could be reduced by as much as 81 percent. (Satchell 31)
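As an illustration of how such a projection might be computed, here is a small sketch that assumes the decline compounds at the same proportional rate in each 20-year period. The constant-compounding assumption is this sketch's own, not the cited source's, and the resulting percentage depends heavily on it.

    # Illustrative extrapolation of the duck-population decline, assuming
    # the same proportional drop repeats every 20 years. This only shows
    # the compounding mechanism; the essay's 81 percent figure rests on
    # its own (unstated) assumptions.
    ducks_1970s = 91.5e6
    ducks_1995 = 64.0e6

    decline_factor = ducks_1995 / ducks_1970s        # about 0.70 per 20 years
    periods_to_2055 = (2055 - 1995) / 20             # three more 20-year periods
    projected_2055 = ducks_1995 * decline_factor ** periods_to_2055

    print(f"Drop per 20-year period: {1 - decline_factor:.0%}")
    print(f"Projected 2055 population: {projected_2055 / 1e6:.1f} million")
    print(f"Reduction from the 1970s level: {1 - projected_2055 / ducks_1970s:.0%}")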
Not only does hunting reduce the number of animals, it can also be unnecessarily cruel to the creatures. When deer are bowhunted, they often are not instantly killed. Most deer will then suffer a painful and lingering death, as only 50 percent of deer struck are retrieved and put out of their misery. This is unnecessary cruelty; only an end to slob hunting would bring this to a halt. (Satchell 32)
In the modern era, with advanced technologies, we have found ways of killing large numbers of animals painlessly. Although some people continue to hunt, it is needless because it can lessen the area available for hunting, reduce the populations of many animals, and be pointlessly cruel to them. These three evils humanity can live without, which makes slob hunting an ecologically unsound throwback to a less enlightened age.
f:\12000 essays\sciences (985)\Enviromental\The Greenhouse Effect 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Greenhouse Effect
Our world is suffering, and it is suffering from something people call the
"Greenhouse" effect. The greenhouse effect is caused by humans over pollution of the
earth. If we do not stop this soon the earth will "die".
We have caused this through many years of over-industrialization in this growing
world. We think that bigger is better, so we make vehicles bigger and better, and we
make pretty much everything else bigger as well. So we make larger factories to build
these larger things, and these larger factories release larger amounts of pollution.
Scientists have been predicting the outcome of this change for years now, but they
do not all agree on the same outcome, and the ones that do agree think it will come at a
different time. So people are still uncertain about what is going to happen, and when it is
going to take place.
To understand what the greenhouse effect is, we must first understand what a
greenhouse is. A greenhouse is a building made either of clear plastic sheets or of
glass. The sun's rays go through the glass, and heat up the air inside the building, and
they have a hard time getting out. These rays get trapped inside the building, and
continually heat the air inside, and even through the night the rays stay in and heat the
air. The greenhouse is also called a "HOT HOUSE" because it gets so hot.
The greenhouse effect is caused by gases such as carbon monoxide, carbon
dioxide, and nitrogen escaping into the atmosphere. These gases get trapped in the
atmosphere and do not let the sun's rays escape very easily. This causes the earth to warm up.
This warming can cause droughts, and this would really affect the farmers. This heating
up could cause plants and animals to die.
f:\12000 essays\sciences (985)\Enviromental\the greenhouse effect.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE GREENHOUSE EFFECT
This essay is going to describe what the greenhouse effect is and what it does. It is also going to say what causes the greenhouse effect and the consequences of it.
What is meant by the term the greenhouse effect is that the heat from the sun comes into the Earth's atmosphere and cannot get out, so becomes trapped. It gets its name because this is very much like a greenhouse. This effect occurs as the incoming short wave radiation is changed when it hits the Earth's surface into long wave or infra-red radiation. Heat energy in this form is then absorbed and stored in water vapour and carbon dioxide in the atmosphere.
Many different things cause the greenhouse effect. The amount of carbon dioxide in the atmosphere is increased by 0.4 percent each year because of the massive consumption of fossil fuels such as oil, coal, and natural gas. Another contributing factor is the amount of forest logged; every second of the day, an area of trees the size of a football field is cleared by either being logged or burnt. Two other deadly greenhouse gases which are entering the atmosphere even faster than carbon dioxide are methane and chlorofluorocarbons, although they are not as damaging in the long run.
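To see what that 0.4 percent annual increase compounds to over the century discussed below, here is a minimal sketch. The starting concentration is left as an index value and the growth rate is simply the figure quoted above; this illustrates compounding only, and is not a climate model.

    # Compound growth of atmospheric carbon dioxide at 0.4 percent per year.
    import math

    rate = 0.004                               # 0.4 percent per year
    factor_100yr = (1 + rate) ** 100           # growth factor after a century
    doubling_time = math.log(2) / math.log(1 + rate)

    print(f"Increase after 100 years: {factor_100yr - 1:.0%}")        # about 49 %
    print(f"Doubling time at this rate: {doubling_time:.0f} years")   # about 174 years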
These increases are likely to affect worldwide temperatures dramatically. In 100 years time the average temperature for most parts of the world will increase by between 2°C and 6°C if greenhouse pollution continues at its present rate. This temperature increase would drastically affect the growth of many different crops and cause the polar ice caps to melt, thus causing sea levels to rise by up to several metres. If this rise in sea level were to occur, many areas would be much more prone to flooding, and generally much deeper floods than would be expected nowadays. This flooding would happen particularly around coastal regions worldwide, and also along many rivers that flow to or from coastal inlets.
The greenhouse effect is very important because it leads to rising water levels and temperatures which can have a large effect on the world's climate. Changes in the climate can lead to significant changes in agricultural industries. This can lead to good farmland becoming deserts, make it too hot for some crops to grow or let pests and diseases thrive. As a result some countries can suffer bad economic conditions and people will have to move. Some low lying countries such as those on the Pacific islands or Holland could have a lot of flooding and a lot of money would need to be spent building sea walls and levee banks to stop flooding. For these reasons it is very important to carefully manage the greenhouse effect by reducing the amount of carbon dioxide going into the air and planting more trees.
f:\12000 essays\sciences (985)\Enviromental\The Lake.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Lake
It was the middle of springtime, and the incident took place across
from my house. There was a lake there which my brother and I
loved to explore from time to time. The humidity and waterdrops were
reminiscent of a fully functional sauna. The onslaught of heat and
burning glow of the sun was relentless. Nonetheless, this fact did not
bother us one bit, but gave us more incentive to dance with our cool and
embracing "long-lost love".
The first step of this operation was making sure that our neighbors
had gone away from the house for at least two hours. Since it was their
lake and property, this kept us from getting caught in the
middle of our escapade. With that, my brother and I snuck to their
backyard like two undercover police officers, until we were in the clear.
Nerve-wracking minutes later, the emerald green and ever-so-lively
lake flowed in front of us. We stopped and stared in awe. The lake appeared
so shiny and reflective, it resembled a finely-cut diamond. The rare and
distinct fragrance enticed us. It smelled like mother-nature herself,
with aromas ranging from wildlife and wet grass, to evaporated swamp
water and healthy dirt.
Then, the time for us to find the desired vessel arrived. We chose
the kayaks, and set out for the water. Carefully, with our torn-jeans
rolled up, and shirts off, we dragged the massive thing over the slope of
grass and mud into the shallow stream. We then hopped aboard, grabbed the
paddles, and floated and splashed into nowhere. The wavy current sucked
us downstream, periodically bouncing us off of sandbags and sharp
branches leaning over the water- Now that was true adventure! Minutes
later, my brother and I, after passing under many pipes and tunnels,
floated into a huge "cul de sac" of water, with an island in the center.
In our amazement, we paddled there as vigorously as toddlers learning to
swim. We tied the kayaks to a thin branch with the slimy green rope
mysteriously attached to them, and hopped onto the island. We basked in
pure amazement.
After the tempo settled, we started our natural brotherly routine.
My brother and I sat on the muddy bank, with our feet dipped in water,
and threw stones out as far away as we could in our competitive nature.
We set aside our differences, and together, bonded. My newfound companion
and I sat, laughed, fought, played, and talked, as the sun slowly left
us.
At this point it did not matter what happened to us for taking the
kayaks, because whatever it was, it could not replace the priceless
experience we shared with one another.
f:\12000 essays\sciences (985)\Enviromental\The Population Growth Rate in India.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Eco 228
The Population Growth Rate in India
For many years concern has been voiced over the seemingly unchecked rate of population growth in India, but the most recent indications are that some success is being achieved in slowing the rate of population growth. The progress which has been achieved to date is still only of a modest nature and should not serve as premature cause for complacency. Moreover, a slowing of the rate of population growth is not incompatible with a dangerous population increase in a country like India which has so huge a population base to begin with. Nevertheless, the most recent signs do offer some occasion for adopting a certain degree of cautious optimism in regard to the problem.
One important factor which is responsible for viewing the future with more optimism than may previously have been the case has been the increase in the size of the middle class, a trend which has been promoted by the current tendency to ease restrictions on entrepreneurship and private investment. It is a well-known fact that as persons become more prosperous and better educated they begin to undertake measures designed to limit the size of their families. (The
obvious exception would be families like the Kennedys who adhere to religious strictures against artificial birth control, but the major Indian religions have traditionally lacked such strictures.) Ironically, the state of Kerala, which had long had a Communist-led government, had for many years represented a population planning model because of its implementation of programs fostering education and the emancipation of women. The success of such programs has indicated that even the poorer classes can be induced to think in terms of population control and family planning through education, but increased affluence correspondingly increases the pressure for the limitation of family size, for parents who enjoy a good life want to pass it on to their children under circumstances where there will be enough to go around. In contrast, under conditions of severe impoverishment there is not only likely to be a lack of knowledge of family planning or access to modes of birth control, but children themselves are likely to be viewed as an asset. Or, perhaps one might more accurately say with regard to India, sons are viewed as an asset. We will have more to say later about the relationship between gender and population growth, but here we may make the obvious point that if a family seeks sons it may also have to bring into the world some "unwanted" daughters, thereby furthering the trend towards large families. Under conditions of severe impoverishment, attended as it has traditionally been by high childhood mortality rates, "it has been estimated for India that in order to have a 95 per cent probability of raising a son to adulthood, the couple had to have at least six children."
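The logic behind that six-child estimate can be sketched as follows. The childhood-survival probability used here is a hypothetical value chosen so the arithmetic roughly reproduces the quoted figure; it is not a number taken from the cited source.

    # Sketch of the "six children for a 95% chance of a surviving son" logic.
    # s is an assumed probability that a child survives to adulthood,
    # chosen only for illustration.
    s = 0.80
    p_surviving_son = 0.5 * s        # a given child is a boy AND survives

    n = 1
    while (1 - p_surviving_son) ** n > 0.05:   # probability of no surviving son
        n += 1
    print(f"Children needed for a 95% chance of a surviving son: {n}")  # prints 6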
In general, direct efforts on the part of government to promote family planning have had only limited success in India. In large part this has been due to the factors which have traditionally operated in Indian culture and society to promote large families, of which more will be said later. Here, however, it might be noted that the most common family planning modes have proven difficult to implement under Indian conditions. Where government efforts are concerned, "for mass consumption only three methods are...advocated: sterilization (vasectomy for fathers and tubectomy for mothers), IUDs and condoms." Sterilization has traditionally met with strong resistance among uneducated sectors of the population who associate it with loss of virility or femininity, and, often being irrevocable, it has been a source of understandable concern in a society where couples who may already have several children risk losing some or all of them as a result of such factors as epidemics, earthquakes, or floods. Resistance to sterilization has traditionally been strongest among men, with Chandrasekhar suggesting that the prevalence of tubectomies as opposed to vasectomies serves as an indication that "women are becoming increasingly aware of the problem and want to solve it without waiting for their husbands to decide on vasectomy."
In regard to the IUD, which has been promoted since its introduction in India in 1963, the method has not proven popular because of the relative frequency of excessive bleeding and, though more infrequent, involuntary expulsion. Taking note of the fact that in traditional Indian
society gynecology, obstetrics and other fields requiring intimate contact and conversation with women are invariably reserved to female doctors only, Chandrasekhar observes that "the real problem is the lack of sufficient numbers of dedicated women physicians who are willing to work in rural areas and spend some time in pre-insertion and post-insertion follow-up of their patients." The third major mode of contraception, condom use, has seen a marked increase in usage in India in recent years; however, much of this increase has been due less to family planning concerns than to fear of AIDS on the part of sexually active persons, such as prostitutes and their clients, who could be expected to take precautions against pregnancy anyway. As for the pill, it still has not proven a major contraceptive mode among the uneducated masses who are most inclined to have large families.
In addition to long-recognized family planning modes, other factors have been operating to limit the rate of population growth in recent years. Unfortunately, infanticide of girl babies has become increasingly commonplace in India, perhaps because the growth in materialism has led the lower classes to become more and more aware of the "undesirability" of girls. While the Hindu emphasis upon dowry, which can have the effect of impoverishing a family with many daughters, is no doubt a significant contributing factor, it should be pointed out that population figures for Pakistan and Bangla Desh would suggest a prevalence of infanticide of girl babies in these nations as well, despite the fact that under Islam there has traditionally been no dowry at marriage but,
instead, a so-called "bridal price" paid by the family of the groom. Thus, indications are that Muslims throughout the subcontinent have accepted the Indian cultural presumption that girl babies are undesirable even though under Islam the bride's parents theoretically stand to benefit financially. Mahmood Mamdani notes that, in regard to India, "the preferential treatment of male over female clearly shows in the much higher infant death rate among females and in the resulting higher ratios of males over females in general population," adding that "in most other parts of the world, females of a general population have lower death rates than males." Indeed, except for the Arab oil countries of the Persian Gulf, which offer employment to large numbers of unmarried men from other areas of the Middle East, the only other countries which display a population ratio significantly in favor of males on the Indian pattern are Pakistan and Bangla Desh, where, as has already been noted, the infanticide of female babies presumably also prevails.
In addition to the elimination of girl babies, either through outright murder or through the denial of the food and care traditionally given to boys, abortion, on the basis of amniocentesis, has been another means of population control where girl babies are concerned. As in the case of infanticide, the authorities have been largely powerless to restrict the practice, abortion being for the most part legal in India even though the use of amniocentesis for the purpose of aborting a healthy female baby is theoretically against the law. Another means of reducing the number of "unwanted" girl babies is abandonment to charitable organizations under circumstances where adoption will
result. The anonymous abandonment of children to charitable agencies is another practice that is illegal but impossible for the government to prevent, for the agencies understandably hesitate to refuse to accept a child from a parent apparently intent on abandonment for fear that infanticide will then be resorted to by such a parent. And, although Indian law requires that an adoption agency give priority to placement with families within India, the relative paucity of Indian couples seeking to adopt children ensures that virtually all babies given up for adoption will find homes in the affluent industrialized countries of the West.
We have therefore seen that, while the rate of India's population growth has been slowing, some of the measures adopted to this end are far from ideal. To ensure that comprehensive family planning programs find widespread acceptance, considerably more progress needs to be made in raising the standard of living of the Indian masses, for "although the wealthier, better-educated urban families do curtail their fertility, the poor have not had the means or motivation to do so." "Most important, perhaps," writes John Cool, is the fact that thousands of years of Indian experience have shaped cultural values and social institutions which encourage the survival of the family and the community through high fertility. Modernization is slowly changing this situation, but much more progress will be needed to ensure success.
Bibliography
Chandrasekhar, S. Abortion in a Crowded World: The problem of abortion with special reference to India (Seattle: University of Washington Press, 1974).
Franda, Marcus F. (ed.). Response to Population Growth in India: Changes in Social, Political, and Economic Behavior (New York: Praeger, 1975).
Bahnisikha. The Indian Population Problem: A Household Economics Approach (New Delhi: Sage Publications, 1990).
Mandelbaum, David G. Human Fertility in India: Social Components and Policy Perspectives (Berkeley: University of California Press, 1974).
f:\12000 essays\sciences (985)\Enviromental\The Population Problem.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Two hundred years ago, Thomas Malthus, in An Essay on the Principle of Population, reached the conclusion that the number of people in the world will increase exponentially, while the ability to feed these people will only increase arithmetically (21). Current evidence shows that this theory may not be far from the truth. For example, between 1950 and 1984, the total amount of grain produced more than doubled, much more than the increase in population in those 34 years. More recently though, these statistics have become reversed. From 1950 to 1984, the amount of grain increased at 3 percent annually. Yet, from 1984 to 1993, grain production had grown at barely 1 percent per year, a decrease in grain production per person of 12 percent (Brown 31).
Also strengthening Malthus' argument is the projection that the world population will increase to over 10 billion by 2050, two times what it was in 1990 (Bongaarts 36). Demographers estimate that 2.8 billion people were added to the world population between 1950 and 1990, an average of 70 million a year. Between 1990 and 2030, it is estimated that another 3.6 billion will be added, an average of 90 million a year (Brown 31). Moreover, in the 18th century, the world population growth was 0.34%; it increased to 0.54% in the 19th century and in the first half of the 20th century to 0.84% (Weiskel 40).
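Malthus's contrast between geometric and arithmetic growth can be made concrete with a small sketch. The growth rates and starting index values below are illustrative assumptions, not figures from the sources cited above; the point is only that a compounding population eventually outpaces any food supply that grows by a fixed amount per year.

    # Geometric (compounding) population growth versus arithmetic (linear)
    # growth in food production, using arbitrary index values.
    population = 100.0      # index, year 0
    food = 100.0            # index, year 0
    pop_growth = 0.02       # 2 percent per year, compounding
    food_growth = 2.0       # fixed units added per year

    for year in range(0, 101, 25):
        print(f"year {year:3d}: population {population:6.1f}, food {food:6.1f}, "
              f"food per person {food / population:4.2f}")
        for _ in range(25):
            population *= 1 + pop_growth
            food += food_growth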
Neo-Malthusians base their arguments on the teachings of Thomas Malthus. Of the Neo-Malthusians, Garrett Hardin is one of the most prominent and controversial. Hardin's essays discuss the problem of overpopulation and the effects it will have on the future. In Lifeboat Ethics, he concludes that continuous increases in population will have disastrous outcomes.
Neo-Malthusian arguments come under much scrutiny by those who believe that the population explosion is only a myth. Those who hold these beliefs state that the evidence Neo-Malthusians use to justify their views is far from conclusive. Critics hold that the Neo-Malthusian call for authoritarian control is much too radical. Thus, these critics belittle the theories of Neo-Malthusians on the basis that population is not a problem.
However radical Hardin's theories may be, current evidence shows that he may not be too far off the mark. It is hardly debatable that the population has increased in the past few decades, for current statistics show that this actually is the case. Equally revealing is the fact that vast amounts of land are being transformed into more living space. More people means more waste, more pollution, and more development. With this taken into consideration, it seems that Hardin's teachings should no longer fall on deaf ears.
When discussing the issue of population, it is important to note that it is one of the most controversial issues facing the world today. Population growth, like many other environmental issues, has two sides. One side will claim that the population explosion is only a myth, while the other side will argue that the population explosion is reality. Because of this, statistics concerning this subject vary widely. But, in order to persuade, it is necessary to take one side or the other. Thus, statistics may be questioned as to their validity, even though the statistics come from credible sources.
Lifeboat Ethics
The United States is the third most populous country in the world, behind only China and India. Unlike China and India, though, the United States is the fastest growing industrialized nation. The United States' population expands so quickly because of the imbalance between emigration and immigration, and between births and deaths. For example, in 1992, 4.1 million babies were born. Weighing this statistic against the number of deaths and the number of people who entered and left the country, the result was that the United States gained 2.8 million more people than it lost (Douglis 12).
Population increases place great strain on American society and, more particularly, cause tremendous destruction to the natural environment. For example, more than half of the wetlands in the United States are gone, and of all of the original forest cover, 90 percent has been destroyed. This depletion has caused the near extinction of over 796 individual plant and animal species. At least part of the year, the air that over 100 million people breathe is too dirty to meet federal standards. And finally, almost 15 million people are subject to polluted water supplies (Douglis 12). It is very likely that total destruction of the environment can take place and probably will if something is not done to curb the population growth.
When discussing Hardin's essays it is necessary to confront the problem of immigration. Immigration is responsible for approximately 40 percent of the population growth in the United States (Douglis 12). The United States now accepts more immigrants than all other developed countries combined (Morganthau 22). It is estimated that approximately one million immigrants from all over the world are making the United States their new home each year (Mandel 32). This estimate does not include illegal immigration, which makes this total even greater (McKenna 336).
It is obvious that immigrants have a much better life in the United States than in their previous homes. Immigrants come to the United States to benefit from the United States' economy, and return to their original homes with more money. Take, for example, a quote from a Malaysian immigrant working illegally in the United States:
"If you take one dollar back to Malaysia, it is double the value. You work here to earn U.S. dollars so you can greatly improve your living standard in Malaysia." (Mandel 32)
While immigrants benefit themselves by coming to the United States, they leave natural born Americans competing for jobs.
By 2050, it is estimated that the population of the United States will be close to 383 million. Of this, approximately 139 million, or 36 percent, will be immigrants and their children. This will make Americans of European descent, which in 1960 were an 89 percent majority, a minority of less than 50 percent (Brimelow 42).
Immigration poses great threats to the national economy, and costs taxpayers millions of dollars every year. Studies show that post-1970 immigrants, legal and illegal, used $50.8 billion of government services in 1992. Subtracting the $20.2 billion they paid in taxes, the difference, which American taxpayers had to make up, was $30.6 billion. These figures, averaged out, account for $1,585 for every immigrant. Over the next ten years, it is estimated that an additional $50 billion in American tax money will go toward supporting immigrants (Thomas 19).
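The study's figures can be cross-checked with simple arithmetic, as in the sketch below. The implied count of post-1970 immigrants is derived here purely for illustration; it is not a number stated in the cited study.

    # Arithmetic check of the 1992 immigration-cost figures cited above.
    services_used = 50.8e9       # dollars of government services used
    taxes_paid = 20.2e9          # dollars of taxes paid
    net_cost = services_used - taxes_paid
    cost_per_immigrant = 1585    # dollars per immigrant, as cited

    implied_immigrants = net_cost / cost_per_immigrant
    print(f"Net cost to taxpayers: ${net_cost / 1e9:.1f} billion")          # $30.6 billion
    print(f"Implied post-1970 immigrant count: {implied_immigrants / 1e6:.1f} million")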
According to Garrett Hardin's idea of Lifeboat Ethics, continuing to add to the population of the United States will create many hardships. In order to bring the population within a reasonable number, Hardin suggests population control. Like other Neo-Malthusians, he states that this can only be accomplished under authoritarian government. Under authoritarian control, couples would no longer be able to receive private benefits from reproduction while they pass the costs of their fertility on to society (Chen 88). He claims that individual rights--particularly reproductive rights--are too broad. He argues that population control cannot be achieved with birth control alone. Birth control simply gives the person the choice of when to have children and how many to have (Chen 90). Thus, in order to attain a stable population, the right to reproduce freely can no longer be allowed.
Hardin begins his argument by noting that poor countries have a GNP of approximately $200 per year, while rich countries have a GNP of nearly $3,000 a year. Thus, there are two lifeboats: one full of equally rich people, the other disastrously overcrowded with poor people. Because of the overcrowding in the poor lifeboats, some people are forced into the water, hoping eventually to be admitted onto a rich lifeboat where they can benefit from the "goodies" on board. This is where the central problem of "the ethics of a lifeboat" becomes a primary issue. What should the passengers on the rich lifeboat do (Hardin 223)?
First, Hardin notes that the lifeboat has a limited carrying capacity, which he designates at 60. Fifty people are already aboard the lifeboat, leaving room for 10 more. He also notes that the 10 empty spaces should be left empty in order to preserve the safety factor of the boat. Assuming there are 100 swimmers waiting to be taken aboard, what happens next (Hardin 223)?
Hardin suggests three solutions. The first is to allow all 100 people to board the lifeboat. This would bring the total passengers of the lifeboat to 150. Because the boat only has a capacity of 60, the safety factor is destroyed, and the boat becomes overcrowded. Eventually the lifeboat sinks and everyone drowns. In Hardin's words, "complete justice, complete catastrophe" (Hardin 224).
The second solution is to allow only 10 more people on the boat, abolishing the safety factor, but keeping the boat from becoming too overcrowded. The problem with this solution though is which swimmers to let in, and what to say to the other 90 left stranded in the water (Hardin 224).
The final solution is to allow no one in the boat, thus greatly increasing the chances of survival for the 50 passengers already on board. This solution, to many of the passengers, would be wrong, for they would feel guilty about their good luck. Hardin offers a simple response: Get out and give up your seat to someone else. Eventually, if all of the guilt ridden people relinquish their seats, the boat would be guilt free and the ethics of the lifeboat would again be restored (Hardin 224).
Hardin next argues the issue of reproduction. He notes that populations of poor nations double every 35 years, while the populations of rich nations double every 87 years. To put it in Hardin's perspective, consider the United States a lifeboat. At the time Hardin wrote his essay, the population of the United States was 210 million and the average rate of increase was 0.8% per year, that is, doubling in number every 87 years (Hardin 225).
Even though the populations of rich nations are outnumbered by the populations of poor nations by two to one, consider, for example, that there are an equal number of people on the outside of the lifeboat as there are on the lifeboat (210 million). The people outside of the lifeboat increase at a rate of 3.3% per year. Therefore, in 21 years this population would be doubled (Hardin 225).
If the 210 million swimmers were allowed onto the lifeboat (the United States), the initial ratio of "Americans" to "Non-Americans" would be one to one. But, 87 years later, the population of "Americans" would have doubled to 420 million, while the "Non-Americans" (doubling every 21 years) would now have increased to almost 3.5 billion. If this were the case, each "American" would have more than 8 other people to share with (Hardin 225).
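Hardin's doubling figures follow directly from compound growth. The short sketch below reproduces them; the logarithmic doubling-time formula is a standard shortcut and not part of Hardin's essay.

    # Doubling-time arithmetic behind Hardin's lifeboat example.
    import math

    def doubling_time(rate):
        """Years needed to double at a given annual compound growth rate."""
        return math.log(2) / math.log(1 + rate)

    print(f"0.8% per year doubles in about {doubling_time(0.008):.0f} years")   # ~87
    print(f"3.3% per year doubles in about {doubling_time(0.033):.0f} years")   # ~21

    # Project both groups of 210 million over 87 years, as Hardin does.
    years = 87
    americans = 210e6 * (1 + 0.008) ** years
    others = 210e6 * (1 + 0.033) ** years
    print(f"After {years} years: {americans / 1e6:.0f} million Americans versus "
          f"{others / 1e9:.2f} billion others, about {others / americans:.0f} to 1")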
Immigration causes more problems than those discussed by Hardin. It causes social friction and the decline of English-speaking Americans (Morganthau 22). As more and more immigrants pour into American cities, they collectively will feel no need to learn the English language. If one city becomes a majority of immigrants rather than a majority of natural-born Americans, tension is the result. This tension will result in societal separatism, which will finally lead to political separatism (James 340).
There are many arguments that focus on the benefits of immigration: arguments that immigration creates jobs, that it promotes a diverse culture, and even that it may produce the next Einstein. These arguments, that the United States should not close its borders, come primarily from those people who claim that the United States is a melting pot. If the United States continues to live by the words inscribed on the Statue of Liberty, it is destined to create more bad than good, not only socially and politically, but also environmentally.
Arguments for immigration tend to miss the primary problem that immigration causes: the environmental problem. Immigration means more people. More people give rise to the need for more living space which in turn leads to destruction of the environment. Even though immigration may be beneficial in some ways, the United States must protect its national identity, and even more importantly, it must protect what land it has left.
Failure to close the doors to immigrants will continually increase environmental, economic, and societal problems in America. Without proper legislation, these problems will never be solved. Although America is the land of opportunities, the environment must not be taken for granted. For if it is, disaster is inevitable.
Conclusion
The Book of Genesis tells the story of the creation of man. God said to man, "Be fruitful and increase in numbers; fill the earth and subdue it." Prior to the nineteenth century, it was believed that God would provide for those who came into the world (Day 101). But, in 1798, this view was shaken by Thomas Malthus' An Essay on the Principle of Population, in which he concluded that while population increases geometrically, agricultural production only increases arithmetically. Thus, eventually, food production will not be able to keep up with an increasing number of people. The question is, which theory can be justified?
Those who say that we always have room for more people fall into the category of those who feel that the Bible justifies increases in population. What these people fail to understand is that when more people are added, the standard of living decreases. Those who say that living space is nearly infinite may be correct in their beliefs. The question is, which is more desirable: the maximum number of people at the lowest standard of living--or a smaller number of people at a comfortable standard of living (Hardin 58)?
In order to further exemplify how increasing population decreases the standard of living, consideration should be given to a study done by the National Institute of Mental Health. The study was done to show the negative effects of overpopulation (Calhoun 6). This study shows what the world has to look forward to if Garrett Hardin and Thomas Malthus are correct.
Four male and four female mice were placed in an eight-foot-square cage. The eight mice were not subject to problems they may have faced in the outside world. In two years the eight mice turned into 2,200 mice. During this time, the effects of overcrowding had become apparent, as not one newborn mouse had survived in the two-year testing period. Finally, after two years and three months, the final mouse (a female) died (Calhoun 6).
During the experiment, various abnormalities were considered related to the overcrowding. Once the carrying capacity of the cage was reached (620), strange things started to occur. Aggressiveness and cannibalism overcame some of the mice. Sexual activities became perverted. Some mice became excessively active, while others became "passive blobs of protoplasm" (Calhoun 6).
One of the experimenters stated the implications of the study. He noted that the mice were subject to a perfect universe, free from disease, weather, etc. The mice progressed and took advantage of their ideal habitat, but only until they ran out of room. The abnormalities of the mice became so predominant that even after the mouse population returned to its original carrying capacity (620), there was nothing that could be done to alter their behavior. Before all of the mice died some were taken out and placed in a new environment, left to freely reproduce again. This resulted in failure though, as all of the offspring soon died. In conclusion, the study showed that the situation of the mouse population would grow worse until the animals destroyed their entire world (Calhoun 6).
If this experiment would hold true for the human race, it is time (maybe even past time) to make some changes. Either way, the earth is not to be taken for granted. No longer can natural resources be used as if there is an infinite supply. Even if there is an infinite supply (and one may never know), sustainability remains the best way to ensure that natural resources are used in the most effective manner. But if natural resources are not infinite, the future of human survival is in jeopardy.
Works Cited
Bongaarts, John. "Can the Growing Population Feed Itself?" Scientific American, March 1994, pp. 36-43.
Brimelow, Peter, and Joseph E. Fallon. "Controlling our Demographic Destiny." National Review, 21 February 1994, p. 42.
Brown, Lester R. "The Earth is Running Out of Room." USA Today Magazine, January 1995, pp. 30-32.
Calhoun, John B. "Not by Bread Alone: Overcrowding in Mice." Man and the Environment. Dubuque, Iowa: William C. Brown Company Publishers, 1971.
Chen, Lincoln C. "A New Modest Proposal." Issues in Science and Technology, November 1993, pp. 88-92.
Day, Henry C. The New Morality: A Candid Criticism. London: Heath Cranton Limited, 1924.
Douglis, Carole, and Gaylord Nelson. "Images of Home." Wilderness, Fall 1993, pp. 10-23.
Hardin, Garrett. Stalking the Wild Taboo. Los Altos, California: William Kaufmann, Inc., 1978.
Hardin, Garrett. The Limits of Altruism: An Ecologist's View of Survival. London: Indiana University Press, 1977.
James, Daniel. "Close the Borders to all Newcomers." Taking Sides: Clashing Views on Controversial Political Issues. Ed. George McKenna and Stanley Feingold. 9th ed. Guilford, CT: Dushkin Publishing Group, Inc., 1995.
Malthus, Thomas Robert. An Essay on the Principle of Population. Ed. Phillip Appleman. New York: W.W. Norton & Company, Inc., 1976.
Mandel, Michael J., and Christopher Farrell. "The Price of Open Arms." Business Week, 21 June 1993, pp. 32-35.
Morganthau, Tom. "America: Still a Melting Pot?" Newsweek, 9 August 1993, pp. 16-23.
Thomas, Rich, and Andrew Murr. "The Economic Cost of Immigration." Newsweek, 9 August 1993, pp. 18-19.
Weiskel, Timothy C. "Can Humanity Survive Unrestricted Population Growth?" USA Today Magazine, January 1995, pp. 38-41.
f:\12000 essays\sciences (985)\Enviromental\The Potential Effects of a Depleted Ozone Layer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Potential Effects of a Depleted Ozone Layer
" And God said, let there be light and there was light and then God saw the light, that
it was good " ( Genesis 1: 3-4 ). Undoubtedly, light is good. Without light man could not
survive. Light is the ultimate cosmic force in this universe allowing man to progress and
flourish. In the form of heat, light from the sun warms the Earth. Light, also, is the single
most important factor influencing the growth and development of plants. Photosynthesis, a
process by which plants incorporate light from the sun, allows plants to grow
and survive. Certain forms of light are harmful and thus can be said to be 'bad'. A natural
umbrella called the ozone layer protects the Earth and its inhabitants by screening out this
harmful light. For " millions of years ozone has been protecting the earth " by absorbing
ultraviolet or bad radiation from the sun ( Rowland, 1992, p.66 ). This natural umbrella
protecting mankind has recently suffered the effects of industrialized society. This " ozone
shield is dissipating " and the cause is laid primarily to man - made chemicals
( Bowermaster et al, 1990, p.27 ). If enough of these man - made chemicals are released,
" the ozone layer would be weakened to such an extent that it does not filter out the sun's
invisible and dangerous ultraviolet rays " ( Jones, 1992, p.36 ). Such a scenario would
drastically alter society and the environment. Ozone depletion has been described as
" potential catastrophe " and " a planetary time - bomb " ( Way, 1988, p.9 ). The four main
areas affected by a depleted ozone layer and thus by the corresponding increase in harmful
ultraviolet radiation are agriculture, wildlife, the environment, and human health. A
depleted ozone layer has a profoundly negative and potentially devastating effect on
humanity and its surroundings.
From an agricultural perspective, a diminished ozone layer poses great risks. Since
man's evolution from 'man the hunter and gatherer' to 'man the food producer', mankind
has grown ever more dependent on his surroundings. In the case of food production man
relies greatly on these surroundings. The land on which man attempts to grow food for
himself, and certainly for others as well, has sufficed for thousands of years. The crops
grown on his land have provided thousands with food to eat in the ancient world, millions
with food to eat in the medieval world, and billions with food to eat in the present world.
Regrettably, there have always been times of hunger and shortages. More frighteningly, in
the present world man is confronted with a population boom which is burgeoning near the
six billion mark. It is now more important than ever to protect, maintain, and hopefully
increase the amount of food grown. One of the drawbacks of industrialization has been the
significant depletion of the ozone layer. This depletion could have an incredibly
devastating impact on the world and more specifically agriculture. In general, " plants are
quite sensitive and fragile when confronted with ultraviolet increases " ( Zimmer, 1993,
p.28 ). Words such as sensitivity and fragility only add to the urgency of the possible
agricultural holocaust. One agricultural scientist remarked, " soybeans, tomatoes, tobacco,
potatoes, corn, beans, and wheat are all especially sensitive to UV light " ( Jones, 1992,
p.39 ). Since most of the mentioned crops are considered cash crops, the economic aspect
of lower crop yields could also spell disaster. Food supplies are surely in jeopardy when
taking into account that " more than two - thirds of the plant species - mainly crops -
tested for their reaction to ultraviolet light have been found to be damaged by it " ( Lean
et al, 1990, p.97 ). An increase in ultraviolet light radiating towards plants accelerates the
pace at which man must decide what to do with the dilemma of a booming and, more
importantly, hungry population. Admittedly, plants, like any element of life, have been
known to adapt to contemporary and dangerous changes in their surroundings, but it cannot
be dismissed that " UV radiation can also mutate the genes of plants " which are the
fundamental building blocks of all life ( Bowermaster et al, 1990, p.44 ). Interference with
the foundations of life can also lead to calamity, and more importantly to an as yet unforeseen and
unknown calamity. In 1988, then U.S. Interior Secretary Donald Hodel " proposed
coping with ozone depletion by simply wearing sunglasses and hats ", but what Hodel
failed to understand is that plants lack the ability to wear such human-like possessions
( Bowermaster et al, 1990, p.31 ). With an ever-increasing population it is critical to act
or react to the ozone depletion saga in mankind's midst. More importantly, there are, and
foreseeably will be, even more widely heralded environmental issues which need to be addressed.
The ozone depletion story can be seen as a warning sign to humanity, exposing the fact that
the earth can only endure a certain amount of hardship before it will surrender to the
onslaught of industrial might. One author explains the gravity of the situation by pointing
out, " There's only one atmosphere and once that is gone who knows " ( Cox, 1994,
p.546 ). Agriculturally, a depletion in the ozone layer could lead to economic and societal
ruin for many.
In addition to having a profound potential effect on agriculture, a depleted ozone layer
affects wildlife in the same indiscriminate manner. Since ozone depletion leads to increases
in harmful UV light, it comes as no surprise that this 'bad' light would affect the various
forms of life on Earth other than plants. Marine life is currently the most affected by
increases in UV light associated with ozone depletion. " There has been speculation that
this UV could cause a population collapse in the marine food chain, especially in
phytoplankton " ( Zimmer, 1993, p.28 ). Phytoplankton are free-floating aquatic plants
which " are the mainstay of the oceanic food chain " ( Lemonick, 1992, p.43 ). Concerning
phytoplankton, " it has been shown through laboratory experiments that UV-A and UV-B
do indeed inhibit phytoplankton photosynthesis " ( Zimmer, 1993, p.28 ). Since
phytoplankton occupy such a strategic position in the aquatic food chain, interference with
phytoplanktic photosynthesis affects the growth, development, and reproductive aspects
of all marine life. Scientists agree that " right now, the lowest levels of life are being hit
hardest " by the increase in ultraviolet light ( Rowland, 1992, p.36 ). If the lowest levels of
marine life, being phytoplankton, are oppressed by increases in UV light, species relying
on the phytoplankton for sustenance cannot be far behind in suffering the effects of a
ravaged food chain. One of the species which relies on the phytoplankton is krill,
shrimplike vegetarians of the seas, which in turn are a principal source of sustenance for
whales and the like. If krill were to be harvested as a food resource for mankind, it has
been said that " a krill harvest would provide us with the same amount of food as 10% of
the global annual fish catch " ( Boisseau, 1987, p.4 ). Clearly, if krill were
more accessible to man, rather than being mostly confined to polar waters, another
principal food resource could be added to man's long list of them. Another important
feature involved in a decline in phytoplankton numbers and productivity is the fact that
" phytoplankton helps produce and recycle the world's oxygen supply " ( Bowermaster et
al, 1990, p.40 ). An increase in ultraviolet light can thus endanger an entire ecosystem
without necessarily killing off the masses. By altering the respiratory balance in an
ecosystem a variety of species would be affected. Furthermore, the same oxygen recycled
by phytoplankton is breathed by all animals and man himself, thus adding to the importance
of the threatened oceanic food chain. A weakened ozone umbrella could also have a
tremendous impact on wildlife.
Moreover, beyond its devastating impact on crops and animals, a diminished
ozone layer has been associated with broader environmental damage and concern. The potential
effect on the earth's climate systems and weather is another negative aspect joined at the
hip with a weakened ozone shield. The ozone layer is located in the stratosphere " 15 - 50
km above the earth's surface " and plays a key role in the development of weather patterns
( Boisseau, 1987, p.7 ). " When stratospheric ozone intercepts UV light, heat is generated.
This heat helps create stratospheric winds, the driving force behind weather patterns "
( Lemonick, 1992, p.42 ). By changing the amount of ozone in the atmosphere, through
man - made chemical interference, the regular wind patterns are affected. Ultimately, " a
diminished ozone layer will help heat up the atmosphere, adding to the threat of global
warming " ( Bowermaster, 1990, p.33 ). Convincingly, climatologists have noted that,
" Weather patterns have already begun to change over Antarctica " ( Lemonick, 1992,
p.42 ). " Virtually all the CFCs and halons that have ever been released are still in the
atmosphere " ( Jones, 1992, p.39 ). This means that all the potent ozone destroyers which
indirectly cause an increase in harmful ultraviolet light are still in the atmosphere
accomplishing their chemically destructive tasks. Moreover, this destructive process will
continue in the sky for the CFCs' and halons' " atmospheric lifetime of between 70 and 150
years " ( Brune et al, 1992, p.38 ). The changing weather patterns and global warming will
continue to exist as long as this ozone depletion is still occurring. Ozone replenishes itself
naturally but " it will take the entire 21st century to return to pre - CFC levels "
( Rowland, 1992, p.67 ). Ozone destruction has left an indelible mark on the atmosphere
and will continue to do so for at least another century. The depletion of the ozone layer
has a potentially catastrophic effect on the environment.
Furthermore, a diminished ozone layer adds yet another entry to man's already long list of
health concerns. Man continually strives to better his health and tries desperately to
stave off his self-acknowledged mortality. One of the many health concerns brought to
light in the wake of the ozone depletion story is cataracts. A cataract is a condition in
which the lens of the eye deteriorates, causing blurred vision and even blindness. Statistics
show that " if the ozone layer is depleted by 1%, 100,000 people worldwide would be
blinded " ( Brune et al, 1992, p.39 ). In addition to higher rates of cataracts, rates of skin
cancer have also been linked to increased ultraviolet light in recent years. " On a
population wide - basis the connection between ultraviolet exposure and an increased risk
of skin cancer have been established beyond question " ( Cox, 1994, p.546 ). Admittedly,
some of the recent increases in skin cancer rates can be attributed to the growth in
popularity and fascination with tanning and sun bathing but another, and more convincing
statistic states, " In the 1930s, Canadians had one chance in 3500 of getting melanoma. In
the 1990s, the chance is one in 100 " ( Brune et al, 1992, p.38 ). Most forms of skin
cancer are not serious, but " melanoma is fatal in only 20 percent of the cases "
( Rowland, 1992, p.66 ). One doctor simplifies the matter by noting, " Increased UV
radiation has a negative effect on all biology " ( Boisseau, 1987, p.8 ). 'All biology' here refers
to all life, regardless of size, type, or location. Another negative
effect of increased ultraviolet radiation is its link to immunological drawbacks. According
to the World Resources Institute, " A diminished ozone layer may also make people more
vulnerable to a variety of infectious diseases like malaria " ( Bowermaster et al, 1990,
p.31 ). Scientists agree that they " already know that ultraviolet light can impair immunity
to infectious diseases in animals " ( Lemonick, 1992, p.41 ). Since it has been determined
that immune effects have occurred in animals, it is not preposterous to assume a
similar effect could befall humanity, which, according to Darwinists, is genetically and
historically related to the animal kingdom. Immunological processes are carried out at the
cellular level, just as any other life process is, supporting the notion that " ultraviolet light
carries enough energy to damage DNA and thus disrupt the working of the cells "
( Lemonick, 1989, p.41 ). Furthermore, a U.N. research team stated that increases in
ultraviolet light " speed up the onset of the AIDS virus " ( Brune et al, 1992, p.32 ).
Ultraviolet light reduces immune efficiency by " suppressing the production of antibodies,
helping cancers to be established and grow and increasing the susceptibility to herpes and
leishmaniasis " ( Lean et al, 1990, p.97 ). A suppressed immune system is just one more of
many health concerns linked conclusively to a depleted ozone layer and the resultant UV
increases. The medical ramifications of increased UV light are another effect linked to the
ecological 'lit fuse' we call ozone depletion.
Ozone layer depletion carries a potentially catastrophic cargo of harmful ultraviolet
light for mankind and the planet Earth. Agriculture, wildlife, the environment and
human health are all aspects of the planet Earth which are affected by a dramatic loss in
atmospheric ozone stability. In the name of progress and societal advancement, mankind
has released millions of tonnes of potent ozone destroyers in the last sixty years. The
immediate scientific result of a depleted ozone layer is an increase in the amount of
harmful ultraviolet light which reaches the Earth's surface. Historically, mankind has
endured atrocity, calamity, and ferocity. In terms of the environment, it too has endured.
Environmental endurance is tested regularly for the benefit of society. What makes man
man is his ability to survive and repair the damage he has done. Al Gore defined man's
relationship with the sky in posing the frightening question, " What will it do to our
children's outlook on life if we have to teach them to be afraid to look up? " ( Lemonick,
1992, p.40 ). If the ozone layer can be freed from the clutches of chemical villainy, only
then can it be truly said once again " let there be light " and not worry about the
consequences.
Works Cited
Boisseau, Peter R. " The Mysterious Threat. " Probe Post, Spring 1987, 10: 1-9.
Bowermaster, Jon, and Will Steger, eds. Saving the Earth. New York City: Alfred A.
Knopf Inc., 1990.
Brune, Nick, and Bob Fisher, eds. Disappearing Ozone: Danger in the Sun? Toronto:
Canadian Broadcasting Corporation, 1992.
Cox, Gary. " The Ozone Hole. " Consumer Reports Aug. 1994: 546.
Jones, David. " Ozone. " Earthkeeper Oct./ Nov. 1992: 36 - 46.
Lean, Geoffrey, Don Hinrichsen, and Adam Markham, eds. Atlas of the Environment.
New York City: Prentice Hall Press Inc., 1990.
Lemonick, Michael D. " Deadly Danger in a Spray Can. " Time 2 Jan. 1989: 41.
Lemonick, Michael D. " The Ozone Vanishes. " Time 17 Feb. 1992: 40 - 44.
Rowland, F. Sherwood. " Northern Exposure. " People 20 Apr. 1992: 66 - 68.
Way, David. " Twilight on the Ozone. " Probe Post, Winter 1988, 10: 3-6.
Zimmer, Carl. " Son of Ozone Hole. " Nature Oct. 1993: 28 - 30.
f:\12000 essays\sciences (985)\Enviromental\The recent Negative effect of technology on society.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Roy Kantrowitz Mr. Ingram
English 101/105 Report
"The Recent Negative Effect of Technology on Society"
Ever since the Industrial revolution, technology has been changing at a fast pace.
People always want a better lifestyle; therefore, there is always something new
arising to help humans cope with their physical environment. One of the most important
breakthroughs for technology was the agricultural system. The agricultural system was
the basis for the technology of the future. The agricultural system brought on the need
for transportation, workers, and even battles over land. The need for transportation
brought vehicles into the market. The need for employees brought mechanical robots
into society. Battles over land brought on the need for sophisticated weapons. The
agricultural system brought on a revolution. The invention of the television brought
media and other forms of entertainment into the house with video and audio combined.
Before 1950, newspapers and radio were the only ways to bring media or entertainment
into the house. Mass production and other job opportunities brought many people from
the suburbs and farms into the city. We can now have electricity delivered into
our houses for heating and light. Humans are more reliant on technology than ever
before. All of these technological advances sound great; however, there is a negative
side to all this technology. Technology can serve to actually harm humans rather than
help them. Competition between companies or even cities can sometimes make lives for
humans even worse. Take for example when a city builds better and more roads to
attract tourists. This actually creates more traffic, not less. Technology also changes our
sense of common purpose. New inventions such as the personal computer and machines
can change our lifestyles. Even things we take for granted, such as the automobile, have
negative effects on society. The oil needed for a car to run must be imported, and
sometimes accidents such as the Exxon Valdez incident spill millions of gallons of oil into
the ocean. All of these examples show how technology has negative effects on society.
First, competition can lead to a negative effect of technology. When a company
in the U.S. produces shoes and a company in Great Britain produces shoes as well, they
must fight for their market share. Let's say the company in Great Britain purchases more
machines that reduce the number of workers needed and improve output; it can then
reduce the price of its product. If the company in New York doesn't follow in
their footsteps then they could be forced out of business. In this case the company is
forced into buying the machines just so they can stay in business. This has a negative
effect on the employees who will be replaced by the new machines. When a city tries
to attract tourists by building better roads to lessen traffic, it makes a mistake, because this
will only create more traffic, since more people will want to travel these
roads. McManus says the inability to see the future is responsible for the negative effects
of new technologies. He also states that better roads cause more traffic congestion, not less.
By creating better roads, more people will want to travel these roads (A-1). If New York
City built a new sophisticated highway to attract more tourists then more New Yorkers
will want to travel these roads as well. Many New Yorkers who previously used mass
transit to travel to work will now want to use a car to travel to work. In effect there will
be more traffic and more pollution. There will be other side effects as well. Real estate
values of areas near the highway could go down. Competition can help a community in
one aspect; however, it can hurt it as well. Competition can directly stimulate the
economy; however, long-term effects such as pollution and the loss of jobs could explain
why the City of New York doesn't complete a project like this.
Second, technology can change our sense of common purpose. For millions of
years, mankind has been used to doing everything for himself. For a long time
people's main concern was survival. To survive meant going out into the woods or
forests and hunting animals for the food which the family needed to eat for the day. People
of modern society never think about hunting for food or clothes. Now, it is all brought to
people instantly through a new standard of survival. The new standard for survival
means making money to go to a mall or supermarket and getting everything a family
needs. A family can get food and clothing at these places without ever having to go into
a forest or a lake. This thought is ever so frightening. When a person from modern
society goes into a supermarket and buys a pound of fish, he or she doesn't even think of
the process that went into the arrival of that piece of fish. He or she didn't need to go to
a lake; all that was needed was to drive to the local supermarket and buy it. No fishing
or hunting was necessary. Humans are losing their sense of common purpose. "But what
'revenge effect' will this have? The technology- resistance movement begins by pointing
out that we are cobbling together virtual communities while our real cities crumble, at
least partly because our sense of common purpose has frayed. Today, only about 5
percent of American households are on-line, but what happens, the critics wonder, when
half the country is wired? Will we escape the unpleasant complications of the world
outside our locked doors by opting for communities in 'cyberspace,' where we can enjoy
the company of people who share our interests and our views? Where the streets never
need to be cleaned and you don't have to keep an eye on your neighbor's house? What
happens if the sirens outside become too distracting? Will we simply buy insulated
drapes? (Reed 46)." Humans are getting lazy. Almost everything must be done for them
in advance. However, sometimes this change in lifestyle is forced upon humans. When a
company decides to buy robots to do the job that man once did, then the human is forced
into either getting fired or watching the machine all day long. Hopefully humans will not
get used to watching a robot do all the work for them. Technology has definitely
changed the lifestyle and common purpose of many humans.
Finally, technology we take for granted such as the automobile can have a negative
"domino chain effect" on society. The automobile must have been one of the greatest
inventions in the last 100 years. It has helped the United States to grow in ways never
imaginable before. It allowed people to move out of highly congested cities and move
into more peaceful neighborhoods. Yet it also let people feel as if they were still a short
drive away from the city. However, the negative effects of the automobile were not
thought of in those days. Now it is clear what negative effects the automobile has on
society.
In recent years, though, we have begun to realize just how much
these ways of making life easier are costing us--or future
generations. Think about everything that's involved in the act of
driving a car. First, the metal in the body of the car has to be
mined. The plastic on the dashboard and other places probably
came from petroleum. (Plastics can also be made from coal or
natural gas.) Petroleum also becomes the gasoline that powers the
car. Extracting the petroleum involves wells and refineries all
over the world. Tankers carry the oil across the seas. Sometimes,
as in the case of the 1989 Exxon Valdez incident in Alaska's
Prince William Sound, they spill it. Spills can kill birds, fish, and
marine mammals by the thousand. But major tanker spills
account for only 12 percent or so of the oil that enters the sea in a
typical year. The rest--less dramatic but more
destructive--comes from routine operations such as loading and
unloading of tankers. Cleaning up after spills and urban pollution
is a daunting task, but it can be done. (Herring 19)
Automobiles are a good example of how technology can backfire on this world.
Automobile exhausts are polluting the atmosphere so much that the next big
technological advancement should be to find a way to heal the environment. However,
healing the environment sounds like a great plan, except that the demand for the use of
automobiles, planes and trains keeps rising. As the demand rises, there will be a need to
find enough oil and petroleum to run these modes of transportation. Soon, there will
not be enough of these resources left and there will be severe counter-effects on society
if all of these modes of transportation are taken away. Humans take for granted these
modes of transportation. Humans often live miles away from their place of business.
No transportation means there is no way to get to work unless you switch to a job
within walking range of your house. No transportation means no money and eventually
no food. The reliance on technologies we take for granted is also a negative effect of
technology.
In conclusion, society has recently seen the negative effects of technology.
Competition between cities and companies has taken away jobs and brought unwanted
and costly projects into pleasurable areas. A change in lifestyle among almost every
human being is yet another negative effect of technology. What has happened to
people since supermarkets came to town? People do not want to hunt for food
anymore. They find it much easier to walk into a store and purchase it. A third reason
why technology has a negative effect on society is the advent of highly reliant
possessions such as the automobile. Many people count on traveling to work everyday
by car. If the car was somehow taken away from people then there would be chaos. It
is much too late to take it away. Humans are much too reliant on it. There is not
enough mass transit to transport all of the present car users. Hopefully, future
technologies will be fully considered. We must look at the advantages and
consequences and measure whether society will benefit or suffer from the technology. Past
technologies weren't fully considered, and had they been, there is a chance that the
automobile never would have gone into production.
f:\12000 essays\sciences (985)\Enviromental\The selection of a Landfill Site 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Selection of a Landfill Site
There is currently much debate on the desirability of landfilling particular wastes, the practicability of alternatives
such as waste minimisation or pre-treatment, the extent of waste pre-treatment required, and the most appropriate
landfilling strategies for the final residues. This debate is likely to stimulate significant developments in landfilling
methods during the next decade. Current and proposed landfill techniques are described in this information sheet.
Types of landfill
Landfill techniques are dependent upon both the type of waste and the landfill management strategy. A commonly used
classification of landfills, according to waste type only, is described below, together with a classification according to landfill
strategy.
The EU Draft Landfill Directive recognises three main types of landfill:
Hazardous waste landfill
Municipal waste landfill
Inert waste landfill
Similar categories are used in many other parts of the world. In practice, these categories are not clear-cut. The Draft Directive
recognises variants, such as mono-disposal - where only a single waste type (which may or may not be hazardous) is deposited
- and joint-disposal - where municipal and hazardous wastes may be co-deposited in order to gain benefit from municipal
waste decomposition processes. The landfilling of hazardous wastes is a contentious issue and one on which there is not
international consensus.
Further complications arise from the difficulty of classifying wastes accurately, particularly the distinction between
'hazardous'/'non-hazardous' and of ensuring that 'inert' wastes are genuinely inert. In practice, many wastes described as 'inert'
undergo degradation reactions similar to those of municipal solid waste (MSW), albeit at lower rates, with consequent
environmental risks from gas and leachate.
Alternatively, landfills can be categorised according to their management strategy. Four distinct strategies have evolved for the
management of landfills (Hjelmar et al, 1995), their selection being dependent upon attitudes, economic factors, and
geographical location, as well as the nature of the wastes. They are Total containment; Containment and collection of leachate;
Controlled contaminant release and Unrestricted contaminant release.
A) Total containment
All movement of water into or out of the landfill is prevented. The wastes and hence their pollution potential will remain largely
unchanged for a very long period. Total containment implies acceptance of an indefinite responsibility for the pollution risk, on
behalf of future generations. This strategy is the most commonly used for nuclear wastes and hazardous wastes. It is also used
in some countries for MSW and other non-hazardous but polluting wastes.
B) Containment and collection of leachate
Inflow of water is controlled but not prevented entirely, and leakage is minimised or prevented, by a low permeability basal
liner and by removal of leachate. This is the most common strategy currently for MSW landfills in developed countries. The
duration of a pollution risk is dependent on the rate of water flow through the wastes. Because it requires active leachate
management there is currently much interest in accelerated leaching to shorten this timescale from what could be centuries to
just a few decades.
C) Controlled contaminant release
The top cover and basal liner are designed and constructed to allow generation and leakage of leachate at a calculated, controlled rate. An environmental assessment is always necessary to ensure that the impact of the emitted leachate is acceptable. No active leachate control measures are used. Such sites are only suitable in certain locations and for certain wastes. A typical example would be a landfill in a coastal location, receiving an inorganic waste such as bottom ash from MSW incineration.
D) Unrestricted contaminant release
No control is exerted over either the inflow or the outflow of water. This strategy occurs by default for MSW, in the form of dumps, in many rural locations, particularly in less developed countries. It is also in common use for inert wastes in developed countries.
Options C and D might be considered unacceptable in some European countries.
Landfill techniques
Landfill techniques may be considered under seven headings:
location and engineering
phasing and cellular infilling
waste emplacement methods
waste pre-treatment
environmental monitoring
gas control
leachate management
1) Location and engineering
Site specific factors determine the acceptability of a particular landfill strategy for particular wastes in any given location. In theory an engineered total containment landfill could be located anywhere for any wastes, given a high enough standard of engineering. In practice, the perceived risk of containment failure is such that many countries restrict landfills for hazardous wastes, and perhaps for MSW, to less sensitive locations such as non-aquifers and may also stipulate a minimum unsaturated depth beneath the landfill. In other cases, acceptability is dependent on the results of a risk assessment that examines the impact on groundwater quality of possible worst-case rates of leakage.
For the controlled contaminant release strategy, the characteristics of the external environment in the location of the landfill, particularly its hydrogeology and geo-chemistry, are integral components of the system. As such they need to be understood in more detail than for any other strategy.
An environmental impact assessment (EIA) is essential and it must include estimation of the maximum acceptable rates of leachate leakage. This estimation will determine the degree of engineered containment necessary for the base liner and top cover and any associated restrictions on leachate head within the landfill.
The principal components of landfill engineering are usually the containment liner, liner protection layer, leachate drainage layer and top cover. The most common techniques to provide containment are mineral liners (eg clay), polymeric flexible membrane liners (FMLs), such as high density polyethylene (HDPE), or composite liners consisting of a mineral liner and FML in intimate contact. Other materials are also in use, such as bentonite enhanced soil (BES) and asphalt concrete.
Approximately 20 years experience has now accumulated in the installation of engineered liners at landfills but there remains uncertainty over how long their integrity can be guaranteed, and some disagreement as to the suitability of particular liner materials for the containment of hazardous wastes and MSW, and the gas and leachate derived from them.
At landfills with engineered containment it is necessary to make provision for collection and removal of leachate. Often it is necessary to restrict the head of leachate to minimise the rate of basal leakage. Head limits are typically set at 300-1000mm leachate depth. This usually requires the installation of a drainage blanket. This is a layer of high voidage free-draining material such as washed stone, over the whole of the base of the landfill, to allow leachate to flow freely to abstraction points. Drainage blankets are necessary because the permeability of waste such as MSW is usually too low, after compaction, to conduct leachate to abstraction points while maintaining the leachate head below the stipulated maximum. The hydraulic conductivity of MSW can fall to less than 10^-7 m/s in the lower layers of even a moderately deep landfill. Under greater compaction, values as low as 10^-9 m/s have been measured, which is of a similar magnitude to that of mineral liner materials.
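As a rough illustration of why the leachate head limit matters, the short Python sketch below applies Darcy's law (flux = conductivity x hydraulic gradient) to estimate basal leakage through a compacted clay liner. It is not part of the original information sheet; the liner conductivity, liner thickness and cell area are assumed values chosen only to sit within the ranges quoted above.

def basal_leakage_m3_per_day(k_liner_m_s, head_m, liner_thickness_m, area_m2):
    # Hydraulic gradient across a saturated liner: (leachate head + liner thickness) / thickness
    gradient = (head_m + liner_thickness_m) / liner_thickness_m
    darcy_flux_m_s = k_liner_m_s * gradient   # specific discharge through the liner (m/s)
    return darcy_flux_m_s * area_m2 * 86400   # scale by cell area and convert seconds to days

# Assumed example: a 1.0 m clay liner at 1e-9 m/s, a 0.3 m leachate head (the lower end of
# the 300-1000 mm limits quoted above) and a 1 ha (10,000 m2) cell.
print(basal_leakage_m3_per_day(1e-9, 0.3, 1.0, 10000))   # roughly 1.1 m3/day

On these assumed figures, allowing the head to rise to 1 m increases the leakage to roughly 1.7 m3/day, which is why head limits and free-draining blankets are specified together.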
For the controlled release strategy the most critical engineered component is the top cover, whose function is to control the rate of leakage by restricting the rate of leachate formation. In any given location, percolation through the top cover is a complex function of several factors, namely:
slope
the hydraulic conductivity of the barrier layer
the hydraulic conductivity of the soils or materials placed above the barrier layer
the spacing of drainage pipes within the soil layer
Mineral barrier layers are typical for this application. They may also be used for total containment sites, where FMLs or even
composite liners have also been used for the top cover. A review of mineral top cover performance (UK Department of the
Environment, 1991) found that percolation ranged from zero up to ~200mm/a. To obtain very low percolation rates, protection
of the barrier layer from desiccation was necessary, drainage pipes should be at a spacing of not greater than 20m, and the
ratio of the hydraulic conductivity in the barrier layer to that in the soil or drainage layer above it should be no greater than 10^-4.
Under northern European conditions, protection of the barrier layer from desiccation would typically require on the order of
~900mm of soil material. Under hotter, drier conditions, a greater depth might be needed.
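The two rule-of-thumb criteria above - drainage pipes spaced no more than 20m apart, and a barrier-to-drainage-layer conductivity ratio no greater than 10^-4 - can be written as a simple design check. The Python sketch below is illustrative only; the example values are assumptions, not figures from the UK Department of the Environment review.

def cover_meets_criteria(k_barrier_m_s, k_drainage_m_s, pipe_spacing_m):
    # Criterion 1: barrier layer far less permeable than the soil or drainage layer above it
    ratio_ok = (k_barrier_m_s / k_drainage_m_s) <= 1e-4
    # Criterion 2: drainage pipes spaced no more than 20 m apart
    spacing_ok = pipe_spacing_m <= 20.0
    return ratio_ok and spacing_ok

# Assumed example: a 1e-9 m/s clay barrier beneath a 1e-4 m/s sandy drainage layer,
# with pipes every 15 m; both criteria are satisfied.
print(cover_meets_criteria(1e-9, 1e-4, 15.0))   # True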
2) Phasing and cellular infilling
Landfills are often filled in phases. This is usually done for purely logistic reasons. Because of the size of some landfills it is economical to prepare and fill portions of the site sequentially. In addition, active phases are sometimes further sub-divided into smaller cells which may typically vary from 0.5ha to 5ha in area. Often these cells may be engineered to be hydraulically isolated from each other.
There are two main reasons for cellular infilling:
To allow the segregation of different waste types within a single landfill.
For example, one cell might receive MSW bottom ash, another inert wastes and another non-hazardous industrial wastes. In hazardous waste landfills different classes of hazardous waste may be allocated to dedicated cells.
To minimise the active area and thus minimise leachate formation, by allowing clean rain water to be
discharged from unfilled areas while individual cells are filled.
Where cellular infilling is carried out, the landfill is effectively sub-divided into separate leachate collection areas and each may need an abstraction sump and pumping system. This can increase the physical complexity of leachate removal arrangements and if the cells receive different waste types, each cell may produce leachate with different characteristics. This may in turn influence the design of leachate treatment and disposal facilities.
3) & 4) Waste emplacement methods and pre-treatment
Wastes are usually compacted at the time of deposit. This is done to gain maximum economic benefit from the void space and to minimise later problems caused by excessive settlement. The degree of compaction achieved depends on the equipment used, the nature of the wastes and the placement techniques.
Equipment may vary from small, tracked bulldozers, up to specialised steel-wheeled compactors. The latter are claimed to be able to achieve in situ waste densities in excess of 1 tonne/m3 with MSW. Experience suggests that, to achieve this, it is necessary to place wastes in thin layers, not more than 1m thick, and to make many passes with the compactor. At many landfills, waste is placed in much thicker lifts of 2.5m or more and receives relatively few passes by the compactor. Densities of ~0.7 - 0.8t/m3 are more typical in such situations.
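The practical effect of these densities is easiest to see as void-space consumption. The short Python sketch below is an illustration only; the cell volume is an assumed figure, and the densities are simply the in situ values quoted above.

def tonnes_accepted(void_volume_m3, in_situ_density_t_per_m3):
    # Tonnage that a fixed void space can accept at the density achieved by compaction
    return void_volume_m3 * in_situ_density_t_per_m3

cell_void = 100000   # m3, assumed void space of a single cell
print(tonnes_accepted(cell_void, 0.75))   # about 75,000 t with thick lifts and few passes
print(tonnes_accepted(cell_void, 1.0))    # about 100,000 t with thin lifts and many passes

On these assumptions, thin-layer placement with many compactor passes accepts roughly one third more waste in the same engineered void.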
Some wastes are easier to compact to high densities than others. At some landfills in Germany receiving final residues from MSW recycling facilities, it has proved difficult to achieve densities greater than ~0.6t/m3 because the residual materials tend to spring back after compaction. This low density has led to problematic leachate production patterns because the waste allows very rapid channelling during high rainfall, so that leachate flow rates exhibit more extreme variability than at conventional landfills.
Common practice at MSW landfills in some EU countries is to place the first layer of waste across the base of the site with little or no compaction and allow it to compost, uncovered, for a period of six months or more. Subsequent lifts are then placed and compacted in the usual way. This practice was developed from research studies in Germany and has been found to generate an actively methanogenic layer very rapidly. Leachate quality is found to be methanogenic (1) from the start, and as a result, leachate management and treatment is more straightforward.
Some operators of MSW landfills add moisture, or wet organic wastes such as sewage sludge, at the time of waste emplacement, to encourage rapid degradation, and in particular to encourage the early establishment of methanogenesis. There is ample experimental and field evidence to show that this can be effective.
The covering of wastes with inert material at the end of each working day has been an integral feature of sanitary landfilling techniques as developed in the USA during the 1960s and 1970s. It is common practice at MSW landfills in many countries around the world but is by no means universal practice within the EU. Its continued use is increasingly being questioned, particularly where enhanced leaching is to be undertaken to accelerate stabilisation, because many materials used as daily cover can form barriers to the even flow of leachate and gas. The primary role of daily cover is to prevent nuisance from smell,
vectors (eg rats, seagulls), and wind blown litter and this remains an important objective. No universally applicable alternative has yet been found but the following measures have been successful in some cases:
Pre-shredding of wastes, combined with good compaction, is said to render them unattractive to vectors and to reduce wind pick-up. Spraying of lime has also been used with the same benefits.
Commercial systems that spray urea-formaldehyde foam, or similar, onto the wastes. The foam collapses when subsequent lifts are applied. This technique has been slow to be accepted, mainly because of cost and convenience factors, but it is now used at several sites in the EU.
Commercial systems that apply a spray-on pulp made from shredded paper, usually separated from the
incoming wastes.
Removable membranes such as tarpaulins.
5) Monitoring
Monitoring is an essential part of landfill management and has two important functions:
It is necessary in order to confirm the degradation and stabilisation of the wastes within the landfill
It is necessary to detect any unacceptable impact of the landfill on the external environment so that action can be taken.
Monitoring can be divided into a number of distinct aspects, as follows:
Gas - Landfill gas quality within the site; soil gas quality outside the site; air quality in and around the site
Leachate - Leachate level within the site; leachate flow rate leaving the site; leachate quality within the site;
leachate quality leaving the site
Water - Groundwater quality outside the site; surface water quality outside the site
Settlement - Settlement of wastes after infilling
The relative importance of each of these areas of monitoring depends on the type of waste and the landfill management strategy. A controlled release landfill for inorganic wastes is likely to need much effort focused on groundwater quality. A containment and leachate control landfill for MSW will require more monitoring of conditions inside the landfill than many other types of site.
6) Gas control
At most landfills receiving degradable wastes such as MSW and many non-hazardous industrial wastes, it is necessary to extract landfill gas in order to prevent it from migrating away from the landfill. Landfill gas (LFG), a mixture of methane and carbon dioxide, has the potential to cause harm to human health, via explosion or asphyxiation, and to cause environmental damage such as crop failure. Examples of all three have occurred both within and outside landfills. The techniques for extracting and controlling LFG are now reasonably well established and in common use. Vertical gas extraction wells are usually installed
after infilling has ceased in a particular area. Gas is extracted, usually under applied suction, and routed either to a flare or to a gas utilisation scheme. It is now quite common to generate electrical power from LFG and to recover heat. In some cases LFG has been used directly as a fuel source in brick kilns, cement manufacture and for heating greenhouses.
In conjunction with extraction wells it is often necessary to install passive control systems, in the form of barriers and venting trenches around the perimeter of land-fills. An appropriate barrier will often be provided by the continuation of basal leachate containment engineering or in some cases by in situ clay strata. Reliance on the latter has, however, occasionally been misplaced. Where 'clays' have included mudstone and siltstone layers, migration of LFG has sometimes occurred and has proved particularly difficult to remedy.
An area of continuing development is in the control of LFG at older sites, where methane concentrations may become too low to be flared, but are still high enough to require control. One technique being studied is methane oxidation, in which bacteria in aerobic surface soils oxidise methane to carbon dioxide as it diffuses into the atmosphere. These techniques, and design criteria for the soil layers, are not fully developed, but research results have indicated great potential.
7) Leachate management
There are two aspects to active leachate management:
the treatment and disposal of surplus leachate abstracted from the base of the landfill
the flushing of soluble pollutants from waste until they reach a non-polluting state.
Treatment techniques depend on the nature of the leachate and the discharge criteria. Leachates may broadly be divided into five main types, described by Hjelmar et al (1995); a rough classification sketch based on the thresholds quoted for each type follows the list below.
Leachate types
1) Hazardous waste leachate
Leachate with highly variable concentrations of a wide range of components. Extremely high concentration of substances such as salts, halogenated organics, and trace elements can occur.
2) Municipal solid waste leachate
Leachate with high initial concentrations of organic matter (COD >20,000 mg/l and a BOD/COD ratio >0.5) falling to low concentrations (COD in the range of 2,000 mg/l and a BOD/COD ratio <0.25) within a period of 2-10 years. High concentrations of nitrogen (>1000 mg/l) of which more than 90% is Ammonia-N. This type of leachate is relatively consistent for landfills receiving MSW, mixed non-hazardous industrial and commercial waste and for many uncontrolled dumps.
3) Non-hazardous, low-organic waste leachate
Leachate with a relatively low content of organic matter (COD does not exceed 4,000 mg/l and it has a typical BOD/COD
ratio of <0.2) and a low content of nitrogen (typically total N is in the range of 200 mgN/l, but can be as high as 500 mgN/l). Relatively low trace element concentrations are observed. This type of leachate comes from landfills receiving only non-hazardous waste exclusive of MSW.
4) Inorganic waste leachate
Leachate with relatively high initial concentrations of salts (chlorides plus sulphates in the range of 15,000 mg/l) and a low content of organic matter (typically COD <1,000 mg/l) and low content of nitrogen (total-N <100 mg/l). Trace element concentrations are often negligible. This type of leachate is typical of landfills for MSW incineration ash.
5) Inert waste leachate
Leachate with low strength of any component. This type of leachate is representative for inert waste landfills.
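As noted above, the thresholds quoted for each leachate type can be turned into a rough classifier. The Python sketch below is an assumption-laden simplification of the Hjelmar et al (1995) categories, intended only to show how the thresholds relate; hazardous waste leachate is too variable for simple thresholds and falls through to the default case, and real samples always require full characterisation.

def classify_leachate(cod_mg_l, bod_cod_ratio, total_n_mg_l, salts_mg_l=0.0):
    # Early-phase MSW leachate: very high organic strength, readily degradable
    if cod_mg_l > 20000 and bod_cod_ratio > 0.5:
        return "municipal solid waste leachate (early phase)"
    # Inorganic waste leachate (eg MSW incineration ash): salty, low organics and nitrogen
    if salts_mg_l >= 15000 and cod_mg_l < 1000 and total_n_mg_l < 100:
        return "inorganic waste leachate"
    # Inert waste leachate: the source gives no figures, so these strict bounds are assumed
    if cod_mg_l < 500 and total_n_mg_l < 50:
        return "inert waste leachate"
    # Non-hazardous, low-organic industrial waste leachate
    if cod_mg_l <= 4000 and bod_cod_ratio < 0.2 and total_n_mg_l <= 500:
        return "non-hazardous, low-organic waste leachate"
    # Highly variable leachates (including hazardous waste) need full characterisation
    return "unclassified - characterise fully before selecting a treatment route"

# Assumed example samples:
print(classify_leachate(25000, 0.6, 1200))                 # early-phase MSW leachate
print(classify_leachate(800, 0.1, 50, salts_mg_l=16000))   # incineration ash leachate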
Leachate treatment to almost any desired quality for discharge is now technically achievable. Aerobic biological treatment forms the basis of the large majority of treatment plants but many other techniques are also in use, to remove components that are not adequately removed by biological methods. The extent of treatment, and the most appropriate methods, are site-specific. The timescale required for active leachate management is dependent on the rate at which pollutants are flushed from the landfill. With conventional low-permeability top covers and containment strategies, it is likely that the timescale will be
several centuries, for wastes with a high pollution potential, such as MSW.
There is currently a great deal of interest in shortening this period by high-rate recirculation and partial treatment. As yet, these accelerated flushing techniques have not been proven at full-scale. Until they are, or until waste minimisation and pre-treatment reduce the pollution potential of the wastes that are landfilled, the long time-scales for pollution control arising from current landfill techniques will remain.
References:
1. Hjelmar O, Johannessen LM, Knox K & Ehrig HJ, Composition and management of leachate from landfills within
the EU. To be presented at 5th International Landfill Symposium, Sardinia, October 1995.
2. Dept of the Environment, A review of water balance methods and their application to landfill in the UK, UK
Dept of the Environment Report No. CWM 031/91.
f:\12000 essays\sciences (985)\Enviromental\The selection of a Landfill Site.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Selection of a Landfill Site
There is currently much debate on the desirability of landfilling particular wastes, the practicability of alternatives
such as waste minimisation or pre-treatment, the extent of waste pre-treatment required, and the most appropriate
landfilling strategies for the final residues. This debate is likely to stimulate significant developments in landfilling
methods during the next decade. Current and proposed landfill techniques are described in this information sheet.
Types of landfill
Landfill techniques are dependent upon both the type of waste and the landfill management strategy. A commonly used
classification of landfills, according to waste type only, is described below, together with a classification according to landfill
strategy.
The EU Draft Landfill Directive recognises three main types of landfill:
Hazardous waste landfill
Municipal waste landfill
Inert waste landfill
Similar categories are used in many other parts of the world. In practice, these categories are not clear-cut. The Draft Directive
recognises variants, such as mono-disposal - where only a single waste type (which may or may not be hazardous) is deposited
- and joint-disposal - where municipal and hazardous wastes may be co-deposited in order to gain benefit from municipal
waste decomposition processes. The landfilling of hazardous wastes is a contentious issue and one on which there is not
international consensus.
Further complications arise from the difficulty of classifying wastes accurately, particularly the distinction between
'hazardous'/'non-hazardous' and of ensuring that 'inert' wastes are genuinely inert. In practice, many wastes described as 'inert'
undergo degradation reactions similar to those of municipal solid waste (MSW), albeit at lower rates, with consequent
environmental risks from gas and leachate.
Alternatively, landfills can be categorised according to their management strategy. Four distinct strategies have evolved for the
management of landfills (Hjelmar et al, 1995), their selection being dependent upon attitudes, economic factors, and
geographical location, as well as the nature of the wastes. They are Total containment; Containment and collection of leachate;
Controlled contaminant release and Unrestricted contaminant release.
A) Total containment
All movement of water into or out of the landfill is prevented. The wastes and hence their pollution potential will remain largely
unchanged for a very long period. Total containment implies acceptance of an indefinite responsibility for the pollution risk, on
behalf of future generations. This strategy is the most commonly used for nuclear wastes and hazardous wastes. It is also used
in some countries for MSW and other non-hazardous but polluting wastes.
B) Containment and collection of leachate
Inflow of water is controlled but not prevented entirely, and leakage is minimised or prevented, by a low permeability basal
liner and by removal of leachate. This is the most common strategy currently for MSW landfills in developed countries. The
duration of a pollution risk is dependent on the rate of water flow through the wastes. Because it requires active leachate
management there is currently much interest in accelerated leaching to shorten this timescale from what could be centuries to
just a few decades.
C) Controlled contaminant release
The top cover and basal liner are designed and constructed to allow generation and leakage of leachate at a calculated, controlled rate. An environmental assessment is always necessary to ensure that the impact of the emitted leachate is acceptable. No active leachate control measures are used. Such sites are only suitable in certain locations and for certain wastes. A typical example would be a landfill in a coastal location, receiving an inorganic waste such as bottom ash from MSW incineration.
D) Unrestricted contaminant release
No control is exerted over either the inflow or the outflow of water. This strategy occurs by default for MSW, in the form of dumps, in many rural locations, particularly in less developed countries. It is also in common use for inert wastes in developed countries.
Options C and D might be considered unacceptable in some European countries.
Landfill techniques
Landfill techniques may be considered under seven headings:
location and engineering
phasing and cellular infilling
waste emplacement methods
waste pre-treatment
environmental monitoring
gas control
leachate management
1) Location and engineering
Site specific factors determine the acceptability of a particular landfill strategy for particular wastes in any given location. In theory an engineered total containment landfill could be located anywhere for any wastes, given a high enough standard of engineering. In practice, the perceived risk of containment failure is such that many countries restrict landfills for hazardous wastes, and perhaps for MSW, to less sensitive locations such as non-aquifers and may also stipulate a minimum unsaturated depth beneath the landfill. In other cases, acceptability is dependent on the results of a risk assessment that examines the impact on groundwater quality of possible worst-case rates of leakage.
For the controlled contaminant release strategy, the characteristics of the external environment in the location of the landfill, particularly its hydrogeology and geo-chemistry, are integral components of the system. As such they need to be understood in more detail than for any other strategy.
An environmental impact assessment (EIA) is essential and it must include estimation of the maximum acceptable rates of leachate leakage. This estimation will determine the degree of engineered containment necessary for the base liner and top cover and any associated restrictions on leachate head within the landfill.
The principal components of landfill engineering are usually the containment liner, liner protection layer, leachate drainage layer and top cover. The most common techniques to provide containment are mineral liners (eg clay), polymeric flexible membrane liners (FMLs), such as high density polyethylene (HDPE), or composite liners consisting of a mineral liner and FML in intimate contact. Other materials are also in use, such as bentonite enhanced soil (BES) and asphalt concrete.
Approximately 20 years experience has now accumulated in the installation of engineered liners at landfills but there remains uncertainty over how long their integrity can be guaranteed, and some disagreement as to the suitability of particular liner materials for the containment of hazardous wastes and MSW, and the gas and leachate derived from them.
At landfills with engineered containment it is necessary to make provision for collection and removal of leachate. Often it is necessary to restrict the head of leachate to minimise the rate of basal leakage. Head limits are typically set at 300-1000mm leachate depth. This usually requires the installation of a drainage blanket. This is a layer of high voidage free-draining material such as washed stone, over the whole of the base of the landfill, to allow leachate to flow freely to abstraction points. Drainage blankets are necessary because the permeability of waste such as MSW is usually too low, after compaction, to conduct leachate to abstraction points while maintaining the leachate head below the stipulated maximum. The hydraulic conductivity of MSW can fall to less than 10^-7 m/s in the lower layers of even a moderately deep landfill. Under greater compaction, values as low as 10^-9 m/s have been measured, which is of a similar magnitude to that of mineral liner materials.
For the controlled release strategy the most critical engineered component is the top cover, whose function is to control the rate of leakage by restricting the rate of leachate formation. In any given location, percolation through the top cover is a complex function of several factors, namely:
slope
the hydraulic conductivity of the barrier layer
the hydraulic conductivity of the soils or materials placed above the barrier layer
the spacing of drainage pipes within the soil layer
Mineral barrier layers are typical for this application. They may also be used for total containment sites, where FMLs or even
composite liners have also been used for the top cover. A review of mineral top cover performance (UK Department of the
Environment, 1991) found that percolation ranged from zero up to ~200mm/a. To obtain very low percolation rates, protection
of the barrier layer from desiccation was necessary, drainage pipes should be at a spacing of not greater than 20m, and the
ratio of the hydraulic conductivity in the barrier layer to that in the soil or drainage layer above it should be no greater than 10^-4.
Under northern European conditions, protection of the barrier layer from desiccation would typically require on the order of
~900mm of soil material. Under hotter, drier conditions, a greater depth might be needed.
2) Phasing and cellular infilling
Landfills are often filled in phases. This is usually done for purely logistic reasons. Because of the size of some landfills it is economical to prepare and fill portions of the site sequentially. In addition, active phases are sometimes further sub-divided into smaller cells which may typically vary from 0.5ha to 5ha in area. Often these cells may be engineered to be hydraulically isolated from each other.
There are two main reasons for cellular infilling:
To allow the segregation of different waste types within a single landfill.
For example, one cell might receive MSW bottom ash, another inert wastes and another non-hazardous industrial wastes. In hazardous waste landfills different classes of hazardous waste may be allocated to dedicated cells.
To minimise the active area and thus minimise leachate formation, by allowing clean rain water to be
discharged from unfilled areas while individual cells are filled.
Where cellular infilling is carried out, the landfill is effectively sub-divided into separate leachate collection areas and each may need an abstraction sump and pumping system. This can increase the physical complexity of leachate removal arrangements and if the cells receive different waste types, each cell may produce leachate with different characteristics. This may in turn influence the design of leachate treatment and disposal facilities.
3) & 4) Waste emplacement methods and pre-treatment
Wastes are usually compacted at the time of deposit. This is done to gain maximum economic benefit from the void space and to minimise later problems caused by excessive settlement. The degree of compaction achieved depends on the equipment used, the nature of the wastes and the placement techniques.
Equipment may vary from small, tracked bulldozers, up to specialised steel-wheeled compactors. The latter are claimed to be able to achieve in situ waste densities in excess of 1 tonne/m3 with MSW. Experience suggests that, to achieve this, it is necessary to place wastes in thin layers, not more than 1m thick, and to make many passes with the compactor. At many landfills, waste is placed in much thicker lifts of 2.5m or more and receives relatively few passes by the compactor. Densities of ~0.7 - 0.8t/m3 are more typical in such situations.
Some wastes are easier to compact to high densities than others. At some landfills in Germany receiving final residues from MSW recycling facilities, it has proved difficult to achieve densities greater than ~0.6t/m3 because the residual materials tend to spring back after compaction. This low density has led to problematic leachate production patterns because the waste allows very rapid channelling during high rainfall, so that leachate flow rates exhibit more extreme variability than at conventional landfills.
Common practice at MSW landfills in some EU countries is to place the first layer of waste across the base of the site with little or no compaction and allow it to compost, uncovered, for a period of six months or more. Subsequent lifts are then placed and compacted in the usual way. This practice was developed from research studies in Germany and has been found to generate an actively methanogenic layer very rapidly. Leachate quality is found to be methanogenic (1) from the start, and as a result, leachate management and treatment is more straightforward.
Some operators of MSW landfills add moisture, or wet organic wastes such as sewage sludge, at the time of waste emplacement, to encourage rapid degradation, and in particular to encourage the early establishment of methanogenesis. There is ample experimental and field evidence to show that this can be effective.
The covering of wastes with inert material at the end of each working day has been an integral feature of sanitary landfilling techniques as developed in the USA during the 1960s and 1970s. It is common practice at MSW landfills in many countries around the world but is by no means universal practice within the EU. Its continued use is increasingly being questioned, particularly where enhanced leaching is to be undertaken to accelerate stabilisation, because many materials used as daily cover can form barriers to the even flow of leachate and gas. The primary role of daily cover is to prevent nuisance from smell,
vectors (e.g. rats, seagulls), and wind-blown litter, and this remains an important objective. No universally applicable alternative has yet been found, but the following measures have been successful in some cases:
Pre-shredding of wastes, combined with good compaction, is said to render them unattractive to vectors and to reduce wind pick-up. Spraying of lime has also been used with the same benefits.
Commercial systems that spray urea-formaldehyde foam, or similar, onto the wastes. The foam collapses when subsequent lifts are applied. This technique has been slow to be accepted, mainly because of cost and convenience factors, but it is now used at several sites in the EU.
Commercial systems that apply a spray-on pulp made from shredded paper, usually separated from the incoming wastes.
Removable membranes such as tarpaulins.
5) Monitoring
Monitoring is an essential part of landfill management and has two important functions:
It is necessary in order to confirm the degradation and stabilisation of the wastes within the landfill
It is necessary to detect any unacceptable impact of the landfill on the external environment so that action can be taken.
Monitoring can be divided into a number of distinct aspects, as follows:
Gas - Landfill gas quality within the site; soil gas quality outside the site; air quality in and around the site
Leachate - Leachate level within the site; leachate flow rate leaving the site; leachate quality within the site;
leachate quality leaving the site
Water - Groundwater quality outside the site; surface water quality outside the site
Settlement - Settlement of wastes after infilling
The relative importance of each of these areas of monitoring depends on the type of waste and the landfill management strategy. A controlled release landfill for inorganic wastes is likely to need much effort focused on groundwater quality. A containment and leachate control landfill for MSW will require more monitoring of conditions inside the landfill than many other types of site.
6) Gas control
At most landfills receiving degradable wastes such as MSW and many non-hazardous industrial wastes, it is necessary to extract landfill gas in order to prevent it from migrating away from the landfill. Landfill gas (LFG), a mixture of methane and carbon dioxide, has the potential to cause harm to human health, via explosion or asphyxiation, and to cause environmental damage such as crop failure. Examples of all three have occurred both within and outside landfills. The techniques for extracting and controlling LFG are now reasonably well established and in common use. Vertical gas extraction wells are usually installed
after infilling has ceased in a particular area. Gas is extracted, usually under applied suction, and routed either to a flare or to a gas utilisation scheme. It is now quite common to generate electrical power from LFG and to recover heat. In some cases LFG has been used directly as a fuel source in brick kilns, cement manufacture and for heating greenhouses.
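As a rough indication of why power generation from LFG is attractive, the sketch below converts an assumed extraction rate into electrical output; the flow rate, methane content and engine efficiency are illustrative assumptions, and only the lower heating value of methane is a physical constant.

# Illustrative estimate of electrical output from landfill gas (assumed figures).

LHV_METHANE_MJ_PER_M3 = 35.8   # approximate lower heating value of methane

def lfg_electrical_power_kw(gas_flow_m3_per_h, methane_fraction, engine_efficiency):
    """Electrical power from an LFG engine, ignoring parasitic loads and downtime."""
    thermal_mj_per_h = gas_flow_m3_per_h * methane_fraction * LHV_METHANE_MJ_PER_M3
    thermal_kw = thermal_mj_per_h * 1000.0 / 3600.0   # MJ/h -> kJ/s = kW
    return thermal_kw * engine_efficiency

# Example: 1,000 m3/h of LFG at 50% methane through a 35%-efficient engine.
print(f"{lfg_electrical_power_kw(1000, 0.5, 0.35):.0f} kW of electrical output")

On these assumptions the output is of the order of 1.5 to 2 MW of electricity, which is broadly the scale at which utilisation schemes become worthwhile.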
In conjunction with extraction wells it is often necessary to install passive control systems, in the form of barriers and venting trenches around the perimeter of landfills. An appropriate barrier will often be provided by the continuation of basal leachate containment engineering or in some cases by in situ clay strata. Reliance on the latter has, however, occasionally been misplaced. Where 'clays' have included mudstone and siltstone layers, migration of LFG has sometimes occurred and has proved particularly difficult to remedy.
An area of continuing development is in the control of LFG at older sites, where methane concentrations may become too low to be flared, but are still high enough to require control. One technique being studied is methane oxidation, in which bacteria in aerobic surface soils oxidise methane to carbon dioxide as it diffuses into the atmosphere. These techniques, and design criteria for the soil layers, are not fully developed, but research results have indicated great potential.
7) Leachate management
There are two aspects to active leachate management:
the treatment and disposal of surplus leachate abstracted from the base of the landfill
the flushing of soluble pollutants from the waste until it reaches a non-polluting state.
Treatment techniques depend on the nature of the leachate and the discharge criteria. Leachates may broadly be divided into five main types, described by Hjelmar et al (1995).
Leachate types
1) Hazardous waste leachate
Leachate with highly variable concentrations of a wide range of components. Extremely high concentrations of substances such as salts, halogenated organics, and trace elements can occur.
2) Municipal solid waste leachate
Leachate with high initial concentrations of organic matter (COD >20,000 mg/l and a BOD/COD ratio >0.5) falling to low concentrations (COD in the range of 2,000 mg/l and a BOD/COD ratio <0.25) within a period of 2-10 years. High concentrations of nitrogen (>1000 mg/l) of which more than 90% is Ammonia-N. This type of leachate is relatively consistent for landfills receiving MSW, mixed non-hazardous industrial and commercial waste and for many uncontrolled dumps.
3) Non-hazardous, low-organic waste leachate
Leachate with a relatively low content of organic matter (COD does not exceed 4,000 mg/l and it has a typical BOD/COD
ratio of <0.2) and a low content of nitrogen (typically total N is in the range of 200 mgN/l, but can be as high as 500 mgN/l). Relatively low trace element concentrations are observed. This type of leachate comes from landfills receiving only non-hazardous waste exclusive of MSW.
4) Inorganic waste leachate
Leachate with relatively high initial concentrations of salts (chlorides plus sulphates in the range of 15,000 mg/l) and a low content of organic matter (typically COD <1,000 mg/l) and low content of nitrogen (total-N <100 mg/l). Trace element concentrations are often negligible. This type of leachate is typical of landfills for MSW incineration ash.
5) Inert waste leachate
Leachate with low concentrations of all components. This type of leachate is representative of inert waste landfills.
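As a rough illustration only, the indicative thresholds in the descriptions above can be expressed as a simple screening function; the cut-off values are taken from those descriptions and are no substitute for the full classification in Hjelmar et al (1995).

# Rough screening of a leachate analysis against the indicative types described above.
# Thresholds are approximate and for illustration only.

def screen_leachate(cod_mg_l, bod_cod_ratio, total_n_mg_l, salts_mg_l):
    """Return the leachate type the analysis most resembles (greatly simplified)."""
    if cod_mg_l < 1000 and total_n_mg_l < 100 and salts_mg_l > 10000:
        return "inorganic waste leachate (e.g. MSW incineration ash)"
    if cod_mg_l > 20000 and bod_cod_ratio > 0.5 and total_n_mg_l > 1000:
        return "young MSW leachate (high initial concentrations)"
    if cod_mg_l <= 4000 and bod_cod_ratio < 0.25:
        return "stabilised MSW or non-hazardous low-organic leachate"
    return "outside the simple screening ranges - inspect the full analysis"

print(screen_leachate(cod_mg_l=25000, bod_cod_ratio=0.6, total_n_mg_l=1500, salts_mg_l=4000))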
Leachate treatment to almost any desired quality for discharge is now technically achievable. Aerobic biological treatment forms the basis of the large majority of treatment plants but many other techniques are also in use, to remove components that are not adequately removed by biological methods. The extent of treatment, and the most appropriate methods, are site-specific. The timescale required for active leachate management is dependent on the rate at which pollutants are flushed from the landfill. With conventional low-permeability top covers and containment strategies, it is likely that the timescale will be
several centuries, for wastes with a high pollution potential, such as MSW.
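The several-century timescale quoted above can be understood from a simple flushing calculation based on the liquid-to-solid ratio achievable under a low-permeability cap; the waste depth, density, infiltration rate and required liquid-to-solid ratio below are all illustrative assumptions.

# Illustrative flushing-time estimate (assumed figures).

def years_to_flush(waste_depth_m, waste_density_t_per_m3, infiltration_mm_per_year,
                   required_ls_ratio_m3_per_t):
    """Years needed to pass the required liquid-to-solid ratio through the waste column."""
    waste_mass_t_per_m2 = waste_depth_m * waste_density_t_per_m3          # tonnes per m2 of plan area
    water_needed_m3_per_m2 = waste_mass_t_per_m2 * required_ls_ratio_m3_per_t
    infiltration_m3_per_m2_per_year = infiltration_mm_per_year / 1000.0
    return water_needed_m3_per_m2 / infiltration_m3_per_m2_per_year

# Example: 20 m of waste at 0.8 t/m3, a capped infiltration of 50 mm/year, and an
# assumed requirement of 2 m3 of water per tonne of waste before leachate is non-polluting.
print(f"{years_to_flush(20, 0.8, 50, 2):.0f} years")

On these assumptions the answer is of the order of 600 years, which is why high-rate recirculation and accelerated flushing, discussed next, attract so much interest.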
There is currently a great deal of interest in shortening this period by high-rate recirculation and partial treatment. As yet, these accelerated flushing techniques have not been proven at full-scale. Until they are, or until waste minimisation and pre-treatment reduce the pollution potential of the wastes that are landfilled, the long time-scales for pollution control arising from current landfill techniques will remain.
References:
1. Hjelmar O, Johannessen LM, Knox K & Ehrig HJ, Composition and management of leachate from landfills within the EU. To be presented at the 5th International Landfill Symposium, Sardinia, October 1995.
2. Dept of the Environment, A review of water balance methods and their application to landfill in the UK, UK Dept of the Environment Report No. CWM 031/91.
f:\12000 essays\sciences (985)\Enviromental\The Value of Enviornmental agencies.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Value of Environmental Agencies
In current times, humanity has become so consumed with weapons and money that the planet has been neglected. Practices as common as chopping down the rainforest to supply paper products and land to graze more cattle are treated as business as usual, yet this thoughtless destruction disturbs all aspects of the environment. The lands being destroyed are among the most unique and diverse in the world. Chris Park, senior lecturer in the Department of Geography at Lancaster University, states, "The available statistics are impressive and... the rainforest's claim to fame as the richest ecological zone on earth" (26). In order to restore and protect the damaged land, animals, and people efficiently, environmental agencies must be valued.
By destroying the forest, we are creating an open-door policy for disease. For example, the Amerindians of South America have long adapted to endemic diseases and have prevented them, in large part, through their adaptation to the conditions of life over the 20,000 years they have inhabited the tropical forest. As lumber companies invade these towns and villages, their western germs are exposing isolated, previously unexposed people. Kathlyn Gay, author of Rainforests of the World, mentions, "Indigenous people in many countries have died because of contact with outsiders-usually whites of northern European extraction-who have brought contagious diseases, ranging from measles to influenza, and sexually transmitted disease" (20). With the importance of the land's resources comes the equal significance of the atmosphere, whose most protective component is the ozone layer. The destruction of the forests and the multiplying of grazing cattle are causing immense damage to the ozone. John Nichol, head of Worldfest 90 production and marketing, notes, "In Brazil and other countries in South and Central America the smoke from fires burning the jungle is sometimes so thick that great palls of it drift for miles" (140). These smoke clouds are affecting weather patterns: "Weather patterns are changing too, and the consensus of informed opinion is that this too is a direct result of destruction of the forest" (Nichol 136). The slashing and burning of the Amazon forest is causing a carbon monoxide build-up that promises severe damage to our security blanket, the ozone. This damage, together with critically harsh and uncharacteristic weather patterns, is slowly erasing some of our animals.
The animals of the rainforest are among the most diverse species on this planet, and they are not merely being harmed but exterminated. Many ecologists say that such a species loss has not occurred since the dinosaurs became extinct 65 million years ago; the last comparably drastic loss occurred when the glaciers melted. Although successive waves of extinction have certainly occurred in the distant past, current and future losses will be so rapid that the implications are chilling. The average extinction "background rate" ranges between 2.0 and 4.6 families per million years and may rise to 19.3 during periods of mass extinction. Among the species that will not be present for much longer are the insects. "The recent overburgeoning numbers of crop-destructive insects have been shown to be caused at least in part by a decrease in the country's population of insect eating birds," advises Arnold Newman, the author of Tropical Rainforest (135). A striking example is the leaf-cutter, or parasol, ant of the neotropical forests. These ants climb trees that are indigenous only to the rainforest and cut out dime-sized pieces of leaves and flowers with their sharp mandibles. The leaves and flowers of these trees are the only food for these species of ants, so with the elimination of the forest will come the elimination of the leaf-cutter ants. "All forms of life within the rainforest are highly interdependent, so that even small changes in habitat or species can have serious knock on effects throughout the ecosystem" (Newman 19). This disturbance of the food cycle is critically important. In general, the food cycle literally goes from the ground up, with plants as the primary producers. The plants are eaten by herbivores and grazers, and the carnivores eat both the herbivores and other carnivores. When the forests are destroyed along with animals of all sizes, huge gaps are left in the food cycle. This is a "serious concern in recent years over stability and very survival of some rainforests which are threatened with irreversible change if not wholesale clearance" (Park 19). There must be a way in which we can preserve nature. "A common and effective approach to protecting nature in many countries has been to designate particular areas as national parks or nature reserves, and restrict land use changes or damaging activities within the designated areas" (Park 132). Many people in the world do not want to see the rainforest disappear; as a result, reserves are set up. In 1990 there were roughly 560 tropical forest parks and reserves covering a total of 780,000 square kilometres and accounting for about 4 per cent of all tropical forest.
When the forest people are taken from their homeland and put somewhere else, they do not know how to adapt. "They are being pushed to the edge of extinction, and public sympathies are swinging in their direction" (Park 105). The modern world is naive to think the forest people can make such a drastic change. The forest people lose their culture because they cannot bring their forest resources into the modern world. Displacement results from the taking away of the land which the forest people use to support themselves. It is almost impossible to expect them to change their lifestyle and experiences and start all over, when even families in the United States often have a difficult time simply moving from state to state.
Everything in the jungle was fine until money-hungry interests sought to make even more money, ruining lives in the process. The jungle should be left alone to live in peace and harmony. If all the people of the world work together, then maybe we can help save the land, animals, and people.
f:\12000 essays\sciences (985)\Enviromental\THE WATER CYCLE.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-----------------------------------------------------------
Microsoft Windows 95 README for Frequently Asked Questions
August 1996
-----------------------------------------------------------
(c) Copyright Microsoft Corporation, 1996
This document provides complementary or late-breaking information to
supplement the Microsoft Windows 95 documentation.
------------------------
How to Use This Document
------------------------
To view FAQ.txt on screen in Notepad, maximize the Notepad window.
To print FAQ.txt, open it in Notepad or another word processor,
and then use the Print command on the File menu.
NOTE: Some of the information in this document applies only to the
Windows 95 Upgrade. If you received Windows 95 preinstalled on your
computer, the upgrade-specific information may not apply.
---------
CONTENTS
---------
Windows 95 Setup
MS-DOS
Disk Compression
Using CD-ROMs and Windows
Networking
Accessing the Internet with Windows 95
Desktop
PCMCIA Cards
Modems
FAT32
Miscellaneous
=================
Windows 95 Setup
=================
[Q: What are some things I can do to make it easier to install
Windows 95?]
Some steps to follow to ensure a trouble-free installation are:
- Run a virus scan before running Setup.
- Run ScanDisk or Chkdsk before running Setup.
- Make sure you have at least 100 to 110 MB of free disk space
(110 to 120 MB to back up your previous MS-DOS and Windows
system files so you can uninstall Windows 95 if needed).
- If you have had any problems with your hardware or software,
fix them before installing Windows 95.
- Turn off any screen savers or utilities that are running.
- Back up your Autoexec.bat and Config.sys files to a floppy
disk.
- Remove any unnecessary programs from Config.sys and
Autoexec.bat. These may include Undelete programs, anti-virus
software, start-up configuration programs, or any disk utilities.
- Remark the LOAD= and RUN= lines in Win.ini by placing a
semicolon (;) in front of the LOAD and RUN lines--for example:
;load=C:\Msoffice\Msoffice.exe
- Shut down any anti-virus software you are running. If you
install Windows 95 on a computer that has a CMOS or system
BIOS-based anti-virus setting, you will receive an error
message and Setup will stop. Consult the hardware documentation
for information about system BIOS or CMOS-enabled settings such
as virus detection.
- Run Setup from Windows or Windows for Workgroups.
- Shut down any running programs.
- Remove programs from the Startup group before installation.
[Q: How do I install Windows 95 from a CD-ROM drive?]
Windows 95 can be installed from a CD-ROM drive from within MS-DOS
or from within an existing version of Windows. The preferred and
most reliable method is to install it from an existing version of
Windows.
To install Windows 95 from MS-DOS:
1. Insert the Windows 95 CD in the CD-ROM drive.
2. At the C:\ prompt, type the drive letter of your CD-ROM drive
followed by a colon (:) and a backslash (\), and the word Setup.
For example:
D:\Setup
3. Press ENTER, and then follow the instructions on your screen.
4. Click Next to continue Setup, and then follow the instructions
on your screen.
To install from your current version of Windows:
1. Start Windows, and then insert the Windows 95 CD in the
appropriate drive.
2. In File Manager or Program Manager, click the File menu, and
then click Run.
3. Type:
x:\Setup
where x is the drive letter of your CD-ROM drive.
4. Follow the instructions on your screen.
5. Click Next to continue Setup.
[Q: How do I install Windows 95 from a remote CD-ROM drive?]
If the computer with the CD-ROM drive is running Windows for
Workgroups or Windows 95, share the CD-ROM drive, and then follow
these steps:
1. Connect to the shared CD-ROM by connecting to a network drive
in File Manager or by typing the NET USE command at the command
prompt. For example:
net use * \\machine\cdshare
2. Double-click Setup.exe, or at the command prompt type Setup.
[Q: How do I prepare my computer for a clean installation of
Windows 95?]
Windows 95 will install over MS-DOS, as well as over existing
versions of Windows and Windows for Workgroups.
From File Manager in Windows or Windows for Workgroups:
1. Click the drive letter for the drive that Windows 95 will be
installed from. For example:
a:\ (floppy disk users)
d:\ (CD-ROM users)
2. Double-click Setup.exe to start the installation process.
During installation, Windows 95 checks for available disk space.
If the required hard-disk space is not available, Windows 95
displays how much free space is available and how much is
required. To free up space on the hard disk, remove unnecessary
files.
[Q: Do I need to reinstall my programs when I install Windows 95?]
Windows 95 will pick up program settings when you upgrade an
existing version of Windows or Windows for Workgroups. If
Windows 95 is installed in a separate directory, all Windows-based
programs need to be reinstalled.
[Q: How do I set up Windows 95 on a computer running Windows NT?]
The Windows NT computer must be configured to multi-boot between
Windows NT and MS-DOS.
1. Start the Windows NT computer in MS-DOS mode.
2. Run Windows 3.x, and then in Program Manager, select the File
menu, and then choose the Run command.
3. Type:
x:\Setup.exe
where x is the drive letter containing your Windows 95
Setup disk or CD-ROM.
4. Install Windows 95 in a new directory.
NOTES:
* Windows 95 cannot be installed into the same directory as
Windows NT or into a shared Windows NT/Windows 3.x directory.
* A FAT16 partition is required for the Windows 95 / Windows NT
dual-boot configuration to work. Windows 95 must be installed
into a separate directory on the FAT partition. The Windows NT
OS Loader automatically provides a choice for Windows 95 or
MS-DOS on the menu.
* Windows 95 cannot access data stored in NTFS partitions, and
  Windows NT cannot access data stored in FAT32 partitions.
[Q: I have 25 MB free on my hard disk, and when I try to upgrade
to Windows 95 it tells me I do not have enough disk space.
How much do I need for Windows 95 if I am upgrading?]
When you upgrade over an existing version of Windows, you need 90 to 100 MB of free
disk space, as opposed to 100 to 110 MB for a full installation.
NOTE: Actual numbers vary depending on the options and accessories
you select during Setup. If you use disk compression (MS-DOS
DoubleSpace or DriveSpace, or Stacker), Setup may require more
than 90 to 100 MB because of the way disk compression estimates
available space. Setup adjusts the required free space to ensure
that you do not run out of disk space during Setup.
[Q: Can I install Windows 95 on a computer that has
OS/2, MS-DOS, and Windows? Can I still dual boot?]
Windows 95 Setup.exe will not run on OS/2. To install Windows 95,
start the computer in MS-DOS mode, and then run Setup.exe from
the MS-DOS prompt.
NOTE: If you are upgrading over OS/2 on an HPFS partition, you
will need your OS/2 disk 1 during Setup.
If you are using OS/2 Boot Manager to choose operating systems
at startup, Setup will disable Boot Manager to ensure that
Windows 95 can restart the computer and complete its installation.
You can reactivate Boot Manager by running the FDISK utility that
comes with Windows 95 (see procedure at the end of this section).
If you are not using Boot Manager, configure your computer to use
Boot Manager, and then follow the instructions above. Consult your
OS/2 documentation for information about Boot Manager.
If you start MS-DOS from a floppy disk and then run Setup, you will
not be able to start OS/2 after Windows 95 is installed. You need
to delete the Autoexec.bat and Config.sys files that OS/2 uses
before running Setup.
To remove OS/2 from your computer after you install Windows 95:
1. Back up the files you want to keep onto a floppy disk or
network drive.
2. Delete the files in each of your OS/2 directories and
subdirectories, and then delete the OS/2 directories.
3. In the root directory, delete the following hidden files:
EA DATA.SF
OS2LDR.MSG
OS2KRNL
OS2BOOT
WP DATA.SF
To make sure hidden files are visible, in My Computer or Windows
Explorer, click the View menu, click Options, and then click Show
All Files. Then delete the OS/2 files listed above.
NOTE: If you have a version of OS/2 other than version 2.0, the
names of your OS/2 files may differ from those in this procedure.
Also, depending on which version of OS/2 you have, you may see
the following files in your root directory. You can also delete
these files:
OS2DUMP
OS2LDR
OS2LOGO
OS2VER
4. Empty the Recycle Bin to permanently remove the files from
your computer.
5. If you had Boot Manager installed and want to remove it,
restart your computer and then complete the following steps.
(It is recommended that you print this file before restarting
your computer.)
6. When you see the Boot Manager menu, choose to restart your
computer in MS-DOS mode, and then run FDisk.
7. Make the MS-DOS partition (C) your active partition.
8. Quit Fdisk, and then restart your computer.
To reinstall Boot Manager after you install Windows 95:
1. From the Windows 95 Start menu, click Run, and then type
FDISK.
2. Choose Set Active Partition.
3. Enter the number of the Boot Manager Partition. This partition
is the 1MB Non-DOS partition usually placed at the top or
bottom.
4. Quit FDISK, and then restart your computer as instructed. You
can now start OS/2 at any time and change labels of partitions
in Boot Manager through the OS/2 FDISK program.
NOTE: OS/2 cannot access data stored in FAT32 partitions.
[Q: How do I make copies of my original disks to install from?]
The DMF disk format is not compatible with the DISKCOPY or COPY
commands and increases the amount of data stored on a standard
1.44/3.5" disk. There is no way to make a direct copy of these
disks.
[Q: Can I make floppy disk images from the CD?]
The CD-ROM contains cabinet (*.cab) files that are 2 MB
each and cannot be copied onto floppy disks.
[Q: Setup stops responding while it is gathering information.
How can I bypass the problem?]
Occasionally, Setup stops while detecting a device on the computer.
To work around this:
1. Turn the computer off for 10 seconds, and then turn it back on.
2. Rerun Setup, and then choose Safe Recovery so that Setup can skip the step that caused the problem.
f:\12000 essays\sciences (985)\Enviromental\Thed Effect of Viewing Television Violence on Childhood Aggre.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Effect of Viewing Television
Violence on Childhood Aggression
Chapter 16, pages 622 - 627
Thomas Tomasian
Arizona State University
Running Head: AGGRESSION
Abstract
There is a great deal of speculation on the role television plays in childhood aggression. Two contrasting views regarding this issue are that violent television increases aggressive behavior and that violent television does not increase aggressive behavior. Later research demonstrates there may be other intervening variables causing aggression, including IQ, social class, parental punishment, parental aggression, heredity, environment, and modeling. With all of these factors to take into consideration, it is difficult to determine a causal relationship between violent television and aggression. It is my hypothesis that this relationship is bi-directional: violent television causes aggressive behavior, and aggressive people tend to watch more violent television.
The Effect of Viewing Television
Violence on Childhood Aggression
Over the years a large amount of research has been published, much of it with conflicting results, on the question of whether a causal link exists between the viewing of televised violence and childhood aggression. It is an important question because, if violent television is linked to childhood aggression, we need to adapt our television shows accordingly.
Early 1960's Research
There is earlier research, but the first association between violent television and aggression was made in the early 1960's, when Albert Bandura began researching his modeling theory. His series of experiments first set the precedent for a relationship between violent television viewing and aggression. He felt children would model or imitate adult behavior. In one study he subjected children to both aggressive and non-aggressive adult models and then tested them for imitative behavior in the presence of the model. His theory was demonstrated when children readily
imitated behavior exhibited by an adult model in the presence of the model (Bandura, Ross & Ross, 1961). In a similar experiment children were exposed to aggressive and non-aggressive adult models, but then tested for amount of imitative learning in the absence of the model. Subjects in the aggression condition reproduced a good deal of physical and verbal aggressive behavior resembling that of the models. The data clearly confirmed the prediction that exposure of subjects to aggressive models increases the probability of aggressive behavior (Bandura et al. 1961). Another study sought to determine the extent to which film-mediated aggressive models may serve as an important source of imitative behavior. Children were divided and then exposed to four different aggression conditions: a real-life aggression condition, a human film-aggression condition, a cartoon film-aggression condition, and a control group. The results showed that exposure to humans on film portraying aggression was the most influential in eliciting aggressive behavior. Subjects in this condition, in comparison to
the control subjects, exhibited more aggression and more imitative aggression. Subjects who viewed the aggressive human and cartoon models on film exhibited almost twice as much aggression as subjects in the control group. These results provide strong evidence that exposure to filmed aggression heightens aggressive reactions in children (Bandura et al. 1963a). These results add to the conclusion that viewing violent television produces aggressive behavior.
But in Bandura's next experiment he begins to question whether other factors are involved in the relationship between televised violence and aggression. His subjects are divided into three groups: model-reward, model-punished, and control. All view an aggressive filmed model with a task-appropriate ending. The results show mere exposure to modeling stimuli does not provide sufficient conditions for imitative learning. The fact that most of the children in the experiment failed to reproduce the entire repertoire of behavior exhibited by the model, even under positive-incentive conditions, indicates other factors are
influencing the imitative response acquisition (Bandura 1965).
At the time, Bandura's work seemed on target, and with no one challenging his theory many were soon quick to follow in agreement. His modeling theory seems plausible, but the fact that he only completed experiments in a laboratory setting leaves one skeptical. Results from a laboratory setting often do not correlate with real-life, in vivo results. Another problem was that his studies only involved acts of aggression toward inflatable dolls and not real people. It would have been interesting to see how children reacted to a real-life person receiving an act of aggression. Another problem is that he only used adults as models; he should have also used children. With only adults as models he cannot explain how viewing an aggressive child in vivo or on television increases aggression. I feel Bandura was on the right track in his last experiment when he determined other factors were involved, but he failed to follow up on this question. This is an area in need of additional
investigation.
1970's Research
Up until now the relation between television viewing habits and aggression had been shown in several experiments, but what was lacking was the ability to determine cause and effect. One possible way to demonstrate cause and effect is to use a longitudinal context (Eron, Huesmann, Lefkowitz & Walder, 1972). In Eron et al. (1972) subjects were tested over a ten-year period for measures of aggression and predictors of aggression. Several other factors were taken into consideration in this study. They included IQ, social status, mobility aspirations, religious practice, ethnicity, and parental disharmony. The results support the hypothesis that a preference for watching violent television in the third-grade time period is a cause of aggressive habits later in life independent of the other causal contributors studied. It is not claimed that television violence is the only cause of aggressive behavior since a number of other variables are also related to aggression. However, the effect of
television violence on aggression is relatively independent of these other factors and explains a larger portion of the variance than does any other single factor studied (Eron et al., 1972). Although a longitudinal design was used to try to establish causation, no such causal link could be confirmed. The safest conclusion was that the study does not establish a causal link between television violence and aggression in one direction or another (Kaplan & Singer, 1976).
In another study, Kaplan and Singer (1976) propose another view of whether televised violence increases aggressive behavior. They suggest three different positions on the subject: an activation view, that watching televised fantasy violence causes aggressive behavior; a catharsis view, that aggression in some groups may be decreased following the observation of such violence; and a null view, that such violence on television has not been demonstrated to have significant effects on aggressive behavior. The evidence in this study is in support of the null view (Kaplan & Singer, 1976). They have built their case
around several valid arguments. The first being that the evidence that television causes aggression is not strong enough to justify restrictions in programming. Most of the research on television and aggression has been done in the laboratory and we really just don't know how this correlates with in vivo settings. Secondly, we must look at the sample from which the subjects for these studies are drawn. Are they representative samples from a variety of social classes? If not then we cannot speak of overall effects, but only for that limited sample. Third is method of viewing. In several studies children are gathered together to view a film. If some of the children begin to get more active they may stimulate the other children to act accordingly (Kaplan & Singer 1976).
Kaplan and Singer cite several studies in which viewing violence did not cause an increase in aggression. Feshbach and Singer (1971) conducted an experimental field study controlling the television viewing of nine- to fifteen-year-old boys. For six
weeks they were required to watch two hours of television per day. Half watched aggressive shows and half watched non-aggressive shows. Feshbach and Singer found no evidence that violence on television leads to increases in aggressive behavior. Certainly the study shows no support for the theory that viewing of aggressive television increases real life aggression (Kaplan & Singer, 1976).
In a study by Carlisle and Howell (1974), angered and nonangered college students were exposed to either violent or nonviolent movie scenes. Results revealed that the violent film was more likely than the nonviolent film to disinhibit aggression among either angry or nonangry subjects (Kaplan & Singer, 1976). With the above data it is certainly possible to see why Kaplan and Singer feel the null-effect view to be the most plausible one. Still as our research moves into the 80's the question of intervening variables has yet to be well addressed.
1980's Research
It is the research of Leonard Eron (1982) that
first suggests the relation between violent television and aggression does not go just one way; it is a bi-directional relationship. He demonstrates that while television violence is one cause of aggressive behavior, it is also probable that aggressive children prefer to watch more violent television (Eron, 1982). This seems to be a more plausible alternative because it allows for a more circular theory. It means that violent television may or may not be causing an increase in aggression. I feel it means more aggressive children tend to watch more violent television shows. These children are aggressive to begin with, and the violence they witness on television does not have a great deal to do with their aggressive tendencies. I agree televised violence may be an intervening factor, but I do not think it is the sole contributor to aggression in children.
Jonathan L. Freedman (1984) reviewed the available field and correlational research on televised violence and increases in aggressiveness. He only reviewed studies concerning long-term effects or
natural settings. He found no reason to support the conclusion that violence on television increases aggressive behavior in a natural setting. It remains a plausible hypothesis, but one for which there is little supporting evidence (Freedman, 1984).
In another review, Friedrich-Cofer and Huston (1986) conclude from their data that there is in fact a bi-directional causal relation between viewing television violence and aggression. They support their findings with several longitudinal studies, including Eron et al. (1972), Freedman (1984) and Cook et al. (1983). They also measured several perceived intervening variables and found that none of these variables accounted for the relation between viewing and aggression (Friedrich-Cofer & Huston, 1986). Still, even through the 80's, no one has really addressed the question of whether there may be an intervening variable in this great debate.
Discussion
As one can see from reading the above studies, the question of whether televised violence increases
aggression is still unanswered. There is as much data for as against, so it is hard to settle on an answer; there is no concrete evidence to argue one way or the other, and it is a debate that continues today. The best possible answer I can come up with is that the causation is bi-directional. It also depends on many intervening factors: the child's initial level of aggression and their home environment play a big role in determining aggressive tendencies. I think the best way to test for a causal relationship is a well-documented longitudinal study, in which the subjects can be contacted at five-year increments to answer questions. With this method of testing, and by controlling for all possible intervening variables, one can get the best results. It was interesting to see over the years how thoughts and ideas have changed about viewing televised violence and aggression. But even today there are still many unanswered questions. Maybe sometime in the future we will have a definite answer to this relevant question.
References
Bandura, A. (1965). Influence of models' reinforcement contingencies on the acquisition of imitative responses. Journal of Personality and Social Psychology, 1, 589-595.
Bandura, A., Ross, D. & Ross, S.A. (1961). Transmission of aggression through imitation of aggressive models. Journal of Abnormal and Social Psychology, 63, 575-582.
Bandura, A., Ross, D. & Ross, S.A. (1963a). Imitation of film mediated aggressive models. Journal of Abnormal and Social Psychology, 66, 3-11.
Bandura, A., Ross, D. & Ross, S.A. (1963b). Vicarious reinforcement and imitative learning. Journal of Abnormal and Social Psychology, 67, 601-607.
Eron, L.D. (1963). Relationship of television viewing habits and aggressive behavior in children. Journal of Abnormal and Social Psychology, 67, 193-196.
Eron, L.D. (1982). Parent-child interaction, television, violence and aggression of children. American Psychologist, 37, 197-211.
Eron, L.D., Huesmann, L.R., Lefkowitz, M.M. & Walder, L.O. (1972). Does television violence cause aggression? American Psychologist, 27, 253-263.
Freedman, J.L. (1984). Effect of television violence on aggressiveness. Psychological Bulletin, 96, 227-246.
Friedrich-Cofer, L. & Huston, A.C. (1986). Television violence and aggression: The debate continues. Psychological Bulletin, 100, 364- 371.
Kaplan, R.M. & Singer, R.D. (1976). TV violence and viewer aggression: A reexamination of the evidence. Journal of Social Issues, 32, 33-70.
f:\12000 essays\sciences (985)\Enviromental\tidle power in the bay of fundy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ENVIRONMENT REPORT
TIDAL POWER IN THE BAY OF FUNDY
Prepared for
Bill Andrson
Professor at St.Lawrence College
for Environmental Science.
By
November 22, 1996
INTRODUCTION
The Bay of Fundy, which is found off the shores of Nova Scotia, has the highest tides in the world.
Extraordinary tides occur when the tidal wave length is two to four times the length of the Bay. By virtue of blind luck or physics, the tide is amplified into a standing wave, like water sloshing in a bathtub. For a breaking wave to form, the surging tide must meet an obstacle. When the ocean meets the river going in the opposite direction, the sea hesitates, piles up behind the front line, and advances anew in a tidal bore.
Usually these ingredients come together during a new moon, when 15-foot tides meet the opposing force of the Shubenacadie River to display the Bay's true magnificence.
This part of Saint John is divided into three main areas: the main Harbour, Courtenay Bay and the Outer Harbour. These areas are influenced by the Bay of Fundy tides and the currents of the Saint John River, which flow out of the main Harbour into the Bay.
This section also experiences two high and two low tides each day (semi-diurnal), with a tidal range varying from 15 to 18 feet, depending on the type of tide. High-water heights vary from 22 to 28 feet and low-water heights vary from 0 to 7 feet above chart datum. Because of these semi-diurnal tides and the action of the Saint John River, slack water in the Harbour occurs at approximately mid-tide and not at high or low water, as would be the case elsewhere.
THE RHYTHMIC RISE AND FALL
In the Bay of Fundy, the tides are spectacularly large. While the rise and fall of sea level is the most apparent aspect of the tides, it is the tidal currents that drive their magnification; the rises and falls in sea level result from the convergences and divergences of those currents. The tides rise and fall over a range greater than 50 feet, and such massive water movement, combined with the accumulation of sediment through erosion, has built up a large salt marsh that is a feeding station for migrating shorebirds. The lower Fundy is also a feeding ground for marine life, including whales. Between about 15,000 and 10,000 years ago, as the glaciers of the last ice age retreated, parts of Georges Bank were dry land; fragments of trees and mammoth teeth from this period are still found occasionally in fishing trawls. The sun and the moon are the only celestial bodies of importance in producing terrestrial tides. While the moon is much smaller than the sun, it is nevertheless more important for tidal processes because of its proximity to the earth. A small imbalance between the centrifugal force and the gravitational attraction of the moon on the water column gives rise to horizontal forces, causing water motion that produces two bulges in the sea surface: one immediately under the moon, and the other on the opposite side of the earth. These bulges tend to rotate around the globe along with the moon, resulting in semi-diurnal tides with a period of about half a lunar day (12.4 hours), even though the earth rotates with a diurnal period of 24 hours.
PROLOGUE
The Bay of Fundy covers an area of about 1.6 x 10^4 km2 (roughly 16,000 km2).
The Bay of Fundy is part of the continental shelf off eastern Canada and New England, and it forms the arm of the sea that divides New Brunswick from western Nova Scotia. At the Bay of Fundy's tidal river at the southwestern tip of Nova Scotia, sea water overflows the riverbanks in spring to deposit loads of North Atlantic salt twice daily; in the tidal river, fresh water and salt water are mixed. The Bay of Fundy is famous for its tides, which are the highest in the whole world. The marsh is a home to mammals, a breeding place for birds and a feeding ground for estuary fish. It is a land that leaves even the most experienced naturalists awestruck by the aerial ballet performed by thousands of birds flying wing to wing during their annual migration.
The first experiment dealing with the consequences of environmental pollution was conducted at Yarmouth, where a brook on a farm had been sullied by foul-smelling effluent. Part of the problem came from the regional airport, where noxious runoff had spilled into the headwaters of the brook. The pollution stayed in the brook for over 25 years; the area smelled putrid from fish meal and made people sick.
TIDAL POWER
The Fundy tides are a renewable source of energy, with a potential of hundreds of billions of kilowatt-hours generated each year. They could provide viable energy at a time of growing need for pollution-free sources. For over half a century, Fundy tidal power has sparked interest and successive investigations into the potential for its development, and technological advances and "the new economy" have brought renewed interest in developing energy from the Fundy tides.
SUN AND MOON
While the rhythmic modulation of sea level and its association with the motion of the sun and the moon must have been noticed since prehistoric times, a better understanding had to wait until Sir Isaac Newton applied his theory of gravitation to explain the underlying physical mechanism. He was able to construct an equilibrium theory of tides that explained the semi-diurnal nature of tides in most parts of the world. If infinite time were allowed for the ocean to adjust to the astronomical forces, the equilibrium tides would be the result. This is not the case, however, since the tidal forcing varies quite rapidly with time. Resonance in the oceanic response pushes the tides in certain localities above the values predicted by the equilibrium theory. And while the equilibrium theory predicts two bulges, one underneath the moon and the other on the opposite side of the globe, in reality the high water may significantly precede or lag the transit of the moon. These differences are due to the dynamic response of the oceans to tidal forcing. It was Laplace who, a century later, laid the theoretical and mathematical foundation for the modern dynamic theory of ocean tides by treating oceanic tides as the response of a fluid medium to the astronomical forcing of the sun's and moon's gravitational attractions.
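Newton's equilibrium picture can be made quantitative. For the moon, the equilibrium elevation of the sea surface is approximately

\[ \eta_{\mathrm{eq}} = \frac{M_{\mathrm{moon}}}{M_{\mathrm{earth}}}\left(\frac{a}{d}\right)^{3} a \,\frac{3\cos^{2}\theta - 1}{2} \]

where a is the earth's radius, d the earth-moon distance and theta the angle measured from the sub-lunar point. Standard values give a bulge of only a few tenths of a metre, roughly half a metre from crest to trough, so the 50-foot tides of the Bay of Fundy cannot be explained by the equilibrium theory alone; they require the resonant, dynamic response that Laplace's theory describes.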
THE OIL OF FUNDY BAY
The transportation of oil through the Bay of Fundy and the generation of nuclear power are two aspects of the same issue, in that both supply energy while presenting inherent risks to the environment, although the arguments against foreign oil and nuclear power might also be based on purely economic grounds. The risk of oil spills, with catastrophic and long-lasting effects on marine organisms and the coastal environment, is ever present.
TIDAL POWER AND ELECTRICAL ENERGY
The monumental Minas Basin scheme, at 4,560 megawatts, would produce more than twice as much electricity as Nova Scotia now generates from all sources (coal, oil and hydro) and would rank with the largest water-driven electrical power plants in the world, exceeding Newfoundland's Churchill Falls (about 2,660 megawatts) and Ontario Hydro's Pickering nuclear-power plant (2,160 megawatts). Tidal power would not, however, replace conventional electrical energy derived from nuclear or fossil fuels for peak demand: its output peaks fluctuate with the tides, so the power might not be available at noon when it is needed, and utilities would still have to meet peak demand whether or not tidal power was on the line. Even so, a renewable energy source driven by lunar gravitation and harnessed as hydroelectricity has become increasingly important. Compared with a river dam, a tidal plant faces a difficult saltwater environment, where the machines that produce the power must also withstand salt water. And because generation follows the twice-daily ebb of the tide, the average electrical output of a tidal plant is less than 40 percent of the generating capacity of a comparable river dam.
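As a rough sketch of the physics behind figures like these (the basin area and tidal range below are illustrative assumptions, not values taken from any Fundy proposal), the energy released in one tidal cycle of a barrage scheme can be estimated from the potential energy of the water impounded at high tide:

    # Back-of-the-envelope estimate of barrage-type tidal power.
    # The basin area and tidal range are illustrative assumptions only.

    RHO = 1025.0   # sea-water density, kg/m^3
    G = 9.81       # gravitational acceleration, m/s^2

    def tidal_energy_per_cycle(area_m2, tide_range_m):
        """Potential energy (joules) released by draining a basin of the
        given area through the full tidal range: E = rho * g * A * h^2 / 2."""
        return 0.5 * RHO * G * area_m2 * tide_range_m ** 2

    area = 1.0e9        # assumed basin area: 1,000 km^2 expressed in m^2
    tide_range = 12.0   # assumed tidal range, metres

    energy = tidal_energy_per_cycle(area, tide_range)
    cycle = 12.42 * 3600                 # one semi-diurnal tidal cycle, seconds
    mean_power_mw = energy / cycle / 1.0e6

    print(f"Energy per tide: {energy:.2e} J")
    print(f"Mean theoretical power: {mean_power_mw:.0f} MW")
    # Real schemes capture only a fraction of this upper bound, which is
    # why a tidal plant's average output falls well below its rated capacity.

With these assumed numbers the upper bound works out to roughly 7 x 10^14 joules per tide, or about 16,000 megawatts averaged over the cycle; turbine efficiency, the wait for a usable head of water, and the twice-daily generating window all push the deliverable average far below that.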
TIDAL POWER AND THE MILL
In the 15th century, a construction handbook was published showing how tidal water could be held behind a dam at high tide; when a sufficient difference in water level had built up between the land and sea sides of the dam, the impounded tidal water, mixed with fresh water, could be released to turn a waterwheel that provided power for grinding grain. The first such mill in the New World was built in 1607 by Samuel de Champlain on the Lequille River. By 1910, Turnbull and an American engineer had designed a double-basin scheme that would cross the international boundary between New Brunswick and Maine.
CONCLUSION
Given the grave environmental challenges, such as global warming and environmental pollution, facing mankind in the coming century, and because the oceans play such an important role in governing the degree of global warming, the yield of fisheries, and the degree of pollution along our beaches, the study of the tides through a variety of means, such as ship surveys and remote sensing, will lead to a better understanding of how the oceans work. The hope is that, as a result, we will leave behind for our children a world that is both livable and enjoyable in all its majesty. If we can avoid spilling oil into the ocean, the water and the environment will be more beautiful and ecologically safer for all living things.
BIBLIOGRAPHY
f:\12000 essays\sciences (985)\Enviromental\Time To Change.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The earth and many of its contents, thanks in large part to humans, are deteriorating, and they have been for quite some time now. The planet is overwhelmingly populated with both ignorant and lazy people, and in effect not much is being done to prevent this deterioration. For instance, we are killing off vital animal populations every day. We have caused the extinction or endangerment of numerous species for no reason other than selfishness. An example is the poaching of elephants: we kill these animals only for our own wealth, take their ivory and leave them behind to die, and as a result they are on the verge of extinction. Pollution caused by humans and their inventions poses another major dilemma. Automobile exhaust fumes and factory pollutants are only a couple of the impurities causing damaging effects to the ozone layer and atmosphere, and we depend on the ozone layer to defend us from harmful UV rays. Finally, we have a major impact on the depletion of natural resources: millions of gallons of oil, coal, and other valuable resources are wasted each day. These are just a few of the human disruptions to nature that we know of, and there are probably others that we are not aware of. If we do not start taking them seriously soon, it will be too late, if it is not already. We need to reevaluate our priorities and plan for the future existence of this world.
A group labeled the Earth-Firsters often attempts to accomplish this task through drastic and sometimes dangerous methods. As Joni Seager states (The Eco-Fringe: Deep Ecology, p. 636), "In Australia, Earth-First protesters buried themselves up to their necks in the sand in the middle of logging roads to stop lumbering operations; in the American Southwest, Earth Firsters handcuffed themselves to trees and bulldozers to prevent logging; and in California, they dressed in dolphin and mermaid costumes to picket the stockholders' meeting of a tuna-fishing company." The Earth-Firsters' tactics are not the only drastic measures they embrace; their ideas are quite extreme as well. For example, they believe the population of the world is entirely too high, by as much as ninety percent, causing too much "wear and tear" to the earth. To resolve this issue, some say we should cease all study toward the curing of disease. Others say we should stop aid to the poor, sick, and homeless, reasoning that in Africa such sickness is a natural occurrence. Also, some of the Earth-Firsters believe that in order to conserve land and nature, people should be banned from a large portion of the earth. These are only a few examples of the many bold philosophies of the Earth-Firster society.
The Earth-Firsters have some good propositions, but most of them are not practicable. We need to set standards and ideals that can realistically be accomplished; it is not possible to change the world in one day. Instead of eliminating ninety percent of the population, we might control the number of offspring allowed to each individual, an approach far more likely to be sustained. In order to conserve the earth, we should section off small portions all over the globe, as opposed to completely zoning off one giant segment. This would provide habitat suitable for wildlife under any conditions.
Earth-Firsters lay equal blame on everyone, no matter their position, views, or actions. Obviously, though, some people bear more responsibility than others when it comes to preserving the environment: the president of a logging business is doing much more harm than the average worker. Businesses need more regulation, since most businesses act on deals that will profit only themselves; as a result, many products on the market today are harmful to the environment. Finally, politicians have to get into the picture. We need honest leaders who are aware of what is going on and who truthfully want to help the people.
Obviously, it is past time for the world to change its habits. This essay could go on for pages proposing solutions to make these alterations. Nevertheless, each solution would be built on the same principle: compromise. It is impossible to force everyone to quit their jobs and become "Earth-Firsters." The world, as a whole, needs to come together and form a lawful, yet realistic plan. By doing this we will know what we are aiming for, and society will understand the damaging effects we are having on our planet.
f:\12000 essays\sciences (985)\Enviromental\Urban Heat Islands.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Urban Heat Islands
For more than 100 years, it has been known that cities are generally warmer than their surrounding areas. This region of city warmth, known as an urban heat island, can influence the concentration of air pollution. The urban heat island forms as industrial and urban areas are developed and heat becomes more abundant. In rural areas, a large part of the incoming solar energy is used to evaporate water from vegetation and soil. In cities, where less vegetation and exposed soil exist, the majority of the sun's energy is absorbed by urban structures and asphalt. Hence, during warm daylight hours, less evaporative cooling in cities allows surface temperatures to rise higher than in rural areas. Additional city heat is given off by vehicles and factories, as well as by industrial and domestic heating and cooling units.
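In energy-balance terms (standard meteorological bookkeeping rather than a figure from the sources cited below), the net radiation absorbed at the surface is partitioned as

\[ R_{n} = H + LE + G \]

where H is the sensible heat that warms the air, LE the latent heat consumed in evaporating water, and G the heat conducted into the ground and buildings. In a well-vegetated rural area a large share of the net radiation goes into LE; in a city, with little water available to evaporate, that share is forced into H and G instead, which is why urban air and urban surfaces run warmer.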
At night, the solar energy that was stored as vast quantities of heat in city buildings and roads is released slowly into the city air. The dissipation of heat energy is slowed, and even stopped, by tall building walls that do not allow infrared radiation to escape as readily as the relatively level surfaces of the surrounding countryside do. This slow release of heat tends to keep city temperatures higher than those of the unpaved, faster-cooling rural areas.
On clear, still nights when the heat island is pronounced, a small thermal low-pressure area forms over the city. Sometimes a light breeze, called a country breeze, blows from the countryside into the city. If there are major industrial areas along the city's outskirts, pollutants are carried into the heart of town, where they tend to concentrate.
At night, the extra warmth of the city occasionally produces a shallow unstable layer near the surface. Pollutants emitted from low-level sources, such as home heating units, tend to concentrate in this shallow layer, often making the air unhealthy to breathe.
The constant outpouring of pollutants into the environment may actually influence the climate of a city. For example, certain pollutants reflect solar energy, thereby reducing the sunlight that reaches the surface. Some particles serve as nuclei upon which water and ice form. Water vapor condenses onto these particles, forming haze that greatly reduces visibility. Moreover, the added nuclei increase the frequency of city fog.
Pollutants from urban areas may even affect the weather downwind of them. Just such a situation is described in a controversial study conducted at La Porte, Indiana, a city located about thirty miles downwind of the industries of south Chicago. Scientists suggested that La Porte had experienced a notable increase in annual precipitation since 1925. Because this rise closely followed the increase in steel production, it was proposed that the phenomenon was due to the additional emission of particles: as industrial output increased, pollution particles increased the available condensation nuclei and thus the rainfall. The increasingly wet climate was attributed to the industries to the west of La Porte.
Bibliography :
"Disasters", by Charles H. V. Ebert
"Physical Geography Of The Global Environment", by H. J. de Blij & Peter O. Muller
"Essentials Of Meteorology", by C. Donald Ahrens
"Comptons Encyclopedia", Prodigy On-line Edition
f:\12000 essays\sciences (985)\Enviromental\Using Bicycles As An ALternative To Automobiles.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
October 21, 1996
Ecology & Design
University of Colorado
Using Bicycles As An Alternative
To Automobiles
Abstract: This paper shows the reasons to use the bicycle as an alternative mode of transportation. It points out the benefits of using a bicycle, and it shows what is being done to address the negative aspects of using a bicycle for transportation.
Bicycling is one of the fastest growing forms of recreation. People are drawn to it for many reasons: being out in the fresh air, the thrill of speed, and the physical challenge, among other things. But there can be many more uses for the bicycle. The use that this paper will focus on is transportation.
The use of bicycles can greatly improve the economy of a nation. A comparison between the efficiency of the transportation systems of the United States and Japan points this out. In 1990, Americans spent 17.9 percent of GNP on transportation, whereas the Japanese spent only 10.79 percent. This difference of nearly 7 percent gives the Japanese economy much more money for investing in its future.
Our economy is not the only thing we should worry about, and it is also not the only thing that can be improved by the use of bicycles. There are several major problems that could be drastically reduced by the increased use of bicycles. Traffic would be a lot lighter because of the extremely small size of bicycles. It would also greatly reduce the wear and tear on our roads and highways, and therefore reduce government expenditure. But one of the most serious problems it would reduce is that of pollution and smog in our larger cities.
There are more benefits to biking, though, benefits that come at a more personal level.
Biking greatly improves one's health. It can be a way to exercise without taking much time out of one's schedule: the time one spends biking to work serves two important purposes, getting to work and getting a great form of exercise.
Mobility also improves in crowded situations. In downtown areas, biking to work may actually save time. Cars crawl through congested traffic, while bicyclists ride around it. The time it takes to park a car could also be factored in: finding a parking space takes time and the space may be far away, while bikes are easy to lock and can be locked close to any destination.
Personal economics are also important. Cars are expensive to own and operate: on top of the high price of a new car, one must also pay for insurance, fuel, and maintenance. Not only is the price of a new bicycle much lower, bicycles also cost almost nothing to operate.
Still, with all of these benefits, many people choose not to consider a bicycle as a viable form of transportation. People feel that it is too time-consuming, too inconvenient, and too dangerous. But there are things that can be done to change these perceptions.
How a city is designed will play a large part in whether or not people choose to use a bicycle as a form of transportation. Many of America's large cities are not very friendly to the bicycle commuter. City streets should be wide enough to have room for a safely sized bike path that is separate from automobiles and pedestrians. This would improve the safety of bicycling.
Another method that can be used is traffic calming. Traffic calming is a term that has emerged in Europe to describe a full range of methods to slow cars, but not necessarily ban them, as they move through commercial areas and residential neighborhoods. Traffic calming exists in certain downtown areas as a natural outcome of design initiatives to accommodate sizable special populations.
Some of the best examples of traffic calming are not in the United States. Traffic calming was originally introduced in the Netherlands and Germany, but is now being put to use in Denmark, Sweden, and the United Kingdom.
In 1981, Germany set up six traffic-calming demonstration projects in various places with varying density. The initial reports showed that there was a reduction of speed from 23 mph to 12. The traffic volume remained constant, but there was a 60 percent decrease in injuries, and a 43 to 53 percent reduction in fatalities.
In a recent survey, most people indicated that if conditions were improved, more of them would use bicycles to commute. Things are being done to make conditions better: private organizations are offering incentives and promotions, and our government is also passing legislation to improve things.
The need for bicycle and pedestrian provisions to be fully integrated into state and local plans and transportation policy documents has assumed even greater significance due to the ISTEA and the Clean Air Act Amendments of 1990.
States were not required to have long-range transportation plans until ISTEA was passed, and Metropolitan Planning Organizations have had little or no control over project selection until now. Because of this, State highway agencies have in the past dominated the spending of highway and transportation dollars, even though plans developed at the city level would often contain many worthy transit and non-motorized projects.
ISTEA makes a number of important changes. Both levels of government are now required to produce annual transportation improvement programs and long range transportation plans.
These plans "shall provide for the development of transportation facilities (including pedestrian walkways and bicycle transportation facilities) which will function as an intermodal transportation system." (Section 1024 (a) and 1025 (a))
State long-range plans are required to "consider strategies for incorporating bicycle transportation facilities and pedestrian walkways in projects where appropriate throughout the state." (Section 1025 (c)(3))
State long-range plans are also required to have "a long-range plan for bicycle transportation facilities and pedestrian walkways for appropriate areas of the State, which shall be incorporated into the long-range transportation plan."
People need to realize what the overuse of automobiles is doing to our country. Our nation's wealth is probably the greatest contributor to this problem. Americans generally feel that a car is a necessity and not a luxury. We are also spoiled with some of the lowest gasoline prices in the world.
Some suggest an increase in gasoline taxes to drive people toward the use of alternative modes of transportation. Surveys show that it would influence more people not to drive as frequently. But economists feel that when the government imposes an intentional price floor on a common product, it can only hurt the economy.
All of these things will help influence people to use alternative modes of transportation. But when it comes down to it, everyone must make a personal choice. Bicycles will probably never be as convenient as automobiles, and in this writer's opinion, they shouldn't be. Commuting on a bike is a sacrifice in some ways, but we need to set our priorities straight. No legislation will do that for us.
Boulder is probably one of the best places to get into the habit of frequently using a bicycle. In this community, bikes are generally a lot more convenient than cars in pretty much every respect.
Probably more than half of the time, I can get to wherever I want to in less time on a bike than in a car. Not to mention the time saved by not having to find a parking spot. This is accomplished by the use of good bike routes, underpasses, and having the right of way over cars. I use my bike almost daily, whereas I would probably use a car about once a week.
It is also a lot more economical to ride a bike than to drive a car, especially on campus. As I already mentioned, cars come with several expenses, whereas bikes require almost none. Also, on campus, if you have a car you must pay for a parking permit.
I plan to use a bicycle whenever and wherever possible. I think that everyone should own a bicycle and at least use it occasionally. I would like to inform other people of how easy it is to use a bicycle for transportation.
References
1. United States, Integrating Bicycle and Pedestrian Considerations Into State and Local Transportation Planning (Washington: The Administration, 1994)
2. United States, Transportation Research Record, Pedestrian and Bicycle Planning With Safety Considerations (Washington: Transportation Research Board, 1987)
3. United States, Actions Needed To Increase Bicycle/Moped Use In The Federal Community (Washington: U.S. General Accounting Office, 1981)
4. Mike Hudson, Bicycle Planning (The Architectural Press: London, 1982)
5. National Research Council. Transportation Research Board. Pedestrian Behavior and Bicycle Traffic (Washington: National Academy of Sciences, 1980)
6. National Research Council. Transportation Research Board. Nonmotorized Transportation Around The World (Washington: National Academy Press, 1994)
7. National Research Council. Transportation Research Board. Nonmotorized Transportation Research, Issues, and Use (Washington: National Academy Press, 1995)
8. John T. Doolittle, Integration of Bicycles and Transit (Washington: National Academy Press, 1994)
9. http://www.tnrcc.state.tx.us/air/ms/vexercis.htm
10. http://www.nd.edu/~ktrembat/www-bike/BCY/TryBikeCommute.html
f:\12000 essays\sciences (985)\Enviromental\Veterinary medicine.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Veterinary Medicine
For my agriculture report, I chose to do a report on veterinarians. I chose this career field because I like working with animals and learning about them. While doing my report I learned more than I thought there was to learn about animals and becoming a veterinarian. I learned how long it takes to become a veterinarian, what my chances are of being accepted by a veterinary college, what veterinarians do, and much more.
Veterinary medicine is a branch of medical science that deals with the prevention, cure, or alleviation of diseases and injuries of animals. There are about 55,000 veterinarians, and of those only 15,000 or so are women. Many veterinarians work for federal, state, or local governments, inspecting food, supervising laws that protect human and animal health, or dealing with environmental problems. Many veterinarians treat all animals, but in recent years and in the densely populated areas of the country, many have limited their practice to pets.
Some specialize in the treatment of certain populations such as horses, cattle, poultry, or zoo animals. A small number of veterinarians are employed as managers of large feedlots for beef cattle, large dairy cattle operations, and many of the increasingly large poultry farms. A few veterinarians are now becoming involved in embryo transfer work, in which fertilized eggs are removed from superior donors and transferred into the uterus of a cow of lesser genetic quality.
A minimum of six years of study after high school graduation is usually required for a student to become a veterinarian. At least two and sometimes up to four years of college are completed before the acceptance into veterinary college. The competition is stiff for such acceptance, with usually five to ten qualified applicants for each one admitted.
Demand is increasing for food inspection by veterinarians, particularly in meat, milk, and processed foods, and for the regulation of traffic in all kinds of livestock and the eradication of animal plagues. In addition, the increasingly close confinement of many farm animals requires veterinary expertise in vaccination, immunization, and particular methods of hygiene.
The meat packing industry is a large industry involving the slaughtering, processing, and distribution of cattle, sheep, and hogs. Inspection at meat slaughtering plants, which handle hogs, cattle, and sheep, is another job in veterinary practice.
Animals furnish about 28 percent of the world's total value of agricultural products. Most domesticated animals have multiple uses; for example, animals kept primarily for work also supply milk, meat, and clothing materials. The animals and their uses are closely associated with the culture and the experience of the people who care for them.
After researching and doing this report on veterinarians, I have decided that this field is not what I am interested in doing. But I do now have a new understanding of veterinarians and their jobs. When I started this report I thought that there was no way veterinarians were even closely associated with agriculture, and now I know that there was a lot I did not know about agriculture and veterinarians.
Two references that I used are the Internet and a computer encyclopedia called Microsoft Encarta.
f:\12000 essays\sciences (985)\Enviromental\VIVISECTION.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
VIVISECTION
Many people today, including scientists and doctors, are questioning the suffering and killing of animals for the sake of human beings. Is it morally correct to dissect a frog or a worm for the purpose of educating a high school student? On the other hand, must "We study life to protect life"? (1:131) The killing of animals for the purposes of biomedical research, education, and cosmetics can be referred to as "vivisection". Twenty-five to thirty-five million animals are used in the U.S.A. each year for research, testing, and education. Although vivisection serves as an important tool for scientists and doctors working in research and may benefit humans, the harms indeed outweigh the benefits.
Animal experimentation was not common until the early nineteenth century, when it emerged as an important method of science. The first recorded act of vivisection was the study of body humors by Erasistratus in Alexandria during the third century B.C. (1:3). Later, in A.D. 129-200, the physician Galen used five pigs to investigate the effects of severing nerves (1:4); he is considered to be the founder of experimental physiology. During the Renaissance, Andreas Vesalius conducted experiments on monkeys, swine, and goats (1:3). By the nineteenth century, the methods of scientific discovery had been changed to experimentation on live animals by two French physiologists, Claude Bernard and Francois Magendie, who revolutionized methods of scientific discovery by establishing live animal experimentation as common practice (1:4). Claude Bernard believed that in order for medicine to progress, there must be experimental research, and affirmed that "vivisection is indispensable for physical research". This is when the anti-vivisection movement was established ("vivisection").
There are different views as to why or why not there should be animal experimentation. For example, Descartes believed that animals are incapable of feeling pain. He said, "The greatest of all the prejudices we have retained from our infancy is that of believing that beasts think" (1:4). In other words, Descartes believed that animals have no sensations. Singer argues that animals do have feelings, desires, and preferences; he observed that stimuli that cause pain to humans, such as hitting and burning, cause pain to animals (1:25). Singer's position is that equal harms should be counted equally and not downgraded for animals. However, he does not say that humans and animals have an equal moral status, for he believes that "humans are superior to their fellow animals by virtue of God-given soul" (12:37). Regan, another opponent of Descartes's view, feels that animals do feel pain and have desires as well. He believes that animals are "Subjects of a life just as human beings are and a subject of a life have inherent values" (1:26). He also feels that animals should not be tested for toxic substances; instead one should use cell tissue cultures (5:26).
The people who favor animal experimentation feel that research is for the benefit of humans. Research reflects a cultural value of acquiring knowledge for knowledge's sake; in other words, the end justifies the means if the end benefits society (4:62). They also believe that humans are superior to all other creatures (1:28). Research serves biomedical purposes: (1) to add to scientific understanding of basic biological behavior, functions, and processes, and (2) to improve human or animal health by studying the natural history of disease (1:22). Henry Foster, the founder of the Charles River Breeding Laboratories, said that "the use of animals in experiments is all for the benefit of mankind. If you don't use animals you don't do research!" (2:45).
Most of the time, doing research means performing tests on animals. For example, rabbits are locked in a chamber and forced to inhale gases, sprays, and vapors. In dermal toxicity studies, rabbits have their fur removed so that substances can be placed on their skin; in this case they are restrained so they don't scratch (2:55). Testing is conducted to assess the potency, effectiveness, or toxicity of substances that have established or potential usefulness for medical, scientific, or commercial purposes (1:39). For instance, new drugs are tested for efficacy and safety before clinical trials are conducted on humans, and tests on animals are done to establish safety levels for humans of known toxic substances (1:40).
Although testing might seem like the most efficient way to gain knowledge in these areas, alternatives exist. The use of slides, films, computer programs, and models can do the same job without any harm. For example, in veterinary schools the symptoms of strychnine poisoning were demonstrated by poisoning dogs and putting the demonstration on videotape; on the video the students can go over the steps repeatedly and see what is taking place more clearly than in a lecture hall (9:234). In medical schools, procedures are easier to follow by camera, and students can watch surgery performed by the top practitioner in the area. By using videos, lives are saved, suffering is reduced, and money is saved (6:107).
Animals are also used to teach human concepts at all levels of education, to instruct students in biology, to teach certain skills, and to train the next generation of scientists (1:40). For example, high school students have been conducting frog dissections for the past fifty years, and in some schools it is part of the curriculum. A fifteen-year-old, Jennifer Graham, refused to take part in the dissection of frogs. She was told by her teacher that if she did not do the assignment she would fail the course; she was an "A" student and received a "C" for the course. She took the matter to court. The judge compromised and told her to dissect a frog that had died of natural causes; however, she never found one and the matter was not resolved (1:195). There are benefits to using animals in school, as the previous case suggests. It gives students opportunities for detailed observation of structure and function. When animals are studied in school, interest in and motivation to study living animals increase. It also stimulates children's creative ability and encourages appreciation of animals. Finally, it contributes to the personal development of students; a sense of responsibility for animals and the growth of caring attitudes are established (9:221).
In veterinary training, animals are used as models for other animals. In the United States, students dissect animals to study their anatomy and function (10:233), and veterinarians practice on animals so they can develop technical skills. In the United Kingdom, on the other hand, veterinarians are trained without ever touching animals; their first experience with live animals is on the job (9:232). The issue of how veterinary students should be trained is serious and is left as an endless debate (10:232).
The United States Congress Office of Technology Assessment has estimated that several million animals are used each year for toxicological testing in the United States (2:54). Cosmetics and other substances are tested in animals' eyes. J. H. Draize developed a scale for assessing how irritating a substance is when placed in a rabbit's eye (2:55). The animals are usually placed in holding devices from which only their heads protrude, which prevents the rabbits from scratching their eyes. Shampoo, ink, or bleach is placed in the eye of each rabbit, which is then observed daily for eye infection or swelling (2:56).
Antivivisectionists are unconvinced that animal experimentation has benefits. They feel that vivisection is cruel to animals and detrimental to the moral character of humans. Humanitarians believe that this suffering leads to insensitivity: sensibilities harden, and vivisectors become capable of barbarous acts against humans as well as animals (7:59). Medical students, corrupted by hospital teaching, absorb such a love of cruelty that when they visit their homes they practice it for its own sake. Antivivisectionists say that vivisection "reverses the order of the refining forces of civilization" (7:60). The use of animals in these contexts should be limited and controlled, even though some cases are justified (8:338). The debate among moral philosophers is never ending, and there are undoubtedly many moral choices in animal experimentation. But most philosophers agree that animals should be granted a higher status when people decide to "use" them (11:9).
In conclusion, animals should not be viewed as man's gift to the world and, therefore, should not be used excessively and without proper justification. When educating students in grade school, the killing of animals is unnecessary, and for education in high school, college, and graduate school, alternatives can be used. It is also inhumane and unnecessary to kill animals for the sake of cosmetics; for this purpose animals are being killed merely to perfect the beauty of humans. In some cases, such as research to help cure diseases, many questions are raised about morality, and these cases must be examined individually. The bottom line is that "a life is a life and animals have the same right as humans do to enjoy it!"
VIVIECTION
Many people today, including scientists and doctors, are questioning the suffering and killing of animals for the sake of human beings. Is it morally correct to dissect a frog or a worm for the purpose of educating a high school student? On the other hand, must "We study life to protect life" (1:131) The issue of killing animals for the use of biomedical research, education, and cosmetics can be referred as "vivisection". Twenty-five to thirty-five million animals are spared in the U.S.A. each year for the purpose of research, testing, and education. Although vivisection serves as an important tool for scientists and doctors to work in research and may benefit humans, the harms indeed outweigh the benefits.
Animal experimentation was not common until the early nineteenth century and emerged as an important method of science. The first recorded action of vivisection was the study of body humors by Erasistratus in Alexandria during the third century (1:3). Later, in A.D. 129-200, the physician, Galen, used five pigs to investigate the effects of several nerves (1:4). He is considered to be the founder of experimental physiology. During the Renaissance Era, Andreas Vesalius conducted experiments on monkeys, swine, and goats (1:3). By the late eighteenth century, the methods of scientific discovery were changer to experimentation of live animals by two French physiologists, Claude Bernard and Francious Magnedie. They revolutionized methods of scientific discovery by establishing live animal as common practice (1:4). Claude Bernard believed that in order for medicine to progress, there must be experimental research, and affirmed that "vivisection is indispensable for physical research". This is when the anti-vivisection movement was established ("vivisection").
There are different views as to why or why not there should be animal experimentation. For example, Descartes believed that animals are incapable of feeling pain. He said "The greatest of all the prejudices we have retained from our infancy is that of believing that beasts think" (1:4). In other words, Descartes believes that animals have no sensations. Singer argues and thinks that animals have feelings, desires, and preferences. He observed that stimuli that cause pain to humans, such as hitting and burning, cause pain to animals (1:25). Singer 's position is that equal harms should be counted equally and not downgraded for animals. However, he does not say that humans and animals have an equal moral status, for he believes that "humans are superior to their fellow animals by virtue of God-given soul" (12:37). Regan, another opposer to Descarte's view, feels that animals do feel pain and have desires as well. He believes that animals are "Subjects of a life just as human beings are and a subject of a life have inherent values" (1:26). He also feels that animals should not be tested for toxic substances, instead one should use cell tissue cultures (5:26).
The people who favor animal experimentation feel that research is for the purpose of humans. Research is a cultural value to acquire knowledge for knowledge's sake. In other words, the means justifies the end if the end benefits society. (4:62). They also believe that humans are superior to all other creatures (1:28). Research is for biomedical purposes; 1) to add scientific understanding of basic biological behavior, functions, and processes 2) to improve human or animal health by studying the natural history of the disease (1:22). Henry Foster, the founder of Charles River Breeding Laborator, said that "the use of animals in experiments is all for the benefit of mankind. If you don't use animals you don't do research!" (2:45).
Most research involves performing tests on animals. For example, rabbits are locked in a chamber and forced to inhale gases, sprays, and vapors. In dermal toxicity studies, rabbits have their fur removed so that substances can be placed on their skin; in this case they are restrained so they don't scratch (2:55). Testing is conducted to assess the potency, effectiveness, or toxicity of substances that have established or potential usefulness for medical, scientific, or commercial purposes (1:39). For instance, new drugs are tested for efficacy and safety before clinical trials are conducted on humans. Tests on animals are also done to establish safety levels for humans of known toxic substances (1:40).
Although testing might seem like the most efficient way to gain knowledge in these areas, alternatives exist. Slides, films, computer programs, and models can do the same job without any harm. For example, in veterinary schools the symptoms of strychnine poisoning were once demonstrated by poisoning dogs; the demonstration was then recorded on videotape. With the video, students can go over the steps repeatedly and see what is taking place more clearly than in a lecture hall (9:234). In medical schools, procedures are easier to follow by camera, and students can watch surgery performed by the top practitioners in the field. By using videos, lives are saved, suffering is reduced, and money is saved (6:107).
Animals are also used to teach concepts at all levels of education, to instruct students in biology, to teach certain skills, and to train the next generation of scientists (1:40). For example, high school students have been conducting frog dissections for the past fifty years, and in some schools it is part of the curriculum. A fifteen-year-old, Jennifer Graham, refused to take part in the dissection of frogs. She was told by her teacher that if she did not do the assignment she would fail the course. She was an "A" student and received a "C" for the course. She took the matter to court. The judge compromised and told her to dissect a frog that had died of natural causes; however, she never found one and the matter was not resolved (1:195). There are benefits to using animals in school, as the previous case suggests. It gives students opportunities for detailed observation of structure and function. Studying animals in school increases interest in and motivation to study living animals. It also stimulates children's creative ability and encourages appreciation of animals. Finally, it contributes to the personal development of students; a sense of responsibility for animals and the growth of caring attitudes are established (9:221).
In veterinary training, animals are used as models for other animals. In the United States, students dissect animals to study their anatomy and function (10:233). Veterinarians practice on animals so they can develop technical skills. In the United Kingdom, on the other hand, veterinarians are trained without ever touching animals; their first experience with live animals is on the job (9:232). The issue of how veterinary students should be trained is serious and remains an endless debate (10:232).
The United States Congress Office of Technology Assessment has estimated that several million animals are used each year for toxicological testing in the United States (2:54). Cosmetics and other substances are tested in animals' eyes. J.H. Draize developed a scale for assessing how irritating a substance is when placed in a rabbit's eye (2:55). The animals are usually placed in holding devices from which only their heads protrude, which prevents the rabbits from scratching their eyes. Shampoo, ink, or bleach is placed in the eye of each rabbit, and the eye is then observed daily for infection or swelling (2:56).
Antivivisectionists are unconvinced that animal experimentation has benefits. They feel that vivisection is cruel to animals and detrimental to the moral character of humans. Humanitarians believe that inflicting this suffering leads to insensitivity: with their sensibilities hardened, vivisectors become capable of barbarous acts against humans as well as animals (7:59). They claim that medical students, corrupted by hospital teaching, absorb such a love of cruelty that when they visit their homes they practice it for its own sake. Antivivisectionists say that vivisection "reverses the order of the refining forces of civilization" (7:60). The use of animals should be limited and controlled, even though some cases are justified (8:338). The debate among moral philosophers is never-ending, and there are undoubtedly many moral choices in animal experimentation. But most philosophers agree that animals should be granted a higher status when people decide to "use" them (11:9).
In conclusion, animals should not be viewed as a gift to mankind and, therefore, should not be used excessively without proper justification. When educating students in grade school, the killing of animals is unnecessary. For education in high school, college, and graduate school, alternatives can be used. It is also inhumane and unnecessary to kill animals for cosmetics; for this purpose animals are being killed to perfect the beauty of humans. In some cases, such as research to help cure diseases, many questions are raised about morality, and these cases must be examined individually. The bottom line is that "a life is a life and animals have the same right as humans do to enjoy it!"
f:\12000 essays\sciences (985)\Enviromental\Walmart is taking Over.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Targos O' Blade
Wal-Mart is taking Over
Is Wal-Mart good for communities, or is Wal-Mart a wolf in sheep's clothing? With gross annual sales of over $67 billion and more than 2,000 stores, Wal-Mart is one of the biggest corporations in the United States. Wal-Mart opens a new store once every two days in small communities and cities across the United States; however, are these stores good for these communities, or are they wreaking havoc? When you look at the fine print, Wal-Mart doesn't earn its money; it takes it from other businesses, choking smaller businesses by offering a wider variety of products at a more competitive price. This is actually a very simple business tactic: if you want to sell a lot of something, cut your profit margin to beat the other competitors and you will sell more. In an average-sized Iowa town, Wal-Mart has drawn away over $10 million in sales from existing businesses.
Do you want to beat Wal-Mart by keeping it from invading your town and turning it into a ghost town? Here are some steps that have proven victorious in the past at keeping Wal-Mart out.
Quote Wal-Mart's officers; they have been known to say very contradictory things. For instance, Wal-Mart's founder Sam Walton once said, "If some community, for whatever reason, doesn't want us in there, we aren't interested in going in and creating a fuss," and a Wal-Mart vice president once stated, "We have so many opportunities for building in communities that want Wal-Marts, it would be foolish of us to pursue construction in communities that don't want us." If you raise a good argument, then you have something to stand on.
According to Albert Norman in his article "Eight Ways to Stop the Store," "Wal-Mart mathematicians only know how to add. They never talk about the jobs they destroy, the vacant retail space they create or their impact on commercial property values." This is very true: Wal-Mart's officers always talk about the jobs and opportunities they create; however, are 250 minimum-wage jobs worth the loss of 150 jobs that pay $6 to $10 an hour? Wal-Mart also talks about how communities benefit, but except for one scholarship it does little or nothing to fulfill that statement.
Raise money to stop Wal-Mart and to influence the public to become active in keeping Wal-Mart out of the community. Wal-Mart will spend money trying to persuade people to want Wal-Mart in their community. Fight this: these are your friends, family, and fellow townsfolk. Who do you think they are going to listen to? Someone they have known all their lives, who they know has only the town's best interests in mind, or a bumbling fool from New York City who knows nothing of the town, its people, or their situation?
Start a petition and influence people to vote; stake out the supermarkets and hardware stores. Get as many signatures as you can. The more signatures, the easier it will be to keep Wal-Mart out of the community. If Wal-Mart was being truthful about staying where it is wanted and not where it isn't, then it is the community's responsibility to let Wal-Mart know that it isn't wanted.
Look at the facts: Wal-Mart isn't good for small communities; it profits by picking on small businesses. It drains a town dry and leaves it a barren wasteland. You can fight Wal-Mart, and if you have the opportunity, I would advise it.
f:\12000 essays\sciences (985)\Enviromental\War of the Worlds.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A: Summary of
This story is about two Indian twins who live with their mother, because
their father has died. The twins and the mother are starting to have some
problems, because the mother is of the opinion that the twins no longer
have respect for other people. As a result of this big argument the mother
says that the twins can leave.
The twins were really cruel to the young men who wanted to be with them,
and we hear about how the twins enjoyed going out with boys and then
dropping them again - for fun.
The narrator says that Suki and she haven't changed, but that it is their
surroundings that have changed. Then the twins think back on the past and
we hear about how wild the twins were and how their father helped them
when they were in trouble. The problem was that the twins wanted to change
the world and therefore wanted to speak in the Gurudwara. The twins said
some things which the men in particular didn't like. Therefore the father
had to save them from the angry men. At the end of the story the twins
decide to stay with their mom and fight for what they believe in.
B: Essay
This story about the two Indian twins shows a typical problem for
immigrants. In this case the twins want to change the system and the rules
which Indian people live by. Suki and her sister will not tolerate the
rules or live by the usual traditions. Therefore you can say that the two
twins are revolutionary. The twins want to speak in the Gurudwara, so they
can tell the other women and children to fight for their rights. According
to traditional Indian customs, women don't have anything to say. It is the
men who make the decisions and therefore decide over the women and
children. The two sisters want to help other women and children, and on a
Sunday in the Gurudwara they say that almost all the men who are present
should be ashamed, but this causes a conflict because the men don't want
to give up their power.
Indian girls or women like Suki and her sister are very important for the
community, because they make it more possible for Indian people to live
side by side with white people. It is very good that the twins defy the
traditions and try to make a better society.
The twins are not all good, because they are a little childish when they
play with the men as they do. Of course this might be caused by their
bitterness towards men and their behaviour. With that I want to point out
that the reason the girls do these things is that they are angry with the
men and with the Indian traditions, which you can say the Indian men take
advantage of. In another way it is very important that we have our
traditions. The traditions make us what we are and tell us many things
about our country, nation and forefathers. Therefore you just cannot throw
away tradition, except in this case, where the men just want to keep the
tradition because they can take advantage of it.
It is natural that you will not be popular if you go against the rest of
the community. In spite of that, you can say that it is all right to move
away from a place you don't like and where you don't feel welcome. At the
end of the story the twins choose the right thing to do: they stay and
fight. You can say that this is one of the author's messages to the
readers - fight for what you want and for what you believe in.
The behaviour of the two twins is not as good as it should be. They don't
treat their mother as they should. Especially in the situation with
Shanty, where the twins attacked the mother for driving Shanty back, you
can say that it wasn't all right that they didn't believe in their mother.
In this case the twins learned from the situation and discovered that not
all Indian women have a choice. This situation could be the reason why the
twins wanted to change the system or the world. The title War of the
Worlds could tell us that the twins have started a war which should change
the world. Or maybe change it to another world - a Western world.
I would say that the writer wants us to believe that you can achieve many
good things if you are ready to fight for them. The message is also that
we have to treat other people well - especially our parents.
Translation:
In a just-published book the former British ambassador to Denmark, Sir
James Mellon, says that the Danes don't see themselves as a nation in the
proper sense of the word, that is, "a population in a country, with its
own government", but more like a tribe.
The author illustrates his theory with, for instance, the following
example: "Even though my two children are born in Denmark they cannot
obtain Danish citizenship, because they don't speak Danish and don't have
Danish parents. A person who is born in England will automatically become
a British subject, regardless of whether he speaks English or not and
regardless of how many years he has been in the country."
Foreigners almost never succeed in being totally accepted as members of
the Danish tribe. A Dane should preferably have a Danish "look", master
the Danish language and have been in the country for many years before
people see him as a real Dane, Sir James says.
f:\12000 essays\sciences (985)\Enviromental\Water Biomes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Water Biomes
Marshland is covered with grasses, reeds, sedges, and cattails. These plants all have their
roots in soil covered or saturated with water and their leaves held above water. Marshes may
be freshwater or salt. Freshwater marshes develop along the shallow edges of lakes and
slow-moving rivers, forming when ponds and lakes become filled with sediment. Salt
marshes occur on coastal tidal flats. Inland salt marshes occupy the edges of lakes. They
affect the supply of nutrients, the movement of water, and the type and deposition of
sediment.
Salt marshes are best developed on the Atlantic coasts of North America and
Europe. In eastern North America the low marsh is dominated by a single species, salt-
marsh cordgrass. The high marsh consists of a short cordgrass called hay, spike grass, and
glasswort. Glasswort is the dominant plant of Pacific Coast salt marshes.
Freshwater marshes provide nesting and wintering habitats for waterfowl and
shorebirds, muskrats, frogs, and many aquatic insects. Salt marshes are wintering
grounds for snow geese and ducks, a nesting habitat for herons and rails, and a source of
nutrients for estuarine waters. Marshes are important in flood control, in sustaining high-
water tables, and as settling basins to reduce pollution downstream. Despite their great
environmental value, marshes are continually being destroyed by drainage and filling.
Marine Life, plants and animals of the sea, from the high-tide mark along the
shore to the depths of the ocean. These organisms fall into three major groups: the
benthos, plants such as kelp and animals such as brittle stars that live on or depend on the
bottom; the nekton, swimming animals such as fishes and whales that move
independently of water currents; and plankton, various small to microscopic organisms
that are carried along by the currents.
Shore Life, the essentially marine organisms that inhabit the region bounded on
one side by the height of the extreme high tide and on the other by the height of the
extreme low tide. Within these boundaries organisms face a severe environment imposed
by the rise and fall of tides. For up to half of a 24-hour period, the environment is marine;
the rest of the time it is exposed, with terrestrial extremes in temperature and the drying
effects of wind and sun.
Life on rocky shores, best developed on northern coasts, is separated into distinct
zones that reflect the length of time each zone is exposed. At the highest position on the
rocks is the black zone, marked by blue-green algae. This transition area between land
and the marine environment is flooded only during the high spring tides. Below the black zone
lies the white zone, where barnacles are tightly glued to rocks. Living among the
barnacles are rock-clinging mollusks called limpets. At low tide, barnacles keep their
four movable plates closed to avoid drying; at high tide they open the plates and extend
six pairs of wandlike tentacles to sweep the water for microscopic life. Preying on the
barnacles are hole-drilling snails called dog whelks.
Below the white zone and in some places overlying the barnacles are rockweeds,
which have no roots but attach themselves to rocks by holdfasts. Brown algae are
rockweeds that grow more than 8 ft long. The most common are the bladder wracks, with
branching thalli up to 6 in wide. In the lowest zone, uncovered only during the spring
tides, is the large brown alga Laminaria, one of the kelps. Beneath its frondlike thalli live
starfish, sea cucumbers, limpets, mussels, and crabs.
On the sandy shores, life lies hidden beneath the surface, waiting for the next high
tide. Shifting and unstable, sand provides no substrate on which life can anchor itself.
The environment of sand-dwelling animals, however, is less severe than that of animals
dwelling on rocky shores. Although the surface temperature on a beach varies with the
tide, below the surface the temperature remains nearly constant, as does the salinity.
The upper sandy beach, like the upper rocky shore, is transitional from land to sea. It is
occupied by ghost crabs and beach fleas, animals more terrestrial than marine. True
marine life appears at the intertidal zone. Two common inhabitants, active at high tide,
are the lugworm, which burrows through the sand and feeds on organic matter; and the
coquina clam Donax, which advances up the beach and retreats with the tides. Among
the sand grains live small copepods and worms that feed on microscopic algae, bacteria,
and organic matter.
On the lower beach, which remains uncovered for only a short period of time, live
clams, crabs, starfish, and sand dollars, whose calcareous skeletons lie partially buried in
the sand.
f:\12000 essays\sciences (985)\Enviromental\Water Pollution is it as big of a problem as we think .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
WATER POLLUTION:
Is it as big of a problem as we think?
The following essay will be looking at the factors that cause pollution and the effect that pollution has on our world today. It will also investigate what the future has in store if things do not improve, and it will explore some of the methods used to treat and clean up wastewater and oil spills.
Today, the industrialization of Canada is severely affecting this nation's lakes, streams, and rivers. If something is not done to improve the situation, Canada is going to face some severe environmental problems in the future.
Today pollution is very high in both inland and marine waters. All different types of water pollution are contributing factors in this problem. Here are some things that are associated with pollution:
Pathogens: Pathogens are disease-causing bacteria, viruses, and protozoa. They usually come from human sewage. As pathogen numbers increase, so does the risk to human health.
Biochemical Oxygen Demand: Organic wastes that decay in a body of water decrease the amount of oxygen found in it. The living things in the lake need oxygen to survive; if the oxygen level is depressed to zero, all fish in the lake die. Decomposition that takes place without oxygen generates noxious gases such as hydrogen sulfide. Pulp and paper mills and municipal sewage are major causes of BOD.
Nutrients: Nutrients, particularly nitrogen and phosphorus, enrich waters and accelerate the aging of lakes and streams. The result is dense plant growth that interferes with recreational activities. Plankton blooms depress oxygen levels (as mentioned before) and therefore endanger living organisms. Major sources of nutrients are municipal sewage and agricultural runoff.
Toxic Materials: These can affect the health of aquatic organisms and their consumers, and of the people who drink the contaminated water. The toxicants include lead, mercury, DDT, PCBs, benzopyrene, oil, and dibutyl phthalate. These chemicals enter lakes through dumping by factories.
Temperature Changes: Waste heat discharges (such as from a nuclear power plant) can cause thermal pollution. This happens when the elevated temperature reduces dissolved-oxygen levels and accelerates eutrophication, which in turn affects ecological processes and blocks the migration paths of fish.
Acidification: Acidification (acid rain, etc.) is caused by sulfur and nitrogen oxides in the rain, which come from automobiles and large industries.
Because of these pollutants, southern Saskatchewan and Alberta are threatened by water shortages, and the Great Lakes face serious pollution problems. Rivers and streams are also greatly affected. The noticeable outcomes of these pollutants are these: nitrates in drinking water can cause disease in infants that may sometimes end in death. Crops in a field can absorb sludge-derived fertilizer containing cadmium, and when humans eat the crop it may result in acute liver and kidney damage. Sometimes lakes become artificially enriched with nutrients from the chemical fertilizers that run off cultivated fields into the water, which makes the water unpleasant to drink due to bad odor, taste, and algae. Also, acid rain has left many lakes in Canada totally devoid of life.
There are three major sources of water pollution: municipal, industrial, and agricultural.
Municipal: This type of water pollution comes from the wastewater of homes and commercial establishments. For many years people have placed importance on treating the waste to remove harmful bacteria and the like; more recently we are becoming aware that we also have to improve the ways in which we dispose of the waste.
Industrial: Industrial waste is wastewater from industrial areas and companies. It contains many different types of chemicals, and they all have different effects. Some are not as severe as others, but all are harmful; they vary according to the amounts of specific substances they contain.
Agricultural: Agricultural waste is the source of many organic and inorganic pollutants in groundwater and surface water. Wastes from commercial feedlots, animal wastes, chemicals, and so on reach the water through leaching and runoff.
What is the typical wastewater from these categories made up of? Wastes from toilets, sinks, industrial processes, and agricultural chemicals and leftovers. Treatment of such sewage is required before it may be buried, reused, or sent back into the water system safely. In a treatment plant, the polluted water is passed through a series of chambers, screens, and chemical processes to reduce its bulk and toxicity. There are three general stages of water treatment, usually classified as primary, secondary, and tertiary treatment.
Primary Treatment: During this stage, a large percentage of the suspended solids and inorganic material is removed from the sewage.
Secondary Treatment: The focus of secondary treatment is to reduce the organic material content. This is done by accelerating the natural biological processes.
Tertiary Treatment: This stage of treatment is necessary when the water will be reused. Here 99% of the solids are removed, and various chemical processes are used to ensure the water is as free of impurities as possible.
Within these three categories, the treatment process can be further broken down into a number of smaller steps. Here is a summary of some of them:
Grit Chamber: The wastewater that enters a treatment plant contains debris that might clog or damage the pumps and machinery, so the sewage is first passed through a grit chamber. Grit chambers are long, narrow settling tanks used to remove inorganic and mineral matter such as sand, silt, gravel, and cinders; they allow all particles 0.2 mm or larger to settle to the bottom.
Sedimentation Tank: After grit is removed, the sewage passes into the sedimentation tank. In this step organic materials settle to the bottom and are drawn off for disposal. This procedure can remove about 20 to 40 percent of BOD5 and 40 to 60 percent of suspended solids.
Digester: This is a complicated step. Its object is to convert the chemically complex organic sludge into methane, carbon dioxide, and an inoffensive humus-like material. First the matter is made soluble by enzymes, then the substance is fermented by a group of acid-producing bacteria, reducing it to simple organic acids such as acetic acid. The organic acids are then converted to methane and carbon dioxide by bacteria. Thickened sludge is heated and added as continuously as possible to the digester, where it remains for 10 to 30 days and is decomposed. Digestion reduces organic matter by 45 to 60 percent.
Drying Beds: Digested sludge is placed on sand beds for air drying. Percolation into the sand and air drying are the main processes involved in dewatering. Air drying requires dry, relatively warm weather for greatest efficiency, and some treatment plants have a greenhouse-like structure to shelter the sand beds. Dried sludge is used in most cases as a soil conditioner; sometimes it is used as a fertilizer because of its 2 percent nitrogen and 1 percent phosphorus content.
Trickling Filter: In this process, a waste stream is distributed intermittently over a bed or column. A gelatinous film of microorganisms coats the bed, and functions as the removal agent. The organic matter in the waste stream is absorbed by the microbial film and converted to aerobic products. The reduction of the amount of BOD5 is about 85 percent.
Activated Sludge: This is a process in which sludge particles are suspended in an aeration tank and supplied with oxygen. The organic matter is absorbed by the activated sludge particles and converted to aerobic products. The reduction of BOD5 fluctuates between 60 and 85 percent.
There are a few other remaining steps but they are pretty straightforward, such as disposing of the waste.
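To make the combined effect of these stages concrete, here is a minimal Python sketch that chains the removal percentages quoted above (20 to 40 percent of BOD5 in the sedimentation tank, 60 to 85 percent in the activated sludge process). The 200 mg/L starting strength and the assumption that the stage efficiencies apply multiplicatively to whatever BOD5 remains are illustrative assumptions added here, not figures from this essay.

    # Rough back-of-the-envelope sketch: chain the BOD5 removal percentages
    # quoted above to see how much organic load remains after primary
    # settling plus the activated sludge process. The influent strength and
    # the multiplicative treatment of the stages are assumptions.

    def remaining_bod5(initial_mg_per_l, stage_efficiencies):
        """Apply each stage's fractional BOD5 removal to whatever is left."""
        remaining = initial_mg_per_l
        for efficiency in stage_efficiencies:
            remaining *= (1.0 - efficiency)
        return remaining

    influent = 200.0  # mg/L, an assumed typical-order figure

    worst_case = remaining_bod5(influent, [0.20, 0.60])  # low-end removals
    best_case = remaining_bod5(influent, [0.40, 0.85])   # high-end removals

    print(f"BOD5 remaining: {best_case:.0f} to {worst_case:.0f} mg/L")
    # With these assumed numbers, roughly 18 to 64 mg/L of the original
    # 200 mg/L is left before any tertiary treatment.

Under the same assumptions, this also shows why tertiary treatment matters when the water is to be reused: even the best combination of the earlier stages leaves a measurable organic load behind.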
Sometimes the sewage is not cleaned or treated at all and is dumped directly into the river. This results in lakes and rivers that look brown and dirty rather than clean and clear. The pollution of rivers and streams with chemical contaminants has become one of the most critical environmental problems of the century. It is estimated that 10 million people die each year from drinking contaminated water!
Another big problem is oil spills. These large-scale accidental discharges of liquid petroleum products are an important source of pollution along shorelines. The most spectacular spills involve the supertankers used to transport the product, but offshore drilling also contributes a large share of the pollution. One estimate is that for every million tons of oil that is shipped, one ton is spilled. Some of the largest spills recorded are from the tanker Amoco Cadiz off the French coast in 1978 (1.6 million barrels of crude oil) and the Ixtoc I oil well in the Gulf of Mexico in 1979 (3.3 million barrels). The largest spill in the US (240,000 barrels) was that of the tanker Exxon Valdez in Prince William Sound, Gulf of Alaska, in March 1989. Within a week, under high winds, this spill had become a 6,700-sq.-km slick that endangered wildlife and fisheries in the entire gulf area. The oil spills in the Persian Gulf in 1983, during the Iran-Iraq conflict, and in 1991, during the Persian Gulf War, resulted in enormous damage to the entire area, especially to the marine life.
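The figures quoted above lend themselves to a rough order-of-magnitude check. The short Python sketch below estimates the average thickness of the Exxon Valdez slick and restates the one-ton-per-million-tons loss rate; the barrel-to-litre conversion and the assumption that the oil spread evenly over the reported area are added here for illustration and do not come from the essay.

    # Order-of-magnitude check on the Exxon Valdez figures quoted above.
    # Assumed for illustration: 1 barrel ~ 159 litres, and the oil is spread
    # evenly over the reported slick area.

    BARREL_LITRES = 159.0

    spilled_barrels = 240_000
    slick_area_km2 = 6_700

    spilled_m3 = spilled_barrels * BARREL_LITRES / 1000.0  # litres -> cubic metres
    area_m2 = slick_area_km2 * 1e6

    avg_thickness_m = spilled_m3 / area_m2
    print(f"Average slick thickness: about {avg_thickness_m * 1e6:.0f} micrometres")
    # Roughly 6 micrometres: a film thinner than a human hair, covering 6,700 sq. km.

    # The "one ton spilled per million tons shipped" estimate is a loss rate of:
    print(f"Shipping loss rate: {1 / 1_000_000:.4%}")  # 0.0001 percent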
One of the methods used to clean up oil spills is a long sponge that is dragged along the surface of the water to soak up the oil. It is a long and tedious job, but it works quite well.
The wastes treated are the ones that flow into the sewer system and enter the treatment plant, but some wastes are discharged directly into marine waters. In fact, in the U.S. it is estimated that 45 million metric tons of waste end up in marine waters each year. About 80 percent of this amount is produced by dredging, 10 percent is industrial waste, and 9 percent is sewage sludge.
The U.S. alone produces 4,036,300,000 metric tons of sewage per YEAR! The following elements combined have very serious consequences: The presence of toxic substances, the rapid uptake of contaminants by marine organisms, heavy deposits of materials on the seabed near the shore, and the excessive growth of undesirable organisms.
A person living in a wealthy, industrialized nation may produce as much as 875 kg (more than 1,900 pounds!) of garbage per year. That means that by the time they are seventy years old they will have produced 61,250 kg (about 133,000 pounds!). These numbers are pretty scary, especially when multiplied by millions to estimate the whole country's output per year. The average person needs to pay attention to the amount of waste they produce and try to cut back a little. But what exactly needs to be improved? Domestic, or household, waste includes a wide variety of items, but it is often a mix of potentially reusable or recyclable items (such as newspaper and cans) and largely non-recyclable material (such as broken, worn-out devices and plastic wrappings). Due to dwindling space for landfills, many cities have adopted widespread recycling programs in which people separate the recyclable things from the non-recyclable garbage; the recyclables go to a recycling plant to be reused, and the remaining garbage goes to a landfill.
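A quick sanity check of these per-person figures, done in Python; the kilograms-to-pounds conversion factor is an added assumption, since the essay only quotes the rounded results.

    # Sanity check of the per-person garbage figures quoted above.
    # The 2.20462 lb/kg factor is an added assumption.

    KG_TO_LB = 2.20462

    per_year_kg = 875
    years = 70

    per_year_lb = per_year_kg * KG_TO_LB   # ~1,929 lb, i.e. "more than 1,900 pounds"
    lifetime_kg = per_year_kg * years      # 61,250 kg, as stated
    lifetime_lb = 1_900 * years            # 133,000 lb, matching the essay's rounding

    print(f"{per_year_lb:.0f} lb per year; {lifetime_kg:,} kg "
          f"(about {lifetime_lb:,} lb) over {years} years")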
What is the composition of wastewater, and how do we discover it? The composition is what it is made up of: how many parts are water and what the remaining parts consist of. To find out, scientists analyze it using several physical, chemical, and biological measurements. The most common analyses include measurements of solids, biochemical oxygen demand (BOD5), chemical oxygen demand (COD), and pH. The solid wastes include dissolved and suspended solids; the suspended solids are further divided into settleable and nonsettleable solids, and all of these solids can be divided into volatile (organic) or fixed (inorganic or mineral) matter. The concentration of organic matter is measured by the BOD5 or COD analyses.
After the water is treated, where does it go? Most often it is discharged directly into a receiving lake or stream. But in some areas of the United States that are faced with worsening shortages of water for both domestic and industrial use, communities are turning to the reuse of appropriately treated wastewater for groundwater recharge, irrigation of non-edible crops, industrial processing, recreation, and other uses.
Where does the wastewater originate? Mainly from domestic, industrial, groundwater, and meteorological sources, most commonly known as domestic sewage, industrial waste, infiltration, and storm-water drainage.
Domestic Sewage: This type of wastewater results from people's day-to-day activities, such as bathing, body elimination, food preparation, and recreation, and averages about 227 liters (about 60 gallons) per person per day!
Industrial Wastewater: The quantity and character of industrial wastewater are highly varied, depending on the type of industry, the management of its water usage, and the amount of treatment the water receives before it is discharged. To give an idea of the amounts involved, consider a steel mill.
One steel mill may discharge anywhere from 5,700 to 151,000 liters (about 1,500 to 40,000 gallons) per ton of steel manufactured. A typical metropolitan area discharges a volume of wastewater equal to about 60 to 80 percent of its total daily water requirements; the rest is used for washing cars, watering lawns, and manufacturing processes such as food canning and bottling.
Infiltration: This occurs when sewer lines are placed below the water table or when rainfall percolates down through the earth to the pipe. It is undesirable because it means that the piping system and the treatment plant have to handle extra flow.
Storm-Water Drainage: This is simply the water from rain, melted snow, and the like draining into our pipelines and sewers, where it goes to a treatment plant to be treated even though there is nothing wrong with it.
In conclusion, from all the points I have brought up it is easy to see that the more people there are in the world, the more water pollution there is going to be. That doesn't mean that we have to stop having children; what it means is that we have to start watching where we drain our polluted water and start to use our resources more wisely. We should also be more careful with hazardous chemicals and activities like oil drilling.
f:\12000 essays\sciences (985)\Enviromental\Water Pollution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
WATER POLLUTION
Water pollution has affected many people and animals. Water pollution is the disposal of garbage and other wastes into a body of water. Some water pollution comes from littering, some from chemical leaks, and some from ships. There is already much information about water pollution; I am going to take that education a step further and explain how water pollution affects us, how it affects marine life, which industries affect it the most, and what people are doing to help.
There are many causes of water pollution. The main one is plastics. The reason is that plastics take four hundred and fifty years to decompose in the water, many companies use plastic, and people throw it into the waterways. Because plastic can float and be carried by the wind, it can cause harm to unsuspecting creatures hundreds of feet from where it was originally dumped. Such waste includes bags, bottles, cups, straws, cup lids, utensils, six-pack holders, cling wrap, fishing line, bait bags, and floats.
The second highest cause of water pollution is ship waste. Ships used to carry much of their garbage with them and dump it at sea. This was very common until the government took action, giving sailors fines of up to one million dollars for disposing of waste. Because of that, ships now carry less garbage with them.
Animals are not the only things harmed by water wastes: fishing lines, rope, and plastic nets get caught in ships' rudders and engines, so the ships themselves are not exactly safe either.
The other main cause of water pollution is industrial waste. Industries are not much harmed by water pollution, but they cause much of it. Many companies pour chemicals into the waterways. Some of the businesses that contribute to water pollution are those that repair and maintain motor vehicles, electroplate, operate printing and copying equipment, perform dry cleaning and laundry services, process photographs, operate labs, build and construct roads, provide pest control, preserve wood, and make furniture.
Water pollution doesn't just affect humans; it affects our whole ecosystem. Birds and marine life are affected by it. More than fifty species of birds are known to ingest plastic. When they eat plastic they feel full, so some of them die of starvation. Algal blooms are another thing that kills marine life. Algal blooms are known as sea scum, whale food, and sea sawdust. They are bundles of fine threads, rusty brown in color, have a fishy smell, and are common from August through December.
Water is the main source of our life. We need it for drinking, bathing, recreation, manufacturing, and power. We need water for almost everything; if we don't start cleaning up, we will be in big trouble. Many families dispose of chemicals every day. It affects us drastically, and we depend on the water being clean.
Right now the government is fining people for illegal dumping. But that is all the government is doing.
People in cities are organizing water pollution groups. A lot of people are producing fliers and handing them out, asking people to adopt a waterway. In Australia they had a national clean-up day and went out to the ocean and cleaned it up. I think the people are taking this more seriously than the government.
We need to start cleaning up the water or we will be in big trouble. The government needs to get active, and so does the public.
In some places water pollution is a main concern. The last defense against water pollution is water treatment. There are two main reasons for water treatment: the first is to protect the public's health, and the second is to protect water quality. Most of the wastewater comes from industries, homes, businesses, storm runoff, groundwater, and schools. Sludge is also treated to remove some of its water and is then further processed by stabilizing, dewatering, and disposal.
If more effort isn't made, the human race will die. Eventually all the water will be contaminated and undrinkable. We will have no place to bathe or do anything else. We will have to do something soon or else we will not make it.
f:\12000 essays\sciences (985)\Enviromental\Weather Forecasting.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Weather Forecasting
In researching this project I was amazed to find the many books on this
topic. After going through much information and reading an enormous amount
of writing on weather forecasting, I can only come to one conclusion: when
all is considered, the best forecasters can only give an educated guess of
what is in store for the weather. Even with the many means at their
disposal, such as satellites, ships at sea, and infrared, radio, and radar
transmissions, no prediction is 100% accurate.
One question that I asked myself was, "When was the first weather
forecasting ever done?" I found out that in 1863 in Britain there was a
unified forecasting system headed by Captain Robert Fitzroy. Captain
Fitzroy would send ships around Britain to warn people of storms and the
like. However, he was often wrong and criticized, and eventually committed
suicide. Since then there have been many other services, but the largest
one currently is the National Weather Service, which gives predictions
through satellite imagery for countries all over the world. Also, in
recent history many local television and radio stations have made private
forecasts for small areas.
Meteorologists are people who interpret the weather. The reason I don't
say "predict" the weather is that even though all forecasters have the
same information and data at their fingertips, the way they interpret what
is in front of them can differ. Meteorologists receive information from
various sources, but their interpretation of the data determines the
accuracy of their prediction.
Someone might ask, "If forecasters have so much information on a
particular area, how could they produce a flawed forecast?" The answer to
that question lies in the fact that any one of a number of weather
conditions may ruin a forecast. A fast-moving cold or warm front, an
unexpected flow from the ocean, or a cold wind may change the whole day's
forecast.
There are many different materials and devices used by local and
government services to predict the weather. One of these devices is radar,
which uses radio waves that bounce off precipitation in clouds and give
the location of storms that way.
Another such device is actually a variation of radar called Doppler radar,
which can give the exact location of a storm to within a kilometer.
However, Doppler radar is not used so much for everyday forecasting as for
tornadoes and very large storms. Doppler radar works almost the same way
as regular radar with one advantage: it can also measure the speed of an
object or storm, which makes its prime usage tornado watching.
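As a rough illustration of the principle, not something taken from this
essay, the radial speed of a storm can be recovered from the frequency
shift of the returned radar pulse. In the short Python sketch below, the
transmit frequency and the measured shift are made-up example numbers.

    # Illustrative sketch of the Doppler principle behind Doppler radar:
    # a target moving toward the radar shifts the returned frequency by
    # df ~ 2 * v * f_tx / c, so the radial speed is v = df * c / (2 * f_tx).
    # The numbers below are made-up examples, not measurements.

    C = 3.0e8  # speed of light, m/s

    def radial_speed(freq_shift_hz, transmit_freq_hz):
        """Radial speed of a target from the two-way Doppler shift (v << c)."""
        return freq_shift_hz * C / (2.0 * transmit_freq_hz)

    f_tx = 3.0e9   # assumed 3 GHz transmit frequency
    df = 1000.0    # assumed 1 kHz measured frequency shift

    print(f"Radial speed: {radial_speed(df, f_tx):.0f} m/s")  # 50 m/s here

This sensitivity to motion, and not just position, is what makes Doppler
radar suited to watching tornadoes and severe storms.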
Some other techniques have also been adapted to forecast the weather, such
as infrared imaging, which even at night can show where the intensity of a
storm lies. And of course there are older instruments used to predict the
weather, such as the barometer and the thermometer.
Although all of these inventions have proven helpful for forecasting
weather, there was still one problem. The main problem was communication.
If forecasters had to send letters to each other every time there was a
storm, their counterparts would not learn of a storm or tornado for days!
The solution to this problem began with the invention of the telegraph.
The telegraph provided a simultaneous message carrier to anywhere in the
country. Later the radio was invented and then that was used, but
something else was still needed: a system to transmit images as well as
sound.
That need was met by the Internet, a connection over phone lines that can
deliver photos and sound instantaneously.
The next breakthrough in forecasting was the satellite. A satellite would
be launched from Earth, take video and photos of the world, and send the
footage back to Earth, thereby making it possible to show storms coming in
from the ocean at their earliest stages.
The first weather satellite, T.I.R.O.S. 1, gave the world an infrared view
of the planet. However, T.I.R.O.S. 1 was not specifically built for
forecasting but rather to study clouds. The U.S. government later went on
to build 7 more T.I.R.O.S. satellites.
The first weather satellite truly devoted to weather forecasting was
E.S.S.A. 1, which provided detailed data pictures, and its successor,
E.S.S.A. 2, provided pictures of the world from a regular wide-angle lens.
Even though E.S.S.A. and T.I.R.O.S. gave birth to a new generation of
technological breakthroughs, by today's standards the information they
gave was fuzzy and incomplete. Later a new satellite was built in the
image of T.I.R.O.S., called I.T.O.S., which stood for Improved T.I.R.O.S.
Operating System.
Recently many local forecasting stations have been popping up all over the
world, a big difference from the once-exclusive N.W.S. (National Weather
Service). Most towns now have non-governmental forecasting stations, which
provide weather information for suburban locations and areas such as ski
resorts and holiday vacation spots, one thing the N.W.S. does not do.
The National Weather Service has been in operation for over 100 years,
since 1870, and has kept all its weather records, thereby making it
possible to compute an average for any given day with decent results.
In closing I can only surmise that much research has been done on
predicting weather accurately, and millions of dollars have been spent on
satellites, radar, and weather bureaus. Meteorology, which is the study of
weather, is an exact science, yet because it deals with the forces of
nature, the essentials of a weather prediction will never be entirely
accurate. Or will they...?
f:\12000 essays\sciences (985)\Enviromental\What is Satanism .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Satanism is the religion of the flesh. Happiness, to the Satanist, must be found here and now. No heaven exists to go to after death, and no Hell of burning punishment awaits the sinner. Strongly attached to their families and close associations, Satanists make excellent friends. Satanists do not believe that you can love everyone and treat every person the same; by failing to hate, you make yourself unable to love. Feared by their enemies and loved by their friends, Satanists build their stronghold in the community. The term occult means "hidden" and refers to teachings or information that is unknown, knowledge gained beyond the five senses; such knowledge is therefore received through some supernatural involvement or connection. Anton LaVey of the first Church of Satan in San Francisco, California, says that "Satanism is a blatantly selfish brutal religion. It is based on the belief that man is inherently a selfish, violent creature... that the earth will be ruled by those who fight and win." Satanism challenges the biblical teachings regarding man's relationship to others. Young Satanists believe that the strong will rule with Satan. Power has become an obsession with young Satanists; it is sought after on the physical, mental, and spiritual levels.
Gaining knowledge that others do not possess is another aspect of the occult. When an individual has more knowledge, it allows them a degree of power over those who do not have access to that knowledge. The Ouija Board has proven particularly useful. The Ouija Board is an instrument for communication with the spirits of the dead. The Ouija Board is an open door into the world of the occult and demonic activity. Disembodied spirits speak to the living through the medium of the Ouija Board. This information is believed to be truth from the other side; Lucifer's delusion to gain our allegiance. Most cases of possession involve people who have used the Ouija Board. The Ouija Board is the easiest way to become possessed. The greatest danger of the Ouija is that an individual begins to place his trust and future hope in the message the board brings. Christians can offer several reasons as to why one should not be involved in the use of the Ouija Board. One is simply that the Bible condemns it as involvement in the occult. Then there is the fact that the message received is often false and misleading. According to scripture (Matt. 4:9, Rev. 12:19), "Satan's goal is to deceive man by blinding him to the truth of the gospel and to receive worship for himself." Satan desires to alter an individual's values and turn them against themselves, their beliefs, family, God and society.
Initiation plays a major role in group activity. Through initiation an individual is given a chance to declare total allegiance to Satan by participation. Often one will sever a portion of a finger or a toe to indicate their commitment. Other acts include being a participant in a ritual where mutilation of an animal or person is part of the activity. In some cases a criminal act is perpetrated in which the initiate plays a key role. An unholy communion of sorts is taken during the initiatory rituals, where a cup or chalice (usually stolen from a church) is used containing a mixture of wine, blood (human or animal) and urine. Other methods of initiation include body markings. An inverted cross may be burned into one's forearm or chest. Commonly used markings include a goat's head, inverted crosses, skulls, pentagrams, mena (Amen), or a black rose. Anton Szandor LaVey formed the Church of Satan in 1966. LaVey, the author of the Satanic Bible, is probably the most common source of satanic ritual and understanding available. Modern Satanism really begins with Anton Szandor LaVey. On Walpurgisnacht, April 30, 1966, he created the Church of Satan. Anton drew on his previous experience as a lion tamer and sideshow barker, and on his readings in psychology, magick, etc., and wrote the Satanic Bible in 1966. It can be found in most large bookstores. The Satanic Bible has sold more than 600,000 copies since it was published by Avon Books in 1969. This was followed by the Compleat Witch (1970), later republished as the Satanic Witch, and the Satanic Rituals in 1972. These are essentially the only available books that accurately describe Satanism. At the core of the Church are nine Satanic statements written by Anton LaVey; they state that Satan represents:
1. indulgence, not abstinence
2. vital existence, not spiritual pipe dreams
3. undefiled wisdom, not hypocritical self-deceit
4. kindness to those deserving it, not love wasted on ingrates
5. Vengeance, not turning the other cheek
6. Responsibility to the responsible, instead of concern for psychic vampires
7. Man as just another animal, the most vicious of all
8. Gratification of all one's desires
9. The best friend the Christian Church has had as he kept it in business for centuries
LaVey's rituals and ceremonies are pageants used to celebrate a person or an element of faith. Magick rituals are of three types: sex magic (including masturbation), healing or happiness rituals, and destruction rituals. Destruction rituals may include sticking pins in a doll, drawing a picture or writing a description of a victim's death, delivering a soliloquy, etc. Destruction rituals are best performed by a group. Male Satanists wear full-length black robes, with or without a hood. Young women wear sexually suggestive clothing; older women wear all black. All Satanists wear amulets with a symbol of
Baphomet, a goat's head drawn within an inverted pentagram. When the Satanic Bible was written in 1969, a nude woman was customarily used as an altar, since Satanism is regarded as the religion of the flesh, not the spirit. A live altar is rarely used today. One candle is placed to the right of the altar; it symbolizes the Satanists' belief in the hypocrisy of "White Magicians" who insist on doing no harm to others. At least one black candle, representing the Power of Darkness, is placed to the left of the altar. A bell is rung nine times at the beginning and at the end of a ritual. The Satanic priest rotates counter-clockwise as he rings the bell.
A chalice is ideally made of silver; it may not be made of gold, because that is a metal Wiccans use, and Satanists want to distance themselves as much as possible from Wicca. Other ritual tools include a gong, sword, elixir (usually wine), phallus, and parchment. They, along with the chalice and bell, are placed on a small table near the altar. The language used during magical ritual is Enochian, whose words variously sound similar to Arabic, Hebrew or Latin. Its origin is unknown.
The Church of Satan's rules of behavior include:
· Prayer is useless; it distracts people from useful activity
· Enjoy indulgence instead of abstinence; practice with joy all seven of the deadly Christian sins (greed, pride, envy, anger, gluttony, lust and sloth).
· If a man smites you on one cheek, smash him on the other
· Do unto others as they do unto you
· Engage in sexual activity freely
· Suicide is frowned on
The Satanist needs no elaborate, detailed list of rules of behavior.
Most religions, like Christianity, Hinduism, and Islam, have well-defined rules of behavior. Satanism is the exception.
f:\12000 essays\sciences (985)\Enviromental\Wolves Majestic and Maligned.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Destiny, perhaps from the very beginning, claimed the wolf as a symbol. Has any other animal stirred human passions
the way the wolf has? Its haunting howl, its incredible stamina, its brilliant eyes, and its superiority as a predator
all have been reviled as nefarious, and even demonic, traits. Ironically, these same characteristics have also been
revered as belonging to a majestic, and sometimes spiritual, creature - a symbol of the magnificent, untamed wilderness.
In truth, the wolf is neither evil nor exceptionally good - neither demon nor god. Wolves are simply predators.
Their role as a predator must not be reduced, however, to that of savage killer. Wolves, like humans, need to eat to
survive. In this process, wolves also provide a service: they help preserve nature's delicate balance by keeping herds
of deer, elk, moose, and other large mammals in check, as well as keeping these populations strong and genetically
viable by preying on the weak and sick.
Both the idealized wolf and the demonic wolf are creations of the human mind. It is not easy to transcend the image
of the Big Bad Wolf that has filled our myths and legends, but if we know only this wolf we do not truly know the wolf
at all. And what we do not know, we fear. Our fear is perhaps the greatest threat to the survival of the wolf, for it
causes us to react rather than act, to repel rather than respect. But this fear and hatred did not always separate man
and beast.
Man the hunter once looked on the wolf the hunter with admiration. Man and wolf both used their keen intelligence
to overcome the disadvantages they faced in their day-to-day existence. Survival for both was enhanced by hunting and
living in groups or packs. And, at one time, the chance of survival for each was also increased by following, learning
from, and adapting the skills of the other to its own advantage.
As long as man's daily living was earned primarily as a hunter, he knew a respect for wolves, and coexistence was
relatively peaceful. Eventually, man and wolf took up together in a process of domestication that brought a different
meaning to their coexistence. Even while those early ancestors of man's best friend enjoyed this new relationship, the
wolves that did not come in from the cold were beginning to be cast in a different and less favorable light, for the
dog was not the only animal toward whom man turned his attention in the early days of animal husbandry. Some ten
thousand years ago, man discovered great value for himself in domesticating animals such as cattle and sheep - it was
far easier to herd sufficient numbers of animals to supply adequate food than to hunt them.
Man left the forest for the field, and the wilderness became a vast and frightening entity. While the domesticated
dog was soon pressed into service to guard these herds of goats, cattle, and sheep, his cousin the wolf was now seen as
a threat and an enemy. The wolf, again a symbol, stood not for majestic, bountiful wilderness, but rather for foreign,
untamed wilderness that must be conquered.
f:\12000 essays\sciences (985)\Enviromental\World Hunger.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"World Hunger"
*** Warning: the following is a look at world hunger with which some people may disagree; if you would like a non-partisan look at world hunger, then keep reading ***
Hunger is an issue to which many people attach little importance. I'm going to give you a look at world hunger as a picture of poverty, at how it affects Third World nations, and at how world hunger is a disease that is plaguing our society.
"Food is more than a trade commodity," pleaded Sir John Boydorr in 1946. "It is an essential to life." The first director-general of the new Food and Agriculture Organization of the United Nations, Boydorr fruitlessly proposed plans for a World Food Board to protect nations and people from hunger in the world market system. That market system does not distribute food on the basis of nutritional need. This is one of the most troubling and complex realities of the world hunger problem. During recent famines in Ethiopia, in another example of the workings of the marketplace, foreign food aid begins trucked to famine areas from ships at the docks passed food leaving the famine areas on other vehicles. Merchants were taking food from famine areas to parts of the country where there was no famine. World Hunger and poverty can be seen in many ways. But first lets establish a solid definition of poverty : Poverty is a state in which the ability of individuals or groups to use power to bring about good for themselves, their families, and their community is weakened or blocked. When someone lacks food, this is referred to as material poverty. This sort of poverty can hurt people in many ways, it can hurts people's self esteem and it can also hurt their outlook on life. Lets say you come home from work to see your family, instead of seeing a family which is happy because it has a roof over its head you come home to see that your children don't have enough food on the table to keep them properly nourished. This hurts familys and tears some of them apart. It is also just a very cruel punishment because after a while of being hungry, you start to starve to death and when you starve, the body just starts to eat itself up to find the nourishment it needs. It can also effect people's outlook on life and on people in a major way. People who are denied food can start to hate life and everyone around them. There's also two instincts in life that will always kick in when your hungry: The survival instinct which is to survive no matter what the situation is and the instinct to provide food for your family. I am not a father myself, but I talked to my own dad about this. My own father said that if I was ever starving that he would do anything it takes to make sure that I have the proper nourishment. Im not sure my father would go this far, but I know there are some fathers that would reach a point to where they would even kill for food. This feed the monster we call the "spiral of violence" and helps to encourage it grow. Because we have in the long run, poor people stealing and sometimes killing for no reason. They shouldn't have to steal or kill for food, they should at least have enough food to eat to survive. In fact, there is enough food grown in the world to supply ever man with thirty-six thousand calories a day, this is enough to cause weight gain. In my opinion, I feel some governments want their people to be hungry. Babies are being born every second, the population is starting to get out of control, there are more and more people in the world, I think some governments are trying to use their control on food to control the population by starving their people. There are a few other solutions to World Hunger, for instance, one is Food Security. The opposite of being threatened by hunger is to have food security. 
The two basic elements of food security are, first, a regular food supply large enough to meet human nutritional needs and, second, access to a supply of food in sufficient quantity at all times to maintain a healthy, active life. The world hunger problem is the absence of food security for 10 percent to 20 percent of the world's population. A steady food supply could come from richer countries.
Hunger affects Third World nations in many ways as well. A Third World nation is a developing country; Ethiopia, for example, could be considered a Third World country. Currently, around fifteen to twenty million people die each year of hunger-related causes, including diseases brought on by lowered resistance due to malnutrition. Over 40 percent of all deaths in poor countries occur among children under five years old. There are seven main reasons why poor countries can't provide food for their people:
1) The demand for exports: Wealthy countries demand products like coffee, sugar, lumber, and grains from poor countries. So a poor country's economy moves away from providing food and resources for its own people.
2) Government by the elite: Many of the world's poorest and hungriest nations are controlled by governments consisting of the nations' wealthiest members. Government policies tend to favor the interests of the wealthy rather than the poor.
3) Conditions on aid programs: Aid that is offered by wealthy countries to poor countries often comes with strings attached. For example, a wealthy country might give a poor country money, but require the poor country to use the money to build roads, bridges, and airports. This in no way helps the poor country provide food for its people.
4) Debt payments and conditions: Most poor countries are deeply into debt to banks. Paying interest on loans takes money from programs that could help eliminate hunger.
5) Discrimination: Ethnic, religious, and gender discrimination within poor countries also cause poverty. Religious and ethnic minorities are likely to be more impoverished than other groups, and with no money comes no food.
6) Arms sales: Military spending by poor countries is encouraged by rich countries that make and sell arms. Because of this, valuable money which could go to help buy food is spent on weapons. The spiral of violence is also fueled by the availability of weapons, which increases repression and adds to poverty and hunger.
7) Abuse of land and other resources: The abuse of a poor country's resources leads to environmental degradation. Damaged land cannot be effectively farmed for local food production. These are seven reasons which contribute to the hunger in some Third World countries. A lot of hunger is caused by the military, so even when countries aren't at war they can cause death just by being selfish. The United States does offer help to some of these countries, but one might wonder why the U.S. doesn't give more. Well, to ally themselves with the interests of the poor, groups such as U.S. AID would have to support those groups throughout the Third World that are confronting the issue of power - the issue of control over resources. To do so would pit these groups against the interests of the powerful who dominate most governments in the world today. To do so would go against the "formidable lobbyists of multinational corporations". To do so would be to risk supporting democratic economic alternatives abroad that might lead more Americans to question how just their own economic system has become. For these reasons, no U.S. government group is about to do so. This is why we conclude that agencies of the U.S. government are incapable of arriving at a correct answer about the root causes of hunger - an answer that puts control over resources in the central position. Currently, the United States is trying to remove some obstacles which contribute to world hunger; for instance, it tries to send as much aid as it can to hungry nations. But again, are we really sending as much aid as we could?
Where there is hunger, the path to a workable solution starts by asking what normally entitles people to a share of food in the "particular social context" in which they are living. Well, I say they are automatically entitled to a share of food because they are humans who were given life by God, and that is our guarantee to food. Hunger can also be seen as a disease that is plaguing our society. The world currently spends $550 billion a year producing, buying, and selling military weapons. The world currently spends about $22 on military purposes for every dollar it spends on official development aid. One out of every four human beings has no access to safe drinking water. What the world spends in half a day on military purposes could finance the entire malaria eradication program of the World Health Organization. The price of one military tank vehicle could provide improved storage facilities for 100,000 tons of rice. Every day, the world produces two pounds of grain for every man, woman, and child on earth. That is enough to provide everyone 3,000 calories a day, well above the recommended daily minimum of 2,300 calories. What am I trying to prove with these facts? I am trying to prove that as human beings, and since we are all God's children, we should be worried about giving each other support so we can all survive. Instead we are worried about who will have the most tanks or guns. If we stopped spending so much money on military expansion, we could easily provide every man, woman, and child in the world with enough food to get fat! Hunger is plaguing our society; it is a disease that kills many people every year, and the sad thing is that so many people in rich countries like the United States and Japan will never know the truth about world hunger. Every day I can see America indirectly making the situation worse and worse. I flip on the television and see commercials about starving children that show little kids lying on a bed starving to death. I find this very disrespectful to the people who are really starving; they shouldn't be shown like animals on a TV screen, they should be given dignity. They shouldn't be shown on TV at all; I am sure they are not proud of the way they look and probably wouldn't want to be on TV. We should stop trying to have the biggest military force and start trying to live like Jesus would want us to and help our fellow brothers in Christ.
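The grain figure quoted above can be sanity-checked with a little arithmetic. The sketch below assumes a rough energy density of about 1,500 calories per pound of grain - a hypothetical round number, not a figure from the essay; with it, two pounds per person per day works out to roughly the 3,000 calories cited.

# Rough check of the grain-to-calories claim (assumed energy density).
CALORIES_PER_POUND_OF_GRAIN = 1500   # assumption: approximate value for dry grain
pounds_per_person_per_day = 2

calories_per_person_per_day = pounds_per_person_per_day * CALORIES_PER_POUND_OF_GRAIN
print(calories_per_person_per_day)          # 3000
print(calories_per_person_per_day >= 2300)  # True: above the stated daily minimum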
Due to many self-centered, greedy people, we have fellow humans starving to death. This can't keep going on, because every time someone starves we are not just hurting that person but also hurting ourselves. We all live in the world as one race with different sections, the sections being the different nationalities we have in the world. And whenever one division gets hurt, the whole gets weakened. We need to depend on each other to survive healthily from day to day. It is true that poverty is a main cause of world hunger, but it isn't the only cause. If the economy were serving the people, and not the other way around, then more people would have the money needed to buy food to live from day to day. And if greedy governments gave some of the people money or food, they would have money to buy food. If the military stopped using so much money to make machines that kill, there would be more money for people to buy food with. And if more people cared, there would be a lot fewer starving people in this world. If this hunger doesn't end, I can see a very pathetic world in our future.
f:\12000 essays\sciences (985)\Genetics\AnimalTesting.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Animal Testing
Using animals for testing is wrong and should be banned. They
have rights just as we do. Twenty-four hours a day humans are using
defenseless animals for cruel and most often useless tests. The
animals have no way of fighting back. This is why there should be new
laws to protect them. This legislation also needs to be enforced more
consistently. Too many criminals get away with murder.
Although most labs are run by private companies, often
experiments are conducted by public organizations. The US government,
Army and Air force in particular, has designed and carried out many
animal experiments. The proposed experiments were engineered so that
many animals would suffer and die without any certainty that this
suffering and death would save a single life, or benefit humans in
any way at all; but the same can be said for tens of thousands of other
experiments performed in the US each year. Limiting it to just
experiments done on beagles, the following might shock most people: For
instance, at the Lovelace Foundation, Albuquerque, New Mexico,
experimenters forced sixty-four beagles to inhale radioactive Strontium
90 as part of a larger "Fission Product Inhalation Program" which began
in 1961 and has been paid for by the US Atomic Energy Commission. In
this experiment, twenty-five of the dogs eventually died. One of the
deaths occurred during an epileptic seizure; another from a brain
hemorrhage. Other dogs, before death, became feverish and anemic, lost
their appetites, and had hemorrhages. The experimenters in their
published report, compared their results with that of other experiments
conducted at the University of Utah and the Argonne National Laboratory
in which beagles were injected with Strontium 90. They concluded that
the dose needed to produce "early death" in fifty percent of the sample
group differed from test to test because the dogs injected with
Strontium 90 retain more of the radioactive substance than dogs forced
to inhale it. Also, at the University of Rochester School Of Medicine
a group of experimenters put fifty beagles in wooden boxes and
irradiated them with different levels of radiation by x-rays.
Twenty-one of the dogs died within the first two weeks. The
experimenters determined the dose at which fifty percent of the animals
will die with ninety-five percent confidence. The irradiated dogs
vomited, had diarrhea, and lost their appetites. Later, they
hemorrhaged from the mouth, nose, and eyes. In their report, the
experimenters compared their experiment to others of the same nature
that each used around seven hundred dogs. The experimenters said that
the injuries produced in their own experiment were "Typical of those
described for the dog" (Singer 30). Similarly, experimenters for the
US Food and Drug Administration gave thirty beagles and thirty pigs
large amounts of Methoxychlor (a pesticide) in their food, seven days a
week for six months, "in order to insure tissue damage" (30). Within
eight weeks, eleven dogs exhibited signs of "abnormal behavior,"
including nervousness, salivation, muscle spasms, and convulsions.
Dogs in convulsions breathed as rapidly as two hundred times a minute
before they passed out from lack of oxygen. Upon recovery from an
episode of convulsions and collapse, the dogs were uncoordinated,
apparently blind, and any stimulus such as dropping a feeding pan,
squirting water, or touching the animals initiated another convulsion.
After further experimentation on an additional twenty beagles, the
experimenters concluded that massive daily doses of Methoxychlor
produce different effects in dogs from those produced in pigs. These
three examples should be enough to show that the Air force beagle
experiments were in no way exceptional. Note that all of these
experiments, according to the experimenters' own reports, obviously
caused the animals to suffer considerably before dying. No steps were
taken to prevent this suffering, even when it was clear that the
radiation or poison had made the animals extremely sick. Also, these
experiments are parts of series of similar experiments, repeated with
only minor variations, that are being carried out all over the
country. These experiments do not save human lives or improve them in
any way. It was already known that Strontium 90 is unhealthy before
the beagles died; and the experimenters who poisoned dogs and pigs with
Methoxychlor knew beforehand that the large amounts they were feeding
the animals (amounts no human could ever consume) would cause damage.
In any case, as the differing results they obtained on pigs and dogs
make it clear, it is not possible to reach any firm conclusion about
the effects of a substance on humans from tests on other species. The
practice of experimenting on non-human animals as it exists today
throughout the world reveals the brutal consequences of speciesism
(Singer 29).
In this country everyone is supposed to be equal, but
apparently some people just don't have to obey the law. That
is, in New York and some other states, licensed laboratories are immune
from ordinary anticruelty laws, and these places are often owned by
state universities, city hospitals, or even The United States Public
Health Service. It seems suspicious that some government-run
facilities could be "immune" from their own laws (Morse 19). In
relation, "No law requires that cosmetics or household products be
tested on animals. Nevertheless, by six o'clock this evening, hundreds
of animals will have their eyes, skin, or gastrointestinal systems
unnecessarily burned or destroyed. Many animals will suffer and die
this year to produce "new" versions of deodorant, hair spray, lipstick,
nail polish, and lots of other products" (Sequoia 27). Some of the
largest cosmetics companies use animals to test their products. These
are just a couple of the horrifying tests they use, namely, the Draize
Test. The Draize test is performed almost exclusively on albino
rabbits. They are preferred because they are docile, cheap, and their
eyes do not shed tears (so chemicals placed in them do not wash out).
They are also the test subject of choice because their eyes are clear,
making it easier to observe destruction of eye tissue; their corneal
membranes are extremely susceptible to injury. During each test the
rabbits are immobilized (usually in a "stock", with only their heads
protruding) and a solid or liquid is placed in the lower lid of one eye
of each rabbit. These substances can range from mascara to aftershave
to oven cleaner. The rabbits' eyes remain clipped open. Anesthesia is
almost never administered. After that, the rabbits are examined at
intervals of one, twenty-four, forty-eight, seventy-two, and one
hundred and sixty-eight hours. Reactions, which may range from severe
inflammation, to clouding of the cornea, to ulceration and rupture of
the eyeball, are recorded by technicians. Some studies continue for a
period of weeks. No other attempt is made to treat the rabbits or to
seek any antidotes. The rabbits who survive the Draize test may then
be used as subjects for skin-inflammation tests (27). Another widely
used procedure is the LD-50. This is the abbreviation of the Lethal
Dose 50 test. LD-50 is the lethal dose of something that will kill
fifty percent of all animals in a group of forty to two hundred. Most
commonly, animals are force-fed substances (which may be toothpaste,
shaving cream, drain cleaner, pesticides, or anything else they want to
test) through a stomach tube and observed for two weeks or until
death. Non-oral methods of administering the test include injection,
forced inhalation, or application to the animals' skin. Symptoms routinely
include tremors, convulsions, vomiting, diarrhea, paralysis, or
bleeding from the eyes, nose, mouth. Animals that survive are
destroyed (29). Additionally, when one laboratory's research on
animals establishes something significant, scores of other labs repeat
the experiment, and more thousands of animals are needlessly tortured
and killed (Morse 8).
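Since the passage above defines the LD-50 as the dose that kills half of a test group, a small illustration may help. The sketch below estimates that dose by simple linear interpolation between the two observed doses that bracket 50 percent mortality; the dose-mortality numbers are entirely hypothetical, and real studies use more formal statistical methods.

# Minimal sketch: estimate an LD-50 by linear interpolation on hypothetical data.
# Each pair is (dose in mg/kg, fraction of the group that died).
observations = [(10, 0.05), (50, 0.20), (100, 0.45), (200, 0.70), (400, 0.95)]

def estimate_ld50(data):
    """Interpolate the dose at which mortality crosses 50 percent."""
    data = sorted(data)
    for (d0, m0), (d1, m1) in zip(data, data[1:]):
        if m0 <= 0.5 <= m1:
            return d0 + (0.5 - m0) * (d1 - d0) / (m1 - m0)
    return None  # 50 percent mortality never bracketed

print(estimate_ld50(observations))  # 120.0 under these made-up numbers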
Few labs buy their animal test subjects from legitimate pet
stores, and the majority use illegal pet dealers. There are many dealers
in stolen animals that house the animals before, during, and after
testing. These "farms" most frequently hold animals between tests
while the animals recuperate, before facing another research ordeal.
These so-called farms are mainly old barn-like buildings, and the areas
used as hospitals and convalescent (recovery) wards are filthy,
overcrowded pens. At one farm in particular, dogs with open chest
wounds and badly infected incisions, so weak that many could not stand,
were the order of the day. These dogs were "recuperating" from
open-heart and kidney surgery. Secondly, a litter of two-day-old pups
were found in a basket, with no food provisions in sight (Morse 19).
In every pen there were dogs suffering from highly contagious
diseases. An animal's road to a lab is seldom a direct one. Whether
he's stolen, picked up as a stray, or purchased, there's a detour first
to the animal dealer's farm; there he waits - never under satisfactory
conditions - until his ride, and often his life, comes to an end at the
laboratory (23).
Every day of the year, hundreds of thousands of fully conscious
animals are scalded, or beaten, or crushed to death, and more are
subjected to exotic surgery and then allowed to die slowly and in
agony. There is no reason for this suffering to continue (Morse 8).
In conclusion, animal testing is inhumane and no animal should
be forced to endure such torture. Waste in government is one
thing; it seems to be an accepted liability of democracy. But the
wasting of lives is something else. How did it ever get this way?
WORKS CITED
Fox, Michael Allen. The Case For Animal Experimentation. Los
Angeles: University Of California Press, 1986.
Jasper, James M. and Dorothy Nelkin, eds. The Animal Rights
Crusade. New York: Macmillan Inc., 1992, 103-56.
Morse, Mel. Ordeal Of The Animals. Englewood Cliffs: Prentice-Hall
International, 1968.
Sequoia, Anna. 67 Ways To Save The Animals. New York: Harper
Collins, 1990.
Singer, Peter. Animal Liberation. New York: Random House, 1975.
OUTLINE
I. Introduction
II. Supporting evidence on testing
A. Experiments funded by US government
1. Strontium 90
2. Irradiation by X-rays
3. Methoxychlor
B. Background on laws in US
C. Examples of tests
1. The Draize Test
2. The LD-50 Test
D. What the animals go through
1. Trip to the laboratory
2. Their stay at the lab
3. After the tests are done
III. Conclusion
f:\12000 essays\sciences (985)\Genetics\Anti Matter.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Anti-Matter
Introduction
Ordinary matter has negatively charged electrons circling a positively charged nucleus. Anti-matter has positively charged electrons - positrons - orbiting a nucleus with a negative charge - anti-protons. Only anti-protons and positrons can be produced at this time, but scientists in Switzerland have begun a series of experiments which they believe will lead to the creation of the first anti-matter element - anti-hydrogen.
The Research
Early scientists often made two mistakes about anti-matter. Some thought it had a negative mass, and would thus feel gravity as a push rather than a pull. If this were so, the antiproton's negative mass/energy would cancel the proton's when they met and nothing would remain; in reality, two extremely high-energy gamma photons are produced. Today's theories of the universe say that there is no such thing as a negative mass. The second and more subtle mistake is the idea that anti-water would only annihilate with ordinary water, and could safely be kept in (say) an iron container. This is not so: it is the subatomic particles that react so destructively, and their arrangement makes no difference. Scientists at CERN in Geneva are working on a device called LEAR (low energy anti-proton ring) in an attempt to slow the anti-protons to a billionth of their normal speeds. The slowing of the anti-protons and positrons, which normally travel at a velocity near the speed of light, is necessary so that they have a chance of meeting and combining into anti-hydrogen. The problem with research in the field of anti-matter is that when anti-matter elements touch matter elements they annihilate each other. The total combined mass of both elements is released in a spectacular blast of energy. Electrons and positrons come together and vanish into high-energy gamma rays (plus a certain number of harmless neutrinos, which pass through whole planets without effect). Hitting ordinary matter, 1 kg of anti-matter explodes with the force of up to 43 million tons of TNT - as though several thousand Hiroshima bombs were detonated at once. So how can anti-matter be stored? Space seems the only place, both for storage and for large-scale production. On Earth, gravity will sooner or later pull any anti-matter into disastrous contact with matter. A way around the gravity problem appears at CERN, where fast-moving anti-protons can be held in a 'storage ring' around which they constantly move - kept away from the walls of the vacuum chamber by magnetic fields. However, this only works for charged particles; it does not work for anti-neutrons, for example.
The Unanswerable Question
Though anti-matter can be manufactured, slowly, natural anti-matter has never been found. In theory, we should expect equal amounts of matter and anti-matter to have formed at the beginning of the universe - perhaps some far-off galaxies are made of anti-matter that somehow became separated from matter long ago. A problem with the theory is that the cosmic rays that reach Earth from far-off parts are often made up of protons or even nuclei, never of anti-protons or anti-nuclei. There may be no natural anti-matter anywhere. In that case, what happened to it?
The most obvious answer is that, as predicted by theory, all the matter and anti-matter underwent mutual annihilation in the first seconds of creation; but why, then, do we still have matter? It seems unlikely that more matter than anti-matter should have formed. In this scenario, the matter would have to exceed the anti-matter by one part in 1000 million. An alternative theory, proposed by the physicist M. Goldhaber in 1956, is that the universe divided into two parts after its formation - the universe that we live in, and an alternate universe of anti-matter that cannot be observed by us.
The Chemistry
Though they have no charge, anti-neutrons differ from neutrons in having opposite 'spin' and 'baryon number'. All heavy particles, like protons or neutrons, are called baryons. A firm rule is that the total baryon number cannot change, though this apparently fails inside black holes. A neutron (baryon number +1) can become a proton (baryon number +1) and an electron (baryon number 0, since an electron is not a baryon but a light particle). The total electric charge stays at zero and the total baryon number at +1. But a proton cannot simply be annihilated. A proton and an anti-proton (baryon number -1) can join together in an annihilation of both. The two heavy particles meet in a flare of energy and vanish, their mass converted to high-energy radiation while their opposite charges and baryon numbers cancel out. We can make anti-protons in the laboratory by turning this process round, using a particle accelerator to smash protons together at such enormous energies that the energy of collision is more than twice the mass/energy of a proton. The resulting reaction is written: p + p -> p + p + p + p-bar. Two protons (p) become three protons plus an anti-proton (p-bar); the total baryon number before is 1 + 1 = 2, and after the collision it is 1 + 1 + 1 - 1 = 2 - still two. Anti-matter elements have the same chemical properties as their matter counterparts. For example, two atoms of anti-hydrogen and one atom of anti-oxygen would combine to form anti-water.
The Article
The article chosen reflects on recent advancements in anti-matter research. Scientists in Switzerland have begun experimenting with a LEAR device (low energy anti-proton ring), which would slow the particles to a billionth of their original velocity. This is all done in an effort to slow them to a speed at which they can combine chemically with positrons to form anti-hydrogen. The author of the article, whose name was not included, failed to investigate other anti-matter research laboratories and their advancements. The author focused on the CERN research laboratory in Geneva. 'The intriguing thing about our work is that it flies in the face of all other current developments in particle physics.' The article also focused on the intrigue of discovering the anti-matter secret, but did not say much about the destruction and mayhem anti-matter would cause if not treated with the utmost care and safety. Discovering anti-matter could mean the end of the Earth as we know it; one mistake could mean the end of the world and a release of high-energy gamma rays that could wipe out life on earth in mere minutes. It was quite an interesting article, with a lot of information that could affect the entire world. The article, however, did not focus on the benefits or disadvantages of anti-matter, nor did it mention the practical uses of anti-matter.
They are too expensive to use for powering rocket ships, and are not safe for household or industrial use, so they have no meaning to the general public. It is merely a race to see who can make the first anti-matter element.
Conclusion
As research continues into the field of anti-matter, there might be some very interesting and practical uses for anti-matter in the society of the future. Until there is a practical use, this is merely an attempt to prove which research lab will be the first to manufacture the anti-matter elements.
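Two of the figures quoted in this essay can be checked with short calculations. First, the "43 million tons of TNT" claim follows from E = m * c^2 if one assumes 1 kg of anti-matter annihilating together with 1 kg of ordinary matter (so 2 kg of mass is converted) and the usual conversion of about 4.184e15 joules per megaton of TNT; both assumptions are mine, not the essay's. A minimal sketch:

# Back-of-envelope check of the annihilation-energy claim using E = m * c^2.
C = 2.998e8                   # speed of light, m/s
JOULES_PER_MEGATON_TNT = 4.184e15

mass_converted = 2.0          # kg: 1 kg anti-matter plus the 1 kg of matter it meets
energy_joules = mass_converted * C**2
print(energy_joules)                           # ~1.8e17 J
print(energy_joules / JOULES_PER_MEGATON_TNT)  # ~43 megatons of TNT

Second, the baryon-number bookkeeping for the reaction p + p -> p + p + p + p-bar can be tallied the same way; the particle table below is hard-coded for illustration only.

# Check that baryon number and charge balance in p + p -> p + p + p + p-bar.
PARTICLES = {
    "p":     {"baryon": +1, "charge": +1},   # proton
    "p-bar": {"baryon": -1, "charge": -1},   # anti-proton
}

def totals(names):
    baryon = sum(PARTICLES[n]["baryon"] for n in names)
    charge = sum(PARTICLES[n]["charge"] for n in names)
    return baryon, charge

before = totals(["p", "p"])                   # (2, 2)
after  = totals(["p", "p", "p", "p-bar"])     # (2, 2)
print(before, after, before == after)         # conserved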
f:\12000 essays\sciences (985)\Genetics\Atmospheric Circulation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Atmospheric Circulation and More
The global energy balance and atmospheric motion mainly determine
the circulation of the earth's atmosphere. There is a hierarchy of motion
in atmospheric circulation. Each control can be broken down into smaller
controlling factors. The global energy balance is an equal balance of
short-wave radiation coming into the atmosphere and long-wave radiation
going out of the atmosphere. This is called thermal equilibrium. The
earth is at thermal equilibrium; however, there can be a surplus or
deficit of energy in parts of the heat budget. Where there is a net
radiation surplus, warm air will rise, and a net radiation deficit will
make the air cool and fall. Air is heated at the equator in the
intertropical convergence zone and rises, flowing toward the poles; there
the air is cooled, sinks, and flows back toward the equator, where the
process is repeated. Another major contributing factor to the circulation of the air
is due to the subtropical highs. These highs, like the ITCZ, migrate during
the different seasons.
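As a supplementary illustration of the shortwave-in equals longwave-out balance described above (the formula itself is not part of the original essay), the sketch below solves the standard radiative-equilibrium relation S(1 - albedo)/4 = sigma * T^4 for the planet's effective temperature; the solar constant and albedo are the commonly quoted approximate values.

# Radiative equilibrium: absorbed shortwave = emitted longwave (Stefan-Boltzmann).
S = 1361.0        # solar constant, W/m^2 (approximate)
ALBEDO = 0.3      # fraction of sunlight reflected (approximate)
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

absorbed = S * (1 - ALBEDO) / 4           # averaged over the whole sphere
T_effective = (absorbed / SIGMA) ** 0.25
print(round(T_effective, 1))              # ~255 K, about -18 C before greenhouse warming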
The idealized belt model is a great representation of the general
circulation of the atmosphere. The equatorial belt of variable winds and
calms ranges from 5 degrees north to 5 degrees south. This wind belt is
characterized by weak winds and low pressure from the inter tropical
convergence zone. As you go further north or south you encounter the
Hadley Cells. Hadley cell circulation is caused by the movement of high
pressure from the latitudes at 5 to 30 degrees north and 5 to 30 degrees
south to low pressure areas around the equator.
The movement of air from high pressure to low pressure causes
convergence. This convergence generates the production of wind. The
winds that are produced from this are the trade winds. The winds blow
from a northeast direction in the northern hemisphere, and in the southern
hemisphere the winds blow from a southeast direction. The trade winds are
the largest wind belt. The westerlies lie between 35 and 60 degrees
north and south latitude. The winds blow from the west, thus their name.
The westerlies are in the Ferrel cell. Cold air from the polar regions
falls down and then is heated up and pushed upward with the westerlies.
From 65 to 90 degrees north and south lie the polar easterlies. They exist
because of the pressure gradient created by the temperature differences. The
winds are also deflected by the Coriolis effect. This deflection is
to the right in the northern hemisphere and to the left in the southern
hemisphere. The reason this happens is the rotation of
the earth on its axis.
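To supplement the paragraph above (the formula itself is not in the original essay), the sketch below evaluates the standard Coriolis parameter f = 2 * Omega * sin(latitude); its sign flips between hemispheres, matching the rightward deflection in the north and leftward deflection in the south.

import math

# Coriolis parameter f = 2 * Omega * sin(latitude), in s^-1.
OMEGA = 7.292e-5   # Earth's rotation rate, rad/s

def coriolis_parameter(latitude_deg):
    return 2 * OMEGA * math.sin(math.radians(latitude_deg))

for lat in (45, 0, -45):
    print(lat, coriolis_parameter(lat))
# Positive at 45 N (deflection to the right), zero at the equator,
# negative at 45 S (deflection to the left).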
Two moving patterns of the general circulation of the atmosphere
are the cyclones and anticyclones. Cyclones are low pressure systems
characterized by converging and rising air. On the other hand
anticyclones are characterized by high pressure because they have
diverging air that is descending. There are also land and sea breezes
which are produced by daily differences in cooling and heating of the land
and water. Sea breezes bring cooler air in the day, while land breezes
push cooler air over the water at nighttime.
There are also radiation surpluses and deficits throughout the
earth. There is a constant surplus between the latitudes of 15 degrees
north and 15 degrees south. In the latitudes between 15 and 38 degrees
north and south there is a net radiation surplus that varies annually.
There is a net radiation deficit annually in the latitudes from 38 to 90
degrees north and south. These surpluses and deficits are due to the high
sun angle in the low latitudes, as well as the increased length of
daytime.
Finally, the seasons of the earth are determined by the tilt of the
earth on its axis. The earth is tilted at 23.5 degrees. As it
revolves around the sun, the earth is exposed to the sun at different
angles in different months of the year. Because of this phenomenon we
get seasons on the earth. The earth and all of its circulation patterns,
energy balances, and motions of the atmosphere are all very complex;
however, they can be easily understood with the help of my wonderful summary.
f:\12000 essays\sciences (985)\Genetics\Brain.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS HEADING PAGE NUMBER 1. Table of Contents 1 2. Table of Illustrations 2 3. Introduction 3 4. Body of work 4 to 8 5. Conclusion 9 6. Illustrations 10 to 12 7. Bibliography 13 8. Glossary 14 to 16 9. Index 17 to 19 TABLE OF ILLUSTRATIONS HEADING PAGE NUMBER 1. Inside the Head 10 2. Inside the Brain 11 3. Areas and Jobs 12 INTRODUCTION NOTE: All words in bold print will be found in the glossary. The human body is divided into many different parts called organs. All of the parts are controlled by an organ called the brain, which is located in the head. The brain weighs about 2.75 pounds, and has a whitish-pink appearance. The brain is made up of many cells, and is the control centre of the body. The brain flashes messages out to all the other parts of the body. The messages travel in very fine threads called nerves. The nerves and the brain make up a system somewhat like telephone poles carrying wires across the city. This is called the nervous system. The nerves in the body don't just send messages from the brain to the organs, but also send messages from the eyes, ears, skin and other organs back to your brain. Some nerves are linked directly to the brain. Others have to reach the brain through a sort of power line down the back, called the spinal cord. The brain and spinal cord make up the central nervous system. The brain doesn't just control your organs, but also can think and remember. That part of the brain is called the mind. PROTECTING THE BRAIN Twenty-eight bones make up the skull. Eight of these bones are interlocking plates. These plates form the cranium. The cranium provides maximum protection with minimum weight, the ideal combination. The other twenty bones make up the face, jaw and other parts of the skull. Another way the brain keeps it self safe is by keeping itself in liquid. Nearly one fifth of the blood pumped by the heart is sent to the brain. The brain then sends the blood through an intricate network of blood vessels to where the blood is needed. Specialized blood vessels called choroid plexuses produce a protective cerebrospinal fluid. This fluid is what the brain literally floats in. A third protective measure taken by the brain is called the blood brain barrier. This barrier consists of a network of unique capillaries. These capillaries are filters for harmful chemicals carried by the blood, but do allow oxygen, water and glucose to enter the brain. THE DIFFERENT SECTIONS OF THE BRAIN The brain is divided into three main sections. The area at the front of the brain is the largest. Most of it is known as the cerebrum. It controls all of the movements that you have to think about, thought and memory. The cerebrum is split in two different sections, the right half and the left half. The outer layer of the cerebrum is called the cortex. It is mainly made up of cell bodies of neurons called grey matter. Most of the work the brain does is done in the cortex. It is very wrinkled and has many folds. The wrinkles and folds give the cortex a large surface area, even though it is squeezed up to fit in the skull. The extra surface area gives the cerebrum more area to work. Inside the cortex, the cerebrum is largely made up of white matter. White matter is tissue made only of nerve fibres. The middle region is deep inside the brain. It's chief purpose is to connect the front and the back of the brain together. It acts as a "switchboard", keeping the parts of your brain in touch with each other. The back area of the brain is divided into three different parts. 
The pons is a band of nerve fibres which link the back of the brain to the middle. The cerebellum sees to it that all the parts of your body work as a team. It also makes sure you keep your balance. The medulla is low down at the back of your head. It links the brain to the top of the spinal cord. The medulla controls the way your heart pumps blood through your body. It also looks after your breathing and helps you digest food. THE DIFFERENT PARTS OF THE BRAIN THE BRAINSTEM: The brainstem is one of the oldest parts of the brain. It controls such functions as breathing, blood pressure, swallowing and heart rate. THE HYPOTHALMUS: This part of the brain is located directly above the brain stem. The hypothalmus controls basic drives like hunger and sex and as well as our response to threat and danger. The hypothalmus also controls the pituitary. THE PITUITARY: The pituitary produces hormones such as testosterone that circulate through out the body. THE THALAMUS: The thalamus is like a relay area; it receives messages from lower brain areas such as the brainstem and hypothalmus and sends them to the two brain hemispheres. The thalamus is located in between above the lower brain and under the two hemispheres. THE DIFFERENT SECTIONS OF THE BRAIN: Most of the above mentioned parts of the brain were produced early in evolution but the higher mammals especially humans went on to produce a sort of "thinking cap" on top of these parts. This "thinking cap" was divided into two different parts, the left hemisphere and the right hemisphere. If the left side of your brain is more developed like most people's are, you are right handed. On the other hand if the right side of your brain is more developed, then you will be left handed. The right side of your brain is more artistic and emotional while the left side of your brain is your "common sense" and practical side, such as figuring out math and logic problems. THE CEREBELLUM: One of the most important part of the Human brain is the cerebellum. The cerebellum is involved with the more complex functions of the brain and sometimes is even referred to as "the brain within the brain". The cerebellum acts as a control and coordination centre for movement. The cerebellum carries small "programs" that have been previously learned. For example, how to write, move, run and jump are all previously learned activities that the brain recorded and can playback when needed. Every time you practice, the brain rewrites the program and makes it better. You may have heard the saying "practice makes perfect". Well this saying is not entirely true; another way of "practising" is just to imagine what you wish to do. Since the cerebellum can't actually feel, it will think that you are doing what your imagining and respond by rewriting it's previous program and carrying out any other actions needed for that function. This is one why to explain wet dreams. THE CEREBRAL CORTEX: The cerebral cortex makes up the top of the two hemispheres of the brain. The cortex is a sheet of greyish matter which produces our thoughts, language and plans. It also controls our sensations and voluntary movements, stores our memories and gives us the ability to imagine, in short it's what makes humans, humans. IN THE FUTURE Today many experiments are being conducted that may be break through's for the future. For instance "brain grafting" is one procedure that may be used in the future. Brain grafting is to transplant a very thin layer of brain skin from one person to another. 
This would result in control of parkinson's disease and other seizure related diseases. Another radical idea that has already been successfully been tried on rhesus monkey's is, brain transplants. The ethics and legal problems for such a transplant would probably never let this operation be performed on humans. This is because the person would not be the same, would not have the same memories or the same abilities that the host body had had. The last idea of the future that we will list is called "artificial hearing and seeing". Artificial seeing is achieved by planting sixty-four small electrodes in front of the visual cortex of the brain. The electrodes are connected to a small camera that is some where on the person's ear. A computer is attached to the camera. The computer sends the images from the camera directly to the implanted electrodes. They flash as the picture from the camera, thus enabling the person to somewhat see. Artificial hearing is much more complicated then artificial seeing. First a electrodes must be planted in the brain. Then through a microphone a computer produces electrical pulses that are then sent to the electrodes in the brain. But as of yet these procedures are not practical first because of the size of the computer, it cannot be taken out of the laboratory second the cost of the package and third the risks involved. CONCLUSION After all of the work and research that we have done it is very evident to us that the brain is one of the most wondrous organs that humans could have. It guides us through almost every second of our life. Even after exploring vast and distant sky's to the microorganisms that exist today, the brain has never ceased to amaze us and probably never will. BIBLIOGRAPHY 1. The Brain and Nervous System by Lambert, Mark copyright Macmillan Education, 1988 2. The Brain and Nervous System by Parker, Steve copyright Franklin Watts, 1990 3. Encyclopedia Britannica by Britannica, Encyclopedia Inc. copyright Encyclopedia Britannica Inc., 1986 4. The Incredible Machine by Geographic, National Society copyright Geographic, National Society, 1992 GLOSSARY artificial hearing: When a person is able to hear but not naturally. artificial seeing: When a person is able to see but not naturally. blood brain barrier: A set of special capillaries that are only found in brain. There purpose is to filter the blood so only oxygen, glucose and water are able to enter the brain. Unfortuantly they don't prevent narcotics from entering the brain. brain: An organ that is pinkish-white in appearance and is located in the skull. This organ controls almost everything that the body does. brain grafting: Brain grafting is the process of taking a thin layer of brain skin from the donor and moving to new host. brainstem: This is what the brain had used to be early evolution, but now it only controls our basic functions such as breathing and heart rate. capillaries: Tiny blood vessels. cells: What all living thing are built from. central nervous system: This the brain and spinal cord put together. Also see: brain, spinal cord. cerebellum: This part of the brain makes sure that all of your body works together. It also keeps your balance. cerebral cortex: This is one of the most important parts of the brain. It also is produces our thoughts, stores our memories, and plans. cerebrospinal fluid: This what the brain floats in. cerebrum: The cerebrum is split in to two different sides. Left and right. It is located at the front of the head. 
choroid plexuses: These special blood vessels are what produce the cerebrospinal fluid. cortex: This is the outer layer of the cerebrum. cranium: This is the part of the skull that holds the brain. diseases: Illnesses that can be terminal. electrodes: They are made out metal and emit electricity, usually very little. glucose: This is a combination of sugar and water. grey matter: Mainly made from the cell bodies of neurons. hemisphere: These are the two different part of the cerebrum. Almost all of the brain's work is done there. hormones: Chemicals that can change the chemical make up of your physical body. hypothalmus: This part of the brain is located above the brainstem. It controls basic drives such as hunger and sex. medulla: The medulla is almost right behind the brainstem. It helps you to digest your food. mind: Not just the brain but the actual consciousness that we have. nerves: Pathways that the brain uses to send messages to and from different parts of the body. nervous system: The whole system of nerves that attach to the spinal cord. organs: Important part of the body. The brain, heart and lungs are examples of organs. Parkinson's Disease: This disease causes the victim to have seizures. pituitary: The pituitary produces hormones. pons: A band of nerve fibre that connect the back the brain to the middle. skull: The skull is made up of twenty-eight bones. It is located above the spinal cord. It also contains the brain. spinal cord: This cord goes down your back. Almost all nerves in the body are connected to the spinal cord. thalamus: The thalamus a sort of relay room. It gets messages from the lower brain area and sends them to the higher brain. transplant: To transplant is to take something from one person and put it into another person. white matter: White matter is tissue made from nerve fibres. INDEX NOTE: For the Index, the introduction is the 1st page. artificial seeing 6 artificial hearing 7 balance 3 blood brain barrier 2 blood 2,3 ..harmful chemicals 2 blood pressure 3 blood vessels 2 brain 1,2,3,4,5,6,7 ..hemispheres 4 ..transplants 6 ..grafting 6 ..protecting 2 ..section 2 ..front 2,3 ..middle 2,3 ..back 2,3 brainstem 3,4 breathing 3 capillaries 2 cells 1 central nervous system 1 cerebellum 3,5 cerebral cortex 5 cerebrospinal fluid 2 cerebrum 2,3 choroid plexus 2 cortex 2,3,6 cranium 2 digesting food 3 electrodes 6 glucose 2 grey matter 2 heart 3 hormones 4 hunger 4 hypothalamus 4 medulla 3 memory 2 mind 1 nerves 1,3 nervous system 1 neurons 2 organs 1 oxygen 2 parkinson's disease 6 pituitary 4 pons 3 sex 4 skull 2,3 spinal cord 1,3 thalamus 4 water 2 white matter 3
f:\12000 essays\sciences (985)\Genetics\Concept of Species.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Concept of Species
Over the last few decades the Biological Species Concept (BSC)
has become the dominant species definition in use.
This concept defines a species as a reproductive community.
The concept has, though, undergone much refinement through the
years. The earliest precursor to it appears in Du Rietz (1930),
and Dobzhansky later added to the definition in 1937. But even after
this the definition was highly restrictive. The definition of a
species that is accepted as the Biological species concept was
formulated by Ernst Mayr (1942):
"..groups of actually or potentially interbreeding natural
populations which are reproductively isolated from other such
groups"
However, this is a definition of what happens in nature. Mayr
later amended this definition to include an ecological component:
"..a reproductive community of populations (reproductively
isolated from others) that occupies a specific niche in nature".
The BSC is widely accepted amongst vertebrate zoologists and
entomologists. Two reasons account for this. Firstly, these are
the groups that the authors of the BSC worked with (Mayr is an
ornithologist and Dobzhansky worked mainly with Drosophila).
More importantly, sexual reproduction is the predominant form of
reproduction in these groups. It is not coincidental that the BSC
is less widely used amongst botanists: terrestrial plants
exhibit much greater diversity in their mode of reproduction
than vertebrates and insects.
There have been many criticisms of the BSC, both of its
theoretical validity and of its practical utility. For example,
the application of the BSC to a number of groups is problematic
because of interspecific hybridisation between clearly delimited
species (Skelton).
It cannot be applied to species that reproduce asexually (e.g.
bdelloid rotifers and euglenoid flagellates). Asexual forms of
normally sexual organisms are also known. Prokaryotes are also
left out by the concept, because sexuality as defined in the
eukaryotes is unknown in them.
The biological species concept is also questionable in those
land plants that primarily self-pollinate (Cronquist 1988).
Practically, the BSC has its most obvious limitation in fossils:
it cannot be applied to this evolutionarily distinct group
because its members no longer mate. (Do Homo erectus and Homo
sapiens represent the same or different species?)
It also has limitations when practically applied to delimit
species. The BSC suggests breeding experiments as the test of
whether an organism is a distinct species. But this test is
rarely made, as the number of crosses needed to delimit a species
can be massive, so the time, effort and money needed to carry out
such tests are prohibitive. Not only this, but the experiments
carried out are often inconclusive.
In practice even strong believers in the BSC use phenetic
similarities and discontinuities for delimiting species.
Although it is the most widely known, several alternatives to the
biological species concept exist.
The Phenetic (or Morphological / Recognition) Species Concept
proposes an alternative to the BSC (Cronquist) that has been
called a "renewed practical species definition". This defines
species as:
"... the smallest groups that are consistently and
persistently distinct and distinguishable by ordinary means."
Problems with this definition can be seen, once again, depending
on the background of the user. For example "ordinary means"
includes any techniques that are widely available, cheap and
relatively easy to apply. These means will differ among different
groups of organisms. For example, to a botanist working with
angiosperms ordinary means might mean a hand lens; to an
entomologist working with beetles it might mean a dissecting
microscope; to a phycologist working with diatoms it might mean a
scanning electron microscope. Which means are "ordinary" is
determined by what is needed to examine the organisms in
question. So once again we see that it is a subjective view,
depending on how the biologist wants to read the definition. It
also has difficulties similar to those of the BSC in dealing with
asexual species and the existence of hybrids.
There are several phylogenetic species definitions. All of them
suggest that classifications should reflect the best-supported
hypotheses of the phylogeny of the organisms. Baum (1992)
describes two types of phylogenetic species concept; one of these
is that a species must be monophyletic and share one or more
derived characters. There are two meanings of monophyletic (Nelson
1989). The first defines a monophyletic group as all the
descendants of a common ancestor together with that ancestor. The
second defines a monophyletic group as a group of organisms that
are more closely related to each other than to any other
organisms.
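To make the first of these two senses concrete, here is a minimal
sketch (a toy illustration of my own, not taken from the sources
cited above) in which a group counts as monophyletic only if it is
exactly some ancestor together with all of that ancestor's
descendants; the tree and taxon names are hypothetical.
```python
# Toy tree of ancestor -> descendants; names are hypothetical.
TREE = {
    "root": ["A", "X"],
    "X": ["B", "Y"],
    "Y": ["C", "D"],
}

def clade(node):
    """Return the node together with every descendant of it."""
    members = {node}
    for child in TREE.get(node, []):
        members |= clade(child)
    return members

def is_monophyletic(group):
    """True if some node's complete clade is exactly this group."""
    all_nodes = set(TREE) | {c for kids in TREE.values() for c in kids}
    return any(clade(n) == set(group) for n in all_nodes)

print(is_monophyletic({"Y", "C", "D"}))   # True: Y plus all of Y's descendants
print(is_monophyletic({"B", "C", "D"}))   # False: no ancestor's clade is exactly this set
```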
So really, the species concepts are only theoretical and provide
no standard as to how species should be grouped. However, it can
be argued that without a more structured approach proper
discussion cannot occur, due to conflicting species names.
And so, if there are quite large problems with all of the
species concepts, the question of what is used in practice has
to be asked. Most taxonomists use one or more of four main
criteria (Stace 1990):
1. The individuals should bear a close resemblance to one another
such that they are always readily recognisable as members
of that group.
2. There are gaps between the spectra of variation exhibited by
related species; if there are no such gaps then there is a
case for amalgamating the taxa as a single species.
3. Each species occupies a definable geographical area (wide or
narrow) and is demonstrably suited to the environmental
conditions which it encounters.
4. In sexual taxa, the individuals should be capable of
interbreeding with little or no loss of fertility, and there
should be some reduction in the level of success (measured
in terms of hybrid fertility or competitiveness) of crossing
with other species.
Of course, as has been seen, no one of these criteria is
absolute and it is more often left to the taxonomist's own
judgement.
Quite frequently a classification system is brought about for
the wrong reasons. Between two taxa, similarities and differences
can be found which have to be considered, and it is simply up to
the taxonomist's discretion as to which differences or
similarities should be emphasised. So differences are naturally
going to arise between taxonomists. The system used can be brought
about for convenience, from historical aspects, or to save
argument: it may be a lot easier to stick with a current
concept, even though it requires radical changes, because of the
upheaval and confusion that change may cause.
As seen, much has been written on the different concepts and
improvements to these concepts, but these amount to little more
than personal judgements aimed at producing a workable
classification (Stace). In general most biologists adopt the
definition of species that is most suited to the type of animal
or plant that they are working with at the time and use their own
judgement as to what that means. It is common practice amongst
most taxonomists to look for discontinuities in variation which
can be used to delimit the kingdoms, divisions, etc. Between a
group of closely related taxa it can be useful, although highly
subjective, to use the criteria of equivalence or comparability.
Usually, however, the criterion of discontinuity is more accurate
than comparability, even if the taxa are widely different.
References
Mayr, Ernst (1942). Systematics and the Origin of Species: from the Viewpoint of a Zoologist.
Cronquist, Arthur (1968). The Evolution and Classification of Flowering Plants.
Stace, Clive A. (1991). Plant Taxonomy and Biosystematics.
Stuessy, Tod F. (1990). Plant Taxonomy: the Systematic Evaluation of Comparative Data.
Skelton, Peter (ed.) (1993). Evolution: a Biological and Palaeontological Approach.
http://wfscnet.tamu.edu/courses/wfsc403/ch_7.htm - Interspecific Competition
http://sevilleta.unm.edu/~lruedas/systmat.html - Phylogenetic Species Concept
Word Count: 1256
f:\12000 essays\sciences (985)\Genetics\Cystic Fibrosis Gene.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Biology - Genetics
The Cystic Fibrosis Gene
Introduction:
Cystic fibrosis is an inherited autosomal recessive disease
that exerts its main effects on the digestive system and the
lungs. This disease is the most common genetic disorder
amongst Caucasians. Cystic fibrosis affects about one in
2,500 people, with one in twenty-five being a heterozygote.
With the use of antibiotics, the life span of a person
afflicted with CF can be extended up to thirty years;
however, most die before the age of thirteen.1 Since so
many people are affected by this disease, it's no wonder
that CF was the first human genetic disease to be cloned by
geneticists. In this paper, I will be focusing on how the
cystic fibrosis gene was discovered while at the same time,
discussing the protein defect in the CF gene, the
bio-chemical defect associated with CF, and possible
treatments of the disease.
Finding the Cystic Fibrosis Gene:
The classical genetic approach to finding the gene that is
responsible for causing a genetic disease has been to first
characterize the bio-chemical defect within the gene, then
to identify the mutated protein in the gene of interest, and
finally to locate the actual gene. However, this classical
approach proved to be impractical when searching for the CF
gene. To find the gene responsible for CF, the principle of
"reverse genetics" was applied. Scientists accomplished
this by linking the disease to a specific chromosome. After
this linkage, they isolated the gene of interest on the
chromosome and then tested its product.2
Before the disease could be linked to a specific
chromosome, a marker needed to be found that would always
travel with the disease. This marker is known as a
Restriction Fragment Length Polymorphism or RFLP for short.
RFLP's are varying base sequences of DNA in different
individuals which are known to travel with genetic
disorders.3 The RFLP for cystic fibrosis was discovered
through the techniques of Somatic Cell Hybridization and
through Southern Blot Electrophoresis (gel separation of
DNA). By using these techniques, three RFLP's were
discovered for CF; Doc RI, J3.11, and Met. Utilizing in
situ hybridization, scientists discovered the CF gene to be
located on the long arm of chromosome number seven. Soon
after identifying these markers, another marker was
discovered that segregated more frequently with CF than the
other markers. This meant the new marker was closer to the
CF gene. At this time, two scientists named Lap-Chu Tsui
and Francis Collins were able to isolate probes from the CF
interval. They were now able to utilize the powerful technique
of chromosome jumping to isolate the CF gene much faster than
if they had used conventional genetic techniques.3
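The reasoning that a marker segregating more often with the
disease lies closer to the gene can be sketched with a toy
calculation. The data, marker names and numbers below are invented
for illustration only (they are not the actual RFLP results): for
each marker, count the offspring in whom the marker allele and the
disease separate; the marker with the lower recombination fraction
is the nearer one.
```python
# Hypothetical family data: each tuple is (marker allele inherited, affected?).
# A "recombinant" is an offspring in whom marker and disease have separated.

def recombination_fraction(offspring, linked_allele="M1"):
    """Fraction of offspring in whom the marker allele and the disease do not travel together."""
    recombinants = sum(
        1 for allele, affected in offspring
        if (allele == linked_allele) != affected
    )
    return recombinants / len(offspring)

marker_A = [("M1", True), ("M1", True), ("M2", False), ("M1", False), ("M2", True)]
marker_B = [("M1", True), ("M1", True), ("M2", False), ("M1", True), ("M2", False)]

print(recombination_fraction(marker_A))   # 0.4 -- two recombinants out of five
print(recombination_fraction(marker_B))   # 0.0 -- co-segregates perfectly, so it is closer
```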
In order to determine the exact location of the CF gene,
probes were taken from the nucleotide sequence obtained from
chromosome jumping. To get these probes, DNA from a horse,
a cow, a chicken, and a mouse were separated using Southern
Blot electrophoresis. Four probes were found to bind to all
of the vertebrate's DNA. This meant that the base pairs
within the probes discovered contained important
information, possibly even the gene. Two of the four probes
were ruled out as possibilities because they did not contain
open reading frames which are segments of DNA that produce
the mRNA responsible for genes.
The Northern Blot electrophoresis technique was then used
to distinguish between the two probes still remaining in
order to find out which one actually contained the CF gene.
This could be accomplished because Northern Blot
electrophoresis utilizes RNA instead of DNA. The RNA of
cell types affected with CF, along with the RNA of
unaffected cell types were placed on a gel. Probe number
two bound to the RNA of affected cell types in the pancreas,
colon, and nose, but did not bind to the RNA from
non-affected cell types like those of the brain and heart.
Probe number one did not bind exclusively to cell types from
CF affected areas like probe number two did. From this
evidence, it was determined that probe number two contained
the CF gene.
While isolating the CF gene and screening the genetic
library made from mRNA (cDNA library), it was discovered
that probe number two did not hybridize. The chances for
hybridization may have been decreased because of the low
levels of the CF gene present within the probe.
Hybridization chances could also have been decreased because
the cDNA used was not made from the correct cell type
affected with CF. The solution to this lack of
hybridization was to produce a cDNA library made exclusively
from CF affected cells. This new library was isolated from
cells in sweat glands. By using this new cDNA library,
probe number two was found to hybridize excessively. It was
theorized that this success was due to the large amount of
the CF gene present in the sweat glands, or the gene itself
could have been involved in a large protein family.
Nevertheless, the binding of the probe proved the CF gene
was present in the specific sequence of nucleotide bases
being analyzed.
The isolated gene was proven to be responsible for causing
CF by comparing its base pair sequence to the base pair
sequence of the same sequence in a non-affected cell. The
entire CF cDNA sequence is approximately 6,000 nucleotides
long. In those 6,000 nucleotides, three base pairs were found to
be missing in affected cells; all three were in exon #10.
This deletion results in the loss of a phenylalanine residue
and it accounts for seventy percent of the CF mutations. In
addition to this three base pair deletion pattern, up to 200
different mutations have been discovered in the gene
accounting for CF, all to varying degrees.
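The kind of comparison described above can be pictured with a
short sketch. The sequences used here are illustrative stand-ins,
not the real exon #10 sequence, and the codon table is trimmed to
the few codons actually used (those assignments follow the
standard genetic code); the point is only how diffing a normal
stretch against an affected one reveals a three-base deletion and
the loss of a phenylalanine.
```python
# Minimal codon table covering just the codons used below (standard genetic code).
CODONS = {"ATC": "Ile", "ATT": "Ile", "TTT": "Phe", "GGT": "Gly", "GTT": "Val"}

def translate(seq):
    """Translate a DNA string codon by codon (ignoring any trailing partial codon)."""
    return [CODONS[seq[i:i + 3]] for i in range(0, len(seq) - len(seq) % 3, 3)]

normal   = "ATCATCTTTGGTGTT"    # toy "unaffected" stretch: Ile-Ile-Phe-Gly-Val
affected = "ATCATTGGTGTT"       # toy "affected" stretch: three bases missing

# Locate the first position at which the two sequences disagree.
i = next(k for k in range(len(affected)) if normal[k] != affected[k])
deleted = normal[i:i + len(normal) - len(affected)]

print("deleted bases:", deleted, "at position", i)    # CTT at position 5
print("normal   protein:", translate(normal))         # includes 'Phe'
print("affected protein:", translate(affected))       # the Phe residue is gone
```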
The Protein Defect:
The Cystic Fibrosis gene is located at 7q31-32 on
chromosome number seven and spans about 280 kilo base pairs
of genomic DNA. It contains twenty four exons.4 This gene
codes for a protein involved in trans-membrane ion transport
called the Cystic Fibrosis Transmembrane Conductance
Regulator or CFTR. The 1,480 amino acid protein structure
of CFTR closely resembles the protein structure of the
ABC-transporter super family. It is made up of similar
halves, each containing a nucleotide-binding fold (NBF), or
an ATP-binding complex, and a membrane spanning domain
(MSD). The MSD makes up the transmembrane Cl- channels.
There is also a Regulatory Domain (R-Domain) that is located
mid-protein which separates both halves of the channels.
The R-Domain is unique to CFTR and is not found in any other
ABC-transporter. It contains multiple predicted binding
sites for protein kinase A and protein Kinase C.4
Mutations in the first MSD are mainly found in exon #4 and
exon #7. These types of mutations have been predicted to
alter the selectivity of the chloride ion channels.4
Mutations that are in the first NBF are predominant in
CFTR. As previously mentioned, 70 percent of the mutations
arising in CF cases are deletions of three base pairs in
exon #10. These three base pairs give rise to phenylalanine
and a mutation at this site is referred to as DF508.5 Such
a mutation appears not to interfere with R-Domain
phosphorylation and has even been reported to transport
chloride ions.6&7
There are five other frequent mutations that occur in the
first NBF. The first is a deletion of an isoleucine
residue, DF507. The second is a substitution of the glycine at
amino acid #551 by aspartic acid (G551D). The third involves
stop mutations at arginine #553 and glycine #542. The
fourth is substitutions of serine #549 by various other
residues. The fifth is a predicted splicing mutation at the
start of exon #11.7
Mutations within the R-Domain are extremely rare. The only
reason they do occur is because of frameshifts. Frameshifts
are mutations occurring due to the starting of the reading
frame one or two nucleotides later than in the normal gene
translation.4
Mutations in the second membrane spanning domain of the
CFTR are also very rare and have only been detected in exon
#17b. These have no relevance to mutations occurring in the
first membrane spanning domain. They apparently do not have
a significant impact on the Cystic Fibrosis Transmembrane
Conductance Regulator either.4
Mutations in the second nucleotide-binding fold occur
frequently in exon #19 and exon #20 by the deletion of a
stop signal at amino acid number 1282. Exon #21 is
sometimes mutated by the substitution of asparagine #1303
with lysine (N1303K).4
The Bio-Chemical Defect:
Studies of the chloride channels on epithelial cells lining
the lungs, sweat glands, and pancreas have shown a consensus
in that the activation of chloride secretion in response to
cAMP (adenosine 3', 5'-monophosphate) is impaired in cystic
fibrosis cases. Another affected, independently regulated
chloride channel that has been discovered is activated by
calcium-dependent protein kinases. Sodium ions have also
been noted to be increasingly absorbed by apical sodium
channels.8 Therefore, the lack of regulated chloride ion
transport across the apical membranes and apical absorption
of sodium ions, impedes the extracellular presence of water.
Water will diffuse osmotically into cells and will thus
cause the dehydration of the sol (the 5-micrometer fluid layer of the
cell membrane) and the gel (blanket of mucus) produced by
epithelial cells.9 As a result of this diffusion of water,
airways become blocked and pancreatic proteins turn
inactive.
An Account of the Absorption and Secretion of Cl-, Na+, and
Proteins:
An inward, electrochemical Na+ gradient is generated by the
Na+, K+-ATPase pump located in the basolateral membrane (the
cell side facing the organ it is lining). A basolateral
co-transporter then uses the Na+ gradient to transport Cl-
into the cell against its own gradient. This is done in
such a way that when the apical Cl- channels within the
membrane spanning domain open, Cl- diffuse passively with
their gradient through the cell membrane.4
In pancreatic duct cells, a Na+, H+-ATPase pump is used and
a bicarbonate secretion is exchanged for Cl- uptake in the
apical membrane. Chloride ions then diffuse passively when
the Cl- channels are opened. Such secretions also allow for
the exocytosis of proteins in the pancreas which will later
be taken into the small intestines for the breaking down of
carbohydrates.4
In addition to the pump-driven gradients and secretions,
there exists autonomic neurotransmitter secretions from
epithelial cells and exocrine glands. Fluid secretion,
including Cl-, is stimulated predominately by cholinergic,
a-adrenergic mechanisms, and the b-adrenergic actions.4
Such chemical messengers cannot enter the cell, they can
only bind to specific receptors on the cell surface and
transmit messages to and through an intracellular messenger
such as Ca2+ and cAMP by increasing their concentration.
The intracellular message is transmitted across the cell by
either diffusion or by a direct cascade. One example of a
directed cascade is the cAMP pathway: receptor binding activates
adenylate cyclase, raising intracellular cAMP, which activates
protein kinase A and in turn phosphorylates the R-Domain of CFTR,
opening the chloride channel.
Possible Treatments For Cystic Fibrosis:
One suggested treatment for CF has been to provide the
missing chemicals to the epithelial cells. This can be
accomplished by the addition of adenosine
3',5'-monophosphate (cAMP) or the addition of the nucleotide
triphosphates ATP or UTP to cultures of nasal and tracheal
epithelia. This has been proven to alter the rate of Cl-
secretion by removing the 5-micrometer sol layer of fluid in the
respiratory tract.9 Moreover, luminal application of the
compound amiloride, which inhibits active Na+ absorption by
blocking Na+ conductance in the apical membrane, reduced
cell secretion and absorption to a steady state value.
Another treatment that has been suggested is to squirt
solutions of genetically engineered cold viruses in an
aerosol form into the nasal passages and into the lungs of
people affected with CF. This is done in hopes that the
virus will transport corrected copies of the mutated gene
into the affected person's airways so it can replace the
mutated nucleotides.10 This form of treatment is known as
gene therapy.
A different approach taken in an attempt to cure cystic
fibrosis involves correcting the disease while the affected
"person" is still an embryo. Test tube fertilization (in
vitro fertilization) and diagnosis of F508 during embryonic
development can be accomplished through a biopsy of a
cleavage-stage embryo, and amplification of DNA from single
embryonic cells.5 After this treatment, only unaffected
embryos would be selected for implantation into the uterus.
Affected embryos would be discarded.
Conclusion:
Chloride conductance channels have dramatic potentials.
One channel can conduct from 1x10^6 to 1x10^8 ions per
second.8 This is particularly impressive when you consider
the fact that there are not many channels present on cells
to perform the required tasks. As a result of this, a
mutation of one channel or even a partial mutation of a
channel, that causes a decrease in the percentage of channel
openings, can exert a major effect.
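As a rough, back-of-the-envelope check of that figure (my own
arithmetic, not taken from reference 8), the quoted flux converts
to an electrical current by multiplying the number of singly
charged ions per second by the elementary charge, which lands in
the sub-picoampere to tens-of-picoamperes range typical of
single-channel recordings.
```python
# Convert the quoted chloride flux into a single-channel current.
ELEMENTARY_CHARGE = 1.602e-19   # coulombs carried by one singly charged ion

for ions_per_second in (1e6, 1e8):
    amps = ions_per_second * ELEMENTARY_CHARGE
    print(f"{ions_per_second:.0e} ions/s -> {amps * 1e12:.2f} pA")
# 1e+06 ions/s -> 0.16 pA
# 1e+08 ions/s -> 16.02 pA
```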
Even the mildest of cures altering the Cystic Fibrosis
Conductance Regulator in CF afflicted people would lead to
significant improvements in that individual's health. Since
cystic fibrosis is the most common genetic disorder,
particularly amongst Caucasians, in today's society, intense
research efforts towards its cure would be invaluable. When
will cystic fibrosis be completely cured? No one can say
for sure but, strong steps have already been taken towards
reaching this goal.
f:\12000 essays\sciences (985)\Genetics\Diabetes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Contents
Introduction
Overview of Diabetes Type I
What is diabetes type I
Health implications of diabetes type I
Physical Activity
What is physical activity?
Why do we need physical activity in our lives?
Physical Activity and Diabetes (Epidemiology)
Conclusion
Bibliography
Introduction
For our seminar topic "physical activity and disease" we chose diabetes as the focus of our
research.
Since diabetes is such a complex disease with many different forms, we decided to focus on
diabetes type I. This is known as insulin-dependent diabetes mellitus (IDDM). This type of
diabetes includes people who are dependent on injections of insulin on a daily basis in
order to satisfy the body's insulin needs; they cannot survive without these injections.
OVERVIEW OF DIABETES TYPE I
What is diabetes type I?
In order to understand the disease we firstly need to know about insulin. Insulin is a
hormone. The role of insulin is to convert the food we eat into various useful substances,
discarding everything that is wasteful.
It is the job of insulin to see that the useful substances are put to best use for our
well-being. The useful substances are used for building cells, are made ready for immediate
expenditure as energy and also stored for later energy expenditure.
The cause of diabetes is an absolute or relative lack of the hormone insulin. As a result
of this lack of insulin, the processes that convert the foods we eat into various useful
substances do not occur.
Insulin comes from the beta cells which are located in the pancreas. In the case of
diabetes type I almost all of the beta cells have been destroyed. Therefore daily
injections of insulin become essential to life.
Health implications of diabetes type I
One of the products that is of vital importance in our bodies is glucose, a simple
carbohydrate sugar which is needed by virtually every part of our body as fuel to function.
Insulin controls the amount of glucose distributed to vital organs and also the muscles. In
diabetics, because insulin is lacking and the delivery of glucose to the different body parts
is therefore uncontrolled, they face death if they don't inject themselves with insulin daily.
Since strict monitoring of diabetes is needed for the control of the disease, little room
is left for carelessness. As a result diabetic patients are susceptible to many other
diseases and serious conditions if a proper course of treatment is not followed.
Other diseases a diabetic is open to: Cardiovascular disease, stroke, Peripheral artery
disease, gangrene, kidney disease, blindness, hypertension, nerve damage, impotence etc.
Basically there is an increased incidence of infection in diabetic sufferers. Therefore
special care needs to be taken to decrease the chances of getting these other serious
diseases.
PHYSICAL ACTIVITY
What is physical activity?
Bouchard (1988) states that physical activity is any bodily movement produced by skeletal
muscles resulting in energy expenditure. Therefore this includes sports and leisure
activities of all forms.
Why do we need physical activity in our lives?
Physical activity and exercise help tune the "human machine", our bodies.
Imagine a car constantly driven, stopping only for fuel. It would be a candidate for all
sorts of damage, rusting, oil leaks and dehydration, and the chances are it would die
in the middle of the road not long after. This is what the body would be like if we didn't
exercise at all. We would be, and as a result of today's lifestyle many of us are, the
perfect target for all kinds of diseases and infections.
For those of us who are carriers of some disease or illness, we are still encouraged to
exercise by our physicians if we have the strength to. This is to help make our organs,
muscles, bones and arteries more efficient and better equipped to fight against the disease
or illness. This is our way of counter attacking. And if we are still healthy then we
reduce the chances of getting an illness or a disease.
PHYSICAL ACTIVITY AND DIABETES (EPIDEMIOLOGY)
Recently insulin injections have become available to dependent patients. However in the
pre-insulin era physical exercise was one of the few therapies available to physicians in
combating diabetes.
For an IDDM carrier to benefit from exercise they need to be well aware of their body and
the consequences of exercising.
If an IDDM carrier has no real control over their situation and just exercises without
considering their diet, time of insulin intake, type of exercise, duration of the exercise
and the intensity, then the results can be very hazardous to the patient.
In the first journal article that I used for this part of the research, Sutton (1981)
reported an investigation of "drugs used in metabolic disorders". The article is designed
to provide some background information on previous beliefs and research conducted early
this century, as well as his own investigations conducted during the beginning of the
1980s. He compared the results and came to the same conclusion as the investigations
done early in this century.
Sutton's findings show that the decrease in blood glucose following an insulin injection was
magnified when the insulin was followed by physical activity/exercise (see figure 1). This
shows that if a person gets involved in physical activity or exercise after insulin the
level of glucose drops dramatically. This leads to symptoms of hypoglycemia. The reason
this occurs is that glucose uptake by muscles increases during exercise, in spite of no
change or even a diminishing plasma insulin concentration. As a result of this type of
information we know now that if a patient is not controlled through a good diet and program
then they could put themselves in danger. A person who might be poorly maintained and
ketotic will become even more ketotic and hypoglycemic.
Good nutrition is of great importance to any individual especially one that exercises. In
the case of diabetes even more consideration must go into the selection of food before and
after exercise. Doctors suggest large intakes of carbohydrates before exercise for diabetes
carriers to meet the glucose needs of the muscles.
The second article that I used was that of Konen et al., who conducted testing and
research on "changes in diabetic urinary and transferrin excretion after moderate
exercise". This article was a report of the way the research was conducted and
its findings.
The researchers found that urinary proteins, particularly albumin, increase in urinary
excretion after moderate exercise. Albumin, which is associated with micro- and
macrovascular diseases in diabetic patients, was found to increase significantly in IDDM
patients, while remaining normal in non-diabetics. (See tables 1 and 2 for results.)
These results are not conclusive enough to say that exercise causes other micro-
and macrovascular diseases in diabetics. Since albumin is not associated with any disease
in non-diabetics, the same may be the case for diabetics as well. However, further
research is required to find out why such a significant increase occurs in diabetic
patients and what it really means.
It is obvious that there are many very complicated issues associated with diabetes which
cannot be explained at this stage. Therefore much more research is required, and it is only
a matter of time before these complications are resolved.
Although there is no firm evidence to suggest that exercise will improve or worsen
diabetes, it is still recommended by physicians.
Aristotle and the Indian physician, Sushruta, suggested the use of exercise in the
treatment of diabetic patients as early as 600 B.C. During the late last century and early
this century many physicians claimed that the need for insulin decreased in exercising
patients.
The benefits of exercise in non-diabetic individuals are well known, for example a reduced
risk of heart disease. This makes exercise very important to diabetic carriers since they
are at a greater risk of getting heart disease than non-diabetics.
Unquestionably, it's important for diabetics to optimise cardiovascular and pulmonary
parameters as it is for non-diabetic individuals. Improved fitness can improve one's sense
of well-being and ability to cope with physical and psychological stresses that can be
aggravated in diabetes.
In well controlled exercise programs the benefits are many, as shown on table 3.
CONCLUSION
In conclusion we can see that although there are many factors that need to be considered when
a diabetic person exercises, still there are many benefits when an IDDM carrier controls
and maintains a good exercise program. The risks of other diseases such as heart disease and
obesity are reduced.
Bibliography
1. Sutton, J.R, (1981), Drugs used in metabolic disorders, Medicine and Science in Sports
and Exercise, Vol 13, pages 266-271.
2. Konen, J.C, (1993), Changes in diabetic urinary transferrin excretion after moderate
exercise, Medicine and Science in Sports and Exercise, pages 1110-1114.
3. Bouchard, C, (1990), Exercise, Fitness and Health, Human Kinetics Publishers.
4. Burke, E.J, (1980), Exercise, Science and Fitness, Mouvement Publishers.
5. Sanborn, M.A, (1980), Issues in Physical Education, Lea and Febiger.
6. Marble, A, (1985), Joslin's Diabetes Mellitus, Twelfth Edition, Lea and Febiger.
7. Kilo, C, (1987), Diabetes - The facts that let you regain control of your life, John
Wiley and Sons, Inc.
8. Seefeldt, V, (1986), Physical Activity and Well-being, American Alliance for Health,
Physical Education, Recreation and Dance.
f:\12000 essays\sciences (985)\Genetics\Evolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS
INTRODUCTION
DARWINIAN THEORY OF EVOLUTION
THE THEORY OF BIOLOGICAL EVOLUTION: CONTRIBUTING ELEMENTS
WALLACE'S CONTRIBUTIONS
HARDY-WEINBERG PRINCIPLE
COMPARISON: LAMARCK vs. DARWIN
DARWIN'S INFLUENCES
METHODS OF SCIENTIFIC DEDUCTION
LIMITS TO DARWIN'S THEORY
MORPHOLOGICAL & BIOLOGICAL CONCEPTS
BIO-EVOLUTION: POPULATION vs. INDIVIDUALS
MECHANISMS FOR GENETIC VARIATION
GENETIC VARIATION AND SPECIATION
DARWIN'S FINCHES
SPECIATION vs. CONVERGENT EVOLUTION
CONCEPT OF ADAPTATION
PUNCTUATED EQUILIBRIUM
VALUE/LIMITATIONS: THE THEORY OF BIOLOGICAL EVOLUTION
ALTERNATE EXPLANATIONS OF BEING
CONCLUSIONS
INTRODUCTION
Theories explaining biological evolution have been bandied about since the ancient Greeks, but it was not until the Enlightenment of the 18th century that widespread acceptance and development of this theory emerged. In the mid 19th century the English naturalist Charles Darwin - who has been called the "father of evolution" - conceived of the most comprehensive findings about organic evolution ever1. Today many of his principles still underlie the modern interpretation of evolution. I've assessed and interpreted the basis of Darwin's theories on evolution, incorporating a number of other factors concerning evolutionary theory in the process. Criticism of Darwin's conclusions abounds rather more than is usually acknowledged; however, Darwin's findings marked a revolution of thought and social upheaval unprecedented in Western consciousness, challenging not only the scientific community but the prominent religious institution as well. Another revolution in science of a lesser nature was also spawned by Darwin, namely the remarkable simplicity with which his major work The Origin of the Species was written - straightforward English that anyone capable of a logical argument could follow - also unprecedented in the scientific community (compare this to Isaac Newton's horribly complex work, which took the scientific community years to interpret2). Evolutionary and revolutionary in more than one sense of each word. Every theory mentioned in the following reading in fact falls back on Darwinism.
DARWINIAN THEORY OF BIOLOGICAL EVOLUTION
The modern conception of species and the idea of organic evolution had been part of Western consciousness since the mid-17th century (a la John Ray)3, but wide-ranging acceptance of this idea, beyond the bounds of the scientific community, did not arise until Darwin published his findings in 18594. Darwin first developed his theory of biological evolution in 1838, following his five-year circumglobal voyage in the southern tropics (as a naturalist) on the H.M.S. Beagle, and his perusal of one Thomas Malthus's An Essay on the Principle of Population, which proposed that environmental factors such as famine and disease limited human population growth5.
This had direct bearing on Darwin's theory of natural selection, furnishing him with an enhanced conceptualization of the "survival of the fittest" - the competition among individuals of the same species for limited resources - the "missing piece" to his puzzle6. For fear of contradicting his father's beliefs, Darwin did not publish his findings until he was virtually forced to after Alfred Wallace sent him a short paper almost identical to his own extensive works on the theory of evolution. The two men presented a joint paper to the Linnaean Society in 1858 - Darwin published a much larger work ("a mere abstract of my material"), Origin of the Species, a year later, a source of undue controversy and opposition (from pious Christians)7, but a remarkable development for evolutionary theory. Their findings basically stated that populations of organisms and individuals of a species were varied: some individuals were more capable of obtaining mates, food and other means of sustenance, consequently producing more offspring than less capable individuals. Their offspring would retain some of these characteristics, hence a disproportionate representation of successive individuals in future generations. Therefore future generations would tend to have the characteristics of the more accommodating individuals8. This is the basis of Darwin's theory of natural selection: those individuals incapable of adapting to change are eliminated in future generations, "selected against". Darwin observed that animals tended to produce more offspring than were necessary to replace themselves, leading to the logical conclusion that eventually the earth would no longer be able to support an expanding population. As a result of increasing population however, war, famine and pestilence also increase proportionately, generally maintaining a comparatively stable population9. Twelve years later, Darwin published a two-volume work entitled The Descent of Man, applying his basic theory to a comparison between the evolutionary nature of man and animals and how this related to the socio-political development of man and his perception of life. "It is through the blind and aimless progress of natural selection that man has advanced to his present level in love, memory, attention, curiosity, imitation, reason, etc. as well as progress in "knowledge morals and religion"10. Here originated the classic idea of the evolution of man from ape, specifically where he contended that Africa was the cradle of civilization. This work also met with opposition, but because of the impact of his "revolutionary" initial work this opposition was comparatively muted11. A summary of the critical issues of Darwin's theory might be abridged into six concise points as follows: 1 Variation among individuals of a species does not indicate deficient copies of an ideal prototype as suggested by the platonic notion of Eidos. The reverse is true: variation is integral to the evolutionary process. 2 The fundamental struggle in nature occurs within a single-species population to obtain food, interbreed, and resist predation. The struggle between different species (ie. fox vs. hare) is less consequential. 3 The only variations pertinent to evolution are those which are inherited. 4 Evolution is an ongoing process which must span many moons to become detectably apparent. 5 Complexity of a species may not necessarily increase with the evolutionary process - it may not change at all, or may even decrease.
6 Predator and prey have no underlying purpose for maintenance of any type of balance - natural selection is opportunistic and irregular12. THE THEORY OF BIOLOGICAL EVOLUTION: CONTRIBUTING ELEMENTS The scientific range of biological evolution is remarkably vast and can be used to explain numerous observations within the field of biology. Generally, observation of any physical, behaviourial, or chemical change (adaptation) over time owing directly to considerable diversity of organisms can be attributed to biological evolution of species. It might also explain the location (distribution) of species throughout the planet. Naturalists can hypothesize that if organisms are evolving through time, then current species will differ considerably from their extinct ancestors. The theory of biological evolution brought about the idea for a record of the progressive changes an early, extinct species underwent. Through use of this fossil record paleontologists are able to classify species according to their similarity to ancestral predecessors, and thereby determine which species might be related to one another. Determination of the age of each fossil will concurrently indicate the rate of evolution, as well as precisely which ancestors preceded one another and consequently which characteristics are retained or selected against. Generally this holds true: probable ancestors do occur earlier in the fossil record, prokaryotes precede eukaryotes in the fossil record. There are however, significant "missing links" throughout the fossil record resulting from species that were, perhaps, never fossilized - nevertheless it is relatively compatible with the theory of evolution13. It can be postulated that organisms evolving from the same ancestor will tend to have similar structural characteristics. New species will have modified versions of preexisting structures as per their respective habitats (environmental situations). Certainly these varying species will demonstrate clear differentiation in important structural functions, however an underlying similarity will be noted in all. In this case the similarity is said to be homologous, that is, structure origin is identical for all descended species, but very different in appearance. This can be exemplified in the pectoral appendages of terrestrial vertebrates: Initial impression would be that of disparate structure, however in all such vertebrates four distinct structural regions have been defined: the region nearest the body (humerus connecting to the pectoral girdle, the middle region (two bones, radius and ulna are present), a third region - the "hand" - of several bones (carpal and metacarpal, and region of digits or "fingers". Current species might also exhibit similar organ functions, but are not descended from the same ancestor and therefore different in structure. Such organisms are said to be analogous and can be exemplified in tetrapods, many containing similar muscles but not necessarily originating from the same ancestor. These two anatomical likenesses cannot be explained without considerable understanding of the theory of organic evolution14. The embryology, or early development of species evolved from the same ancestor would also be expected to be congruent. Related species all share embryonic features. This has helped in determining reasons why development takes place indirectly, structures appearing in embryonic stage serve no purpose, and why they are absent in adults. 
All vertebrates develop a notchord, gill slits (greatly modified during the embryonic cycle) and a tail during early embryology, subsequently passing through stages in which they resemble larval amphioxus, then larval fishes. The notchord will only be retained as discs, while only the ear canal will remain of the gills in adults. Toothless Baleen whales will temporarily develop teeth and hair during early embryology leading to the conclusion that their ancestors had these anatomical intricacies. A similar pattern, exists in almost all animal organisms during the embryonic stage for numerous formations of common organs including the lungs and liver. Yet there is a virtually unlimited variation of anatomical properties among adult organisms. This variation can only be attributed to evolutionary theory15. Biological evolution theory insists that in the case of a common ancestor, all species should be similar on a molecular level. Despite the tremendous diversity in structure, behaviour and physiology of organisms, there is among them a considerable amount of molecular consistency. Many statements have already been made to ascertain this: All cells are comprised of the same elemental organic compounds, namely proteins, lipid and carbohydrates. All organic reactions involve the action of enzymes. Proteins are synthesized in all cells from 20 known amino acids. In all cells, carbohydrate molecules are derivatives of six-carbon sugars (and their polymers). Glycolysis is used by all cells to obtain energy through the breakdown of compounds. Metabolism for all cells as well as determination of definitude of proteins through intermediate compounds is governed by DNA. The structure for all vital lipids, proteins, some important co-enzymes and specialized molecules such as DNA, RNA and ATP are common to all organisms. All organisms are anatomically constructed through function of the genetic code. All of these biochemical similarities can be predicted by the theory of biological evolution but, of course some molecular differentiation can occur. What might appear as minor differentiation (perhaps the occurrence-frequency of a single enzyme) might throw species into entirely different orders of mammals (ie. cite the chimpanzee and horse, the differentiation resulting from the presence of an extra 11 cytochrome c respiratory enzymes). Experts have therefore theorized that all life evolve from a single organism, the changes having occurred in each lineage, derived in concert from a common ancestor16. Breeders had long known the value of protective resemblance long before Darwin or any other biological evolution theorists made their mark. Nevertheless, evolutionary theory can predict and explain the process by which offspring of two somewhat different parents of the same species will inherit the traits of both - or rather how to insure that the offspring retains the beneficial traits by merging two of the same species with like physical characteristics. It was the work of Mendel that actually led to more educated explanations for the value in protective resemblance17. The Hardy-Weinburg theory specifically, employs Mendel's theory to a degree to predict the frequency of occurrence of dominantly or recessively expressing offspring. Population genetics is almost sufficient in explaining the basis for protective resemblance. Here biological evolutionary theory might obtain its first application to genetic engineering18. 
Finally, one could suggest that species residing in a specific area might be placed into two ancestral groups: those species with origins outside of the area and those species evolving from ancestors already present in the area. Because the evolutionary process is so slow, spanning over considerable lengths of time, it can be predicted that similar species would be found within comparatively short distances of each other, due to the difficulty for most organisms to disperse across an ocean. These patterns of dispersion are rather complex, but it is generally maintained by biologists that closely related species occur in the same indefinite region. Species may also be isolated by geographic dispersion: they might colonize an island, and over the course of time evolve differently from their relatives on the mainland. Madagascar is one such example - in fact approximately 90 percent of the birds living there are endemic to that region. Thus as predicted, it follows that speciation is concurrent with the theory of biological evolution19. WALLACE'S CONTRIBUTIONS There is rarely a sentence written regarding Wallace that does not contain some allusion to Darwin. Indeed, perhaps the single most significant feat he preformed was to compel Darwin to enter the public scene20. Wallace, another English naturalist had done extensive work in South America and southeast Asia (particularly the Amazon and the Malay Archipelago) and, like Darwin, he had not conceived of the mechanism of evolution until he read (recalled, actually) the work of Thomas Malthus - the notion that "in every generation the inferior would be killed off and the superior would remain - that is the fittest would survive". When the environment changed therefore, he determined "that all the changes necessary for the adaptation of the species ... would be brought about; and as the great changes are always slow there would be ample time for the change to be effected by the survival of the best fitted in every generation". He saw that his theory supplanted the views of Lamarck and the Vistages and annulled every important difficulty with these theories21. Two days later he sent Darwin (leading naturalist of the time) a four-thousand word outline of his ideas entitled "On the Law Which has Regulated the Introduction". This was more than merely cause for Darwin's distress, for his work was so similar to Darwin's own that in some cases it parallelled Darwin's own phrasing, drawing on many of the same examples Darwin hit upon. Darwin was in despair over this, years of his own work seemed to go down the tube - but he felt he must publish Wallace's work. Darwin was persuaded by friends to include extracts of his own findings when he submitted Wallace's work On the Law Which Has Regulated the Introduction of New Species to the Linnaean Society in 1858, feeling doubly horrible because he felt this would be taking advantage of Wallace's position. Wallace never once gave the slightest impression of resentment or disagreement, even to the point of publishing a work of his own entitled Darwinism. This itself was his single greatest contribution to the field: encouraging Darwin to publish his extensive research on the issues they'd both developed22. He later published Contributions to the Theory of Natural Selection, comprising the fundamental explanation and understanding of the theory of evolution through natural selection. 
He also greatly developed the notion of natural barriers which served as isolation mechanisms, keeping apart not only species but also whole families of animals - he drew up a line ("Wallace's line") where the fauna and flora of southeast Asia were very distinct from those of Australasia23.
HARDY-WEINBERG PRINCIPLE
Prior to full recognition of Mendel's work in the early 1900's, quantitative models describing the changes of gene frequencies in populations had not been developed. Following this "rediscovery" of Mendel, four scientists independently and almost simultaneously contrived the Hardy-Weinberg principle (named after two of the four scientists), which initiated the science of population genetics: exploration of the statistical repercussions of the principle of inheritance as devised by Mendel. Stated concisely, the Hardy-Weinberg principle might be put as follows: the alternative forms of genes (alleles) in large populations will not change in their proportions from one generation to the next, unless disturbed by mutation, selection, emigration, or immigration of individuals. The relative proportion of genotypes in the population will also be maintained after one generation, provided these disturbing factors are absent and mating is random24. Through application of the Hardy-Weinberg principle the precise conditions under which change does not occur in the frequencies of alleles at a locus in a given population (a group of individuals able to interbreed and produce fertile offspring) can be formulated: the alleles of a locus will be at equilibrium. A species may occur in congruous correspondence with its population counterpart, or may consist of several diverse populations, physically isolated from one another25. In accordance with Mendelian principle, given two heterozygous parents carrying alleles A and B, the probability of the offspring retaining the prominent traits of either parent (AA or BB) is 25 percent each, while the probability of retaining half the traits of each parent (AB) is 50 percent. Thus allele frequencies in the offspring parallel those of the parents. Likewise, given one parent AB and another AA, allele frequencies would be 75 percent A and 25 percent B, while genotype frequencies would be 50 percent AA and 50 percent AB - the gametes generated by these offspring would also maintain the same ratio their parents initiated (given, of course, a maximum of two alleles at each locus). In true-to-life application, however, where numerous alleles may occur at any given locus, numerous possible combinations of gene frequencies are generated. Assume a population of 100 individuals (taken as 1), 30 of genotype AA and 70 of genotype BB. Applying the proportionate theory, only 30% (0.30) of the gametes produced will carry the A allele, while 70% (0.70) will carry the B allele. Assuming there is no preference for AA or BB individuals as mates, the probability of the (30% of total population) AA males mating with AA females is but 9% (0.3 x 0.3 = 0.09). Likewise the probability of a BB to BB match is 49%, and the remainder, matings between (30%) AA and (70%) BB individuals, totals a 42% frequency (2 x 0.3 x 0.7 = 0.42). The frequencies of the two alleles in a population are commonly denoted p and q respectively; the AA genotype then has frequency p^2, the BB genotype q^2, and the AB genotype 2pq. Using the relevant equation p^2 + 2pq + q^2 = 1, the same proportions are obtained. It can therefore be noted that the frequencies of the alleles in the population are unchanged. If one were to apply this equation to the next generation, similarly the genotype frequencies will remain unchanged per each successive generation.
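The bookkeeping in the paragraph above can be reproduced in a few lines. This is a minimal sketch using the same illustrative allele frequencies (p = 0.3 for A, q = 0.7 for B): one round of random mating gives genotype frequencies p^2, 2pq and q^2, and the allele frequencies recovered from those genotypes are unchanged, which is the equilibrium the principle describes.
```python
# Hardy-Weinberg sketch with the essay's illustrative numbers.
def genotype_frequencies(p, q):
    """Genotype frequencies after one round of random mating."""
    return {"AA": p * p, "AB": 2 * p * q, "BB": q * q}

p, q = 0.3, 0.7
genotypes = genotype_frequencies(p, q)
print(genotypes)                     # AA ~0.09, AB ~0.42, BB ~0.49
print(sum(genotypes.values()))       # p^2 + 2pq + q^2 = 1

# Allele frequencies recovered from the genotypes are unchanged,
# so further generations of random mating give the same proportions.
p_next = genotypes["AA"] + genotypes["AB"] / 2
q_next = genotypes["BB"] + genotypes["AB"] / 2
print(p_next, q_next)                # ~0.3 and ~0.7
```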
Generally speaking, the Hardy-Weinberg principle will not favour one genotype over another, producing the frequencies expected through application of this law. The integral relevance of the Hardy-Weinberg principle is its illustration of the frequencies expected where populations are not evolving. Deviation from these projected frequencies indicates that evolution of the species may be occurring. Allele and genotype frequencies are typically modified in each successive generation and are never in ideal Hardy-Weinberg equilibrium. These modifications may be the result of natural selection, but (particularly among small populations) may simply result from random circumstance. They might also arise from immigration of individuals from other populations where gene frequencies will be unique, or from individuals who do not randomly choose mates from their wide-ranged species26.
COMPARISON: LAMARCK vs. DARWIN
Despite the lack of respect Lamarckian theory was dealt at the hands of the early evolution-revolutionaries, the enormous influence it had on numerous scientists, including Lyell, Darwin and the developers of the Hardy-Weinberg theory, cannot be denied. Jean Lamarck, a French biologist, postulated the theory of an inherent faculty of self-improvement by his teaching that new organs arise from new needs, that they develop in proportion to how often they are used, and that these acquisitions are handed down from one generation to the next (conversely, disuse of existing organs leads to their gradual disappearance). He also suggested that non-living matter was spontaneously created into the less complex organisms, which would evolve over time into organisms of greater and greater complexity. He published his conclusions in 1802, then later (1809) released an expanded form entitled Philosophie zoologique. The English public was first exposed to his findings when Lyell popularized them with his usual flair for writing, but because the influential Lyell also openly criticized these findings they were never fully accepted27. Darwin's own theories were based on those of older evolutionists and the principle of descent with modification, the principle of direct or indirect action of the environment on an individual organism, and a wavering belief in Lamarck's doctrine that new characteristics acquired by the individual through use or disuse are transferred to its descendants. Darwin basically built around this theory, adding that variation occurs in the passage of each progressive generation. Lamarck's findings could be summarized by stating that it is the surrounding environment that has direct bearing on the evolution of species. Darwin instead contended that it was intra-species strife, "the will to power" or the "survival of the fittest"28. Certainly Lamarck was looking to the condition of the sexes: the significantly evolved difference in musculature between males and females can probably be more easily explained by Lamarckian theory than by Darwinian. There was actually quite a remarkable similarity between the conclusions of Darwin's grandfather, Erasmus Darwin, and Lamarck - Lamarck himself only mentioned Erasmus in a footnote, and with virtual contempt. The fact is that neither Lamarck nor Darwin ever proposed a means by which species traits were passed on, and although Lamarck is usually recalled as one of those hopelessly erroneous scientists of the past, it was merely the basis for his conclusions that was hopelessly out of its depth - the conclusions themselves were remarkably accurate29.
DARWIN'S INFLUENCES
In 1831 a young Charles Darwin received the scientific opportunity of a lifetime, when he was invited to take charge of the natural history side of a five-year voyage on the H.M.S. Beagle, which was to sail around the world, particularly to survey the coast of South America. Darwin's reference material consisted of the works of Sir Charles Lyell, a British geologist (he developed a concept termed uniformitarianism, which suggested that geological phenomena could be explained by prevailing observations of natural processes operating over great spans of time - he has been accused of synthesizing the works of others30), who was the author of geologic texts that were required reading throughout the 19th century, including Principles of Geology, which, along with his own findings (observing a large land shift resulting from an earthquake), convinced him of geological uniformitarianism, hypothesizing for example that earthquakes were responsible for the formation of mountains. Darwin faithfully maintained this method of interpreting facts - seeking explanations of past events by observing occurrences in present time - throughout his life31. The lucid writing style of Lyell and his straightforward conclusions influenced all of his work. When unearthing remains of extinct animals in Argentina he noted that their remains more closely resembled those of contemporary South American mammals than any other animals in the world. He noted "that existing animals have a close relation in form with extinct species", and deduced that this would be expected "if the contemporary species had evolved from South American ancestors" - not, however, if there existed an ideal biota for each environment. When he arrived on the Galapagos islands (islands having been formed at about the same time and characteristically similar), he was surprised to observe species unique to each respective island, particularly tortoises, which possessed sufficiently differentiated shells to tell them apart. From these observations he concluded that the tortoises could only have evolved on the islands32. Thomas Robert Malthus was an English economist and clergyman whose work An Essay on the Principle of Population led Darwin to a more complete understanding of density-dependent factors and the "struggle in nature". Malthus noted that there was potential for rapid increase in population through reproduction - but that food cannot increase as fast as population can, and therefore eventually there will be less food per person, the less able dying out from starvation or sickness. Thus did Malthus identify population growth as an obstacle to human progress, and he peddled abstinence and late marriage in his wake. For these conclusions he came under fire from the Enlightenment movement, which interpreted his works as opposing social reform33. Erasmus Darwin, grandfather of Darwin, was an unconventional, freethinking physician and poet who expressed his ardent preoccupation with the sciences through poetry. In the poem Zoonomia he initiated the idea that the evolution of an organism results from environmental implementation. This, coupled with a strong influence from the similar conclusions of Lamarck, shaped Darwin's perception of the environment's inherent nature to mould and shape evolutionary form34.
METHODS OF SCIENTIFIC DEDUCTION
Early scientists, particularly those in the naturalist field, derived most of their conclusions from observed, unproven empirical facts.
Without the means of logically explaining scientific theory, the hypothesis was introduced - an educated guess to be proven through experimentation. Darwin developed his theory of natural selection with a viable hypothesis, but predicted his results merely by observing that which was available to him. Following Lyell's teaching - using modern observations to determine what occurred in the past - Darwin developed theories that "only made sense": logical from the point of view of the human mind (meaning they were based on immediate human perception) but decidedly illogical from a purely scientific angle. It was by perusing the works of Malthus that Darwin finally hit upon his theory of natural selection - not actually questioning Malthus's conclusions because they fit so neatly into his own puzzle. Early development of logical, analytic scientific theory did not occur until the advent of the philosopher Rene Descartes in the mid-17th century ("I think therefore I am"35). Natural selection was shown to be sadly lacking where it could not account for how characteristics were passed down to new generations36. However, it did present enough evidence for rational thought to be applied to the theory. Thus scientists were able to develop fairly accurate conclusions with very limited means of divination. Opposition from an oppressive Judeo-Christian church allowed little room for science. Regardless, natural selection became the basis for all present forms of evolutionary theory.37
LIMITS TO DARWIN'S THEORY
Darwinism, while comparatively rational and well documented, nevertheless suffered from the usual problem that can be found in many logical scientific conclusions - namely, deliberate ignorance of facts which might modify or completely alter the conclusions of years of research. Many biologists were less than convinced by an evolutionary hypothesis that could not explain the mechanism of inheritance. It was postulated by others that offspring will tend to have a blend of their two parents' characteristics, the parents having a blend of characteristics from their ancestors, the ancestors having a blend of characteristics from their predecessors - leaving the final offspring with diluted, diminished desirable characteristics38. Thus did they believe that a dilution of desirable traits yielded ever more diluted desirable traits - these traits becoming decidedly muted. It was nearly two decades after Darwin's death that the Mendelian theory of the gene finally came to light at the turn of the century39. Because of this initial scepticism toward Darwin's natural selection, when Mendel's work became widely available biologists emphasized the importance of mutation over selection in evolution. Early Mendelian geneticists believed that continuous variation (such features as body size) hardly factored in the formation of new species - and perhaps had nothing to do with genetic control at all. Inferences on the gradual divergence of populations diminished in the wake of notions of significant mutations40. This gave rise to neo-Darwinian theory in the 1930's, the so-called "modern synthesis", which encompasses paleontology, biogeography, systematics and, of course, genetics. Geneticists have noted that acquired characteristics cannot, indeed, be inherited, while observing that continuous variation is inherited through the effects of many genes, and have therefore concluded that continuously distributed characteristics are also influenced by natural selection and evolve through time.
Modern synthesis, in other words, differs little from Darwinian theory, but it also incorporates current understanding of inheritance. Modern synthesis maintains that random mutations introduce variation into populations, with natural selection propagating favourable genes in greater proportions. Despite the revolutionary progress the discovery of the gene has made possible, neo-Darwinian theory is still based on the arbitrary assumption that the primary factor causing adaptive change in populations is natural selection41.
MORPHOLOGICAL & BIOLOGICAL CONCEPTS
Species have traditionally been described on the basis of their morphological characteristics. This has proven to be somewhat premature, to say the least: some organisms of extremely different form are quite similar in their genetic make-up. Males and females of many species develop more than a few characteristic physical differences, yet are indeed the same species. Likewise, some organisms appear quite morphologically similar but are completely incompatible. There are many species of budworm moth, nearly indistinguishable from one another - most of which do not interbreed42. The usual alternative is the biological species concept, which stresses interbreeding among the individuals of a population as the general criterion. An entire population might then be thought of as a single unit of evolution. However, similar difficulties arise in attempting a universal application of this concept. Because morphologically similar species occur in widely separated regions, it is virtually impossible to establish whether they could or could not interbreed. One might ask whether cactus finches from different Galapagos islands would interbreed - the answer may well be yes, but owing rather to the morphological similarities between them. Consider further asexually reproducing species, which can be defined by appearance alone: under the biological species concept each individual would have to be defined as a different biological species - a distinction which would be meaningless. There are also cases for which no real standard can be applied - the donkey and horse, for example, can mate and produce healthy offspring, mules, which are almost always sterile and therefore something the concept cannot define. Therefore, despite seeming ideal in its delimitation, the biological species concept cannot be employed in describing many natural species43. It is nonetheless a popular concept for theoretical discussions, since it can distinguish which populations might evolve through time completely independently of other similar populations. Species classification is therefore not defined by fixed principles, whether biological or morphological. The random nature of evolution itself is predictable perhaps only in one respect: that it remains virtually unpredictable. In accordance with the Hardy-Weinberg theory the proportion of irregularity should not necessarily increase, but because, by its own admission, this theory cannot be employed as a standard but merely to predict expected results, even it is limited in the face of this essentially random un-law of nature44.
BIO-EVOLUTION: POPULATION vs. INDIVIDUALS
According to the theory of evolution, all life, or most of it, originated from the evolution of a single ancestral line. All relatives - species descended from a common ancestor - by definition share a certain percentage of their genes. If nothing else, these genes are of a very similar nature. A species depends on the remainder of its population in developing characteristics which allow easier adaptability to a changing environment.
These modified genes will ultimately express themselves as new species or may be passed on to other populations within a given species. For these traits to be expressed only in individuals is certainly not going to benefit the species (i.e. the mule retains remarkable traits but cannot reproduce - mules are also a literal pain in the ass to generate). Nevertheless, should but one individual in a million retain a beneficial characteristic, the opportunity for it to be passed on is significantly increased. In short order, as per natural selection, highly adapted species can develop where they were dying out (over centuries to be sure, but dying out nonetheless) only an (evolutionarily) short span of time ago. Plant breeders especially know the value of the gene pool. They depend on the gene pool of the wild relatives of cultivated plants to develop strains that are well adapted to local conditions (here we refer to comparatively exotic plants). The gene pool is there for all compatible species (and that could be a large number down the line) to partake of - given the right random conditions, and the future for plant breeders brightens45.
MECHANISMS FOR GENETIC VARIATION
There are a number of known factors capable of changing the genetic structure of a population, each inconsistent with the Hardy-Weinberg principle. Three primary contributing factors - migration, mutation and selection - are referred to as systematic processes: the change in gene frequency they produce is comparatively predictable in both direction and quantity. The dispersive process of genetic drift is predictable only in quantity, not in direction. When a species is sectioned into diverse, geographically isolated populations, the populations will tend to evolve differently on account of the following accepted standards:
1 Mutations arise independently within each geographically isolated population.
2 The adaptive value of these mutations and gene combinations will differ for each population.
3 Different gene frequencies existed before the population was isolated and are therefore not representative of their ancestors.
4 During intervals of small population size gene frequencies will fluctuate unpredictably, forming a genetic "bottleneck" from which all successive organisms will arise46.
Gene frequencies can also be altered when a given population is exposed to external populations, the change in frequency determined by the proportion of immigrants to the resident population. Migration may be eliminated between two populations in regions of geographic isolation, which will in turn isolate the gene pools of each population. If this isolation develops over a sufficient span of time, the differences between the two gene pools may render them incompatible. The respective gene pools have then become reproductively isolated and are now defined as biologically different species. However, speciation (division into new species) does not arise exclusively from division of a population into new subgroups; other processes might be equally effective47. The primary source of genetic variability is mutation, which usually depletes a species' fitness but is sometimes beneficial. The ability of a species to survive is dependent on its store of genetic diversity, allowing generation of new genotypes with greater tolerance for a changing environment. However, some of the best adapted genotypes may still be unable to survive if environmental conditions are too severe.
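As a rough numerical sketch of the first systematic process described above - migration, where the change in gene frequency is proportional to the fraction of immigrants - consider one generation of gene flow into a resident population. All frequencies and the migration rate below are hypothetical example values, not figures from the text.

    # One generation of gene flow (migration) into a recipient population.
    # All frequencies and the migration rate are hypothetical example values.
    p_resident = 0.8   # allele frequency in the resident population
    p_migrant  = 0.3   # allele frequency among the immigrants
    m          = 0.1   # fraction of the next generation contributed by immigrants

    p_next = (1 - m) * p_resident + m * p_migrant
    print(round(p_next, 4))   # 0.75: the shift is proportional to m and to the difference in frequencies

The larger the immigrant fraction, or the greater the difference between the two gene pools, the further the recipient population is pulled away from its Hardy-Weinberg expectation.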
Unless new genetic material is obtained from outside the gene pool, evolution will have a limited range of tolerance for change. Generally speaking, spontaneous mutations occur whether they are required or not. This means many mutations are useless, even harmful, under current environmental conditions. These crippling mutations are usually weeded out or kept at low frequencies in the population through natural selection. The mutation rate for most gene loci is between one in 100 thousand and one in a million. Therefore, although mutations are the source of genetic variability, even without natural selection the changes they produce in a population would be unnoticeable and very slow. Eventually, if the only pressure affecting the locus is from mutation, gene frequencies will change and settle back to a comparative equilibrium48. The fundamental restriction on the validity of the Hardy-Weinberg equilibrium law is its assumption that population size is immeasurably large. Thus the dispersive process of genetic drift accounts for gene frequency alteration in situations of small populations. In such a situation inbreeding is unavoidable; hence the primary contributing factor to the change of gene frequencies through inbreeding (by natural causes) is genetic drift. The larger the sample size, the smaller the deviation will be from predicted values. The act of sampling gametes from a small gene pool has direct bearing on genetic drift. Evidence is observed via the random fluctuation of gene frequencies in each successive generation in small populations where systematic processes are not observed as contributing factors. From this, four basic assumptions have been made for idealized populations, as follows:
1 Mating and self-fertilization in respective subgroups of given populations are completely random.
2 Overlap of one generation with its successor does not occur, giving each new generation distinct characteristics.
3 In all generations and lines of descent the number of possible breeding individuals is the same.
4 Systematic factors such as migration, mutation and natural selection are absent49.
In small populations certain alleles, perhaps held as common to a species, may not be present. These alleles will have become randomly lost somewhere in the population in the process of genetic drift. The result is much less variability among small populations than among larger populations. If every locus is fixed in these small populations they will have no genetic variability, and will therefore be unable to generate new adaptive offspring through genetic recombination. The ultimate fate of such a population, if it remains isolated, is extinction50.
GENETIC VARIATION & SPECIATION
Through genetic variation new species will arise, in a process termed speciation. It is generally held that speciation occurs as two given species evolve their differences over large spans of time - these differences are defined as their genetic variation. The most popular model used to explain how species form is the geographic speciation model, which suggests that speciation occurs only when an initial population is divided into two or more smaller populations - differentiated by genetic variation through mutation, natural selection or genetic drift - geographically isolated (physically separated) from one another. Because they are isolated, gene flow (migration) cannot occur between the respective new populations51. These "daughter" populations will eventually adapt to their new environments through genetic variation (the process of evolution).
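The dispersive process of genetic drift described above can be illustrated with a minimal simulation: each generation, allele frequencies are re-sampled from a finite number of gametes, so they wander at random, and the smaller the population the larger the wander. The population sizes, starting frequency and generation count here are hypothetical.

    # Genetic drift: resample 2N gametes each generation from the current frequency.
    import random

    def drift(p, N, generations):
        """Return the allele frequency trajectory in a population of N diploids."""
        history = [p]
        for _ in range(generations):
            copies = sum(1 for _ in range(2 * N) if random.random() < p)
            p = copies / (2 * N)
            history.append(p)
        return history

    random.seed(1)                      # fixed seed so the illustration is reproducible
    print(drift(0.5, 10, 20))           # small population: frequency typically fluctuates widely, may fix or be lost
    print(drift(0.5, 10000, 20))        # large population: frequency stays close to 0.5

This is the sense in which drift is predictable only in magnitude and not in direction: the expected size of the fluctuation shrinks as N grows, but whether a given allele rises, falls, or is lost entirely is a matter of chance.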
If the environments of the isolated populations are different, then they would be expected to adapt to different conditions and therefore evolve differently. According to the model of geographic speciation, the daughter populations will eventually evolve sufficiently to become incompatible with one another (and therefore unable to interbreed or produce viable offspring). As a result of this incompatibility, gene flow could not effectively occur even if the populations were no longer geographically isolated. The differentiated but closely related species are now termed a species pair, or species group. Eventually differentiation will progress far enough for them to be defined as different species. While divergence is a continuing process, it does not necessarily occur at a constant rate - fluctuating between extremely rapid and very slow rates of evolution. Two standard routes have been postulated for the occurrence of geographic speciation: i) Individuals from a species might populate a new, isolated region of a given area (such as an island). Their offspring would evolve geographically isolated from the original species; eventually, geographical isolation from the population on the mainland would lead to the evolution of distinguishable characteristics. ii) Individuals might, alternatively, become geographically isolated as physical barriers arise or as the range of the species, or of individuals within a population, diminishes52. However, neither of these forms of speciation through geographic isolation and consequent genetic variation has been observed or studied directly, because of the time spans involved and the general difficulty of unearthing the desired fossils. Evidence for this form of speciation is therefore indirect and based on postulated theory53.
DARWIN'S FINCHES
The finches of the Galapagos islands provided Darwin with an important lead towards the development of his theory of evolution. They were (and are) a perfect example of how isolated populations could evolve. Here Darwin recognized that life branched out from a common prototype in what is now called adaptive radiation. There were no indigenous finches on the islands when the ancestral birds arrived - some adapted to tree-living, others to cactus habitat, others to the ground. The differentiation was comparatively small, and yet there evolved fourteen species of bird classified under six separate genera, each visibly different only in the characteristics of its beak54. Joint selection pressure equations have been used to calculate the change in gene frequency, and the consequent rate of mutation, resulting from the action of natural selection. Populations of Galapagos finches arrived at their islands from South America and were presented with varying sources of food. Only those individuals that evolved characteristics allowing them to more easily obtain food from those varying sources were not selected against. Populations were isolated on certain islands and had to adapt to different food sources. The result was adaptation to food (seeds) from trees, from the ground or from cactus-dominated areas. However, the migratory nature of these finches prompted them to emigrate to other islands, and therefore to interbreed with otherwise isolated populations of finches. The result has been variation on single specific characteristics, which retain certain properties owing to the particular islands the birds predominantly occupied.
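The "joint selection pressure equations" mentioned above are not given in the text; the standard single-locus selection recursion below is offered only as an illustrative stand-in, not as the equations the author or the finch studies actually used. The fitness values and starting frequency are hypothetical.

    # One-locus, two-allele selection: genotype fitnesses weight the contribution
    # of each genotype to the next generation.  Fitness values are hypothetical.
    def select(p, w_AA, w_Aa, w_aa, generations):
        for _ in range(generations):
            q = 1 - p
            w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa      # mean fitness of the population
            p = (p*p*w_AA + p*q*w_Aa) / w_bar             # frequency of A after selection
        return p

    # Selection against the aa genotype (say, birds whose beaks cannot handle the
    # available seeds are "selected against"): allele A rises over the generations.
    print(select(0.1, 1.0, 1.0, 0.6, 50))

The point of such a recursion is simply that, given fixed selection pressures, the change in gene frequency per generation is predictable, which is what makes selection a systematic rather than a dispersive process.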
When the population of immigrants was high enough, the gene pools of the diverse finch populations already occupying an island were modified enough that offspring would inherit some of the traits of otherwise isolated finch populations55. Nevertheless, these finches developed characteristics endemic to their particular habitat, and because finches tend to remain in groups rather than individual families, these particular characteristics became dominant enough for morphologically, and later even biologically, different characteristics to evolve. These discrepancies could only lead to greater genetic variation down the line. Eventually immigrants from the mainland, and even from other Galapagos islands, were completely incompatible with specific finch populations endemic to their respective islands56. Generally, selection pressure decreased as mutations resulting from systematic processes of genetic variation could no longer occur. This produced a significantly less versatile gene pool; the pool could still be modified, however, via genetic drift from individuals of other populations who had at some point evolved from the ancestors of the population in question. Thus the gene pool could be modified without really affecting the gene frequencies57 - joint pressures were therefore stabilized, along with the newly developed population.
SPECIATION vs. CONVERGENT EVOLUTION
Speciation is substantially more relevant to the evolution of species than convergent evolution. Through natural selection, similar characteristics and ways of life may be evolved by diverse species inhabiting the same region, in what is called convergent evolution - reflecting the similar selective pressures of similar environments. While separate populations of the same species occupying similar habitats may also show similar physical characteristics - due primarily to the environment rather than to their species origin - it should be noted that they descended from the same ancestor. A defining principle for the distinct natures of speciation and convergent evolution, put simply: speciation results from a common ancestor, convergent evolution results from any number of ancestors58. Morphologically similar populations resulting from the same ancestor may be compatible and able to produce viable offspring (if on some occasions not fertile offspring). Morphologically similar species resulting from different ancestors are never compatible with one another - even if they are virtual morphological twins. In fact, morphologically disparate populations of the same species may be compatible with one another - whereas species that merely resemble each other through convergent evolution would be more than incompatible; they may be predator and prey. Convergent evolution may only account for single specific physical characteristics of very disparate, unrelated species - such as the development of flipper-like appendages in the sea turtle (reptile), penguin (bird) and walrus (mammal)59.
CONCEPT OF ADAPTATION
If individuals were unable to adapt to changes in the environment they would be extinct in short order. Adaptability is often based on nuclear inheritance down the generations. Should an organism develop a resistance to certain environmental conditions, this characteristic may be passed down through the gene pool, and then, through natural selection, become dominant in all organisms of a given population. Bacteria are able to accomplish this feat at a remarkably fast rate. Most, if not all, forms of bacteria are compatible with one another - that is, able to exchange genetic information.
The speed at which bacteria reproduce is immeasurably faster than that of more complex, eukaryotic organisms. Bacteria have a much shorter lifespan as well - but because they can develop very quickly into large colonies given ideal conditions, it is easier to understand bacteria in clusters. Should a single bacterial organism develop a trait that slightly aids its resistance to destructive environmental conditions, it can pass its modified genetic structure on to half of a colony in a matter of hours. In the meantime the colony is quickly expanding, fully adapted to the environment - soon, however, it has grown beyond what can be accommodated. The population will drop quickly in the face of inadaptability. But that (previously mentioned) bacterial organism with the modified trait passes on information yielding new growth, allowing the colony to expand further. It is generally accepted that bacterial colonies will reach a maximum capacity - however, through adaptation the bacterial population will quickly expand once again60. Antibiotics are now sent to destroy the bacteria. Soon the colony is all but obliterated - and all that remains are a few choice bacterial organisms. However, if an otherwise isolated bacterium enters the system to exchange genetic information with the much smaller bacterial colony, and conditions are favourable, the bacteria expand again. Antibiotics are sent again to destroy this colony - but the outside bacterium, originating in another organism and having developed a resistance to this type of antibiotic, has provided much of the colony with the means of resistance to these antibiotics as well. Once again the bacterial culture has expanded, having resisted malignant exterior interlopers61. This is how bacteria develop, constantly exchanging genetic information, constantly able to adapt to innumerable harmful sources. The more destructive forces bacteria are exposed to, the more they develop resistance to, and surely many of the billions of bacteria could develop an invulnerability to any threatening exterior source given ideal environmental conditions.
PUNCTUATED EQUILIBRIUM
Recently the concept of punctuated equilibrium, as proposed by the American paleontologist Stephen Jay Gould, has been the subject of much controversy in the scientific world. Gould advanced the idea that evolutionary changes take place in sudden bursts, and that species are not modified for long periods of time once they are reasonably adapted to an altered environment62. This almost directly contradicts the older, established Darwinian notion that species evolve through phyletic gradualism, that is, that evolution occurs at a fairly constant rate. Adherents of the punctuated equilibrium model do not suggest that pivotal changes in morphology occur spontaneously, or become established in populations within only a few generations - they argue instead that the changes may occur within 100 to 1000 generations. It is difficult to determine which model more adequately describes what transpires over the course of speciation and evolution, due to gaps in the fossil record: 50 to 100 thousand years of strata often cover the deposits bearing fossils. Genetic make-up need not change much for rapid, discernible morphological alterations to be detected63. Impartial analysts of the two theories conclude that both are consistent with evolutionary theory. Their primary differences entail their emphasis on the importance of speciation in long-term evolutionary patterns within a lineage.
While phyletic gradualism emphasizes the significance of changes in a single lineage and the revision of species through slight deviation, punctuated equilibrium emphasizes the significance of alteration occurring during speciation, maintaining that local (usually small) populations adapt rapidly to local circumstance, producing diverse species - some of which acquire the means to supplant their ancestors and spread widely in important adaptive breakthroughs64. One must consider that Darwin was not aided by Mendelian theory. Had he been, he would surely have produced an entirely different theory of the inheritance of beneficial traits. Consider that mutations can presumably occur spontaneously, given the properly modified parent. It can therefore be argued that punctuated equilibrium is probably the more likely explanation, as it does take into account modern cell and genetic theory. Phyletic gradualism, while certainly extremely logical, is a theory which simply cannot encompass those circumstances in which significant change is recorded over comparatively short periods of time. The two are complementary to be sure, but perhaps one of the two distorts this complementary nature by formulating inaccurate assumptions.
VALUE/LIMITATIONS: THE THEORY OF BIOLOGICAL EVOLUTION
Whether or not the theory of evolution is useful depends on whether or not one values progress above the development of personal notions of existence. Certainly under the blanket of a superficial American Dream one would be expected to subscribe to the ideals that society, that the state, erects. Of course, these ideals focus on the betterment of society as a whole - which now, unfortunately, means power to the state. Everybody is thus caught up in progress, supposedly to "improve the quality of life", and has been somewhat enslaved by the notion of work. Work has become something of an idol; nothing can be obtained without work - for the state. Whether one agrees with the thoughtless actions of the elite or not, people are oppressed by conforming to ideals that insist upon human suffering. Some irresponsible, early religious institutions did just that, erecting a symbol of the people's suffering and forcing them to bow before it. Development of aeronautic, or even cancer, research contributes primarily to this ideal of progress. Development of such theories as biological evolution contributes nothing toward progress. It instills in people new principles, to dream and to develop an understanding of themselves and of that which surrounds them, freeing their will from that shuffling mass, stumbling as they are herded towards that which will reap for them suffering and pain. The state provides this soulless mass with small pretty trinkets along the way, wheedling and cajoling them with media images of how they should lead their lives - and the people respond with regrets. The modern theory of biological evolution is actually sadly lacking in explanation for exactly how characteristics are passed down to future generations. It is understood how nitrogen bases interact to form a genetic code for an organism - but how the modifications that the organism develops occur is unknown. Somehow the organism mutates to adapt to environmental conditions, and then presumably the offspring of this organism will retain these adaptations65.
Of course, biological evolution also cannot explain precisely how the first organisms developed. Generally, the theory accounts for energy and chemical interactions at a level consistent enough to establish a constant flow of said interactions - but even here it falls short. And what of phyletic gradualism? It is completely unable to explain the more sudden mutations that occur... for obvious reasons it cannot explain this (Darwin had no knowledge of genetics), but even punctuated equilibrium does not resolve this problem. There are surely numerous other problems which could be addressed, but these can be dealt with where opinion can be more educated.
ALTERNATE EXPLANATIONS OF BEING
Man, it would appear, has always sought meaning for his existence. Many theories of existence have been conceived and passed down through the ages. Institutions conferring single metaphysical and elemental viewpoints have been established, some of which have been particularly irresponsible and oppressive towards the people they were supposed to "enlighten". Most religious institutions have been used as political tools for the manipulation of the masses, going back to Roman days when the emperor Constantine absorbed Christianity into the Roman worship of the sun, Sol Invictus, as a means of subjugating the commoners to Roman doctrine. Generally religious institutions have exploited the people and have been used as excuses for torture, war, mass exterminations and general persecution and oppression of the very people they pretend to serve, telling the people they must suffer to reach ultimate transcendent fulfilment. Unfortunately this oppression continues in today's modern - even Western - world. There have actually been almost innumerable explanations for the physical presence of man - these explanations have merely been suppressed by the prevailing religious institutions for fear that they will be deprived of absolute power over the people... they're right.
CONCLUSIONS
Without Darwin, it can be concluded, reasonable interpretation of biological evolution simply would not exist. Natural selection, the process determining the ultimate survival of a new organism, remains the major contributing factor in even the most modern evolutionary theory. The evolutionary process spans hundreds of thousands of generations, organisms evolving through systematic and dispersive mechanisms of speciation. Recently, heated debate has surrounded whether characteristics are passed on in bursts of activity through punctuated equilibrium or at a constant rate through the more traditional phyletic gradualism66. The release of Mendelian theory into the scientific community filled the primary link missing from Darwin's theory - how biological characteristics were passed on to future generations. Applications of genetic theory to evolutionary theory, however, are somewhat limited. It is difficult to classify all species even through modern means of paleontology and application of the theory of organic evolution.
BIBLIOGRAPHY
1 Brent, Peter. Charles Darwin, A Man of Enlarged Curiosity. Toronto: George J. McLeod Ltd., 1981.
2 Dawkins, Richard. The Selfish Gene. New York: Paladin, 1978.
3 Farrington, Benjamin. What Darwin Really Said. New York: Schocken Books, 1966.
4 Galbraith, Don. Biology: Principles, Patterns and Processes. Toronto: John Wiley and Sons Canada Ltd., 1989, Un. 6: Evolution.
5 Glass, Bentley. Forerunners of Darwin 1745-1859. New York: Johns Hopkins Press, 1968.
6 Gould, S.J. Ever Since Darwin.
New York: Burnett Books, 1978.
7 Grolier Encyclopedia, New. New York: Grolier Publishing, Inc., 1991.
8 Haldane, J.B.S. The Causes of Evolution. London: Green and Co., 1982.
9 Leakey, Richard E. Mankind and Its Beginnings. New York: Anchor Press/Doubleday, 1978.
10 Miller, Jonathan. Darwin For Beginners. New York: Pantheon Books, 1982.
11 Moore, John A. Heredity and the Environment. New York: Oxford University Press, 1973.
12 Patterson, Colin. Evolution. London: British Museum of Natural History Press, 1976.
13 Random House Encyclopedia, The. New York: Random House Inc., 1987, p. 406-25.
14 Ridley, Mark. The Essential Darwin. London, Eng: Allen & Unwin, 1987.
15 Smith, J.M. On Evolution. London: Doubleday, 1972.
16 Stansfield, William D. Genetics, 2nd ed. New York: McGraw-Hill Book Company, 1983, p. 266-287.
17 Thomas, K.S. H.M.S. Beagle, 1820-1870. Washington: Oxford University Press, 1975.
ENDNOTES
f:\12000 essays\sciences (985)\Genetics\Fetal Alcohol Syndrome.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fetal Alcohol Syndrome
Fetal Alcohol Syndrome/Fetal Alcohol Effects is a problem running rampant all across America. Fetal Alcohol Syndrome is the result of pregnant women drinking alcohol. Through education, we can eradicate this expensive and debilitating disease that is plaguing our children and our country.
Fetal Alcohol Syndrome was first diagnosed about 25 years ago. A group of doctors at the University of Washington in Seattle coined the term Fetal Alcohol Syndrome in 1973 (Dorris 143). Prior to this, Fetal Alcohol Syndrome/Fetal Alcohol Effects children were misdiagnosed as problem children or learning disabled. Some were mistaken for bad kids and sent to homes for juvenile delinquents.
Fetal Alcohol Syndrome (FAS) is a grouping of defects that may occur in infants born to women who drink alcohol during pregnancy. Amy Nevitt states that FAS, the leading cause of mental retardation in the West, affects more than 8,000 babies in the United States every year.
FAS is a birth defect caused by a woman's consumption of alcohol while she is pregnant. FAS is one hundred percent preventable; however, because of their mothers' decision to drink alcohol during pregnancy, none of the thousands of affected babies had a chance to be born normal (13). As stated by the British Columbia Fetal Alcohol Community Action Guide (B.C. FAS), "Fetal Alcohol Syndrome is a condition affecting some children born to women who drank heavily during pregnancy" (7).
Fetal Alcohol Effects (FAE) is a term used to describe partial FAS. The B.C. FAS says the new term for describing FAE is Alcohol-Related Birth Defects (ARBD) (9). FAE is best described by the B.C. FAS, which says, "FAE has been used to imply a 'milder' form of FAS, but the cognitive and behavioral problems described by FAE (now partial FAS and ARND) can be very debilitating, causing life long disability which is not 'mild' or insignificant" (8).
As written in the B.C. FAS booklet:
"Partial FAS is the recommended term used to describe the cluster of problems facing those, who have: [SIC] evidence of some of the characteristic facial abnormalities associated with FAS [,] evidence of one other component of FAS, i.e. growth deficiency or brain damage, including behavioural and cognitive problems when it is known that there was significant exposure to alcohol in utero". (8)
We ask ourselves what causes Fetal Alcohol Syndrome/Fetal Alcohol Effects. Amy Nevitt articulates, "Alcohol is a teratogenic drug. This means that it can cause birth defects" (15). The more information we have about alcohol and its effects, the sooner we can help stop this debilitating disease. Lyn Weiner and Barbara A. Morse declare:
Ethanol has the potential to cause a greater variety of metabolic and physiologic disturbances of fetal development than any other commonly ingested substance. The clinical and experimental literature provides an ever-increasing understanding of the mechanisms underlying alcohol's adverse effects on fetal development. Effects vary with each gestational stage. Alcohol consumption throughout pregnancy is associated with the most severe outcome. The demonstrated benefits when heavy drinking ceases reinforces the value of providing supportive therapy to women at risk. The prenatal setting is an important site for prevention of alcohol-related birth defects. Identification and treatment of problem drinking pregnant women holds the greatest promise for the prevention of alcohol-related birth defects. (145)
As stated by Weiner and Morse, experimental studies and clinical observations have shown "structural growth and behavioral defects in association with maternal ethanol exposure". The consumption of alcohol has since been widely acknowledged to be a risk factor for an adverse pregnancy outcome (126-27).
Drinking when pregnant causes damage to the fetus. According to Nevitt, "The amount of damage depends on the frequency, quantity, and timing of the mother's alcohol consumption" (18). As for how many pregnant women drink while pregnant, Nevitt states, "About 16 percent of pregnant women drink enough alcohol to be at risk for bearing children with some negative effects" (13).
It is unclear how much alcohol a pregnant woman can safely consume during pregnancy (Nevitt 17, 18). The best course is to abstain from drinking any alcoholic beverages while pregnant. Signs of FAS include low birth weight and an abnormally small head; facial deformities such as small and narrow or very round eyes, a flattened midface and widely spaced nose, a very narrow upper lip, and oddly set ears; and mild to moderate mental retardation. As FAS children develop, they also often exhibit behavioral and cognitive problems. In some cases the defects are severe and are accompanied by other systemic abnormalities. When some but not all of these signs are observed, they are more generally known as fetal alcohol effects (FAE) (Weiner and Morse 128).
Children with FAE are less likely to be diagnosed early in life; because they are identified as needing help much later, they do not receive treatment as early as FAS children, and by then it is sometimes too late to help them.
There are many adverse effects of drinking alcohol. As early as 1886, doctors noted the frequency of spontaneous abortions reported by women who were alcoholics (Abel 47). As illustrated in the article by Rana Shaskin, the birth defects associated with FAS occur in groupings: central nervous system damage, growth deficiency, and physical abnormality (1). The average birth weight of an FAS baby is almost three pounds lighter than the median birth weight for all infants born in the United States (Abel 55).
Other adverse effects of alcohol on the fetus include premature birth. As Abel says, prenatal death and neurological disorders in surviving children can be connected to the pregnant mother's alcohol consumption (52). There are many abnormalities associated with FAS. The child can have skeletal, cardiac, liver, kidney, and urinary defects, along with neural-tube defects, genital abnormalities, and tumors, due to alcohol use by the mother (46). Research shows that people with FAS have an average IQ of 65, with scores ranging from 16 to 105 (83).
Alcohol causes serious damage to the central nervous system (CNS). As Weiner implies, "The damage to the CNS may be further complicated by a home in which one or both parents is alcoholic" (131). "The most common sign of alcohol's effects on fetal development is retarded growth in weight, length and head circumference, both in utero and during childhood" (129).
As Fetal Alcohol Syndrome/Fetal Alcohol Effects children get older, their problems only multiply. As stated by Michael Dorris, FAS students do not seem to try to learn or finish their school assignments; usually FAS children show no drive or persistence towards schoolwork (205). According to Shaskin, as FAS children grow into adolescence their problems increase: they drop out of school and have more incidents of behavioral problems (4).
People with FAS have a number of learning disabilities, some of which include difficulty in generalizing information and matching words with behaviors. They also have trouble mastering new skills and remembering things they have recently learned, e.g. tying a knot (Nevitt 26-8).
People with FAS also have a "spotty memory," where they may remember, for example, something that happened a year ago, but cannot remember the day before. In addition, they have an "inflexibility of thought," where a person with the syndrome can only understand a concept expressed in one way. Once that concept has been learned that one way, it is hard for the individual to understand it in any other context. A difficulty in predicting outcomes is another disability shared by FAS victims. For example, a child with FAS might not be able to foresee what will happen when he knocks over a cup of juice. A child with FAS often tends to make the same mistake repeatedly. Another disturbing trait shared by FAS-affected people is a difficulty distinguishing fact from fantasy. A person with FAS could be watching a movie and go on thinking that what is going on in the movie is actually going on in real life. People with FAS also have an alarming difficulty distinguishing friends from strangers: they may meet someone once for about five minutes and already consider them a friend, which could be potentially dangerous (Nevitt 26-8).
Fetal Alcohol Syndrome/Fetal Alcohol Effects babies are very stressful to care for and require a great deal of caring and understanding. Nevitt summed up many of the difficulties faced by the parent of an FAS baby, because there are many unique problems associated with FAS: FAS/FAE babies do not thrive as well as normal babies; they have poor reflexes, and at times they have no appetite. It can sometimes take hours to feed an FAS baby four ounces of milk (21).
Fetal Alcohol Syndrome/Fetal Alcohol Effects people require supervision and firm guidance throughout life. As Dorris says, FAS caretakers must provide a structured environment for the Fetal Alcohol Syndrome/Fetal Alcohol Effects person. Any violation must be corrected on the spot, and consistency is a must. Clear and simple instructions that are set in stone are what work best (247). The reasons for so much supervision of Fetal Alcohol Syndrome/Fetal Alcohol Effects people are clear: without supervision and a good and understanding caretaker, life would be very hard and unfair for a Fetal Alcohol Syndrome/Fetal Alcohol Effects person. Tanner-Halverson says FAS adults need a structured environment to do well and live a productive life. FAS adults need guidance because they are still easily distracted and forgetful (1B).
At this time there is no known cure for Fetal Alcohol Syndrome and Fetal Alcohol Effects (NOFAS 2A). The best that society can do is to prevent Fetal Alcohol Syndrome and Fetal Alcohol Effects by educating and informing everyone and anyone who will listen about the adverse effects alcohol can have on babies and on society. The medical field is the most important community to educate, for the obvious reason that its members are the people who will detect and treat Fetal Alcohol Syndrome and Fetal Alcohol Effects babies (NOFAS 1A).
The next group we should target is educators: teach the country's educators the when, where, what, why, and how of handling a Fetal Alcohol Syndrome or Fetal Alcohol Effects person. Educators in our elementary and high schools should be able to educate our children on the effects of alcohol on the fetus, given the rising rate of teen pregnancies (The Arc 2C).
As stated by Patricia Tanner-Halverson, "Keys to working successfully with Fetal Alcohol Syndrome/Fetal Alcohol Effects children are structure, consistency, variety, brevity and persistence". Another group we should target is women who are at risk of having children with Fetal Alcohol Syndrome/Fetal Alcohol Effects, informing and educating them on the dangers of drinking while pregnant and showing them the consequences and the unnecessary hardships that their baby may have to endure because of their drinking alcohol.
Education, along with intervention and assistance from the community, is what will help stop FAS/FAE - for example, by providing support groups for women who are alcoholic and pregnant, and support groups for parents, foster parents, and caretakers of Fetal Alcohol Syndrome/Fetal Alcohol Effects people.
The government could do much more for Fetal Alcohol Syndrome/Fetal Alcohol Effects people. Government could require that people in the medical field be trained for a set number of hours on the subject of Fetal Alcohol Syndrome/Fetal Alcohol Effects.
Education of the medical field is very important (Fetal Alcohol Syndrome Public Awareness Campaign 1979 206). Government could also allocate monies and give grants for research into, and care of, Fetal Alcohol Syndrome/Fetal Alcohol Effects people. More research is needed to find out whether other drugs can cause Fetal Alcohol Syndrome/Fetal Alcohol Effects (The Arc 2C).
If at all possible, intervention to prevent alcohol from affecting the fetus should happen as early as possible; the first trimester is when most damage is thought to occur. This makes early intervention the best chance of stopping Fetal Alcohol Syndrome/Fetal Alcohol Effects from happening to the unborn child.
To help prevent Fetal Alcohol Syndrome/Fetal Alcohol Effects, the father of the unborn child must also be educated on the effects alcohol could have on the baby. It would be a lot easier for the pregnant woman to refrain from drinking if the father did not drink during the pregnancy. Armed with knowledge of the effects of Fetal Alcohol Syndrome/Fetal Alcohol Effects, perhaps he would provide more support to the expectant mother.
Fetal alcohol exposure has lifelong effects and consequences that are not restricted to any one race or socio-economic group. Fetal Alcohol Syndrome/Fetal Alcohol Effects does not go away: brain damage is permanent, and birth defects are permanent. Mental retardation is permanent and irreversible, and behavioral problems are permanent; all of these problems associated with Fetal Alcohol Syndrome/Fetal Alcohol Effects are forever, and once alcohol has done the damage there is no recovery.
However, through education we can beat Fetal Alcohol Syndrome/Fetal Alcohol Effects by educating and assisting women who are of childbearing age. Simply put, if you're pregnant, don't drink. If you need help to quit, there are people waiting to help.
Works Cited
Abel, Ernest L. Fetal Alcohol Syndrome. Oradell, New
Jersey: Medical Economics, 1990
British Columbia FAS Community Action Guide. British Columbia:
1997.
Chasnoff, Ira J. Drugs, Alcohol, Pregnancy and Parenting.
Boston: Kluwer Academic, 1988.
Dorris, Michael. The Broken Cord. New York: Harper & Row,
1989.
The Arc, "Facts about alcohol use during pregnancy." 1997
http://www.thearc.org/faqs/fas (26 May 1998).
NOFAS, "Fetal Alcohol Syndrome is the name given to a group of
physical and mental birth defects that is the direct
result of a woman's drinking alcohol during
pregnancy." 1997 http://www.nofas.org/what.htm (26 May
1998).
Nevitt, Amy. Fetal Alcohol Syndrome. New York: Rosen,
1996
Shaskin, Rana. Fetal Alcohol Syndrome/Effects. Vancouver:
1994.
Tanner-Halverson, Patricia. "Strategies for parents and
Caregivers of FAS and FAE children." 1997 http://www.nofas.org/strategy.htm (26 May 1998).
United States. The Fetal Alcohol Syndrome Public Awareness
Campaign 1979. Progress Report Concerning The Advance
Notice of Proposed Rulemaking on Warning Labels on
Containers of Alcoholic Beverages and Addendum.
Washington: Department of Treasury, 1979
f:\12000 essays\sciences (985)\Genetics\Heart Problems.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CONTENTS
Introduction
The Human Heart
Symptoms of Coronary Heart Disease
Heart Attack
Sudden Death
Angina
Angina Pectoris
Signs and Symptoms
Different Forms of Angina
Causes of Angina
Atherosclerosis
Plaque
Lipoproteins
Lipoproteins and Atheroma
Risk Factors
Family History
Diabetes
Hypertension
Cholesterol
Smoking
Multiple Risk Factors
Diagnosis
Drug Treatment
Nitrates
Beta-blockers
Calcium antagonists
Other Medications
Surgery
Coronary Bypass Surgery
Angioplasty
Self-Help
Type-A Behaviour Pattern
Cardiac Rehab Program
Conclusion
Diagrams and Charts
Bibliography
INTRODUCTION
In today's society, people are gaining medical knowledge at quite a fast pace. Treatments, cures, and vaccines for various diseases and disorders are being developed constantly, and yet coronary heart disease remains the number one killer in the world. The media today concentrate intensely on drug and alcohol abuse, homicides, AIDS and so on. What many people do not realize is that coronary heart disease actually accounts for about 80% of all sudden deaths. In fact, the number of deaths from heart disease approximately equals the number of deaths from cancer, accidents, chronic lung disease, pneumonia and influenza, and others, COMBINED. One of the symptoms of coronary heart disease is angina pectoris. Unfortunately, many people do not take it seriously, and thus do not realize that it may lead to other complications, and even death.
THE HUMAN HEART
In order to understand angina, one must know about the heart itself. The human heart is the hardest-working muscle in the body. A double pump system, the heart consists of two pumps side by side, which pump blood to all parts of the body. Its steady beating maintains the flow of blood through the body day and night, year after year, non-stop from birth until death. The heart is a hollow, muscular organ slightly bigger than a person's clenched fist. It is located in the centre of the chest, under the breastbone (sternum), but it is slanted slightly to the left, giving people the impression that their heart is on the left side of their chest. The heart is divided into two halves, which are further divided into four chambers: the left atrium and ventricle, and the right atrium and ventricle. Each chamber is separated from the next by a valve, and it is the closure of these valves that produces the "lubb-dubb" sound so familiar to us. (see Fig. 1 - The Structure of the Heart) Like any other organ in the body, the heart needs a supply of blood and oxygen, and the coronary arteries supply them. There are two main coronary arteries, the left coronary artery and the right coronary artery. They branch off the main artery of the body, the aorta. The right coronary artery circles the right side and goes to the back of the heart. The left coronary artery further divides into the left circumflex and the left anterior descending artery. These two left arteries feed the front and the left side of the heart. The division of the left coronary artery is the reason why doctors usually refer to three main coronary arteries. (Fig. 2 - Coronary Arteries)
SYMPTOMS OF CORONARY HEART DISEASE
There are three main symptoms of coronary heart disease: heart attack, sudden death, and angina.
Heart Attack - A heart attack occurs when a blood clot suddenly and completely blocks a diseased coronary artery, resulting in the death of the heart muscle cells supplied by that artery. Coronary and coronary thrombosis2 are terms that can refer to a heart attack. Another term, acute myocardial infarction2, means death of heart muscle due to an inadequate blood supply.
Sudden Death - Sudden death occurs due to cardiac arrest. Cardiac arrest may be the first symptom of coronary artery disease and may occur without any symptoms or warning signs. Other causes of sudden death include drowning, suffocation, electrocution, drug overdose, trauma (such as automobile accidents), and stroke. Drowning, suffocation, and drug overdose usually cause respiratory arrest, which in turn causes cardiac arrest. Trauma may cause sudden death by severe injury to the heart or brain, or by severe blood loss. Stroke causes damage to the brain which can cause respiratory arrest and/or cardiac arrest.
Angina - People with coronary artery disease, whether or not they have had a heart attack, may experience intermittent chest pain, pressure, or discomfort. This condition is known as angina pectoris. It occurs when the narrowing of the coronary arteries temporarily prevents an adequate supply of blood and oxygen from meeting the demands of the working heart muscle.
ANGINA PECTORIS
Angina Pectoris (from angina meaning strangling, and pectoris meaning breast) is commonly known simply as angina and means pain in the chest. The term "angina" was first used during a lecture in 1768 by Dr. William Heberden. The word was not intended to indicate "pain," but rather "strangling," with a secondary sensation of fear. Victims suffering from angina may experience pressure, discomfort, or a squeezing sensation in the centre of the chest behind the breastbone. The pain may radiate to the arms, the neck, even the upper back, and the pain may come and go. It occurs when the heart is not receiving enough oxygen to meet an increased demand. Angina, as mentioned before, is only temporary, and it does not cause any permanent damage to the heart muscle. The underlying coronary heart disease, however, continues to progress unless action is taken to prevent it from becoming worse.
Signs and Symptoms - Angina does not necessarily involve pain. The feeling varies from individual to individual. In fact, some people describe it as "chest pressure," "chest distress," "heaviness," "burning feeling," "constriction," "tightness," and many more. A person with angina may feel discomfort that fits one or several of the following descriptions:
- Mild, vague discomfort in the centre of the chest, which may radiate to the left shoulder or arm
- Dull ache, pins and needles, heaviness or pains in the arms, usually more severe in the left arm
- Pain that feels like severe indigestion
- Heaviness, tightness, fullness, dull ache, intense pressure, or a burning, vice-like, constricting, squeezing sensation in the chest, throat or upper abdomen
- Extreme tiredness, exhaustion or a feeling of collapse
- Shortness of breath, choking sensation
- A sense of foreboding or impending death accompanying chest discomfort
- Pains in the jaw, gums, teeth, throat or ear lobe
- Pains in the back or between the shoulder blades
Angina can be so severe that a person may feel frightened, or so mild that it might be ignored. Angina attacks are usually short, from one or two minutes to a maximum of about four to five. The pain usually goes away with rest, within a couple of minutes, or ten minutes at the most.
Different Forms of Angina - There are several known forms of angina. Brief pain that comes on with exertion and leaves fairly quickly with rest is known as stable angina. When angina pain occurs during rest, it is called unstable angina. The symptoms are usually severe and the coronary arteries are badly narrowed. If a person suffers from unstable angina, there is a higher risk for that person to develop a heart attack. The pain may come up to 20 times a day, and it can wake a person up, especially after a disturbing dream. Another type of angina is called atypical or variant angina. In this type of angina, pain occurs only when a person is resting or asleep rather than on exertion. It is thought to be the result of coronary artery spasm, a sort of cramp that narrows the arteries.
Causes of Angina - The main cause of angina is the narrowing of the coronary arteries. In a healthy person, the inner walls of the coronary arteries are smooth and elastic, allowing them to constrict and expand. This flexibility permits varying amounts of oxygenated blood, appropriate to the demand at the time, to flow through the coronary arteries. As a person grows older, fatty deposits accumulate on the artery walls, especially if the linings of the arteries are damaged by cigarette smoking or high blood pressure. As more and more fatty material builds up, it forms plaques which cause the arteries to narrow, thus restricting the flow of blood. This process is known as atherosclerosis. However, angina usually does not occur until about two-thirds of the artery's diameter is blocked. Besides atherosclerosis, there are other heart conditions that starve the heart of oxygen and also cause angina. The nerve factor - The arteries are supplied with nerves, which allow them to be controlled directly by the brain, especially the hypothalamus - an area at the centre of the brain which regulates the emotions. The brain controls the expanding and narrowing of the arteries when necessary. The pressures of modern life - aggression, hostility, never-ending deadlines, remorseless competition, unrest, insecurity and so on - can trigger this control mechanism.
When you become emotional, the chemicals that are released, such as adrenaline, noradrenaline, and serotonin, can cause a further constriction of the coronary arteries. The pituitary gland, a small gland at the base of the brain under the control of the hypothalamus, can signal the adrenal glands to increase the production of stress hormones such as cortisol and adrenaline even further. Coronary spasm - Sudden constrictions of the muscle layer in an artery can cause platelets to stick together, temporarily restricting the flow of blood. This is known as coronary spasm. Platelets are minute particles in the blood which play an essential role both in the clotting process and in repairing any damaged arterial walls. They tend to clump together more easily when the blood is full of chemicals released during arousal, such as cortisol and others. Coronary spasm causes the platelets to stick together and to the wall of the artery, while substances released by the platelets as they stick together further constrict the blood vessels. If the artery is already narrowed, this can have a devastating effect, as it drastically reduces the blood flow. (Fig. 3 - Spasm in a coronary artery) When people are very tense, they usually overbreathe or hold their breath altogether. Shallow, irregular but rapid breathing washes carbon dioxide out of the system and the blood becomes over-oxygenated. One might think that the more oxygen in the blood the better, but such blood actually does not give up its oxygen as easily, and therefore the amount of oxygen available to the heart is reduced. Carbon dioxide is present in the blood in the form of carbonic acid; when there is a loss of carbonic acid, the blood becomes more basic, or alkaline, which leads to spasm of blood vessels, almost certainly in the brain but also in the heart.
ATHEROSCLEROSIS

The coronary arteries may be clogged with atherosclerotic plaques, narrowing their diameter. Plaques are usually collections of connective tissue, fats, and smooth muscle cells. The plaques project into the lumen, the passageway of the artery, and interfere with the flow of blood. In a normal artery, the smooth muscle cells are in the middle layer of the arterial wall; in atherosclerosis they migrate into the inner layer. The reason behind their migration could help explain how atherosclerosis arises.

Two theories have been developed for the cause of atherosclerosis. The first was suggested by German pathologist Rudolf Virchow over 100 years ago. He proposed that the passage of fatty material into the arterial wall is the initial cause of atherosclerosis. The fatty material, especially cholesterol, acts as an irritant, and the arterial wall responds with an outpouring of cells, creating the atherosclerotic plaque. The second theory was developed by Austrian pathologist Karl von Rokitansky in 1852. He suggested that atherosclerotic plaques are the aftereffects of blood-clot organization (thrombosis). The clot adheres to the intima and is gradually converted to a mass of tissue, which evolves into a plaque. There is evidence to support the latter theory. Platelets and fibrin (a protein, the final product of thrombosis) are often found in atherosclerotic plaques, along with cholesterol crystals and cells that are rich in lipid. The evidence suggests that thrombosis may play a role in atherosclerosis and in the development of the more complicated atherosclerotic plaque. Though thrombosis may be important in initiating the plaque, an elevated blood lipid level may accelerate arterial narrowing.

Plaque

Inside the plaque is a yellow, porridge-like substance consisting of blood lipids, cholesterol and triglycerides. These lipids are found in the bloodstream, where they combine with specific proteins to form lipoproteins. All lipoprotein particles contain cholesterol, triglycerides, phospholipids, and proteins, but the proportions vary in different particles.
Lipoproteins

Lipoproteins vary in size. The largest are called chylomicra, and consist mostly of triglycerides. Next in size are the pre-beta lipoproteins, then the beta lipoproteins. As their size decreases, so does their concentration of triglycerides, but the smaller they are, the more cholesterol they contain. Pre-beta lipoproteins are also known as very low density lipoproteins (VLDL), and beta lipoproteins are also called low density lipoproteins (LDL); the latter are the most significant in the development of atheroma. The smallest lipoprotein particles, the alpha lipoproteins, contain a low concentration of cholesterol and triglycerides but a high level of protein, and are also known as high density lipoproteins (HDL). They are thought to be protective against the development of atherosclerotic plaque; the cholesterol they carry is transported to the liver rather than to the blood vessels.

Lipoproteins and Atheroma

The theory is that lipoproteins pass between the lining cells of the arteries and some of them accumulate underneath. All except the chylomicra, which are too big, have a chance to accumulate. The protein in the lipoproteins is broken down by enzymes, leaving behind the cholesterol and triglycerides. These fats are trapped and set up a small inflammatory reaction. The alpha particles do not react with the enzymes and are returned to the circulation.
RISK FACTORS

There are several risk factors that contribute to the development of atherosclerosis and angina: family history, diabetes, hypertension, cholesterol, and smoking.

Family History

We all carry approximately 50 genes that affect the function and structure of the heart and blood vessels. Genetics can determine one's risk of having heart disease, and there are many cases today where heart disease runs in a family for many generations.

Diabetes

Diabetics are at least twice as likely to develop angina as nondiabetics, and the risk is higher in women than in men. Diabetes causes metabolic injury to the lining of arteries; as a result, the tiny blood vessels that nourish the walls of medium-size arteries throughout the body, including the coronary arteries, become defective. These microscopic vessels become blocked, impeding the delivery of blood to the lining of the larger arteries and causing them to deteriorate, and atherosclerosis results.

Hypertension

High blood pressure directly injures the artery lining by several mechanisms. The increased pressure compresses the tiny vessels that feed the artery wall, causing structural changes in these tiny arteries. Microscopic fracture lines then develop in the arterial wall. The cells lining the arteries are compressed and injured, and can no longer act as an adequate barrier to cholesterol and other substances collecting in the inner walls of the blood vessels.

Cholesterol

Cholesterol has become one of the most discussed health issues of the last decade. Reducing cholesterol intake can directly decrease one's risk of developing heart disease, and people today are more conscious of what they eat and how much cholesterol their foods contain. Cholesterol contributes to atherosclerosis by progressively narrowing the arteries and reducing blood flow. The build-up of fatty deposits actually begins at an early age, and the process progresses slowly; by the time a person reaches middle age, a high cholesterol level can be expected.
Smoking

It has been proven that about the only thing smoking does is shorten a person's life. Despite all the warnings by the surgeon general, people still manage to find an excuse not to quit. Cigarette smoke contains carbon monoxide, radioactive polonium, nicotine, arsenious oxide, benzopyrene, and levels of radon and molybdenum that are TWENTY times the allowable limit for ambient factory air. The two agents that have the most significant effect on the cardiovascular system are carbon monoxide and nicotine. Nicotine has no direct effect on the heart or the blood vessels, but it stimulates the nerves supplying these structures and causes the secretion of adrenaline. The increase in adrenaline and noradrenaline raises blood pressure and heart rate by about 10% for an hour per cigarette. In simpler terms, nicotine causes the heart to beat more vigorously. Carbon monoxide, on the other hand, poisons the normal transport systems of the cell membranes lining the coronary arteries. This protective lining breaks down, exposing the undersurface to the ravages of the passing blood, with all its clotting factors as well as cholesterol.

Multiple Risk Factors

The five major risk factors described above do more than just add to one another. There is a virtual multiplication effect in victims with more than one risk factor. (Chart: Risk Factors)
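To see what this multiplication effect looks like in numbers, take the illustrative figures from the risk-factor chart at the end of this essay, where the average risk is set at 100: no risk factors scores 77, cigarettes alone 120, cigarettes plus high cholesterol 236, and cigarettes, cholesterol and high blood pressure together 384. Measured against the baseline of 77:

  120 / 77 = about 1.6 times the baseline risk (one factor)
  236 / 77 = about 3.1 times the baseline risk (two factors)
  384 / 77 = about 5.0 times the baseline risk (three factors)

Each added factor roughly doubles the relative risk instead of simply adding a fixed amount, which is what is meant by a multiplication effect.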
DIAGNOSIS

It is very important for patients to describe their symptoms to their doctors as honestly and accurately as possible. The doctor will need to know about other symptoms that may distinguish angina from other conditions, such as esophagitis, pleurisy, costochondritis, pericarditis, a broken rib, a pinched nerve, a ruptured aorta, a lung tumour, gallstones, ulcers, pancreatitis, a collapsed lung, or simple nervousness. Each of these is capable of causing chest pain. A patient may be given a physical examination, which includes taking the pulse and blood pressure, listening to the heart and lungs with a stethoscope, and checking weight. Usually an experienced cardiologist can distinguish a cardiac from a noncardiac situation within minutes.

There are also routine tests, such as urine and blood tests, which can be used to determine body fat levels. Blood tests can also check for:

Anemia - where the level of haemoglobin is too low, which can restrict the supply of oxygen to the heart.

Kidney function - levels of various salts and waste products, mainly urea and creatinine, in the blood. Normally these levels should be quite low.

Other factors can also be tested, such as salt level and blood fat and sugar levels. A chest x-ray provides the doctor with information about the size of the heart. Like any other muscle in the body, if the heart works too hard for a period of time, it develops, or enlarges. An electrocardiogram (ECG) is a tracing of the electrical activity of the heart. As the heart beats and relaxes, the signals of the heart's electrical activity are picked up and the pattern is recorded. The pattern consists of a series of alternating plateaus and sharp peaks. An ECG can indicate whether high blood pressure has produced any strain on the heart. It can tell if the heart is beating regularly or irregularly, fast or slow. It can also pick up unnoticed heart attacks. A variation of the ECG is the vectorcardiogram (VCG). It works exactly like the ECG except that the electrical activity is shown in the form of loops, or vectors, which can be watched on a screen, printed on paper, or photographed. What makes the VCG superior to the ECG is that it provides a three-dimensional view of a single heartbeat.
DRUG TREATMENT

Angina patients are usually prescribed at least one drug. Some of the drugs prescribed improve blood flow, while others reduce the strain on the heart. Commonly prescribed drugs are nitrates, beta-blockers, and calcium antagonists. It should be noted that drugs for angina only relieve the pain; they do nothing to correct the underlying disorder.

Nitrates

Nitroglycerine, which is the basis of dynamite, relaxes the smooth muscle fibres of the blood vessels, allowing the arteries to dilate. Nitrates have a tendency to produce flushing and headaches because the arteries in the head and other parts of the body also dilate. Glyceryl trinitrate is a short-acting drug in the form of small tablets. It is taken under the tongue for maximum and rapid absorption, since that area is lined with capillaries. It usually relieves the pain within a minute or two. One of the drawbacks of trinitrates is that they deteriorate if exposed to sunlight for too long. Trinitrates also come in the form of an ointment or a "transdermal" sticky patch which can be applied to the skin. Dinitrates and mononitrates are used for the prevention of angina attacks rather than as pain relievers. They are slower acting than trinitrates, but they have a more prolonged effect. They have to be taken regularly, usually three to four times a day. Dinitrates are more common than trinitrates or tetranitrates.

Beta-blockers

Beta-blockers are used to prevent angina attacks. They reduce the work of the heart by regulating the heart beat as well as blood pressure; the amount of oxygen required is thereby reduced. These drugs block the effects of the stress hormones adrenaline and noradrenaline at sites called beta receptors in the heart and blood vessels. These hormones increase both blood pressure and heart rate. Other sites affected by these hormones are known as alpha receptors.
There are side effects, however, to using beta-blockers. Further reduction in the pumping action may lead to heart failure if the heart is already strained by heart disease. Hands and feet get cold because of the constriction of peripheral vessels. Beta-blockers can sometimes pass into the brain fluids, causing vivid dreams, sleep disturbance, and depression. There is also a possibility of developing skin rashes and dry eyes. Some beta-blockers raise the level of blood cholesterol and triglycerides.

Calcium antagonists

These drugs help prevent angina by mopping up calcium in the artery walls. The arteries then become relaxed and dilated, reducing the resistance to blood flow, and the heart receives more blood and oxygen. They also help the heart muscle use the oxygen and nutrients in the blood more efficiently. In larger doses they also help lower the blood pressure. The drawback of calcium antagonists is that they tend to cause dizziness and fluid retention, resulting in swollen ankles.

Other Medications

New drugs are being developed constantly. Pexid, for example, is useful if other drugs fail in severe angina attacks. However, it produces more side effects than the others, such as pins and needles and numbness in the limbs, muscle weakness, and liver damage. It may also precipitate diabetes and damage the retina.
SURGERY

When medications or other means of treatment are unable to control the pain of angina attacks, surgery is considered. There are two types of surgical operation available: coronary bypass and angioplasty. Bypass surgery is the more common, while angioplasty is relatively new and is a more minor operation. Surgery is only a "last resort" to provide relief and should not be viewed as a permanent cure for the underlying disease, which can only be controlled by changing one's lifestyle.

Coronary Bypass Surgery

Bypass surgery involves extracting a vein from another part of the body, usually the leg, and using it to construct a detour around the diseased coronary artery. This procedure restores the blood flow to the heart muscle. Although it may sound risky, the death rate is actually below 3 per cent. The risk is higher, however, if the disease is widespread and if the heart muscle is already weakened. If the grafted vessel becomes blocked, a heart attack may occur after the operation. The number of bypasses depends on the number of coronary arteries affected. Coronary artery disease may affect one, two, or all three arteries. If more than one artery is affected, then several grafts will have to be carried out during the operation. About 20 per cent of the patients considered for surgery have only one diseased vessel. In 50 per cent of the patients there are two affected arteries, and in 30 per cent the disease strikes all three arteries. These patients are said to be suffering from triple vessel disease and require a triple bypass. Triple vessel disease and disease of the left main coronary artery before it divides into two branches are the most serious conditions. The operation itself involves making an incision down the length of the breastbone in order to expose the heart. The patient is connected to a heart-lung machine, which takes over the function of the heart and lungs during the operation and keeps the patient alive. At the same time, a small incision is made on the leg to remove a section of vein.
Once the section of vein has been removed, it is attached to the heart. One end of the vein is sewn to the aorta, while the other end is sewn into the affected coronary artery just beyond the diseased segment. The grafted vein now becomes the new artery through which the blood can flow freely beyond the obstruction. The original artery is thus bypassed. The whole operation takes about four to five hours, and may take longer if there is more than one bypass involved. After the operation, the patient is sent to the Intensive Care Unit (ICU) for recovery. The angina pain is usually relieved or controlled, partially or completely, by the operation. However, the operation does not cure the underlying disease, so the effects may begin to diminish after a while, which may be anywhere from a few months to several years. The only way patients can prevent this from happening is to change their lifestyles.

Angioplasty

This operation is a relatively new procedure, known in full as transluminal balloon coronary angioplasty. It entails "squashing" the atherosclerotic plaque with balloons. A very thin balloon catheter is inserted into an artery in the arm or the leg of a patient under local anaesthetic. The balloon catheter is guided under x-ray into the narrowed part of the coronary artery. Once there, the balloon is inflated with fluid and the fatty deposits are squashed against the artery walls. The balloon is then deflated and drawn out of the body. This technique is a much simpler and more economical alternative to bypass surgery. The procedure itself requires less time and the patient remains in the hospital for only a few days afterward. Exactly how long the operation takes depends on where and in how many places the artery is narrowed. It is most suitable when the disease is limited to the left anterior descending artery, but sometimes the plaques are simply too hard to be squashed, in which case a bypass might be necessary.
SELF-HELP

The only way patients can keep the condition of their heart from deteriorating any further is to change their lifestyles. Although drugs and surgery exist, if the heart is continuously exposed to pressure and strained any further, there will come a day when nothing works, and all that remains is a one-way ticket to heaven. The following is some advice on how people can change the way they live and enjoy a lifetime with a healthy heart once more.

Work

A person should limit exertion to below the point where angina might occur. This varies from person to person; some people can do just as much work as they did before developing angina, but only at a slower pace. Try to delegate more, reassess your priorities, and learn to pace yourself. If the rate of work is uncontrollable, think about changing jobs.

Exercise

Everyone should exercise regularly within their limits. This may sound contradictory: on the one hand, you are told to limit your exertion and, on the other, you are told to exercise. It is actually better if one exercises regularly within his or her limits. Exercises can be grouped into two categories: isotonic and isometric. People suffering from angina should limit themselves to isotonic exercises, in which one group of muscles relaxes while another group contracts. Examples of this type of exercise include walking, leisurely swimming, and yoga; some harder examples are cycling and jogging.

Weight Loss

The more weight there is on the body, the more work the heart has to do. Losing unnecessary weight will reduce the amount of strain on the heart, and likely lower blood pressure as well. One can lose weight by simply eating less than one's normal intake, but keep in mind that the major goal is to cut down on fatty and sugary foods, which are low in nutrients and high in calories.
Diet

What you eat can have a direct effect on the kind of condition you are in. To stay fit and healthy, eat fewer animal fats and foods that are high in cholesterol. These include fatty meat, lard, suet, butter, cream and hard cheese, eggs, prawns, offal and so on. The amount of salt intake should also be reduced. Eat more food containing a high amount of fibre, such as wholegrain cereal products, pulses, wholemeal bread, and fresh fruits and vegetables.

Alcohol, tea and coffee

Alcohol in moderation does no harm to the body, but it does contain calories and may slow weight loss. People can drink as much mineral water, fruit juice and ordinary or herb tea as they wish, but no more than two cups of coffee per day.

Cigarettes

It has been medically proven that cigarettes do the body no good at all. Smoking makes the heart beat faster, constricts the blood vessels, and generally increases the amount of work the heart has to do. The only right thing to do is to quit; it will not be easy, but it is worth the effort.

Stress

Stress can actually be classified as a major risk factor, and it is one neglected by most people. Try to avoid the heated arguments and emotional situations that increase blood pressure and stimulate the release of stress hormones. If they are unavoidable, try to anticipate them and prevent an attack by sucking an angina tablet beforehand.

Relaxation

Help your body to relax when feeling tense by sitting or lying down quietly. Close your eyes, breathe slowly and deeply through the nose, and make each exhalation long, soft and steady. An adequate amount of sleep each night is always important.

Sexual activity

It is true that sexual intercourse may bring on an angina attack, but the chronic frustration of abstinence may cause more tension. If intercourse precipitates angina, either suck on an angina tablet a few minutes beforehand or let your partner assume the more active role.
TYPE-A BEHAVIOUR PATTERN

There has been a marked increase in coronary heart disease in most industrialized societies in the twentieth century. This may have resulted, in part, because these societies reward those who perform more quickly, aggressively, and competitively. Type-A individuals of both sexes are considered to have the following characteristics: (1) an intense, sustained drive to achieve self-selected but often poorly defined goals; (2) a profound inclination and eagerness to compete; (3) a persistent desire for recognition and advancement; (4) a continuous involvement in multiple and diverse functions subject to time restrictions; (5) a habitual propensity to accelerate the rate of execution of most physical and mental functions; (6) extraordinary mental and physical alertness; and (7) aggressive and hostile feelings.

The enhanced competitiveness of type-A persons leads to an aggressive and ambitious achievement orientation, increased mental and physical alertness, muscular tension, and an explosive and rapid style of speech. A sense of time urgency leads to restlessness, impatience, and acceleration of most activities. This in turn may result in irritability and an enhanced potential for type-A hostility and anger. Type-A individuals are thus at an increased risk of developing coronary heart disease. The type-A behaviour pattern is defined as an action-emotion complex involving10: (1) behavioural dispositions (e.g., ambitiousness, aggressiveness, competitiveness, and impatience); (2) specific behaviours (e.g., muscle tenseness, alertness, rapid and emphatic speech stylistics, and an accelerated pace of most activities); and (3) emotional responses (e.g., irritation, hostility, and anger).

By comparison, type-A persons are more likely to develop coronary heart disease than type-B individuals, whose manner and behaviour are relaxed. This risk, however, is independent of the other risk factors. Not all physicians are convinced that the type-A behaviour pattern is a risk factor, and a great deal of research is currently being done by experts on this topic.
THE CARDIAC REHAB PROGRAM

This program at the Credit Valley Hospital is designed to help patients with coronary artery disease lower their overall risk and to prevent any further attacks. It provides rehabilitation for patients who are likely to have heart attacks, have had heart attacks, or have had recent surgery. Most patients come to this one-hour class two nights a week, which takes place outside the physiotherapy department. The class is run by volunteers and is usually supervised by a kinesiologist. The patients come in a little before 6:00 pm and have their blood pressure taken. At six o'clock, volunteers take the patients through a fifteen-minute warm-up. After the warm-up, the patients go on with their exercise for half an hour. The patients can choose walking, rowing machines, stationary bicycles, or an arm ergometer, or a combination of two or more, as their exercise. Each patient is reassessed once a month in order to keep track of their progress. Volunteers ask the patient being reassessed a series of questions, which include frequency of exercise, type of exercise program, problems with exercise, and so on. About 6:30, when the patients are near the peak of their exercise, the ones being reassessed have their pulse and blood pressure measured, to see if they have reached their "target heart rate" and to see if their blood pressure goes up as expected. At about 6:45, the patients end their exercise and the cool-down begins. Cool-down is similar to warm-up, except that it helps the patients relax their hearts, as well as their bodies, after a half-hour workout. After cool-down most patients have their blood pressure taken again just to make sure nothing unusual has occurred.
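For the "target heart rate" mentioned above, a common rule of thumb (offered here only as an illustration; programs such as this one normally set an individual target from a supervised exercise test) is a percentage of the age-predicted maximum heart rate of roughly 220 minus the person's age. For a 60-year-old patient, for example:

  estimated maximum heart rate = 220 - 60 = 160 beats per minute
  target range at 60 to 75 per cent of maximum = about 96 to 120 beats per minute

Cardiac rehabilitation classes often aim for the lower part of such a range, and any prescribed target should come from the supervising staff rather than from this generic formula.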
CONCLUSION

Angina pectoris is not a disease which affects a person's heart permanently, but to encounter angina pain means something is wrong. The pain is the heart's distress signal, a built-in warning device indicating that the heart has reached its maximum workload. Upon experiencing angina, precautions should be taken. A person's lifestyle plays a major role in determining the chance of developing heart disease. If people do not learn how to prevent it themselves, coronary artery disease will remain the single biggest killer in the world, by far.
DIAGRAMS AND CHARTS Fig. 1 The Structure of the Heart
Fig. 2 Coronary arteries Fig. 3 Spasm in a coronary artery
RISK FACTORS

Average Risk = 100

  None                                                77
  Cigarettes                                         120
  Cigarettes and cholesterol                         236
  Cigarettes, cholesterol, and high blood pressure   384

For purpose of illustration, this chart uses as abnormal a blood pressure level of 180 systolic and a very high cholesterol level of 310 in a 45-year-old man.

CORONARY HEART DISEASE AND MULTIPLE FACTORS

  None                                                    low
  Cigarettes                                              1 1/2 times
  High cholesterol and cigarettes                         3 times
  High blood pressure, high cholesterol and cigarettes    5 times
BIBLIOGRAPHY 1. Amsterdam, Ezra A. and Ann M. Holms. TAKE CARE OF YOUR HEART, New York, Facts on File, 1984. 2. Houston, B. Kent and C.R. Snyder. TYPE A BEHAVIOUR PATTERN, John Wiley & Sons, Inc., 1988. 3. Pantano, James A. LIVING WITH ANGINA, New York, Harper & Row, 1990. 4. Patel, Chandra. FIGHTING HEART DISEASE, Toronto, Macmillan, 1988. 5. Shillingford, J.P. CORONARY HEART DISEASE: THE FACTS, Oxford, Oxford University Press, 1982. 6. The Heart and Stroke Foundation of Canada. CARDIOPULMONARY RESUSCITATION - BASIC RESCUER MANUAL, Canada, 1987. 7. Tiger, Steven. HEART DISEASE, New York, Julian Messner, 1986. ------------------------------------------------------------------------------
f:\12000 essays\sciences (985)\Genetics\Hemophilia.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the human body, each cell contains 23 pairs of chromosomes, one of each pair inherited through the egg from the mother and the other inherited through the sperm of the father. Of these chromosomes, those that determine sex are X and Y. Females have XX and males have XY. In addition to the information on sex, 'the X chromosomes carry determinants for a number of other features of the body including the levels of factor VIII and factor IX.'1 If the genetic information determining the factor VIII and IX level is defective, haemophilia results. When this happens, the protein factors needed for normal blood clotting are affected. In males, the single X chromosome that is affected cannot compensate for the lack, and hence the defect will show. In females, however, only one of the two chromosomes will usually be abnormal (unless she is unlucky enough to inherit haemophilia from both sides of the family, which is rare).2 The other chromosome is likely to be normal, and she can therefore compensate for the defect.

There are two types of haemophilia, haemophilia A and B. Haemophilia A is a hereditary disorder in which bleeding is due to deficiency of the coagulation factor VIII (VIII:C).3 In most cases this coagulant protein is reduced, but in a small number of cases the protein is present by immunoassay but defective.4 Haemophilia A is the most common severe bleeding disorder, and approximately 1 in 10,000 males is affected. The most common types of bleeding are into the joints and muscles. Haemophilia is severe if the factor VIII:C level is less than 1%, moderate if the level is 1-5%, and mild if the level is above 5%.5 Those with mild haemophilia bleed only in response to major trauma or surgery. Patients with severe haemophilia can bleed in response to relatively mild trauma and will bleed spontaneously.

In haemophiliacs, the levels of factor VIII:C are reduced. If the plasma from a haemophiliac person is mixed with that of a normal person, the partial thromboplastin time (PTT) should become normal. Failure of the PTT to become normal is automatically diagnostic of the presence of a factor VIII inhibitor. The standard treatment of haemophiliacs is primarily the infusion of factor VIII concentrates, now heat-treated to reduce the chances of transmission of AIDS.6 In the case of minor bleeding, the factor VIII:C level need only be raised to 25% with one infusion. For moderate bleeding, 'it is adequate to raise the level initially to 50% and maintain the level at greater than 25% with repeated infusion for 2-3 days. When major surgery is to be performed, one raises the factor VIII:C level to 100% and then maintains the factor level at greater than 50% continuously for 10-14 days.'7

Haemophilia B, the other type of haemophilia, results from a deficiency of the coagulation factor IX and is also known as Christmas disease. This sex-linked disease is caused by a reduced amount of factor IX. Unlike haemophilia A, a larger percentage of cases are due to an abnormally functioning molecule. Factor IX deficiency is about 1/7 as common as factor VIII deficiency, and it is managed with factor IX concentrates. Unlike factor VIII concentrates, which have a half-life of 12 hours, the half-life of factor IX concentrates is 18 hours. In addition, factor IX concentrates contain a number of other proteins, including activated coagulating factors that contribute to a risk of thrombosis.
Therefore, more care is needed in haemophilia B in deciding how much concentrate should be used. The prognosis of haemophiliac patients has been transformed by the availability of factor VIII and factor IX replacement. The limiting factors that remain include disability from recurrent joint bleeding and viral infections such as hepatitis B from recurrent transfusion.8

Since most haemophiliacs are male and only their mother can pass the deficient gene to them, a very important issue for the families of haemophiliacs now is identifying which females are carriers. One way to determine this is to estimate the amount of factor VIII and IX present in the woman. However, while a low level confirms carrier status, a normal level does not exclude it. In addition, factor VIII and IX blood levels are known to fluctuate and will increase with stress and pregnancy. As a result, only a prediction of carrier status can be given with this method. Another method to determine carrier status in a woman is to look directly at the DNA from a small blood sample of several members of the family, including the haemophiliacs. In Canada, modern procedures include Chorionic Villous Sampling (CVS), which allows the DNA to be analyzed for markers of haemophilia at 9-11 weeks of pregnancy. (Fig. 1)9 A small probe is inserted through the neck of the mother's womb or through the abdomen under local anaesthetic, and a tiny sample of the placenta is removed and sent for DNA analysis. Since this process can be done at 9-11 weeks, the pregnancy is still in its relatively early stages, and a decision by the mother (and father) to terminate the pregnancy will not be as physically or emotionally demanding on the mother as it would be in the late stages of the pregnancy.

Going back to the haemophiliacs, many have become seropositive for HIV infections transmitted through factor VIII and IX concentrates, and many have developed AIDS. In Canada, the two drugs currently undergoing clinical testing for treatment of HIV disease are AZT and DDI. For AZT, the major complication is suppression of normal bone marrow activity. This results in low red and white blood cell counts; the former can lead to severe fatigue and the latter to susceptibility to infections.10 DDI is provided as a powder, which must be reconstituted with water immediately prior to use. The most common adverse effect so far is weakness in the hands and legs. However, it appears that DDI is free of the bone marrow toxicity.11 AZT and DDI both represent the first generation of anti-retroviral drugs, and it is the hope of many people that they will be followed by less toxic and more effective drugs.

As can be seen, haemophilia is a sex-linked disease involving the inheritance of a recessive, deficient gene. It is mostly found in males and since every male has a Y chromosome
f:\12000 essays\sciences (985)\Genetics\Hologram.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Holograms Toss a pebble in a pond -see the ripples? Now drop two pebbles close together. Look at what happens when the two sets of waves combine -you get a new wave! When a crest and a trough meet, they cancel out and the water goes flat. When two crests meet, they produce one, bigger crest. When two troughs collide, they make a single, deeper trough. Believe it or not, you've just found a key to understanding how a hologram works. But what do waves in a pond have to do with those amazing three- dimensional pictures? How do waves make a hologram look like the real thing? It all starts with light. Without it, you can't see. And much like the ripples in a pond, light travels in waves. When you look at, say, an apple, what you really see are the waves of light reflected from it. Your two eyes each see a slightly different view of the apple. These different views tell you about the apple's depth -its form and where it sits in relation to other objects. Your brain processes this information so that you see the apple, and the rest of the world, in 3-D. You can look around objects, too -if the apple is blocking the view of an orange behind it, you can just move your head to one side. The apple seems to "move" out of the way so you can see the orange or even the back of the apple. If that seems a bit obvious, just try looking behind something in a regular photograph! You can't, because the photograph can't reproduce the infinitely complicated waves of light reflected by objects; the lens of a camera can only focus those waves into a flat, 2-D image. But a hologram can capture a 3-D image so lifelike that you can look around the image of the apple to an orange in the background -and it's all thanks to the special kind of light waves produced by a laser. "Normal" white light from the sun or a lightbulb is a combination of every colour of light in the spectrum -a mush of different waves that's useless for holograms. But a laser shines light in a thin, intense beam that's just one colour. That means laser light waves are uniform and in step. When two laser beams intersect, like two sets of ripples meeting in a pond, they produce a single new wave pattern: the hologram. Here's how it happens: Light coming from a laser is split into two beams, called the object beam and the reference beam. Spread by lenses and bounced off a mirror, the object beam hits the apple. Light waves reflect from the apple towards a photographic film. The reference beam heads straight to the film without hitting the apple. The two sets of waves meet and create a new wave pattern that hits the film and exposes it. On the film all you can see is a mass of dark and light swirls -it doesn't look like an apple at all! But shine the laser reference beam through the film once more and the pattern of swirls bends the light to re- create the original reflection waves from the apple -exactly. Not all holograms work this way -some use plastics instead of photographic film, others are visible in normal light. But all holograms are created with lasers -and new waves. All Thought Up and No Place to Go Holograms were invented in 1947 by Hungarian scientist Dennis Gabor, but they were ignored for years. Why? Like many great ideas, Gabor's theory about light waves was ahead of its time. The lasers needed to produce clean waves -and thus clean 3-D images -weren't invented until 1960. Gabor coined the name for his photographic technique from holos and gramma, Greek for "the whole message. 
" But for more than a decade, Gabor had only half the words. Gabor's contribution to science was recognized at last in 1971 with a Nobel Prize. He's got a chance for a last laugh, too. A perfect holographic portrait of the late scientist looking up from his desk with a smile could go on fooling viewers into saying hello forever. Actor Laurence Olivier has also achieved that kind of immortality -a hologram of the 80 year-old can be seen these days on the stage in London, in a musical called Time. New Waves When it comes to looking at the future uses of holography, pictures are anything but the whole picture. Here are just a couple of the more unusual possibilities. Consider this: you're in a windowless room in the middle of an office tower, but you're reading by the light of the noonday sun! How can this be? A new invention that incorporates holograms into widow glazings makes it possible. Holograms can bend light to create complex 3- D images, but they can also simply redirect light rays. The window glaze holograms could focus sunlight coming through a window into a narrow beam, funnel it into an air duct with reflective walls above the ceiling and send it down the hall to your windowless cubbyhole. That could cut lighting costs and conserve energy. The holograms could even guide sunlight into the gloomy gaps between city skyscrapers and since they can bend light of different colors in different directions, they could be used to filter out the hot infrared light rays that stream through your car windows to bake you on summer days. Or, how about holding an entire library in the palm of your hand? Holography makes it theoretically possible. Words or pictures could be translated into a code of alternating light and dark spots and stored in an unbelievably tiny space. That's because light waves are very, very skinny. You could lay about 1000 lightwaves side by side across the width of the period at the end of this sentence. One calculation holds that by using holograms, the U. S. Library of Congress could be stored in the space of a sugar cube. For now, holographic data storage remains little more than a fascinating idea because the materials needed to do the job haven't been invented yet. But it's clear that holograms, which author Isaac Asimov called "the greatest advance in imaging since the eye" will continue to make waves in the world of science.
f:\12000 essays\sciences (985)\Genetics\Humpback Whales.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
To look up into the mountains and see the steam rolling from a mountain stream
on a cold winter's morning is a beautiful sight. However, to look out over the horizon and
see the white spray of salt water coming from the blow of a huge hump-back whale is
a much more exciting sight and a whole lot warmer. I lived in the mountains of Colorado
for most of my childhood. The first time I had the opportunity to see the ocean was on a
vacation to California, when I was about 15 years old. It was even better than I had
dreamed it would be. The different animals in the ocean, the color of the water, and the
warm sand between my toes was probably what led me to come to the islands of Hawaii.
When I first saw the hump-back whale I was amazed at their huge size and how they could
breach out of the water so gracefully. It is as if they were trying to play or show off. So
when we were asked to choose a favorite animal, I had no problem deciding on the hump-
back whale.
The hump-back whale gets its name from the distinctive hump in front of the
dorsal fin and from the way it raises its back high above water before diving. They are a
member of the order Cetacea. This order is of aquatic mammals and the hump-back
belongs to the suborder of the Mysticeti. The Mysticeti are the baleen whales which have
three families and several species. The family in which the hump-back belongs is the
Balaenopteridae, the true fin backed whale. The thing that separates this genus from the
other fin-backed whales is the pectoral fins, which grow in lengths of about 5 meters (16.4
feet). This genus is called Megaptera, meaning great wing (Tinker 290). There was a
controversy over the species name in the late nineteenth and early twentieth century. In
1932, Remington Kellogg finally settled the matter with Megaptera novaeangliae
(Cousteau 84). The common English name is the hump-back whale.
The hump-back whale lives in both the Atlantic and the Pacific oceans. Since we
live in the Pacific I'll be discussing the hump-backs of the North Pacific. They migrate
from North to South. In the months of July through September they gather in the
Aleutian Islands, Bering Sea or the Chukchi Sea. They head south for the winter. They
go to one of three areas: (1) Between the Bonin Islands, the Marianas Islands, the
Ryukyu Islands and Taiwan; (2) The Hawaiian Islands, and (3) Along the coast of
Mexico (Tinker 291).
One of the reasons these whales go North is for feeding. They have a short food
chain compared to most mammals. Phytoplankton turns sunlight into energy and this
energy is consumed by zooplankton. The zooplankton and phytoplankton are eaten by
small fish. The whale in turn eats the fish. The chain is complete when waste products or
dead whales decompose. Their feeding period is very short compared with the rest of
the year. They have not been seen feeding in Hawaii. It
seems that they only feed during the summer months up north. During the fasting periods,
in Hawaii, they survive on their blubber. They mix their diet with copepods, euphausiids
(krill), and small fish, primarily herring and capelin. They are considered filter feeders,
using baleen plates to filter out their food. They take huge amounts of water into their
mouth using a gulping method and then when they push the water out, they put their
tongue up so the water must pass through the baleens. The food becomes trapped and
falls toward the rear of the mouth. The two gulping methods hump-back whales use are
lunge feeding and bubble net feeding. Lunge feeding is used when food is abundant. The
whale simply swims through the prey with its mouth open, engulfing the prey. They can
do this vertically, laterally or inverted. This is done toward the surface of the ocean.
Bubble net feeding is used when the prey is less abundant. The whale dives below the
prey and discharges bubbles from its blowhole. As the bubbles ascend they form a net
that disorients the prey. Then the whale swims upward and fills its mouth with the net of
fish and bubbles (Kaufman 55). Hump-backs have ventral grooves in their throat that
expand allowing an enormous amount of water to be gulped. Hump-backs consume
nearly a ton of food in a day's time during their feeding season.
The hump-back whale's stomach consists of three chambers and the duodenal
ampulla, much like a cow's. The three chambers are separate from each other. They have
small and large intestines, a rectum, caecum and an anus. These organs are very similar
and work much the same as in most mammals. The digestive glands of a whale are
somewhat different. They do not have salivary glands that are functional. The liver is
bilobed and the gall bladder is absent. The pancreas however resembles that of most other
mammals (Tinker 63).
Mammals that live in the sea have a constant problem with dehydration. Hump-
backs get water from the food they eat, and during their fasting periods they get it from
their blubber. However, the salinity of the whale's bodily fluids, while much higher than that of land
mammals, is still lower than that of seawater. This creates a problem. They are in
danger of losing too much water. In order to maintain a proper balance the whale passes
large quantities of highly concentrated urine. The kidneys are specialized to do this. The
feces also permit discharge of salt. However, few studies have been done on hump-back's
feces or urine (Kaufman 31).
As humans we can breathe through either our mouth or our nose. This is not the case for
the hump-back whale. The whale can neither inhale nor exhale through its mouth. The
nasal openings of a whale are known as the blowhole. There are two paired openings at
the top of the head. The holes are closed and made water tight by two plugs
(Tinker 65-68).
If you weighed ten elephants that would be the average weight of one hump-back
whale. The male and female whale alike weigh between thirty and fifty tons. This weight
will vary depending on the season. While fasting in Hawaii the weight will be much less.
The calves are born in January and early February as a result of the previous year's
mating. They are born at approximately fourteen feet long and end up as long as sixty-
two and a half feet with an average of fifty feet. The calf, a young hump-back, will drink
one-hundred pounds of milk each day. This milk is very rich compared to domestic
animals. The calf will begin to nurse soon after birth from two nipples located on either
side of the vaginal slit (Cousteau 86). After birth they grow very fast. By March they
more than double their weight and are ready to begin their migration north. They will
wean in about five to seven months from birth.
Whales are not monogamous. Males have been seen romping and playing with
females and it is thought that sometime during this romping and playing mating occurs. It
has never been determined when. Eleven to twelve months later, back in the same
waters, the female gives birth. They do not usually have calves every year, although it is
possible. The birth of twins has never been recorded, though it too is possible. Sexual
maturity is as early as four years old for both sexes. They live for about thirty years but
studies have shown they can live much longer. Using a "wax plug" system, much like the
system of the rings of a tree, one whale was thought to have been fifty-eight years old
before it died (Balcom 15-19). The reproductive organs are located internally. The male's
penis is withdrawn into a slit. An erection of the penis is accomplished by a pair of
muscles, much as in cattle and horses. The female's ovaries produce single-celled
eggs. When the egg is mature it is discharged into the fallopian tubes, a process known as
ovulation. If mating occurs at this time and the egg is fertilized with sperm from the male,
a baby whale is on the way (Kaufman 31-33).
Most mammals usually have five sense organs. The whale only has three. Touch,
which is located in the skin, is the sense that can feel pain, heat, cold and vibration. They
also have feelers called vibrissae. These feelers are very similar to whiskers on a domestic
cat. The vibrissae are located in rows on the end of the lower jaw, on the sides of the
lower jaw and on top of the head. Sight is the sense that allows the whale to see. The
shape of the whale's eyeball tends to make them far-sighted below the surface and near-
sighted above the surface. Since the eyes are located on either side of the head, it is
impossible for their visual fields to overlap; therefore, they do not have depth perception.
Their auditory sense, or hearing, is very important because in the ocean the visibility is
poor. Good hearing is used to help locate food, hear the approach of enemies, and
communicate with each other. They have no external ears; only a slit appears midway
between the eye and the base of the flipper. The senses of smell and taste are not present
as they are in most mammals (Tinker 81-85).
Due to their enormous size, these animals have few predators. Man is their
worst enemy. However, they do have confrontations with other whales. Some of the
defenses used are filling the mouth with water or air so as to bluff the invader into thinking
they are bigger than they are. As a second line of defense they will use the head and fins
as weapons. They also use their huge body as a defense mechanism by positioning
themselves between the invader, such as a boat, and the mother and calf (Kaufman 93-115). A
more subtle defense is countershading, where the top of the whale is dark, which makes it
harder to see from above looking down and the bottom is light so looking up it is hard to
see against the lighter surface of the ocean.
Hump-backs produce a wide range of sounds. Often these sounds are long and
complex, and they are repeated for hours. The first sounds were recorded here in Hawaii in
1952 by O.W. Schreiber on the basis of recordings collected at the U.S. Navy Sound
Fixing and Ranging Station. One whale sang a song for fourteen hours without stopping.
Since singing is done primarily during the mating season it is thought to serve a
reproductive function. It has been shown that only the males sing this song. It may also
attract females, scare away other males, or maintain the distance between singers. Males
and females alike make other sounds which are associated with feeding and socially active
groups (Kaufman 73-77).
The whale's pectoral fins are not used for propulsion but for balance and steering. The
tail, or fluke, is used to move this massive mammal through the water. The muscular caudal
peduncle moves the fluke in an up-and-down direction, which propels the whale through
the water (Tinker 55).
The worldwide population of humpbacks is estimated between ten thousand and
fifteen thousand animals. This count is down from over one hundred and fifty thousand
in the last century (Dietz 39). Man has hunted the whale close to extinction. The good news is
that we have bans against killing whales in most waters. Hopefully we did this in time to
save them from extinction. It would be a true shame if my grandchildren could not enjoy
these wonderful creatures.
f:\12000 essays\sciences (985)\Genetics\Huntingtons disease.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Huntington's Background
Huntington's disease is inherited as an autosomal dominant disease that gives rise to
progressive, selective (localized) neural cell death associated with choreic movements
(uncontrollable movements of the arms, legs, and face) and dementia. It is one of the
more common inherited brain disorders. About 25,000 Americans have it and another
60,000 or so will carry the defective gene and will develop the disorder as they age.
Physical deterioration occurs over a period of 10 to 20 years, usually beginning in a
person's 30's or 40's. The gene is dominant and thus does not skip generations.
Having the gene means a 92 percent chance of getting the disease. The disease is
associated with increases in the length of a CAG triplet repeat present in a gene
called 'huntingtin' located on chromosome 4. The classic signs of Huntington disease
are progressive chorea, rigidity, and dementia, frequently associated with seizures.
Studies & Research
Studies were done to determine if somatic mtDNA (mitochondrial DNA) mutations might
contribute to the neurodegeneration observed in Huntington's disease. Part of the
research was to analyze cerebral deletion levels in the temporal and frontal lobes.
Research hypothesis: HD patients have significantly higher mtDNA deletion levels than
age-matched controls in the frontal and temporal lobes of the cortex. To test the
hypothesis, the amount of mtDNA deletion in the brains of 22 HD patients was examined by serial
dilution-polymerase chain reaction (PCR) and compared the results with mtDNA deletion
levels in 25 age-matched controls.
Brain tissues were taken at autopsy from the 22 symptomatic HD patients: from three
cortical regions (frontal, temporal and occipital lobes) and from the putamen.
Molecular analyses were performed on genomic DNA isolated from 200 mg of frozen brain
regions as described above. The HD diagnosis was confirmed in patients by PCR amplification
of the trinucleotide repeat in the IT 15 gene. One group was screened with primers that
included polymorphism and the other was screened without the polymorphism.
The reaction was heated to 94 degrees C for 4 minutes, followed by 27 cycles of 1 minute at 94
degrees C and 2 minutes at 67 degrees C. The PCR products were
resolved on 8% polyacrylamide gels. The mtDNA deletion levels were quantitated relative
to the total mtDNA levels by the dilution-PCR method. When the percentage of the mtDNA
deletion relative to total mtDNA was used as a marker of mtDNA damage, most regions of
the brain accrued a very small amount of mtDNA damage before age 75. Cortical regions
accrued 1 to 2% deletion levels between ages 80-90, and the putamen accrued up to 12%
of this deletion after age 80. The study presented evidence that HD patients have much
higher mtDNA deletion levels than age-matched controls in the frontal and temporal lobes
of the cortex. Temporal lobe mtDNA deletion levels were 11-fold higher in HD patients
than in controls, whereas the frontal lobe deletion levels were fivefold higher in HD
patients than in controls. There was no statistically significant difference in the
average mtDNA deletion levels between HD patients and controls in the occipital lobe
and the putamen. The increase in mtDNA deletion levels found in HD frontal and
temporal lobes suggests that HD patients have an increased somatic mtDNA mutation rate.
Could the increased rate be a direct consequence of the expanded trinucleotide
repeat of the HD gene, or an indirect one? Whatever the origin of
the deletion, these observations are consistent with the hypothesis that the
accumulation of somatic mtDNA mutations erodes the energy capacity of the brain,
resulting in the neuronal loss and symptoms when energy output declines below tissue
expression thresholds. (Neurology, October 95)
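As a rough illustration of the arithmetic behind the dilution-PCR quantitation described above (a minimal sketch in Python, assuming an endpoint-dilution reading of the gels; the study's exact procedure may differ), the deleted mtDNA can be expressed as a percentage of total mtDNA like this:

def percent_deleted_mtdna(deleted_endpoint, total_endpoint):
    """Estimate deleted mtDNA as a percentage of total mtDNA.

    Each argument is the highest dilution factor at which a PCR product is
    still detectable with primers for the deleted molecule or for total
    mtDNA (e.g. 100_000 means product still seen at a 1:100,000 dilution).
    A higher endpoint implies more starting template of that species.
    """
    return 100.0 * deleted_endpoint / total_endpoint

# Hypothetical example: total mtDNA still amplifies at a 1:100,000 dilution,
# while the deleted species drops out beyond 1:1,000 -> roughly 1% deletion,
# which is in the range the study reports for aged cortex.
print(percent_deleted_mtdna(1_000, 100_000))  # prints 1.0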
Treatments
Researchers have identified a key protein that causes the advancement of Huntington's
after following up on the discovery two years ago of the gene that causes this
disorder. Shortly after the Huntington's gene was identified, researchers found the
protein it produces, a larger than normal molecule they called huntingtin that was
unlike any protein previously identified. What they did not know was what
either the healthy huntingtin protein or its aberrant form does in a cell. Recently, a
team from Johns Hopkins University found a second protein called HAP-1, that attaches
to the huntingtin molecule only in the brain. This second
protein has an interesting characteristic: it binds much more tightly to defective huntingtin
than to the healthy form, and it appears that this tightly bound complex causes damage
to brain cells.
Researchers are hoping to find simple drugs that can weaken this binding, thereby preventing
the disease from progressing any further.
In other Huntington-related research, scientists have found where huntingtin protein is
localized in nerve cells, a step closer to discovering its contribution toward
Huntington's.
A French team reported that they have developed an antibody that attaches itself to the
defective protein in Huntington's and four other inherited diseases. This finding may lead
to identifying the defects in a variety of other unexplained disorders.
The identification of the gene and the huntingtin protein promised to be a major
breakthrough in tracing the causes of Huntington's, but that promise has so far been
delayed. The huntingtin protein is unlike any other known protein, making it
difficult for researchers to guess its role in a healthy cell. However, this has not
stopped researchers from trying to find a possible cure for HD.
Effects on Society
By finding possible drugs to weaken the binding of the HAP-1 protein, researchers can
provide society with a sophisticated, yet quick and easy, way to screen for new
treatments. One of the biggest arguments for genetic testing, even when there isn't any
cure or treatment to offer the patient, is financial planning. If you know that you're
probably going to be disabled and unable to work before reaching 50, you can plan for it.
But what if your income doesn't allow for it? This demonstrates the importance for
continuous research on HD.
Overview of the Two Articles
Both articles concentrate on the damaging effects of HD's protein. Neither doubts
that HD is an inherited mutation. The Neurology article explains how HD patients have much
higher deletion levels than age-matched controls in the frontal and temporal lobes of the
cortex, whereas the article from the Times Medical Writer focuses on a possible treatment
arising from the finding of a second protein, called HAP-1, that binds itself to the
huntingtin molecule only in the brain. Both conclude that HD is a mutation that causes
damage to brain cells later in a person's life.
f:\12000 essays\sciences (985)\Genetics\Hurricanes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hurricanes
==========
Hurricanes get their start over the warm tropical waters of the North Atlantic Ocean near the equator. Most hurricanes appear in late summer or early fall, when sea temperatures are at their highest. The warm water heats the air above it, and updrafts of warm, moist air begin to rise. Day after day the fluffy cumuli form atop the updrafts, but the cloud tops rarely rise higher than about 6,000 feet. At that height in the tropics, there is usually a layer of warm, dry air that acts like an invisible ceiling or lid. Once in a while, something happens in the upper air that destroys this lid. Scientists do not know how this happens, but when it does, it is the first step in the birth of a hurricane.

With the lid off, the warm, moist air rises higher and higher. Heat energy, released as the water vapor in the air condenses, drives the updrafts to heights of 50,000 to 60,000 feet, and the cumuli become towering thunderheads. From outside the storm area, air moves in over the sea surface to replace the air soaring upward in the thunderheads. The air begins swirling around the storm center, for the same reason that air swirls around a tornado center. As this air swirls in over the sea surface, it soaks up more and more water vapor. At the storm center, this new supply of water vapor gets pulled into the thunderhead updrafts, releasing still more energy as the water vapor condenses. This makes the updrafts rise faster, pulling in even larger amounts of air and water vapor from the storm's edges. And as the updrafts speed up, air swirls faster and faster around the storm center. The storm clouds, moving with the swirling air, form a coil. In a few days the hurricane will have grown greatly in size and power.

The swirling winds of the hurricane are shaped like a doughnut. At the center of this giant "doughnut" is a cloudless hole, usually having a radius of 10 miles, through which the blue waters of the ocean can be seen. The hurricane's wind speed near the center ranges from 75 to 150 miles per hour. The winds of a forming hurricane tend to pull away from the center as the wind speed increases, and when the winds move fast enough, the "hole" develops. This hole is the mark of a full-fledged hurricane; it is called the "eye" of the hurricane. Within the eye, all is calm and peaceful. But in the cloud wall surrounding the eye, things are very different.

Although hurricane winds do not blow as fast as tornado winds, a hurricane is far more destructive. That is because tornado winds cover only a small area, usually less than a mile across, while a hurricane's winds may cover an area 60 miles wide out from the center of the eye. Another reason is that tornadoes rarely last as long as an hour, or travel more than 100 miles, whereas a hurricane may rage for a week or more (Hurricane Dorothy, for example). In that time, it may travel thousands of miles over sea and land. At sea, hurricane winds whip up giant waves up to 20 feet high. Such waves can tear freighters and other oceangoing ships in half. Over land, hurricane winds can uproot trees, blow down telephone and power lines, and tear chimneys off rooftops. The air is filled with deadly flying fragments of brick, wood, and glass.
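As a rough, back-of-the-envelope illustration of the energy source described above, the short Python sketch below estimates the heat released when water vapor condenses; the latent-heat value is a standard textbook approximation and the example mass is arbitrary, neither comes from this essay.

# Rough illustration: heat released by condensation, the energy that powers the updrafts.
LATENT_HEAT_OF_CONDENSATION = 2.5e6   # joules released per kilogram of water vapor condensed (approx.)

def condensation_energy(kg_of_vapor_condensed):
    # Heat released, in joules, when the given mass of water vapor condenses into cloud droplets.
    return kg_of_vapor_condensed * LATENT_HEAT_OF_CONDENSATION

# Example: condensing 1,000 kg (one metric ton) of vapor releases about 2.5 billion joules.
print(condensation_energy(1000))   # prints 2500000000.0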
f:\12000 essays\sciences (985)\Genetics\Hypnosis1.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
GUIDE TO HYPNOSIS
HOW TO GUIDE SOMEONE INTO HYPNOSIS: NOTE THAT I SAID GUIDE. YOU CAN NEVER
HYPNOTISE SOMEONE; THEY MUST BE WILLING. OK, THE SUBJECT MUST BE LYING OR
SITTING IN A COMFORTABLE POSITION, RELAXED, AND AT A TIME WHEN THINGS AREN'T
GOING TO BE INTERRUPTED.
TELL THEM THE FOLLOWING, OR SOMETHING CLOSE TO IT, IN A PEACEFUL, MONOTONOUS
TONE (NOT A COMMANDING TONE OF VOICE).
NOTE: LIGHT A CANDLE AND PLACE IT SOMEWHERE IT CAN BE EASILY SEEN.
"TAKE A DEEP BREATH THROUGH YOUR NOSE AND HOLD IT IN FOR A COUNT OF 8. NOW,
THROUGH YOUR MOUTH, EXHALE COMPLETELY AND SLOWLY. CONTINUE BREATHING LONG,
DEEP BREATHS THROUGH YOUR NOSE AND EXHALING THROUGH YOUR MOUTH. TENSE UP
ALL YOUR MUSCLES VERY TIGHT, THEN, COUNTING FROM TEN TO ONE, RELEASE THEM
SLOWLY; YOU WILL FIND THEM VERY RELAXED. NOW, LOOK AT THE CANDLE. AS YOU
LOOK AT IT, WITH EVERY BREATH AND PASSING MOMENT, YOU ARE FEELING
INCREASINGLY MORE AND MORE PEACEFUL AND RELAXED. THE CANDLE'S FLAME IS
PEACEFUL AND BRIGHT. AS YOU LOOK AT IT I WILL COUNT DOWN FROM 100. AS I
COUNT, YOUR EYES WILL BECOME MORE AND MORE RELAXED, GETTING MORE AND MORE
TIRED WITH EACH PASSING MOMENT."
NOW, COUNT DOWN FROM 100. ABOUT EVERY 10 NUMBERS SAY "WHEN I REACH XX YOUR
EYES (OR: YOU WILL FIND YOUR EYES) ARE BECOMING MORE AND MORE TIRED." TELL
THEM THEY MAY CLOSE THEIR EYES WHENEVER THEY FEEL LIKE IT. IF THE PERSON'S
EYES ARE STILL OPEN WHEN YOU GET TO 50, THEN INSTEAD OF SAYING
"YOUR EYES WILL..." SAY "YOUR EYES ARE...".
WHEN THEIR EYES ARE SHUT, SAY THE FOLLOWING: AS YOU LIE (OR SIT) HERE WITH
YOUR EYES COMFORTABLY CLOSED, YOU FIND YOURSELF RELAXING MORE AND MORE WITH
EACH MOMENT AND BREATH. THE RELAXATION FEELS PLEASANT AND BLISSFUL, SO YOU
HAPPILY GIVE WAY TO THIS WONDERFUL FEELING. IMAGINE YOURSELF ON A CLOUD,
RESTING PEACEFULLY, WITH A SLIGHT BREEZE CARESSING YOUR BODY. A TINGLING
SENSATION BEGINS TO WORK ITS WAY, WITHIN AND WITHOUT, THROUGH YOUR TOES; IT
SLOWLY MOVES UP YOUR FEET, MAKING THEM WARM, HEAVY AND RELAXED. THE CLOUD IS
SOFT AND SUPPORTS YOUR BODY WITH ITS SOFT TEXTURE; THE SCENE IS PEACEFUL AND
ABSORBING; THE PEACEFULNESS ABSORBS YOU COMPLETELY...
THE TINGLING GENTLY AND SLOWLY MOVES UP YOUR LEGS, RELAXING THEM, MAKING
THEM WARM AND HEAVY. THE RELAXATION FEELS VERY GOOD; IT FEELS SO GOOD TO
RELAX AND LET GO. AS THE TINGLING CONTINUES ITS JOURNEY UP INTO YOUR SOLAR
PLEXUS, YOU FEEL YOUR INNER STOMACH BECOME VERY RELAXED. NOW IT MOVES SLOWLY
INTO YOUR CHEST, MAKING YOUR BREATHING RELAXED AS WELL. THE FEELING BEGINS
TO MOVE UP YOUR ARMS TO YOUR SHOULDERS, MAKING YOUR ARMS HEAVY AND RELAXED
AS WELL. YOU ARE AWARE OF THE TOTAL RELAXATION YOU ARE NOW EXPERIENCING, AND
YOU GIVE WAY TO IT. IT IS GOOD AND PEACEFUL. THE TINGLING NOW MOVES INTO
YOUR FACE AND HEAD, RELAXING YOUR JAWS, NECK, AND FACIAL MUSCLES, MAKING
YOUR CARES AND WORRIES FLOAT AWAY, AWAY INTO THE BLUE SKY AS YOU REST
BLISSFULLY ON THE CLOUD....
IF THEY ARE NOT RESPONSIVE, OR YOU THINK THEY (HE OR SHE) ARE GOING TO
SLEEP, THEN ADD IN: "...ALWAYS CONCENTRATING UPON MY VOICE, IGNORING ALL
OTHER SOUNDS. EVEN THOUGH OTHER SOUNDS EXIST, THEY AID YOU IN YOUR
RELAXATION..." THEY SHOULD SOON LET OUT A SIGH AS IF THEY WERE LETTING GO,
AND THEIR FACE SHOULD HAVE A "WOODENNESS" TO IT, BECOMING FEATURELESS...
NOW, SAY THE FOLLOWING: ".... YOU NOW FIND YOURSELF IN A HALLWAY. THE
HALLWAY IS PEACEFUL AND NICE. AS I COUNT FROM 10 TO 1 YOU WILL IMAGINE
YOURSELF WALKING FURTHER AND FURTHER DOWN THE HALL. WHEN I REACH ONE YOU
WILL FIND YOURSELF WHERE YOU WANT TO BE, IN ANOTHER, HIGHER STATE OF
CONSCIOUSNESS AND MIND. (COUNT FROM TEN TO ONE)....." DO THIS ABOUT THREE OR
FOUR TIMES.
THEN, TO TEST IF THE SUBJECT IS UNDER HYPNOSIS OR NOT, SAY:
"...YOU FEEL A STRANGE SENSATION IN YOUR (ARM THEY WRITE WITH) ARM. THE
FEELING BEGINS AT YOUR FINGERS AND SLOWLY MOVES UP YOUR ARM. AS IT MOVES
THROUGH YOUR ARM, YOUR ARM BECOMES LIGHTER AND LIGHTER; IT WILL SOON BE SO
LIGHT IT WILL ..... BECOMING LIGHTER AND LIGHTER WITH EACH BREATH AND
MOMENT..."
THEIR FINGERS SHOULD BEGIN TO TWITCH AND THEN MOVE UP, THE ARM FOLLOWING.
NOW, MY FRIEND, YOU HAVE HIM/HER IN HYPNOSIS. THE FIRST TIME YOU DO THIS,
WHILE HE/SHE IS UNDER, SAY GOOD THINGS, LIKE: "YOU'RE GOING TO FEEL GREAT
TOMORROW" OR "EVERY DAY IN EVERY WAY YOU WILL FIND YOURSELF BECOMING BETTER
AND BETTER", OR SOMETHING LIKE THAT. THE MORE THEY GO UNDER, THE DEEPER IN
HYPNOSIS THEY WILL GET EACH TIME YOU DO IT.
+----------------------------+
! WHAT TO DO WHEN HYPNOTISED !
+----------------------------+
WHEN YOU HAVE THEM UNDER YOU MUST WORD THINGS VERY CAREFULLY TO
GET YOUR
WAY. YOU CANNOT SIMPLY SAY... TAKE OFF YOUR CLOTHES AND FUCK THE
PILLOW.
NO, THAT WOULD NOT REALLY DO THE TRICK. YOU MUST SAY SOMETHING
LIKE....
"YOU FIND YOUR SELF AT HOME, IN YOUR ROOM AND YOU HAVE TO TAKE A
SHOWER
(VIVIDLY DESCRIBE THEIR ROOM AND WHATS HAPPENING), YOU BEGIN TO
TAKE OFF
YOUR CLOTHES..." NOW, IT CANT BE THAT SIMPLE, YOU MUST KNOW THE
PERSONS
HOUSE, ROOM, AND SHOWER ROOM. THEN DESCRIBE THINGS VIVIDLY AND
TELL THEM
TO ACT IT OUT (THEY HAVE TO BE DEEPLY UNDER TO DO THIS...). I WOULD
JUST
SUGGEST THAT YOU EXPERIMENT A WHILE, AND GET TO KNOW HO; TO DO
THINGS.
+-----------+
! WAKING UP !
+-----------+
WAKING UP IS VERY EASY. JUST SAY: "...AS I COUNT FROM 1 TO 5 YOU WILL FIND
YOURSELF BECOMING MORE AND MORE AWAKE, MORE AND MORE LIVELY. WHEN YOU WAKE
UP YOU WILL FIND YOURSELF COMPLETELY ALIVE, AWAKE, AND REFRESHED, MENTALLY
AND PHYSICALLY, REMEMBERING THE PLEASANT SENSATION THAT HYPNOSIS BRINGS...
WAKING UP FEELING LIKE A NEWBORN BABY, REBORN WITH LIFE AND VIGOR, FEELING
EXCELLENT, REMEMBERING THAT NEXT TIME YOU ENTER HYPNOSIS IT WILL BECOME AN
EVER-INCREASING, DEEPER AND DEEPER STATE THAN BEFORE.
1- YOU FEEL ENERGY COURSE THROUGHOUT YOUR LIMBS.
2- YOU BEGIN TO BREATHE DEEPLY, STIRRING.
3- BEGINNING TO MOVE MORE AND MORE, YOUR EYES OPEN, BRINGING YOU UP TO FULL
CONSCIOUSNESS.
4- YOU ARE UP, UP, UP AND AWAKENING MORE AND MORE.
5- YOU ARE AWAKE AND FEELING GREAT."
AND THAT'S IT! YOU NOW KNOW HOW TO HYPNOTISE YOURSELF AND SOMEONE ELSE.
YOU WILL LEARN MORE AND MORE AS YOU EXPERIMENT.
f:\12000 essays\sciences (985)\Genetics\Hypnosis2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hypnosis
Over the years, hypnosis has overcome a lot of skepticism. This
research paper will explore the art, use, and questions about hypnosis
both in recreation and in therapy. In this paper, you will learn what
hypnosis is, different types of it, and different techniques for using it.
Hypnosis, as defined by Roy Hunter, is "a natural state of mind,
induced in everyday living much more than it is induced artificially".
Another definition by Hunter is "guided meditation". Many people do not
realize this, but you can be hypnotized by many things. Anytime you
become engrossed in a book or a movie, you may enter a sort of meditative
trance. There are different techniques for induction into a hypnotic
trance. One is eye fixation. This simply uses a fixed gaze, and was very
popular in the 1800's and is most commonly used by Hollywood. Another is
progressive relaxation or imagery. You have someone imagine being in a
safe or peaceful place, and then awaken to full consciousness. Another
induction method is the mental confusion method, which confuses the
conscious mind to the point where it just lets go and becomes relaxed.
Another is shock to the nervous system. This technique is commonly used
by stage hypnotists; it employs a sudden, excited command delivered in a
surprising way. The participant will experience a "moment of
passivity" (Hunter) where they'll either resist the trance or "let go" into
hypnosis.
Hypnosis also has some useful applications. One is in the area
of memory. When you are entranced in the hypnotic state, your sense of
memory is enhanced. Although this is true, the things which are
remembered cannot be regarded as truth. Sometimes when a person is
entranced, they will 'remember' things that never actually happened, but
have great personal significance.
One area that has caused tremendous controversy is in the area of
hypnotizability. The question has been raised many times if there are
certain people who can be hypnotized and certain people who cannot be
hypnotized. There are indeed people who can and cannot. The only thing
it depends on is how well you can focus. People who have better focus
generally have better results with hypnotizability, and people who have a
harder time focusing tend to be less susceptible.
Although hypnosis is generally safe as long as your hypnotist is
competent and trustworthy, some skeptical people still have fears and
concerns. This, once again, all relies on how ethical your hypnotist is.
Some people also think that people lose control of their actions when they
are hypnotized. In a way, you do "lose control". From what I've learned,
you enter what I'd describe as an "uninhibited state" where things that
you would normally find horribly embarrassing would seem perfectly normal,
but you do not give up control over moral decisions. A person in a
hypnotic trance can come out anytime they want to if they are asked to do
something that goes against their moral values.
Another use of hypnosis is in therapy. This is called hypnotherapy.
Hypnotherapy, as defined by Hunter is "the use of hypnosis for
self-improvement and/or the release of problems. All hypnotherapy employs
hypnosis, but not all hypnosis is hypnotherapy". Another definition is
"any form of psychotherapy practiced in conjunction with the hypnotic
modality, within that altered state of consciousness called a hypnotic
trance"(source unknown). Hypnotherapy has a wide variety of uses. Some
surgeons and anesthesiologists use it in controlling pain, relaxing the
patient, relieving postsurgical depression, and controlling nausea. It is
helpful in treating sexual disorders such as impotence and frigidity, and
the psychosomatic disorders. Treatment of problems using hypnosis has
been used throughout history. Anton Mesmer is considered the first modern
hypnotherapist. He is the one to come up with the term "animal
magnetism".
Some people believe that hypnotherapy can be used as a substitute for
cognitive counseling. This is not true. Hypnotherapy can be used for
some things, but some things just can't be solved by
hypnotherapy. For example, hypnotherapy is not a substitute for marriage
counseling. Hypnosis cannot solve all of life's problems.
In conclusion, hypnosis is a natural state of mind and anyone with
enough focus can enter the hypnotic state. Also, hypnotherapy is good for
treating many illnesses.
f:\12000 essays\sciences (985)\Genetics\Hypogravitational Osteoporosis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
==============================================================================
Your Bones in Space ASTRONOMY AND SPACE SCIENCE
------------------------------------------------------------------------------
Hypogravitational Osteoporosis: A review of literature.
By Lambert Titus Parker. May 19 1987.
Osteoporosis: a condition characterized by an absolute decrease in the
amount of bone present to a level below which it is capable of maintaining
the structural integrity of the skeleton.
To state the obvious, human beings have evolved under Earth's gravity ("1G").
Our musculoskeletal system has developed to help us navigate in this
gravitational field, endowed with the ability to adapt as needed to various
stresses, strains and energy requirements. The system is built around bone, a
highly specialized and dynamic supporting tissue which provides vertebrates
their rigid infrastructure. Bone consists of specialized connective tissue
cells called osteocytes and a matrix of organic fibers held together by an
organic cement, which gives bone its tenacity, elasticity and resilience. It
also has an inorganic component located in the cement between the fibers,
consisting of calcium phosphate [85%], calcium carbonate [10%] and other
minerals [5%], which gives it its hardness and rigidity. Besides providing the
rigid infrastructure, bone protects vital organs like the brain, serves as a
complex lever system, acts as a storage area for calcium which is vital for
human metabolism, houses the bone marrow within its medullary cavity and, to
top it all, is capable of changing its architecture and mass in response to
outside and inner stress. It
is this dynamic remodeling of bone which is of primary interest in microgravity.
To feel the impact of this dynamism, it should be noted that a bone remodeling
unit [a coupled phenomenon of bone re-absorption and bone formation] is
initiated, and another finished, about every ten seconds in a healthy adult.
This dynamic system responds to mechanical stress, or the lack of it, by
increasing or decreasing bone mass/density as per the demand on the system;
e.g., a person dealing with increased mechanical stress will respond with
increased bone mass/density, while a person who leads a sedentary life will
have decreased bone mass/density, but still the right amount to support the
structure against the mechanical stresses he/she lives under. Hormones also
play a major role, as seen in postmenopausal osteoporosis (lack of estrogens),
in which the rate of bone formation is usually normal while the rate of bone
re-absorption is increased.
The skeletal system, whose mass represents a dynamic homeostasis under 1G
weight-bearing, reacts to any extended period in microgravity, where
practically no weight bearing is required, by decreasing its mass. After all,
why carry all that extra mass and use all that energy to maintain what is not
needed? Logically, the greatest loss (demineralization) occurs in the
weight-bearing bones of the leg [os calcis] and spine. Bone loss has been
estimated by calcium-balance studies and excretion studies. Increased urinary
excretion of calcium, hydroxyproline and phosphorus has been noted in the
first 8 to 10 days of microgravity, suggestive of increased bone
re-absorption. A rapid increase in urinary calcium has been noted after
takeoff, with a plateau reached by day 30. In contrast, there was a steady
increase of mean fecal calcium throughout the stay in microgravity, which was
not reduced until day 20 after return to 1G, while urinary calcium content
usually returned to preflight levels by day 10 after return to 1G.
There is also significant evidence, derived primarily from rodent studies,
suggesting decreased bone formation as a factor in hypogravitational
osteoporosis. Boy Frame, M.D., a member of NASA's Life Science Advisory
Committee [LSAC], postulated that "the initial pathologic event after the
astronauts enter zero gravity occurs in the bone itself, and that changes in
mineral homeostasis and the calcitropic hormones are secondary to this. It
appears that zero gravity in some ways stimulates bone re-absorption, possibly
through altered bioelectrical fields or altered distribution of tension and
pressure on bone cells themselves. It is possible that gravitational and
muscular strains on the skeletal system cause friction between bone crystals
which creates bioelectrical fields. This bioelectrical effect in some way may
stimulate bone cells and affect bone remodeling." In the early missions, X-ray
densitometry was used to measure the weight-bearing bones pre- and
post-flight. In the later Apollo, Skylab and Spacelab missions, photon
absorptiometry (a more sensitive indicator of bone mineral content) was
utilized. The results of these studies indicated that bone mass [mineral
content] loss was in the range of 3.2% to 8% on flights longer than two weeks,
varying directly with the length of the stay in microgravity. The accuracy of
these measurements has been questioned, since the margin of error for these
measurements is 3 to 7%, a range close to the estimated bone loss.
Whatever the mechanism of hypogravitational osteoporosis, it is one of the
more serious biomedical hazards of a prolonged stay in microgravity. Many
forms of weight-loading exercise have been tried by astronauts and cosmonauts
to reduce space-related osteoporosis. Although isometric exercises have not
been effective, use of the bungee space suit has shown some results. However,
the bungee space suit [made in such a way that every body motion is resisted
by springs and elastic bands, inducing stress and strain on the muscles and
skeletal system] must be worn 6 to 8 hours a day to achieve the desired
effect; it is cumbersome, imposes a significant workload and reduces
efficiency, and is therefore impractical for long-term use other than as proof
of a theoretical principle in preventing hypogravitational osteoporosis.
Skylab experience has shown us that, in spite of space-related osteoporosis,
humans can function in microgravity for six to nine months and return to
Earth's gravity. However, since adults may rebuild only about two-thirds of
the skeletal mass lost, even a calcium loss of 0.3% per month, though small in
relation to the total skeletal mass, becomes significant when a Mars mission
of 18 months is contemplated, and even shorter stays can have additive effects
over repeated flights. This problem becomes even greater in females, who are
already prone to hormonal osteoporosis on Earth.
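To see why a loss rate that looks small per month matters over a long mission, here is a minimal sketch in Python using the 0.3% monthly loss and the two-thirds recovery figure quoted above; the linear-loss assumption and the example mission length are simplifications for illustration only.

# Cumulative skeletal calcium loss over a long stay in microgravity.
# The 0.3% per month and two-thirds rebuild figures come from the text above;
# assuming, simplistically, that the same fraction of the original mass is lost each month.
MONTHLY_LOSS_FRACTION = 0.003     # 0.3% of skeletal calcium lost per month
REBUILD_FRACTION = 2.0 / 3.0      # adults may rebuild only about two-thirds of what is lost

def mission_loss(months, monthly_loss=MONTHLY_LOSS_FRACTION):
    # Fraction of skeletal calcium lost after the given number of months in microgravity.
    return monthly_loss * months

def permanent_deficit(loss, rebuild_fraction=REBUILD_FRACTION):
    # Fraction of skeletal calcium never recovered after return to 1G.
    return loss * (1.0 - rebuild_fraction)

mars_loss = mission_loss(18)                       # 18-month Mars mission: about 5.4% lost
print(mars_loss, permanent_deficit(mars_loss))     # prints 0.054 and 0.018 (about 1.8% never rebuilt)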
So far several studies are under way with no significant results. Much
study has yet to be done and multiple experiments were scheduled on the
Spacelab Life Science [SLS] shuttle missions prior to the Challenger
tragedy. Members of LSAC had recommended that bone biopsies need to be
performed for essential studies of bone histomorphometric changes to
understand hypogravitational osteoporosis. In the past, astronauts with
the Right Stuff had been resistant and distrustful of medical experiments
but with scientific personnel with life science training we should be
able to obtain valid hard data. [It is of interest that in the SLS mission,
two of the mission specialists were to have been physicians, one physiologist
and one veterinarian.]
After all is said, the problem is easily resolved by creation of artificial
gravity in rotating structures. However if the structure is not large
enough the problem of Coriolis effect must be faced. To put the problem
of space related osteoporosis in perspective we should review our definition
of Osteoporosis: a condition characterized by an absolute decrease in the
amount of bone present to a level below which it is capable of maintaining the
structural integrity of the skeleton. In microgravity, locomotion consists
mostly of swimming actions, with stress exerted more on the upper extremities
than on the lower limbs, resulting in reduction of the weight-bearing bones of
the lower extremities and spine, which are NOT needed there for maintaining
the structural integrity of the skeleton. So in microgravity the skeletal
system adapts in a marvelous manner, and a problem arises only when this
microgravity-adapted person needs to return to a higher gravitational field.
The problem is really one of re-adaptation to Earth's gravity.
To the groups wanting to justify space-related research: medical expense due
to osteoporosis in elderly women is close to 4 billion dollars a year, and
significant work in this field alone could justify all space life science
work. It is the opinion of many that the problem of osteoporosis on Earth and
in hypogravity will be solved or contained, and that once large rotating
structures are built the problem will become academic. For completeness' sake:
Dr. Graveline, at the School of Aerospace Medicine, raised a litter of mice on
an animal centrifuge simulating 2G and compared them with littermates raised
in 1G. "They were Herculean in their build, and unusually strong...." reported
Dr. Graveline. X-ray studies also showed the skeletal density of the 2G mice
to be far greater than that of their 1G littermates.
f:\12000 essays\sciences (985)\Genetics\InfluencesonNormalPhysicalDevelopment.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Influences on Normal Physical Development
Physical growth in early childhood is relatively easy to measure and gives an idea of how children normally develop during this period. The average child in North America is less than three feet tall at two years of age. Physical growth shows no discrete stages, plateaus, or qualitative changes. Large differences may develop between individual children and among groups of children. Sometimes these differences affect the psychological development of young children. These differences also create considerable variety among children.
Most dimensions of growth are influenced by the child's genetic background. Also, races and ethnic backgrounds around the world differ in growth patterns. Nutrition can affect growth, but it does not override genetic factors.
One factor in the cause of slow growth is malnutrition. Malnutrition can start as early as pregnancy. Low birth weight babies have an increased risk of infection and death during the first few weeks of life. Food-deprived children carry a greater risk of neurological deficiencies that result in poor vision, impaired educational attainment, and cerebral problems. Such children are also more prone to diseases such as malaria, respiratory tract infections or pneumonia. The illnesses of malnourished children can cause more lasting damage than in a healthy child. The destructive conjunction between low food intake and disease is magnified at the level of the hungry child. There is evidence, according to The Journal of Nutrition, that an estimated 50 percent of disease-related mortality among infants could be avoided if infant malnutrition were eradicated. It has also been shown that low birth- weight is associated with increased prevalence of diseases such as stroke, heart disease and diabetes in adult life. Most damage during the first few years of life cannot easily be undone.
There are many reasons why some children never reach normal height. Some causes of short stature are well understood and can be corrected, but most are subjects of ongoing research. Achondroplasia is the most common growth defect in which abnormal body proportions are present. Achondroplasia is a genetic disorder of bone growth. It affects about one in every 26,000 births. It occurs in all races and in both sexes. It is one of the oldest recorded birth defects found as far back as Egyptian art. A child with achondroplasia has a relatively normal torso but short arms and legs. People sometimes think the child is mentally retarded because they are slow to sit, stand, and walk alone. In most cases, however, a child with achondroplasia has normal intelligence. Children with achondroplasia occasionally die suddenly in infancy or early childhood. These deaths usually occur during sleep and are thought to result from compression of the upper end of the spinal cord, which can interfere with breathing. This disease is caused by an abnormal gene. The discovery of the gene allowed the development of highly accurate prenatal tests that can diagnose or rule out achondroplasia. There is currently no way to normalize skeletal development of children with achondroplasia, so there is no cure. Growth hormone treatments, which increase height in some forms of short stature, do not substantially increase the height of children with achondroplasia. There is no way to prevent the majority of cases of achondroplasia, since these births result from totally unexpected gene mutations in unaffected parents.
One treatment available for children is known as growth hormone therapy. The policy governing the use of growth hormone (GH) therapy has shifted from treating only those children with classic growth hormone deficiency to treating short children to improve their psycho social functioning. This has caused quite a controversy. Parents have described shorter boys as less socially competent and having more behavioral problems than that of the normal sample. Shorter boys describe themselves as less socially active but not having more behavioral problems than that of the normal group. This is according to a study conducted by the Children's Hospital of Buffalo and the State University of New York at Buffalo. The researchers conclude growth hormone therapy should not be administered routinely to all short children for the purpose of improving their psychological health. They urge that physicians consider both a child's short stature and psycho social functioning before making a referral for growth hormone therapy.
Another factor in the growth of children is their change of appetite. Young preschoolers may eat less than they did as a toddler. This is also when they will become more selective and choosy with the foods they eat. These changes are normal and result from the slowing down of growth after infancy. Preschool children simply do not need as many calories as they did after birth. Children's food preferences are influenced by the adult models around them. Preschoolers tend to like the same foods as their parents and other important adults in their lives.
Variations in growth can result from cultural and psychological factors. Failure to thrive is defined in the class textbook as a condition in which an infant seems seriously delayed in physical growth and is noticeably apathetic in behavior. This condition may result from situations that interfere with normal positive relationships between parent and child, especially during infancy or the early preschool period. The result is a deprived relationship that may lead the child to eat poorly or be plagued by constant anxiety. The nervousness can interfere with sleep or the production of growth hormones. If failure to thrive has not persisted for too long, it usually can be reversed in the short run through special nutritional and medical intervention to help the child regain strength and begin growing normally again.
There are many factors that can result in slow growth in children. Between the ages of two and five, growth slows down and children take on more adult bodily proportions. Usually growth is rather smooth during the preschool period. Genetic and ethnic backgrounds affect its overall rate, as do the quality of nutrition and children's experiences with illness. Children's appetites are often smaller in the preschool years than in infancy, and preschoolers become more selective about their food preferences. If children fall behind in growth because of poor nutrition or hormonal deficiencies, they often can achieve catch-up growth if slow growth has not been too severe or prolonged. A few children suffer from failure to thrive, a condition marked by reduced physical growth, possibly as a result of family stress and conflict.
Bibliography
Achondroplasia. Public Health Education Information Sheet. http://www.noah.cuny.edu/pregnancy/march_of_dimes/birth_defects/achondro.html.
Byers, T. 1995. The Emergence of Chronic Diseases in Developing Countries. SCN News 13: 14-19; Golden, M. H. N. 1995. Specific deficiencies versus growth failure. SCN News 12:10-14.
Growth Hormone: Not for All Short Children. Medical Sciences Bulletin, Pharmaceutical Information Associates, Ltd. http://www.pharmingo.com/pubs/msb/grhorm.html.
Mason, J. B. 1990. Malnutrition and Infection. SCN News 5: 20-21; UN Administrative Committee on Coordination-Sub Committee on Nutrition (ACC/SCN). 1995. Maternal Nutrition and Health: A Summary of Research on Birth Weight. Maternal Nutrition and Health 14 (1/2): 14-17.
Pelletier, D. 1995. The Effects of Malnutrition on Child Mortality in Developing Countries. Bulletin of the World Health Organization 73 (4); Pelletier, D. 1994. The Relationship between Child Anthropometry and Mortality in Developing Countries. The Journal of Nutrition. Supplement 124 (10S).
Pollitt, E. 1995. Nutrition in Early Life and the Fulfilment of Intellectual Potential. The Journal of Nutrition. Supplement 125 (4S): 1111S- 1118S.
Seifert, Kelvin L. and Robert J. Hoffnung. Child and Adolescent Development. 1997, Chapter 8, pages 236-244.
Word Count: 1230
f:\12000 essays\sciences (985)\Genetics\Kant the Universal Law Formation of the Categorical Imp~149.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Kant: the Universal Law Formation of the Categorical Imperative
Kantian philosophy outlines the Universal Law Formation of the
Categorical Imperative as a method for determining morality of actions.
This formula is a two part test. First, one creates a maxim and
considers whether the maxim could be a universal law for all rational
beings. Second, one determines whether rational beings would will it to
be a universal law. Once it is clear that the maxim passes both prongs
of the test, there are no exceptions. As a paramedic faced with a
distraught widow who asks whether her late husband suffered in his
accidental death, you must decide which maxim to create and based on the
test which action to perform. The maxim "when answering a widow's
inquiry as to the nature and duration of her late husband's death, one
should always tell the truth regarding the nature of her late husband's
death" (M1) passes both parts of the Universal Law Formation of the
Categorical Imperative. Consequently, according to Kant, M1 is a moral
action.
The initial stage of the Universal Law Formation of the Categorical
Imperative requires that a maxim be universally applicable to all
rational beings. M1 succeeds in passing the first stage. We can easily
imagine a world in which paramedics always answer widows truthfully when
queried. Therefore, this maxim is logical and everyone can abide by it
without causing a logical impossibility. The next logical step is to
apply the second stage of the test.
The second requirement is that a rational being would will this maxim
to become a universal law. In testing this part, you must decide whether
in every case, a rational being would believe that the morally correct
action is to tell the truth. First, it is clear that the widow expects
to know the truth. A lie would only serve to spare her feelings if she
believed it to be the truth. Therefore, even people who would consider
lying to her, must concede that the correct and expected action is to
tell the truth. By asking she has already decided, good or bad, that she
must know the truth.
What if telling the truth brings the widow to the point where she
commits suicide, however? Is telling her the truth then a moral action
although its consequence is this terrible response? If telling the
widow the truth drives her to commit suicide, it seems like no rational
being would will the maxim to become a universal law. The suicide is,
however, a consequence of your initial action. The suicide has no
bearing, at least for the Categorical Imperative, on whether telling the
truth is moral or not. Likewise it is impossible to judge whether upon
hearing the news, the widow would commit suicide. Granted it is a
possibility, but there are a multitude of alternative choices that she
could make, and it is impossible to predict each one. To decide whether a
rational being would will a maxim to become a law, the maxim itself must
be examined rationally and not its consequences. Accordingly, the maxim
passes the second test.
Conversely, some people might argue that in telling the widow a lie,
you spare her years of torment and suffering. These supporters of "white
lies" feel the maxim should read, "When facing a distraught widow, you
should lie in regards to the death of her late husband in order to spare
her feelings." Applying the first part of the Universal Law Formation of
the Categorical Imperative, it appears that this maxim is a moral act.
Certainly, a universal law that prevents the feelings of people who are
already in pain from being hurt further seems like an excellent
universal law. Unfortunately for this line of objection, the only reason
a lie works is because the person being lied to believes it to be the
truth. In a situation where every widow is lied to in order to spare her
feelings, then they never get the truth. This leads to a logical
contradiction because no one will believe a lie if they know it is a lie,
and the maxim fails.
Perhaps the die-hard liar can regroup and test a narrower maxim. If it
is narrow enough so that it encompasses only a few people, then it
passes the first test. For example, the maxim could read, "When facing a
distraught widow whose late husband has driven off a bridge at night,
and he struggled to get out of the car but ended up drowning, and he was
wearing a brown suit and brown loafers, then you should tell the widow
that he died instantly in order to spare her feelings." We can easily
imagine a world in which all paramedics lied to widows in this specific
situation.
That does not necessarily mean that it will pass the second test
however. Even if it does pass the first test, narrowing down a maxim can
create other problems. For instance, circumstances may change, and the
people who were originally included in the universal law may not be
included anymore. Consequently you may not want to will your maxim to
be a universal law. Likewise, if one person can make these maxims that
include only a select group of people, so can everyone else. If you
create a maxim about lying to widows that is specific enough to pass the
first test, so can everyone else. One must ask if rational beings would
really will such a world in which there would be many, many specific,
but universal, laws. In order to answer this question, one must use the
rational "I" for the statement "I, as a rational being would will such a
world," not the specific, embodied "I" which represents you in your
present condition. You must consider that you could be the widow in the
situation rather than the paramedic, then decide whether you would will
such a universal law.
I agree with the morality based on Kantian principles because it is
strict in its application of moral conduct. Consequently there is no
vacillating in individual cases to determine whether an action is moral
or not. An action is moral in itself not because of its consequences but
because any rational being wills it to be a universal law and it does
not contradict itself. Regardless of what the widow does with the
information, the act of telling her the truth, is a moral one. No one
would argue that telling the truth, if she asks for it, is an immoral
thing to do. Sometimes moral actions are difficult, and perhaps in this
situation it would be easier to lie to the widow, but it would still be
an immoral action that I would not want everyone to do. This picture of
morality resonates with my common sense view of morality. If the widow
subsequently commits suicide or commits any other immoral act as a
consequence, that has no bearing on the morality of the original action
in itself.
Utilitarianism would differ on this point. Utilitarianism outlines that
an action is moral if it increases the total happiness of society.
Morality is based on consequences. Telling a lie to the widow would
increase her happiness and consequently would, at least possibly, be a
moral action. Utilitarianism would also take into account the precedent
set by lying; however, the analysis still rests on predicted consequence
rather than on the action's intrinsic moral value. The morality of
telling the lie is on a case by case basis. In some situations, it might
be better to tell the truth, and according to utilitarianism that would
then be the moral action. Unlike Kantian philosophy, one is not bound by
an immutable universal law. Instead one must judge in each case which
action will produce the most overall happiness. The problem with this
approach is that morality loses any value as a universal or intrinsic
quality. Every decision is made on an individual basis in an individual
and specific situation. In fact, utilitarianism considers happiness to
be the only intrinsically valuable end.
Defenders of utilitarianism claim that it maintains universality by
considering the greatest happiness of all beings, rather than just
individual happiness. Still, the morality is based on constantly
changing and often unpredictable consequences. The requirement that one
consider all of the consequences of an action and determine the best
possible action through such calculations makes me reject utilitarianism
as a method of determining morality.
Although utilitarianism often offers the easier solution to perform
because it produces immediate gratification and allows many exceptions
to common sense moral codes, the answers it gives are unfulfilling and
unrealistic. Furthermore, it is difficult, if not impossible, to make
all of the required calculations beforehand. Kant's solution, although
as interpreted by Kant is sometimes overly extreme, is much better than
utilitarianism. It resonates with my moral sensibilities to consider
that actions are moral or immoral regardless of their immediate
consequences. I am willing to accept that sometimes the moral action is
harder to perform, but I am unwilling to accept that morality rests
within the specifics of a situation and the possible consequences.
Therefore, I consider Kant's Universal Law Formation of the Categorical
Imperative to be a better test of morality than Mill's Utilitarianism.
f:\12000 essays\sciences (985)\Genetics\LSD.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Table of Contents
  Introduction
  Table 1: Effects of LSD
  A Brief Foray Into Philosophy and the Cognitive Sciences
  The Suspects
  Figure 1: Structure of LSD
  Overview of Synaptic Transmission
  Theory: LSD Pre-synaptically Inhibits 5-HT Neurons
  Theory: LSD Post-synaptically Antagonizes 5-HT2 Receptors
  Figure 2: LSD Binding at 5-HT2 Receptor
  Theory: LSD Post-synaptically Partially Agonizes 5-HT2 Receptors
  Theory: LSD Post-synaptically Agonizes 5-HT1 Receptors
  Conclusion
  References

Introduction
The psychedelic effects of d-Lysergic Acid Diethylamide-25 (LSD) were discovered by Dr. Albert Hofmann by accident in 1938. In the 1950s and 1960s, LSD was used by psychiatrists for analytic psychotherapy. It was thought that the administration of LSD could aid the patient in releasing repressed material. It was also suggested that psychiatrists themselves might develop more insight into the pathology of a diseased mind through self experimentation. 1,2 During the late 60s, LSD became popular as a recreational drug. While it has been suggested that recreational use of the drug has dropped, a recent report on CNN claimed that 4.4% of 8th graders have tried it. LSD is considered to be one of, if not the, most potent hallucinogenic drugs known. Small doses of LSD (1/2 - 2 ug/kg body weight) result in a number of system-wide effects that can be classified into somatic, psychological, cognitive, and perceptual categories. These effects can last between 5 and 14 hours.

Table 1: Effects of LSD 1, 2, 3
  Somatic: mydriasis; hyperglycemia; hyperthermia; piloerection; vomiting; lachrymation; hypotension; respiratory effects stimulated at low doses and depressed at higher doses; bradycardia
  Psychological: hallucinations; depersonalization; reliving of repressed memories; mood swings (related to set and setting); euphoria; megalomania; schizophrenic-like state; reduced "defenses", subject to "power of suggestion"
  Cognitive: disturbed thought processes; difficulty expressing thoughts; impairment of reasoning; impairment of memory - esp. integration of short -> long term
  Perceptual: increased stimulus from environment; changes in shape/color; synaesthesia (running together of sensory modalities); disturbed perception of time

The study of hallucinogens such as LSD is fundamental to the neurosciences. Science thrives on mystery and contradiction; indeed, without these it stagnates. The pronounced effects that hallucinogens have throughout the nervous system have served as potent demonstrations of difficult-to-explain behavior. The attempts to unravel the mechanisms of hallucinogens are closely tied to basic research in the physiology of neuroreceptors, neurotransmitters, neural structures, and their relation to behavior. This paper will first examine the relationship between neural activity and behavior. It will then discuss some of the neural populations and neurotransmitters that are believed to be affected by LSD. The paper will conclude with a more detailed discussion of possible ways that LSD can affect the neurotransmitter receptors which are probably ultimately responsible for its effects.

A Brief Foray Into Philosophy and the Cognitive Sciences
Modern physics is divided between two descriptions of the universe: the theory of relativity and quantum mechanics. Many physicists have faith that at some point a "Grand Unified Theory" will be developed which will provide a unified description of the universe from subatomic particles to the movement of the planets.
Like in physics, the cognitive sciences can describe the brain at different levels of abstraction. For example, neurobiologists study brain function at the level of neurons while psychologists look for the laws describing behavior and cognitive mechanisms. Also like in physics, many in these fields believe that it is possible that one day we will be able to understand complicated behaviors in terms of neuronal mechanisms. Others believe that this unification isn't possible even in theory because there is some metaphysical quality to consciousness that transcends neural firing patterns. Even if consciousness can't be described by a "Grand Unified Theory" of the cognitive sciences, it is apparent that many of our cognitive mechanisms and behaviors can. While research on the level of neurons and psychological mechanisms is fairly well developed, the area in between these is rather murky. Some progress has been made however. Cognitive scientists have been able to associate mechanisms with areas of the brain and have also been able to describe the effects on these systems by various neurotransmitters. For example, disruption of hippocampal activity has been found to result in a deficiency in consolidating short term to long term memory. Cognitive disorders such as Parkinson's disease can be traced to problems in dopaminergic pathways. Serotonin has been implicated in the etiology of various CNS disorders including depression, obsessive-compulsive behavior, schizophrenia, and nausea. It is also known to effect the cardiovascular and thermoregulatory systems as well as cognitive abilities such as learning and memory. The lack of knowledge in the middle ground between neurobiology and psychology makes a description of the mechanisms of hallucinogens necessarily coarse. The following section will explore the possible mechanisms of LSD in a holistic yet coarse manner. Ensuing sections will concentrate on the more developed studies of the mechanisms on a neuronal level. The Suspects Researchers have attempted to identify the mechanism of LSD through three different approaches: comparing the effects of LSD with the behavioral interactions already identified with neuotransmitters, chemically determining which neurotransmitters and receptors LSD interacts with, and identifying regions of the brain that could be responsible for the wide variety of effects listed in Table 1. Initial research found that LSD structurally resembled serotonin (5-HT). As described in the previous section, 5-HT is implicated in the regulation of many systems known to be effected by LSD. This evidence indicates that many of the effects of LSD are through serotonin mediated pathways. Subsequent research revealed that LSD not only has affinities for 5-HT receptors but also for receptors of histamine, ACh, dopamine, and the catecholines: epinephrine and norepinephrine.3 Only a relative handful of neurons (numbering in the 1000s) are serotonergic (i.e. release 5-HT). Most of these neurons are clustered in the brainstem. Some parts of the brainstem have the interesting property of containing relatively few neurons that function as the predominant provider of a particular neurotransmitter to most of the brain. For example, while there are only a few thousand serotonergic cells in the Raphe Nuclei, they make up the majority of serotonergic cells in the brain. Their axons innervate almost all areas of the brain. 
The possibility for small neuron populations to have such systemic effects makes the brain stem a likely site for hallucinogenic mechanisms. Two areas of the brainstem that are thought to be involved in LSD's pathway are the Locus Coeruleus (LC) and the Raphe Nuclei. The LC is a small cluster of norepinephrine containing neurons in the pons beneath the 4th ventricle. The LC is responsible for the majority of norepinephrine neuronal input in most brain regions.4 It has axons which extend to a number of sites including the cerebellum, thalamus, hypothalamus, cerebral cortex, and hippocampus. A single LC neuron can effect a large target area. Stimulation of LC neurons results in a number of different effects depending on the post-synaptic cell. For example, stimulation of hippocampal pyramidal cells with norepinephrine results in an increase in post-synaptic activity. The LC is part of the ascending reticular activating system which is known to be involved in the regulation of attention, arousal, and the sleep-wake cycle. Electrical stimulation of the LC in rats results in hyper-responsive reactions to stimuli (visual, auditory, tactile, etc.)5 LSD has been found to enhance the reactivity of the LC to sensory stimulations. However, LSD was not found to enhance the sensitivity of LC neurons to acteylcholine, glutamate, or substance P.6 Furthermore, application of LSD to the LC does not by itself cause spontaneous neural firing. While many of the effects of LSD can be described by its effects on the LC, it is apparent that LSD's effects on the LC are indirect.4 While norepinephrine activity throughout the brain is mainly mediated by the LC, the majority of serotonergic neurons are located in the Raphe Nuclei (RN). The RN is located in the middle of the brainstem from the midbrain to the medulla. It innervates the spinal cord where it is involved in the regulation of pain. Like the LC, the RN innervates wide areas of the brain. Along with the LC, the RN is part of the ascending reticular activating system. 5-HT inhibits ascending traffic in the reticular system; perhaps protecting the brain from sensory overload. Post-synaptic 5-HT receptors in the visual areas are also believed to be inhibitory. Thus, it is apparent that an interruption of 5-HT activity would result in disinhibition, and therefore excitation, of various sensory modalities. Current thought is that the mechanism of LSD is related to the regulation of 5-HT activity in the RN. However, the RN is also influenced by GABAergic, catecholamergic, and histamergic neurons. LSD has been shown to also have affinities for many of these receptors. Thus it is possible that some of its effects may be mediated through other pathways. Current research however has focused on the effects of LSD on 5-HT activity. Before specific mechanisms and theories are discussed, a brief discussion of the principles of synaptic transmission will be given. Overview of Synaptic Transmission There are two types of synapses between neurons: chemical and electrical. Chemical synapses are more common and are the type discussed in this paper. When an action potential (AP) travels down a pre-synaptic cell, vesicles containing neurotransmitter are released into the synapse (exocytosis) where they effect receptors on the post synaptic cell. Synaptic activity can be terminated through reuptake of the neurotransmitter to the pre-synaptic cell, the presence of enzymes which inactivate the transmitter (metabolism), or simple diffusion. 
A pre-synaptic neuron can act on the post-synaptic neuron through direct or indirect pathways. In a direct pathway, the post-synaptic receptor is also an ion channel. The binding of a neurotransmitter to its receptor on the post-synaptic cell directly modifies the activity of the channel. Neurotransmitters can have excitatory or inhibitory effects. If a neurotransmitter is excitatory, it binds to a ligand activated channel in the post-synaptic cell resulting in a change in membrane permeability to ions such as Na+ or K+ resulting in a depolarization which therefore brings the post-synaptic cell closer to threshold. Inhibitory neurotransmitters can work post-synaptically by modifying the membrane permeability of the post-synaptic cell to anions such as Cl- which results in hyperpolarization. Many neurotransmitters that have system-wide effects such as epinephrine (adrenaline), norepinephrine (noradrenaline), and 5-HT work by an indirect pathway. In an indirect pathway, the post-synaptic receptor acts on an ion channel through indirect means such as a secondary messenger system. Many indirect receptors such as muscarinic, Ach, and 5-HT involve the use of G proteins.5 Indirect mechanisms often will alter the behavior of a neuron without effecting its resting potential. For example, norepinephrine blocks slow Ca activated K channels in the rat hippocampal pyramidal cells. Normally, Ca influx eventually causes the K channels to open. This causes a prolonged after hyperpolarization which extends the refractory period of the neuron. Therefore, by blocking the K channels, the prolonged after hyperpolarization is inhibited which results in the neuron firing more APs for a given excitatory input.5 Other indirect means of neuromodulation include interfering with pre-synaptic neurotransmitter synthesis, storage, release, or reuptake. Inhibiting the reuptake of a neurotransmitter, for example, can cause an excitatory response. Stimulation of neurotransmitter receptors can have a variety of effects on both pre and post-synaptic cells. Pre-synaptic receptors are sometimes involved in self regulation while post-synaptic receptors can cause an increase (excitation) or decrease (inhibition) of AP firing in a neuron. A subtler method of neuromodulation involves molecules that effect these neuroreceptors. Molecules that excite a receptor are referred to as agonists while those that interfere with receptor binding are called antagonists. For example, 5-HT often acts as an inhibitory neurotransmitter. A 5-HT receptor antagonist could interfere with the activation of post-synaptic 5-HT receptors causing them to be less responsive to inhibition. This disinhibition would make the post-synaptic cell more responsive to neural inputs, most likely resulting in an excitatory response. Theory: LSD Pre-synaptically Inhibits 5-HT Neurons Raphe Nuclei neurons are autoreactive; that is they exhibit a regular spontaneous firing rate that is not triggered by an external AP. Evidence for this comes from the observation that RN neural firing is relatively unaffected by transections isolating it from the forebrain. Removal of Ca++ ions, which should block synaptic transmission, also has little effect on the rhythmic firing pattern. This firing pattern however is susceptible to neuromodulation by a number of transmitters.7 In 1968, Aghajanian and colleagues observed that systemic administration of LSD inhibited spontaneous firing of these autoreactive serotonergic neurons in the RN. 
Serotonergic neurons are known to have a negative feedback pathway through autoreceptors (receptors on the pre-synaptic cell that respond to the neurotransmitter released by the cell). This means that an increase in 5-HT levels causes a decrease in the activity of serotonergic neurons. Serotonergic neurons are also known to make synaptic connections with other RN neurons. This could have the result of spreading out the effects of negative feedback to other RN neurons. This led to the theory that LSD causes a depletion of 5-HT through negative feedback in pre-synaptic autoreceptors.7 The depletion of 5-HT was thought to be responsible for the effects on the previously described systems innervated by the serotonergic neurons. A number of subsequent observations have called this theory into doubt however. Low doses of LSD effect behavior but do not depress firing in the RN.8 The behavioral effects of LSD outlast the modification of RNN firing.8 While repeated dosage of LSD results in a decrease of behavioral modifications (tolerance), its effects on the RN are unchanged.8 Other hallucinogens such as mescaline and DOM do not effect R neurons.8 Depletion of 5-HT does not eliminate the effectiveness of LSD. If LSD worked by inhibiting the 5-HT output of pre-synaptic 5-HT neurons, it should be ineffaceable if 5-HT is depleted. The opposite result was actually observed; depletion enhances LSD activity.9 Mianserin, a 5-HT2 receptor antagonist, blocks LSD behavior but does not block LSD's depression of RN neurons.9 While LSD does cause a decrease in the autoreactive firing of RN neurons, this appears to be an effect and not the cause. These observations are considered however to be compatible with a post-synaptic model. Subsequent research found that LSD and other hallucinogens have a high affinity for post-synaptic 5-HT1 and 5-HT2 receptors. In fact there is significant correlation between the affinity of a hallucinogen for these receptors and its human potency. While it seems logical that 5-HT activity is modulated at 5-HT receptor sites, it is possible that LSD could be affecting 5-HT receptor activity indirectly through adrenic or dopaminic pathways. However, blocking these receptors caused no change in LSD's activity on the 5-HT receptors, thus it appears that 5-HT activity is indeed modified by 5-HT receptors.10 While evidence indicates that LSD is a 5-HT1 agonist, it is debated whether the effects on 5-HT2 receptors is agonistic or antagonistic.11 Theory: LSD Post-synaptically Antagonizes 5-HT2 Receptors Initial post-synaptic theories postulated that LSD was a 5-HT2 agonist. Pierce and Peroutka (P&P), however, argued that LSD has a number of antagonistic properties and called into doubt some of the evidence presented as being compatible with agonist activity. The primary evidence for agonistic behavior comes from observations that the effects of LSD are inhibited by 5-HT2 antagonists. P&P pointed out that this is not always the case. For example, some 5-HT2 antagonists such as spiperone do not block LSD behavior. In addition, radioligand binding studies have shown that the affinity of 5-HT2 receptor agonists is pH dependent while the affinity of 5-HT2 receptor antagonists and LSD are pH independent.9 5-HT2 receptors are connected to a phosphatidylinositol (PI) second messenger system. PI turnover has been found to be stimulated by 5-HT and antagonized by 5-HT2 antagonists. P&P found that nM concentrations of LSD do not stimulate PI turnover. Therefore, LSD does not act as a classic agonist. 
They also found that nM concentrations of LSD inhibited the stimulatory effect of 10 µM 5-HT. The ability of LSD to inhibit a concentration 1000x greater is consistent with it being a 5-HT2 antagonist. P&P also point out that the excitatory effects of 5-HT on CNS neurons appear to be caused by a decrease in K+ conductance attributable to activation of 5-HT2 receptors. P&P found that LSD inhibits this effect in rat somatosensory pyramidal neurons. This also is evidence that LSD acts in an antagonistic role.9 The final line of evidence presented by P&P was from smooth muscle studies. The guinea pig trachea contracts when µM concentrations of 5-HT are present. The ability of 5-HT antagonists to inhibit this effect correlates with the antagonist's affinity for the 5-HT2 binding site. Thus it appears that this muscle contraction is 5-HT2 mediated. It was found that nM concentrations of LSD did not cause muscle contraction and inhibited the agonistic effects of µM concentrations of 5-HT. This also is compatible with the actions of an antagonist. Theory: LSD Post-synaptically Partially Agonizes 5-HT Receptors Many of the apparent contradictions in evidence in the debate over whether LSD acts as a 5-HT2 agonist or antagonist can be reconciled by the theory that LSD acts as a partial 5-HT2 agonist. Dr. Glennon presented a number of arguments for this theory including data from his own research and from the studies discussed by P&P in the previous section. One of the primary tools used by Glennon to determine the effects of various chemicals on the interactions between LSD and 5-HT was drug discrimination training in rats. Rats were trained to discriminate 1-(2,5-dimethoxy-4-methylphenyl)-2-aminopropane (DOM) from saline. Training with DOM stimuli generalized to many indolealkylamine and phenalkylamine hallucinogens. DOM was chosen instead of LSD as a training drug because of concern that LSD had a number of pharmacological effects. It was thought that if the rat was trained with LSD, it might make discriminations based on one of the pharmacological effects of LSD other than its effects on 5-HT. With this tool, Glennon demonstrated that a number of 5-HT2 antagonists inhibited the ability of rats to discriminate LSD from saline. This indicates that LSD acts as a 5-HT2 agonist. Glennon offered no explanation for P&P's observation that some antagonists such as spiperone do not have this effect. However, spiperone and a few other similar antagonists appear to be only about 40% effective in inhibiting 5-HT2 sites due to their relative nonselectivity.13 As discussed in the previous section, PI turnover has been found to be stimulated by 5-HT and is antagonized by 5-HT2 antagonists. In another study of the effects of LSD on PI turnover, it was found that LSD acted as a partial agonist (it produces approximately 25% of the effect caused by 5-HT). The apparent difference between this second study and P&P's is that the second study tested the effects at a variety of doses. From this it was concluded that while LSD has a higher affinity for 5-HT receptors than 5-HT does, it has a lower efficacy. This is compatible with P&P's observation that nM concentrations of LSD inhibited the stimulatory effects of µM 5-HT. If LSD acted as a partial agonist with low efficacy, it could compete with 5-HT in binding to 5-HT2 receptors. Since 5-HT is a more potent agonist than LSD, the effects of LSD would appear antagonistic. 
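The competition argument just described can be made concrete with a toy receptor-occupancy calculation. The short Python sketch below is only an illustration: the dissociation constants and efficacy values are assumed for the sake of the example, not measured figures from the studies cited. It models two agonists competing for the same 5-HT2 site, giving LSD high affinity but low efficacy; LSD alone produces a weak response, while adding LSD to an already effective 5-HT concentration lowers the net response, which is why a partial agonist can look like an antagonist.

# Toy receptor-occupancy model of a high-affinity, low-efficacy partial
# agonist. All constants are illustrative assumptions, not measured values.
def ht2_response(lsd_nM, serotonin_nM,
                 K_lsd=10.0,    # assumed LSD dissociation constant, nM (high affinity)
                 K_5ht=1000.0,  # assumed 5-HT dissociation constant, nM (lower affinity)
                 e_lsd=0.25,    # assumed LSD efficacy (~25% of 5-HT, per the PI-turnover study)
                 e_5ht=1.0):    # 5-HT treated as the full agonist
    # Fractional response when the two agonists compete for one receptor pool.
    denom = 1.0 + lsd_nM / K_lsd + serotonin_nM / K_5ht
    occ_lsd = (lsd_nM / K_lsd) / denom
    occ_5ht = (serotonin_nM / K_5ht) / denom
    return e_lsd * occ_lsd + e_5ht * occ_5ht

print(ht2_response(0, 10000))    # 10 uM 5-HT alone: near-maximal response (~0.91)
print(ht2_response(100, 0))      # 100 nM LSD alone: weak partial-agonist response (~0.23)
print(ht2_response(100, 10000))  # LSD added to 5-HT: response falls (~0.60), an apparent antagonism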
Glennon argued that the guinea pig trachea may not be a good example since 5-HT does not work through a PI mechanism in this case. In the rat aorta, however, 5-HT does hydrolize PI and the contractile effects of 5-HT are antagonized by ketanserin (a 5-HT2 antagonist). While LSD was not tested, another hallucinogen, DOB, was found to have an agonistic effect that could be antagonized by ketanserin. This suggests that LSD acts agonistically in the rat aorta. Glennon points out that it may well be the case that in other cases, the effects may be antagonistic. However, these effects could be explained if LSD had a low efficacy for the receptor. Hyperthermia and platelet aggregation are both affected by 5-HT2 mechanisms. Hallucinogens such as LSD have been shown to behave agonistically and in the case of platelets, to be antagonized by 5-HT2 antagonists such as ketanserin.11 LSD often has a biphasic response in which low doses have the opposite effects of higher doses. The head twitch response in rodents is believed to be 5-HT2 mediated. At low doses, it has been found that LSD elicits a head-twitch response while at higher doses it antagonizes the response. The rat startle reflex is amplified at low dosages of LSD while decreased at higher doses. This biphasic behavior can also be explained if LSD behaves as a partial agonist.11 In summary, this theory claims that: "LSD is a high-affinity, low efficacy, nonselective 5-HT agonist; in the absence of another agonist it may function as an agonist, whereas in the presence of a high efficacy agonist, it will function as an antagonist." 11 Theory: LSD Post-synaptically Agonizes 5-HT1 Receptors Glennon also gave another possible explanation for the antagonistic activity of LSD. There is some evidence that 5-HT1 receptors have an antagonistic relationship with 5-HT2 receptors. As discussed in the previous section, head twitch behavior is believed to be 5-HT2 mediated. DOI acts as a 5-HT2 agonist and elicits head twitch. 5-OMe DMT also is a 5-HT agonist but has less efficacy than DOI. If the subject is pretreated with 5-OMe DMT, the effects of DOI are attenuated (because many of the receptors are filled with the lower efficacy 5-OMe DMT molecules.) It has been found that A 5-HT1 agonist (8-OH DPAT) can also cause DOI attenuation. Other studies have also demonstrated that 5-HT1 agonists can behave functionally as 5-HT2 antagonists.11 Glennon argued that this theory is lent extra credence from the observation that 5-HT2 and 5-HT1c have similar relationships with various hallucinogens. A number of these hallucinogens have been shown to be 5-HT1c agonists. Like 5-HT2 sites, the affinity of hallucinogens for 5-HT1c sites correlates with their hallucinogenic potency in humans. Thus another explanation of the biphasic behavior of LSD is that increasingly higher doses of LSD cause increased antagonism of the 5HT2 receptor through agonism of 5HT1 receptors. Although, the pre-synaptic theory seems to be fairly well discredited, it is interesting to note that there is debate as to whether pre-synaptic serotonin autoreceptors are of the 5-HT1 type. 
Whether serotonergic autoreceptors are 5-HT1 or not, it has been demonstrated that there are also post-synaptic 5HT-1 receptors.12 While the role of these receptors is not completely known, some researchers have hypothesized that 5-HT1 receptors may be involved in the regulation of norepinephrine.13 As discussed previously, the majority of norepinephrine neurons are located in the LC which also has system wide innervation. Recent research on 5-HT receptors calls the theory that 5-HT1 agonism results in 5-HT2 antagonism into question. Since Glennon's paper, the 5-HT1c receptor has been reclassified as 5-HT2c. Since the 5-HT2 receptors discussed in this paper belong to the same family as what was called the 5-HT1c receptor, these have been reclassified as 5-HT2a.14 Since "5-HT1c" is a member of the 5-HT2 family, it is not surprising the LSD affinities are similar for the two receptors. While these reclassifications do not necessarily discount the theory that one receptor has an antagonistic effect on the other, it seems likely that the evidence for this may need to be re-evaluated in terms of recent findings. Conclusion The lack of understanding about the mechanisms of LSD is indicative of the problems involved in the bridging of the worlds of psychology and neurobiology. As more is learned about the roles and interactions of various neurotransmitters, receptors, and on a larger scale: portions of the brain, the mystery will be further unraveled. With this caveat emptor firmly in mind, it seems that the best explanation of LSD's effects is that it behaves as a high affinity partial 5-HT agonist. Depending on the presence of other molecules and its own concentration, LSD can have either agonistic or antagonistic effects on post-synaptic 5-HT2 family receptors. This modulation of 5-HT behavior is probably responsible for many of the effects attributable to LSD. LSD also has an affinity for other neurotransmitter receptors that play important roles in the brain stem such as norepinephrine, dopamine, and histamine. It is also hypothesized that LSD may modulate neural responses to these transmitters through its activity on 5-HT1 receptors. Both the Locus Coeruleus and the Raphe Nuclei are part of the ascending reticular activating system which is implicated in the sensory modalities. The inhibition of 5-HT in the RN and release of norepinephrine from LC neurons results in a flood of information from the sensory system reaching the brain. Some of the cognitive effects of LSD could be attributed to the effects of brain stem innervation to areas of the brain such as the cerebral cortex and the hippocampus. References 1.(1995): "FAQ-LSD" From internet newsgroup: alt.drugs.psychedelics 2.Sankar (1975): "LSD: A Total Study" 3.Ashton H (1987): "Brain Systems Disorders and Psychotropic Drugs" 4.Snyder (1986): "Drugs and the Brain" Sci Am Books Inc. From FAQ-LSD 5.Nicholls J, Martin R, Wallace B (1992): "From Neuron to Brain: Acellular andMolecular Approach to the Function of the Nervous System" 6.Aghajanian GK(1980): "Mescaline and LSD Facilitate the Activation of Locus Coeruleus Neurons by Peripheral Stimulation" Brain Res 186:492-496 7.Jacobs, B (1985): "An Overview of Brain Serotonergic Unit Activity and its Relevance to the Neuropharmacology of Serotonin." 
From: Green, A: Neuropharmacology of Serotonin 8.Jacobs B, Trulson M, Heym J (1981): "Dissociations Between the Effects of Hallucinogenic Drugs on Behavior and Raphe Unit Activity in Freely Moving Cats" Brain Res 215:275-293 9.Pierce P, Peroutka S (1990): "Antagonist Properties of d-LSD at 5-Hydroxytryptamine2 Receptors". Neuropsychopharmacology 3(5-6):509-517 10.Moret C (1985): "Pharmacology of the Serotonin Autoreceptor" From: Green, A: Neuropharmacology of Serotonin 11.Glennon R (1990): "Do Classical Hallucinogens Act as 5-HT2 Agonists or Antagonists?" Neuropsychopharmacology 3(5-6):509-517 12.Green R, Heal D (1985): "The Effects of Drugs on Serotonin Mediated Behavioral Models" From Green, A: Neuropharmacology of Serotonin 13.Leysen J (1985): "Characterization of serotonin receptor binding sites" From Green, A: Neuropharmacology of Serotonin 14.Borne R (1994): "Serotonin: The Neurotransmitter for the 90's" URL: http://www.fairlite.com/ocd/artiles/ser90.shtml. From: Drug Topics, Oct 10, 1994:108
f:\12000 essays\sciences (985)\Genetics\Malaria Vaccinology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Current Status of Malaria Vaccinology
In order to assess the current status of malaria vaccinology one
must first take an overview of the disease as a whole.
One must understand the disease and its enormity on a global
basis. Malaria is a protozoan disease of which over 150 million
cases are reported per annum. In tropical Africa alone more than
1 million children under the age of fourteen die each year from
Malaria. From these figures it is easy to see that eradication of
this disease is of the utmost importance.
The disease is caused by one of four species of Plasmodium. These
four are P. falciparum, P. malariae, P. vivax and P. ovale.
Malaria does not only affect humans, but can also infect a
variety of hosts ranging from reptiles to monkeys. It is
therefore necessary to look at all the aspects in order to assess
the possibility of a vaccine. The disease has a long and complex
life cycle which creates problems for immunologists. The vector
for Malaria is the Anopheles mosquito, in which the life cycle of
Malaria both begins and ends. The parasitic protozoan enters the
bloodstream via the bite of an infected female mosquito. During
her feeding she transmits a small amount of anticoagulant and
haploid sporozoites along with saliva. The sporozoites head
directly for the hepatic cells of the liver where they multiply
by asexual fission to produce merozoites. These merozoites can
now travel one of two paths. They can go on to infect more
hepatic cells or they can attach to and penetrate erythrocytes.
When inside the erythrocytes the plasmodium enlarges into
uninucleated cells called trophozoites. The nucleus of this
newly formed cell then divides asexually to produce a schizont,
which has 6-24 nuclei. Now the multinucleated schizont then
divides to produce mononucleated merozoites . Eventually the
erythrocyte undergoes lysis and as a result the merozoites enter the
bloodstream and infect more erythrocytes. This cycle repeats
itself every 48-72 hours (depending on the species of plasmodium
involved in the original infection). The sudden release of
merozoites, toxins and erythrocyte debris is what causes the
fever and chills associated with Malaria.
Of course the disease must be able to transmit itself for
survival. This is done at the erythrocytic stage of the life
cycle. Occasionally merozoites differentiate into
macrogametocytes and microgametocytes. This process does not
cause lysis and therefore the erythrocyte remains stable, and
when the infected host is bitten by a mosquito the gametocytes
can enter its digestive system where they mature into
sporozoites, thus the life cycle of the plasmodium is begun again
waiting to infect its next host. At present people infected with
Malaria are treated with drugs such as Chloroquine, Amodiaquine
or Mefloquine. These drugs are effective at eradicating the
erythrocytic stages, but resistance to them is becoming
increasingly common. Therefore a vaccine looks like the only viable
option.
The wiping out of the vector, i.e. the Anopheles mosquito, would
also prove an effective way of stopping disease transmission, but
the mosquitoes are also becoming resistant to insecticides, and so
again we must look to a vaccine as a solution. Having read certain
attempts at creating a malaria vaccine, several points become
clear. The first is whether the theory of malaria vaccinology is a
viable concept at all. I found the answer to this in an article
published in Nature from July 1994 by Christopher Dye and
Geoffrey Targett. They used the MMR (Measles, Mumps and Rubella)
vaccine as an example to which they could compare a possible
malaria vaccine. Their article said that "simple epidemiological
theory states that the critical fraction (p) of all people to be
immunised with a combined vaccine (MMR) to ensure eradication of
all three pathogens is determined by the infection that spreads
most quickly through the population; that is, by the one with the
largest basic case reproduction number Ro. If a vaccine can be
made against the strain with the highest Ro, it could provide
immunity to all malaria plasmodia."
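The threshold Dye and Targett describe is the standard relation
p = 1 - 1/Ro: the larger the basic case reproduction number, the
greater the fraction of the population that must be immunised.
A minimal Python sketch of that arithmetic follows; the Ro values
used are illustrative only, not field estimates for malaria.

def critical_fraction(r0):
    # Fraction that must be immunised to push an infection with basic
    # case reproduction number r0 below replacement: p = 1 - 1/r0.
    return 1.0 - 1.0 / r0

# A combined vaccine must meet the coverage demanded by the strain that
# spreads fastest, i.e. the one with the largest r0 (values illustrative).
for r0 in (2.0, 5.0, 15.0):
    print(r0, round(critical_fraction(r0), 2))   # 0.5, 0.8, 0.93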
Another problem faced by immunologists is the difficulty in
identifying the exact antigens which are targeted by a protective
immune response. Isolating the specific antigen is impeded by the
fact that several cellular and humoral mechanisms probably play a
role in natural immunity to malaria - but as is shown later there
may be an answer to the dilemma. While researching current
candidate vaccines I came across some which seemed more viable
than others and I will briefly look at a few of these in this
essay. The first is a study carried out in the Gambia from 1992
to 1995 (taken from the Lancet of April 1995).
The subjects were 63 healthy adults and 56 children identified
with malaria from an outpatient clinic. Their test was based on the
fact that experimental models of malaria have shown that
Cytotoxic T Lymphocytes which kill parasite infected hepatocytes
can provide complete protective immunity from certain species of
plasmodium in mice. From the tests they carried out in the
Gambia they have provided what they see as indirect evidence
that cytotoxic T lymphocytes play a role against P. falciparum in
humans. Using a human leucocyte antigen-based approach termed
reversed immunogenetics they previously identified peptide
epitopes for CTL in liver stage antigen-1 and the
circumsporozoite protein of P. falciparum, which is the most lethal
of the plasmodia that infect humans. Having these identified they
then went on to identify CTL epitopes for HLA class 1 antigens
that are found in most individuals from Caucasian and African
populations. Most of these epitopes are in conserved regions of
P. falciparum. They also found CTL peptide epitopes in a further
two antigens, thrombospondin-related anonymous protein and
sporozoite threonine and asparagine rich protein. This indicated
that a subunit vaccine designed to induce a protective CTL
response may need to include parts of several parasite antigens.
In the tests they carried out they found that CTL levels in both
children with malaria and in semi-immune adults from an endemic
area were low suggesting that boosting these low levels by
immunisation may provide substantial or even complete protection
against infection and disease. Although these tests were not a
huge success they do show that a CTL inducing vaccine may be the
road to take in looking for an effective malaria vaccine. There
is now accumulating evidence that CTL may be protective against
malaria and that levels of these cells are low in naturally
infected people. This evidence suggests that malaria may be an
attractive target for a new generation of CTL inducing vaccines.
The next candidate vaccine that caught my attention was one which
I read about in Vaccine vol 12 1994. This was a study of the
safety, immunogenicity and limited efficacy of a recombinant
Plasmodium falciparum circumsporozoite vaccine. The study was
carried out in the early nineties using healthy male Thai rangers
between the ages of 18 and 45. The vaccine named R32 Tox-A was
produced by the Walter Reed Army Institute of Research,
Smithkline Pharmaceuticals and the Swiss Serum and Vaccine
Institute all working together. R32 Tox-A consisted of the
recombinantly produced protein R32LR, amino acid sequence
[(NANP)15 (NVDP)]2 LR, chemically conjugated to Toxin A
(detoxified) of Pseudomonas aeruginosa. Each 0.4 ml dose of R32
Tox-A contained 320 µg of the R32LR-Toxin-A conjugate (molar
ratio 6.6:1), adsorbed to aluminium hydroxide (0.4 % w/v), with
merthiolate (0.01 %) as a preservative. The Thai test was based
on the fact that specific humoral immune responses to sporozoites
are stimulated by natural infection and are directed
predominantly against the central repeat region of the major
surface molecule, the circumsporozoite (CS) protein. Monoclonal
CS antibodies given prior to sporozoite challenge have achieved
passive protection in animals.
Immunization with irradiated sporozoites has produced protection
associated with the development of high levels of polyclonal
CS antibodies which have been shown to inhibit sporozoite
invasion of human hepatoma cells. Despite such encouraging animal
and in vitro data, evidence linking protective immunity in humans
to levels of CS antibody elicited by natural infection has been
inconclusive, possibly because of the short serum half-life of
the antibodies. This study involved the volunteering of
199 Thai soldiers. X percentage of these were vaccinated using
R32 Tox -A prepared in the way previously mentioned and as
mentioned before this was done to evaluate its safety,
immunogenicity and efficacy. This was done in a double-blind
manner: all of the 199 volunteers received either R32 Tox-A or a
control vaccine (tetanus/diphtheria toxoids, 10 and 1 Lf units
respectively) at 0, 8 and 16 weeks. Immunisation was performed in
a malaria non-transmission area, after completion of which
volunteers were deployed to an endemic border area and monitored
closely to allow early detection and treatment of infection. The
vaccine was found to be safe and to elicit an antibody response in
all vaccinees. Peak CS antibody (IgG) concentrations in malaria-
experienced vaccinees exceeded those in malaria-naïve vaccinees
(mean 40.6 versus 16.1 µg ml-1; p = 0.005) as well as those
induced by previous CS protein derived vaccines and observed in
association with natural infections. A log rank comparison of
time to falciparum malaria revealed no differences between
vaccinated and non-vaccinated subjects. Secondary analyses
revealed that CS antibody levels were lower in vaccinee malaria
cases than in non-cases, 3 and 5 months after the third dose of
vaccine. Because antibody levels had fallen substantially before
peak malaria transmission occurred, the question of whether or
not high levels of CS antibody are protective still remains
open. So at the end we are once again left without conclusive
evidence, but are now even closer to creating the sought after
malaria vaccine. Finally we reach the last and by far the most
promising, prevalent and controversial candidate vaccine. This I
found continually mentioned throughout several scientific
magazines. "Science" (Jan 95) and "Vaccine" (95) were two which
had unbiased reviews and so the following information is taken
from these. The vaccine to which I am referring is the SPf66
vaccine. This vaccine has caused much controversy and
raised certain dilemmas. It was invented by a Colombian physician
and chemist called Manuel Elkin Patarroyo and it is the first of
its kind. His vaccine could prove to be one of the few effective
weapons against malaria, but has run into a lot of criticism and
has split the malaria research community. Some see it as an
effective vaccine that has proven itself in various tests
whereas others view it as of marginal significance and say more
study needs to be done before a decision can be reached on
its widespread use. Recent trials have shown some promise. One
trial carried out by Patarroyo and his group in Colombia during
1990 and 1991 showed that the vaccine cut malaria episodes by over
39% and first episodes by 34%. Another trial which was completed in
1994 on Tanzanian children showed that it cut the incidence of
first episodes by 31%. It is these results that have caused the
rift within research areas. Over the past 20 years, vaccine
researchers have concentrated mainly on the early stages of the
parasite after it enters the body in an attempt to block
infection at the outset (as mentioned earlier).
Patarroyo however, took a more complex approach. He spent his
time designing a vaccine against the more complex blood stage of
the parasite - stopping the disease not the infection. His
decision to try and create synthetic peptides raised much
interest. At the time peptides were thought capable of
stimulating only one part of the immune system; the antibody
producing B cells whereas the prevailing wisdom required T cells
as well in order to achieve protective immunity. Sceptics also
pounced on the elaborate and painstaking process of elimination
Patarroyo used to find the right peptides. He took 22
"immunologically interesting" proteins from the malaria
parasite, which he identified using antibodies from people
immune to malaria, and injected these antigens into monkeys and
eventually found four that provided some immunity to malaria. He
then sequenced these four antigens and reconstructed dozens of
short fragments of them. Again using monkeys (more than a
thousand) he tested these peptides individually and in
combination until he hit on what he considered to be the jackpot
vaccine. But the WHO considers a 31% rate to be in the grey area and so
there is still no decision on its use.
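As a rough illustration of how efficacy figures like the 31% and
34% quoted above are derived, protective efficacy is simply one
minus the ratio of attack rates in the vaccinated and control
groups. The attack rates in the Python sketch below are invented
for the example, not data from the SPf66 trials.

def vaccine_efficacy(attack_rate_vaccinated, attack_rate_control):
    # Protective efficacy = 1 - (attack rate in vaccinees /
    # attack rate in controls).
    return 1.0 - attack_rate_vaccinated / attack_rate_control

# Invented attack rates (first episodes per child per season), chosen
# only to show how a figure like "31%" falls out of the arithmetic.
print(round(vaccine_efficacy(0.29, 0.42), 2))   # about 0.31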
In conclusion, it is obvious that malaria is proving a difficult
disease to establish an effective and cheap vaccine for, in that
some tests are inconclusive and others, while they seem to work, do
not reach a high enough standard. But having said that, I hope
that a viable vaccine will present itself in the near future
(with a little help from the scientific world of course).
Word Count: 2,223
f:\12000 essays\sciences (985)\Genetics\MECHANICAL ENERGY.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
MECHANICAL ENERGY Have you ever wondered how a jet aircraft lifts its tremendous weight off the ground, or what gives a runner the stamina to reach the finish line in a race? In order to answer all these questions we must talk about the transformation of one sort of energy into another. The jet aircraft gets its power from jet turbines. These powerful jet engines create a high-pressure stream of very hot gases that push the aircraft forward as they leave the engine. This is an example of heat being transformed into movement. This is sometimes described as Mechanical Energy. However, this transformation could not take place without the fuel that the aircraft gets within its wings or fuselage. Fuel is considered a chemical energy. This diagram shows how the jet engine acts as energy to lift the aircraft off the surface of earth. Fuel can take the form of gases, solids or liquids. When fuels combine with oxygen from the air, they release their stored energy as heat. We recognize this process as burning. The individual relies on food for fuel which contains energy-giving substances that our bodies can store until we need this energy to use our muscles. When we do use our muscles within us, we may not always be sure that heat is given off. Our bodies do not burst into flames but the perspiration on our skin is a clue to what is happening. The movement of the windsurfer has a different explanation. The windsurfer is propelled along by a sail which collects mechanical energy from the winds that sweep along the water. This energy has been produced by the sun which warms the earth's surface and sets the air above in motion. The sun's heat comes to the earth as a form of radiant energy. When the heat reaches the surface of the earth, it causes the land or seas to rise in temperature. The sun is very hot. Infact, the center of the sun can reach temperatures of up to 27 million degrees Fahrenheit. This is because of another kind of energy reaction where new substances are continually being created as others are being destroyed. This reaction is known to us as the Nuclear Reaction. Today we are trying to imitate this reaction in improving our energy supply. Scientists have calculated that the sun has enough fuel to go on producing energy at its present rate for about five billion years. On earth man-made nuclear reactions are used to produce a form of power we know as electricity. Electricity can be transformed into other kinds of energy such as heat, light and radio waves. Humans have also used the idea of nuclear reactions as a type of weapon. We call this powerful weapon the Atomic Bomb. Electrical energy can also be used to produce laser beams. This involves energy being concentrated to a specific narrow point where the impact of so much power creates heat able to cut through metals. Bibliography Discovering Energy, Frazer, Frank Trewin Copplestone Books Ltd, 1981. Encyclopedia Britannica, Vol. 6 Encyclopedia Britannica, 1979.
f:\12000 essays\sciences (985)\Genetics\Microsurgery.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
============================================================ Medicine: "Microsurgery: Sew Small" Uploaded: March 1, 1987 ------------------------------------------------------------ A man came into the emergency ward at one o'clock. His thumb came in an hour later. The surgeon's job: get them back together. The successful re-attaching of fingers to hand requires long hours of painstaking work in microsurgery. In the operating room , the surgeon doesn't stand, but sits in a chair that supports her body. Her arm is cradled by a pillow. Scalpels are present as are other standard surgical tools, but the suture threads are almost invisible, the needle thinner than a human hair. And all the surgical activity revolves around the most important instument, the microscope. The surgeon will spend the next few hours looking through the microscope at broken blood vessels and nerves and sewing them back together again. The needles are so thin that they have to be held with needlenosed jeweller's forceps and will sew together nerves that are as wide as the thickness of a penny. To make such a stitch, the surgeon's hands will move no more than the width of the folded side of a piece of paper seen end on! Imagine trying to sew two pieces of spaghetti together and you'll have some idea of what microsurgery involves. Twenty-five years ago, this man's thumb would have been lost. But in the 1960s, surgeon's began using microscopes to sew what previously had been almost invisible blood vessels and nerves in limbs. Their sewing technique had been developed on large blood vessels over a half century earlier but could not be used in microsurgery until the needles and sutures became small enough. The surgical technique, still widely used today, had taken the frustrating unreliability out of sewing slippery, round-ended blood vessels by ingeniously turning them into triangles. To do this, a cut end of a blood vessel was stitched at three equidistant points and pulled slightly apart to give an anchored, triangular shape. This now lent itself to easier, more dependable stitching and paved the way for microsurgery where as many as twenty stitches will have to be made in a blood vessel three millimetres thick. The needle used for this can be just 70 millimetres wide, only ten times the width of a human blood cell. All this technology is focused on getting body parts back together again successfully. The more blood vessels reattached, the better the survival chances for a toe or a finger. The finer the nerve resection, the better the feeling in a damaged part of the face, or control in a previously useless arm. But the wounded and severed body part must be treated carefully. If a small part of the body, such as a finger is cut off, instead of torn, wrapped in a clean covering, put on ice and then reattached within a few hours, the chance of success is over ninety percent, as long as one good artery and one good vein can be reattached. Not only is micro surgery allowing body parts to be reattached, it's also allowing them to be reshuffled. Before 1969, nothing could be done for you if you'd had your thumb smashed beyond repair. But in the past 14 years, you would have been in luck, if your feet were intact. Every year in North America, hundreds of big toes are removed from feet and grafted onto hands. Sometimes tendons are shifted from less important neighbouring fingers to allow the thumb to work better in its unique role of opposing the other fingers and allowing us to grip. 
While we in North America can live without our big toes and never really miss them, people in Japan can't. They need their big toes to keep the common footwear, the clog, on their feet. So their second toe is taken instead. Farmers, labourers car accident victims and home handymen are the people most often helped by microsurgery replants. And because blood vessels are being reattached, burn victims can now benefit. Flaps of their healthy skin are laboriously reattached more successfully, blood vessel by blood vessel, to increase chances that the graft will take. Some women, whose diseased Fallopian tubes have become blocked, can have them reopened microsurgically. When a cancerous esophagus must be removed, it can be replaced using a section of the person's own bowel. These people can then lead a more normal life, using their mouth to eat with instead of inserting food though a feeding tube in their stomach. Doctors have been able to rebuild an entire lower face by sculpting the lower jaw from living hip bone and covering it with the skin from that piece of bone. In all, over seventy parts of your body can be used as donor backups and recycled into other damaged sites. And because your body won't reject your own tissue - a constant hazard in transplants - in this case, you are your own best friend. In everyday use, however, microsurgery is proving to be a miracle worker, large and small. We take for granted, for instance, all the complex nerve and muscle control that goes into a simple a gesture as smiling. But one young woman couldn't. An accident left her with a face that was damaged and unable to smile. Microsurgery reconnected severed nerves, giving muscle control back to her face, restoring her looks and giving her something to smile about.
f:\12000 essays\sciences (985)\Genetics\Microwave.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You might remember the heroic role that newly-invented radar played in the Second World War. People hailed it then as "Our Miracle Ally". But even in its earliest years, as it was helping win the war, radar proved to be more than an expert enemy locator. Radar technicians, doodling away in their idle moments, found that they could focus a radar beam on a marshmallow and toast it. They also popped popcorn with it. Such was the beginning of microwave cooking. The very same energy that warned the British of the German Luftwaffe invasion and that policemen employ to pinch speeding motorists, is what many of us now have in our kitchens. It's the same as what carries long distance phone calls and cablevision. Hitler's army had its own version of radar, using radio waves. But the trouble with radio waves is that their long wavelength requires a large, cumbersome antenna to focus them into a narrow radar beam. The British showed that microwaves, with their short wavelength, could be focussed ina narrow beam with an antenna many times smaller. This enabled them to make more effective use of radar since an antenna could be carried on aircraft, ships and mobile ground stations. This characteristic of microwaves, the efficiency with which they are concentrated in a narrow beam, is one reason why they can be used in cooking. You can produce a high-powered microwave beam in a small oven, but you can't do the same with radio waves, which are simply too long. Microwaves and their Use The idea of cooking with radiation may seem like a fairly new one, but in fact it reaches back thousands of years. Ever since mastering fire, man has cooked with infrared radiation, a close kin of the microwave. Infrared rays are what give you that warm glow when you put your hand near a room radiator or a hotplate or a campfire. Infrared rays, flowing from the sun and striking the atmosphere, make the Earth warm and habitable. In a conventional gas or electric oven, infrared waves pour off the hot elements or burners and are converted to heat when they strike air inside and the food. Microwaves and infrared rays are related in that both are forms of electromagnetic energy. Both consist of electric and magnetic fields that rise and fall like waves on an ocean. Silently, invisibly and at the speed of light, they travel through space and matter. There are many forms of electromagnetic energy (see diagram). Ordinary light from the sun is one, and the only one you can actually see. X-rays are another. Each kind, moving at a separate wavelength, has a unique effect on any matter it touches. When you lie out in the summer sun, for example, it's the infrared rays that bring warmth, but ultraviolet radiation that tans your skin. If the Earth's protective atmosphere weren't there, intense cosmic radiation from space would kill you. So why do microwaves cook faster than infrared rays? Well, suppose you're roasting a chicken in a radar range. What happens is that when you switch on the microwaves, they're absorbed only by water molecules in the chicken. Water is what chemists call a polar molecule. It has a slightly positive charge at one end and a slightly negative charge at the opposite end. This peculiar orientation provides a sort of handle for the microwaves to grab onto. The microwaves agitate the water molecules billions of times a second, and this rapid movement generates heat and cooks the food. Since microwaves agitate only water molecules, they pass through all other molecules and penetrate deep into the chicken. 
They reach right inside the food. Ordinary ovens, by contrast, fail to have the same penetrating power because their infrared waves agitate all molecules. Most of the infarred radiation is spent heating the air inside the oven, and any remaining rays are absorbed by the outer layer of the chicken. Food cooks in an ordinary oven as the heat from the air and the outer layer of the food slowly seeps down to the inner layers. In short, oven microwaves cook the outside of the chicken at the same time as they cook the inside. Infrared energy cook from the outside in - a slower process. This explains why preheating is necessary in a conventional oven. The air inside must be lifted to a certain temperature by the infrared rays before it can heat the food properly.. It also explains why infrared ovens brown food and microwave ovens don't. Bread turns crusty and chicken crispy in a infrared oven simply because their outside gets much hotter than their interior. Finally, as anyone who owns a microwave oven knows, you never put an empty container inside a radar range. Since nonpolar materials such as plastic and glass don't warm up in the presence of microwaves, there will be nothing in the oven to absorb the radiation. Instead, it will bounce back and forth against the walls of the oven, creating an electrical arc that may burn a hole in the oven. This hushed energy, electromagnetic radiation, flows all around us. All forms of matter, even your own body, produce electromagnetism -- microwaves, x-rays, untraviolet rays. It may interest you to know that whereas the human eye is sensitive to light radiation, the eye of the snake can sense infrared. Your body emits infrared radiation day and night, so snakes can see you even when you can't see them. Though weak microwaves exist naturally, scientists didn't invent devices that harnass them for useful purposes until the 1930s. In a radar range, the device from which microwaves emanate is a small vacuum tube, called a magnetron. A magnetron takes electrical energy from an ordinary household outlet and uses it to push electrons in its core so that they oscillate fast enough to give off microwaves. These are then relayed by a small antenna to a hollow tube, called a waveguide, which channels the microwaves to a fanlike stirrer that scatters them around the oven's interior. They bounce off the oven walls and are absorbed by water molecules in the food. The U.S. Environmental Protection Agency estimates that our exposure to electromagnetic radiation increases by several percent a year. Look around you. The modern landscape fairly bristles with microwave dishes and antennae. Here again, in telecommuncations, it is the convenience with which microwaves can be focused in a narrow beam, that makes them so useful. Microwave dishes can be hundreds of times smaller than radio wave dishes. Industry employs microwaves heat in many ways -- to dry paints, bond plywood, roast coffee beans, kill weeds and insects, and cure rubber. Microwaves trigger garage door openers and burglar alarms. The new cellular car phone is a microwave instrument. Microwaves and Your Body Not surprisingly, as high-powered microwaves have proliferated in the atmosphere and the workplace, a passionate debate has grown over the pontential danger they pose to human health. But that is a topic for another article. For the moment, scientists at the University of Guelph have recently reported using microwaves to raise chickens. 
Housed in a large oven-like enclosure, young chicks keep warm under a slow drizzle of radiation. So far, the chicks seem to like their home in the range. They've even learned to turn on the microwaves whenever they feel cold. A similar scheme for heating human beings has actually been proposed by a scientist from Harvard University. Equipping buildings with microwave radiators would cut energy costs, he says, since microwaves heat people and not the surrounding air. Just set the thermostat dial to rare, medium or well done! Some researchers are concerned that people who work with microwave equipment are absorbing low levels of radiation that may prove harmful over the long term. One line of experiments has shown that uncoiled DNA molecules in a test tube can absorb microwave energy. The unravelled DNA chains resonate to the microwaves in the same way that a violin string vibrates when plucked. The question this raises is this: does microwave radiation vibrate coiled DNA in the human body, and if so, is this vibration strong enough to knock off vital molecules from the chain?
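A quick calculation makes concrete the point made earlier about wavelength and antenna size. The Python sketch below uses an assumed AM radio frequency of 1 MHz and the 2.45 GHz commonly used for domestic microwave ovens (that figure is an assumption here, not something stated in the article); the ratio of the two wavelengths shows why microwave dishes can be hundreds of times smaller than radio-wave antennas, and the oven frequency itself is the "billions of times a second" at which the water molecules are agitated.

# Wavelength = speed of light / frequency; both frequencies are assumed
# example values, not figures taken from the article.
c = 3.0e8           # speed of light, m/s
radio_f = 1.0e6     # an assumed AM radio frequency, 1 MHz
oven_f = 2.45e9     # a commonly quoted domestic microwave-oven frequency

radio_wavelength = c / radio_f   # about 300 m
oven_wavelength = c / oven_f     # about 0.12 m
print(radio_wavelength, oven_wavelength, round(radio_wavelength / oven_wavelength))  # ratio ~2450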
f:\12000 essays\sciences (985)\Genetics\Morality and the Human Genome Project.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Morality and the Human Genome Project MWF 11:00 Bibliography Congress of the United States, Office of Technology Assessment, Mapping Our Genes: Genome Projects: How Big, How Fast?, Johns Hopkins University Press: Baltimore,1988. Gert, Bernard, Morality and the New Genetics: A Guide for Students and Health Care Providers, Jones and Bartlett: Sudbury, Massachusetts,1996. Lee, Thomas F., The Human Genome Project: Cracking the Genetic Code of Life, Plenum Press: New York, 1991. Murphy, Timothy F., and Lappe, Marc, ed., Justice and the Human Genome Project, University of California Press: Berkeley, 1994. Does the Human Genome Project affect the moral standards of society? Can the information produced by it become a beneficial asset or a moral evil? For example, in a genetic race or class distinction the use of the X chromosome markers can be used for the identification of a persons ethnicity or class (Murphy,34). A seemingly harmless collection of information from the advancement of the Human Genome Project. But, lets assume this information is used to explore ways to deny entry into countries, determine social class, or even who gets preferential treatment. Can the outcome of this information effect the moral standards of a society? The answers to the above and many other questions are relative to the issues facing the Human Genome Project. To better understand these topics a careful dissection of the terminology must be made. Websters Dictionary defines morality as ethics, upright conduct, conduct or attitude judged from the moral standpoint. It also defines a moral as concerned with right and wrong and the distinctions between them. A Genome is "the total of an individuals genetic material," including, "that part of the cell that controls heredity" (Lee,4). Subsequently, "reasearch and technology efforts aimed at mapping and sequencing large portions or entire genomes are called genome projects" (Congress,4). Genome projects are not a single organizations efforts, but instead a group of organizations working in government and private industry through out the world. Furthermore, the controversies surrounding the Human Genome Project can be better explained by the past events leading to the project, the structure of the project, and the moral discussion of the project. The major events of genetic history are important to the Human Genome Project because the structure and most of the project deals with genetics. Genetics is the study of the patterns of inheritance of specific traits (Congress,202). The basic beginnings of genetic history lay in the ancient techniques of selective breeding to yield special characteristics in later generations. This was and still is a form of genetic manipulation by "employing appropriate selection for physical and behavioral traits" (Gert,2). Futheralong, the work of Gregor Mendel, an Austrian monk, on garden peas established the quantitative discipline of genetics. Mendel's work explained the inheritance of traits can be stated by factors passed from one generation to the next; a gene. The complete set of genes for an organism is called it's genome (Congress,3). These traits can be explained due to the inheritance of single or multiple genes affected by factors in the environment (3). Mendel also correctly stated that two copies of every factor exists and that one factor of inheritance could be dominate over another (Gert,3).The next major events of genetic history involved DNA (deoxyribonucleic acid). 
DNA, as a part of genes, was discovered to be a double helix that encodes the blueprints for all living things (Congress,3). DNA was found to be packed into chromosomes, of which 23 pairs existed in each cell of the human body. Furthermore, one chromosome of each pair is donated from each parent. DNA was also found to be made of nucleotide chains made of four bases, commonly represented by A, C, T, and G. Any ordered pair of bases makes a sequence. These sequences are the instructions that produce molecules, proteins, for cellular structure and biochemical functions. In relation, a marker is any location on a chromosome where inheritance can be identified and tracked (202). Markers can be expressed areas of genes (DNA) or some segment of DNA with no known coding function but an inheritance could be traced (3). It is these markers that are used to do genetic mapping. By the use of genetic mapping isolated areas of DNA are used to find if a person has a specific trait, inherent factor, or any other numerous genetic information. In conclusion, the genetic history of ancient selective breeding to Mendel's garden peas to the current isolation of genes has been reached only through collaborative data of many organizations and scientist. The Human Genome Project has several objectives. To better understand the moral issues that exist the project itself must be examined. Among the many objectives, DNA databases that include sequences, location markers, genes, and the function of similar genes (Congress,7). The creation of human chromosome maps for DNA markers that would allow the location of genes to be found. A repository of research materials including ordered sets of DNA fragments representing the complete DNA in chromosomes. New instruments for analysis of DNA. New methods of analysis of DNA through chemical, physical, and computational methods. Develop similar research technologies for other organisms. Finally, to determine the DNA sequence of a large fraction of the human genome and other organisms. The objectives of the Human Genome Project are carried out by organizations such as the Department of Energy, National Institutes of Health, Howard Hughes Medical Institute, and various private organizations. These organizations all have two shared features, placing "new methods and instruments into toolkit of molecular biology" and "build reasearch infrastructure for genetics." Making the directives of the Human Genome Project apparent is important in making a moral judgment on this genetic technology. Any attempt to resolve moral issues involving new information from the Human Genome Project requires direct, clear, and total understanding of common morality. Subsequently, a moral theory is the attempt to explain, justify, and make visible "the moral system that people use in making their moral judgments and how to act when confronting a moral problem" (Gert,31). This theory is based on rational decisions. With this in mind, the moral system must be known by everyone who is judged by it. This leads to the rational statement that "morality must be a public system" (33). The individuals of the public system must know what morality requires of them, and the judgments and guidelines made must be rational to them. Just like any game, the players play by a set of rules and these rules dictate how play is done. The game is played only when everyone knows how to play. When rules are broken penalties are inforced by the other players judgment according to the rules allowed. 
However, if everyone agrees to change the rules then the game continues without any penalties. Therefore, "the goal of common morality is to lessen the amount of harm suffered by those protected by it" and it is constrained by the knowledge and need to be understood by all it applies to (47). Justified violations also exist in common morality. Just like in the game, a change in the rules causes acceptance, morality can be viewed not as an evil by the public perception but as a decision backed by common morals. Based on the pattern of common morality the issues of genetic race or class distinction or any other controversies involving the Human Genome Project can be put to a set of common moral standards. Just like the moral standard that says killing is wrong but killing is justifiable in self-defense, the Human Genome Project can be argued along the same pattern of moral discussion. The justifiable violations that genetic information is based on depends on the common morality which is based on the public system which is based on the decisions of right and wrong. In conclusion, the moral dilemma of genetics is that will it be an asset to the individuals public perception of common morality or will it be an evil to the individuals public perception of common morality based on the right and wrong of the information. This answer is based on the societies structure. In one time period it may be accepted in another in may not.
f:\12000 essays\sciences (985)\Genetics\Nanotechnology.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Curtis Shephard
Nanotechnology: Immortality or total annihilation?
Technology has evolved from ideals once seen as unbelievable to common everyday instruments.
Computers that used to occupy an entire room are now the size of notebooks. The human race has always
pushed for technological advances working at the most efficient level, perhaps, the molecular level. The
developments and progress in artificial intelligence and molecular technology have spawned a new form
of technology; Nanotechnology. Nanotechnology could give the human race eternal life, or it could cause
total annihilation.
The idea of nanotech was conceived by a man named K. Eric Drexler (Stix 94), who defines it
as "Technology based on the manipulation of individual atoms and molecules to build structures to
complex atomic specifications (Drexler, "Engines" 288)." The technology which Drexler speaks of will be
undoubtedly small; in fact, nanostructures will measure only about 100 nanometers, a nanometer being a billionth of a meter
(Stix 94).
Being as small as they are, nanostructures require fine particles that can only be seen with the
STM, or Scanning Tunneling Microscope (Dowie 4). Moreover the STM allows the scientists to not only
see things at the molecular level, but it can pick up and move atoms as well (Port 128). Unfortunately the
one device that is giving nanoscientists something to work with is also one of the many obstacles
restricting the development of nanotech. The STM has been regarded as too big to ever produce nanotech
structures (Port 128). Other scientists have stated that the manipulation of atoms, which nanotech relies
on, ignores atomic reality. Atoms simply don't fit together in ways which nanotech intends to use them
(Garfinkel 105). The problems plaguing the progress of nanotech has raised many questions among the
scientific community concerning it's validity. The moving of atoms, the gathering of information, the
restrictions of the STM, all restrict nanotech progress. And until these questions are answered, nanotech
is regarded as silly (Stix 98).
But the nanotech optimists are still out there. They contend that the progress made by a team at
IBM that was able to write letters and draw pictures atom by atom marked the birth of nanotech
(Darling 49). These same people answer the scientific questions by replying that a breakthrough is not
needed, rather the science gained must be applied (DuCharme 33). In fact, Drexler argues that the
machines exist, trends are simply working on building better ones ("Unbounding" 24). Drexler continues
by stating that the machines he spoke about in "Engines of Creation" published in 1986 should be
developed early in the 21st century ("Unbounding" 116).
However many scientists still argue that because nanotech has produced absolutely nothing
physical, it should be regarded as science fiction (Garfinkel 111). Secondly, nano-doubters rely on
scientific fact to condemn nanotech. For example it is argued that we are very far away from ever seeing
nanotech due to the fact that when atoms get warm they have a tendency to bounce around. As a result
the bouncing atoms collide with other materials and mess up the entire structure (Davidson A1). Taken in
hand with the movement of electron charges, many regard nanotech as impossible (Garfinkel 106). But
this is not the entirety of the obstacles confining nanotech development. One major set-back is the fact
that the nanostructures are too small to reflect light in a visible way, making them practically invisible
(Garfinkel 104).
Nevertheless, Nanotech engineers remain hopeful and argue that; "With adequate funding,
researchers will soon be able to custom build simple molecules that can store and process information and
manipulate or fabricate other molecules, including more of themselves. This may occur before the turn of
the century."(Roland 30) There are other developments also, that are pushing nanotech in the right
direction for as Lipkin pointed out recent developments have lead to possibilities of computers thinking in
3-D (5). Which is a big step towards the processing of information that nanotech requires. Although
there are still unanswered questions from some of the scientific community, researchers believe that they
are moving forward and will one day be able to produce nanomachines.
One such machine is known as a replicator. A replicator, as its name implies, will replicate;
much like the way in which genes are able to replicate themselves (Drexler, "Engines" 23). It is also
believed that once a replicator has made a copy of itself, it will also be able to arrange atoms to build
entirely new materials and structures (Dowie 5).
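The power of such self-copying is just doubling arithmetic: each generation, every replicator can build one
more of itself. The Python sketch below is purely illustrative; the starting count and the notion of a fixed
"generation" are assumptions for the example, not figures from Drexler or the other sources cited.

# Idealised replicator growth: the population doubles every generation.
def replicators_after(generations, start=1):
    return start * 2 ** generations

for g in (10, 20, 30):                # illustrative generation counts
    print(g, replicators_after(g))    # 1024; ~1 million; ~1 billion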
Another perceived nanomachine is the assembler. The assembler is a small machine that will
take in raw materials, follow a set of specific instructions, re-arrange the atoms, and result in an
altogether new product (Darling 53). Hence, one could make diamonds simply by giving some assemblers
a lump of coal. Drexler states that the assemblers will be the most beneficial nanites for they will build
structures atom by atom ("Engines" 12). Along with the assemblers comes its opposite, the disassembler.
The disassembler is very similar to the assemblers, except it works backwards. It is believed that these
nanites will allow scientists to analyze materials by breaking them down, atom by atom (Drexler,
"Engines" 19). As a result of the enhanced production effects of assemblers Drexler believes that they will
be able to shrink computers and improve their operation, giving us nanocomputers. These machines will
be able to do all things that current computers can do, but at a much more efficient level.
Once these nanomachines are complete they will be able to grasp molecules, bond them together,
and eventually result in a larger, new structure (Drexler, "Engines" 13). Through this and similar
processes the possibilities of nanotech are endless. It is believed that nanites could build robots, shrunken
versions of mills, rocket ships, microscopic submarines that patrol the bloodstream, and more of
themselves (Stix 94). Hence, there is no limit to what nanotech can do; it could arrange circuits and build
super-computers, or give eternal life (Stix 97). Overall Drexler contends; "Advances in the technologies
of medicine, space, computation, and production-and warfare all depend on our ability to arrange atoms.
With assemblers, we will be able to remake our world, or destroy it" ("Engines" 14).
More specifically, consider the impacts nanotechnology could have on the area of production.
Nanotechnology could greatly increase our means of production. Nanites have the ability to convert bulks
of raw materials into manufactured goods by arranging atoms (DuCharme 58). Because of this increased
efficiency, DuCharme believes that manufacturing at the molecular level will become the norm for
producing goods (34). Thus, nanotech could eliminate the need for production conditions that are harmful
or difficult to maintain (Roland 31). Moreover, the impact that nanotech will have on production could
lead to a never-before-seen abundance of goods, with significantly lower costs and labor. Everyone would
be able to use nanotech as a tool for increased efficiency in the area of production (DuCharme 60). The
overall effects of nanotech on producing materials were best summed up by Dowie: "This new revolution
won't require crushing, boiling, melting, etc. Goods would now be built from the atom up by
nanomachines" (4).
Nanotech will also be able to benefit us in other ways. One great advantage of nanotech will be
the improvements it could lend to medicine. With the production of microscopic submarines, this branch
of nanotech could be the most appealing. These nanites would be able to patrol the bloodstream, sensing
friendly chemicals and converting bad ones into harmless waste (Darling 7). But nanites could do even
more than this: they could also repair damaged DNA and hunt down cancer (Port 128). Thus, nanites
would be able to cure many illnesses and repair DNA. Moreover, nanites could remove the need to keep
animals for human use; they could simply produce the food inside your body (Darling 59). As a result of
nanites floating through your body and attacking harmful substances such as cholesterol, people could live
indefinitely - perhaps a millennium (Davidson A1).
This idea opens up another door in the field of nanotech research, dealing with the potential for
immortality. Aside from providing eternal life through fixing DNA and curing illnesses, nanotech could be
used with cryogenics in providing never-ending life. The current problem with cryogenics is that after a
person is frozen, the cells in the body expand and burst. Nanites could solve this problem by finding and
replacing the broken cells (DuCharme 152). Moreover, nanites wouldn't even require the entire frozen
body. They could simply replicate the DNA in a frozen head and then produce a whole new person
(DuCharme 155).
However, this poses a potential problem of its own: overpopulation and its strain on the environment.
DuCharme contends that this should not be a concern, for a high standard of living will keep the
population from growing (61). Even if the population were to increase, nanotech will have produced the
energy to allow us to live in currently uninhabitable areas of the earth (DuCharme 63). Nanites will allow
people to live not only on land, but on the sea, under the sea, underground, and in space, thanks to
increased flight capabilities (DuCharme 64). Hence, the human race will have nearly infinite space for
living. Also, nanites would reduce the toxins produced by cars by making cheap electric cars possible, and
disassemblers could be used to clean up waste dumps (DuCharme 68). The benefits of nanotech are
countless; it could be used to do anything from spying to mowing the lawn (Davidson A1). However, with
the good comes the bad. Nanotech could also bring some distinct disadvantages.
One scenario which illustrates the danger of nanotech is referred to as the gray goo problem.
Gray goo refers to a scenario in which billions of nanites band together and eat everything they come into
contact with (Davidson A1). However, Davidson only touches the tip of the iceberg when it comes to the
deadliness of gray goo. Roland better illustrates this hazard's threat: "Nanotechnology could spawn a new
form of life that would overwhelm all other life on earth, replacing it with a swarm of nanomachines.
This is sometimes called the 'gray goo' scenario. It could take the form of a new disease organism, which
might wipe out whole species, including Homo Sapiens" (32). Simply put, the nanites would replicate too
quickly and destroy everything, including the human race (Stix 95). Moreover, the rapid replication rate
that nanotech is capable of could allow it to out-produce real organisms and turn the biosphere to dust
(Drexler, "Engines" 172). However, death is only one of the dangers of gray goo. If controlled by the
wrong people, nanites could be used to alter or destroy those persons' enemies (Roland 32). But gray goo
is only one of the many potential harms of nanotech.
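As a rough illustration of the arithmetic behind this fear (the starting mass, doubling time, and
biomass figure below are hypothetical, order-of-magnitude assumptions of my own, not values taken from
Drexler, Stix, or Roland), a short Python sketch shows how quickly a mass that doubles every cycle would
overtake a planet-sized target:

import math

def doublings_to_reach(target_mass_kg, start_mass_kg=1e-15):
    # Number of doublings for replicators of total mass start_mass_kg
    # to reach target_mass_kg, assuming the mass doubles each cycle.
    return math.ceil(math.log2(target_mass_kg / start_mass_kg))

EARTH_BIOMASS_KG = 1e15        # rough order of magnitude, for illustration only
DOUBLING_TIME_HOURS = 1.0      # hypothetical replication cycle

n = doublings_to_reach(EARTH_BIOMASS_KG)
print(f"{n} doublings, about {n * DOUBLING_TIME_HOURS:.0f} hours of unchecked replication")

Under these assumed figures the replicators need only about a hundred doublings, which is why even a
modest doubling time makes the scenario alarming.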
If so desired, nanotech could be used as a deadly weapon. Although microscopic robots don't
sound like a very effective weapon, Drexler states that they are more potent than nuclear weapons, and
much easier to obtain ("Engines" 174). Aside from being used as a weapon, nanites would also be able to
produce weapons quickly and inexpensively. In fact, with the ability to separate isotopes and atoms, one
would be able to extract fissionable uranium-235 or plutonium-239. With these elements, a person has the
key ingredients for a nuclear bomb (Roland 34). Because of the lethality of nano-weapons, the first to
develop nanotech could use it to destroy its rivals, and new methods of domination, greater and more
dangerous than nuclear weapons, will exist (Roland 33). This, along with simple errors such as a nanite
receiving the wrong instructions, points toward nanotech doing more harm than good (Darling 56).
Moreover, the threats from nanotech could be a potential cause of extinction (Drexler, "Engines"
174). Drexler continues by saying that unless precautions are taken, nanotech could lead to complete
annihilation ("Engines" 23).
However, if nanotech does not lead to extinction, it could be used to increase the power of states
and individuals. Bacon believes that only the most elite individuals will receive benefits from nanotech.
Beyond that, it is perceived that advanced technology extends the possibilities of torture used by a state
(Drexler, "Engines" 176). States will become more powerful in other ways as well. With the increased
means of production, nanotech could remove the need for many, if not all, people (Drexler, "Engines"
176). This opens new doors for totalitarian states: they would no longer need to keep anyone alive, so
individuals would not be enslaved but killed (Drexler, "Engines" 176). It is perceived that these benefits
would remove all interdependence and destroy the quality of life itself (Roland 34).
In the end, nanotech could give us a lifestyle never before imagined. On the other hand, it could
destroy entire species. The effects and potential of nanotech are best summed up by its inventor,
Drexler: "Nanotechnology and artificial intelligence could bring the ultimate tools of destruction, but they
are not inherently destructive. With care, we can use them to build the ultimate tools of peace" ("Engines"
190). The question of how beneficial nanotech will prove to be can only be answered by time. Time will
tell whether developments and progress in artificial intelligence and molecular technology will eventually
produce true nanotechnology and, if they do, whether this branch of science will give us immortality or
total annihilation.
f:\12000 essays\sciences (985)\Genetics\Nuclear Power.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nuclear Power
Radioactive wastes must, for the protection of mankind,
be stored or disposed of in such a manner that isolation from
the biosphere is assured until they have decayed to
innocuous levels. If this is not done, the world could face
severe physical harm to the living species on this
planet.
Some atoms can disintegrate spontaneously. As they do,
they emit ionizing radiation. Atoms having this property are
called radioactive. By far the greatest number of uses for
radioactivity in Canada relate not to fission, but to
the decay of radioactive materials - radioisotopes. These
are unstable atoms that emit energy for a period of time
that varies with the isotope. During this active period,
while the atoms are 'decaying' to a stable state, their
energies can be used according to the kind of energy they
emit.
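As a rough illustration of what 'decaying to innocuous levels' means in practice (the isotope and
the threshold below are examples chosen for illustration, not values taken from this essay), a short Python
sketch of first-order decay follows; activity falls by half with every half-life.

import math

def activity(a0, t_years, half_life_years):
    # Remaining activity after t_years of first-order decay: A(t) = A0 * 0.5 ** (t / t_half)
    return a0 * 0.5 ** (t_years / half_life_years)

def years_to_fraction(fraction, half_life_years):
    # Years needed for activity to fall to the given fraction of its starting value.
    return half_life_years * math.log2(1.0 / fraction)

# Example: cesium-137 has a half-life of roughly 30 years.
print(activity(a0=1.0, t_years=300, half_life_years=30))     # about 0.001 of the initial activity
print(years_to_fraction(fraction=1e-6, half_life_years=30))  # about 600 years to reach one-millionth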
Since the mid-1900s, radioactive wastes have been
stored in various manners, but in recent years new
ways of disposing of and storing these wastes have been
developed so that they may no longer be harmful. A very
advantageous way of storing radioactive wastes is a
process called 'vitrification'.
Vitrification is a semi-continuous process that enables
the following operations to be carried out with the same
equipment: evaporation of the waste solution mixed with the
additives necessary for the production of borosilicate glass,
calcination, and elaboration of the glass. These operations
are carried out in a metallic pot that is heated in an
induction furnace. The vitrification of one load of wastes
comprises the following stages. The first step is
'Feeding'. In this step the vitrification pot receives a
constant flow of mixture of wastes and of additives until it
is 80% full of calcine. The feeding rate and heating power
are adjusted so that an aqueous phase of several litres is
permanently maintained at the surface of the pot. The second
step is 'Calcination and glass elaboration'. In this
step, when the pot is practically full of calcine, the
temperature is progressively increased up to 1100 to 1500 C
and then maintained for several hours so as to allow the
glass to elaborate. The third step is 'Glass casting'. The
glass is cast in a special container. The heating of the
output of the vitrification pot causes the glass plug to
melt, thus allowing the glass to flow into containers, which
are then transferred into storage. Although part of the
waste is transformed into a solid product, there is still
treatment of gaseous and liquid wastes. The gases that
escape from the pot during feeding and calcination are
collected and sent to ruthenium filters, condensers and
scrubbing columns. The ruthenium filters consist of a bed of
glass pellets coated with ferrous oxide and maintained at a
temperature of 500 C. In the treatment of liquid wastes, the
condensates collected contain about 15% ruthenium. This is
then concentrated in an evaporator, where nitric acid is
destroyed by formaldehyde so as to maintain low acidity. The
concentrate is then neutralized and enters the
vitrification pot.
Once the vitrification process is finished, the
containers are stored in a storage pit. This pit has been
designed so that the number of containers that may be stored
is equivalent to nine years of production. Powerful
ventilators provide air circulation to cool down the glass.
The glass produced has the advantage of being stored as
a solid rather than a liquid. The advantages of the solid are
that it has almost complete insolubility, chemical
inertness, absence of volatile products, and good radiation
resistance. The ruthenium that escapes is absorbed by a
filter. The amount of ruthenium likely to be released into
the environment is minimal.
Another method that is being used today to get rid of
radioactive waste is the 'placement and self-processing of
radioactive wastes in deep underground cavities'. This is
the disposal of toxic wastes by incorporating them into
molten silicate rock with low permeability. By this method,
liquid
wastes are injected into a deep underground cavity with
mineral treatment and allowed to self-boil. The resulting
steam is processed at ground level and recycled in a closed
system. When waste addition is terminated, the chimney is
allowed to boil dry. The heat generated by the radioactive
wastes then melts the surrounding rock, thus dissolving the
wastes. When waste and water addition stop, the cavity
temperature would rise to the melting point of the rock. As
the molten rock mass increases in size, so does the surface
area. This results in a higher rate of conductive heat loss
to the surrounding rock. Concurrently the heat production
rate of radioactivity diminishes because of decay. When the
heat loss rate exceeds that of input, the molten rock will
begin to cool and solidify. Finally the rock refreezes,
trapping the radioactivity in an insoluble rock matrix deep
underground. The heat surrounding the radioactivity would
prevent the intrusion of ground water. Once the steam
and vapour are no longer released, the outlet hole would be
sealed. To go a little deeper into this concept, the
treatment of the wastes before injection is very important.
To avoid breakdown of the rock that constitutes the
formation, the acidity of the wastes has to be reduced. It
has been established experimentally that pH values of 6.5 to
9.5 are the best for all receiving formations. With such a
pH range, breakdown of the formation rock and dissociation
of the formation water are avoided. The stability of waste
containing metal cations which become hydrolysed in acid can
be guaranteed only by complexing agents which form 'water-
soluble complexes' with cations in the relevant pH range.
The importance of complexing in the preparation of wastes
increases because raising of the waste solution pH to
neutrality, or slight alkalinity results in increased
sorption by the formation rock of radioisotopes present in
the form of free cations. The incorporation of such cations
causes a pronounced change in their distribution between the
liquid and solid phases and weakens the bonds between
isotopes and formation rock. Preparation of the
formation is equally important. To reduce the possibility
of chemical interaction between the waste and the formation,
the formation is first flushed with acid solutions. This
operation removes the principal minerals likely to become
involved in exchange reactions and the soluble rock
particles, thereby creating a porous zone capable of
accommodating the waste. In this case the required acidity
of the flushing solution is established experimentally,
while the required amount of radial dispersion is determined
using the formula:
R = Qt / (2mn)
where:
R is the waste dispersion radius (metres)
Q is the flow rate (m/day)
t is the solution pumping time (days)
m is the effective thickness of the formation (metres)
n is the effective porosity of the formation (%)
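As a rough illustration (not part of the original discussion), the short Python sketch below
evaluates the formula exactly as printed above; the input values are hypothetical, the units are taken as
given in the text, and treating the porosity percentage as a fraction is an assumption made for the example.

def dispersion_radius(q_flow, t_days, thickness_m, porosity_percent):
    # Evaluate R = Qt / (2mn) as printed; porosity is converted from % to a
    # fraction here, which is an assumption rather than something the text states.
    n_fraction = porosity_percent / 100.0
    return (q_flow * t_days) / (2.0 * thickness_m * n_fraction)

# Hypothetical example: 50 units/day flow, 30 days of pumping,
# 20 m effective thickness, 15% effective porosity.
r = dispersion_radius(q_flow=50.0, t_days=30.0, thickness_m=20.0, porosity_percent=15.0)
print(f"dispersion radius R = {r:.1f} (in the units implied by the inputs)")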
In this concept, storage and processing at the surface are
minimized; no surface storage of wastes is required.
The permanent binding of radioactive wastes in a rock matrix
gives assurance of their permanent elimination from the
environment.
This is a method of disposal safe from the effects of
earthquakes, floods, or sabotage.
With the development of new ion exchangers and the
advances made in ion technology, the field of application of
these materials in waste treatment continues to grow.
Decontamination factors achieved in ion exchange treatment
of waste solutions vary with the type and composition of the
waste stream, the radionuclides in the solution and the type
of exchanger.
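As a rough illustration of how decontamination factors are used (the definitions below are the
conventional ones, but every number is hypothetical and not taken from this essay), the short Python
sketch that follows computes the factor for a single treatment step and for several steps in series, whose
factors multiply.

def decontamination_factor(feed_activity, effluent_activity):
    # DF is conventionally the activity entering a step divided by the activity leaving it.
    return feed_activity / effluent_activity

def overall_df(stage_dfs):
    # For treatment stages operated in series, the overall DF is the product of the stage DFs.
    total = 1.0
    for df in stage_dfs:
        total *= df
    return total

# Hypothetical example: a cation column followed by a mixed bed column.
print(decontamination_factor(1.0e4, 2.0e1))   # single stage: DF = 500
print(overall_df([500.0, 40.0]))              # two stages in series: DF = 20000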
Waste solution to be processed by ion exchange should
have a low suspended solids concentration, less than 4 ppm,
since this material will interfere with the process by
coating the exchanger surface. Generally, the waste solutions
should contain less than 2500 mg/l total solids. Most of the
dissolved solids would be ionized and would compete with the
radionuclides for the exchange sites. Where the waste meets
these specifications, two principal
techniques are used: batch operation and column operation.
The batch operation consists of placing a given
quantity of waste solution and a predetermined amount of
exchanger in a vessel, mixing them well and permitting them
to stay in contact until equilibrium is reached. The
solution is then filtered. The extent of the exchange is
limited by the selectivity of the resin. Therefore, unless
the selectivity for the radioactive ion is very favourable,
the efficiency of removal will be low.
Column operation is essentially a large number of
batch operations in series, and it is generally the more
practical technique. In many waste solutions, the radioactive ions are
cations and a single column or series of columns of cation
exchanger will provide decontamination. High capacity
organic resins are often used because of their good flow
rate and rapid rate of exchange.
Monobed or mixed bed columns contain cation and anion
exchangers in the same vessel. Synthetic organic resins, of
the strong acid and strong base type are usually used.
During operation of mixed bed columns, cation and anion
exchangers are mixed to ensure that the acids formed after
contact with the H-form cation resin are immediately
neutralized by the OH-form anion resin. The monobed or mixed
bed systems are normally more economical for processing waste
solutions.
Against a background of growing concern over the exposure
of the population or any portion of it to any level of
radiation, however small, the methods which have been
successfully used in the past to dispose of radioactive
wastes must be reexamined. There are two commonly used
methods, the storage of highly active liquid wastes and the
disposal of low activity liquid wastes to a natural
environment: sea, river or ground. In the case of the
storage of highly active wastes, no absolute guarantee can
ever be given. This is because of a possible vessel
deterioration or catastrophe which would cause a release of
radioactivity. The only alternative to dilution and
dispersion is that of concentration and storage. This is
implied for the low activity wastes disposed into the
environment. The alternative may be to evaporate off the
bulk of the waste to obtain a small concentrated volume. The
aim is to develop more efficient types of evaporators. At
the same time the decontamination factors obtained in
evaporation must be high to ensure that the activity of the
condensate is negligible, though there remains the problem
of accidental dispersion. Much effort is currently under way
in many countries to establish ultimate disposal
methods. These are defined as methods that fix the fission
product activity in a non-leachable solid state, so that
general dispersion can never occur. The most promising
approaches in the near future are: 'absorption on
montmorillonite clay', which uses natural clays
that have a good capacity for chemical exchange of cations
and can store radioactive wastes; 'fused salt calcination',
which will neutralize the wastes; and 'high temperature
processing'. Even though man has made many breakthroughs in
the processing, storage and disintegration of radioactive
wastes, there is still much work ahead to render the wastes
absolutely harmless.
f:\12000 essays\sciences (985)\Genetics\Organism Adaptation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Organism Adaptation 5-4-1993 1)stimulus: a change in the environment that necessities a response, or adjustment by an organism (ex. swirling dust) response: the adjustment or change you make to a stimulus (ex. blinking your eyes) 2)Protists respond to a negative stimuli by moving away from it. Protists respond to: light, irritating chemicals, temperature, touch, etc. 3)Yes, they grow towards the stimulus (ex. light). photoropism: it means the organism grows towards the light. no geotropism: it means the organism grows towards the ground. no 4)This is because animals have the most highly developed sensory systems of all organisms. 5)Three factors that affect an organism's response are the type, number, and complexity of an animal's sense organs. The way they affect the response is determined by the type, number, and complexity of the animal's sense organs. 6)positive: food, money negative: a man pointing a gun at you neutral: sound of traffic 7)In general, organisms go towards positive stimuli, and go away from negative one. 8)voluntary: eating a bowl of hot chicken soup involuntary: watering of your mouth learned: talking 9)When an animal receives a scare, it can either Fight, Flight (go away from), Freeze the/from organism that is scaring that animal. The animal releases adrenaline that gives it the strength to do one of those things. pg. 136 #3,4,challenger) 3)automatic: i)blinking your eyes when dust gets in them ii)mouth waters when you smell food iii)moving your hand away when it gets burned voluntary: i)eat a bowl of soup ii)drink water iii)watching TV 4)The stimulus. You need the stimulus to make a response. b)No, it is not possible. This is because with an action, there is a reaction. No, you need a stimuli to make a response, otherwise it is not really a response. 5)i)it comes out of the ground ii)it crows iii)it barks and chases the perpetrator iv)it chases and eats a gazelle b)i)the flooding of its home ii)getting light iii)the person breaking in iv)its hunger Challenger It helps to keep the brain and heart from freezing. pg. 146 #1-5) 1)i)taste ii)touch iii)sight iv)smell v)hearing 2)The protists can only sense chemical. 3)This effect is called sensory adaptation. b)An advantage is that you aren't bothered by the smell. A disadvantage is if you are accustomed to the smell of smoke, the smell of smoke might not alert you if your house is on fire. b)cone: when it is light out rod: when it is dark out c)They aren't as developed as some other organisms. 5)Eyelid: this is because your hell cells are very tough from being walked on. This causes them not to be very sensitive. 5-6-1993 pg.13 #1-6) 1)environment: everything in an organism's surroundings biotic environment: all living things in an environment abiotic environment: non living things in an environment 2)When you breathe, your body extracts oxygen from the air. b) large animal eats smaller animal smaller animals larger animal dies and eats plants fertilizes ground soil grows plants 3)biology,ecology: they are the study of things on earth; ecology is the study of environment, biology is the study of animals b)producers,consumers: they live off the environment; pro. manufactures food, con. can't manufacture other food, but eat other organisms c)scavenger,decomposer: both live of off dead organisms; decom. break down the bodies of dead organisms d)habitat,niche: have to do with were an animal lives hab.=enviro. 
space were an organism lives, niche = way an organism reacts with its environment e)environment,ecosystem: were organisms live; enviro.= everything in an organism's surroundings, eco.= were organisms of a distinct group interact 4)a)auto b)hetro c) auto d)auto e)auto f)hetro 5)biosphere: layer of planet where living things exist and interact b)lithosphere: solid portion of the Earth's surface c)hydrosphere: layer of water that covers nearly 3/4 of the Earth's surface d)atmosphere: mass of air surrounding the Earth 6)The scavengers come and totally eat the carcass. The decomposers decompose the carcass and it fertilizes the ditch. pg. 18 #1-6) 1)herbivore: animals that consume only plant material (ex. cattle, sheep) trophic level: how directly a consumer interacts with the producers of its ecosystem food chain: a feeding sequence in which each kind of organism eats the one below it in the chain (ex. grass -> mouse -> wolf) 2)Because the producer provides the food for the consumers. 3)Herbivores, this is because you need the herbivores to feed the carnivores, and if there aren't enough herbivores, the carnivores will die out. b)Producers, this is because the producers feed the consumers, and consumers will die if there is not enough producers. 4)omnivores,carnivore: they both eat animals; omnivores also eat plants b)primary,tertiary: they both eat other organisms; primary eats at the first level, and tertiary eats at the third level c)food chain,food web: they describe feeding sequences; food chain goes from one level to the next, web is interconnecting 6)There are six food chains. There are more because the three overlap each other. b)grain, grass, berries c)deer, mouse, grasshopper, rabbit d)hawk, snake, owl, wolf wolf is the top carnivore pg.36 #1-8) 1)environment: everything in an organism's surroundings environmental interaction: interaction within the environment for food and shelter b)They relate to ecology because the purpose of ecology is to study the environment and environmental interaction. 2)pond water: abiotic: pond water is not alive b)plant seeds: biotic: seeds are alive because they have the c)ability to grow d)fossils: abiotic: this is because fossils are fossilized bones of e)dead animals f)soil: abiotic: soil is not alive g)soil organisms: biotic: this is because all organisms are living 3)autotroph heterotroph grass grasshopper, salmon seaweed grass snake, starfish b)producer consumer grass grasshopper, salmon seaweed grass snake, starfish c)The autotrophs were also the producers, and the heterotrophs were also the consumers. 4)Decomposers are the heterotrophs because they feed off of dead organisms and organism waste. b)Scavengers are consumers because they feed off of dead organisms. c)Because the scavengers and decomposers get rid of the waste and dead organisms. 5)A dead organism is a part of the abiotic environment because it no longer has life in it. b)First, scavengers come and eat the meat of the dead organism, then a decomposer carries out chemical decomposition. Large, complex molecules of living things are broken down to smaller, simpler molecules. c)If the corpses were indestructible, our roads and yards would be carpeted with dead bodies. 6)habitat: the environmental space in which an organism lives niche: all the ways in which an organism interacts with its biotic and abiotic environments b)Grass, plants, and a bison occupy different niches in the same habitat. 7The layer of our planet where living things exist and interact. 
b)lithosphere: solid portion of the Earth (ex. rocks) hydrosphere: the water portion of the Earth (ex. sea) atmosphere: the air surrounding the Earth (ex. air) c)The zones are different sections were many organisms live, but the ecosystem is a unit of the biosphere in which organisms forming a distinct group interact with each other and with their environment. 8)ecosystem: a unit of the biosphere in which organisms forming a distinct group interact with each other and with their environment (ex. pond) b)Because green plants feed the other organisms in one way or another. c)There would be more plants because they are used to feeding the other animals. 5-13-1993 Senses Sight: photoreception - cones and rods - location? - function? Hearing: effects of vibrations in the ear? - choclea? - mechanoreception? Smell: olfaction? - chemoreception? - location of receptors Taste: location of chemoreceptors - categories or types - how do we taste spicy food Touch: location of receptors (3 different types) - varying ability - does one receptor in the skin respond to all types of touch, pressure, and pain? Sight photoreception: direction of light by sensory cells cones: specialized eye cells for bright light and color reception rods: specialized eye cells for vision at low light levels Rods and cones are located on the retina. Hearing The effects of vibrations in the ear is that the vibrations travel through a series of small bones into a coiled, fluid-filled cone. The vibrating fluid moves the hair cells, nerve impulses are sent to the brain where they are interpreted as sound. cochlea: a fluid-filled cone that helps detect sound mechanoreception: the ability to detect motion Smell olfaction: the sense of smell chemoreception: the ability to detect chemical stimuli The olfactory receptors are located high in the nasal cavity in a human Taste The receptors are located in taste buds situated in crevices in the tongue, in humans. Human taste receptors are limited to just four categories: sweet, salty, sour, and bitter. You taste spicy foods from the interaction of your sense of smell with these four basic taste. Touch In humans, touch receptors are located in the skin. The three types are Meissner's corpuscles, Pacinian corpuscles, Ruffini corpuscles. There is a variety of touch receptors. They can sense heat, cold, pain, touch, pressure. The ability of touch is different between people. No, different receptors respond to different types of touch, pressure, and pain. Sensory Systems in other Organisms - protists often respond by eating or avoiding like a baby - Euglena have a pigment spot -> sensitive to light - sense organs in organisms can be different from those in humans e.g. dogs, bats, dolphins respond to higher sound frequencies e.g. birds of prey (ex. hawk) have a better sense of vision e.g. insects have a better sense of smell Coordinating Responses: Movement and Location 3 steps to sense and response: 1) sensory receptors 2) Organisms must be able to respond ex. 
move away 3) a coordinated system that links sensing and responding -> this is called nervous system 5-14-1993 Nervous System - simplest nervous system is found in an organism called the Hydra, a fresh water jelly fish - when the Hydra is touched, it contracts - sensory cells in the Hydra relay the message to neurons that carry the message to muscle cells - in complex animals, groups of neurons from nerves and sensory cells are grouped together to form sensory organs - the central nervous system consists of a nerve chord and a brain - Ganglia are clumps of nerve cells that coordinate nerve signals in different parts of the body Three Types of Neurons 1) Sensory neurons: carry signals from the sense receptors 2) motor neurons: carry signals to parts of the body (ex. muscle, glands) 3) inter neurons: connect sensory neurons to motor neurons When your hand touches a hot kettle, heat receptors in your fingertips detect this. -> sends the message to receptors in your arm -> brain and spinal chord's inter neurons -> motor neurons -> arm muscles Movement and Locomotion - for protists and animals, responses usually involves some form of movement - all animals are capable of some sort of movement - an animal's movement is controlled by its nervous system locomotion: movement from one location to another - Most animals have some form of locomotion. Locomotion can be difficult to study because some animals move very quickly Nervous and Locomotory Systems of the Earthworm - earthworms respond to light, touch, moisture, and chemicals - sense receptors are located under the skin - central nervous systems of the earthworm is a double spinal chord - nerve chord is connected to two larger ganglia in the worm's head - this is the brain - there are smaller ganglia for each segment of the worm's body 5-18-1993 Nervous and Locomotory systems of the Earthworm - continued - Part II - the ganglia enables the earthworm to move each segment independently - earthworm also has 2 sets of muscles -one perpendicular to the other -1) longitudinal muscles: when contracted, the worm becomes shorter and fatter -2) circular muscles: when contracted, the worm becomes thinner and longer - when the worm is moving forward, you can see a wave of motion passing along the body of the worm 5-19-1993 Locomotion in other Organisms - different types of locomotion: running, swimming, gliding, jumping, hopping, crawling or pseudopodia (false feet) amoeba - animals have different body parts that aid in locomotion -e.g. spider monkey - tail, kangaroo - hind legs, bat - wings Sensory Systems of Other Organisms Protists: have chemoreceptors in cell membrane - these receptors can also detect the presence of other organisms Euglena: have a pigment spot: sensitive to light - Euglena can't see, but it will move towards the light - when there is enough light, the Euglena will perform photosynthesis - different organisms possess sense organs that are more sensitive than those of humans e.g. dogs and bats can detect sounds of higher frequencies birds of prey have a more sensitive sense of vision insects have a more sensitive sense of smell Photosynthesis sunlight + H2O + CO2 -> glucose + O2 energy + H2O + CO2 <- glucose + O2 Altering and Adapting to the External Environment - adaptations: features and behaviors that enable an organism to suit or fit its environment e.g. 
musk oxen of the Canadian Arctic: form protective circle, strong grinding teeth, long digestive tube, thick hairy coat - the environment can alter an organism, and the organism can also alter the environment Exchanging Materials with the Environment - living organisms absorb oxygen and eliminate carbon dioxide - land dwellers and aquatic organisms will exchange gasses with their surroundings - land vertebrates have lungs: open sacs inside the body, connected to the outside by a tube - aquatic vertebrate exchange gasses through their gills - as water flows over the gills, dissolved oxygen diffuses into the fish's bloodstream, and carbon dioxide diffuses out - insects have a system of air tubes called the trachea extending throughout their abdomen. these trachea are connected to spiracles (tiny breathing holes) on the body of the insect - warm-blooded species consume more oxygen than cold-blooded organisms - babies breathe much faster than adult humans - your breathing slows down when you are asleep - hibernating animals breathe very slowly How Gas Exchange Alters the Environment - cell respiration occurs in human cells: oxygen + sugar -> carbon dioxide + water + energy - oxygen is supplied by green plants undergoing photosynthesis carbon dioxide + water + light -> sugar + oxygen Exchanging Other Materials: Elimination and Excretion - gases are not the only things exchanged with the environment - animals also release liquid and solid wastes into the environment - these are acted upon by micro-organisms such as bacteria that recycle these waste products by using the materials for their own life process - if too man animals congregate in one spot, their waste production may exceed the recycling capacity of the decomposers - in Peru and California, bird droppings are harvested pg.168 #1-5) 1)gas exchange: inside the animal's body, oxygen from the external environment is exchanged for the waste gas, carbon dioxide 2)gills - fish lungs - human trachea - grasshopper b)fish - under water human - everywhere on land grasshopper - in grassy fields and lawns 3)The amount of oxygen required by an organism is determined by its size, if its asleep or not, and if its warm or cold blooded. 4)Respiration removes oxygen molecules from the air and replaces them with carbon dioxide molecules or vice versa. b)They are cellular respiration and photosynthesis. c)This is because one uses liquid and solid waste materials in the form of urine, feces, and sweat. They are released by excretion and elimination. 5-20-1993 Altering the Environment - every organism alters its environment simply by living in it - the impact of human activities on the environment is sometimes beneficial, but often has unforeseen circumstances - there has been an increase in atmospheric pollution, largely due to the burning of fossil fuels - fossil fuels increase the amount of sulfur dioxide, nitrogen dioxide, carbon dioxide, and carbon monoxide in the atmosphere - the amount of carbon dioxide present has increased by more than 30% in the past 100 years. This has produced the Greenhouse Effect. - Acid rain is caused by the mixing of sulfur and nitrogen oxides with water vapors. 
Greenhouse Effect - carbon from fossil fuels and the tropical rain forest combine with oxygen to produce CO2 - the danger results from global warming of the atmosphere - this may affect the ecosystems, and destroy some species which can't adapt to warmer conditions - Also, icebergs in the Arctic and Antarctic may melt causing coastal flooding - Since sunlight warms the Earth's surface more than the atmosphere, the surface transfers heat to the atmosphere. This heat is absorbed by gasses such as CO2 in the atmosphere. As the amount of the atmospheric CO2 rises, the amount of heat increases, thereby warming the atmosphere. How Humans Alter the Environment - humans can develop specialized dwelling (e.g. igloos). clothes (e.g. astronauts), and heating and cooling methods that enable them to survive in several different environments - humans can replace fields and forests with highways and cities - however, waste accumulation is a problem - how to dispose of garbage and non biodegradable materials How the Environment Alters Humans - there are several differences that make some body features better suited for a particular environment - people with a lot of skin pigment, and therefore darker skin, are protected from sunburns, and this is an advantage in hot areas - at higher altitudes, the environment oxygen levels are lower, and therefore, people with a higher density of oxygen carrying red blood cells are at an advantage, therefore, people living in higher altitudes tend to develop more red blood cells pg.184 #1-6) 1)They work as a group, they defend better as a group, and each member of the group has a specific job. 2)inherited variability: this means that you have inherited certain traits from your ancestors, but not everyone in your family has them 3)They can inherit structural and behavioral adaptation. b)duck: migration (behavioral), oily back (structural), fly in flocks (behavioral) polar bear: whit (structural), much fat (structural), padded feet (structural) camel: humps (structural), large feet (structural), low body fat (structural) 4)It needs other termites to help it feed, breed, and defend itself. 5)caribou: hibernate? geese: fly south for the winter maple trees: start storing food in its branches and not feeding its leaves 6) The knowledge an organism has can help it to live longer and better and to adapt better. 5-21-1993 - physiological adaptations are adjustments to environmental change involving a change in body chemistry - however, there are limits to how quickly the human body can alter in response to changes in the external environment - for example, people could never adopt to oxygen levels above 6000m Adapting to Environmental Change - when an organism becomes so specialized, and accustomed to a particular environment factor such as food, or climate then, change in this environmental factor may result in death of that species - insects are most adaptable organisms - some insects (cockroach) have survived almost 300mil. years unaltered - several factors responsible for insects power of survival 1)most insects undergo dramatic metamorphoses, as a result, juvenile and adult form eat different food, and survive in different conditions. If one food supply or environment was affected, it wouldn't destroy the entire population 2) insects also reproduce in very large numbers 3) short life span, therefore, many generations are produced in a short time, and mutations are quickly passed to the next generation. 
- if an individual possesses a characteristic that gives it an advantage in the environment, any offspring that inherit that characteristic may have a better chance of survival. After a few generations, the inherited characteristic could be more widespread in the population - peppered moth provides an example of process of adoption - before 1845, most peppered moths had light colored wings with dark markings - however, with industrialization and pollution, city dwellings became darkened from soot and smoke. The bark on trees also became darker. - now light color moth were at a disadvantage and its population deminished - pretty soon, dark color moths outnumbered light ones - a structural adoption is an inner physical feature that increases an organism's chance of survival e.g. curved talons of on a hawk 5-25-1993 - behavioral adaptations: certain actions that increase an organism's chance of survival - hibernation: state of deep sleep in which an organism can remain without food for weeks or months - before hibernating, the animal eats a lot to accumulate extra fat reserves - during hibernation, the breathing and heart rates slow down significantly - in spring, the hibernating animal wakes up - migration: animals moving to a different location due to an environmental/seasonal change - estivation: some desert animals become dormant in summer when water is scarce e.g. desert frogs, snakes, lizards Adapting Through Social Structure - social living arrangements make it easier for an animal to find a mate, find food, and avoid danger e.g. bee colony - consists of a queen bee, infertile female worker bees that hunt for food, feed the young, and protect the colony. There are also male bees called drones that solely act as breeder, and they do not work at all. No individual bee can survive on its own because its structural and behavioral adaptations are so specialized.
f:\12000 essays\sciences (985)\Genetics\Ovarian Cancer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Of all gynecologic malignancies, ovarian cancer continues to have the
highest mortality and is the most difficult to diagnose. In the United States
female population, ovarian cancer ranks fifth in absolute mortality among
cancer related deaths (13,000/yr). In about 60 to 70% of patients, ovarian cancer is already in
stage III or IV when first diagnosed, which further complicates treatment of the disease (Barber, 3).
Early detection of ovarian cancer is hampered by the lack of appropriate
tumor markers, and clinically, most patients fail to develop significant
symptoms until they reach advanced-stage disease. The characteristics
of ovarian cancer have been studied in primary tumors and in established
ovarian tumor cell lines which provide a reproducible source of tumor material.
Among the major clinical problems of ovarian cancer, malignant progression,
rapid emergence of drug resistance, and associated cross-resistance remain
unresolved. Ovarian cancer has a high frequency of metastasis yet generally
remains localized within the peritoneal cavity. Tumor development has been
associated with aberrant, dysfunctional expression and/or mutation of
various genes. This can include oncogene overexpression, amplification or
mutation, aberrant tumor suppressor expression or mutation. Also, subversion
of host antitumor immune responses may play a role in the pathogenesis of
cancer (Sharp, 77).
Ovarian clear cell adenocarcinoma was first described by Peham in 1899 as
"hypernephroma of the ovary" because of its resemblance to renal cell carcinoma.
By 1939, Schiller noted a histologic similarity to mesonephric tubules and
classified these tumors as "mesonephromas." In 1944, Saphir and Lackner described
two cases of "hypernephroid carcinoma of the ovary" and proposed "clear cell"
adenocarcinoma as an alternative term. Clear cell tumors of the ovary and of the rest of the genital
tract are now generally considered to be of mullerian origin.
A number of examples of clear cell adenocarcinoma have been reported to arise
from the epithelium of an endometriotic cyst (Yoonessi, 289). Occasionally, a renal
cell carcinoma metastasizes to the ovary and may be confused with a primary clear
cell adenocarcinoma.
Ovarian clear cell adenocarcinoma (OCCA) has been recognized as a distinct
histologic entity in the World Health Organization (WHO) classification of ovarian
tumors since 1973 and is the most lethal ovarian neoplasm with an overall five year
survival of only 34% (Kennedy, 342). Clear cell adenocarcinoma, like most ovarian
cancers, originates from the ovarian epithelium which is a single layer of cells found on
the surface of the ovary. Patients with ovarian clear cell adenocarcinoma are typically
above the age of 30, with a median age of 54, which is similar to that of ovarian epithelial
cancer in general. OCCA represents approximately 6% of ovarian cancers, and bilateral
ovarian involvement occurs in less than 50% of patients, even in advanced cases.
The association of OCCA and endometriosis is well documented (De La Cuesta,
243). This was confirmed by Kennedy et al who encountered histologic or intraoperative
evidence of endometriosis in 45% of their study patients. Transformation
from endometriosis to clear cell adenocarcinoma has been previously demonstrated in
sporadic cases but was not observed by Kennedy et al. Hypercalcemia occurs in a
significant percentage of patients with OCCA. Patients with advanced disease are more
typically affected than patients with nonmetastatic disease. Patients with OCCA are also
more likely to have Stage I disease than are patients with ovarian epithelial cancer in
general (Kennedy, 348).
Histologic grade has been useful as an initial prognostic determinant in some studies
of epithelial cancers of the ovary. The grading of ovarian clear cell adenocarcinoma has
been problematic and is complicated by the multiplicity of histologic patterns found in
the same tumor. Similar problems have been found in attempted grading of clear cell
adenocarcinoma of the endometrium (Disaia, 176). Despite these problems, tumor
grading has been attempted but has failed to demonstrate prognostic significance.
However, collected data suggest that low mitotic activity and a predominance of clear
cells may be favorable histologic features (Piver, 136).
Risk factors for OCCA and ovarian cancer in general are much less clear than for
other genital tumors with general agreement on two risk factors: nulliparity and family
history. There is a higher frequency of carcinoma in unmarried women and in married
women with low parity. Gonadal dysgenesis in children is associated with a higher risk
of developing ovarian cancer while oral contraceptives are associated with a decreased
risk. Genetic and candidate host genes may be altered in susceptible families. Among
those currently under investigation is BRCA1 which has been associated with an
increased susceptibility to breast cancer. Approximately 30% of ovarian adenocarcinomas
express high levels of HER-2/neu oncogene which correlates with a poor prognosis
(Altchek, 375-376). Mutations in the host tumor suppressor gene p53 are found in 50% of
ovarian carcinomas. There also appears to be a racial predilection, as the vast majority
of cases are seen in Caucasians (Yoonessi, 295).
Considerable variation exists in the gross appearance of ovarian clear cell
adenocarcinomas and they are generally indistinguishable from other epithelial ovarian
carcinomas. They could be cystic, solid, soft, or rubbery, and may also contain
hemorrhagic and mucinous areas (O'Donnell, 250). Microscopically, clear cell
carcinomas are characterized by the presence of variable proportions of clear and hobnail
cells. The former contain abundant clear cytoplasm with often centrally located nuclei,
while the latter show clear or pink cytoplasm and bizarre basal nuclei with atypical
cytoplasmic intraluminal projections. The cellular arrangement may be tubulo acinar,
papillary, or solid, with the great majority displaying a mixture of these patterns. The
hobnail and clear cells predominate with tubular and solid forms, respectively (Barber,
214).
Clear cell adenocarcinoma tissue fixed with alcohol shows a high cytoplasmic
glycogen content which can be shown by means of special staining techniques.
Abundant extracellular and rare intracellular neutral mucin mixed with sulfate and
carboxyl group is usually present. The clear cells are recognized histochemically and
ultrastructurally (short and blunt microvilli, intercellular tight junctions and desmosomes,
free ribosomes, and lamellar endoplasmic reticulum). The ultrastructure of hobnail and
clear cells resemble those of the similar cells seen in clear cell carcinomas of the
remainder of the female genital tract (O'Brien, 254). A variation in patterns of histology
is seen among these tumors and frequently within the same one.
Whether both tubular components with hobnail cells and the solid part with clear cells
are required to establish a diagnosis or the presence of just one of the patterns is
sufficient has not been clearly established. Fortunately, most tumors exhibit a mixture of
these components. Benign and borderline counterparts of clear cell ovarian
adenocarcinomas are theoretical possibilities. Yoonessi et al reported that nodal
metastases could be found even when the disease appears to be grossly limited to the
pelvis (Yoonessi, 296). Examination of retroperitoneal nodes is essential to allow for
more accurate staging and carefully planned adjuvant therapy.
Surgery remains the backbone of treatment and generally consists of removal of the
uterus, tubes and ovaries, possible partial omentectomy, and nodal biopsies. The
effectiveness and value of adjuvant radiotherapy and chemotherapy has not been clearly
demonstrated. Therefore, in patients with unilateral encapsulated lesions and
histologically proven uninvolvement of the contralateral ovary, omentum, and biopsied
nodes, a case can be made for (a) no adjuvant therapy after complete surgical removal
and (b) removal of only the diseased ovary in an occasional patient who may be young
and desirous of preserving her reproductive capacity (Altchek, 97). In the more
advanced stages, removal of the uterus, ovaries, omentum, and as much tumor as possible
followed by pelvic radiotherapy (if residual disease is limited to the pelvis) or
chemotherapy must be considered. The chemotherapeutic regimens generally involve
adriamycin, alkylating agents, and cis-platinum-containing combinations (Barber, 442).
OCCA is of epithelial origin and often contains mixtures of other epithelial tumors
such as serous, mucinous, and endometrioid. Clear cell adenocarcinoma is characterized
by large epithelial cells with abundant cytoplasm. Because these tumors sometimes
occur in association with endometriosis or endometrioid carcinoma of the ovary and
resemble clear cell carcinoma of the endometrium, they are now thought to be of
mullerian duct origin and variants of endometrioid adenocarcinoma. Clear cell tumors of
the ovary can be predominantly solid or cystic. In the solid neoplasm, the clear cells are
arranged in sheets or tubules. In the cystic form, the neoplastic cells line the spaces.
Five-year survival is approximately 50% when these tumors are confined to the ovaries,
but they tend to be aggressive and spread beyond the ovary, which makes five-year
survival highly unlikely (Altchek, 416).
Some debate continues as to whether clear cell or mesonephroid carcinoma is a
separate clinicopathological entity with its own distinctive biologic behavior and natural
history or a histologic variant of endometrioid carcinoma. In an effort to characterize
clear cell adenocarcinoma, Jenison et al compared these tumors to the most common of
the epithelial malignancies, the serous adenocarcinoma (SA). Histologically determined
endometriosis was strikingly more common among patients with OCCA than with SA.
Other observations by Jenison et al suggest that the biologic behavior of clear cell
adenocarcinoma differs from that of SA. They found Stage I tumors in 50% of the
observed patient population as well as a lower incidence of bilaterality in OCCA
(Jenison, 67-69). Additionally, it appears that OCCA is characteristically larger than
SA, possibly explaining the greater frequency of symptoms and signs at presentation.
Risk Factors
There is controversy regarding talc use causing ovarian cancer. Until recently, most
talc powders were contaminated with asbestos. Conceptually, talcum powder on the
perineum could reach the ovaries by absorption through the cervix or vagina. Since
talcum powders are no longer contaminated with asbestos, the risk is probably no longer
important (Barber, 200). The high fat content of whole milk, butter, and meat products
has been implicated with an increased risk for ovarian cancer in general.
The Centers for Disease Control compared 546 women with ovarian cancer to 4,228
controls and reported that for women 20 to 54 years of age, the use of oral
contraceptives reduced the risk of ovarian cancer by 40% and the risk of ovarian cancer
decreased as the duration of oral contraceptive use increased. Even the use of oral
contraceptives for three months decreased the risk. The protective effect of oral
contraceptives is to reduce the relative risk to 0.6 or to decrease the incidence of disease
by 40%. There is a decreased risk as high as 40% for women who have had four or
more children as compared to nulliparous women. There is an increase in the incidence
of ovarian cancer among nulliparous women and a decrease with increasing parity. The
"incessant ovulation theory" proposes that continuous ovulation causes repeated trauma
to the ovary leading to the development of ovarian cancer. Incidentally, having two or
more abortions compared to never having had an abortion decreases one's risk of
developing ovarian cancer by 30% (Coppleson, 25-28).
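As a rough illustration of how such risk figures are derived (the exposure counts below are
hypothetical and are not the CDC study's actual data; only the case and control totals match the numbers
quoted above), the short Python sketch that follows computes an odds ratio from a 2x2 case-control table
and the risk reduction implied by a relative risk of 0.6.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    # Odds ratio from a 2x2 case-control table: (a/c) / (b/d), which approximates
    # the relative risk when the disease is rare.
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

def risk_reduction(relative_risk):
    # Fractional reduction in risk implied by a relative risk below 1.
    return 1.0 - relative_risk

# Hypothetical split of oral contraceptive use among 546 cases and 4,228 controls.
or_estimate = odds_ratio(exposed_cases=150, unexposed_cases=396,
                         exposed_controls=1650, unexposed_controls=2578)
print(f"odds ratio = {or_estimate:.2f}")
print(f"a relative risk of 0.6 means a {risk_reduction(0.6):.0%} reduction in risk")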
Etiology
It is commonly accepted that cancer results from a series of genetic alterations that
disrupt normal cellular growth and differentiation. It has been proposed that genetic
changes causing cancer occur in two categories of normal cellular genes, proto-
oncogenes and tumor suppressor genes. Genetic changes in proto-oncogenes facilitate
the transformation of a normal cell to a malignant cell by production of an altered or
overexpressed gene product. Such genetic changes include mutation, translocation, or
amplification of proto-oncogenes. Tumor suppressor genes are proposed to prevent
cancer. Inactivation or loss of these genes contributes to development of cancer by the
lack of a functional gene product. This may require mutations in both alleles of a tumor
suppressor gene. These genes function as regulatory inhibitors of cell proliferation, such
as a DNA transcription factor, or a cell adhesion molecule. Loss of these functions
could result in abnormal cell division or gene expression, or increased ability of cells in
tissues to detach. Cancer such as OCCA most likely results from the dynamic interaction
of several genetically altered proto-oncogenes and tumor suppressor genes (Piver, 64-
67).
Until recently, there was little evidence that the origin of ovarian cancer was genetic. Before
1970, familial ovarian cancer had been reported in only five families. A familial cancer
registry was established at Roswell Park Cancer Institute in 1981 to document the
number of cases occurring in the United States and to study the mode of inheritance. If
a genetic autosomal dominant transmission of the disease can be established, counseling
for prophylactic oophorectomy at an appropriate age may lead to a decrease in the death
rate from ovarian cancer in such families.
The registry at Roswell Park reported 201 cases of ovarian cancer in 94 families in
1984. From 1981 through 1991, 820 families and 2946 cases had been observed.
Familial ovarian cancer is not a rare occurrence and may account for 2 to 5% of all cases
of ovarian cancer. Three conditions are associated with familial ovarian cancer: (1) site-specific
ovarian cancer, the most common form, which is restricted to ovarian cancer; (2) breast/ovarian cancer,
with clustering of ovarian and breast cases in extended pedigrees; and (3) the Lynch II cancer family
syndrome, in which ovarian cancer clusters with colorectal and other cancers
(Altchek, 229-230). One characteristic of inherited ovarian cancer is that it occurs at a
significantly younger age than the non-inherited form.
Cytogenetic investigations of sporadic (non-inherited) ovarian tumors have revealed
frequent alterations of chromosomes 1,3,6, and 11. Many proto-oncogenes have been
mapped to these chromosomes, and deletions of segments of chromosomes (particularly
3p and 6q) in some tumors is consistent with a role for loss of tumor suppressor genes.
Recently, a genetic linkage study of familial breast/ovary cancer suggested linkage of
disease susceptibility with the RH blood group locus on chromosome 1p.
Allele loss involving chromosomes 3p and 6q as well as chromosomes 11p, 13q, and
17 have been frequently observed in ovarian cancers. Besides allele loss, point mutations
have been identified in the tumor suppressor gene p53, located on chromosome 17p13.
Deletions of chromosome 17q have been reported in sporadic ovarian tumors suggesting
a general involvement of this region in ovarian tumor biology. The MYB and ESR genes, which show
allelic loss, map to chromosome 6q near the provisional locus for FUCA2, the locus for
a-L-fucosidase in serum. Low activity of a-L-fucosidase in serum is more prevalent in
ovarian cancer patients. This suggests that deficiency of a-L-fucosidase activity in serum
may be a hereditary condition associated with increased risk for developing ovarian
cancer. This together with cytogenetic data of losses of 6q and the allelic losses at 6q
point to the potential importance of chromosome 6q in hereditary ovarian cancer
(Altchek, 208-212).
Activation of normal proto-oncogenes by either mutation, translocation, or gene
amplification to produce altered or overexpressed products is believed to play an
important role in the development of ovarian tumors. Activation of several proto-
oncogenes (particularly K-RAS, H-RAS, c-MYC, and HER-2/neu) occurs in ovarian
tumors. However, the significance remains to be determined. It is controversial as to
whether overexpression of the HER-2/neu gene in ovarian cancer is associated with poor
prognosis. In addition to studying proto-oncogenes in tumors, it may be beneficial to
investigate proto-oncogenes in germ-line DNA from members of families with histories
of ovarian cancer (Barber, 323-324). It is questionable whether inheritance of rare
alleles of the H-RAS proto-oncogene may be linked to susceptibility to ovarian cancers.
Diagnosis and Treatment
The early diagnosis of ovarian cancer is a matter of chance and not a triumph of
scientific approach. In most cases, the finding of a pelvic mass is the only available
method of diagnosis, with the exception of functioning tumors, which may manifest
endocrine effects even with minimal ovarian enlargement. Symptomatology includes vague
abdominal discomfort, dyspepsia, increased flatulence, sense of bloating, particularly
after ingesting food, mild digestive disturbances, and pelvic unrest which may be present
for several months before diagnosis (Sharp, 161-163).
A great number of imaging techniques are available. Ultrasound,
particularly vaginal ultrasound, has increased the rate of pick-up of early lesions,
particularly when the color Doppler method is used. Unfortunately, vaginal sonography
and CA 125 have produced an increasing number of false positive examinations. Pelvic
findings are often minimal and not helpful in making a diagnosis; however, combined
with a high index of suspicion, they may alert the physician to the diagnosis.
These pelvic signs include:
Mass in the ovarian area
Relative immobility due to fixation of adhesions
Irregularity of the tumor
Shotty consistency with increased firmness
Tumors in the cul-de-sac described as a handful of knuckles
Relative insensitivity of the mass
Increasing size under observation
Bilaterality (70% for ovarian carcinoma versus 5% for benign cases) (Barber, 136)
Tumor markers have been particularly useful in monitoring treatment; however, the
markers have, and will probably always have, a disadvantage in identifying an early
tumor. To date, only two, human chorionic gonadotropin (HCG) and alpha-fetoprotein, are
known to be sensitive and specific. The problem with tumor markers as a means of
making a diagnosis is that a detectable marker level requires a certain volume of tumor.
By that time the tumor is no longer early but rather biologically late (Altchek, 292).
Many reports have described murine monoclonal antibodies (MAbs) as potential tools
for diagnosing malignant ovarian tumors. Yamada et al attempted to develop a MAb
that can differentiate cells with early malignant change from adjacent benign tumor cells
in cases of borderline malignancy. They developed MAb 12C3 by immunizing mice with
a cell line derived from a human ovarian tumor. The antibody reacted with human
ovarian carcinomas rather than with germ cell tumors. MAb 12C3 stained 67.7% of
ovarian epithelial malignancies, but exhibited an extremely low reactivity with other
malignancies. MAb 12C3 detected a novel antigen whose distribution in normal tissue is
restricted. According to Yamada et al, MAb 12C3 will serve as a powerful new tool for
the histologic detection of early malignant changes in borderline epithelial neoplasms.
MAb 12C3 may also be useful as a targeting agent for cancer chemotherapy (Yamada,
293-294).
Currently, several serum markers are available to help make a diagnosis.
These include CA 125, CEA, DNB/70K, LASA-P, and serum inhibin. Recently the
urinary gonadotropin peptide (UCP) and the collagen-stimulating factor have been
added. Although these tumor markers have low specificity and sensitivity, they are often
used in screening for ovarian cancer. A new tumor marker, CA 125-2, has greater
specificity than CA 125. In general, tumor markers have a very limited role in screening
for ovarian cancer.
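To see why markers with imperfect specificity fare poorly in screening for a disease as uncommon as ovarian cancer, the following is a minimal illustrative calculation (a Python sketch; the sensitivity, specificity, and prevalence figures are assumed round numbers, not values taken from the sources cited above).

# Illustrative sketch: why tumor markers with imperfect specificity perform
# poorly as screening tests for a rare disease such as ovarian cancer.
# Sensitivity, specificity, and prevalence below are hypothetical round numbers.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive screening tests that are true positives (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sensitivity=0.80, specificity=0.95, prevalence=0.0005)
print(f"PPV at 1-in-2000 prevalence: {ppv:.1%}")  # under 1%: most positives are false

Even with a fairly good marker, nearly all positive screens in a low-prevalence population are false positives, which is consistent with the limited screening role described above.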
The common epithelial cancer of the ovary is unique in killing the patient while being,
in the vast majority of the cases, enclosed in the anatomical area where it initially
developed: the peritoneal cavity. Even with early localized cancer, lymph node
metastases are not rare in the pelvic or aortic areas. In most of the cases, death is due to
intraperitoneal proliferation, ascites, protein loss, and cachexia. Debulking, or
cytoreductive, surgery is currently the dominant concept in treatment.
The first goal of debulking surgery is inhibition of
the vicious cycle of malnutrition, nausea, vomiting, and dyspepsia commonly found in
patients with mid to advanced stage disease. Cytoreductive surgery also enhances the
efficiency of chemotherapy: the survival curve of patients whose largest residual
mass after surgery was below the 1.5 cm limit is the same as the curve of
patients whose largest metastatic lesions were below the 1.5 cm limit at the outset
(Altchek, 422-424).
The aggressiveness of the debulking surgery is a key question surgeons must face
when treating ovarian cancers. The debulking of very large metastatic masses makes no
sense from the oncologic perspective. As for extrapelvic masses the debulking, even if
more acceptable, remains full of danger and exposes the patient to a heavy handicap.
For these reasons the extra-genital resections have to be limited to lymphadenectomy,
omentectomy, pelvic abdominal peritoneal resections and rectosigmoid junction
resection. That means that stages IIB and IIC and stages IIIA and IIIB are the only true
indications for extrapelvic cytoreductive surgery. Colectomy, ileectomy, splenectomy,
segmental hepatectomy are only exceptionally indicated if they allow one to perform a
real optimal resection. The standard cytoreductive surgery is the total hysterectomy with
bilateral salpingo-oophorectomy. This surgery may be done with aortic and pelvic lymph
node sampling, omentectomy, and, if necessary, resection of the rectosigmoid junction
(Barber, 182-183).
The concept of administering drugs directly into the peritoneal cavity as therapy of
ovarian cancer was attempted more than three decades ago. However, it has only been
within the last ten years that a firm basis for this method of drug delivery has become
established. The essential goal is to expose the tumor to higher concentrations of drug
for longer periods of time than is possible with systemic drug delivery. Several agents
have been examined for their efficacy, safety and pharmacokinetic advantage when
administered via the peritoneal route.
Cisplatin has undergone the most extensive evaluation for regional delivery. Cisplatin
reaches the systemic compartment in significant concentrations when it is administered
intraperitoneally. The dose-limiting toxicities of intraperitoneally administered cisplatin are
nephrotoxicity, neurotoxicity, and emesis. The depth of penetration of cisplatin into the
peritoneal lining and tumor following regional delivery is only 1 to 2 mm from the
surface which limits its efficacy. Thus, the only patients with ovarian cancer who would
likely benefit would be those with very small residual tumor volumes. Overall,
approximately 30 to 40% of patients with small volume residual ovarian cancer have
been shown to demonstrate an objective clinical response to cisplatin-based locally
administered therapy with 20 to 30% of patients achieving a surgically documented
complete response. As a general rule, patients whose tumors have demonstrated an
inherent resistance to cisplatin following systemic therapy are not considered for
treatment with platinum-based intraperitoneal therapy (Altchek, 444-446).
In patients with small volume residual disease at the time of second look laparotomy,
who have demonstrated inherent resistance to platinum-based regimens, alternative
intraperitoneal treatment programs can be considered. Other agents include
mitoxantrone and recombinant alpha-interferon. Intraperitoneal mitoxantrone has
been shown to have definite activity in small volume residual platinum-refractory ovarian
cancer. Unfortunately, the dose-limiting toxicities of the agent are abdominal pain and
adhesion formation, possibly leading to bowel obstruction. Recent data suggest the
local toxicity of mitoxantrone can be decreased considerably by delivering the agent in
microdoses.
Ovarian tumors may have either intrinsic or acquired drug resistance. Many
mechanisms of drug resistance have been described. Expression of the MDR1 gene that
encodes the drug efflux protein known as p-glycoprotein, has been shown to confer the
characteristic multi-drug resistance to clones of some cancers. The most widely
considered definition of platinum response is response to first-line platinum treatment
and disease free interval. Primary platinum resistance may be defined as any progression
on treatment. Secondary platinum resistance is the absence of progression on primary
platinum-based therapy but progression at the time of platinum retreatment for relapse
(Sharp, 205-207).
Second-line chemotherapy for recurrent ovarian cancer is dependent on preferences of
both the patient and physician. Retreatment with platinum therapy appears to offer
significant opportunity for clinical response and palliation but relatively little hope for
long-term cure. Paclitaxel (trade name: Taxol), a prototype of the taxanes, is cytotoxic
to ovarian cancer. Approximately 20% of platinum failures respond to standard doses of
paclitaxel. Studies are in progress of dose intensification and intraperitoneal
administration (Barber, 227-228). This class of drugs is now thought to represent an
active addition to the platinum analogs, either as primary therapy, in combination with
platinum, or as salvage therapy after failure of platinum.
In advanced stages, there is suggestive evidence of partial responsiveness of OCCA to
radiation as well as to chemotherapy with adriamycin, cytoxan, and cisplatin-containing
combinations (Yoonessi, 295). Radiation techniques include intraperitoneal radioactive
gold or chromium phosphate and external beam therapy to the abdomen and pelvis. The
role of radiation therapy in treatment of ovarian cancer has diminished in prominence, as
the spread pattern of ovarian cancer and the normal tissue bed involved in the treatment
of this neoplasm make effective radiation therapy difficult. When the residual disease
after laparotomy is bulky, radiation therapy is particularly ineffective. If postoperative
radiation is prescribed for a patient, it is important that the entire abdomen and pelvis are
optimally treated to elicit a response from the tumor (Sharp, 278-280).
In the last few decades, the aggressive attempt to optimize the treatment of
ovarian clear cell adenocarcinoma and ovarian cancer in general has seen remarkable
improvements in the response rates of patients with advanced stage cancer without
dramatically improving long-term survival. The promise of new drugs with activity
when platinum agents fail is encouraging and fosters hope that, in the decades to come,
the endeavors of surgical and pharmacological research will make ovarian cancer an
easily treatable disease.
Bibliography
Altchek, A., & Deligdisch, L. (1996). Diagnosis and Management of Ovarian Disorders.
New York: Igaku Shoin.
Barber, H. (1993). Ovarian Carcinoma: Etiology, Diagnosis, and Treatment. New York:
Springer Verlag.
Coppleson, M. (Ed.). (1981). Gynecologic Oncology (vol. 2). New York: Churchill
Livingstone.
Current Clinical Trials Oncology. (1996). Green Brook, NJ: Pyros Education.
De La Cuesta, R., & Eichorn, J. (1996). Histologic transformation of benign
endometriosis to early epithelial ovarian cancer. Gynecologic Oncology, 60, 238-
244.
Disaia, P., & Creasman, W. (1989). Clinical Gynecologic Oncology (3rd ed.). St. Louis:
Mosby.
Jenison, E., Montag, A., & Griffiths, T. (1989). Clear cell adenocarcinoma of the ovary:
a clinical analysis and comparison with serous carcinoma. Gynecologic Oncology,
32, 65-71.
Kennedy, A., & Biscotti, C. (1993). Histologic correlates of progression-free interval and
survival in ovarian clear cell adenocarcinoma. Gynecologic Oncology, 50, 334-338.
Kennedy, A., & Biscotti, C. (1989). Ovarian clear cell adenocarcinoma. Gynecologic
Oncology, 32, 342-349.
O'Brien, M., Schofield, J., & Tan, S. (1993). Clear cell epithelial ovarian cancer: Bad
prognosis only in early stages. Gynecologic Oncology, 49, 250-254.
O'Donnell, M., & Al-Nafussi, A. (1995). Intracytoplasmic lumina and mucinous inclusions
in ovarian carcinoma. Histopathology, 26, 181-184.
Piver, S. (Ed.). (1987). Ovarian Malignancies. New York: Churchill Livingstone.
Sharp, F., Mason, P., Blackett, T., & Berek, J. (1995). Ovarian Cancer 3. New York:
Chapman & Hall Medical.
Yamada, K., & Kiyoshi, O. (1995). Monoclonal antibody, Mab 12C3, is a sensitive
immunohistochemical marker of early malignant change in epithelial ovarian tumors.
Anatomic Pathology, 103, 288-294.
Yoonessi, M., Weldon, D., & Sateesh, S. (1984). Clear cell ovarian carcinoma. Journal
of Surgical Oncology, 27, 289-297.
f:\12000 essays\sciences (985)\Genetics\Prolonged Preservation of the Heart Prior to Transplantation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Biochemistry
Prolonged Preservation of the Heart Prior to Transplantation
Picture this. A man is involved in a severe car crash in
Florida which has left him brain-dead with no hope for any
kind of recovery. The majority of his vital organs are
still functional and the man has designated that his organs
be donated to a needy person upon his untimely death.
Meanwhile, upon checking with the donor registry board, it
is discovered that the best match for receiving the heart of
the Florida man is a male in Oregon who is in desperate need
of a heart transplant. Without the transplant, the man will
most certainly die within 48 hours. The second man's
tissues match up perfectly with the brain-dead man's in
Florida. This seems like an excellent opportunity for a
heart transplant. However, a transplant is currently not a
viable option for the Oregon man since he is separated by
such a vast geographic distance from the organ. Scientists
and doctors are currently only able to keep a donor heart
viable for four hours before the tissues become irreversibly
damaged. Because of this preservation restriction, the
donor heart is ultimately given to someone whose tissues do
not match up as well, so there is a greatly increased chance
for rejection of the organ by the recipient. As far as the
man in Oregon goes, he will probably not receive a donor
heart before his own expires.
Currently, when a heart is being prepared for
transplantation, it is simply submerged in an isotonic
saline ice bath in an attempt to stop all metabolic activity
of that heart. This cold submersion technique is adequate
for only four hours. However, if the heart is perfused with
the proper media, it can remain viable for up to 24 hours.
The technique of perfusion is based on intrinsically simple
principles. A physician carefully excises
the heart from the donor and then accurately trims the
vessels of the heart so they can be easily attached to the
perfusion apparatus. After trimming, a cannula is inserted
into the superior vena cava. Through this cannula, the
preservation media can be pumped in.
What if this scenario were different? What if doctors were
able to preserve the donor heart and keep it viable outside
the body for up to 24 hours instead of only four hours? If
this were possible, the heart in Florida could have been
transported across the country to Oregon where the perfect
recipient waited. The biochemical composition of the
preservation media for hearts during the transplant delay is
drastically important for prolonging the viability of the
organ. If a media can be developed that could preserve the
heart for longer periods of time, many lives could be saved
as a result.
Another benefit of this increase in time is that it would
allow doctors the time to better prepare themselves for the
lengthy operation. The accidents that render people
brain-dead often occur at night or in the early morning.
Presently, as soon as a donor organ becomes available,
doctors must immediately go to work at transplanting it.
This extremely intricate and intense operation takes a long
time to complete. If the transplanting doctor is exhausted
from working a long day, the increase in duration would
allow him enough time to get some much needed rest so he can
perform the operation under the best possible circumstances.
Experiments have been conducted that studied the effects of
preserving excised hearts by adding several compounds to the
media in which the organ is being stored. The most
successful of these compounds are pyruvate and a pyruvate
containing compound known as
perfluoroperhydrophenanthrene-egg yolk phospholipid
(APE-LM). It was determined that adding pyruvate to the
media improved postpreservation cardiac function while
adding glucose had little or no effect. To test the
function of these two intermediates, rabbit hearts were
excised and preserved for an average of 24.5 ± 0.2 hours on
a preservation apparatus before they were transplanted back
into a recipient rabbit. While attached to the preservation
apparatus, samples of the media output of the heart were
taken every 2 hours and were assayed for their content. If
the compound in the media showed up in large amounts in the
assay, it could be concluded that the compound was not
metabolized by the heart. If little or none of the compound
placed in the media appeared in the assay, it could be
concluded that compound was used up by the heart metabolism.
The hearts that were given pyruvate in their media
completely consumed the available substrate and were able to
function at a nearly normal capacity once they were
transplanted. Correspondingly, hearts that were preserved
in a media that lacked pyruvate had a significantly lower
rate of contractile function once they were transplanted.
The superior preservation of the hearts with pyruvate most
likely resulted from the hearts' use of pyruvate through the
citric acid cycle for the production of energy through
direct ATP synthesis (from the reaction of succinyl-CoA to
succinate via the enzyme succinyl CoA synthetase) as well
as through the production of NADH + H+ for use in the
electron transport chain to produce energy.
After providing a preservation media that contained
pyruvate, a better recovery of the heart tissue occurred.
Most of the pyruvate consumed during preservation was
probably oxidized by the myocardium in the citric acid
cycle. Only a small amount of excess lactate was detected
by the assays of the preservation media discharged by the
heart. The lactate represented only 15% of the pyruvate
consumed. If the major metabolic route taken by pyruvate
during preservation had been reduction to lactate by lactate dehydrogenase
for regeneration of NAD+ for continued anaerobic glycolysis,
rather than by the aerobic citric acid cycle (pyruvate
oxidation), then a higher ratio of excess lactate produced
to pyruvate consumed would have been observed.
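To make that ratio argument concrete, the sketch below (Python, using hypothetical perfusate concentrations rather than the study's actual assay data) computes the ratio of excess lactate produced to pyruvate consumed; a value near 1 would point to anaerobic glycolysis, while a low value such as the 15% reported above is consistent with oxidation in the citric acid cycle.

# Illustrative only: hypothetical assay values, not the experimental data.
# If pyruvate were mainly reduced to lactate (anaerobic route), the ratio of
# excess lactate produced to pyruvate consumed would approach 1; if pyruvate
# were mainly oxidized in the citric acid cycle, the ratio stays low.

def lactate_to_pyruvate_ratio(pyruvate_in_mM, pyruvate_out_mM,
                              lactate_in_mM, lactate_out_mM):
    """Ratio of excess lactate produced to pyruvate consumed in the perfusate."""
    pyruvate_consumed = pyruvate_in_mM - pyruvate_out_mM
    lactate_produced = lactate_out_mM - lactate_in_mM
    return lactate_produced / pyruvate_consumed

# Hypothetical perfusate concentrations (mM) entering and leaving the heart.
ratio = lactate_to_pyruvate_ratio(pyruvate_in_mM=10.0, pyruvate_out_mM=2.0,
                                  lactate_in_mM=0.0, lactate_out_mM=1.2)
print(f"lactate produced / pyruvate consumed = {ratio:.2f}")  # ~0.15, i.e. ~15%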
Hearts given a glucose substrate did not transport or
consume that substrate, even when it was provided as the
sole exogenous substrate. It might be expected that glucose
would be used up in a manner similar to that of pyruvate.
This expectation is because glucose is a precursor to
pyruvate via the glycolytic pathway; however, this was not
the case. It was theorized that this lack of glucose use may
have been due to the fact that the hormone insulin was not
present in the media. Without insulin, one might think the
heart would be unable to adequately take
glucose into its tissues in any measurable amount, but
this is not the case either. It is known that hearts
working under physiologic conditions do use glucose in the
absence of insulin, but glucose consumption in that
situation is directly related to the performance of work by
the heart, not the presence of insulin.
To further test the effects of the addition of insulin to
the glucose media, experiments were done in which the
hormone was included in the heart preservation media [5-7].
Data from those studies do not provide evidence that the
hormone is essential to ensure glucose use or to maintain
the metabolic status of the heart or to improve cardiac
recovery. In a hypothermic (8°C) setting, insulin did not
exert a noticeable benefit to metabolism beyond that
provided by oxygen and glucose. This hypothermic setting is
analogous to the setting an actual heart would be in during
transportation before transplant.
Another study was done to determine whether the compound
perfluoroperhydrophenanthrene-egg yolk phospholipid,
(APE-LM) was an effective media for long-term hypothermic
heart preservation [3]. Two main factors make APE-LM an
effective preservation media. (1) It contains a lipid
emulsifier which enables it to solubilize lipids. From this
breakdown of lipids, ATP can be produced. (2) APE-LM
contains large amounts of pyruvate. As discussed earlier,
an abundance of energy is produced via the oxidation of
pyruvate through the citric acid cycle.
APE-LM-preserved hearts consumed a significantly higher
amount of oxygen than hearts preserved with other media.
The higher oxygen and pyruvate consumption in these hearts
indicated that the hearts had a greater metabolic oxidative
activity during preservation than the other hearts. The
higher oxidative activity may have been reflective of
greater tissue perfusion, especially in the coronary beds,
and thereby perfusion of oxygen to a greater percentage of
myocardial cells. Another factor contributing to the
effectiveness of APE-LM as a transplantation media is its
biologically compatible lipid emulsifier, which consists
primarily of phospholipids and cholesterol. The lipid
provides a favorable environment for myocardial membranes
and may prevent perfusion-related depletion of lipids from
cardiac membranes. The cholesterol contains a bulky steroid
nucleus with a hydroxyl group at one end and a flexible
hydrocarbon tail at the other end. The hydrocarbon tail of
the cholesterol is located in the non polar core of the
membrane bilayer. The hydroxyl group of cholesterol
hydrogen-bonds to a carbonyl oxygen atom of a phospholipid
head group. Through this structure, cholesterol prevents
the crystallization of fatty acyl chains by fitting between
them. Thus, cholesterol moderates the fluidity of
membranes [8].
The reason there are currently such strict limits on the
amount of time a heart can remain viable out of the body is
because there must be a source of energy for the heart
tissue if it is to stay alive. Once the supply of energy
runs out, the tissue suffers irreversible damage and dies.
Therefore, this tissue cannot be used for transplantation.
If hypothermic hearts are not given exogenous substrates
that they can transport and consume, like pyruvate, then
they must rely on glycogen or lipid stores for energy
metabolism. The length of time that the heart can be
preserved in vitro is thus related to the length of time
before these stores become too low to maintain the required
energy production needs of the organ. It is also possible
that the tissue stores of ATP and phosphocreatine are
critical factors. It is known that the amount of ATP in
heart muscle tissues is sufficient to sustain contractile
activity of the muscle for less than one second. This is
why phosphocreatine is so important. Vertebrate muscle
tissue contains a reservoir of high-potential phosphoryl
groups in the form of phosphocreatine. Phosphocreatine can
transfer its phosphoryl group to ADP to form ATP, according to the
following reversible reaction:
phosphocreatine + ADP + H+ <=> ATP + creatine
Phosphocreatine is able to maintain a high concentration of
ATP during periods of muscular contraction. Therefore, if
no other energy producing processes are available for the
excised heart, it will only remain viable until its
phosphocreatine stores run out.
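As a rough, purely illustrative back-of-the-envelope sketch (Python; the concentrations and turnover rate below are assumed textbook-style values, not measurements from the preservation experiments), one can see how the phosphocreatine reservoir extends the time that contraction can be sustained.

# Illustrative sketch with assumed values: how long ATP alone, and ATP plus
# the phosphocreatine reservoir, could sustain contraction.

atp_mM = 5.0                  # assumed myocardial ATP concentration
phosphocreatine_mM = 15.0     # assumed phosphocreatine concentration
atp_turnover_mM_per_s = 10.0  # assumed ATP hydrolysis rate in beating muscle

t_atp_only = atp_mM / atp_turnover_mM_per_s
# Creatine kinase rephosphorylates ADP (phosphocreatine + ADP + H+ <=> ATP + creatine),
# so the phosphocreatine pool effectively extends the usable ATP supply.
t_with_pcr = (atp_mM + phosphocreatine_mM) / atp_turnover_mM_per_s

print(f"ATP alone:             ~{t_atp_only:.1f} s of contraction")
print(f"ATP + phosphocreatine: ~{t_with_pcr:.1f} s of contraction")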
A major obstacle that must be overcome in order for heart
transplants to be successful, is the typically prolonged
delay involved in getting the organ from donor to recipient.
The biochemical composition of the preservation media for
hearts during the transplant and transportation delays is
extremely important for prolonging the viability of the
organ. It has been discovered that adding pyruvate, or
pyruvate containing compounds like APE-LM, to a preservation
medium greatly improves post-preservation cardiac function
of the heart. As was discussed, the pyruvate is able to
enter the citric acid cycle and produce sufficient amounts
of energy to sustain the heart after it has been excised
until it is transplanted.
Increasing the amount of time a heart can remain alive
outside of the body prior to transplantation from the
current four hours to 24 hours has many desirable benefits.
As discussed earlier, this increase in time would allow
doctors the ability to better match the tissues of the donor
with those of the recipient. Organ rejection by recipients
occurs frequently because their tissues do not suitably
match those of the donors. The increase in viability time
would also allow plenty of opportunity for the organ to be
transported to the needy person, even if it must go across
the country.
f:\12000 essays\sciences (985)\Genetics\Rasmussens Encephalitis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Keyur P.
Biology...Science
Rasmussen's Encephalitis
The human immune system is an amazing system that is constantly on the alert protecting us from
sicknesses. Thousands of white blood cells travel in our circulatory system destroying all foreign
substances that could cause harm to our body or to any of the millions of processes going on inside. Now
imagine a condition where this awesome system turns against the most complex organ in the human body,
the brain. Deadly as it is, this condition is known as Rasmussen's encephalitis.
The meaningful research on Rasmussen's encephalitis was begun (unintentionally) by Scott Rogers
and Lorise Gahring, two neurologists, who were at the time measuring the distribution of glutamate
receptors in the brain. Later on when more provocative information was found they enlisted the help of
James McNamara and Ian Andrews, epilepsy experts at Duke University Medical Center.
The details on Rasmussen's encephalitis were very bleak at the time when the men began their
research. All that was known was that Rasmussen's encephalitis was a degenerative disease of the brain
that caused seizures, hemiparesis, and dementia normally in the first ten years of life. The seizures that
were caused by Rasmussen's encephalitis were unstoppable by normal anti-seizure drugs used
conventionally. The worst part was that the pathogenesis of the disease was not known, and
even less was known about how it developed.
The first clue was delivered when Rogers and Gahring were trying to register the distribution of the
glutamate receptors using antibodies that tag onto the receptor itself. The proteins that make up the
glutamate receptors (GluR) are only found inside the blood-brain barrier (BBB). Glutamate and a few
related amino acids are the dominant form of excitatory neurotransmitter in the central nervous system of
mammals. If one of these GluRs happens to wander into the actual bloodstream, that is outside the BBB,
it would be considered an outsider and destroyed immediately. So if these GluRs were put into the normal
bloodstream, then the immune system would produce antibodies which could then be used in
searching for the glutamate receptors.
In order to test this theory the researchers injected the GluRs into the blood stream of a normal
healthy rabbit, hoping to produce good results. At this point the experiment took a dramatic turn: after
receiving a few doses of the protein, two of the three rabbits began to twitch, as though they were suffering
the pain of an epileptic seizure. Now the help of McNamara and Andrews was enlisted.
When McNamara and Andrews examined the brain tissue of the rabbits, they saw what seemed to be
a familiar inflammatory pattern, clumps of immune cells all around blood vessels. This description
exactly matched the description of persons suffering from Rasmussen's encephalitis; moreover, something
like this would never be found in a healthy brain. A healthy brain has its blood capillaries enclosed in the
BBB membrane, so such a case as the one mentioned above would not be possible.
As protective as the BBB is, it can be breached by something like a head injury. What was
happening was that the antibodies which were out to get the GluR proteins were somehow finding a way
into the brain and directing an attack towards all GluR receptor proteins in the brain itself.
After some more examinations Rogers and McNamara decided that these attacks were the cause of
the seizures that are often experienced by sufferers of Rasmussen's encephalitis. If the antibodies were
indeed circulating in the bloodstream, then sufferers of Rasmussen's encephalitis should have them in their
bloodstream and healthy people should not. When this was actually tested, the results were
positive: Rasmussen sufferers did have these antibodies in their bloodstreams and healthy people did
not. These were not only the right kind of antibodies but the very antibodies that caused the seizures in
people and rabbits. Thus, when these antibodies were removed by plasma exchange (PEX), it caused
temporary relief from the seizures, but soon the body started making more antibodies of that type and the
seizures started once again. After all the examinations two questions remained: why does the body mount
an immune response against one of its own brain proteins, and how do these antibodies get through the
BBB?
What is thought right now is that people get antibodies when they are infected by a microorganism
like a bacterium or a virus that is similar in structure to the GluR. When this happens the body mounts an
immune response against it, and it just so happens that at this stage you suffer a blow to the head. This will
open your BBB to the antibodies and they will attack the friendly GluRs in the brain, causing seizures and
further opening your BBB to more antibodies.
Now a malicious rhythm begins: antibodies break through the BBB, inflammation is caused by
the break-in, seizures are then caused and the BBB opens up further, and further opening of the BBB causes more
seizures. The inflammation is caused by the autoimmune process against the GluR. All the seizures
occur where the initial break in the BBB happened due to a blow to the head, explaining why the
seizures are confined to just one hemisphere. The only problem with this theory is that the rabbits
developed seizures without ever being whacked on the head, but that also could be because a rabbit's brain
is not as well insulated as a human's.
Normally, once an individual is caught in this cycle, the only
thing that can bring relief is recurrent plasma exchange. This will only stop the seizures
temporarily, but they will start again when the body has made more new antibodies. After this has been
done many times, the hemisphere in which the disease is present will deteriorate
to the point where a hemispherectomy has to be performed. This reduces the person to mental
disintegration, in which he or she has no more mental capacity, and generally to the point of no return, death.
Rasmussen's encephalitis is a very deadly disease, but it is also a very rare disease, occurring in only
48 people between 1957 and 1987. As of now there are no FDA approved drugs for the sufferers of
Ramussen's encephalitis. Now the researchers are working on a drug that will block the activity of this
particular antibody, but this could lead to further problems. If this drug is being administered and a
bacterium or virus with a structure similar to the GluR is present, the body would disregard it and this would
cause more health problems. After all this bad news, all one can say is "Good luck" to the ones suffering
from this living hell.
Atkins, "Rasmussen's encephalitis: nueroimaging findings in four patients." AJR-Am-J-
Roentgenol. June 1992.
Blume, "Rasmussen's chronic encephalitis in adults." Arch-Nuerol. March 1993.
Hanovar, "Rasmussen's encephalitis in surgery for epilepsy." Dev-Med-Child-Nuerol.
January 1992.
Leary, "Clues Found To Rare Form of Epilepsy." New York Times. December 5 1994,
pp. A4.
Whisenand, "Autoantibodies to glutamate receptor GluR3 in Rasmussen's encephalitis,"
Science. July 29 1994.
f:\12000 essays\sciences (985)\Genetics\REPRODUCTION.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
======================================================================
REPRODUCTION: A-Courting to Nature! LIFE SCIENCES SIG
----------------------------------------------------------------------
For some time she had watched his movements, appearing coyly in his haunts. And now, had it paid off? Doubtless, he was in love. His muscles were taut; he swooped through the air more like an eagle than a Greylag gander. The only problem was, it was not for her that he then landed in a flurry of quacks and wingbeats, or for her that he dashed off surprise attacks on his fellows. It was, rather, for another - for her preening rival across the Bavarian lake. Poor goose. Will she mate with the gander of her dreams? Or will she trail him for years, laying infertile egg clutches as proof of her faithfulness?
Either outcome is possible in an animal world marked daily by scenes of courtship, spurning and love triumphant. And take note: these are not the imaginings of some Disney screenwriter. Decades ago Konrad Lorenz, a famed Austrian naturalist, made detailed studies of Greylags and afterwards showed no hesitation in using words like love, grief and even embarrassment to describe the behavior of these large, social birds. At the same time he did not forget that all romance - animal and human - is tied intimately to natural selection.
Natural selection brought on the evolution of males and females during prehistoric epochs when environmental change was making life difficult for single-sex species such as bacteria and algae. Generally, these reproduced by splitting into identical copies of themselves. New generations were thus no better than old ones at surviving in an altered world. With the emergence of the sexes, however, youngsters acquired the qualities of two parents. This meant that they were different from both - different and perhaps better at coping with tough problems of survival. At the same time, nature had to furnish a new set of instincts which would make "parents" out of such unreflective entities as mollusks and jellyfish. The peacock's splendid feathers, the firefly's flash, the humpback whale's resounding bellow - all are means these animals have evolved to obey nature's command: "Find a mate. Transmit your characteristics through time!"
But while most males would accept indiscriminate mating, females generally have more on their minds. In most species, after all, they take on reproduction's hardest chores such as carrying young, incubating eggs and tending newborns. Often they can produce only a few young in a lifetime. (Given half a chance, most males would spawn thousands.) So it's not surprising that the ladies are choosy. They want to match their characteristics with those of a successful mate. He may flap his wings or join a hockey team, but somehow he must show that his offspring will not likely be last to eat or first in predatory jaws.
Strolling through the Australian underbrush that morning, she had seen nothing that might catch a female bowerbird's eye. True, several males along the way had built avenue bowers - twin rows of twigs lined up north and south. True, they had decorated their constructions with plant juices and charcoal. Yet they displayed nothing out front! Not a beetle's wing. Not a piece of flower. Then she saw him. He stood before the largest bower and in his mouth held a most beautiful object. It was a powder blue cigarette package, and beneath it there glinted a pair of pilfered car keys. Without hesitation she hopped forward to watch his ritual dance.
Males have found many ways to prove their worth. Some, like bowerbirds, flaunt possessions and territory, defending these aggressively against the intrusion of fellow males. Others, like many birds and meat-eating mammals, pantomime nest building or otherwise demonstrate their capacity as dads. Still others, however, do nothing. Gentlemen may bring flowers, but most male fish just fertilize an egg pile some unknown female has left in underwater sand. For a fish, survival itself is a romantic feat.
For other species, though, love demands supreme sacrifices. Shortly after alighting on the back of his mate, the male praying mantis probably had no idea what was in store. This would have been a good thing too, because as he continued to fertilize his partner's eggs, she twisted slowly around and bit off his head. She continued to put away his body parts until well nourished and thus more able to sustain her developing young. Luckily for most species, the urge to mate comes on only occasionally, usually in springtime. For love can hurt, particularly if your intended has difficulty telling a mate from a meal. Pity the poor male of the spider species Xysticus cristatus, for instance. His only hope of survival is to tie a much larger female to the ground with silk thread, and keep her there.
Every time a moth releases its attracting scent, or a bullfrog sings out its mating call, these animals are risking a blind date with some predator. Such alluring traits have long puzzled scientists, particularly those which seem not only risky but useless as well. Why, after all, should a frigate bird mate more if he puffs out an extra large red throat sac? How does ownership of such a thing indicate a superior individual? Until recently, the question stymied biologists, but then researchers in the U.S. and Sweden announced a possible answer. While studying widowbirds, among whom extravagant tail feathers are hip, they discovered that the longest-tailed males also carried a lower number of blood parasites. Sexual ornamentation seemed to be a means by which males could show off superfluous health and energy.
All of which may bring us to fast sports cars, flashy clothes and other accessories of the human suitor. After all, if he can afford dinner at the city's most expensive restaurant, chances are he could finance a baby too.
f:\12000 essays\sciences (985)\Genetics\Reptiles.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Essay on Reptiles
Reptiles are vertebrate, or backboned animals constituting the class
Reptilia and are characterized by a combination of features, none of which
alone could separate all reptiles from all other animals.
The characteristics of reptiles are numerous and therefore cannot be
explained in great detail in this report. In no special order, the
characteristics of reptiles are: cold-bloodedness; the presence of lungs;
direct development, without larval forms as in amphibians; a dry skin with
scales but not feathers or hair; an amniote egg; internal fertilization; a
three or four-chambered heart; two aortic arches (blood vessels) carrying
blood from the heart to the body, unlike mammals and birds that only have
one; a metanephric kidney; twelve pairs of cranial nerves; and skeletal
features such as limbs with usually five clawed fingers or toes, at least
two spinal bones associated with the pelvis, a single ball-and-socket
connection at the head-neck joint instead of two, as in advanced amphibians
and mammals, and an incomplete or complete partition along the roof of the
mouth, separating the food and air passageways so that breathing can
continue while food is being chewed.
These and other traditional defining characteristics of reptiles have been
subjected to considerable modification in recent times. The extinct flying
reptiles, called pterosaurs or pterodactyls, are now thought to have been
warm-blooded and covered with hair. The dinosaurs are also now
considered by many authorities to have been warm-blooded. The earliest
known bird, Archaeopteryx, is now regarded by many to have been a small
dinosaur, despite its covering of feathers. The extinct ancestors of the
mammals, the therapsids, or mammal-like reptiles, are also believed to have
been warm-blooded and haired. Proposals have been made to reclassify the
pterosaurs, dinosaurs, and certain other groups out of the class Reptilia
into one or more classes of their own.
The class Reptilia is divided into 6 to 12 subclasses by different
authorities. This includes living and extinct species. In addition, a number
of these subclasses are completely extinct. The subclasses contain about 24
orders, but only 4 of these are still represented by living animals.
Of the living orders of reptiles, two arose earlier than the age of
reptiles, when dinosaurs were dominant. Tuataras, of the order
Rhynchocephalia, are found only on New Zealand islands, whereas the equally
ancient turtles, order Chelonia, occur nearly worldwide. The order
Crocodilia emerged along with the dinosaurs. Snakes and lizards, order
Squamata, are today the most numerous reptile species.
The Rhynchocephalia constitute the oldest order of living reptiles; the
only surviving representative of the group is the tuatara, or sphenodon
(Sphenodon punctatus). Structurally, the tuatara is not much different from
related forms, also assigned to the order Rhynchocephalia, that may have
appeared as early as the Lower Triassic Period (about 250 000 000 years
ago). The tuatara has two pairs of well-developed limbs, a strong tail, and
a scaly crest down the neck and back. The scales, which cover the entire
animal, vary in size. The tuatara also has a bony arch, low on the skull
behind the eye, that is not found in lizards. Finally, the teeth of the
tuatara are acrodont - i.e., attached to the rim of the jaw rather than
inserted in sockets.
Chelonia, another ancient order of reptiles, is chiefly characterised by a
shell that encloses the vital organs of the body and more or less protects
the head and limbs. The protective shell, to which the evolutionary success
of turtles is largely attributed, is a casing of bone covered by horny
shields. Plates of bone are fused with ribs, vertebrae, and elements of
shoulder and hip girdles. There are many shell variations and modifications
from family to family, some of them extreme. At its highest development, the
shell is not only surprisingly strong but also completely protective. The
lower shell (plastron) can be closed so snugly against the upper (carapace)
that a thin knife blade could not be inserted between them.
A third order of the class Reptilia is Crocodilia. Crocodiles are generally
large, ponderous, amphibious animals, somewhat lizardlike in appearance, and
carnivorous. They have powerful jaws with conical teeth and short legs and
clawed, webbed toes. The tail is long and massive and the skin thick and
plated. Their snout is relatively long and varies considerably in
proportions and shape. The thick, large horny plates that cover most of the
body are generally arranged in a regular pattern. The form of the body is adapted
to its amphibious way of life. Finally, the elongated body with its long,
muscular paddletail is well suited to rapid swimming.
The final living order of the class Reptilia is Squamata. Both snakes and
lizards are classified in this order, but lizards are separated into their
own suborder, Sauria. Lizards can be distinguished from snakes by the
presence of two pairs of legs, external ear openings, and movable eyelids,
but these convenient external diagnostic features, while absent in snakes,
are also absent in some lizards. Lizards can be precisely separated from
snakes, however, on the basis of certain internal characteristics. All
lizards have at least a vestige of a pectoral girdle (skeletal supports for
the front limbs) and sternum (breastbone). The lizard's brain is not
totally enclosed in a bony case but has a small region at the front covered
only by a membranous septum. The lizard's kidneys are positioned
symmetrically and to the rear; in snakes the kidneys are far forward, with
the right kidney placed farther front than the left. Finally, the lizard's
ribs are never forked, as are one or two pairs in the snake.
A natural classification of reptiles is more difficult than that of many
animals because the main evolution of the group was during Mesozoic time (a
time of transition in the history of life and in the evolution of the
Earth); 13 of 17 recognized orders are extinct. There is still little
agreement on reptile taxonomy among herpetologists and paleontologists.
Even the major categories of reptile classification are still in dispute. On
the other hand, there is general agreement that the base reptilian stock is
the Cotylosauria, which evolved from an amphibian labyrinthodont stock. It
is also quite clear that the cotylosaurs early divided into two lines, one
of which (the pelycosaurs) represented the stock that gave rise to the
mammals. Another branch led to all of the other reptiles, and later, to the
birds as well. Thus, most of the questions of reptilian evolution and
classification deal with the reptiles' interrelationship, rather than with
their relationships with other animals.
f:\12000 essays\sciences (985)\Genetics\Solar Cells.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Solar cells
Solar cells today are mostly made of silicon, one of the
most common elements on Earth. The crystalline silicon solar
cell was one of the first types to be developed and it is still
the most common type in use today. They do not pollute the
atmosphere and they leave behind no harmful waste products.
Photovoltaic cells work effectively even in cloudy weather and
unlike solar heaters, are more efficient at low temperatures.
They do their job silently and there are no moving parts to wear
out. It is no wonder that one marvels at how such a device would
function.
To understand how a solar cell works, it is necessary to go
back to some basic atomic concepts. In the simplest model of the
atom, electrons orbit a central nucleus, composed of protons and
neutrons. Each electron carries one negative charge and each
proton one positive charge. Neutrons carry no charge. Every atom
has the same number of electrons as there are protons, so, on the
whole, it is electrically neutral. The electrons have discrete
kinetic energy levels, which increase with the orbital radius.
When atoms bond together to form a solid, the electron energy
levels merge into bands. In electrical conductors, these bands
are continuous but in insulators and semiconductors there is an
"energy gap", in which no electron orbits can exist, between the
inner valence band and outer conduction band [Book 1]. Valence
electrons help to bind together the atoms in a solid by orbiting
2 adjacent nuclei, while conduction electrons, being less
closely bound to the nuclei, are free to move in response to an
applied voltage or electric field. The fewer conduction electrons
there are, the higher the electrical resistivity of the material.
In semiconductors, the materials from which solar cells are
made, the energy gap Eg is fairly small. Because of this,
electrons in the valence band can easily be made to jump to the
conduction band by the injection of energy, either in the form of
heat or light [Book 4]. This explains why the high resistivity of
semiconductors decreases as the temperature is raised or the
material illuminated. The excitation of valence electrons to the
conduction band is best accomplished when the semiconductor is in
the crystalline state, i.e. when the atoms are arranged in a
precise geometrical formation or "lattice".
At room temperature and low illumination, pure or so-called
"intrinsic" semiconductors have a high resistivity. But the
resistivity can be greatly reduced by "doping", i.e. introducing
a very small amount of impurity, of the order of one in a million
atoms. There are 2 kinds of dopant. Those which have more valence
electrons than the semiconductor itself are called "donors" and
those which have fewer are termed "acceptors" [Book 2].
In a silicon crystal, each atom has 4 valence electrons,
which are shared with a neighbouring atom to form a stable
tetrahedral structure. Phosphorus, which has 5 valence electrons,
is a donor and causes extra electrons to appear in the conduction
band. Silicon so doped is called "n-type" [Book 5]. On the other
hand, boron, with a valence of 3, is an acceptor, leaving so-
called "holes" in the lattice, which act like positive charges
and render the silicon "p-type"[Book 5]. The drawings in Figure
1.2 are 2-dimensional representations of n-and p-type silicon
crystals, in which the atomic nuclei in the lattice are
indicated by circles and the bonding valence electrons are shown
as lines between the atoms. Holes, like electrons, will
move under the influence of an applied voltage but, as the
mechanism of their movement is valence electron substitution from
atom to atom, they are less mobile than the free conduction
electrons [Book 2].
In an n-on-p crystalline silicon solar cell, a shallow
junction is formed by diffusing phosphorus into a boron-doped
base. At the junction, conduction electrons from donor atoms in
the n-region diffuse into the p-region and combine with holes in
acceptor atoms, producing a layer of negatively-charged impurity
atoms. The opposite action also takes place, holes from acceptor
atoms in the p-region crossing into the n-region, combining with
electrons and producing positively-charged impurity atoms [Book
4]. The net result of these movements is the disappearance of
conduction electrons and holes from the vicinity of the junction
and the establishment there of a reverse electric field, which is
positive on the n-side and negative on the p-side. This reverse
field plays a vital part in the functioning of the device. The
area in which it is set up is called the "depletion area" or
"barrier layer"[Book 4].
When light falls on the front surface, photons with energy
in excess of the energy gap (1.1 eV in crystalline silicon)
interact with valence electrons and lift them to the conduction
band. This movement leaves behind holes, so each photon is said
to generate an "electron-hole pair" [Book 2]. In the crystalline
silicon, electron-hole generation takes place throughout the
thickness of the cell, in concentrations depending on the
irradiance and the spectral composition of the light. Photon
energy is inversely proportional to wavelength. The highly
energetic photons in the ultra-violet and blue part of the
spectrum are absorbed very near the surface, while the less
energetic longer wave photons in the red and infrared are
absorbed deeper in the crystal and further from the junction
[Book 4]. Most are absorbed within a thickness of 100 µm.
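To illustrate the relationship just described between photon energy, wavelength, and the 1.1 eV gap, here is a brief illustrative Python sketch (the example wavelengths are arbitrary choices, not values from the cited books).

# Illustrative sketch: photon energy E = hc/lambda, compared with the
# 1.1 eV band gap of crystalline silicon quoted above.
# The example wavelengths below are arbitrary, chosen only for illustration.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt
BAND_GAP_EV = 1.1

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Longest wavelength that can still lift a valence electron across the gap.
cutoff_nm = H * C / (BAND_GAP_EV * EV) * 1e9
print(f"cut-off wavelength for a 1.1 eV gap: ~{cutoff_nm:.0f} nm")  # ~1130 nm

for nm in (400, 700, 1200):  # blue, red, and infrared examples
    e = photon_energy_ev(nm)
    outcome = "generates an electron-hole pair" if e > BAND_GAP_EV else "passes through (energy below gap)"
    print(f"{nm} nm photon: {e:.2f} eV -> {outcome}")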
The electrons and holes diffuse through the crystal in an
effort to produce an even distribution. Some recombine after a
lifetime of the order of one millisecond, neutralizing their
charges and giving up energy in the form of heat. Others reach
the junction before their lifetime has expired. There they are
separated by the reverse field, the electrons being accelerated
towards the negative contact and the holes towards the positive
[Book 5]. If the cell is connected to a load, electrons will be
pushed from the negative contact through the load to the positive
contact, where they will recombine with holes. This constitutes
an electric current. In crystalline silicon cells, the current
generated by radiation of a particular spectral composition is
directly proportional to the irradiance [Book 2]. Some types of
solar cell, however, do not exhibit this linear relationship.
The silicon solar cell has many advantages: it is highly
reliable, photovoltaic power plants can be put up easily and
quickly, and such plants are quite modular and can respond to
sudden changes in solar input which occur when clouds pass by.
However, there are still some major problems. Solar cells still
cost too much for mass use and are relatively inefficient, with
conversion efficiencies of 20% to 30%. With time, both of these
problems may be solved through mass production and new
technological advances in semiconductors.
Bibliography
1) Green, Martin Solar Cells, Operating Principles, Technology
and System Applications. New Jersey, Prentice-Hall, 1989. pg
104-106
2) Hovel, Howard Solar Cells, Semiconductors and Semimetals. New
York, Academic Press, 1990. pg 334-339
3) Newham, Michael ,"Photovoltaics, The Sunrise Industry", Solar
Energy, October 1, 1989, pp 253-256
4) Pulfrey, Donald Photovoltaic Power Generation. Oxford, Van
Norstrand Co., 1988. pg 56-61
5) Treble, Fredrick Generating Electricity from the Sun. New
York, Pergamon Press, 1991. pg 192-195
f:\12000 essays\sciences (985)\Genetics\SOLAR ENERGY.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
f:\12000 essays\sciences (985)\Genetics\The Big Bang.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It remains a mystery how the universe began and whether, and when, it will end. Astronomers construct hypotheses called cosmological models that try to find the answer. There are two types of models: Big Bang and Steady State. However, many lines of observational evidence indicate that the Big Bang theory best explains the creation of the universe.
The Big Bang model postulates that about 15 to 20 billion years ago, the universe violently exploded into being, in an event called the Big Bang. Before the Big Bang, all of the matter and radiation of our present universe were packed together in the primeval fireball--an extremely hot, dense state from which the universe rapidly expanded [1]. The Big Bang was the start of time and space. The matter and radiation of that early stage rapidly expanded and cooled. Several million years later, it condensed into galaxies. The universe has continued to expand, and the galaxies have continued moving away from each other ever since. Today the universe is still expanding, as astronomers have observed.
The Steady State model says that the universe does not evolve or change in time. There was no beginning in the past, nor will there be change in the future. This model assumes the perfect cosmological principle. This principle says that the universe is the same everywhere on the large scale, at all times [2]. It maintains the same average density of matter forever.
There is observational evidence that the Big Bang model is more reasonable than the Steady State model. The first is the redshift of distant galaxies. Redshift is a Doppler effect: if a galaxy is moving away, its observed spectral lines will be shifted toward the red end of the spectrum. The faster the galaxy moves, the greater the shift. If the galaxy is moving closer, the spectral lines will show a blue shift. If the galaxy is not moving, there is no shift at all. However, as astronomers have observed, the more distant a galaxy is from Earth, the more redshift it shows on the spectrum. This means the further a galaxy is, the faster it moves. Therefore, the universe is expanding, and the Big Bang model seems more reasonable than the Steady State model.
The second piece of observational evidence is the radiation produced by the Big Bang. The Big Bang model predicts that the universe should still be filled with a small remnant of radiation left over from the original violent explosion of the primeval fireball in the past. The primeval fireball would have sent strong shortwave radiation in all directions into space. In time, that radiation would spread out, cool, and fill the expanding universe uniformly. By now it would strike Earth as microwave radiation. In 1965 physicists Arno Penzias and Robert Wilson detected microwave radiation coming equally from all directions in the sky, day and night, all year [3]. And so it appears that astronomers have detected the fireball radiation that was produced by the Big Bang. This casts serious doubt on the Steady State model, which could not explain the existence of this radiation and so cannot best explain the beginning of the universe.
Since the Big Bang model is the better model, the existence and the future of the universe can also be explained. Around 15 to 20 billion years ago, time began. The points that were to become the universe exploded in the primeval fireball called the Big Bang. The exact nature of this explosion may never be known.
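As a concrete illustration of the redshift argument above, the short Python sketch below converts an observed shift of a spectral line into a recession velocity and, via Hubble's law, a rough distance; the wavelengths and the Hubble constant are illustrative assumptions, not values from the essay's sources.

# Illustrative sketch of the Doppler redshift argument: a spectral line observed
# at a longer wavelength than its rest wavelength implies the galaxy is receding.
# The wavelengths and Hubble constant below are illustrative assumptions.

C_KM_S = 299_792.5   # speed of light, km/s
H0 = 70.0            # assumed Hubble constant, km/s per megaparsec

rest_nm = 656.3      # hydrogen-alpha line at rest (nm)
observed_nm = 660.0  # hypothetical observed wavelength (nm)

z = (observed_nm - rest_nm) / rest_nm   # redshift
velocity_km_s = C_KM_S * z              # valid approximation for small z
distance_mpc = velocity_km_s / H0       # Hubble's law: v = H0 * d

print(f"redshift z = {z:.4f}")
print(f"recession velocity ~ {velocity_km_s:.0f} km/s")
print(f"implied distance ~ {distance_mpc:.0f} megaparsecs")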
Recent theoretical breakthroughs based on the principles of quantum theory have suggested, however, that space, and the matter within it, masks an infinitesimal realm of utter chaos, where events happen randomly, in a state called quantum weirdness.4 Before the universe began, this chaos was all there was. At some time, a portion of this randomness happened to form a bubble, with a temperature in excess of 10 to the power of 34 degrees Kelvin. Being that hot, it naturally expanded. For an extremely brief period, billionths of billionths of a second, it inflated. At the end of the period of inflation, the universe may have had a diameter of a few centimetres. The temperature had by then cooled enough for particles of matter and antimatter to form, and they instantly destroyed each other, producing fire and a thin haze of matter--apparently because slightly more matter than antimatter had been formed.5 The fireball, and the smoke of its burning, was the universe at an age of a trillionth of a second.

The temperature of the expanding fireball dropped rapidly, cooling to a few billion degrees in a few minutes. Matter continued to condense out of energy: first protons and neutrons, then electrons, and finally neutrinos. After about an hour, the temperature had dropped below a billion degrees, and protons and neutrons combined to form hydrogen, deuterium, and helium. Within a billion years, this cloud of energy, atoms, and neutrinos had cooled enough for galaxies to form. The expanding cloud has cooled still further, until today its temperature is a couple of degrees above absolute zero.

In the future, the universe may end up in one of two possible situations. From the initial Big Bang, the universe attained a certain speed of expansion. If that speed is greater than the universe's own escape velocity, the universe will never stop expanding; such a universe is said to be open. If the velocity of expansion is slower than the escape velocity, the universe will eventually reach the limit of its outward thrust, just as a ball thrown into the air comes to the top of its arc, slows, stops, and starts to fall. The crash of the long fall may be the Big Bang at the beginning of another universe, as the fireball formed at the end of the contraction leaps outward in another great expansion.6 Such a universe is said to be closed, and pulsating.

If the universe has achieved escape velocity, it will continue to expand forever. The stars will redden and die, and the universe will be like a limitless empty haze, expanding infinitely into the darkness. This space will become even emptier as the fundamental particles of matter age and decay through time. As the years stretch on into infinity, almost nothing will remain: a few simple systems, such as positrons and electrons orbiting each other at distances of hundreds of astronomical units.7 These particles will spiral slowly toward each other until, on touching, they vanish in a last flash of light.

In the end, the Big Bang model is still only a hypothesis. No one knows for sure exactly how the universe began or how it will end. The Big Bang model is, however, the most logical and reasonable theory that modern science has to explain the universe.
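The open-versus-closed distinction sketched above is often phrased as a comparison between the actual average density of the universe and a critical density, rho_c = 3 H^2 / (8 pi G). The snippet below computes that critical density for an assumed Hubble constant; both the Hubble constant and the sample density are illustrative assumptions, not values taken from this essay.

import math

# Critical density of the universe: rho_c = 3 * H0^2 / (8 * pi * G).
# If the actual density exceeds rho_c the universe is closed; below it, open.
G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2
H0_KM_S_PER_MPC = 70.0               # assumed Hubble constant
MPC_IN_M = 3.0857e22                 # one megaparsec in metres

H0_SI = H0_KM_S_PER_MPC * 1000.0 / MPC_IN_M          # convert to 1/s
rho_critical = 3.0 * H0_SI ** 2 / (8.0 * math.pi * G)  # kg/m^3

sample_density = 3.0e-27  # hypothetical measured average density, kg/m^3
fate = "closed (will recollapse)" if sample_density > rho_critical else "open (expands forever)"

print(f"critical density = {rho_critical:.2e} kg/m^3")
print(f"sample density   = {sample_density:.2e} kg/m^3 -> {fate}")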
ENDNOTES
1. Dinah L. Mache, Astronomy, New York: John Wiley & Sons, Inc., 1987, p. 128.
2. Ibid., p. 130.
3. Joseph Silk, The Big Bang, New York: W.H. Freeman and Company, 1989, p. 60.
4. Terry Holt, The Universe Next Door, New York: Charles Scribner's Sons, 1985, p. 326.
5. Ibid., p. 327.
6. Charles J. Caes, Cosmology: The Search for the Order of the Universe, USA: Tab Books Inc., 1986, p. 72.
7. John Gribbin, In Search of the Big Bang, New York: Bantam Books, 1986, p. 273.
BIBLIOGRAPHY
Boslough, John. Stephen Hawking's Universe. New York: Cambridge University Press, 1980.
Caes, Charles J. Cosmology: The Search for the Order of the Universe. USA: Tab Books Inc., 1986.
Gribbin, John. In Search of the Big Bang. New York: Bantam Books, 1986.
Holt, Terry. The Universe Next Door. New York: Charles Scribner's Sons, 1985.
Kaufmann, William J. III. Astronomy: The Structure of the Universe. New York: Macmillan Publishing Co., Inc., 1977.
Mache, Dinah L. Astronomy. New York: John Wiley & Sons, Inc., 1987.
Silk, Joseph. The Big Bang. New York: W.H. Freeman and Company, 1989.
------------------------------------------------------------------------------
f:\12000 essays\sciences (985)\Genetics\The DustCloud Hypothesis.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Dust-Cloud Hypothesis
The universe contains huge clouds made up of very large amounts
of dust and gas. About 6,000,000,000 (six billion) years ago, one of
these clouds began to condense. Gravitation--the pull that all
objects in the universe have for one another--pulled the gas and
dust particles together. As the dust cloud condensed, it began
to spin. It spun faster and faster and flattened as it spun. It
became shaped like a pancake that is thick at the centre and thin
at the edges.
The slowly spinning centre condensed to make the sun. But the
outer parts of the pancake, or disk, were spinning too fast to
condense in one piece. They broke up into smaller swirls, or
eddies, which condensed separately to make the planets.
The forming sun and planets were made up mostly of gas. They
contained much more gas than dust. The earth was far bigger than
it is now and probably weighed 500 times as much.
The large body of dust and gas forming the sun collapsed rapidly
to a much smaller size. The pressure that resulted from the
collapse caused the sun to become very hot and to glow brightly.
The newly born sun began to heat up the swirling eddy of gas and
dust that was to become the earth. The gas expanded, and some of
it flowed away into space. The dust that remained behind then
collected together because of gravity. Although the shrinking
earth generated a lot of heat, most of this heat was lost into
space. Therefore, the original earth was most likely solid, not
molten.
This hypothesis was developed by the scientist Harold C. Urey in
1952, and it is also known as Urey's hypothesis. He showed that
methane, ammonia, and water are the stable forms of carbon,
nitrogen, and oxygen if an excess of hydrogen is present. Cosmic
dust clouds, from which the earth formed, contained a great
excess of hydrogen.
f:\12000 essays\sciences (985)\Genetics\THE EFFECTS OF ALTITUDE ON HUMAN PHYSIOLOGY.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE EFFECTS OF ALTITUDE ON HUMAN PHYSIOLOGY
Changes in altitude have a profound effect on the human body. The body
attempts to maintain a state of homeostasis or balance to ensure the optimal
operating environment for its complex chemical systems. Any change from this
homeostasis is a change away from the optimal operating environment. The body
attempts to correct this imbalance. One such imbalance is the effect of
increasing altitude on the body's ability to provide adequate oxygen to be
utilized in cellular respiration. With an increase in elevation, a typical
occurrence when climbing mountains, the body is forced to respond in various
ways to the changes in external
environment. Foremost of these changes is the diminished ability to obtain
oxygen from the atmosphere. If the adaptive responses to this stressor are
inadequate, the performance of body systems may decline dramatically. If
prolonged, the results can be serious or even fatal. In looking at the effect
of altitude on body functioning we first must understand what occurs in the
external environment at higher elevations and then observe the important
changes that occur in the internal environment of the body in response.
HIGH ALTITUDE
In discussing altitude change and its effect on the body, mountaineers
generally define altitude according to the scale of high (8,000 - 12,000
feet), very high (12,000 - 18,000 feet), and extremely high (18,000+ feet)
(Hubble, 1995). A common misperception of the change in external environment
with increased altitude is that there is decreased oxygen. This is not
correct as the concentration of oxygen at sea level is about 21% and stays
relatively unchanged until over 50,000 feet (Johnson, 1988).
What is really happening is that the atmospheric pressure is decreasing and
subsequently the amount of oxygen available in a single breath of air is
significantly less. At sea level the barometric pressure averages 760 mmHg
while at 12,000 feet it is only 483 mmHg. This decrease in total atmospheric
pressure means that there are 40% fewer oxygen molecules per breath at this
altitude compared to sea level (Princeton, 1995).
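The arithmetic behind that figure can be sketched quickly. Since the number of oxygen molecules in a given breath scales with total barometric pressure (for a fixed oxygen fraction and lung volume), the pressures quoted above are enough to estimate the reduction. The short Python sketch below is only an illustration of that reasoning, not part of the cited sources.

# Rough check of the "40% fewer oxygen molecules per breath" claim.
# Assumes a fixed oxygen fraction and that molecules per breath scale
# with barometric pressure (ideal-gas behaviour, same lung volume).
P_SEA_LEVEL_MMHG = 760.0   # average barometric pressure at sea level
P_12000_FT_MMHG = 483.0    # barometric pressure at 12,000 feet (from the text)

fraction_remaining = P_12000_FT_MMHG / P_SEA_LEVEL_MMHG
reduction_percent = (1.0 - fraction_remaining) * 100.0

print(f"Oxygen per breath at 12,000 ft: {fraction_remaining:.0%} of sea level")
print(f"Reduction: about {reduction_percent:.0f}%")  # roughly 36%, close to the quoted 40%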
HUMAN RESPIRATORY SYSTEM
The human respiratory system is responsible for bringing oxygen into the
body and transferring it to the cells where it can be utilized for cellular
activities. It also removes carbon dioxide from the body. The respiratory
system draws air initially either through the mouth or nasal passages. Both
of these passages join behind the hard palate to form the pharynx. At the
base of the pharynx are two openings. One, the esophagus, leads to the
digestive system while the other, the glottis, leads to the lungs. The
epiglottis covers the glottis when swallowing so that food does not enter the
lungs. When the epiglottis is not covering the opening to the lungs air may
pass freely into and out of the trachea.
The trachea sometimes called the "windpipe" branches into two bronchi which
in turn lead to a lung. Once in the lung the bronchi branch many times into
smaller bronchioles which eventually terminate in small sacs called alveoli.
It is in the alveoli that the actual transfer of oxygen to the blood takes
place.
The alveoli are shaped like inflated sacs and exchange gas through a
membrane. The passage of oxygen into the blood and carbon dioxide out of the
blood is dependent on three major factors: 1) the partial pressure of the
gases, 2) the area of the pulmonary surface, and 3) the thickness of the
membrane (Gerking, 1969). The membranes in the alveoli provide a large
surface area for the free exchange of gases. The typical thickness of the
pulmonary membrane is less than the thickness of a red blood cell. The
pulmonary surface and the thickness of the alveolar membranes are not
directly affected by a change in altitude. The partial pressure of oxygen,
however, is directly related to altitude and affects gas transfer in the
alveoli.
GAS TRANSFER
To understand gas transfer it is important to first understand something
about the
behavior of gases. Each gas in our atmosphere exerts its own pressure and
acts independently of the others. Hence the term partial pressure refers to
the contribution of each gas to the entire pressure of the atmosphere. The
average pressure of the atmosphere at sea level is approximately 760 mmHg.
This means that the pressure is great enough to support a column of mercury
(Hg) 760 mm high. To figure the partial pressure of oxygen you start with the
percentage of oxygen present in the atmosphere which is about 20%. Thus
oxygen will constitute 20% of the total atmospheric pressure at any given
level. At sea level the total atmospheric pressure is 760 mmHg so the partial
pressure of O2 would be approximately 152 mmHg.
760 mmHg x 0.20 = 152 mmHg
A similar computation can be made for CO2 if we know that its concentration
is approximately 0.04%. The partial pressure of CO2 would then be about 0.304
mmHg at sea level.
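The same partial-pressure rule can be written out as a tiny calculation. The sketch below simply multiplies total pressure by each gas fraction; the sea-level numbers match those used in the text, and the 12,000-foot pressure is the 483 mmHg figure quoted earlier.

def partial_pressure(total_mmhg, fraction):
    """Partial pressure of one gas = total pressure x its fraction of the mixture."""
    return total_mmhg * fraction

SEA_LEVEL_MMHG = 760.0

print(partial_pressure(SEA_LEVEL_MMHG, 0.20))    # O2  -> 152.0 mmHg
print(partial_pressure(SEA_LEVEL_MMHG, 0.0004))  # CO2 -> 0.304 mmHg
# At 12,000 feet (483 mmHg total), the oxygen partial pressure falls accordingly:
print(partial_pressure(483.0, 0.20))             # O2  -> 96.6 mmHg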
Gas transfer at the alveoli follows the rule of simple diffusion. Diffusion
is movement of molecules along a concentration gradient from an area of high
concentration to an area of lower concentration. Diffusion is the result of
collisions between molecules. In areas of higher concentration there are more
collisions. The net effect of this greater number of collisions is a movement
toward an area of lower concentration. In Table 1 it is apparent that the
concentration gradient favors the diffusion of oxygen into and carbon dioxide
out of the blood (Gerking, 1969). Table 2 shows the decrease in partial
pressure of oxygen at increasing altitudes (Guyton, 1979).
Table 1
GAS              ATMOSPHERIC AIR       ALVEOLUS            VENOUS BLOOD
OXYGEN           152 mmHg (20%)        104 mmHg (13.6%)    40 mmHg
CARBON DIOXIDE   0.304 mmHg (0.04%)    40 mmHg (5.3%)      45 mmHg
Table 2
ALTITUDE (ft.)   BAROMETRIC PRESSURE (mmHg)   Po2 IN AIR (mmHg)   Po2 IN ALVEOLI (mmHg)   ARTERIAL OXYGEN SATURATION (%)
0                760                          159*                104                     97
10,000           523                          110                 67                      90
20,000           349                          73                  40                      70
30,000           226                          47                  21                      20
40,000           141                          29                  8                       5
50,000           87                           18                  1                       1
*This value differs from Table 1 because the author of Table 2 used 21% for the
concentration of O2, while the author of Table 1 chose to use 20%.
CELLULAR RESPIRATION
In a normal, non-stressed state, the respiratory system transports oxygen
from the lungs to the cells of the body where it is used in the process of
cellular respiration. Under normal conditions this transport of oxygen is
sufficient for the needs of cellular respiration. Cellular respiration
converts the energy in chemical bonds into energy that can be used to power
body processes. Glucose is the molecule most often used to fuel this process
although the body is capable of using other organic molecules for energy.
The transfer of oxygen to the body tissues is often called internal
respiration (Grollman, 1978). The process of cellular respiration is a
complex series of chemical steps that ultimately allow for the breakdown of
glucose into usable energy in the form of ATP (adenosine triphosphate). The
three main steps in the process are: 1) glycolysis, 2) Krebs cycle, and 3)
electron transport system. Oxygen is required for these processes to function
at an efficient level. Without the presence of oxygen the pathway for energy
production must proceed anaerobically. Anaerobic respiration, sometimes called
lactic acid fermentation, produces significantly less ATP (2 instead of 36/38)
and due to this great inefficiency will quickly exhaust the available supply
of glucose. Thus the anaerobic pathway is not a permanent solution for the
provision of energy to the body in the absence of sufficient oxygen.
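To put that inefficiency in rough numbers, the sketch below compares how much glucose the two pathways would need for the same ATP output, using the 2 versus 36 ATP-per-glucose figures quoted above. It is only an arithmetic illustration of the ratio, not a physiological model.

ATP_PER_GLUCOSE_AEROBIC = 36    # lower end of the 36/38 range quoted in the text
ATP_PER_GLUCOSE_ANAEROBIC = 2

atp_needed = 1000  # arbitrary ATP demand, chosen just for comparison

glucose_aerobic = atp_needed / ATP_PER_GLUCOSE_AEROBIC
glucose_anaerobic = atp_needed / ATP_PER_GLUCOSE_ANAEROBIC

print(f"Glucose needed aerobically:   {glucose_aerobic:.1f} molecules")
print(f"Glucose needed anaerobically: {glucose_anaerobic:.1f} molecules")
print(f"Anaerobic demand is {glucose_anaerobic / glucose_aerobic:.0f}x higher")  # 18x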
The supply of oxygen to the tissues is dependent on: 1) the efficiency with
which blood is oxygenated in the lungs, 2) the efficiency of the blood in
delivering oxygen to the tissues, 3) the efficiency of the respiratory
enzymes within the cells to transfer hydrogen to molecular oxygen (Grollman,
1978). A deficiency in any of these areas can result in the body cells not
having an adequate supply of oxygen. It is this inadequate supply of oxygen
that results in difficulties for the body at higher elevations.
ANOXIA
A lack of sufficient oxygen in the cells is called anoxia. Sometimes the
term hypoxia, meaning less oxygen, is used to indicate an oxygen debt. While
anoxia literally means "no oxygen" it is often used interchangeably with
hypoxia. There are different types of anoxia based on the cause of the oxygen
deficiency. Anoxic anoxia refers to defective oxygenation of the blood in the
lungs. This is the type of oxygen deficiency that is of concern when
ascending to greater altitudes with a subsequent decreased partial pressure
of O2. Other types of oxygen deficiencies include: anemic anoxia (failure of
the blood to transport adequate quantities of oxygen), stagnant anoxia (the
slowing of the circulatory system), and histotoxic anoxia (the failure of
respiratory enzymes to adequately function).
Anoxia can occur temporarily during normal respiratory system regulation of
changing cellular needs. An example of this would be climbing a flight of
stairs. The increased oxygen demand of the cells in providing the mechanical
energy required to climb ultimately produces a local hypoxia in the muscle
cell. The first noticeable response to this external stress is usually an
increase in breathing rate. This is called increased alveolar ventilation.
The rate of our breathing is determined by the need for O2 in the cells and
is the first response to hypoxic conditions.
BODY RESPONSE TO ANOXIA
If increases in the rate of alveolar respiration are insufficient to supply
the oxygen needs of the cells the respiratory system responds by general
vasodilation. This allows a greater flow of blood in the circulatory system.
The sympathetic nervous system also acts to stimulate vasodilation within the
skeletal muscle. At the level of the capillaries the normally closed
precapillary sphincters open allowing a large flow of blood through the
muscles. In turn the cardiac output increases both in terms of heart rate and
stroke volume. The stroke volume, however, does not substantially increase in
the non-athlete (Langley et al., 1980). This demonstrates an obvious benefit
of regular exercise and physical conditioning particularly for an individual
who will be exposed to high altitudes. The heart rate is increased by the
action of the
adrenal medulla which releases catecholamines. These catecholamines work
directly on the myocardium to strengthen contraction. Another compensation
mechanism is the release of renin by the kidneys. Renin leads to the
production of angiotensin which serves to increase blood pressure (Langley,
Telford, and Christensen, 1980). This helps to force more blood into
capillaries. All of these changes are a regular and normal response of the
body to external stressors. The question with altitude change then becomes:
what happens when these normal responses can no longer meet the oxygen
demand of the cells?
ACUTE MOUNTAIN SICKNESS
One possibility is that Acute Mountain Sickness (AMS) may occur. AMS is
common at high altitudes. At elevations over 10,000 feet, 75% of people will
have mild symptoms (Princeton, 1995). The occurrence of AMS is dependent upon
the elevation, the rate of ascent to that elevation, and individual
susceptibility.
Acute Mountain Sickness is labeled as mild, moderate, or severe dependent on
the presenting symptoms. Many people will experience mild AMS during the
process of acclimatization to a higher altitude. In this case symptoms of AMS
would usually start 12-24 hours after arrival at a higher altitude and begin
to decrease in severity about the third day. The symptoms of mild AMS are
headache, dizziness, fatigue, shortness of breath, loss of appetite, nausea,
disturbed sleep, and a general feeling of malaise (Princeton, 1995). These
symptoms tend to increase at night when respiration is slowed during sleep.
Mild AMS does not interfere with normal activity and symptoms generally
subside spontaneously as the body acclimatizes to
the higher elevation.
Moderate AMS includes a severe headache that is not relieved by medication,
nausea and vomiting, increasing weakness and fatigue, shortness of breath,
and decreased coordination called ataxia (Princeton, 1995). Normal activity
becomes difficult at this stage of AMS, although the person may still be able
to walk on their own. A test for moderate AMS is to have the individual
attempt to walk a straight line heel to toe. The person with ataxia will be
unable to walk a straight line. If ataxia is indicated it is a clear sign
that immediate descent is required. In the case of hiking or climbing it is
important to get the affected individual to descend before the ataxia reaches
the point where they can no longer walk on their own.
Severe AMS presents all of the symptoms of mild and moderate AMS at an
increased level of severity. In addition there is a marked shortness of
breath at rest, the inability to walk, a decreasing mental clarity, and a
potentially dangerous fluid buildup in the lungs.
ACCLIMATIZATION
There is really no cure for Acute Mountain Sickness other than
acclimatization or
descent to a lower altitude. Acclimatization is the process, over time, where
the body adapts to the decrease in partial pressure of oxygen molecules at a
higher altitude. The major cause of altitude illnesses is a rapid increase in
elevation without an appropriate acclimatization period. The process of
acclimatization generally takes 1-3 days at the new altitude. Acclimatization
involves several changes in the structure and function of the body. Some of
these changes happen immediately in response to reduced levels of oxygen
while others are a slower adaptation. Some of the most significant changes
are:
The chemoreceptor mechanism increases the depth of alveolar ventilation. This
allows for an increase in ventilation of about 60% (Guyton, 1969). This is an
immediate response to oxygen debt. Over a period of several weeks the
capacity to increase alveolar ventilation may increase 600-700%.
Pressure in pulmonary arteries is increased, forcing blood into portions of
the
lung which are normally not used during sea level breathing.
The body produces more red blood cells in the bone marrow to carry oxygen.
This process may take several weeks. Persons who live at high altitude often
have red blood cell counts 50% greater than normal.
The body produces more of the enzyme 2,3-bisphosphoglycerate, which facilitates
the release of oxygen from hemoglobin to the body tissues (Tortora, 1993).
The acclimatization process is slowed by dehydration, over-exertion, alcohol
and other depressant drug consumption. Longer term changes may include an
increase in the size of the alveoli, and decrease in the thickness of the
alveoli membranes. Both of these changes allow for more gas transfer.
TREATMENT FOR AMS
The symptoms of mild AMS can be treated with pain medications for headache.
Some physicians recommend the medication Diamox (Acetazolamide). Both Diamox
and headache medication appear to reduce the severity of symptoms, but do not
cure the underlying problem of oxygen debt. Diamox, however, may allow the
individual to metabolize more oxygen by breathing faster. This is especially
helpful at night when respiratory drive is decreased. Since it takes a while
for Diamox to have an effect, it is advisable to start taking it 24 hours
before going to altitude. The recommendation of the Himalayan Rescue
Association Medical Clinic is 125 mg.
twice a day. The standard dose has been 250 mg., but their research shows no
difference with the lower dose (Princeton, 1995). Possible side effects
include tingling of the lips and finger tips, blurring of vision, and
alteration of taste. These side effects may be reduced with the 125 mg. dose.
Side effects subside when the drug is stopped. Diamox is a sulfonamide drug,
so people who are allergic to sulfa drugs should not take
Diamox. Diamox has also been known to cause severe allergic reactions in
people with no previous history of Diamox or sulfa
allergies. A trial course of the drug is usually conducted before going to a
remote location where a severe allergic reaction could prove difficult to
treat. Some recent data suggests that the medication Dexamethasone may have
some effect in reducing the risk of mountain sickness when used in
combination with Diamox (University of Iowa, 1995).
Moderate AMS requires advanced medications or immediate descent to reverse
the problem. Descending even a few hundred feet may help and definite
improvement will be seen in descents of 1,000-2,000 feet. Twenty-four hours
at the lower altitude will result in significant improvements. The person
should remain at lower altitude until symptoms have subsided (up to 3 days).
At this point, the person has become acclimatized to that altitude and can
begin ascending again. Severe AMS requires immediate descent to lower
altitudes (2,000 - 4,000 feet). Supplemental oxygen may be helpful in
reducing the effects of altitude sicknesses but does not overcome all the
difficulties that may result from the lowered barometric pressure.
GAMOW BAG
This invention has revolutionized field treatment of high altitude
illnesses. The Gamow bag is basically a portable sealed chamber with a pump.
The principle of operation is identical to the hyperbaric chambers used in
deep sea diving. The person is placed inside the bag and it is inflated.
Pumping the bag full of air raises the pressure inside, effectively increasing
the partial pressure of oxygen and therefore simulating a descent to lower
altitude. In as little
as 10 minutes the bag creates an atmosphere that corresponds to that at 3,000
- 5,000 feet lower. After 1-2 hours in the bag, the
person's body chemistry will have reset to the lower altitude. This lasts for
up to 12 hours outside of the bag which should be enough time to travel to a
lower altitude and allow for further acclimatization. The bag and pump weigh
about 14 pounds and are now carried on most major high altitude expeditions.
The Gamow bag is particularly important where the possibility of immediate
descent is not feasible.
OTHER ALTITUDE-INDUCED ILLNESS
There are two other severe forms of altitude illness. Both of these happen
less
frequently, especially to those who are properly acclimatized. When they do
occur, it is usually the result of an increase in elevation that is too rapid
for the body to adjust properly. For reasons not entirely understood, the
lack of oxygen and reduced pressure often results in leakage of fluid through
the capillary walls into either the lungs or the brain. Continuing to higher
altitudes without proper acclimatization can lead to potentially serious,
even life-threatening illnesses.
HIGH ALTITUDE PULMONARY EDEMA (HAPE)
High altitude pulmonary edema results from fluid buildup in the lungs. The
fluid in the lungs interferes with effective oxygen exchange. As the
condition becomes more severe, the level of oxygen in the bloodstream
decreases, and this can lead to cyanosis, impaired cerebral function, and
death. Symptoms include shortness of breath even at rest, tightness in the
chest,
marked fatigue, a feeling of impending suffocation at night, weakness, and a
persistent productive cough bringing up white, watery, or frothy fluid
(University of Iowa, 1995). Confusion and irrational behavior are signs
that insufficient oxygen is reaching the brain. One of the methods for
testing for HAPE is to check recovery time after exertion. Recovery time
refers to the time after exertion that it takes for heart rate and
respiration to return to near normal. An increase in this time may mean fluid
is building up in the lungs. If a case of HAPE is suspected an immediate
descent is a necessary life-saving measure (2,000 - 4,000 feet). Anyone
suffering
from HAPE must be evacuated to a medical facility for proper follow-up
treatment. Early data suggests that nifedipine may have a protective effect
against high altitude pulmonary edema (University of Iowa, 1995).
HIGH ALTITUDE CEREBRAL EDEMA (HACE)
High altitude cerebral edema results from the swelling of brain tissue from
fluid leakage. Symptoms can include headache, loss of coordination (ataxia),
weakness, and decreasing levels of consciousness including disorientation,
loss of memory, hallucinations, psychotic behavior, and coma. It generally
occurs after a week or more at high altitude. Severe instances can lead to
death if not treated quickly. Immediate descent is a necessary life-saving
measure (2,000 - 4,000 feet). Anyone suffering from HACE must be evacuated
to a medical facility for proper follow-up
treatment.
CONCLUSION
The importance of oxygen to the functioning of the human body is critical.
Thus the effect of decreased partial pressure of oxygen at higher altitudes
can be pronounced. Each individual adapts at a different speed to exposure to
altitude and it is hard to know who may be affected by altitude sickness.
There are no specific factors such as age, sex, or physical condition that
correlate with susceptibility to altitude sickness. Most people can go up to
8,000 feet with minimal effect. Acclimatization is often accompanied by fluid
loss, so the ingestion of large amounts of fluid to remain properly hydrated
is important (at least 3-4 quarts per day). Urine output should be copious
and clear.
From the available studies on the effect of altitude on the human body, it
appears important to recognize symptoms early and
take corrective measures. Light activity during the day is better than
sleeping because respiration decreases during sleep, exacerbating the
symptoms. The avoidance of tobacco, alcohol, and other depressant drugs,
including barbiturates, tranquilizers, and sleeping pills, is important.
These depressants further decrease the respiratory drive during sleep
resulting in a worsening of the symptoms. A high carbohydrate diet (more than
70% of your calories from carbohydrates) while at altitude also
appears to facilitate recovery.
A little planning and awareness can greatly decrease the chances of altitude
sickness. Recognizing early symptoms can result in the avoidance of more
serious consequences of altitude sickness. The human body is a complex
biochemical organism that requires an adequate supply of oxygen to function.
The ability of this organism to adjust to a wide range of conditions is a
testament to its survivability. Adjusting to the decreased partial pressure of
oxygen with increasing altitude is one of these adaptations.
Sources:
Electric Differential Multimedia Lab, Travel Precautions and Advice,
University of Iowa Medical College, 1995.
Gerking, Shelby D., Biological Systems, W.B. Saunders Company, 1969.
Grolier Electronic Publishing, The New Grolier Multimedia Encyclopedia, 1993.
Grollman, Sigmund, The Human Body: Its Structure and Physiology, Macmillan
Publishing Company, 1978.
Guyton, Arthur C., Physiology of the Human Body, 5th Edition, Saunders
College Publishing, 1979.
Hackett, P., Mountain Sickness, The Mountaineers, Seattle, 1980.
Hubble, Frank, High Altitude Illness, Wilderness Medicine Newsletter,
March/April 1995.
Hubble, Frank, The Use of Diamox in the Prevention of Acute Mountain
Sickness, Wilderness Medicine Newsletter, March/April 1995.
Isaac, J. and Goth, P., The Outward Bound Wilderness First Aid Handbook,
Lyons & Burford, New York, 1991.
Johnson, T., and Rock, P., Acute Mountain Sickness, New England Journal of
Medicine, 1988:319:841-5
Langley, Telford, and Christensen, Dynamic Anatomy and Physiology,
McGraw-Hill, 1980.
Princeton University, Outdoor Action Program, 1995.
Starr, Cecie, and Taggart, Ralph, Biology: The Unity and Diversity of Life,
Wadsworth Publishing Company, 1992.
Tortora, Gerard J., and Grabowski, Sandra, Principles of Anatomy and
Physiology, Seventh Edition, Harper Collins College Publishers, 1993.
Wilkerson, J., Editor, Medicine for Mountaineering, Fourth Edition, The
Mountaineers, Seattle, 1992.
f:\12000 essays\sciences (985)\Genetics\The Great Imposter.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Great Imposters
Finding good day care can certainly pose a problem these days,
unless, of course, you're an African widow bird. When it comes
time for a female widow bird to lay her eggs, she simply locates
the nest of a nearby Estrildid finch and surreptitiously drops
the eggs inside.
That's the last the widow bird ever sees of her offspring.
But not to worry, because the Estrildid finch will take devoted
care of the abandoned birds as if they were her own.
And who's to tell the difference? Though adult widow birds
and Estrildid finches don't look at all alike, their eggs do. Not
only that, baby widow birds are dead ringers for Estrildid finch
chicks, both having the same colouration and markings. They even
act and sound the same, thus ensuring that the widow bird
nestlings can grow up among their alien nestmates with no risk of
being rejected by their foster parents.
MASTERS OF DISGUISE
Things aren't always as they seem, and nowhere is this more
true than in nature, where dozens of animals (and plants) spend
their time masquerading as others. So clever are their disguises
that you've probably never known you were being fooled by spiders
impersonating ants, squirrels that look like shrews, worms
copying sea anemones, and roaches imitating ladybugs. There are
even animals that look like themselves, which can also be a form
of impersonation. The phenomenon of mimicry, as it's called
by biologists, was first noted in the mid-1800s by an English
naturalist, Henry W. Bates. Watching butterflies in the forests
of Brazil, Bates discovered that many members of the Pieridae
butterfly family did not look anything like their closest
relatives. Instead they bore a striking resemblance to members of
the Heliconiidae butterfly family.
Upon closer inspection, Bates found that there was a major
advantage in mimicking the Heliconiids. Fragile, slow-moving and
brightly coloured, the Heliconiids are ideal targets for
insectivorous birds. Yet, birds never touch them because they
taste so bad.
Imagine that you're a delicious morsel of butterfly. Wouldn't
it be smart to mimic the appearance of an unpalatable Heliconiid
so that no bird would bother you either? That's what Bates
concluded was happening in the Brazilian jungle among the
Pieridae. Today, the imitation of an inedible species by an
edible one is called Batesian mimicry.
Since Bates' time, scientists have unmasked hundreds of cases
of mimicry in nature. It hasn't always been an easy job, either,
as when an animal mimics not one, but several other species. In
one species of butterfly common in India and Sri Lanka, the
female appears in no less than three versions. One type resembles
the male while the others resemble two entirely different species
of inedible butterflies.
Butterflies don't "choose" to mimic other butterflies in the
same way that you might pick out a costume for a masquerade ball.
True, some animals, such as the chameleon, do possess the
ability to change body colour and blend in with their
surroundings. But most mimicry arises through evolutionary
change. A mutant appears with characteristics similar to those of
a better-protected animal. This extra protection offers the
mutant the opportunity to reproduce unharmed, and eventually
flourish alongside the original.
In the world of mimics, the ant is another frequently
copied animal, though not so much by other ants as by other
insects and even spiders. Stoop down to inspect an ant colony,
and chances are you'll find a few interlopers that aren't really
ants at all but copycat spiders (or wasps or flies). One way you
might distinguish between host and guest is by counting legs:
Ants have six legs while spiders have eight. Look carefully and
you might see a few spiders running around on six legs while
holding their other two out front like ant feelers.
COPYCATS
Mimicry can not only be a matter of looking alike, it can
also involve acting the same. In the Philippine jungle there is a
nasty little bug, the bombardier beetle. When threatened by a
predator, it sticks its back end in the air, like a souped-up
sports car, and lets out a blast of poisonous fluid. In the same
jungle lives a cricket that is a living xerox of the bombardier
beetle. When approached by a predator, the cricket will also prop
up its behind -- a tactic sufficient to scare off the enemy,
even though no toxic liquid squirts out.
Going one step further than that is a native of the United
States, the longicorn beetle, which resembles the unappetizing
soft-shelled beetle. Not content to merely look alike, the
longicorn beetle will sometimes attack a soft-shelled beetle and
devour part of its insides. By ingesting the soft-shelled
beetle's bad-tasting body fluid, the longicorn beetle gives
itself a terrible taste, too!
Protection is by no means the only advantage that mimicry
offers. Foster care can be another reward, as proven by the
African widow bird. And then there's the old wolf-in-sheep's-
clothing trick, which biologists call aggressive mimicry.
The master practitioner of aggressive mimicry is the ocean-
going anglerfish. Looking like a stone overgrown with algae, the
anglerfish disguises itself among the rocks and slime on the
ocean bottom. Protruding from its mouth is a small appendage, or
lure, with all the features of a fat, juicy pink worm.
The anglerfish lacks powerful teeth so it can't take a tight
grip on its prey. Instead, it waits motionless until a small fish
shows interest in the lure, and then wiggles the lure in front of
the fish's mouth. When the small fish is just about to snap at
the lure, the angler swallows violently, sucking the fish down
its hatch. Diner instantly becomes dinner.
SEXUAL IMITATORS
Of all the many impostures found in nature, probably the
sneakiest are those of the sexual mimics: males who imitate
females to gain an advantage at mating time. Here in Ontario we
have a sexual mimic, the bluegill fish. Male bluegills come in
two types: the standard male and the satellite male, which looks
just like a female bluegill.
In preparation for mating, the standard male bluegill
performs the job of building the nest, where he bides his time
until a female enters it to spawn. Satellite fish don't build
nests, choosing instead to hover around the nest of a standard
male until the moment when a pregnant female enters. The
satellite fish follows her into the nest, deceiving the
nestbuilder into believing that he is now in the presence of two
females. The three fish swim around together, and when the female
drops her eggs, both males release a cloud of sperm. Some of the
eggs are fertilized by the resident male, some by the satellite
male, thus passing on passing on different sets of male genes to
a new generation of bluegills.
Another case of sexual mimicry has recently been uncovered in
Manitoba among the red-sided garter snakes. The little town of
Inwood, Manitoba and the surrounding countryside is garter snake
heaven, where you can find the largest snake colonies on Earth.
Every spring, the red-sided garter snake engages in a curious
mating ritual. Soon after spring thaw, the males emerge first
from their winter cave and hover nearby. The females then slither
out a few at a time, each one exuding a special "perfume" which
signals to the fellows that she's ready to mate. At first whiff
of this lovely odour, a mass of frenetic males immediately
besieges the female, wrapping her up in a "mating ball" of 10, 20
or sometimes as many as 100 writhing males, all hoping to get
lucky.
Scientists have now discovered that some male red-sided
garters give off the same perfume as the female, and they do this
while intertwined in the mating ball. Male and female red-sided
garters look exactly alike, so the male with the female scent can
effectively distract many of the males from the real female,
giving the imposter a better shot at getting close to the female
and impregnating her.
Males passing as females, fish as bait, beetles as ants --
amidst all this confusion, it still sometimes pays to just be
yourself, which could certainly be the motto of the amazing hair-
streak butterfly family.
Decorating the hair-streak's lower hind wings are spots that
look like eyes, and out-growths that look like antennae, creating
the illusion that the butterfly has a second head. Whenever the
hair-streak alights, it jerks its dummy antennae up and down
while keeping its real antennae immobile. Presumably, this dummy
head exists to distract predators. If so, we finally have the
first scientific proof that two heads are better than one.
f:\12000 essays\sciences (985)\Genetics\The Greenhouse Effect.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Greenhouse Effect
The greenhouse effect occurs when gases such as methane, carbon dioxide,
nitrogen oxide and CFCs trap heat in the atmosphere by acting like the panes
of glass in a car. "The glass" lets the sunlight in to make heat, but when the
heat tries to get out, the gases absorb it. Holding this heat in causes
heat waves, droughts and climate changes which could alter our way of
living.
The main gases that cause the greenhouse effect are water vapor, carbon
dioxide (CO2), and methane, which comes mainly from animal manure.
Other gases like nitrogen oxide and man-made gases called
chlorofluorocarbons get caught in the atmosphere as well. The decay of
animals and respiration are two main but natural sources of carbon dioxide.
In my opinion, the people of the whole world should try to slow down the
emission of greenhouse gases and/or find ways to balance the gases so the
climate doesn't change so rapidly. If it did, we would be forced to adapt to
the new climate that we brought upon ourselves. If there were international
cooperation to put a damper on the production of chlorofluorocarbons and
slow down the use of fossil fuels, it would dramatically slow down the
process of "global warming."
Over the last 100 years the global temperature has been increasing slowly
but steadily. Since 1980 the temperature has risen 0.2 degrees C (0.4 degrees F)
each decade. Scientists predict that if we continue putting the same
amount of gas into the atmosphere, by the year 2030 the temperature will be
rising as much as 0.5 degrees C (0.9 degrees F) or more per decade. Overall,
the global temperature could rise anywhere from 5 to 9 degrees over the
next fifty years.
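Those per-decade figures can be turned into a rough cumulative estimate. The sketch below simply multiplies each quoted warming rate by five decades and converts to Fahrenheit; it is only arithmetic on the numbers quoted above, not a climate model, and the choice to hold each rate constant for fifty years is an assumption made for illustration.

# Simple arithmetic on the warming rates quoted above; not a climate model.
rates_c_per_decade = {"current rate": 0.2, "projected 2030 rate": 0.5}
decades = 5  # fifty years

for label, rate in rates_c_per_decade.items():
    total_c = rate * decades
    total_f = total_c * 9.0 / 5.0
    print(f"{label}: {total_c:.1f} C ({total_f:.1f} F) over {decades * 10} years")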
If the temperatures do rise as predicted several things could happen. The
increases of temperature could alter the growth of crops in areas near the
equator due to insufficient rain and heat. This could really hurt countries
that rely on imported food. With the high temperatures the polar ice caps
could melt and cause the sea water level to go up 1 to 3 feet. This increase
could take out small islands, coastal cities and some shallow rivers. The
Everglades in Florida would be almost if not totally wiped right off the map.
The Everglades are home to many animals and plants. If the area were
flooded, they would all have to move northward across very dry land,
which they would not be able to endure for very long. As the hot
temperatures spread southward and northward, tropical diseases will
spread with them. Diseases that were once confined to Mexico may occur in the
Carolinas or eventually Vermont. These new diseases will be hard to deal
with, causing many more deaths and illnesses than before. The financial
problem with this is that the flooding will cause dams to be built and cities
to be reconstructed. The shortage of food will cause the price of food to
go up, and with all the diseases we will need more medical supplies and
workers. All of this combined could and will cost a lot of money if we don't
do something about it now.
The computer models can't predict exactly what the climate is going to be in
the future, but they can come close to what it will be like down the road.
Scientists proved this by predicting with computers what the climate was in
the past. Then by looking back in records, they found that the predictions
were close to being right.
The "Topex" (Topographic Experiment) satellite has been collecting
information on the changes of the sea level, and temperatures across the
globe and the amount of gases emitted into the atmosphere. Each day the
satellite makes 500,000 measurements, each at a different place on the earth.
Measurements are all made between 66 degrees north and south latitudes.1
The Cretaceous occurred over 100 million years ago. It was the warmest
period we have knowledge of yet. There was so much carbon dioxide in the
air that the oceans rose many meters. North America was flooded and split
apart into two pieces. The temperature then was more than fifteen degrees
greater than the average temperature today.
Scientists believe that the tilt of the earth's axis reverses direction in a
cycle of about every 10,000 years. While going through this cycle, it changes
the climate of different areas. Right now it is moving so that North America
will be closest to the sun in the winter. Seasons become more extreme
when the opposite happens. This controls the cycle of ice ages. Volcanoes,
when they erupt, send clouds of dust into the air, blocking sunlight. This
would cool the earth off more. Oceans are known to absorb CO2 because of
the ocean currents and the action of plankton. There is some evidence that
there is naturally rapid climate change between each Ice Age, which
confuses the whole global warming idea.
I think every human being should take part in the fight to stop global
warming. Governments are the key to this, and they had better do something
soon or it will be too late. First, the American government should sponsor
a meeting between the nations of this world. They should establish a
committee for handling the money, politics, and scientific research needed
to help cut back the emission of gases into the atmosphere. Every country
would contribute by donating money. Each country would be required to give
0.01 (one percent) of its GNP to this committee. If a country refuses, it will
be boycotted and nothing will be sold to it by the participating countries.
Global warming is a big threat to our nation and the world. If we do not act
now, it may be too late. Of course, there is no sure way of telling if there
actually is a greenhouse effect, but let's not take any chances. Look at what is
happening to this world, and you will see that there is a pollution problem.
There are steps being taken at this moment to reduce the gases put into the
air but it still isn't enough. We need to cut back more by taking a few easy
steps. Plant a tree, or take a bus to work instead of driving your own car.
Those things may not seem like a lot, but if more and more people do it, it
will make a difference.
f:\12000 essays\sciences (985)\Genetics\THE HUMAN EYE IN SPACE.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
===========================================================================
THE HUMAN EYE IN SPACE
by Lambert Parker (edited)
ASTRONOMY AND SPACE SIG
---------------------------------------------------------------------------
Human visual hardware is the result of a billion years of evolution within the
earth's atmosphere, where light is scattered by molecules of air, moisture,
particulate matter, etc. As we ascend through an atmosphere of decreasing
density, however, the distribution of light changes, and our visual hardware
receives its data in a different format.
Some aspects to consider:
1. Visual acuity is the degree to which the details and contours of objects
are perceived. Visual acuity is usually defined in terms of the minimum
separable.* A large variety of factors influences this complex phenomenon,
including:
# Optical factors - the state of the image-forming mechanisms of the eye.
# Retinal factors, such as the state of the cones.
# Stimulus factors, such as illumination, brightness of the stimulus, contrast
between the stimulus and background, and length of time exposed to the
stimulus.
* Minimum separable: the shortest distance by which two lines can be separated
and still be perceived as two lines.
"During the day, the earth has a predominantly bluish cast..... I could detect
individual houses and streets in the low humidity and cloudless areas such as
the Himalaya mountain area.... I saw a steam locomotive by seeing the smoke
first..... I also saw the wake of a boat on a large river in the Burma-India
area... and a bright orange light from the British oil refinery to the south
of the city (Perth, Australia)."
The above observation was made by Gordon Cooper in Faith 7 [1963], and it
generated much skepticism in the light of the thesis by Muckler and Narvan,
"Visual Surveillance and Reconnaissance from Space Vehicles," in which they
determined that a visual angle of ten minutes was the operational minimum and
that the minimum resolvable object length [M.R.O.L.] at an altitude of 113
miles would be 1730 ft (a short calculation sketch at the end of this piece
works through that figure). This limitation of acuity was revised the next
year to 0.5 seconds of arc for an extended contrasting line and 15 seconds of
arc for the minimum separation of two points sharply contrasting with the
background. Orbiting at 237 miles in Skylab it was possible to see the entire
east coast [Canada to the Florida Keys] and to resolve details of a
500-foot-long bridge based on inference. Of interest is the fact that even
though the mechanical eye [camera systems] can resolve objects more than
fifty times better than the human eye, without the human ability to infer,
interpretation of the data is meaningless.
Conclusion: Visual acuity in space exceeds the earth norm when viewing objects
with linear extension, such as roads, airfields, the wakes of ships, etc.
2. Stereoscopic vision: the perception of two images as one by means of fusing
the impressions on both retinas. In space one has to deal with a poverty of
reference points. For hardware that evolved in a reference-oriented paradigm,
this poses a grave problem. Once out of the spacecraft and gazing outward, the
eye can fix only on the stars [without even a twinkle], which for all
practical purposes are at infinity, i.e. without stereoscopic vision "empty
field myopia" prevails. Empty field myopia is a condition in which the eyes,
having nothing in the visual field upon which to focus, focus automatically at
about 9 feet. An astronaut or cosmonaut experiencing empty field myopia,
focusing at 9 ft, would be unable to see objects at a range as close as 100 ft.
If another spacecraft, satellite, meteorite or L.E.M. entered his field of
vision, he would not be able to determine either its size or its distance.
Solution: Man does not face any hostile environment in his birthday suit; the
clothing industry and the need for walk-in closets say it all. In space we
will wear our exoskeletons just as we wear winter jackets in winter, and we
will wear our helmets with visors to maintain our internal environment,
filter out all those nasty rads, etc. Since empty field myopia is secondary to
the loss of reference points, why not just build reference points into the
visor itself, giving the eye something to fix on -- create a virtual reality?
This line of speculation leads to amazing concepts. To learn more about the
concept of the virtual universe in the helmet, read the article "Big Picture"
by Steven L. Thompson, illustrated by Dale Glasgow, in Air & Space [a
Smithsonian publication], about the creation of a virtual universe with new
computer and software technology in the helmets of F-16 fighter pilots --
this is not a theoretical possibility but a reality. A must read.
Note: One aspect of adaptation to microgravity [space sickness] is an
increased dependence on visual as opposed to vestibular mechanisms in the
stabilization of the retinal image during head movements, which only
underscores the importance of being aware of our visual abilities.
3. PERCEPTION OF COLORS. Studies done by Russian cosmonauts on the perception
of colors in space suggest a reduction in the perceived brightness of all
colors. The greatest degradation seems to affect purple, azure, and green.
4. LIGHT FLASHES. Not the so-called fireflies noted in orbital flights by
astronauts [shown graphically in the movie The Right Stuff], but faint spots
or flashes of light seen after dark adaptation in the cabin of the Apollo
missions. They are generally described as white or colorless and are
classified as three types:
# Described as "spots" or "starlike" 66% of the time, appearing in both eyes
simultaneously or one eye at a time.
# Described as "streaks" 25% of the time.
# Described as a "lightning discharge seen behind clouds" 9% of the time.
It is of interest that the very same astronauts who reported them in the
Apollo flights failed to see them in previous Gemini flights. After the Apollo
flights this phenomenon was noted by the crews of all three Skylab missions,
especially when they crossed the South Atlantic Anomaly. W. Zachary Osborne,
Ph.D., and Lawrence Pinsky, Ph.D., at the University of Houston, and J. Vernon
Bailey at the Lyndon B. Johnson Space Center investigated this phenomenon and
concluded that the flashes were due to heavy cosmic radiation penetrating
through the craft and impinging on the retina. The fact that this was noted
only after the eyes were dark adapted points to a retinal interaction rather
than the optic nerve per se.
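As promised in the visual acuity section above, here is a small back-of-the-envelope check on the minimum-resolvable-object-length figure. It simply computes the length subtended by a 10-arc-minute visual angle at an altitude of 113 miles; the calculation is an illustration only and is not part of the original article.

import math

# Minimum resolvable object length (MROL) for a given visual angle and range.
# Figures taken from the text: 10 arc minutes of visual angle, 113 miles altitude.
visual_angle_deg = 10.0 / 60.0    # 10 arc minutes expressed in degrees
altitude_ft = 113 * 5280          # 113 miles converted to feet

mrol_ft = altitude_ft * math.tan(math.radians(visual_angle_deg))
print(f"MROL at 113 miles: about {mrol_ft:.0f} ft")  # roughly 1,740 ft, close to the quoted 1730 ft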
f:\12000 essays\sciences (985)\Genetics\The Origin of the Moon.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Evolution of solar system
Theories of The Origin of the Moon
The Moon is the only natural satellite of Earth. Its distance from Earth is
about 384,400 km, its diameter 3,476 km, and its mass 7.35 x 10^22 kg. Through
history it has had many names: it was called Luna by the Romans, and Selene
and Artemis by the Greeks. It has, of course, been known since prehistoric
times. It is
the second brightest object in the sky after the Sun. Due to its size and
composition, the Moon is sometimes classified as a terrestrial "planet"
along with
Mercury, Venus, Earth and Mars.
Origin of the Moon
Before the modern age of space exploration, scientists had three major
theories for the origin of the moon: fission from the earth; formation in
earth
orbit; and formation far from earth. Then, in 1975, having studied moon
rocks
and close-up pictures of the moon, scientists proposed what has come to be
regarded as the most probable of the theories of formation, planetesimal
impact
or giant impact theory.
Formation by Fission from the Earth
The modern version of this theory proposes that the moon was spun off from
the earth when the earth was young and rotating rapidly on its axis. This
idea
gained support partly because the density of the moon is the same as that
of
the rocks just below the crust, or upper mantle, of the earth. A major
difficulty
with this theory is that the angular momentum of the earth, in order to
achieve
rotational instability, would have to have been much greater than the
angular
momentum of the present earth-moon system.
Formation in Orbit Near the Earth
This theory proposes that the earth and moon, and all other bodies of the
solar
system, condensed independently out of the huge cloud of cold gases and
solid
particles that constituted the primordial solar nebula. Much of this
material
finally collected at the center to form the sun.
Formation Far from Earth
According to this theory, independent formation of the earth and moon, as
in
the above theory, is assumed; but the moon is supposed to have formed at a
different place in the solar system, far from earth. The orbits of the
earth and
moon then, it is surmised, carried them near each other so that the moon
was
pulled into permanent orbit about the earth.
Planetesimal Impact
First published in 1975, this theory proposes that early in the earth's
history,
well over 4 billion years ago, the earth was struck by a large body called
a
planetesimal, about the size of Mars. The catastrophic impact blasted
portions
of the earth and the planetesimal into earth orbit, where debris from the
impact
eventually coalesced to form the moon. This theory, after years of research
on
moon rocks in the 1970s and 1980s, has become the most widely accepted
one for the moon's origin. The major problem with the theory is that it
would
seem to require that the earth melted throughout, following the impact,
whereas
the earth's geochemistry does not indicate such a radical melting.
Planetesimal Impact Theory (Giant Impact Theory)
As the Apollo project progressed, it became noteworthy that few scientists
working on the project were changing their minds about which of these three
theories they believed was most likely correct, and each of the theories
had its
vocal advocates. In the years immediately following the Apollo project,
this
division of opinion continued to exist. One observer of the scene, a
psychologist,
concluded that the scientists studying the Moon were extremely dogmatic and
largely immune to persuasion by scientific evidence. But the facts were
that the
scientific evidence did not single out any one of these theories. Each one
of them
had several grave difficulties as well as one or more points in its favor.
In the mid-1970s, other ideas began to emerge. William K. Hartmann and D.R.
Davis (Planetary Sciences Institute in Tucson AZ) pointed out that the
Earth, in
the course of its accumulation, would undergo some major collisions with
other
bodies that have a substantial fraction of its mass and that these
collisions would
produce large vapor clouds that they believe might play a role in the
formation of
the Moon. A.G.W. Cameron and William R. Ward (Harvard University,
Cambridge MA) pointed out that a collision with a body having at least the
mass
of Mars would be needed to give the Earth the present angular momentum of
the
Earth-Moon system, and they also pointed out that such a collision would
produce a large vapor cloud that would leave a substantial amount of
material in
orbit about the Earth, the dissipation of which could be expected to form
the
Moon. The Giant Impact Theory of the origin of the Moon has emerged from
these suggestions.
These ideas attracted relatively little comment in the scientific community
during the next few years. However, in 1984, when a scientific conference on
the origin of the Moon was organized in Kona, Hawaii, a surprising number of
papers were submitted that discussed various aspects of the giant impact
theory. At the same meeting, the three classical theories of formation of the
Moon were discussed in depth, and it was clear that all continued to present
grave difficulties. The giant impact theory emerged as the "fashionable"
theory, but everyone agreed that it was relatively untested and that it would
be appropriate to reserve judgement on it until a great deal of testing had
been conducted. The next step clearly called for numerical simulations on
supercomputers.
"The author in collaboration with Willy Benz (Harvard), Wayne L.Slattery at
(Los
Alamos National Laboratory, Los Alamos NM), and H. Jay Melosh (University
of
Arizona, Tucson, AZ) undertook such simulations. They have used an
unconventional technique called smooth particle hydrodynamics to simulate
the
planetary collision in three dimensions. With this technique, we have
followed a
simulated collision (with some set of initial conditions) for many hours of
real
time, determining the amount of mass that would escape from the Earth-Moon
system, the amount of mass that would be left in orbit, as well as the
relative
amounts of rock and iron that would be in each of these different mass
fractions.
We have carried out simulations for a variety of different initial
conditions and
have shown that a "successful" simulation was possible if the impacting
body had
a mass not very different from 1.2 Mars masses, that the collision occurred
with
approximately the present angular momentum of the Earth-Moon system, and
that the impacting body was initially in an orbit not very different from
that of the
Earth.
"The Moon is a compositionally unique body, having not more than 4% of its
mass in the form of an iron core (more likely only 2% of its mass in this
form).
This contrasts with the Earth, a typical terrestrial planet in bulk
composition,
which has about one-third of its mass in the form of the iron core. Thus, a
simulation could not be regarded as 'successful' unless the material left
in orbit
was iron free or nearly so and was substantially in excess of the mass of
the
Moon. This uniqueness highly constrains the conditions that must be imposed
on
the planetary collision scenario. If the Moon had a composition typical of
other
terrestrial planets, it would be far more difficult to determine the
conditions that
led to its formation.
The early part of this work was done using Los Alamos Cray X-MP computers.
This work established that the giant impact theory was indeed promising and
that a collision of slightly more than a Mars mass with the Earth, with the
Earth-Moon angular momentum in the collision, would put almost 2 Moon masses
of rock into orbit, forming a disk of material that is a necessary precursor
to the formation of the Moon from much of this rock. Further development of
the hydrodynamics code made it possible to do the calculations on fast small
computers that are dedicated to them.
Subsequent calculations have been done at Harvard. The first set of
calculations was intended to determine whether the revised hydrodynamics code
reproduced previous results (it did). Subsequent calculations have been
directed toward determining whether "successful" outcomes are possible with a
wider range of initial conditions than were first used. The results indicate
that the impactor must approach the Earth with a velocity (at large distances)
of not more than about 5 kilometers per second. This restricts the orbit of
the impactor to lie near that of the Earth. It has also been found that
collisions involving larger impactors with more than the Earth-Moon angular
momentum can give "successful" outcomes. This initial condition is reasonable
because it is known that the Earth-Moon system has lost angular momentum due
to solar tides, but the amount is uncertain. These calculations are still in
progress and will probably take 1 or 2 years more to complete.
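The 5-kilometer-per-second limit is easier to appreciate when it is turned
into an actual impact speed. A minimal sketch (added here as an illustration;
the escape-speed figure is an assumed standard value) applies energy
conservation, v_hit = sqrt(v_inf^2 + v_esc^2): even the slowest possible
encounter already strikes at Earth's escape speed, so the limit means the
impact can only be slightly faster than that minimum.

    import math

    V_ESC = 11.2  # km/s, Earth's escape speed (assumed round value)

    # Impact speed for several approach speeds "at large distances" (v_inf).
    for v_inf in (0.0, 5.0, 10.0):
        v_hit = math.sqrt(v_inf**2 + V_ESC**2)
        print(f"v_inf = {v_inf:4.1f} km/s  ->  impact speed ~ {v_hit:.1f} km/s")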
Bibliography
Cameron, A.G.W. "Giant Impact Theory of the Origin of the Moon." In Planetary
Geosciences-1988, NASA SP-498. Harvard-Smithsonian Center for Astrophysics,
Cambridge, MA 02138.
Cleggett-Haleim, Paula, and Michael Mewhinney. "Earth's Rotation Rate May Be
Due to Early Collisions." NASA Ames Research Center, Mountain View, Calif.,
Release 93-012.
Hartmann, W. K. 1969. "Terrestrial, Lunar, and Interplanetary Rock
Fragmentation."
Hartmann, W. K. 1977. "Large Planetesimals in the Early Solar System."
1 "Landmarks of the Moon," Microsoft(r) Encarta(r) 96 Encyclopedia.
(c) 1993-1995 Microsoft Corporation. All rights reserved.
2 "Characteristics of the Moon," Microsoft(r) Encarta(r) 96
Encyclopedia. (c) 1993-1995 Microsoft Corporation. All rights
reserved.
f:\12000 essays\sciences (985)\Genetics\The Turner Syndrome.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Turner Syndrome
There are many possible reasons why a child may grow slowly,
including: hereditary factors (short parents), diseases affecting
the kidneys; heart, lungs or intestines; hormone imbalances;
severe stress or emotional deprivation; infections in the womb
before birth; bone diseases; and genetic or chromosomal
abnormalities.
Turner Syndrome (known as Ullrich-Turner Syndrome in Germany) is a
congenital condition. A German doctor named Ullrich published his
description of it in 1930, and in 1938 the American doctor Henry
Turner recognized a pattern of short stature and incomplete sexual
maturation in otherwise normal females and published a comprehensive
medical description of the syndrome. It was not until 1959 that it
became clear the syndrome was due to a lack of sex chromosome
material. Turner's Syndrome is a rare chromosomal disorder that
affects approximately one in 2,500 females. Females normally have
two X chromosomes. However, in those with Turner's Syndrome, one X
chromosome is absent or damaged.
OTHER NAMES
Depending on the doctor, Turner's Syndrome may be diagnosed under
one of the following alternative names: 45,X Syndrome,
Bonnevie-Ullrich Syndrome, Monosomy X (Chromosome X Monosomy),
Morgagni-Turner-Albright Syndrome, and Ovarian Dwarfism (Turner
type), among others.
SYNDROME CHARACTERISTICS
A reduced growth in height is the commonest visible characteristic
of the syndrome (the average adult height is 4 feet 8 inches) and
may be the only sign before puberty. Body proportions are normal.
Girls with this syndrome may have many middle ear infections during
childhood; if not treated, these chronic infections could cause
hearing loss. Up to the age of about 2 years, growth in height is
approximately normal, but then it lags behind that of other girls.
Greatly reduced growth in height of a female child should lead to a
chromosome test if no diagnosis has already been made. Early
diagnosis is very important in order to be able to give enough
correct information to the parents, and gradually to the child
herself, so that she has the best possibilities for development.
Early diagnosis is also important in case surgical treatment of the
congenital heart defect (seen in about 20 per cent of cases) is
indicated. The commonest defect is a narrowing of the main artery
from the heart (aortic coarctation). A regular
ultrasound examination of the heart is recommended in all girls
with Turner syndrome. This type of heart defect is present at
birth and can be corrected surgically. If not present at birth,
it does not develop later in life. The lack of sexual development
at puberty is the second most common characteristic. Having
abnormal chromosomes does not mean that girls with Turner
syndrome are not really female; they are women with a condition
that causes short stature and poorly developed ovaries.
Affected females may also exhibit the following symptoms:
infertility, kidney abnormalities, thyroid disease, heart
disease, abnormalities of the eyes and bones, webbed neck, low
hairline, drooping of eyelids, abnormal bone development, absent
or retarded development of physical features that normally appear
at puberty, decrease of tears when crying, simian crease (a
single crease in the palm), a "caved-in" appearance to the chest,
puffy hands and feet, unusual shape and rotation of ears,
soft upturned nails, small lower jaw, arms turned out slightly at
elbows, shortened 4th fingers, small brown moles, hearing loss,
scoliosis, cataracts, scars, overweight, and Crohn's disease.
Chromosome Patterns
The normal female has 46 chromosomes, of which the two sex
chromosomes are X-chromosomes. This is expressed as 46,XX (men:
46,XY). In many women with Turner syndrome, one of the
X-chromosomes is completely missing, and the chromosome pattern
then becomes 45,X. The X-chromosome in women carries genes related
to the production of ovaries and female sex hormones, and to growth
in height. Girls with
Turner syndrome are generally born with ovaries and egg cells,
but the lack of X-chromosome material results in gradual
disappearance of the egg cells. At some point in childhood,
usually during the first years of life, no egg cells remain.
Ovaries are then present without egg cells. The female sex
hormone (oestrogen), necessary for the girl to start
puberty, is usually produced by the egg cells. In girls with
Turner syndrome, insufficient oestrogen is produced for the girl
to start puberty. Neither spontaneous development of puberty nor
the accompanying growth spurts are seen in girls with Turner
syndrome.
Cause
The cause of the change in the sex chromosome that leads to
Turner syndrome is not known, nor is it known why the different
symptoms related to the syndrome develop. Other chromosome
defects are more often seen in children of older mothers, and
sometimes of older fathers, but this does not seem to apply
to Turner syndrome. In some cases of Turner's Syndrome,
however, one X chromosome is missing from the cells (45,X);
research studies suggest that approximately 40 percent of these
individuals may have some Y chromosomal material in addition to
the one X chromosome. In other affected females, both X-
chromosomes may be present, but one may have genetic defects. In
still other cases, some cells may have the normal pair of X-
chromosomes while other cells do not (45,X/46,XX mosaicism).
Although the exact cause of Turner's Syndrome is not known, it is
believed that the disorder may result from an error during the
division (meiosis) of a parent's sex cells. In about 80 per cent
of cases, where the whole of one of the X-chromosomes is missing
(45,X), it is thought to be the father's X-chromosome that is
absent.
Treatment
In recent years, the condition has been treated using growth
hormone, given as injections under the skin (subcutaneously) in the
evening. Experience with this treatment is still relatively
limited. So far, researchers
think it will be possible to increase the final height by 5-10
centimeters, depending on the duration of treatment. Treatment is
started at slightly different ages in different countries, but
often at an age of about 6-7 years. In order to achieve puberty
development and a body height of more than the average of about
146 cm, oestrogen and growth hormone must be given. Oestrogen
therapy should start after one has taken the growth hormone for
at least two years (about 12-13 years old is average), using
small doses at first to promote sexual development. Oestrogen is
gradually supplemented by progesterone (a stronger female
hormone) as the girl matures. The treatment can be given as
tablets, injections, or oestrogen plaster.
Identification and Cure
Although Turner Syndrome can be identified in the fetus or with a
blood test, there is no known cure for it. With growth hormone
replacement therapy and oestrogen injections (female hormones), a
female with Turner syndrome can live an outwardly normal life.
Ongoing research in reproduction, together with adoption, makes it
possible for these women to marry and raise children.
Word Count: 1005
f:\12000 essays\sciences (985)\Genetics\The Universe.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Universe
what lies beyond our planet. The universe that we live in is so diverse
and unique, and it interests us to learn about all the variance that
lies beyond our grasp. Within this marvel of wonders our universe holds
a mystery that is very difficult to understand because of the complications
that arise when trying to examine and explore the principles of space.
That mystery happens to be that of the ever-clandestine black hole.
of the concepts, properties, and processes involved with the space
phenomenon of the black hole. It will describe how a black hole is
generally formed, how it functions, and the effects it has on the universe.
take a look at the basis for the cause of a black hole. All black holes
are formed from the gravitational collapse of a star, usually one having a
massive core. A star is created when huge gas clouds bind together due to
attractive forces and form a hot core, combined from all the energy of the
two gas clouds. The energy produced is so great when they first collide
that a nuclear reaction occurs and the gases within the star start to burn
continuously. Hydrogen is usually the first gas consumed in a star, and
then other elements such as helium, carbon, and oxygen are consumed.
years depending upon the amount of gases there are.
equilibrium achieved by itself. The gravitational pull from the core of
the star is equal to the gravitational pull of the gases forming a type of
orbit; however, when this equality is broken, the star can go into several
different stages.
consumed while some of it escapes. This occurs because there is not a
tremendous gravitational pull upon those gases and therefore the star
weakens and becomes smaller. It is then referred to as a White Dwarf.
If the star has a larger mass, however, then it may possibly go
supernova, meaning that the nuclear fusion within the star simply goes
out of control, causing the star to explode. After exploding, a fraction
of the star is usually left (if it has not turned into pure gas) and that
fraction of the star is known as a neutron star.
If the core of the star is so massive (approximately 6-8 solar masses;
one solar mass being equal to the sun's mass) then it is most likely that
when the star's gases are almost consumed those gases will collapse inward,
forced into the core by the gravitational force laid upon them.
to pull in space debris and other types of matter to help add to the
mass of the core, making the hole stronger and more powerful.
the Event Horizon) that is formed around the black hole. The matter remains
within the Event Horizon until it has spun into the centre, where it is
concentrated within the core adding to the mass. Such spinning black holes
are known as Kerr Black Holes.
were a star, and this may cause some problems for the neighbouring stars.
If a black hole gets powerful enough it may actually pull a star into it
and disrupt the orbit of many other stars. The black hole could then
grow even stronger (from the star's mass), possibly enough to absorb another.
Ergosphere, which sweeps all the matter into the Event Horizon, named for
its flat horizontal appearance and because this happens to be the place
where most of the action within the black hole occurs. When the star is
passed on into the Event Horizon, the light that the star emits is bent
within the current and therefore cannot be seen in space. At this exact
point in time, high amounts of radiation are given off, which with the
proper equipment can be detected and seen as an image of a black hole.
Through this technique astronomers now believe that they have found a black
hole known as Cygnus X-1. This supposed black hole has a huge star orbiting
around it; therefore, astronomers assume there must be a black hole that it
is in orbit with.
and the collapsing of stars, were a professor, Robert Oppenheimer and his
on the basis of Einstein's theory of relativity that if the speed of light
was the greatest speed any massive object could attain, then nothing could
escape a black hole once in its clutches. **(1)
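That "point of no return" can be made concrete with the Schwarzschild radius,
r_s = 2*G*M/c^2, the radius at which the escape speed reaches the speed of
light. The sketch below is an illustration added here (it is not from the
essay or from Parker's book), using standard constants and the 6-8 solar mass
figure mentioned earlier.

    # Schwarzschild radius r_s = 2*G*M/c^2 (illustrative calculation).
    G     = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
    C     = 2.998e8      # m/s, speed of light
    M_SUN = 1.989e30     # kg, one solar mass

    def schwarzschild_radius(mass_kg):
        """Radius inside which not even light can escape, in metres."""
        return 2.0 * G * mass_kg / C**2

    for solar_masses in (6, 8):
        r_km = schwarzschild_radius(solar_masses * M_SUN) / 1000.0
        print(f"{solar_masses} solar masses -> r_s ~ {r_km:.0f} km")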
could not escape from the gravitational pull of the core, thus making the
black hole impossible for humans to see without using technological
advancements for measuring such things as radiation. The second part of
the word, "hole," comes from the fact that the actual hole is where
everything is absorbed and where the centre core resides. This core is
the main part of the black hole where the mass is concentrated, and it
appears purely black on all readings, even through the use of radiation
detection devices.
known as The Hubble Telescope. This telescope has just recently found
what many astronomers believe to be a black hole, after being focused on
a star orbiting empty space. Several pictures were sent back
to Earth from the telescope showing many computer-enhanced images of
various radiation fluctuations and other diverse types of readings that
could be taken from the area in which the black hole is suspected to be.
if somehow you were to survive through the centre of the black hole that
there would be enough gravitational force to possibly warp you to another
part of the universe, or possibly to another universe. The creative ideas
that can be hypothesized from this discovery are endless.
phenomena, it is our duty to continue exploring them and to continue
learning, but in the process we must not take any of it for granted.
and they contain so much curiosity that they could possibly hold
unlimited uses. Black holes are a sensation that astronomers are
still very puzzled with. It seems that as we get closer to solving
their existence and functions, we just end up with more and more questions.
problems we seek and find refuge in them, dreaming that maybe one day,
one far off distant day, we will understand all the conceptions and we
will be able to use the universe to our advantage and go where only
our dreams could take us.
**(1): Parker, Barry. Colliding Galaxies. PG#96
f:\12000 essays\sciences (985)\Genetics\Transplants and Diabetes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
============================================================
Transplants and Diabetes
------------------------------------------------------------
Three Toronto scientists have developed an organ transplant procedure that could, among its many benefits, reverse diabetes. The procedure was developed by Bernard Leibel, Julio Martin and Walter Zingg at the University of Toronto and the Hospital for Sick Children.
The story of their work began in 1978, when they delved into research which had never before been tried. They wanted to determine if the success rate of organ transplants would increase if the recipient was injected with minute amounts of organ tissue prior to the transplant. The intention was to adapt the recipient to the transplanted tissue and thereby raise the threshold of rejection. In the case of the diabetes experiment, this meant injecting rats with pancreatic tissue before transplanting islets of Langerhans, small clusters of cells scattered throughout the pancreas which produce insulin, glucagon, and somatostatin.
In their first experiment, outbred Wistar rats were injected with increasing amounts of minced pancreas from unrelated donor rats for one year while a control group was left untreated. Then both the treated and control groups received injections of approximately 500-800 islets of Langerhans from unrelated donors. Of the five treated animals, two became clinically and biochemically permanently normal. Six months later, Martin examined the cured rats and found intact, functioning islets secreting all of their hormones, including insulin. None of the controls were cured.
Encouraged by their first results, Leibel, Martin, and Zingg decided to repeat the experiment with rats with much stronger immune barriers (higher levels of rejection). Seven rats out of nine were cured. "We set up a protocol and worked patiently with small numbers," says Leibel, "but the results are indisputable."
In addition to reversing diabetes, there are two other benefits to the pre-treatment procedure, according to the scientists. The first is that the transplanted pancreas produces all the other hormones of a normal pancreas, not just insulin. The second benefit is that the transplant recipient doesn't have to take immunosuppressive drugs, which are so toxic for diabetics. At present, diabetics who receive a transplanted pancreas must take such drugs for life.
The scientists' eventual goal is a human trial, but they admit it will be years before such a study is conducted. The obvious benefit for diabetics, if human trials prove successful, would be a return to a normal life without dietary restrictions or insulin shots. But to Leibel, the most important reason to continue research is to eliminate the debilitating, degenerative diseases such as kidney, eye and heart failure that eventually plague the aging diabetic.
f:\12000 essays\sciences (985)\Math\About Carl Friedrich Gauss.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Gauss, Carl Friedrich (1777-1855). The German scientist and mathematician Gauss is frequently called the founder of modern mathematics. His work in astronomy and physics is nearly as significant as that in mathematics.
Gauss was born on April 30, 1777 in Brunswick (now in western Germany). Many biographers think that he got his good health from his father. Gauss said of himself that he could count before he could talk.
When Gauss was 7 years old he went to school. In the third grade, students came when they were 10-15 years old, so the teacher had to work with students of different ages. Because of this he gave half of the students long problems to count, so that in the meantime he could teach the other half. One day he gave half of the students, Gauss among them, the task of adding all the natural numbers from 1 to 100. The ten-year-old Gauss put his paper with the answer on the teacher's desk first, and he was the only one who had the right answer. From that day Gauss was popular in the whole school.
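The trick behind the anecdote is usually told like this: pairing the numbers from the two ends (1 with 100, 2 with 99, and so on) gives fifty pairs that each sum to 101, so the total is 50 * 101 = 5050, the same value given by the general formula n(n+1)/2. A tiny check, added here only as an illustration:

    # Three ways to sum 1..100; all print 5050.
    n = 100
    paired      = (n // 2) * (n + 1)       # 50 pairs, each summing to 101
    closed_form = n * (n + 1) // 2         # the general formula n(n+1)/2
    brute_force = sum(range(1, n + 1))     # direct addition
    print(paired, closed_form, brute_force)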
On October 15, 1795, Gauss was admitted to Georgia Augusta as "matheseos cult."; that is to say, as a mathematics student. But it is often pointed out that at first Gauss was undecided whether he should become a mathematician or a philologist. The reason for this indecision was probably that humanists at that time had a better economic future than scientists.
Gauss first became completely certain of his choice of studies when he discovered the construction of the regular 17-sided polygon with ruler and compass; that is to say, after his first year at the university.
There are several reasons to support the assertion that Gauss hesitated in his choice of a career. But his matriculation as a student of mathematics does not point toward philology, and probably Gauss had already made his decision when he arrived at Gottingen. He wrote in 1808 that it was noteworthy how number theory arouses a special passion among everyone who has seriously studied it at some time, and, as we have seen, he had found new results in this and other areas of mathematics while he was still at Collegium Carolinum.
Gauss made great discoveries in many fields of math. He gave the proof of the fundamental theorem of algebra: every polynomial equation with complex coefficients has at least one complex root. He developed the theory of some important special functions, in particular the theory of the hypergeometric function. This function plays a significant role in modern mathematical physics. Gauss discovered the method of so-called least squares, a method of obtaining the best possible average value for a measured magnitude from many observations of that magnitude. Another part of mathematics that has close connections to Gauss is the theory of complex numbers. Gauss gave a very important geometric interpretation of a complex number as a point in the plane. Besides pure mathematics, Gauss made very important contributions in astronomy, geodesy and other applied disciplines. For example, he predicted the location of celestial bodies such as the newly discovered asteroid Ceres.
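As a concrete illustration of the least-squares idea (the data and numbers below are invented for the example, not taken from Gauss), one can fit a straight line y = a + b*x to noisy measurements by choosing a and b so that the sum of squared residuals is as small as possible; the normal equations give the answer directly:

    # Least-squares fit of y = a + b*x via the normal equations (toy data).
    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.2, 8.0, 9.8]          # noisy measurements of roughly y = 2x

    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))

    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    print(f"best-fit line: y = {a:.2f} + {b:.2f}x")  # about y = 0.15 + 1.95x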
In 1803 Gauss met Johanna Osthoff, the daughter of a tannery owner in Braunschweig. She was born in 1780 and was an only child. They were married on October 9, 1805, and lived on in Braunschweig for a time, in the house which Gauss had occupied as a bachelor.
On August 21, 1806, his first son Joseph was born. He was named after Piazzi, the discoverer of Ceres. On February 29, 1808 a daughter followed, and Gauss jokingly complained that she would only have a birthday every fourth year. As a mark of respect to Olbers she was christened Wilhelmina. The third child, a son born on September 10, 1809, was named Ludwig, after Harding, but was called Louis. After a difficult third delivery, Johanna died on October 11, 1809. Louis died suddenly on March 1, 1810.
Minna Waldeck was born in 1799; she was the youngest daughter of a professor of law, Johann Peter Waldeck, of Gottingen. Gauss married her on August 4, 1810. The new marriage was a happy solution to Gauss's nonscientific problems.
Two sons and a daughter were born in the new marriage, Eugene on July 29, 1811, Wilhelm on October 23, 1813, and Therese on June 9, 1816.
In 1816 Gauss and his family moved into the west wing, while Harding lived in the east. During the following years, Gauss and Harding installed the astronomical instruments. New ones were ordered in Munich. Among other times, Gauss visited Munich in 1816.
After the intense sorrow of Johanna's death had been mollified in his second marriage, Gauss lived an ordinary academic life which was hardly disturbed by the violent events of the time. His powers and his productivity were unimpaired, and he continued with a work program which in a short time would have brought an ordinary man to collapse.
Although Gauss was often upset about his health, he was healthy almost all of his life. His capacity for work was colossal and it is best likened to the contributions of different teams of researchers over a period of many years, in mathematics, astronomy, geodesy, and physics. He must have been as strong as a bear in order not to have broken under such a burden. He distrusted all doctors and did not pay much attention to Olbers' warnings. During the winters of 1852 and 1853 the symptoms are thought to have become more serious, and in January of 1854 Gauss underwent a careful examination by his colleague Wilhelm Baum, professor of surgery.
The last days were difficult, but between heart attacks Gauss read a great deal, half lying in an easy chair. Sartorius visited him in the middle of January and observed that his clear blue eyes had not lost their gleam. The end came about a month later. On the morning of February 23, 1855 Gauss died peacefully in his sleep. He was seventy-seven years old.
BIBLIOGRAPHY
Gindikin, S.G., Stories about physicists and mathematicians, Russia, Moscow, "Nauka", 1982 (in Russian).
Hall, T., Carl Friedrich Gauss, The Massachusetts Institute of Technology, 1970.
Muir, Jane, Of Men and Numbers: The Story of Great Mathematicians. Dodd, Mead, and Co, New York, 1961.
f:\12000 essays\sciences (985)\Math\Albert Einstein.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Albert Einstein
Einstein was born on March 14, 1879, in Ulm, Germany. He lived there with his parents,
Herman and Pauline. Einstein attended a Catholic School near his home. But, at age 10, Einstein
was transferred to the "Luitpold Gymnasium", where he learned Latin, Greek, History, and
Geography. Einstein's father wanted him to attend a university but he could not because he did not
have a diploma from the Gymnasium. But there was a solution to this problem over the Alps, in
Zurich: the Swiss Federal Institute of Technology, which did not require a diploma to attend.
The one thing it did require was that applicants pass an entrance exam. But then yet another
problem arose: most students were 18 when they entered the institute, and Einstein was only 16.
In Berne, on January 6, 1903, Einstein married Mileva Maric. The two witnesses at the
small, quiet wedding, were Maurice Solovine and Conard Habicht. After the wedding, there was a
meal to celebrate at a local restaurant. But no honeymoon. After the meal, the newlyweds returned
to their new home. It was a small flat, about 100 yards away from Berne's famous clock tower.
Upon returning home, a small incident occurred that was to occur many times throughout Einstein's
life; he had forgotten his key. A year later, in 1904, they had a child, Hans Albert. In that same
year, he received a job at the Swiss patent office.
In 1905, Einstein published three of his four famous papers: "On a Heuristic Viewpoint Concerning
the Production and Transformation of Light," "On the Motion of Small Particles Suspended in a
Stationary Liquid, as Required by the Molecular-Kinetic Theory of Heat," and "On the
Electrodynamics of Moving Bodies." In autumn of 1922 Einstein received the Nobel Prize for Physics,
for his work on the photoelectric effect. He did not receive the prize for his theory of relativity
because it was thought at the time that it did not meet the criteria of something that a Nobel Prize
is awarded for. So when the prize was awarded to him, it was said to be for his work on the
photoelectric effect if his theory of relativity were ever proven false, and for the theory of
relativity if it were proven correct.
Einstein died on April 18, 1955. He died of "leakage of blood from a hardened aorta," and
he refused the surgery that could have saved his life. The doctors told him that he could go anytime,
from a minute to a few days. Einstein still refused the surgery. The day passed quietly, and on
Saturday morning Einstein seemed to be better, but then he began to have intense pain. His
nurse called the doctor, who arrived quickly and persuaded Einstein that he would be better in a
hospital; an ambulance was called, and Einstein went to the hospital. On Sunday he told his
daughter, "Don't let the house become a museum." He died the next day.
f:\12000 essays\sciences (985)\Math\algebra investigation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ALGEBRAIC
INVESTIGATION
To the average person, algebra is when you solve equations with variables, but the true meaning goes much deeper. After numerous inquiries I found the meaning, but first you have to look at the ways in which it is used. It is used to solve problems that have missing numbers; these are replaced by variables, which are usually represented by letters in lower forms of algebra. But this is not the only thing that algebra is: it also consists of the numbers that are in the algebraic equations. All of this put together gives the definition:
Algebra - a form of mathematics that is used to solve equations that use variables
and definite numbers
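A tiny illustration of that definition (the equation here is an invented example): in 2x + 3 = 11 the letter x stands for the missing number, and solving the equation means undoing the operations in reverse order.

    # Solve a*x + b = c for x by undoing the operations (example: 2x + 3 = 11).
    def solve_linear(a, b, c):
        return (c - b) / a

    print(solve_linear(2, 3, 11))   # prints 4.0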
By : Brian Chamberlayne
f:\12000 essays\sciences (985)\Math\ANCIENT ADVANCES IN MATHEMATICS.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ancient Advances in Mathematics
Ancient knowledge of the sciences was often wrong and wholly unsatisfactory by modern standards. However, not all of the knowledge of the more learned peoples of the past was false. In fact, without people like Euclid or Plato we may not have been as advanced in this age as we are. Mathematics is an adventure in ideas. Within the history of mathematics, one finds the ideas and lives of some of the most brilliant people in the history of mankind.
First man created a number system of base 10. Certainly, it is not just coincidence that man just so happens to have ten fingers or ten toes, for when our primitive ancestors first discovered the need to count they definitely would have used their fingers to help them along, just like a child today. When primitive man learned to count up to ten he somehow differentiated himself from other animals. As an object of a higher thinking, man invented ten number-sounds. The needs and possessions of primitive man were not many. When the need to count over ten arose, he simply combined the number-sounds related with his fingers. So, if he wished to define one more than ten, he simply said one-ten. Thus our word eleven is simply a modern form of the Teutonic ein-lifon. Since those first sounds were created, man has only added five new basic number-sounds to the ten primary ones. They are "hundred," "thousand," "million," "billion" (a thousand millions in America, a million millions in England), and "trillion" (a million millions in America, a million-million millions in England). Because primitive man invented the same number of number-sounds as he had fingers, our number system is a decimal one, or a scale based on ten, consisting of limitless repetitions of the first ten number sounds.
Undoubtedly, if nature had given man thirteen fingers instead of ten, our number system would be much changed. For instance, with a base thirteen number system we would call fifteen "two-thirteen."
While some intelligent and well-schooled scholars might argue whether or not base ten is the most adequate number system, base ten is the irreversible favorite among all the nations.
Of course, primitive man most certainly did not realize the concept of the number system he had just created. Man simply used the number-sounds loosely as adjectives. So an amount of ten fish was ten fish, whereas ten is an adjective describing the noun fish.
Soon the need to keep tally on one's counting arose. The simple solution was to make a vertical mark. Thus, on many caves we see a number of marks that the resident used to keep track of his possessions, such as fish or knives. This way of record keeping is still taught today in our schools under the name of tally marks.
The earliest continuous record of mathematical activity is from the second millennium BC. Mathematics was already necessary when the first of the wonders of the world was created: even the earliest Egyptian pyramid proved that its makers had a fundamental knowledge of geometry and surveying skills. The approximate time period was 2900 BC.
The first proof of mathematical activity in written form came about one thousand years later. The best known sources of ancient Egyptian mathematics in written format are the Rhind Papyrus and the Moscow Papyrus. The sources provide undeniable proof that the later Egyptians had intermediate knowledge of the following mathematical problems: applications to surveying, salary distribution, calculation of area of simple geometric figures' surfaces and volumes, simple solutions for first and second degree equations.
Egyptians used a base ten number system most likely because of biologic reasons (ten fingers as explained above). They used the Natural Numbers (1,2,3,4,5,6, etc.) also known as the counting numbers. The word digit, which is Latin for finger, is also another name for numbers which explains the influence of fingers upon numbers once again.
The Egyptians produced a more complex system than the tally system for recording amounts. Hieroglyphs stood for groups of tens, hundreds, and thousands. The higher powers of ten made it much easier for the Egyptians to calculate with numbers as large as one million. Our number system, which is both decimal and positional (52 is not the same value as 25), differed from the Egyptian system, which was additive but not positional.
The Egyptians also knew more of pi than its mere existence. They took pi to equal C/D = 4(8/9)^a, where a equals 2; that is, 256/81, or about 3.16. The method by which ancient peoples arrived at rougher values was fairly easy: they simply counted how many times a string that fit the circumference of the circle fitted into the diameter, giving the rough approximation of 3.
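A quick numerical check of that Egyptian value (added here only as an illustration):

    # The Egyptian value 4*(8/9)**2 = 256/81 compared with 3.14159...
    egyptian_pi = 4 * (8 / 9) ** 2
    print(egyptian_pi)          # 3.1604938271604937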
The biblical value of pi can be found in the Old Testament (I Kings vii.23 and 2 Chronicles iv.2) in the following verse:
"Also, he made a molten sea of ten cubits from
brim to brim, round in compass, and five cubits
the height thereof; and a line of thirty cubits did
compass it round about."
The molten sea, as we are told is round, and measures thirty cubits round about (in circumference) and ten cubits from brim to brim (in diameter). Thus the biblical value for pi is 30/10 = 3.
Now we travel to ancient Mesopotamia, home of the early Babylonians. Unlike the Egyptians, the Babylonians developed a flexible technique for dealing with fractions. The Babylonians also succeeded in developing more sophisticated base ten arithmetic that was positional, and they also stored mathematical records on clay tablets.
Despite all this, the greatest and most remarkable feature of Babylonian mathematics was their complex usage of a sexagesimal place-valued system in addition to a decimal system much like our own modern one. The Babylonians counted in both groups of ten and sixty. Because of the flexibility of a sexagesimal system with fractions, the Babylonians were strong in both algebra and number theory. Remaining clay tablets from the Babylonian records show solutions to first, second, and third degree equations.
Also the calculations of compound interest, squares and square roots were apparent in the tablets.
The sexagesimal system of the Babylonians is still commonly in use today. Our system for telling time revolves around a sexagesimal system, and the same system for telling time that is used today was also used by the Babylonians. We also use base sixty with circles (360 degrees to a circle).
Usage of the sexagesimal system was principally for economic reasons. Being, the main units of weight and money were mina,(60 shekels) and talent (60 mina). This sexagesimal arithmetic was used in commerce and in astronomy.
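A small sketch of what base-sixty place value means in practice (my illustration, not from the essay): the digits of a number in base 60 are exactly the kind of hours-minutes-seconds split we still use for time.

    # Split a non-negative integer into base-60 digits, most significant first.
    def to_sexagesimal(n):
        digits = []
        while True:
            n, r = divmod(n, 60)
            digits.append(r)
            if n == 0:
                break
        return digits[::-1]

    print(to_sexagesimal(7265))   # [2, 1, 5] -> 2 hours, 1 minute, 5 seconds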
The Babylonians used many of the more common cases of the Pythagorean Theorem for right triangles. They also used accurate formulas for solving the areas, volumes and other measurements of the easier geometric shapes, as well as trapezoids. The Babylonian value for pi was simply a rounded-off three. Because of this crude approximation of pi, the Babylonians achieved only rough estimates of the areas of circles and other spherical geometric objects.
The real birth of modern math was in the era of Greece and Rome. Not only did the philosophers ask the question "how" of previous cultures, but they also asked the modern question of "why." The goal of this new thinking was to discover and understand the reason for man's existence in the universe and also to find his place. The philosophers of Greece used mathematical formulas to prove propositions of mathematical properties. Some of them, like Aristotle, engaged in the theoretical study of logic and the analysis of correct reasoning. Up until this point in time, no previous culture had dealt with the abstract side of mathematics, or with the concept of the mathematical proof.
The Greeks were interested not only in the application of mathematics but also in its philosophical significance, which was especially appreciated by Plato (429-348 BC). Plato was of the richer class of gentlemen of leisure. He, like others of his class, looked down upon the work of slaves and craftsworkers. He sought relief from the tiresome worries of life in the study of philosophy and personal ethics. Within the walls of Plato's academy at least three great mathematicians were taught: Theaetetus, known for the theory of irrationals; Eudoxus, for the theory of proportions; and also Archytas (I couldn't find what made him great, but three books mentioned him so I will too). Indeed the motto of Plato's academy, "Let no one ignorant of geometry enter within these walls," was fitting for the scene of the great minds who gathered here.
Another great mathematician of the Greeks was Pythagoras who provided one of the first mathematical proofs and discovered incommensurable magnitudes, or irrational numbers. The Pythagorean theorem relates the sides of a right triangle with their corresponding squares. The discovery of irrational magnitudes had another consequence for the Greeks: since the length of diagonals of squares could not be expressed by rational numbers in the form of A over B, the Greek number system was inadequate for describing them.
As you might have realized, without the great minds of the past our mathematical experiences would be quite different from the way they are today. Yet as some famous (or maybe infamous) person must once have said, "From down here the only way is up," so you might say that from now, 1996, the future of mathematics can only improve for the better.
Bibliography
Ball, W. W. Rouse. A Short Account of The History of Mathematics. Dover Publications Inc.
Mineola, N.Y. 1985
Beckmann, Petr. A History of Pi. St. Martin's Press. New York, N.Y. 1971
De Camp, L.S. The Ancient Engineers. Double Day. Garden City, N.J. 1963
Hooper, Alfred. Makers of Mathematics. Random House. New York, N.Y. 1948
Morley, S.G. The Ancient Maya. Stanford University Press. 1947.
Newman, J.R. The World of Mathematics. Simon and Schuster. New York, N.Y. 1969.
Smith, David E. History of Mathematics. Dover Publications Inc. Mineola, N.Y. 1991.
Struik, Dirk J. A Concise History of Mathematics. Dover Publications Inc. Mineola, N.Y. 1987
f:\12000 essays\sciences (985)\Math\Apollonius of Perga.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Apollonius of Perga
Apollonius was a great mathematician, known by his contemporaries as "The Great
Geometer," whose treatise Conics is one of the greatest scientific works from the ancient world.
Most of his other treatises were lost, although their titles and a general indication of their contents
were passed on by later writers, especially Pappus of Alexandria.
As a youth Apollonius studied in Alexandria ( under the pupils of Euclid, according to
Pappus ) and subsequently taught at the university there. He visited Pergamum, capital of a
Hellenistic kingdom in western Anatolia, where a university and library similar to those in
Alexandria had recently been built. While at Pergamum he met Eudemus and Attaluus, and he
wrote the first edition of Conics. He addressed the prefaces of the first three books of the final
edition to Eudemus and the remaining volumes to Attalus, whom some scholars identify as King
Attalus I of Pergamum.
It is clear from Apollonius' allusion to Euclid, Conon of Samos, and Nicoteles of Cyrene
that he made the fullest use of his predecessors' works. Book 1-4 contain a systematic account
of the essential principles of conics, which for the most part had been previously set forth by
Euclid, Aristaeus and Menaechmus. A number of theorems in Book 3 and the greater part of
Book 4 are new, however, and he introduced the terms parabola, ellipse, and hyperbola. Books
5-7 are clearly original. His genius takes its highest flight in Book 5, in which he considers
normals as minimum and maximum straight lines drawn from given points to the curve
( independently of tangent properties ), discusses how many normals can be drawn from
particular points, finds their feet by construction, and gives propositions determining the center
of curvature at any point and leading at once to the Cartesian equation of the evolute of any
conic.
The first four books of the Conics survive in the original Greek and the next three in
Arabic translation. Book 8 is lost. The only other extant work of Apollonius is Cutting Off of a
Ratio ( or On Proportional Section ), in an Arabic translation. Pappus mentions five additional
works: Cutting Off of an Area ( or On Spatial Section ), On Determinate Section, Tangencies,
Inclinations, and Plane Loci.
Tangencies embraced the following general problem : given three things, each of which
may be a point, straight line, or circle, construct a circle tangent to the three. Sometimes known
as the problem of Apollonius, the most difficult case arises when the three given things are
circles.
Of the other works of Apollonius referred to by ancient writers, one, On the Burning
Mirror, concerned optics. Apollonius demonstrated that parallel light rays striking a spherical
mirror would not be reflected to the center of sphericity, as was previously believed. The focal
properties of the parabolic mirror were also discussed. A work entitled On the Cylindrical
Helix is mentioned by Proclus. Apollonius also wrote Comparison of the Dodecahedron and the
Icosahedron, considering the case in which they are inscribed in the same sphere. According to
Eutocius, in Apollonius' work Quick Delivery, closer limits for the value of Pi than the
3 1/7 and 3 10/71 of Archimedes were calculated. In a work of unknown title Apollonius
developed his system of tetrads, a method for expressing and multiplying large numbers. His On
Unordered Irrationals extended the theory of irrationals originally advanced by Eudoxus of
Cnidus and found in Book 10 of Euclid's Elements.
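For reference, the Archimedean bounds that Apollonius reportedly sharpened work
out numerically as follows (a small check added here; Apollonius' own closer
limits have not survived):

    # Archimedes' bounds on pi: 3 + 10/71 < pi < 3 + 1/7.
    lower = 3 + 10 / 71
    upper = 3 + 1 / 7
    print(f"{lower:.5f} < pi < {upper:.5f}")   # 3.14085 < pi < 3.14286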
Lastly, from references in Ptolemy's Almagest, it is known that Apollonius introduced the
systems of eccentric and epicyclic motion to explain planetary motion. Of particular interest
was his determination of the points where a planet appears stationary.
Bibliography
1. Boyer, Carl B. , The History of Analytic Geometry (1956) McGraw - Hill
2. Heath, Thomas L. , Manual of Greek Mathematics (1921; repr. 1981)
3. Van der Waerden, Bartel L., Science Awakening (1961).
f:\12000 essays\sciences (985)\Math\Blaise Pascal 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Blaise Pascal was born at Clermont, Auvergne, France on June 19, 1623. He was the son of Étienne Pascal, his father, and Antoinette Begon, his mother, who died when Blaise was only four years old. After her death, his only family was his father and his two sisters, Gilberte and Jacqueline, both of whom played key roles in Pascal's life. When Blaise was seven he moved from Clermont with his father and sisters to Paris. It was at this time that his father began to school his son. Though strong intellectually, Blaise had a frail physique.
Things went quite well at first for Blaise concerning his schooling. His father was amazed at the ease with which his son was able to absorb the classical education thrown at him and "tried to hold the boy down to a reasonable pace to avoid injuring his health." (P 74,Bell) Blaise was exposed to all subjects, all except mathematics, which was taboo. His father forbade this in the belief that Blaise would strain his mind. Faced with this opposition, Blaise demanded to know 'what was mathematics?' His father told him, "that generally speaking, it was the way of making precise figures and finding the proportions among them." (P 39,Cole) This set him going, and during his play times he figured out ways to draw geometric figures such as perfect circles and equilateral triangles, all of which he accomplished. Because Étienne took such painstaking measures to hide mathematics from Blaise, to the point where he told his friends not to mention math at all around him, Blaise did not know the names of these figures. So he created his own vocabulary for them, calling a circle a "round" and naming lines "bars". "After these definitions he made himself axioms, and finally made perfect demonstrations." (P 39,Cole) His progress went far enough that he reached the 32nd proposition of Euclid's Book One. While Blaise was deeply enthralled in this task, his father entered the room unnoticed, only to observe his son inventing mathematics. When Blaise was 13, Étienne began taking him to meetings of mathematicians and scientists, which gave Blaise the opportunity to meet such minds as Descartes and Hobbes. Three years later, at the age of 16, Blaise amazed his peers by submitting a paper on conic sections. His sister was quoted as having said "that it was considered so great an intellectual achievement that people have said they have seen nothing as mighty since the time of Archimedes." (I:Pascal) This was his first real contribution to mathematics, but not his last.
Note: www.nd.edu/StudentLinks/akoehl/Pascal.html
Pascal's contributions to mathematics from then on were remarkable. From a young age he was 'creating science.' His first scientific work, an essay on sounds, he prepared at a very young age. Once, at a dinner party, someone tapped a glass with a spoon. Pascal went about the house tapping the china with his fork, then disappeared into his room, only to emerge hours later having completed a short essay on sound. He used the same approach to all of the problems he encountered, working at them until he was satisfied with his understanding of the problem at hand. A few of his discoveries stood out more than others; among them, his calculating machine and his contributions to combinatorial analysis have made a significant contribution to mathematics.
The mechanical calculator was devised by Pascal in 1642 and was brought to a commercial version in 1645. It was one of the earliest in the history of computing. 'Side by side in an oblong box were placed six small drums, round the upper and lower halves of which the numbers 0 to 9 were written, in descending and ascending orders respectively. According to whichever arithmetical process was currently in use, one half of each drum was shut off from outside view by a sliding metal bar: the upper row of figures was for subtraction, the lower for addition. Below each drum was a wheel consisting of ten (or twelve, or twenty) movable spokes inside a fixed rim numbered in ten (or more) equal sections from 0 to 9 etc, rather like a clockface. Wheels and rims were all visible on the box lid, and indeed the numbers to be added or subtracted were fed into the machine by means of the wheels: 4, for instance, being recorded by using a small pin to turn the spoke opposite division 4 as far as a catch positioned close to the outer edge of the box. The procedure for a basic arithmetical process was then as follows.
To add 315+172, first 315 was recorded on the three (out of six) drums closest to the right-hand side: 5 would appear in the sighting aperture to the extreme right, 1 next to it, and 3 next to that again. To increase by one the number showing in any aperture, it was necessary to turn the appropriate drum forward 1/10th of a revolution. Thus in this sum, the drum on the extreme right of the machine would be given two turns, the drum immediately to its left would be moved on 7/10ths of a revolution, whilst the drum to its immediate left would be rotated forward by 1/10th. The total of 487 could then be read off in the appropriate slots. But, easy as this operation was, a problem clearly arose when the numbers to be added together involved totals needing to be carried forward: say 315 + 186. At the period at which Pascal was working, and because there had been no previous attempt at a calculating-machine capable of carrying column totals forward, this presented a serious technical challenge. (adamson p 23)
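The carrying problem is easy to see in software. The toy sketch below (my own illustration; the real machine solved the problem mechanically, not with code) adds numbers onto six decimal drums and pushes a unit into the next drum whenever one passes 9, which is exactly what 315 + 186 requires in the tens column.

    # Simulate adding numbers on six decimal drums with carrying.
    def add_on_drums(drums, number):
        """drums: list of 6 digits, least significant digit first."""
        digits = [int(d) for d in str(number)[::-1]]
        carry = 0
        for i in range(len(drums)):
            total = drums[i] + (digits[i] if i < len(digits) else 0) + carry
            drums[i] = total % 10
            carry = total // 10      # a drum passed 9: carry one into the next
        return drums

    drums = [0] * 6
    add_on_drums(drums, 315)
    add_on_drums(drums, 186)
    print(int("".join(str(d) for d in reversed(drums))))   # 501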
Pascal is also credited with the advent of Pascal's triangle, an arrangement of numbers originally discovered by the Chinese but named after Pascal due to his further discoveries concerning the properties it possessed.
ex. (Pascal's Triangle)
           1
          1 1
         1 2 1
        1 3 3 1
          ...
'Pascal investigated binomial coefficients and laid the foundations of the binomial theorem.' (adamson p37) 'A triangular array of numbers consists of ones written on the vertical leg and on the hypotenuse of a right-angled isosceles triangle; each other element composing the triangle is the sum of the element directly above it and of the element above it and to the left. Pascal proceeded from this to demonstrate that the numbers in the (n+1)st row are the coefficients in the binomial expansion of (x+y)^n.' Due to the ease and clarity of the formulation of the problems involved, Pascal's triangle, although not original, was one of his finest achievements. It has greatly influenced many discoveries, including the theoretical basis of the computer. It has also made an essential contribution to the field of combinatory analysis, and, 'through the work of John Wallis it led Isaac Newton to the discovery of the binomial theorem for fractional and negative indices, and it was central to Leibniz's discovery of the calculus.' (adamson p37)
As stated, looking closer at the triangle Pascal was able to deduce many properties. First of all, entries in any row that lie an equal distance from the two ends of the row are equal to each other.
He found that another property can be derived from the triangle: any number in the triangle is the sum of the two numbers directly above it. This holds true for both triangles, the solved and the unsolved. Writing (n/k) for the binomial coefficient, (3/1) + (3/2) = (4/2); similarly, (5/1) + (5/2) = (6/2). The generalization of this property is known as Pascal's rule.
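A short sketch of those two facts (an illustration added here, not Pascal's own notation): each new row of the triangle is built by summing adjacent pairs of the row above, and the resulting rows come out symmetric.

    # Build the first few rows of Pascal's triangle.
    def pascal_rows(n):
        row = [1]
        rows = [row]
        for _ in range(n):
            # each interior entry is the sum of the two entries above it
            row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
            rows.append(row)
        return rows

    for r in pascal_rows(4):
        print(r)
    # [1]
    # [1, 1]
    # [1, 2, 1]
    # [1, 3, 3, 1]
    # [1, 4, 6, 4, 1]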
Further studies in hydrodynamics, hydrostatics and atmospheric pressure led Pascal to many discoveries still in use today, such as the syringe and hydraulic press. Both these inventions came after years of experimenting with vacuum tubes. One such experiment was to 'Take a tube which is curved at its bottom end, sealed at its top end A and open at its extremity B. Another tube, a completely straight one open at both extremities M and N, is joined into the curved end of the first tube by its extremity M. Seal B, the opening of the curved end of the first tube, either with your finger or in some other manner, and turn the entire apparatus upside down so that, in other words, the two tubes really only consist of one tube, being interconnected. Fill this tube with quicksilver and turn it the right way up again so that A is at the top; then place the end N in a dishful of quicksilver. The whole of the quicksilver in the upper tube will fall down, with the result that it will all recede into the curve unless by any chance part of it also flows through the aperture M into the tube below. But the quicksilver in the lower tube will only partially subside, as part of it will also remain suspended at a height of 26-27 inches according to the place and weather conditions in which the experiment is being carried out.
The reason for this difference is that the air weighs down on the quicksilver in the dish beneath the lower tube, and thus the quicksilver which is inside that tube is held suspended in balance.
But it does not weigh down upon the quicksilver at the curved end of the upper tube, for the finger or bladder sealing this prevents any access to it, so that, as no air is pressing down at this point, the quicksilver in the upper tube drops freely because there is nothing to hold it up or to resist its fall.
All of these contributions have made a lasting impact on mankind. Everything that Pascal created is still in use today in some way or another. His primitive form of a syringe is still used in the medical field today to administer drugs and remove blood. The work he did on combinatory mathematics can be applied by anyone to 'figure out the odds' concerning a situation, which is exactly how he used it, by going to casinos and playing games smartly, something that anyone can do today. The work he did concerning hydraulic presses is still in use today in factories and car garages.
f:\12000 essays\sciences (985)\Math\Blaise Pascal.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Blaise Pascal was born in Clermont France on June 19, 1623, and
died in Paris on Aug. 19, 1662. His father, a local judge at Clermont, and
also a man with a scientific reputation, moved the family to Paris in 1631,
partly to pursue his own scientific studies, partly to carry on the education of
his only son, who had already displayed exceptional ability. Blaise was kept
at home in order to ensure his not being overworked, and it was directed
that his education should be at first confined to the study of languages, and
should not include any mathematics. Young Pascal was very curious, one
day at the age of twelve while studying with his tutor, he asked about the
study of geometry. After this he began to give up his play time to pursue the
study of geometry. After only a few weeks he had mastered many properties
of figures, in particular the proposition that the sum of the angles of a
triangle is equal to two right angles. His father noticed his son's ability in
mathematics and gave him a copy of Euclid's Elements, a book which
Pascal read and soon mastered. At the young age of fourteen he was
admitted to the weekly meetings of Roberval, Mersenne, Mydorge, and
other French geometricians. At the age of sixteen he wrote an essay on
conic sections; and in 1641, at the age of 18, he constructed the first
arithmetical machine, an instrument with metal dials on the front on which
the numbers were entered. Once the entries had been completed the answer
would be displayed in small windows on the top of the device. This device
was improved eight years later. His correspondence with Fermat about this
time shows that he was then turning his attention to analytical geometry
and physics. At this time he repeated Torricelli's experiments, by which the
pressure of the atmosphere could be estimated as a weight, and he
confirmed his theory of the cause of barometrical variations by obtaining at
the same instant readings at different altitudes on the hill of Puy-de-Dôme.
A strange thing about Pascal was that in 1650 he stopped all his research
and his favorite studies to begin the study of religion, or, as he says in his
Pensées, to "contemplate the greatness and the misery of man." Also about this
time he encouraged the younger of his two sisters to enter the Port Royal
society. In 1653 after the death of his father he returned to his old studies
again, and made several experiments on the pressure exerted by gases and
liquids; it was also about this period that he invented the arithmetical
triangle, and together with Fermat created the calculus of probabilities. At
this time he was thinking about getting married, but an accident caused him
to return to his religious life. While he was driving a four-horse carriage, the
two lead horses ran off the bridge. The only thing that saved him was the
traces breaking. Always somewhat of a mystic, he considered this a special
summons to abandon the world of science and return to his studies of
religion. He wrote an account of the accident on a small piece of paper,
which for the rest of his life he wore next to his heart, to remind him of his
covenant. Shortly after the accident he moved to Port Royal, where he
continued to live until his death in 1662. Besides the arithmetical machine
and Pascal's Theorem, Pascal also created the Arithmetical Triangle in 1653
and did his work on the theory of probabilities in 1654.
f:\12000 essays\sciences (985)\Math\Calculus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Calculus
"One of the greatest contributions to modern mathematics, science, and engineering was
the invention of calculus near the end of the 17th century," says The New Book of Popular
Science. Without the invention of calculus, many technological accomplishments, such as the
landing on the moon, would have been difficult.
The word "calculus" originated from the Latin word meaning pebble. This is probably
because people many years ago used pebbles to count and do arithmetic problems.
The two people with an enormous contribution to the discovery of the theorems of
calculus were Sir Isaac Newton of England and Baron Gottfried Wilhelm Leibniz of Germany. They
discovered these theorems during the 17th century within a few years of each other.
Isaac Newton was considered one of the great physicists of all time. He applied calculus to
his theories of motion and gravitational pull. He was able to discover a function and
describe mathematically the motion of all objects in the universe.
Calculus was invented to help solve problems dealing with "changing or
varying" quantities. Calculus is considered "mathematics of change."
There are some basic or general parts of calculus. Some of these are functions,
derivative, antiderivatives, sequences, integral functions, and multivariate calculus.
Some believe that calculus is too hard or impossible to learn without much memorization,
but if you think that calculus is all memorizing, then you will miss the point of learning
calculus. People say that calculus is just a revision or expansion of old or basic equations, and I
believe that as well.
In economics and business there are some uses for calculus. One important application of
integral calculus in business is the evaluation of the area under a function. This can be used
in a probability model. Probability is another use of integral calculus in business, because you
can find how often something will appear in a certain range in a certain time. A function used
for probability is the uniform distribution: f(x) = 1 / (b - a) for a <= x <= b. Some
economics uses are figuring marginal and total cost; the relation is TC = ∫MC = TVC + FC
(the integral of marginal cost gives total variable cost plus fixed cost).
Another is the demand for a product, for example the demand for beer, which brings in different
variables to model how much beer is consumed. The function is a multivariate function f(m, p, r, s) =
(1.058)(m^.136)(p^-.727)(r^.914)(s^.816) where
m = aggregate real income : p = average retail price of beer
r = average retail price level of all other consumer goods
s = measure of strength of beer (how consumers like it)
As you can see, if everything but r stays constant, then the demand will go up as r goes up.
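The two formulas just quoted can be evaluated directly. The sketch below is illustrative only; the function names and every number passed to them are made-up assumptions, not data from the sources listed at the end of this essay.

    # Sketch of the two formulas quoted above, evaluated with made-up inputs.
    def uniform_density(x, a, b):
        # f(x) = 1 / (b - a) for a <= x <= b, and 0 outside that range
        return 1 / (b - a) if a <= x <= b else 0.0

    def beer_demand(m, p, r, s):
        # the multivariate demand function quoted in the text
        return 1.058 * m ** 0.136 * p ** -0.727 * r ** 0.914 * s ** 0.816

    print(uniform_density(5, a=0, b=10))              # 0.1
    print(beer_demand(m=1000, p=2.5, r=1.2, s=1.0))   # rises if only r rises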
To learn calculus you need to know some frequently used terms. The
derivative is the fundamental concept of calculus: it measures how things change (for example,
instantaneous velocity). Functions are used in all applications. A function is an equation with one or
more variables in which each x value produces only one y value. Also you
will need to learn and memorize some theorems and identities to be able to expand and break down
equations.
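As a small illustration of the derivative as a rate of change, the sketch below estimates an instantaneous velocity numerically with a difference quotient. The falling-object position function is an assumed example, not one taken from the sources below.

    # Estimate instantaneous velocity as the difference quotient of a position
    # function over a very small interval (the idea behind the derivative).
    def position(t):
        return 4.9 * t ** 2        # metres fallen after t seconds (assumed example)

    def velocity(t, h=1e-6):
        return (position(t + h) - position(t)) / h

    print(round(velocity(3.0), 2))  # about 29.4 m/s, matching the derivative 9.8 * t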
Sources:
The New Book of Popular Science, "Calculus," pp. 431-433.
Goldstein, Larry J., David C. Lay, and David L. Schneider. Brief Calculus and Its Applications.
McGraw-Hill Encyclopedia of Science and Technology, #3, ed. 7.
Lang, Serge. A First Course in Calculus. Addison-Wesley Publishing Company.
Childress, Robert L. Calculus for Business and Economics. Prentice Hall Inc., Englewood Cliffs, New Jersey.
f:\12000 essays\sciences (985)\Math\Carl Friedrich Gauss.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Carl Friedrich Gauss was a German mathematician and scientist who
dominated the mathematical community during and after his lifetime. His
outstanding work includes the discovery of the method of least squares,
the discovery of non-Euclidean geometry, and important contributions to
the theory of numbers.
Born in Brunswick, Germany, on April 30, 1777, Johann Carl
Friedrich Gauss showed early and unmistakable signs of being an extraordinary
youth. As a child prodigy, he was self-taught in the fields of reading
and arithmetic. Recognizing his talent, the Duke of Brunswick in 1792
accelerated the boy's studies by providing him with a
stipend to allow him to pursue his education.
In 1795, he continued his mathematical studies at the University
of Göttingen. In 1799, he obtained his doctorate in absentia from the
University of Helmstedt, for providing the first reasonably complete
proof of what is now called the fundamental theorem of algebra. He
stated that: Any polynomial with real coefficients can be factored into
the product of real linear and/or real quadratic factors.
At the age of 24, he published Disquisitiones arithmeticae, in
which he formulated systematic and widely influential concepts and
methods of number theory -- dealing with the relationships and
properties of integers. This book set the pattern for much future
research and won Gauss major recognition among mathematicians. Using
number theory, Gauss proposed an algebraic solution to the geometric
problem of constructing a polygon of n sides. Gauss demonstrated the possibility
by inscribing a regular 17-sided polygon in a circle using only a
straightedge and compass.
Barely 30 years old, already having made landmark discoveries in
geometry, algebra, and number theory Gauss was appointed director of the
Observatory at Göttingen. In 1801, Gauss turned his attention to
astronomy and applied his computational skills to develop a technique
for calculating orbital components for celestial bodies, including the
asteroid Ceres. His methods, which he describes in his book Theoria
Motus Corporum Coelestium, are still in use today. Although Gauss made
valuable contributions to both theoretical and practical astronomy, his
principal work was in mathematics and mathematical physics.
About 1820 Gauss turned his attention to geodesy -- the
mathematical determination of the shape and size of the Earth's surface
-- to which he devoted much time in the theoretical studies and field
work. In his research, he developed the heliotrope to secure more
accurate measurements, and introduced the Gaussian error curve, or bell
curve. To fulfill his sense of civil responsibility, Gauss undertook a
geodetic survey of his country and did much of the field work himself.
In his theoretical work on surveying, Gauss developed results he needed
from statistics and differential geometry.
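The Gaussian error curve mentioned above has a simple closed form, the familiar bell curve. The sketch below evaluates it; the mean and standard deviation values used are assumptions for illustration only.

    from math import exp, pi, sqrt

    # The Gaussian (normal) error curve: the density of a measurement error x
    # around a mean mu with standard deviation sigma.
    def gaussian(x, mu=0.0, sigma=1.0):
        return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

    print(round(gaussian(0.0), 4))   # 0.3989, the peak of the standard bell curve
    print(round(gaussian(1.0), 4))   # 0.242, one standard deviation away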
Most startling among the unpublished discoveries of Gauss is that
of non-Euclidean geometry. With a fellow student at Göttingen, he
discussed attempts to prove Euclid's parallel postulate -- Through a
point outside of a line, one and only one line exists which is parallel
to the first line. The closer he came to proving the postulate, the closer
he came to non-Euclidean geometry, and by 1824, he had concluded that it
was possible to develop a geometry based on the denial of the postulate.
He did not publish this work, conceivably due to its controversial
nature.
Another striking discovery was that of noncommutative algebras,
which it is now known Gauss had anticipated by many years but, again,
failed to publish.
In the 1820s, in collaboration with Wilhelm Weber, he explored
many areas of physics. He did extensive research on magnetism, and his
applications of mathematics to both magnetism and electricity are among
his most important works. He also carried out research in the field of
optics, particularly in systems of lenses. In addition, he worked with
mechanics and acoustics which enabled him to construct the first
telegraph in 1833.
Scarcely a branch of mathematics or mathematical physics was
untouched by this remarkable scientist, and in whatever field he
labored, he made unprecedented discoveries. On the basis of his
outstanding research in mathematics, astronomy, geodesy, and physics, he
was elected as a fellow in many academies and learned societies. On
February 23, 1855, Gauss died an honored and much celebrated man for his
accomplishments.
f:\12000 essays\sciences (985)\Math\Carl Gauss.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Karl Gauss
~Biography~
Karl Gauss lived from 1777 to 1855. He was a German mathematician, physicist, and astronomer. He was born in Braunschweig, Germany, on April 30th, 1777. His family was poor and uneducated. His father was a gardener and a merchant's assistant.
At a young age, Gauss taught himself how to read and count, and it is said that he spotted a mistake in his father's calculations when he was only three. Throughout the rest of his early schooling, he stood out remarkably from the rest of the students, and his teachers persuaded his father to train him for a profession rather than have him learn a trade.
His skills were noticed while he was in high school, and at age 14 he was sent to the Duke of Brunswick to demonstrate his ability. The Duke was so impressed by this boy that he offered him a grant that lasted from then until the Duke's death in 1806.
Karl began to study at the Collegium Carolinum in 1792. He went on to the University of Gottingen, and by 1799 was awarded his doctorate from the University. However, by that time most of his significant mathematical discoveries had been made, and he took up his interest in astronomy in 1801.
By about 1807, Gauss began to gain recognition from countries all over the world. He was invited to work in Leningrad, was made a member of the Royal Society in London, and was invited membership to the Russian and French Academies of Sciences. However, he remained in his hometown in Germany until his death in 1855.
~Accomplishments~
During his teen years, Karl Gauss developed many mathematical theories and proofs, but these would not be recognized for decades because of his lack of publicity and publication experience. He discovered what we now call Bode's Law, and the method of least squares, which we use to find the best-fitting curve to a group of observations.
Having just finished some work in quadratic residues in 1795, Karl Gauss moved to the University to access the works of previous mathematicians. He quickly began work on a book about the theory of numbers, which is seen as his greatest accomplishment. This book was a summary of the work that had been established up to the time, and contained questions that are still relevant today.
While at the University in 1796, he discovered that a 17-sided polygon could be inscribed in a circle with only the tools of a compass and a ruler. This marked the first such discovery in Euclidean construction in 2,000 years.
In 1799, Gauss found and proved a theorem of algebra that is fundamental today: that every algebraic equation has a root of the form a+bi. In this, a and b are real numbers, while 'i' is the square root of -1. He demonstrated that numbers of this form, which are called complex numbers, can be represented as points on a plane.
During the next 10 years, Gauss concentrated on astronomy. Astronomy was different because he had several collaborators to work with, while in mathematics he worked alone. In 1801 when Giuseppe Piazza discovered Ceres, the first asteroid, it gave Gauss a chance to use his mathematical skill. Through only three observations he found a method of calculating the orbit of an asteroid, and it was published in 1809. For this, the 1001st planetoid discovered was named Gaussia in his honor.
Karl Gauss also pioneered in the fields of topography, crystallography, optics, mechanics, and capillarity. While at the University, he invented the heliotrope, which was an instrument that allowed more precise calculations of the shape of the earth. Then, in 1831, he teamed up with Wilhelm Weber, and together they produced an electromagnetic telegraph. Also, Gauss developed logical sets of units for magnetic phenomena; thus the unit of magnetic flux density is named after him.
f:\12000 essays\sciences (985)\Math\CTBS EDITION NUMBER 4 Mathematic Concepts and Applications .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CTBS EDITION NUMBER 4: Mathematic Concepts and Applications
(KEY)
1)B 2)J 3)D 4)J 5)A 6)G 7)A 8)H 9)C 10)F
11)C 12)D 13)D 14)G 15)B 16)F 17)C 18)H 19)A 20)H
21)C 22)F 23)B 24)G 25)B 26)H 27)A 28)H 29)B 30)J
31)A 32)H 33)C 34)H 35)B 36)H 37)C 38)F 39)C 40)G
41)C 42)J 43)B 44)G 45)D 46)G 47)A 48)F 49)D 50)H
f:\12000 essays\sciences (985)\Math\Eddie Vedder is a Vampire .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Although at first he may seem to be just your average angst-ridden lead man for a popular rock and roll band, Eddie Vedder, the vocalist and lyricist for Pearl Jam, may very well be a vampire. Although it is impossible to tell, everything points to his being an immortal. An in-depth analysis of his lyrics shows that Pearl Jam's second album, "Versus", has been used by Vedder as a sounding board for the complex emotions and change of perspective that come with one's transition to vampirism. Other lyricists have used vampiric images before - for instance Sting, in Moon Over Bourbon Street, which was written in first person - but Vedder is unique in that his lyrics evolve over time as being indicative of his vampiric state. Either he has become a vampire, he believes himself to be a vampire, or he is leading a fictional double life, from which he draws inspiration for his lyrics.
What exactly is a vampire? Numerous myths, folk tales, and works of fiction exist on the matter of what makes up a vampire, but if they do exist, vampires have been incredibly careful to conceal their presence from most people (supposedly following a law known as the Masquerade), and very little is known about them definitively. However, some basic facts are common to most sources. These are: vampires drink blood, vampires live forever if not killed, and vampires undergo grievous bodily harm if exposed to sunlight; this normally kills them.
Many other things about vampires, such as their aversion to garlic, their superhuman abilities, and their prohibition on entering abodes unless invited, are mentioned in some sources and not others, and so it is unclear as to how much of this applies to real vampires, and how much is pure myth.
Eddie's vampiric tendencies became apparent in the lyrics to "Versus", Pearl Jam's second album. Pearl Jam's first album, "Ten", contains no real evidence of vampirism, and his lyric writing style is subtly different from that in "Versus". In "Ten", the lyrics are often in ballad form, generally relating tales of normal people. The songs Jeremy, Alive, Deep, and Black were all number one hits in the U.S. from "Ten". Eddie was not writing about himself in these songs, and was only assuming personas for the narrative, a standard device for composers of fiction of any kind. Thus, the lyrics were simply Eddie's view of the world around him, incorporating characters and situations which he could relate to.
Eddie's lyric writing style had changed considerably in the second album, "Versus". Although he still wrote some songs similar to those on "Ten", expounding upon the specific lives of characters and the situations they encountered (i.e. Daughter), there is also a tendency for social commentary. The general trend in "Versus" is for the lyrics to offer a critical view of human society, often comparing it to vampiric society. It would seem that at this stage, Eddie had become aware of the existence of vampires, and had been offered the chance to become one of them. This is corroborated by the lyrics.
Eddie views vampires as a different "species" from humans, with a different society, customs, and moral code. Many of the lyrics on "Versus" are attempts by Eddie to compare the two "species", humans and vampires. A general disgust with the human race and its customs is evident, and Eddie is considering vampirism as an alternative to all that he dislikes about human existence. The song Rats is a good example. At first it would seem to be a comparison of humans with rats, but even a brief glance at the lyrics would indicate that several qualities are mentioned common to both rats and humans: "they don't eat, don't sleep". The correct interpretation becomes clear when one considers Eddie's comparison of humanity with vampirism. In the song, the humans are represented by rats, and vampires by "they". It is essentially a list of all the things bad about the human race, which Eddie hopes to be rid of through the change to vampirism:
"they don't... lick the dirt off a larger one's feet
they don't push
don't crowd
congregate until they're much too loud
fu#? to procreate 'till they are dead
drink the blood of their so called best friend"
While the last line may appear to contradict the vampiric interpretation, in fact it strengthens it. Most known vampiric codes strictly prohibit the drinking of a fellow vampire's blood (known as "diablerie"), and tales exist of vampires being ostracized for it.
Several of the other songs on "Versus" have vampiric interpretations. Animal is indicative of Vedder's disgust with the human race; he'd "rather be with an animal" than with a human. W.M.A. is also a song of general disgust with human society, focusing on the race conflict in the United States of America. By becoming a vampire, Eddie hopes to distance himself from this sort of persecution. Essentially Eddie is trying to escape from his responsibility as a human by becoming a vampire. Indifference shows Eddie's final considerations of vampiric society, although he remains cynical. However, it is clear that he has made his decision ("soon light will be gone" and "but I won't change my mind"). The vampiric implications are the most clear in the second verse:
"I will hold the cradle
'til it burns up my arm
I'll keep taking punches
'til their arms grow tired
I will stare the sun down
until my eyes go blind
hey, I won't change direction
and I won't change my mind"
This verse deals with one's conversion to vampirism, the exact process of which isn't known for sure, but Eddie's version seems to confirm the most popular rumors, which hold that a vampire (the sire) must first drain the prospective vampire's blood, killing the victim, who then must drink of the sire's blood, or remain dead forever. Thus, the conversion from human to vampire involves dying, but remaining animate after death. This is what Eddie is describing in the second verse, although he has varied the cause of death for the sake of poetry, and in keeping with the Masquerade. The chorus, however, shows an increasing cynicism with vampirism: "how much difference does it make?".
Indifference was Eddie's last song before his conversion, a romantic attempt to crystallize his last thoughts as a human. The reality turned out to be much less sedate, as is evidenced by Blood. Apparently the bloodletting wasn't as clean as imagined: "my blood... drains and spills, soaks the pages, fills their sponges". The song, musically primal and violent, is as much a homage to Eddie's last remaining drops of human blood, ("It's my blood" repeated over the thrashing guitars and drums), as it is to his violent conversion. The greatest indication of Eddie's vampirism, though, is on the lyric sheet of "Versus" before Blood, on which Eddie scribbles:
"This meeting is driving me crazy... changing me
I will never trust anyone again...
[unintelligible]... in a different light... Biting the bullet
SWALLOW
You've blocked out the sun
You're killing my only flower
I've studied this question... now I study this answer"
Although the exact events of Eddie's conversion can only be guessed at, it was obviously a harrowing affair, and one which affected Eddie deeply. It seems that Eddie's perception had forever changed, which is evident in further songs about vampirism on "Versus."
Three other songs on Versus would seem to have been written after Eddie's conversion to a vampire: Elderly Woman Behind the Counter in a Small Town, Leash, and Rearviewmirror.
Rearviewmirror is the companion song of Blood, dealing also with Eddie's conversion. However, while Blood is a description of the encounter at which Eddie was changed, Rearviewmirror relates Eddie's feelings after he has had a chance to adjust to his new condition. Throughout the song, a car trip is used as an analogy of Eddie's transformation, "I took a drive today, time to emancipate". Eddie remains cynical about the experience, "I'm not about to give thanks, or apologize," and he describes his transformation once more in retrospect:
"I couldn't breath
holding me down
hand on my face
enmity gauged
knotted by fear
forced to endure
what I could not forgive
head at your feet
fool to your crown
fist on my plate
swallowed it down"
The last four lines reflect Eddie's diminutive status when compared to his sire, especially during the humiliating conversion. Since drinking from his sire was necessary, the wrist was obviously offered ("fist"). After the conversion, vampires remain physically the same as before, hence "it wasn't my surface most defiled". Eddie obviously fled his sire after the meeting:
"I gather speed
from you fu#?ing with me
once and for all
I'm far away
hardly believe
finally the shades are raised"
The last two lines reflect the new perspective that Eddie has gained from his newfound state.
Elderly Woman Behind the Counter in a Small Town is a relatively sedate song, and relates Eddie's thoughts on seeing an old friend, someone whom he had only known before his conversion. The vampiric link is tenuous, and relies on the fact that physically Vampires remain exactly as they were when first converted: "lifetimes are catching up with me, all these changes taking place," implies that Eddie has already noticed how others around him have changed, while he hasn't: "I changed by not changing at all." The irony is that he has changed more than anybody else.
Leash is also a look back at Eddie's former life, comparing humanity and vampirism from the other side of the fence. "Troubled souls untie, we've got ourselves tonight," and, "we got the means to make amends," would seem to indicate that Eddie is ready to make a life without life, entering a vampire society and leaving humanity behind. However, he has trouble adjusting and hence, "I am lost." However, he is confident of eventually settling down: "will myself to find a home... we will find a way, we will find a place." At the end of the song, Eddie sings, "the lights, the lights" displaying his newfound sensitivity to sunlight, and then sings, "I used" with the same melody as he sang, "I proved to be a man," leading to the obvious statement, "I used to be a man."
"Versus" represents the early stages of Eddie's vampirism, from his initial consideration of the idea, to his conversion and subsequent disillusionment, and his beginning to come to terms with what has happened to him. However, several other songs not related to vampirism are also featured on the album, either written before Eddie begun brooding over the matter, or as a form of artistic relief from his transformation physically, mentally, and emotionally. Also, the order of the songs on the album isn't chronological, something which may have something to do with the Masquerade, but probably has more to do with the arrangement of songs int
f:\12000 essays\sciences (985)\Math\electricity.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Electricity
I=current, and is measured in Amps.
R=resistance, and is measured in Ohms.
P=power, and is measured in watts.
E=electromotive force (voltage), and is measured in volts.
1 horsepower is equal to 746 watts.
Cost for electricity is based on a kilowatt hour.
A voltmeter is always placed in parallel with the lines of a circuit.
In a series circuit, the voltage is divided at each outlet.
Voltage acts like a force, because it moves electrons through a circuit.
AC current changes direction 60 times per second.
The normal voltage supplied to homes is 220.
DC power can only be stepped down.
AC power could be either stepped up, or stepped down.
An ammeter is hooked up in series.
Electrical formulas (the power triangle: P on top, E and I below; cover the
quantity you want and the remaining two give the formula).
E=IxR find volts
P=ExI find watts
R=E/I find ohms
I=P/E find amps
*Example problem*
I have a 5hp motor wired at 220V, 3 ohms. How many amps does it draw and
what size wire do I need to run it?
5 hp x 746 watts = 3730 watts
formula: I = P / E
3730 / 220 = 16.95 amps
#12 wire (refer to the wire size chart below)
Wire Size Chart
15 amps=#14 wire
20 amps=#12 wire
30 amps=#10 wire
40 amps=#8 wire
50 amps=#6 wire
60 amps=#4 wire
70 amps=#2 wire
*note - If wiring in an appliance with a heating element, drop down one
wire size. Ex. if the amperage draw requires a #8 wire, drop to a #6
wire.
*Example problem.*
What size wire do I need for a stove/oven with 4 top burners
@600 watts ea., the top heating element @2600 watts, and the bottom heating
element @2000 watts? (220V)
2400 + 2600 + 2000 = 7000 watts
7000 / 220 = 31.81 amps = 32 amps (rounded to a whole number)
#8 wire, but you are using a heating element, so drop to a #6 wire.
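Both worked examples follow the same recipe: convert the load to watts, divide by the voltage to get amps (I = P/E), then read a gauge off the wire size chart, dropping one size for a heating element. A minimal sketch of that recipe, using only the chart and formulas given above (the function names are assumptions for illustration):

    # Amps from watts and volts, then a wire gauge from the chart in the text.
    WIRE_CHART = [(15, 14), (20, 12), (30, 10), (40, 8), (50, 6), (60, 4), (70, 2)]
    HP_TO_WATTS = 746

    def amps(watts, volts=220):
        return watts / volts                          # I = P / E

    def wire_gauge(amp_draw, heating_element=False):
        for i, (max_amps, gauge) in enumerate(WIRE_CHART):
            if amp_draw <= max_amps:
                if heating_element and i + 1 < len(WIRE_CHART):
                    gauge = WIRE_CHART[i + 1][1]      # drop down one wire size
                return gauge
        raise ValueError("load exceeds the chart")

    motor = amps(5 * HP_TO_WATTS)                     # the 5 hp motor example
    stove = amps(4 * 600 + 2600 + 2000)               # the stove/oven example
    print(round(motor, 2), "A ->", "#%d wire" % wire_gauge(motor))
    print(round(stove, 2), "A ->", "#%d wire" % wire_gauge(stove, heating_element=True))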
f:\12000 essays\sciences (985)\Math\Euclidean Geometry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
EUCLIDEAN GEOMETRY
Geometry was thoroughly organized in about 300 BC, when the Greek
mathematician Euclid gathered what was known at the time, added original work of his
own, and arranged 465 propositions into 13 books, called 'Elements'. The books covered
not only plane and solid geometry but also much of what is now known as algebra,
trigonometry, and advanced arithmetic.
Through the ages, the propositions have been rearranged, and many of the
proofs are different, but the basic idea presented in the 'Elements' has not changed. In the
work, facts are not just cataloged but are developed in an orderly, logical fashion.
Even in 300 BC, geometry was recognized to be not just for mathematicians.
Anyone can benefit from the basic lessons of geometry, which are how to follow lines of
reasoning, how to say precisely what is intended, and especially how to prove basic
concepts by following these lines of reasoning. Taking a course in geometry is beneficial
for all students, who will find that learning to reason and prove convincingly is necessary
for every profession. It is true that not everyone must prove things, but everyone is
exposed to proof. Politicians, advertisers, and many other people try to offer convincing
arguments. Anyone who cannot tell a good proof from a bad one may easily be persuaded
in the wrong direction. Geometry provides a simplified universe, where points and lines
obey believable rules and where conclusions are easily verified. By first studying how to
reason in this simplified universe, people can eventually, through practice and
experience, learn how to reason in a complicated world.
Geometry in ancient times was recognized as part of everyone's education. Early
Greek philosophers asked that no one come to their schools who had not learned the
'Elements' of Euclid. There were, and still are, many who resisted this kind of education.
It is said that Ptolemy I asked Euclid for an easier way to learn the material. Euclid told
him there was no "royal road" to geometry; instead, he told Ptolemy that without the work he would not learn
what geometry is all about. What one learns from a shortcut is only the basic shapes of some of the figures
dealt with in geometry and a few facts about them. It takes a geometry course, with
textbook and teacher, to show the complete and orderly arrangement of the facts and how
each is proved.
f:\12000 essays\sciences (985)\Math\Fibonacci Numbers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Fibonacci numbers were first discovered by a man named Leonardo
Pisano. He was known by his nickname, Fibonacci. The Fibonacci sequence is a
sequence in which each term is the sum of the 2 numbers preceding it. The first 11
Fibonacci numbers are: (1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89). The sequence is
defined recursively.
Fibonacci was born around 1170 in Italy, and he died around 1240 in Italy.
He played an important role in reviving ancient mathematics and made significant
contributions of his own. Even though he was born in Italy he was educated in
North Africa where his father held a diplomatic post. He did a lot of traveling with
his father. He published a book called Liber abaci, in 1202, after his return to Italy.
This book was the first time the Fibonacci numbers had been discussed. It was
based on bits of Arithmetic and Algebra that Fibonacci had accumulated during his
travels with his father. Liber abaci introduced the Hindu-Arabic place-valued
decimal system and the use of Arabic numerals into Europe. This book, though,
was somewhat controversial because it contradicted and even proved some of the
foremost Roman and Grecian mathematicians of the time to be false. He published
many famous mathematical books. Some of them were Practica geometriae in
1220 and Liber quadratorum in 1225.
The Fibonacci sequence is also found in Pascal's triangle.
The sum of each diagonal row is a
Fibonacci number. They also appear in the right sequence: 1, 1, 2, 3, 5, 8, ...
The Fibonacci sequence has been a big factor in many patterns of things in nature.
It has been found that the fractions u/v representing the screw-like arrangement of
leaves quite often are members of the Fibonacci sequence. On many plants, the
number of petals is a Fibonacci number: buttercups have 5 petals; lilies and iris
have 3 petals; some delphiniums have 8; corn marigolds have 13 petals; some asters
have 21, whereas daisies can be found with 34, 55 or even 89 petals. Fibonacci
numbers also appear with animals. The first problem Fibonacci considered when using the
Fibonacci numbers was trying to figure out how fast rabbits could breed in
ideal circumstances. Using the sequence he was able to approximate the answer.
The Fibonacci numbers can also be found in many other patterns. The diagram
below is what is known as the Fibonacci spiral. We can make another picture
showing the Fibonacci numbers 1,1,2,3,5,8,13,21,.. if we start with two small
squares of size 1, one on top of the other. Now on the right of these draw a square
of size 2 (=1+1). We can now draw a square on top of these, which has sides 3
units long, and another on the left of the picture which has side 5.
We can continue adding squares around the picture, each new square
having a side which is as long as the sum of the latest two squares
drawn.
If we take the ratio of two successive numbers in Fibonacci's series,
(1, 1, 2, 3, 5, 8, 13, ...) we find:
1/1=1; 2/1=2; 3/2=1.5; 5/3=1.666...; 8/5=1.6; 13/8=1.625;
It is easier to see what is happening if we plot the ratios on a graph.
The ratio seems to be settling down to a particular value, which the
Greeks called the golden ratio and has the value 1.61803. It has some interesting
properties, for instance, to square it, you just add 1. To take its reciprocal, you just
subtract 1. This means all its powers are just whole multiples of itself plus another
whole integer (and guess what these whole integers are? Yes! The Fibonacci
numbers again!) Fibonacci numbers are a big factor in math, the golden ratio,
Pascal's triangle, the reproduction of many species and plants, and much, much
more.
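A short sketch of the two observations above: successive Fibonacci ratios settle toward the golden ratio, and the golden ratio's square and reciprocal differ from it by exactly 1. The twelve-term cutoff is just an arbitrary choice for illustration.

    # Fibonacci numbers, the ratios of successive terms, and the golden ratio.
    def fibonacci(n):
        seq = [1, 1]
        while len(seq) < n:
            seq.append(seq[-1] + seq[-2])
        return seq

    fibs = fibonacci(12)
    ratios = [fibs[i + 1] / fibs[i] for i in range(len(fibs) - 1)]
    print(fibs)                  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144
    print(round(ratios[-1], 5))  # 1.61798, already close to 1.61803

    phi = (1 + 5 ** 0.5) / 2
    print(round(phi ** 2, 5), round(phi + 1, 5))   # squaring it just adds 1
    print(round(1 / phi, 5), round(phi - 1, 5))    # its reciprocal just subtracts 1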
f:\12000 essays\sciences (985)\Math\Fractal Geometry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fractal Geometry
"Fractal Geometry is not just a chapter of mathematics, but one that
helps Everyman to see the same old world differently". - Benoit
Mandelbrot
The world of mathematics usually tends to be thought of as abstract.
Complex and imaginary numbers, real numbers, logarithms, functions, some
tangible and others imperceptible. But these abstract numbers, simply
symbols that conjure an image, a quantity, in our mind, and complex
equations, take on a new meaning with fractals - a concrete one.
Fractals go from being very simple equations on a piece of paper to
colorful, extraordinary images, and most of all, offer an explanation to
things. The importance of fractal geometry is that it provides an
answer, a comprehension, to nature, the world, and the universe.
Fractals occur in swirls of scum on the surface of moving water, the
jagged edges of mountains, ferns, tree trunks, and canyons. They can be
used to model the growth of cities, detail medical procedures and parts
of the human body, create amazing computer graphics, and compress
digital images. Fractals are about us, and our existence, and they are
present in every mathematical law that governs the universe. Thus,
fractal geometry can be applied to a diverse palette of subjects in
life, and science - the physical, the abstract, and the natural.
We were all astounded by the sudden revelation that the output of a
very simple, two-line generating formula does not have to be a dry and
cold abstraction. When the output was what is now called a fractal,
no one called it artificial... Fractals suddenly broadened the realm
in which understanding can be based on a plain physical basis.
(McGuire, Foreword by Benoit Mandelbrot)
A fractal is a geometric shape that is complex and detailed at every
level of magnification, as well as self-similar. Self-similarity is
something looking the same over all ranges of scale, meaning a small
portion of a fractal can be viewed as a microcosm of the larger fractal.
One of the simplest examples of a fractal is the snowflake. It is
constructed by taking an equilateral triangle and, over many iterations,
adding smaller triangles at increasingly smaller sizes, resulting in
a "snowflake" pattern, sometimes called the von Koch snowflake. The
theoretical result of multiple iterations is the creation of a finite
area with an infinite perimeter, meaning the dimension is
incomprehensible. Fractals, before that word was coined, were simply
considered above mathematical understanding, until experiments were done
in the 1970's by Benoit Mandelbrot, the "father of fractal geometry".
Mandelbrot developed a method that treated fractals as a part of
standard Euclidean geometry, with the dimension of a fractal being an
exponent.
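A small sketch of the von Koch construction just described, and of Mandelbrot's idea of the dimension as an exponent: each iteration replaces every edge with four edges a third as long, so the perimeter grows without bound while the figure stays in a bounded area, and the dimension log 4 / log 3 falls between 1 and 2. The function name and iteration count are choices made here for illustration.

    from math import log

    # Perimeter of the von Koch snowflake after n iterations: every edge is
    # replaced by 4 edges, each 1/3 as long, starting from an equilateral triangle.
    def koch_perimeter(n, side=1.0):
        edges, length = 3, side
        for _ in range(n):
            edges *= 4
            length /= 3
        return edges * length

    for n in range(6):
        print(n, round(koch_perimeter(n), 3))   # 3.0, 4.0, 5.333, 7.111, ...

    # Treating the dimension as an exponent: log(pieces) / log(magnification)
    print(round(log(4) / log(3), 4))            # 1.2619, between 1 and 2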
Fractals pack an infinity into "a grain of sand". This infinity appears
when one tries to measure them. The resolution lies in regarding them
as falling between dimensions. The dimension of a fractal in general
is not a whole number, not an integer. So a fractal curve, a
one-dimensional object in a plane which has two dimensions, has a fractal
dimension that lies between 1 and 2. Likewise, a fractal surface has a
dimension between 2 and 3. The value depends on how the fractal is
constructed. The closer the dimension of a fractal is to its possible
upper limit, which is the dimension of the space in which it is embedded,
the rougher, the more filling of that space it is. (McGuire, p. 14)
Fractal Dimensions are an attempt to measure, or define the pattern, in
fractals. A zero-dimensional universe is one point. A one-dimensional
universe is a single line, extending infinitely. A two-dimensional
universe is a plane, a flat surface extending in all directions, and a
three-dimensional universe, such as ours, extends in all directions. All
of these dimensions are defined by a whole number. What, then, would a
2.5 or 3.2 dimensional universe look like? This is answered by fractal
geometry, the word fractal coming from the concept of fractional
dimensions. A fractal lying in a plane has a dimension between 1 and 2.
The closer the number is to 2, say 1.9, the more space it would fill.
Three-dimensional fractal mountains can be generated using a random
number sequence, and those with a dimension of 2.9 (very close to the
upper limit of 3) are incredibly jagged. Fractal mountains with a
dimension of 2.5 are less jagged, and a dimension of 2.2 presents a
model of about what is found in nature. The spread in spatial frequency
of a landscape is directly related to its fractal dimension.
Some of the best applications of fractals in modern technology are
digital image compression and virtual reality rendering. First of all,
the beauty of fractals makes them a key element in computer graphics,
adding flair to simple text, and
texture to plain backgrounds. In 1987 a mathematician named Michael F.
Barnsley created a computer program called the Fractal Transform, which
detected fractal codes in real-world images, such as pictures which have
been scanned and converted into a digital format. This spawned fractal
image compression, which is used in a plethora of computer applications,
especially in the areas of video, virtual reality, and graphics. The
basic nature of fractals is what makes them so useful. If someone were
rendering a virtual reality environment, each leaf on every tree and
every rock on every mountain would have to be stored. Instead, a simple
equation can be used to generate any level of detail needed. A complex
landscape can be stored in the form of a few equations in less than 1
kilobyte, 1/1440 of a 3.5" disk, as opposed to the same landscape being
stored as 2.5 megabytes of image data (almost 2 full 3.5" disks).
Fractal image compression is a major factor for making the "multimedia
revolution" of the 1990's take place.
Another use for fractals is in mapping the shapes of cities and their
growth.
Researchers have begun to examine the possibility of using mathematical
forms called fractals to capture the irregular shapes of developing
cities.
Such efforts may eventually lead to models that would enable urban
architects to improve the reliability of types of branched or irregular
structures... ("The Shapes of Cities", p. 8)
The fractal mapping of cities comes from the concept of self-similarity.
The number of cities and towns, obviously a city being larger and a town
being smaller, can be linked. For a given area there are a few large
settlements, and many more smaller ones, such as towns and villages.
This could be represented in a pattern such as 1 city, to 2 smaller
cities, 4 smaller towns, 8 still smaller villages - a definite pattern,
based on common sense.
To develop fractal models that could be applied to urban development,
Batty and his collaborators turned to techniques first used in statistical
physics to describe the agglomeration of randomly wandering particles
in two-dimensional clusters... 'Our view about the shape and form of
cities is that their irregularity and messiness are simply a superficial
manifestation of a deeper order'. ("Fractal Cities", p. 9)
Thus, fractals are used again to try to find a pattern in visible
chaos. Using a process called "correlated percolation", very accurate
representations of city
growth can be achieved. The best successes with the fractal city
researchers have been Berlin and London, where a very exact mathematical
relationship that included exponential equations was able to closely
model the actual city growth. The end theory is that central planning
has only a limited effect on cities - that people will continue to live
where they want to, as if drawn there naturally - fractally.
Man has struggled since the beginning of his existence to find the
meaning of life. Usually, he answered it with religion, and a "god".
Fractals are a sort of god of the universe, and prove that we do live in
a very mathematical world. My theory about "god" and existence has
always been that we have finite minds in an infinite universe - that the
answer is there, but we are simply not ever capable of comprehending it,
or creation, and a universe without an end. But, fractals, from their
definition of complex natural patterns to models of growth, seem to be
proving that we are in a finite, definable universe, and that is why
fractals are not about mathematics, but about us.
SOURCES
Magazine Articles:
"The Shapes of Citries: Mapping Out Fractal Models of Urban Growth",
Ivars Peterson,
Science News, January 6, 1996, p. 8-9
"Bordering on Infinity: Focusing on the Mandelbrot set's extraordinary
boundary", Ivars Peterson,
Science News, November 23, 1991, p. 771
"From Surface Scum to fractal swirls", Ivars Peterson,
Science News, January 23, 1993, p. 53
"A better way to compress images", M.F. Barnsley and A.D. Sloan,
Byte, January 1988, p. 215-223.
Books:
McGuire, Michael. An Eye for Fractals.
Addison-Wesley Publishing Company, Reading, Mass., 1991.
World Wide Web Sites:
http://millbrook.lib.rmit.edu.au/fractals/exploring.html
http://www.min.ac.uk/%7Eccdva/
http://www.cis.oio-state.edu/hypertext/faq/uesenet/fractal-faq/faq.html
f:\12000 essays\sciences (985)\Math\Frank Lloyd Wright.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Math
Mr.Fisher
Frank Lloyd Wright
Frank Lloyd Wright was a courageous man in the sense that he was not afraid to accept criticism from people and fellow architects. Throughout his career he faced many types of disagreements. People did not believe that he was sane or normal because his buildings were so radical back then. People started to look at and believe in his work after they saw his first commission, which was the Moore-Dugal house.
Wright was born in the year 1867 on the date June 8th, in Richland Center, Wisconsin. His name was to be Frank Lincoln Wright; the name was Frank's great-grandfather's name. His mother thought it would be a tradition if the name stayed in the family, and so it did.
Wright studied architecture at the University of Wisconsin. He thought that the school was the pits in architecture from 1885-1886. He did not lead the coolest life there but in fact that of a nerd. After school he moved to Chicago in 1887, where he worked and studied with Joseph Lyman Silsbee as an architectural detailer.
f:\12000 essays\sciences (985)\Math\Gauss 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
f:\12000 essays\sciences (985)\Math\Gauss Carl Friedrich.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Gauss was a German scientist and mathematician. People call him the
founder of modern mathematics. He also worked in astronomy and physics.
His work in astronomy and physics is nearly as significant as that in
mathematics. Gauss also worked in crystallography, optics, biostatistics,
and mechanics.
Gauss was born on April 30, 1777 in Brunswick. Brunswick is what is now
called West Germany. He was born to a peasant couple. Gauss's father didn't
want Gauss to go to a University. In elementary school he soon impressed
his teacher, who is said to have convinced Gauss's father that his son
should be permitted to study with a view toward entering a university. In
secondary school nobody recognized his talent for math and science because
he rapidly distinguished himself in ancient languages. When Gauss was 14
he impressed the duke of Brunswick with his computing skill. The duke was
so impressed that he generously supported Gauss until his death in 1806.
Gauss conceived almost all his basic mathematical discoveries between the
ages of 14 and 17. In 1791 he began to do totally new and innovative work
in mathematics. In 1793-94 he did intensive research in number theory,
especially on prime numbers. He made this his life's passion and is regarded
as its modern founder.
Gauss studied at the University of Gottingen from 1795 to 1798. He soon
decided to write a book on the theory of numbers. It appeared in 1801
under the title 'Disquisitiones arithmeticae'. This classic work usually
is held to be Gauss's greatest accomplishment. Gauss discovered on March
30, 1796, that a regular 17-sided polygon can be inscribed in a circle using
only compasses and straightedge, the first such discovery in Euclidean
construction in more than 2,000 years.
His interest turned to astronomy in April 1799, and that field occupied
his attention for the remainder of his life. Gauss set up a speedy method
for the complete determination of the elements of a planet's orbit from
just three observations. He elaborated it in his second major work, a
classic in astronomy, published in 1809. In 1807 he was appointed director
of the University of Gottingen observatory and professor of mathematics,
a position he held for life.
Gauss did research with Wilhelm Weber after 1831. Gauss and Weber's research
was on electricity and magnetism. In 1833 they devised an electromagnetic
telegraph. They stimulated others in many lands to make magnetic observations
and founded the Magnetic Union in 1836.
In conclusion Carl Friedrich Gauss was well versed in the Greek and Roman
classics, studied Sanskrit, and read extensively in European Literature.
In later years he was showered with honors from scientific bodies and
governments everywhere. He died in Gottingen on Feb. 23, 1855.
f:\12000 essays\sciences (985)\Math\Gauss.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Kevin Jean-Charles August 10, 1996
Seq. Math Course 2 Period 1&2
Carl Friedrich Gauss
This report is on Carl Friedrich Gauss. Gauss was a
German scientist and mathematician. People call him the
founder of modern mathematics. He also worked in
astronomy and physics. His work in astronomy and physics
is nearly as significant as that in mathematics. Gauss also
worked in crystallography, optics, biostatistics, and
mechanics.
Gauss was born on April 30, 1777 in Brunswick.
Brunswick is what is now called West Germany. He was
born to a peasant couple. Gauss's father didn't want Gauss
to go to a University. In elementary school he soon
impressed his teacher, who is said to have convinced
Gauss's father that his son should be permitted to study with
a view toward entering a university. In secondary school
nobody recognized his talent for math and science because
he rapidly distinguished himself in ancient languages. When
Gauss was 14 he impressed the duke of Brunswick with his
computing skill. The duke was so impressed that he
generously supported Gauss until his death in 1806.
Gauss conceived almost all his basic mathematical
discoveries between the ages of 14 and 17. In 1791 he
began to do totally new and innovative work in mathematics.
In 1793-94 he did intensive research in number theory,
especially on prime numbers. He made this his life's passion
and is regarded as its modern founder.
Gauss studied at the University of Gottingen from
1795 to 1798. He soon decided to write a book on the
theory of numbers. It appeared in 1801 under the title
'Disquisitiones arithmeticae'. This classic work usually is
held to be Gauss's greatest accomplishment. Gauss
discovered on March 30, 1796, that a regular 17-sided polygon
can be inscribed in a circle using only compasses and
straightedge, the first such discovery in Euclidean
construction in more than 2,000 years.
His interest turned to astronomy in April 1799, and that
field occupied his attention for the remainder of his life.
Gauss set up a speedy method for the complete
determination of the elements of a planet's orbit from just
three observations. He elaborated it in his second major
work, a classic in astronomy, published in 1809. In 1807 he
was appointed director of the University of Gottingen
observatory and professor of mathematics, a position he
held for life.
Gauss did research with Wilhelm Weber after 1831. Gauss
and Weber's research was on electricity and magnetism. In
1833 they devised an electromagnetic telegraph. They
stimulated others in many lands to make magnetic
observations and founded the Magnetic Union in 1836.
In conclusion Carl Friedrich Gauss was well versed in
the Greek and Roman classics, studied Sanskrit, and read
extensively in European Literature. In later years he was
showered with honors from scientific bodies and
governments everywhere. He died in Gottingen on Feb. 23,
1855.
KJC
f:\12000 essays\sciences (985)\Math\Georg Cantor.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I. Georg Cantor
Georg Cantor founded set theory and introduced the concept of infinite numbers with his discovery of cardinal numbers. He also advanced the study of trigonometric series and was the first to prove the nondenumerability of the real numbers.
Georg Ferdinand Ludwig Philipp Cantor was born in St. Petersburg, Russia, on March 3, 1845. His family stayed in Russia for eleven years until the father's sickly health forced them to move to the more acceptable environment of Frankfurt, Germany, where Georg grew up.
Georg excelled in mathematics. His father saw this gift and tried to push his son into the more profitable but less challenging field of engineering. Georg was not at all happy about this idea but he lacked the courage to stand up to his father and relented. However, after several years of training, he became so fed up with the idea that he mustered up the courage to beg his father to become a mathematician. Finally, just before entering college, his father let Georg study mathematics.
In 1862, Georg Cantor entered the University of Zurich only to transfer the next year to the University of Berlin after his father's death. At Berlin he studied mathematics, philosophy and physics. There he studied under some of the greatest mathematicians of the day including Kronecker and Weierstrass. After receiving his doctorate in 1867 from Berlin, he was unable to find good employment and was forced to accept a position as an unpaid lecturer and later as an assistant professor at the University of Halle in 1869. In 1874, he married and had six children.
It was in that same year of 1874 that Cantor published his first paper on the theory of sets. While studying a problem in analysis, he had dug deeply into its foundations, especially sets and infinite sets. What he found baffled him. In a series of papers from 1874 to 1897, he was able to prove that the set of integers had an equal number of members as the set of even numbers, squares, cubes, and roots to equations; that the number of points in a line segment is equal to the number of points in an infinite line, a plane and all mathematical space; and that the number of transcendental numbers, values such as pi(3.14159) and e(2.71828) that can never be the solution to any algebraic equation, were much larger than the number of integers.
Before this, infinity had been a sacred subject in mathematics. Previously, Gauss had stated that infinity should only be used as a way of speaking and not as a mathematical value. Most mathematicians followed his advice and stayed away. However, Cantor would not leave it alone. He considered infinite sets not as merely going on forever but as completed entities, that is, having an actual though infinite number of members. He called these actual infinite numbers transfinite numbers. By considering the infinite sets with a transfinite number of members, Cantor was able to come up with his amazing discoveries. For his work, he was promoted to full professorship in 1879.
However, his new ideas also gained him numerous enemies. Many mathematicians just would not accept his groundbreaking ideas that shattered their safe world of mathematics. One of these critics was Leopold Kronecker. Kronecker was a firm believer that the only numbers were integers and that negatives, fractions, imaginaries and especially irrational numbers had no business in mathematics. He simply could not handle actual infinity. Using his prestige as a professor at the University of Berlin, he did all he could to suppress Cantor's ideas and ruin his life. Among other things, he delayed or suppressed completely Cantor's and his followers' publications, belittled his ideas in front of his students and blocked Cantor's life ambition of gaining a position at the prestigious University of Berlin.
Not all mathematicians were hostile to Cantor's ideas. Some greats such as Karl Weierstrass, and long-time friend Richard Dedekind supported his ideas and attacked Kronecker's actions. However, it was not enough. Cantor simply could not handle it. Stuck in a third-rate institution, stripped of well-deserved recognition for his work and under constant attack by Kronecker, he suffered the first of many nervous breakdowns in 1884.
In 1885 Cantor continued to extend his theory of cardinal numbers and of order types. He extended his theory of order types so that now his previously defined ordinal numbers became a special case. In 1895 and 1897 Cantor published his final double treatise on set theory. Cantor proved that if A and B are sets with A equivalent to a subset of B and B equivalent to a subset of A, then A and B are equivalent. This theorem was also proved by Felix Bernstein and by Schröder.
The rest of his life was spent in and out of mental institutions and his work nearly ceased completely. Much too late for him to really enjoy it, his theory finally began to gain recognition by the turn of the century. In 1904, he was awarded a medal by the Royal Society of London and was made a member of both the London Mathematical Society and the Society of Sciences in Gottingen. He died in a mental institution on January 6, 1918.
Today, Cantor's work is widely used in the many fields of mathematics. His theory on infinite sets reset the foundation of nearly every mathematical field and brought mathematics to its modern form.
II. Infinity
Most everyone is familiar with the infinity symbol, ∞. How many is infinitely many? How far away is "from here to infinity"? How big is infinity?
We can't count to infinity. Yet we are comfortable with the idea that there are infinitely many numbers to count with: no matter how big a number you might come up with, someone else can come up with a bigger one: that number plus one--or plus two, or times two. There simply is no biggest number.
Is infinity a number? Is there anything bigger than infinity? How about infinity plus one? What's infinity plus infinity? What about infinity times infinity? Children, to whom the concept of infinity is brand new, pose questions like these and don't usually get very satisfactory answers. For adults, these questions don't seem to have very much bearing on daily life, so their unsatisfactory answers don't seem to be a matter of concern.
At the turn of the century Cantor applied the tools of mathematical rigor and logical deduction to questions about infinity in search of satisfactory answers. His conclusions are paradoxical to our everyday experience, yet they are mathematically sound. The world of our everyday experience is finite. We can't exactly say where the boundary line is, but beyond the finite, in the realm of the transfinite, things are different.
Sets and Set Theory
Cantor is the founder of the branch of mathematics called Set Theory, which is at the foundation of much of 20th century mathematics. At the heart of Set Theory is a hall of mirrors--the paradoxical infinity. Georg Cantor was known to have said, "I see it, but I do not believe it," about one of his proofs.
The set is the mathematical object which Cantor scrutinized. He defined a set as any collection of well-distinguished and well-defined objects considered as a single whole. A collection of matching dishes is a set, as is a collection of numbers. Even a collection of seemingly unrelated things like {television, aardvark, car, 6} is a set. Its members are well-defined and can be distinguished from one another.
Sets can be large or small. They can also be finite or infinite. A finite set has a finite number of members. No matter how many there are, given enough time, you can count them all. Cantor's surprising results came when he considered sets that had an infinite number of members. Sets such as all of the counting numbers, or all of the even numbers, are infinite sets.
In order to study infinite sets, Cantor first formalized many of the things that are intuitive and obvious about finite sets. At first, it seems like these formalizations are just a whole lot of trouble, a way of making simple things complicated. Because the formalisms are clearly correct, however, they provide a powerful tool for examining things that are not so simple, intuitive or obvious.
Cantor needed a way to compare the sizes of sets, some method for determining whether sets had the same number of members. If two sets didn't have the same number of members, he needed a method for telling which one was larger. Of course this is simple for finite sets. You count the members in both sets. If the number is the same, they are the same size. If the number of members in one set is greater than the number of members in the other, then that set is larger.
You can't count the members in an infinite set, though, so this method won't work for comparing their sizes. Given two infinite sets, some other way is needed to tell whether one is larger.
The formal notion that Cantor used for comparing sizes of sets is the idea of a one-to-one correspondence. A one-to-one correspondence pairs up the members of one set with the members of another. Sets which can be matched to each other in this sense are said to have the same cardinality. We could pair up the elements of the imaginary set
{television, aardvark, car, 6} with the numbers {1,2,3,4}. It is possible to do this so that one member of each set is paired up with one member of the other, no member is left out, and no member has more than one partner. Then we can be sure that the set {1,2,3,4} has the same number of members as the set {television, aardvark, car, 6}.
one-to-one correspondence:
television <-> 1
aardvark   <-> 2
car        <-> 3
6          <-> 4
So, which is bigger: infinity + x, infinity + infinity, or infinity times infinity? To work this out, Cantor used sets and one-to-one correspondence.
These one-to-one correspondences show that even if we add a constant to every counting number, multiply each by two, or square each one, the two matched sets still have equally many members. Since we never run out of numbers, any such correspondence between two infinite sets of this kind pairs every member with exactly one partner. All these sets therefore have the same cardinality, since their members can be put in a one-to-one correspondence with each other, on and on forever. Such sets are said to be countably infinite, and their cardinality is denoted by the Hebrew letter aleph with a subscript nought, ℵ0.
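As an illustrative sketch (my own addition, not part of the original essay), the pairings described above can be written out in a few lines of Python; each rule below matches every counting number with exactly one partner and leaves nothing out, which is all a one-to-one correspondence requires. The shift by 5 is just an arbitrary example constant.

    # A minimal sketch: pair the first few counting numbers with members of
    # other infinite sets.  Each rule is a one-to-one correspondence, so the
    # sets share the same cardinality (aleph-nought), even though only a
    # finite piece of each pairing can be printed.
    def show_correspondence(label, rule, how_many=8):
        pairs = [(n, rule(n)) for n in range(1, how_many + 1)]
        print(label, pairs)

    show_correspondence("shifted by 5:", lambda n: n + 5)
    show_correspondence("even numbers:", lambda n: 2 * n)
    show_correspondence("square numbers:", lambda n: n * n)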
OTHER INFINITIES
Cantor at first thought that once you start dealing with infinities, everything is the same size. This did not turn out to be the case. Cantor developed an entire theory of transfinite arithmetic, the arithmetic of numbers beyond infinity. Although the sizes of the infinite sets of counting numbers, even numbers, odd numbers, square numbers, etc., are the same, there are other sets, the set of numbers that can be expressed as decimals, for instance, that are larger. Cantor's work revealed that there are hierarchies of ever-larger infinities. The cardinality of the set of all decimal numbers, the real numbers, is called the continuum.
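A sketch of the diagonal idea behind that result (my addition, not from the essay): given any numbered list of decimal expansions, one can build a new decimal that differs from the nth entry in its nth digit, so no list of decimals can ever be complete.

    # A finite illustration of Cantor's diagonal argument.  Change the nth
    # digit of the nth listed decimal; the resulting number differs from
    # every entry in the list.
    listed = [
        "0.500000...",
        "0.333333...",
        "0.142857...",
        "0.718281...",
    ]

    def diagonal_escape(decimals):
        digits = []
        for n, d in enumerate(decimals):
            nth_digit = int(d[2 + n])          # digit n places after "0."
            digits.append(str((nth_digit + 1) % 10))
        return "0." + "".join(digits) + "..."

    print(diagonal_escape(listed))   # not equal to any listed number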
Some mathematicians who lived at the end of the 19th century did not want to accept his work at all. The fact that his results were so paradoxical was not the problem so much as the fact that he considered infinite sets at all. At that time, some mathematicians held that mathematics could only consider objects that could be constructed directly from the counting numbers. You can't list all the elements in an infinite set, they said, so anything that you say about infinite sets is not mathematics. The most powerful of these mathematicians was Leopold Kronecker who even developed a theory of numbers that did not include any negative numbers.
Although Kronecker did not persuade very many of his contemporaries to abandon all conclusions that relied on the existence of negative numbers, Cantor's work was so revolutionary that Kronecker's argument that it "went too far" seemed plausible. Kronecker was a member of the editorial boards of the important mathematical journals of his day, and he used his influence to prevent much of Cantor's work from being published in his lifetime. Cantor did not know at the time of his death that not only would his ideas prevail, but that they would shape the course of 20th century mathematics.
f:\12000 essays\sciences (985)\Math\Gods gift to calculators.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
God's Gift to Calculators: The Taylor Series
It is incredible how far calculators have come since my parents were in
college, which was when the square root key came out. Calculators since then
have evolved into machines that can take natural logarithms, sines, cosines,
arcsines, and so on. The funny thing is that calculators have not gotten any
"smarter" since then. In fact, calculators are still basically limited to the four basic
operations: addition, subtraction, multiplication, and division! So what is it that
allows calculators to evaluate logs, trigonometric functions, and exponents? This
ability is due in large part to the Taylor series, which has allowed mathematicians
(and calculators) to approximate functions, such as those given above, with
polynomials. These polynomials, called Taylor polynomials, are easy for a
calculator to manipulate because the calculator uses only the four basic arithmetic
operators.
So how do mathematicians take a function and turn it into a polynomial
function? Let's find out. First, let's assume that we have a function of the form
y = f(x) whose graph we can picture.
We'll start out trying to approximate function values near x = 0. To do this
we start with the lowest order polynomial, f_0(x) = a_0, that passes through the
y-intercept of the graph, (0, f(0)). So a_0 = f(0).
Next, we see that the graph of f_1(x) = a_0 + a_1 x will also pass through the
point (0, f(0)), and will have the same slope as f(x) there if we let a_1 = f'(0).
Now, if we want to get a better polynomial approximation for this function,
which we do of course, we must make a few generalizations. First, we let the
polynomial f_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n approximate f(x) near x = 0, and let
this function's first n derivatives match the derivatives of f(x) at x = 0.
So if we want to make the derivatives of f_n(x) equal to those of f(x) at x = 0, we have to
choose the coefficients a_0 through a_n properly. How do we do this? We'll write
down the polynomial and its derivatives as follows.
f_n(x)      = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_n x^n
f_n'(x)     = a_1 + 2 a_2 x + 3 a_3 x^2 + ... + n a_n x^(n-1)
f_n''(x)    = 2 a_2 + 6 a_3 x + ... + n(n-1) a_n x^(n-2)
.
.
f_n^(n)(x)  = (n!) a_n
Next we will substitute 0 in for x above, so that
a_0 = f(0),  a_1 = f'(0),  a_2 = f''(0)/2!,  ...,  a_n = f^(n)(0)/n!
Now we have an equation whose first n derivatives match those of f(x) at
x=0.
f_n(x) = f(0) + f'(0) x + f''(0) x^2/2! + ... + f^(n)(0) x^n/n!
This equation is called the nth degree Taylor polynomial at x=0.
Furthermore, we can generalize this equation for x=a instead of just
approximating about 0.
f_n(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2! + ... + f^(n)(a)(x-a)^n/n!
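As a brief sketch (my own addition, not part of the original essay), the formula above can be carried out in a few lines of Python and compared against the library functions; the function names and the test point 0.5 are illustrative only.

    import math

    # Minimal sketch of an nth-degree Taylor polynomial about x = a, built from
    # a list of derivative values [f(a), f'(a), f''(a), ...].
    def taylor(derivs_at_a, a, x):
        return sum(d * (x - a) ** k / math.factorial(k)
                   for k, d in enumerate(derivs_at_a))

    # e^x about 0: every derivative is 1, so the coefficients are 1/k!.
    print(taylor([1.0] * 8, 0.0, 0.5), math.exp(0.5))

    # sin(x) about 0: the derivatives cycle through 0, 1, 0, -1, ...
    print(taylor([0.0, 1.0, 0.0, -1.0] * 2, 0.0, 0.5), math.sin(0.5))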
So now we know the foundation by which mathematicians are able to
design calculators to evaluate functions like sine and cosine so that we do not
have to rely on a table of values like they did in days past. In addition to the
knowledge of how calculators approximate values of transcendental functions, we
can also see the applications of Taylor series in physics studies. These series
appear in mathematical descriptions of vibrating strings, heat flow, transmission
of electrical current, and motion of a simple pendulum.
f:\12000 essays\sciences (985)\Math\Leonhard Euler 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-Leonhard Euler-
Leonhard Euler (born April 15, 1707; died Sept. 18, 1783) was the most prolific mathematician in history. His 866 books and articles represent about one third of the entire body of research on mathematics, theoretical physics, and engineering mechanics published between 1726 and 1800. In pure mathematics, he integrated Leibniz's differential calculus and Newton's method of fluxions into mathematical analysis; refined the notion of a function; brought into common use many mathematical notations, including e, i, the pi symbol, and the sigma symbol; and laid the foundation for the theory of special functions, introducing the beta and gamma transcendental functions. He also worked on the origins of the calculus of variations, but withheld his work in deference to J. L. Lagrange. He was a pioneer in the field of topology and made number theory into a science, stating the prime number theorem and the law of biquadratic reciprocity. In physics he articulated Newtonian dynamics and laid the foundation of analytical mechanics, especially in his Theory of the Motions of Rigid Bodies (1765). Like his teacher Johann Bernoulli, he elaborated continuum mechanics, but he also set forth the kinetic theory of gases with the molecular model. With Alexis Clairaut he studied lunar theory. He also did fundamental research on elasticity, acoustics, the wave theory of light, and the hydromechanics of ships.
Euler was born in Basel, Switzerland. His father, a pastor, wanted his son to follow in his footsteps and sent him to the University of Basel to prepare for the ministry, but geometry soon became his favorite subject. Through the intercession of Bernoulli, Euler obtained his father's consent to change his major to mathematics. After failing to obtain a physics position at Basel in 1726, he joined the St. Petersburg Academy of Science in 1727. When funds were withheld from the academy, he served as a medical lieutenant in the Russian navy from 1727 to 1730. In St. Petersburg he boarded at the home of Bernoulli's son Daniel. He became professor of physics at the academy in 1730 and professor of mathematics in 1733, when he married and left Bernoulli's house. His reputation grew after the publication of many articles and his book Mechanica (1736-37), which extensively presented Newtonian dynamics in the form of mathematical analysis for the first time.
In 1741, Euler joined the Berlin Academy of Science, where he remained for 25 years. In 1744 he became director of the academy's mathematics section. During his stay in Berlin, he wrote over 200 articles, three books on mathematical analysis, and a scientific popularization, Letters to a Princess of Germany (3 vols., 1768-72). In 1755 he was elected a foreign member of the Paris Academy of Science; during his career he received 12 of its prestigious biennial prizes.
In 1766, Euler returned to Russia, after Catherine the Great had made him a generous offer. At the time, Euler had been having differences with Frederick the Great over academic freedom and other matters. Frederick was greatly angered at his departure and invited Lagrange to replace him. In Russia, Euler became almost entirely blind after a cataract operation, but was able to continue with his research and writing. He had a phenomenal memory and was able to dictate treatises on optics, algebra, and lunar motion. At his death in 1783, he left a vast backlog of articles. The St. Petersburg Academy continued to publish them for nearly 50 more years.
f:\12000 essays\sciences (985)\Math\Leonhard Euler.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Euler, Leonhard (1707-83), Swiss mathematician, whose major work was done in the field of pure mathematics, a field that he helped to found. Euler was born in Basel and studied at the University of Basel under the Swiss mathematician Johann Bernoulli, obtaining his master's degree at the age of 16. In 1727, at the invitation of Catherine I, empress of Russia, Euler became a member of the faculty of the Academy of Sciences in Saint Petersburg. He was appointed professor of physics in 1730 and professor of mathematics in 1733. In 1741 he became professor of mathematics at the Berlin Academy of Sciences at the urging of the Prussian king Frederick the Great. Euler returned to St. Petersburg in 1766, remaining there until his death. Although hampered from his late 20s by partial loss of vision and in later life by almost total blindness, Euler produced a number of important mathematical works and hundreds of mathematical and scientific memoirs.
In his Introduction to the Analysis of Infinities (1748; trans. 1748), Euler gave the first full analytical treatment of algebra, the theory of equations, trigonometry, and analytical geometry. In this work he treated the series expansion of functions and formulated the rule that only convergent infinite series can properly be evaluated. He also discussed three-dimensional surfaces and proved that the conic sections are represented by the general equation of the second degree in two dimensions. Other works dealt with calculus, including the calculus of variations, number theory, imaginary numbers, and determinate and indeterminate algebra. Euler, although principally a mathematician, made contributions to astronomy, mechanics, optics, and acoustics.
f:\12000 essays\sciences (985)\Math\Math is Very Important.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Math has been used for centuries. Without it, we couldn't know how to
calculate, save, change, and spend money. We also wouldn't know how much gas we need to fly, drive, or boat to our destinations. Also, we need math to
know how pro teams are doing on TV and how good their chances of winning are.
One of the most popular things that people use their math for is
calculating, saving, changing, and earning their money. Some people have
someone else do it for them, and other people do it themselves.
Gas is one of the most important things that we need to go from one
place to another. Whether it is by land, sea, air, or space, you need to know
how far your gas will take you and how long it will take.
Sports broadcasters use math a lot in their daily broadcasts when they give
batting averages, shooting averages, and so on. They do that so that you can
have an idea of how good the players are.
Math is used in your daily life whether you like it or not. You use
math with money, gas, and sports TV. You can't live without it.
f:\12000 essays\sciences (985)\Math\Math.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Problem Solving
There are many different strategies that good problem solvers use to solve a
problem. Before using a strategy, you must remember a few things. First, take your time.
Few good problem solvers solve problems fast. Second, don't give up. You will never
solve a problem if you don't try. Last, be flexible. If at first you don't succeed, try another
way. And if the second way doesn't work, try a third way.
There are a few steps to solving a problem that you should follow. First, read the
problem very carefully. Try to understand every word and make sure you know what the
problem is asking. If you don't know the meaning of a word, look it up in a dictionary.
Second, sort out information that is not needed. Third, devise a plan. Even guesses have
to be planned out. Arrange information in tables, draw pictures, and compare the
information to another problem you know of. Fourth, carry out the plan. Attempt to solve
it and work with care. If the attempt doesn't work then go back and read the problem
again. Last, check your work carefully. Don't check by repeating the problem; estimate or
find another way to solve the problem.
You can understand what the problem means yet still not be able to solve it
immediately. One good way to help you solve the problem is to draw a picture. One
example of this strategy: suppose you received a problem asking how many
diagonals a heptagon has. The plan is very obvious. Draw a heptagon and then draw its
diagonals; counting them gives 14.
Another strategy is trial and error. Trial and error is a problem solving strategy
that everybody uses at one time or another. In trial and error, you try an answer. If the
answer is an error, you try something else. You keep trying until you get the correct
answer. This is a good strategy to use if the problem only has a few possible answers.
Another good problem solving strategy is to make a table. Sometimes it helps a
lot to make one. A table is just an arrangement of rows with columns. Making a table is
just one way of organizing information that you know. Below is an example of a table:
Number of minutes talked    Cost of phone call
          1                        .25
          2                        .43
          3                        .61
          4                        .79
          5                        .97
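The pattern in this illustrative table is that the first minute costs .25 and each additional minute adds .18. A short sketch (my addition) of how the rows could be generated:

    # Sketch: rebuild the table above, assuming the first minute costs 0.25
    # and each extra minute adds 0.18 (the pattern visible in the rows).
    for minutes in range(1, 6):
        cost = 0.25 + 0.18 * (minutes - 1)
        print(minutes, round(cost, 2))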
Another problem solving strategy is to use a special case. A special case is an
instance of a pattern used for some definite purpose. Special cases help to test whether a
property or generalization is true. The only thing wrong with this strategy is that even if
several special cases of a pattern are true, the pattern may not always be true.
A strategy called try simpler numbers can be used to devise a plan, to solve
problems, and to check work. For this reason, it is a powerful strategy.
There certainly are many different problem solving strategies a problem solver can
use to solve the problem.
1. Mandy is saving to buy a present for her parents' anniversary. She has $10.00
and she adds $5.00 a week.
a. How much will she save in 12 weeks?
b. How much will she save in w weeks?
Answer:
a. $70.00
b. $10.00 + (w * $5.00)
2. Nine teams are to play each other in a tournament. How many games are needed?
Answer: 36
To solve this problem I made a table
1-2  2-3  3-4  4-5  5-6  6-7  7-8  8-9
1-3  2-4  3-5  4-6  5-7  6-8  7-9
1-4  2-5  3-6  4-7  5-8  6-9
1-5  2-6  3-7  4-8  5-9
1-6  2-7  3-8  4-9
1-7  2-8  3-9
1-8  2-9
1-9
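A small sketch (my addition) that counts the same pairings directly; with 9 teams each distinct pair plays once, so the count is 9 * 8 / 2 = 36 games.

    from itertools import combinations

    # Each distinct pair of the 9 teams plays exactly one game,
    # so the number of games is C(9, 2) = 36.
    teams = range(1, 10)
    games = list(combinations(teams, 2))
    print(len(games))   # 36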
3. Bill is older than Becky. Becky is younger than Bob. Bob is older than Barbara.
Barbara is older than Bill. Who is the second oldest?
Answer: Barbara
To solve this problem I drew a picture, listing them from youngest to oldest:
Becky, Bill, Barbara, Bob
f:\12000 essays\sciences (985)\Math\My interview with Einstein.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
0/15/96 ????? = your last name
My interview with Einstein
Me: Hello Mr. Einstein.
Einstein: Hello Mr. ????? (in a strong German accent).
Me: I heard that you absolutely sucked at math...is that true???
Einstein: Well, when I was a child I constantly failed my math classes. However,
after I got into higher level mathematics I found it easier...I still don't do incredibly
well in math, however (in a strong German accent).
Me: ok...enough small talk...now for the big question.
Einstein: What would that be??? (in a strong German accent).
Me: What the hell does E=MC² mean!!!??
Einstein: Well...here's how it goes. E represents Energy, M represents Mass, and
C represents the speed of light. So E=MC² means Energy = Mass * the
speed of light to the second power (or squared). (in a strong German accent).
Me: I wish I could say I understand it, but now I only know what it represents.
Einstein: Well, that equation is very theoretical. Unfortunately I cannot test it out to
see if I am correct (in a strong German accent).
Me: Why?
Einstein: Because I can't travel at the speed of light. (in a strong German accent).
Me: What does that have to do with it???
Einstein: Have you listened to a word I said??? (in a strong German accent).
Me: Of course (looking very suspicious and guilty).
Einstein: Yeah...I'm sure you were (in a strong German accent).
Me: Oh, look at the time...I better get going (looking at a wrist with no watch on it).
Einstein: Please don't go! It is very lonely being dead (in a strong German accent).
Me: Well listen you have your people call my people and we'll talk...ok???
Einstein: Ok, ok...bye (in a strong German accent).
f:\12000 essays\sciences (985)\Math\Proportions of Numbers and Magnitudes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Proportions of Numbers and Magnitudes
In the Elements, Euclid devotes a book to magnitudes (Book Five), and he devotes a book to numbers (Book Seven). Both magnitudes and numbers represent quantity; however, magnitude is continuous while number is discrete. That is, numbers are composed of units which can be used to divide the whole, while magnitudes cannot be distinguished as parts from a whole; therefore, numbers can be more accurately compared because there is a standard unit representing one of something. Numbers allow for measurement and degrees of ordinal position through which one can better compare quantity. In short, magnitudes tell you how much there is, and numbers tell you how many there are. This is the cause of differences in how they are compared.
Euclid's definition five in Book Five of the Elements states that " Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth, when, if any equimultiples whatever be taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order." From this it follows that magnitudes in the same ratio are proportional. Thus, we can use the following algebraic proportion to represent definition 5.5:
(m)a : (n)b :: (m)c : (n)d.
However, it is necessary to be more specific because of the way in which the definition was worded, with the phrase "the former equimultiples alike exceed, are alike equal to, or alike fall short of....". Thus, if we take any four magnitudes a, b, c, d, it is defined that if equimultiple m is taken of a and c, and equimultiple n is taken of b and d, then a and b are in the same ratio with c and d, that is, a : b :: c : d, only if:
(m)a > (n)b and (m)c > (n)d, or
(m)a = (n)b and (m)c = (n)d, or
(m)a < (n)b and (m)c < (n)d.
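As an illustrative sketch (my addition, and only a numerical stand-in for Euclid's magnitudes), one can test the condition by trying many pairs of multipliers m and n and checking that the two comparisons always agree:

    # Sketch: check Definition 5.5 numerically for sample magnitudes.
    # a : b and c : d are in the same ratio when, for every pair of
    # multipliers (m, n), m*a compares to n*b exactly as m*c compares to n*d.
    def same_ratio(a, b, c, d, limit=50):
        for m in range(1, limit + 1):
            for n in range(1, limit + 1):
                left = (m * a > n * b) - (m * a < n * b)     # -1, 0, or +1
                right = (m * c > n * d) - (m * c < n * d)
                if left != right:
                    return False
        return True

    print(same_ratio(2, 3, 4, 6))                       # True:  2 : 3 :: 4 : 6
    print(same_ratio(2, 3, 5, 6))                       # False
    print(same_ratio(1, 2 ** 0.5, 3, 3 * 2 ** 0.5))     # True (within float limits)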
However, because magnitudes are continuous quantities, and an exact measurement of magnitudes is impossible, it is not possible to say by how much one exceeds the other, nor is it possible to determine whether a exceeds b by the same amount that c exceeds d.
Now, it is important to realize that taking equimultiples is not a test to see if magnitudes are in the same ratio, but rather it is a condition that defines it. And because of the phrase "any equimultiples whatever," it would be correct to say that if a and b are in the same ratio with c and d, then for m and n being "any equimultiples whatever," one of the three instances above is true. Likewise, as stated in Proposition 4, the corresponding equimultiples are also in proportion. It would be incorrect, however, to say that equimultiples are taken of the original magnitudes to show that they are in the same ratio. The two instances coexist. Furthermore, if there is any one possibility of taking "any equimultiple whatever" and not having any one of the above three instances come true, then the instance is not that of same ratio, but rather that of greater or lesser ratio, as is stated in Definition 7, Book 5.
In Book Seven, number replaces magnitude as the substance of ratios and proportions. A number is a multitude composed of units. Definition 20 states that "Numbers are proportional when the first is the same multiple, or the same part, or the same parts, of the second that the third is of the fourth." Thus, there are three instances of numerical proportions:
same multiple- 18 : 6 :: 6 : 2
same part- 2 : 4 :: 4 : 8
same parts- 5 : 6 :: 15 : 18.
Compared to the definition of proportion in Book 5, this one is much less complex and more easily comprehended because using numbers is more exact and concrete. First of all, there is no taking of equimultiples of the antecedents and consequents of two ratios. This is because the taking of equimultiples is a necessary condition when it is only possible to say that one magnitude is greater than, less than, or equal to another. With numbers, however, there is a more specific relationship. Two is less than five by three units. It is possible to state by how many, which makes the comparison precise. For instance, in the example above of "same multiples," one can see that eighteen is three multiples of six and that six is three multiples of two. Thus, the phrase "..... alike exceeding, alike equal to, or alike falls short of..." is replaced with "......same multiple, same part, or same parts...."
Numbers are representations of magnitude. They are more easily compared, but the proportion of numbers is fundamentally the same as that of magnitudes, since a proportion is generally a similarity between ratios. A proportion of numbers is therefore included in the proportion of magnitudes as a specific case.
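A small sketch (my addition) of the numerical case: for whole numbers, a : b :: c : d holds exactly when the cross-products agree, and the three examples above all pass that test.

    # Sketch: for whole numbers, the proportion a : b :: c : d of Definition 20
    # can be checked by cross-multiplication, a * d == b * c.
    examples = [
        (18, 6, 6, 2),     # same multiple
        (2, 4, 4, 8),      # same part
        (5, 6, 15, 18),    # same parts
    ]
    for a, b, c, d in examples:
        print(a, b, c, d, a * d == b * c)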
f:\12000 essays\sciences (985)\Math\Pual Adrien Maurice Dirac.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Patrick Ennis
Mrs. Carter
Research
Monday, December 9, 1996
Paul Adrien Maurice Dirac
"Physical Laws should have mathematical beauty." This statement was Dirac's response to the
question of his philosophy of physics, posed to him in Moscow in 1955. He wrote it on a
blackboard that is still preserved today.[1]
Paul Adrien Maurice Dirac (1902-1984), known as P. A. M. Dirac, was the fifteenth
Lucasian Professor of Mathematics at Cambridge. He shared the Nobel Prize for Physics in 1933
with Erwin Schrodinger.[2] He is considered to be one of the founders of quantum mechanics, providing
the transition from the old quantum theory. The Cambridge Philosophical Society awarded him the
Hopkins Medal in 1930. He was awarded the Royal Medal by the Royal Society of London in
1939 and the James Scott Prize from the Royal Society of Edinburgh. In 1952 the Max Planck
Medal came from the Association of German Physical Societies, as well as the Copley Medal
from the Royal Society. The Akademie der Wissenschaften in the German Democratic Republic
presented him with the Helmholtz Medal in 1964. In 1969 he received the Oppenheimer Prize
from the University of Miami. Lastly in 1973, he received the Order of Merit.[3]
Dirac was well known for his almost antisocial behavior, but he was a member of many
scientific organizations throughout the world. Naturally, he was a member of the Royal Society,
but he was also a member of the Deutsche Akademie der Naturforsher and the Pontifical
Academy of Sciences. He was a foreign member of Academie des Sciences Morales et
Politiques and the Academie des Sciences, the Accademia delle Scienze Torino and the
Accademia Nazionale dei Lincei and the National Academy of Science. He was an honorary
member and fellow of the Indian Academy of Science, the Chinese Physical Society, the Royal
Irish Academy, the Royal Society of Edinburgh, the National Institute of Sciences in India, the
American Physical Society, the Tata Institute for Fundamental Research in India, the Royal
Danish Academy, and the Hungarian Academy of Sciences. He was a corresponding member of
the USSR Academy of Sciences.[4] The worldwide respect he earned for his work was well
deserved.
A prolific writer, Dirac published over two hundred works between 1924 and 1987,
mainly papers in physics journals on topics relating to quantum mechanics. His book Principles
of Quantum Mechanics , published in 1930, was the first textbook in the discipline and became
the standard.[5] Some predictions made by Dirac are still untested because his theoretical work
was so far reaching, but many other predictions have been verified, assuring him of a special
place in the history of physics.[6]
Dirac was three years old when Einstein published his famous papers on relativity in
1905 and a year old when his predecessor Joseph Larmor began his tenure as Lucasian professor.
Physics had just begun its incredible transformation of the twentieth century when Dirac arrived
on the scene.
Dirac came to Cambridge as a graduate student in 1923 after graduating from the
University of Bristol. As a student in mathematics in St. John's College, he took his Ph.D. in
1926 and was elected in 1927 as a fellow. His appointment as university lecturer came in
1929.[7] He assumed the Lucasian professorship following Joseph Larmor in 1932 and retired
from it in 1969. Two years later he accepted a position at Florida State University where he lived
out his remaining years. The FSU library now carries his name. [8]
While at Cambridge, Dirac did not accept many research students. Those who worked
with him generally thought he was a good supervisor, but one who did not spend much time with
his students. A student needed to be extremely independent to work under Dirac.[9] One such
student was Dennis Sciama, who later became the supervisor of Stephen Hawking, the current
holder of the Lucasian Chair. Dirac's lectures were attended by Sir M. J. Lighthill while he was a
student at Cambridge and Lighthill was Dirac's successor to the Lucasian Chair.
Dirac offered the first course in quantum mechanics in Britain, entitled Quantum Theory (Recent
Developments) . Among his students was J. R. Oppenheimer, an American, who later on was in
charge of the Manhattan Project, which created the first atomic bomb.[10]
Dirac's work should be understood in the context of the development of quantum physics.
The theoretical work had been underway for several years before his entry into the field. It was
plagued with difficulties, in part because of the radical change in the way one thought about the
world around us, and in part because it was a difficult problem. The important developments of
the beginning of this century were carried out by Max Planck, Max Born, Niels Bohr, Albert
Einstein, Werner Heisenberg, Erwin Schrodinger, and Wolfgang Pauli. Quantum mechanics was
brought to life during the few short years of 1925 through 1927 by most of these men.[11]
Dirac was the first to apply quantum mechanics to an electromagnetic field, using the
method of second quantization. This work contained the basis for quantum field theory,[12]
which Dirac called quantum electrodynamics.[13] The singular delta function was invented by
Dirac in order to prove two problems were equivalent. He was working with the problems of
"diagnolizing the energy matrix in the Born--Heisenberg-Jordan theory" and "finding the energy
eigenvalues of Schrodinger's wave equation."[14] The delta function is now used in many
different areas of mathematics and physics and is considered basic. In 1926 he derived
Balmer-spectrum energy levels of the hydrogen atom. He was the first to derive the Lorentzian
shape of spectral lines using quantum mechanics. He introduced the terms bra and ket from the
word bracket to denote the use of parts of the bracket. The half brackets were for state vectors
and their eigenvalues. One of his major breakthroughs was the use of an algebraic version of
quantum mechanics based on Poisson brackets.
Dirac's life was dedicated to physics with no interests outside of his work, but, besides
quantum mechanics, he did work on isotope separation, magnetic monopoles, large-number
hypothesis and other physics areas. The large-number hypothesis was based on Dirac's belief that
very large constants should not exist in nature. Somehow these large constants that did exist
were a consequence of the age of the universe.[15] One of the interesting implications of his
work that predicted the positron was the prediction of a magnetic monopole. It is common
knowledge that a magnet has a north and a south pole, where opposites attract and sameness
repels. The idea that a pole could exist in isolation is quite foreign. Although theory predicts its
existence, none has ever been found. His work in isotope separation was a step from his
theoretical world into the world of experimental physics. He had done some work in the 1930s,
but stopped when his colleague, Peter Kapitza, found himself unable to leave the Soviet Union,
because Stalin had revoked the necessary exit permit.[16]
In the 1940s the war effort dragged Dirac back into isotope separation. A group at Oxford was
looking for an efficient means to do it. Dirac's method worked, but it was not considered the
most cost effective. However, he did continue to contribute to the effort, and even wrote a report
on the statistical method of isotope separation that contained concepts still used today.[17]
Dirac's views on religion were very restricted. He seemed to have believed that nothing was as
important as his physics. Heisenberg related a story of an exchange between Dirac and Wolfgang
Pauli where Dirac expressed his agnostic views. Pauli responded with "Dirac has a new religion.
There is no God and Dirac is his prophet."[18] Dirac was a member of the Pontifical Academy of
Sciences at the Vatican, having written many papers for them. He was not anti-religious. His
wife maintained that he was deeply religious, but he showed no evidence of it.[19]
f:\12000 essays\sciences (985)\Math\Pythagorean Theorem.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Trigonometry uses the fact that ratios of pairs of sides of triangles are functions of the angles. The basis for mensuration of triangles is the right-angled triangle. The term trigonometry means literally the measurement of triangles. Trigonometry is a branch of mathematics that developed from simple measurements.
The Pythagorean Theorem is perhaps the most important result in all of elementary mathematics. It was the motivation for a wealth of advanced mathematics, such as Fermat's Last Theorem and the theory of Hilbert space. The theorem asserts that for a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. There are many ways to prove it. A particularly simple one uses the scaling relationship for areas of similar figures.
Did Pythagoras derive the Pythagorean Theorem or did he piece it together by studying ancient cultures; Egypt, Mesopotamia, India and China? What did these ancient cultures know about the theorem? Where was the theorem used in their societies? In "Geometry and Algebra in Ancient Civilizations", the author discusses who originally derived the Pythagorean Theorem. He quotes Proclos, a commentator of Euclid's elements, "if we listen to those who wish to recount the ancient history we may find some who refer this theorem to Pythagoras, and say that he sacrificed an ox in honor of his discovery". If this statement is considered as a statement of fact, it is extremely improbable, for Pythagoras was opposed to the sacrifice of animals, especially cattle. If the saying is considered as just a legend, it is easy to explain how such a legend might have come into existence. Perhaps the original form of the legend said something like he who discovered the famous figure sacrificed a bull in honor of his discovery.
Van der Waerden goes on to comment that he believes the original discoverer was a priest, before the time of Babylonian texts, who was allowed to sacrifice animals and also was a mathematician. This question can never be answered, but evidence that societies used the theorem before the time of Pythagoras can be found.
The Theorem is useful in everyday life. For example, at a certain time of day, the sun's rays cast a three foot shadow off a four foot flag pole. Knowing these two lengths, and the fact that the pole forms a ninety degree angle with the ground, the distance from the end of the shadow to the top of the pole can be found without measuring. The first step is to substitute the given data into the formula. You can then find the length of the third side, which is five feet. Trigonometry is basically the study of the relationship between the sides and the angles of right triangles. Knowing how to use these relationships and ratios is absolutely necessary for almost everything. It might not seem like it, but trigonometry is used almost everywhere.
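A short sketch of the flagpole example (my addition), using a^2 + b^2 = c^2 to recover the five-foot distance:

    import math

    # Flagpole example: a 4-foot pole casts a 3-foot shadow at a right angle,
    # so the distance from the shadow's tip to the top of the pole is the
    # hypotenuse of a 3-4-5 right triangle.
    shadow, pole = 3.0, 4.0
    distance = math.sqrt(shadow ** 2 + pole ** 2)
    print(distance)   # 5.0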
Another example of the importance of the theorem is the world orb symbol, which depicts engineering studies. Although there are many parts to this symbol, the Pythagorean theorem is appropriately at the center, since much of engineering, mensuration, logarithms etc., are based on trigonometric functions.
f:\12000 essays\sciences (985)\Math\SAT Scores vs Acceptance Rates.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The experiment must fulfill two goals: (1) to produce a professional report of your experiment, and (2) to show your understanding of the topics related to least squares regression as described in Moore & McCabe, Chapter 2. In this experiment, I will determine whether or not there is a relationship between average SAT scores of incoming freshmen versus the acceptance rate of applicants at top universities in the country. The cases being used are 12 of the very best universities in the country according to US News & World Report. The average SAT scores of incoming freshmen are the explanatory variables. The response variable is the acceptance rate of the universities.
I used the September 16, 1996 issue of US News & World Report as my source. I started out by choosing the top fourteen "Best National Universities". Next, I graphed the fourteen schools using a scatterplot and decided to cut the sample down to 12 universities by throwing out odd data.
A scatterplot of the 12 universities data is on the following page (page 2)
The linear regression equation is:
ACCEPTANCE = 212.5 - .134 * SAT_SCORE
R= -.632 R^2=.399
I plugged the data into my calculator and did the various regressions. I saw that the power regression had the best correlation of the non-linear transformations.
A scatterplot of the transformation can be seen on page 4.
The Power Regression Equation is
ACCEPTANCE RATE=(2.475x10^23)(SAT SCORE)^-7.002
R= -.683 R^2=.466
The power regression seems to be the better model for the experiment that I have chosen. There is a higher correlation in the power transformation than there is in the linear regression model. The R for the linear model is -.632 and the R in the power transformation is -.683. Based on R^2, which measures the fraction of the variation in the values of y that is explained by the least-squares regression of y on x, the power transformation model has a higher R^2, .466 compared to .399. The residual plot for the linear regression is on page 5 and the residual plot for the power regression is on page 6. The two residual plots seem very similar to one another and no helpful observations can be drawn from them. The outliers in both models were not a factor in choosing the best model. In both models, there was one distinct outlier which appeared in the graphs.
The one outlier in both models was the University of Chicago. It had an unusually high acceptance rate among the universities in this experiment. This school is very good academically, which means the average SAT score of incoming freshmen is fairly high. The school does not receive as many applicants as the others, due in part to the many factors besides academics that lead applicants to choose other schools over the University of Chicago. Although the number of applicants is relatively low, most of these applicants are very qualified, which results in the school having a high acceptance rate.
Rate = A*(SAT)^(B)
A=2.475x10^23
B=-7.002
From the model I have chosen, I predicted what the acceptance rate for a school would be if the average SAT score was a perfect 1600.
SAT = 1600
Rate = A*(SAT)^B = (2.475x10^23) *(1600)^(-7.002) = 9.1%
From the equation found, we have determined this "university" would have an acceptance rate of only 9.1%. This seems like a good prediction because such a school would have a very low acceptance rate compared to the other top universities. I believe causation does occur in this experiment: with a higher average SAT score among admitted applicants, it would be harder to be admitted into that school. However, I think the equation found is not very accurate when predicting far away from the median.
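A brief sketch of this prediction (my addition), using the fitted constants A and B reported above:

    # Sketch: predicted acceptance rate (in percent) from the fitted power
    # model Rate = A * SAT^B, evaluated at a perfect average SAT of 1600.
    A = 2.475e23
    B = -7.002
    rate = A * 1600 ** B
    print(round(rate, 1))   # about 9.1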
I do not believe there would be any sources of error in collecting the data. All the data was taken from the magazine US News & World Report. I strictly took twelve of the top 14 universities based on this magazine. I believe some lurking variables may be the type of school, the majors offered, and the number of applicants. The number of applicants a school has would have some effect on its acceptance rate. If a school had an enormous number of applicants, then it would have a relatively low acceptance rate. One reason I think this experiment had a somewhat poor association is because of the schools selected. Two of these schools were technical schools, which meant only certain applicants would want to apply to them, while the other schools were more general overall.
In conclusion, the variables used in this experiment had a moderate association with one another. The higher the average SAT score of the incoming freshmen, the lower the school's acceptance rate is likely to be.
f:\12000 essays\sciences (985)\Math\Solving and Checking Equations.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Solving And Checking Equations
In math there are many different types of equations to solve
and check. Some of them are easy and some are hard but all of
them have some steps that need to be followed. To solve the
problem 2(7x-4)-4(2x-6)=3x+31 you must follow many steps. The
first thing you will do is use the distributive property to take away
the parentheses. When you use the distributive property, your
equation will be 14x-8-8x+24=3x+31. Then you have to combine
like terms. Now that you've combined, your equation will be
6x+16=3x+31. The next step is to subtract 3x from both sides. Now
your equation will be 3x+16=31. The next step is to subtract 16
from both sides. Your equation has been reduced to 3x=15. The
last step is to divide both sides by 3 and your answer is x=5.
There are also many steps needed to check a problem. First,
you rewrite the problem. Under that you write the problem
substituting all the x's with 5. Next, you evaluate the problem left
of the equal sign. Then you evaluate the right of the equal sign. If
the answers are both the same, it means you solved the problem
right so you put a check mark next to it.
That is how you solve and check this type of equation.
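A small sketch (my addition) of the same solve-and-check steps, carried out here with the sympy library; the equation is the one worked through above.

    from sympy import Eq, solve, symbols

    # Solve 2(7x - 4) - 4(2x - 6) = 3x + 31, then check the answer by
    # substituting it back into both sides of the equation.
    x = symbols("x")
    equation = Eq(2 * (7 * x - 4) - 4 * (2 * x - 6), 3 * x + 31)
    solution = solve(equation, x)
    print(solution)                              # [5]
    print(equation.lhs.subs(x, solution[0]))     # 46
    print(equation.rhs.subs(x, solution[0]))     # 46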
f:\12000 essays\sciences (985)\Math\The esscence of Math.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This is my essay on math! If you have two tri to the second power you will not find out the meaning of life.
To cube the meaning of life is to subtract the meaning of love.
f:\12000 essays\sciences (985)\Math\The History of Calculus.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Sir Isaac Newton and Gottfried Wilhelm Leibniz are two of the most supreme intellects of the 17th
century. They are both considered to be the inventors of Calculus. However, after a terrible dispute, Sir
Isaac Newton took most of the credit.
Gottfried Wilhelm Leibniz (1646-1716) was a German philosopher, mathematician, and statesman
born in the city of Leipzig. He received his education at the universities of Leipzig, Jena, and Altdorf.
He received a doctorate in law. He devoted much of his time to the principle studies of mathematics, science,
and philosophy.
Leibniz's contribution in mathematics came in the year 1675, when he discovered the fundamental
principles of infinitesimal calculus. He arrived at this discovery independently of
the English scientist Sir Isaac Newton, who had arrived at his own version in 1666. However, Leibniz's system was published in 1684, three
years before Newton published his. Also, Leibniz's method of notation, his mathematical
symbols, was adopted universally. He also contributed in 1672 by inventing a calculating machine that was
capable of multiplying, dividing, and extracting square roots. All this led him to be considered a pioneer in
the development of mathematical logic.
Sir Isaac Newton is the other major figure in the development of calculus. He was an English
mathematician and physicist who is considered to be one of the greatest scientists in history. Newton was
born on December 25, 1642 at Woolsthorpe, near Grantham in Lincolnshire. He attended Trinity College, at
the University of Cambridge. He received his bachelor's degree in 1665 and received his master's degree in
1668. However, there he ignored much of the university's established curriculum to pursue his own interests:
mathematics and natural philosophy. Almost immediately, he made fundamental discoveries in both areas.
Newton's discoveries were made up of several different things. They consisted of combining infinite sums,
which are known as infinite series, with the binomial theorem for fractional exponents and the
algebraic expression of the inverse relation between tangents and areas, into methods that we refer to today as
calculus.
However, the story is not that simple. Being that both men were so-called universal geniuses, they
realized that in different ways they were entitled to have the credit for "inventing calculus". Both engaged in
a violent dispute over priority in the invention of calculus. Unfortunately, Newton had the upper hand,
considering that he was the president of the Royal Society. He used this position to select a committee that
would investigate the unsolved question. Apparently, Newton included himself on this committee (illegally)
and submitted a false report that charged Leibniz with deliberate plagiarism. He was also the one who
compiled the book of evidence that the "society" was supposed to publish.
In my opinion, I believe that Leibniz was entitled to the credit more than Newton was. For one,
there is the phrase "first come, first served". I also think that anyone who has to go about getting things in a
scandalous way doesn't deserve any recognition at all. Consequently, because of Newton's sneaky actions he
got the glamour he wanted. For example, when I was doing my research I read where they had distinctly
put Newton before Leibniz by using the phrase "respectively". In conclusion, I believe that over the years
credit has been given to the wrong person.
f:\12000 essays\sciences (985)\Math\the independence of the judiciary .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
a) How is the independence of the judiciary guaranteed in Australia?
While the Westminster system largely developed because of the doctrine of separation of powers, the Australian system of government is largely based on the Westminster system. This doctrine of separation of powers proposes that the three institutions of government, the legislature, the executive and the judiciary, should be exercised as separate and independent branches. It is this doctrine that stresses the need for the independence of the judiciary from the other two government institutions in order to protect the freedom of individuals. It is under this doctrine that no person can be a Member of Parliament and a judge at the same time. The doctrine of separation of powers offers several advantages: it proposes separate, specialized and efficient branches of government, and it also reduces the abuse of government power by dividing it.
b) Why is the independence of the judiciary an important feature of Australia's system of justice?
The judiciary is the government branch that is concerned with the administration of justice. The judiciary is absolutely separate from the executive and the legislature, so it can check the concentration of government power. The independence of the judiciary is crucial to a democratic community because when judges are presiding over cases, there must be no interference or intimidation from external forces. The independence issue touches upon the conflict between authority and freedom. If the doctrine of separation of powers did not exist, the authorities would not be prevented from interfering in the administration of justice, and therefore the basic freedoms of citizens would not be guaranteed. It is up to the judiciary to act according to the law. Without the independence of the judiciary, the principles of the rule of law and natural justice would be in jeopardy and other institutions of government would interfere in the administration of justice.
There are three main elements of the independence of the judiciary: permanency of tenure, dismissal only by parliament, and fixed remuneration. Permanency of tenure means that judges are appointed by the executive government and have a permanent tenure until they have to retire at the age of seventy. It was a constitutional referendum in 1977 that placed this requirement on federal judges. State laws have also been made for state judges to retire at the same age. The only exception is Family Court justices; they have to retire at the age of sixty-five.
Judges can only be dismissed on the grounds of proved misbehaviour or incapacity, and can only be dismissed by parliament. This is a very serious undertaking and has been used in the Australian parliament, but no federal judge has ever been dismissed. The constitution provides that the salary of a judge cannot be reduced. This is to prevent the manipulation of salaries to a low level, which would force judges to retire from the bench and would amount to an indirect interference with the independence of the judicature. However, parliament can increase judges' salaries if it wishes to.
Judges also must not interfere with each other's deliberations and decisions. While judges hear cases, make judgments and administer laws, the doctrine of precedent is so entrenched as a rule of conduct that it is the golden rule for judges to follow legal principles created as precedents in superior courts. Judicial independence is also necessary because a judge cannot hear an appeal from a case that she or he has just presided over, as this would lead to inconsistency in deciding the appeal.
Judges also have personal judicial independence. A law protects them from threats of civil litigation over statements made in their judgments. It is also a criminal offence for a person to interfere with a judge's performance of his or her duties. The rule of law is strictly applied, to acknowledge that everyone has an equal standing before the law and that accepted judicial practices must be followed.
c) Give two examples of how judges must comply with the rule of law.
The doctrine of precedent is a fixed rule of judicial conduct. It is the inferior courts that have the obligation to follow the legal principles created in the superior courts. This is when decisions made in the superior courts become binding precedents on inferior courts, and judges cannot ignore them. For example, if a District Court judge ignores a legal principle made in the Supreme Court, then on appeal it is certain that the decision will be reversed because accepted judicial rules were not followed.
It is the principle of an independent judiciary that conforms to the rule against bias. Everyone expects their justice to be administered by a member of the judiciary who is independent from the legislative and executive powers of the government and completely impartial to the case before them for resolution. Judges are expected to disqualify themselves when they have any financial or other interest in the outcome of a case. This is as fundamental a principle for the application of constitutional law as it is for criminal law. For example, a person who is challenging the legality of legislation at great cost would expect the judge to resolve the case on its merits rather than on the power of the government institution.
f:\12000 essays\sciences (985)\Math\women in math.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Women In Math
Over the past 20 years the number of women in the fields of math, science and engineering has
grown at an astronomical rate. The number of women who hold positions in these fields has more
than doubled. In post-secondary education women are filling up the lecture halls and labs where
in the past it was rare to see a woman at all. If a woman was able to withstand the pressure
that was put on her, it was still more than likely that she wouldn't even be hired.
Many organizations have been established to help young women pursue careers in math,
science and engineering. A few examples of these organizations are AWM (Association for
Women in Mathematics), WISE (Women In Science and Engineering), ASEM (Advocates for
Women in Science, Engineering and Mathematics) and many others.
Many young women do not pursue careers in math for one or more key reasons. One is that they
have no female role models to look up to or any famous females in that field to inspire them.
Another is that they are often discouraged by others, usually family members; "Why don't you
be like your mother and stay home and raise the children?" is a common line used. This is most
likely because the parents don't want to see their daughter go out and fall flat on her face when
she doesn't make it. There is little support from others if a woman wants to go into these fields.
Equal opportunity is also a large factor in this, either as a deciding factor in whether to go into the
field or not, or as cold hard facts. Facts like 90 percent of engineering, math and science positions being
held by men mean that not very many women are hired.
Another reason is that they may be lacking in self-esteem. This could be because
of the discouraging numbers that relate women to math, science and engineering positions, or
because they are struggling in that area. Another factor could be that they fear that hiring
opportunities are very sexist and male-chauvinistic.
Measures have been taken to help ensure that women have an equal if not better chance to pursue a
career in math, science or engineering. Special funds, programs and organizations have been
set up to encourage and assist women to go after the field they wish to work in. With these steps
taken, it helps to get the ball rolling so that women can be pioneers and break into the math,
science and engineering fields and hopefully become role models or heroes for future generations.
f:\12000 essays\sciences (985)\Math\Zeno of Elea.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Zeno of Elea was born in Elea, Italy, in 490 B.C. He died there in 430 B.C., in an
attempt to oust the city's tyrant. He was a noted pupil of Parmenides, from whom he
learned most of his doctrines and political ideas. He believed that what exists is one,
permanent, and unchanging. Zeno argued against multiplicity and motion. He did so by
showing the contradictions that result from assuming that they were real. His argument
against multiplicity stated that if the many exists, it must be both infinitely large and
infinitely small, and it must be both limited and unlimited in number. His argument
against motion is characterized by two famous illustrations: the flying arrow, and the
runner in the race. It is the illustration with the runner that is associated with the first part of
the assignment. In this illustration, Zeno argued that a runner can never reach the end of a
race course. He stated that the runner first completes half of the race course, and then half
of the remaining distance, and will continue to do so for infinity. In this way, the runner
can never reach the end of the course, as it would be infinitely long, much as the semester
would be infinitely long if we completed half, and then half the remainder, ad infinitum.
This interval will shrink infinitely, but never quite disappear. This type of argument may be called the antinomy of infinite divisibility, and was part of the dialectic which Zeno
invented.
These are only a small part of Zeno's arguments, however. He is believed to have
devised at least forty arguments, eight of which have survived until the present. While
these arguments seem simple, they have managed to raise a number of profound
philosophical and scientific questions about space, time, and infinity, throughout history.
These issues still interest philosophers and scientists today.
The problem with both Zeno's argument and yours is that neither deals properly with adding infinitely many terms. Your argument suggests that if one adds infinitely many quantities, the sum must be infinite, which is not the case. If the terms keep shrinking by the same fraction, their infinite sum converges to a definite finite number, not to infinity as both Zeno's argument and yours suggest. A simpler way to explain this is to say that if the first half of the semester takes a certain amount of time, and time always passes at the same rate, then the second half of the semester will also take a certain amount of time, which can be measured.
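One way to make the point concrete is to write out the sum behind the halving argument; here S_N stands for the fraction of the course covered after N halvings (notation added for illustration):

\[
S_N = \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^N} = 1 - \frac{1}{2^N},
\qquad
\lim_{N \to \infty} S_N = \sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1/2}{1 - 1/2} = 1.
\]

Each partial sum falls short of the full course by an ever-smaller amount, so the infinitely many steps add up to a finite total. If the whole course takes time T, the infinitely many stages take times T/2, T/4, T/8, and so on, which again sum to T.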
f:\12000 essays\sciences (985)\Physics\A Technical Analysis of Human Factors and Ergonomics in Moder.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A Technical Analysis of Ergonomics and Human Factors in Modern Flight Deck Design
I. Introduction
Since the dawn of the aviation era, cockpit design has become increasingly complicated owing to the advent of new technologies enabling aircraft to fly farther, faster, and more efficiently than ever before. With greater workloads imposed on pilots as fleets modernize, the reality of a pilot exceeding the workload limit has become manifest. Because of the unpredictable nature of man, this problem is impossible to eliminate completely. However, the frequency of such occurrences can be drastically reduced by examining the nature of man, how he operates in the cockpit, and what must be done by engineers to design a system in which man and machine are ideally interfaced. The latter point involves an in-depth analysis of system design with an emphasis on human factors, biomechanics, cockpit controls, and display systems. By analyzing these components of cockpit design, and determining which variables of each will yield the fewest errors, a system can be designed in which the Liveware-Hardware interface promotes safety and reduces mishap frequency.
II. The History Of Human Factors in Cockpit Design
The history of cockpit design can be traced as far back as the first balloon flights, where a barometer was used to measure altitude. The Wright brothers incorporated a string attached to the aircraft to indicate slips and skids (Hawkins, 241). However, the first real efforts towards human factors implementation in cockpit design began in the early 1930's. During this time, the United States Postal Service began flying aircraft in all-weather missions (Kane, 4:9). The greater reliance on instrumentation raised the question of where to put each display and control. However, not much attention was being focused on this area as engineers cared more about getting the instrument in the cockpit, than about how it would interface with the pilot (Sanders & McCormick, 739).
In the mid- to late 1930s, the development of the first gyroscopic instruments forced engineers to make their first major human factors-related decision. Rudimentary situation indicators raised concern about whether the displays should reflect the view as seen from inside the cockpit, having the horizon move behind a fixed miniature airplane, or as it would be seen from outside the aircraft. Until the end of World War II, aircraft were manufactured using both types of display. This caused confusion among pilots who were familiar with one type of display and were flying an aircraft with the other. Several safety violations were observed because of this, none of which were fatal (Fitts, 20-21).
Shortly after World War II, aircraft cockpits were standardized to the 'six-pack' configuration. This was a collection of the six critical flight instruments arranged in two rows of three directly in front of the pilot. In clockwise order from the upper left, they were the airspeed indicator, artificial horizon, altimeter, turn coordinator, heading indicator and vertical speed indicator. This arrangement of instruments provided easy transition training for pilots going from one aircraft to another. In addition, instrument scanning was enhanced, because the instruments were strategically placed so the pilot could reference each instrument against the artificial horizon in a hub and spoke method (Fitts, 26-30).
Since then, the bulk of human interfacing with cockpit development has been largely due to technological achievements. The dramatic increase in the complexity of aircraft after the dawn of the jet age brought with it a greater need than ever for automation that exceeded a simple autopilot. Human factors studies in other industries, and within the military paved the way for some of the most recent technological innovations such as the glass cockpit, Heads Up Display (HUD), and other advanced panel displays. Although these systems are on the cutting edge of technology, they too are susceptible to design problems, some of which are responsible for the incidents and accidents mentioned earlier. They will be discussed in further detail in another chapter (Hawkins, 249-54).
III. System Design
A design team should support the concept that the pilot's interface with the system, including task needs, decision needs, feedback requirements, and responsibilities, must be primary considerations for defining the system's functions and logic, as opposed to the system concept coming first and the user interface coming later, after the system's functionality is fully defined. There are numerous examples where application of human-centered design principles and processes could be better applied to improve the design process and final product. Although manufacturers utilize human factors specialists to varying degrees, they are typically brought into the design effort in limited roles or late in the process, after the operational and functional requirements have been defined (Sanders & McCormick, 727-8). When joining the design process late, the ability of the human factors specialist to influence the final design and facilitate incorporation of human-centered design principles is severely compromised. Human factors should be considered on par with other disciplines involved in the design process.
The design process can be seen as a six-step process; determining the objectives and performance specifications, defining the system, basic system design, interface design, facilitator design, and testing and evaluation of the system. This model is theoretical, and few design systems actually meet its performance objectives. Each step directly involves input from human factors data, and incorporates it in the design philosophy (Bailey, 192-5).
Determining the objectives and performance specifications includes defining a fundamental purpose of the system, and evaluating what the system must do to achieve that purpose. This also includes identifying the intended users of the system and what skills those operators will have. Fundamentally, this first step addresses a broad definition of what activity-based needs the system must address. The second step, definition of the system, determines the functions the system must do to achieve the performance specifications (unlike the broader purpose-based evaluation in the first step). Here, the human factors specialists will ensure that functions match the needs of the operator. During this step, functional flow diagrams can be drafted, but the design team must keep in mind that only general functions can be listed. More specific system characteristics are covered in step three, basic system design (Sanders & McCormick, 728-9).
The basic system design phase determines a number of variables, one of which is the allocation of functions to Liveware, Hardware, and Software. A sample allocation model considers five methods: mandatory, balance of value, utilitarian, affective and cognitive support, and dynamic. Mandatory allocation is the distribution of tasks based on limitations. There are some tasks which Liveware is incapable of handling, and likewise with Hardware. Other considerations with mandatory allocation are laws and environmental restraints. Balance of value allocation is the theory that each task is either incapable of being done by Liveware or Hardware, is better done by Liveware or Hardware, or can be done only by Liveware or Hardware. Utilitarian allocation is based on economic restraints. With the avionics package in many commercial jets costing as much as 15% of the overall aircraft price (Hawkins, 243), it would be very easy for design teams to allocate as many tasks to the operator as possible. This, in fact, was standard practice before the advent of automation as it exists today. The antithesis to that philosophy is to automate as many tasks as possible to relieve pressure on the pilot. Affective and cognitive support allocation recognizes the unique needs of the Liveware component and assigns tasks to Hardware to provide as much information and decision-making support as possible. It also takes into account limitations, such as emotions and stress, which can impede Liveware performance. Finally, dynamic allocation refers to an operator-controlled process where the pilot can determine which functions should be delegated to the machine, and which he or she should control at any time. Again, this allocation model is only theoretical, and often a design process will encompass all, or sometimes none, of these philosophies (Sanders & McCormick, 730-4).
Basic system design also delegates Liveware performance requirements, characteristics that the operator must possess for the system to meet design specifications (such as accuracy, speed, training, proficiency). Once that is determined, an in-depth task description and analysis is created. This phase is essential to the human factors interface, because it analyzes the nature of the task and breaks it down into every step necessary to complete that task. The steps are further broken down to determine the following criteria: stimulus required to initiate the step, decision making which must be accomplished (if any), actions required, information needed, feedback, potential sources of error and what needs to be done to accomplish successful step completion. Task analysis is the foremost method of defining the Liveware-Hardware interface. It is imperative that a cockpit be designed using a process similar to this if it is to maintain effective communication between the operator and machine (Bailey, 202-6). It is widely accepted that the equipment determines the job. Based on that assumption, operator participation in this design phase can greatly enhance job enlargement and enrichment (Sanders & McCormick, 737; Hawkins, 143-4).
Interface design, the fourth process in the design model, analyzes the interfaces between all components of the SHEL model, with an emphasis on the human factors role in gathering and interpreting data. During this stage, evaluations are made of suggested designs, human factors data is gathered (such as statistical data on body dimensions), and any gathered data is applied. Any application of data goes through a sub-process that determines the data's practical significance, its interface with the environment, the risks of implementation, and any give and take involved. The last item involved in this phase is conducting Liveware performance studies to determine the capabilities and limitations of that component in the suggested design. The fifth step in the design stage is facilitator design. Facilitators are basically Software designs that enhance the Liveware-Hardware interface, such as operating manuals, placards, and graphs. Finally, the last design step is to conduct testing of the proposed design and evaluate the human factors input and interfaces between all components involved. An application of this process to each system design will enhance the operator's ability to control the system within desired specifications. Some of the specific design characteristics can be found in subsequent chapters.
IV. Biomechanics
In December of 1981, a Piper Comanche aircraft temporarily lost directional control in gusty conditions within the performance specifications of the aircraft. The pilot later reported that with the control column full aft, he was unable to maintain adequate aileron control because his knees were interfering with proper control movement (NTSB database). Although this is a small incident, it should alert engineers to a potential problem area. Probably the most fundamental, and easiest to quantify interface in the cockpit is the physical dimensions of the Liveware component and the Hardware designs which must accommodate it. The comfort of the workspace has long been known to alleviate or perpetuate fatigue over long periods of time (Hawkins, 282-3). These facts indicate a need to discuss the factors involved in workspace design.
When designing a cockpit, the engineer should determine the physical dimensions of the operator. Given the variable dimensions of the human body, it is naturally impossible to design a system that will accommodate all users. An industry standard is to design for the middle 95% of the population, discarding the top and bottom 2.5% of any measured dimension. From this, general design can be accomplished by incorporating the reach and strength limitations of smaller people, and the clearance limitations of larger people. Three basic design philosophies must be adhered to when designing around physical dimensions: reach and clearance envelopes, user position with respect to the display area, and the position of the body (Bailey, 273).
Other differences must be taken into account when designing a system, such as ethnic and gender differences. It is known, for example, that women are, on average, 7% shorter than men (Pheasant, 44). If the 95th percentile convention is used, the question arises: on which gender do we base it? One way to speak of the comparison is to discuss the F/M ratio, the average female characteristic divided by the average male characteristic. Although this ratio doesn't take into account the possibility of overlap (i.e., the bottom 5th percentile of males are likely to be shorter than the top 5th percentile of females), that is not an issue in cockpit design (Pheasant, 44). The other variable, ethnicity, must also be evaluated in system design. Some Asian populations, for example, have a sitting height almost ten centimeters lower than Europeans (Pheasant, 50). This can raise a potential problem when designing an instrument panel or windshield.
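As a rough illustration of the percentile convention described above, the sketch below computes a 2.5th-to-97.5th percentile design range from a set of samples and an F/M ratio; the sample distribution and the two average values are made-up numbers for illustration, not figures from the cited sources.

import numpy as np

# Hypothetical sitting-height samples in centimeters (illustrative values only).
rng = np.random.default_rng(0)
sitting_height_cm = rng.normal(loc=88.0, scale=3.5, size=10_000)

# Design for the middle 95% of the population: discard the top and bottom 2.5%.
low, high = np.percentile(sitting_height_cm, [2.5, 97.5])
print(f"Design clearance range: {low:.1f} cm to {high:.1f} cm")

# F/M ratio: average female characteristic divided by the average male characteristic.
mean_female_cm, mean_male_cm = 85.0, 91.5   # assumed averages for illustration
print(f"F/M ratio: {mean_female_cm / mean_male_cm:.2f}")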
Some design guides have been established to help the engineer with conceptual problems such as these, but for the most part, systems designers are limited to data gathered from human factors research (Tillman & Tillman, 80-7). As one story went, during the final design phase of the Boeing 777, the chairman of United Airlines was invited to preview it. When he stood in his first class seat, his head collided with an overhead baggage rack. Boeing officials were apologetic, but the engineers were grinning inside. A few months later, the launch of the first 777 in service included overhead baggage racks that were much higher, and less likely to be involved in a collision. Unlike this experience, designing clearances and reach envelopes for a cockpit is too expensive to be a trial and error venture.
V. Controls
In early 1974, the NTSB released a recommendation to the FAA regarding control inconsistencies:
"A-74-39. Amend 14 cfr 23 to include specifications for standardizing fuel selection valve handle designs, displays, and modes of operation" (NTSB database).
A series of accidents occurred during transition training of pilots flying the Beechcraft Bonanza and Baron aircraft, when flap and gear handles were confused:
"As part of a recently completed special investigation, the safety board reviewed its files for every inadvertent landing gear retraction accident between 1975 and 1978. These accidents typically happened because the pilot was attempting to put the flaps control up after landing, and moved the landing gear control instead. This inadvertent movement of the landing gear control was often attributed to the pilot's being under stress or distracted, and being more accustomed to flying aircraft in which these two controls were in exactly opposite locations. Two popular light aircraft, the Beech Bonanza and Baron, were involved in the majority of these accidents. The bonanza constituted only about 30 percent of the active light single engine aircraft fleet retractable landing gear, but was involved in 16 of the 24 accidents suffered by this category of aircraft. Similarly, the baron constituted only 16 percent of the light twin fleet, yet suffered 21 of the 39 such accidents occurring to these aircraft" (NTSB database).
Like biomechanics, the design of controls is the study of physical relationships within the Liveware-Hardware interface. However, control design philosophy tends to be more subtle, and there is slightly more emphasis on psychological components. A designer determines what kind of control to use in a system only after the purpose of the system has been established, and what operator needs and limitations are.
In general, controls serve one of four actions: activation, discrete setting, quantitative setting, and continuous control. Activation controls are those that toggle a system on or off, like a light switch. Discrete setting switches are variable position switches with three or more options, such as a fuel selector switch with three settings. Quantitative setting switches are usually knobs that control a system along a predefined quantitative dimension, such as a radio tuner or volume control. Continuous controls are controls that require constant equipment control, such as a steering wheel. A control is a system, and therefore follows the same guidelines for system design described above. In general, there are a few guidelines to control design that are unique to that system. Controls should be easily identified by color coding, labeling, size and shape coding and location (Bailey, 258-64).
When designing controls, some general principles apply. Normal requirements for control operation should not exceed the maximum limitations of the least capable operator. More important controls should be given placement priority. The neutral position of the controls should correspond with the operator's most comfortable position, and full control deflection should not require an extreme body position (locked legs, or arms). The controls should be designed within the most biomechanically efficient design. The number of controls should be kept to a minimum to reduce workload, or when that is not possible, combining activation controls into discrete controls is preferable. When designing a system, it should be noted that foot control is stronger, but less accurate than hand control. Continuous control operation should be distributed around the body, instead of focused on one particular part, and should be kept as short as possible (Damon, 291-2).
Detailed studies have been conducted about control design, and some concerns were such things as the ability of an operator to discern one control from another, the size and spacing of controls, and stereotypes. It was stated that even with vision available, easily discernible controls were mistaken for one another (Fitts, 898; Adams, 276). A study by Jenkins revealed a set of control knobs that were not prone to such error, or were less likely to yield errors (Adams, 276-7). Some of these have been incorporated in aircraft designs as recent as the Boeing 777. Another study, conducted by Bradley in 1969, revealed that the size and spacing of knobs was directly related to inadvertent operation. He believed that if a knob were too large, too small, too far apart, or too close together, the operator was prone to a greater error yield. In the study, Bradley concluded that the optimum spacing between half-inch knobs would be one inch between their edges, which would yield the lowest rate of inadvertent knob operation (Fitts, 901-2; Adams, 278). Population stereotypes address the issue of how a control should be operated (should a light switch be moved up, to the left, to the right, or down to turn it on?). There are four advantages that follow a model of ideal control relationship: decreased reaction time, fewer errors, better speed of knob adjustment, and faster learning (Van Cott & Kinkade, 349). When that ideal relationship is violated, these advantages become a great source of error for an operator who is unfamiliar with the aircraft and experiencing stress. During a time of high workload, one characteristic of the Liveware component is to revert to what was first learned (Adams, 279-80). In the case of the Bonanza and Baron pilots, this reversion was the cause of mistaking the gear and flap switches.
VI. Displays
In late 1986, the NTSB released the following recommendation to the FAA based on three accidents that had occurred within the preceding two years:
"A-86-105. Issue an Air Carrier Operations Bulletin-Part 135, directing Principal Operations Inspectors to ensure that commuter air carrier training programs specifically emphasize the differences existing in cockpit instrumentation and equipment in the fleet of their commuter operators and that these training programs cover the human engineering aspects of these differences and the human performance problems associated with these differences" (NTSB database).
The instrumentation in a cockpit environment provides the only source of feedback to the pilot in instrument flying conditions. Therefore, it is a very valuable design characteristic, and special attention must be paid to optimum engineering. There are two basic kinds of instruments that accomplish this task: symbolic and pictorial instruments. All instruments are coded representations of what can be found in the real world, but some are more abstract than others. Symbolic instrumentation is usually more abstract than pictorial (Adams, 195-6). When designing a cockpit, the first consideration involves the choice between these two types of instruments. This decision is based directly on the operational requirements of the system, and the purpose of the system. Once this has been determined, the next step is to decide what sort of data is going to be displayed by the system, and choose a specific instrument accordingly.
Symbolic instrumentation usually displays a combination of four types of information: quantitative, qualitative, comparison, and check reading (Adams, 197). Quantitative information is the numerical value of a variable, and is best displayed using counters or dials with a low degree of curvature. The preferable orientation of a straight dial would be horizontal, similar to the heading indicator found in glass cockpits. However, conflicting research has shown that no loss of accuracy could be noted with high-curvature dials (Murrell, 162). Another experiment showed that moving index displays with a fixed pointer are more accurate than a moving pointer on a fixed index (Adams, 200-1). Qualitative reading is the judgment of approximate values, trends, directions, or rate of variable change. This information is displayed when a high level of accuracy is not required for successful task completion (Adams, 197). A study conducted by Grether and Connell in 1948 suggested that vertical straight dials are superior to circular dials because an increase in needle deflection will always indicate a positive change. However, conflicting arguments came from studies conducted a few years later, which stated that no ambiguity will manifest with a circular dial provided no control inputs are made. It has also been suggested that moving pointers along a fixed background are superior to fixed pointers, but the few errors made in reading a directional gyro seem to disagree with this supposition (Murrell, 163). Comparisons of two readings are best shown on circular dials with no markings, but if markings are necessary, they should not be closer than 10 degrees to each other (Murrell, 163). Check reading involves verifying whether a change has occurred from the desired value (Adams, 197). The most efficient instruments for this kind of task are any with a moving pointer. However, the studies concerning this type of informational display have only been conducted with a single instrument. It is not known if this is the most efficient instrument type when the operator is involved in a quick scan (Murrell, 163-4).
The pictorial instrument is most efficiently used in situation displays, such as the attitude indicator or air traffic control radar. In one experiment, pilots were allowed to use various kinds of situation instruments to tackle a navigational problem. Their performance was recorded, and the procedure was repeated using different pilots with only symbolic instruments. Interestingly, the pilots given the pictorial instrumentation performed no navigation errors, whereas those given the symbolic displays made errors almost ten percent of the time (Adams, 208-209). Regardless of these results, it has long been known that the most efficient navigational methods are accomplished by combining the advantages of these two types of instruments.
VII. Summary
The preceding chapters illustrate design-side techniques that can be incorporated by engineers to reduce the occurrence of mishaps due to Liveware-Hardware interface problems. The system design model presented is ideal and theoretical; to practice it fully would cost corporations much more money than they would save compared with less rigorous methods. However, today's society seems to be moving towards a global consensus to take safety more seriously, and perhaps in the future, total human factors optimization will become manifest. The discussion of biomechanics was purposely broad, because it is such a wide and diverse field. The concepts touched upon indicate the areas of concern that a designer must address before creating a cockpit that is ergonomically friendly in the physical sense. Controls and displays hold a little more relevance, because they are the fundamental control and feedback devices involved in controlling the aircraft. These were discussed in greater detail because many of those concepts never reach the conscious mind of the operator. Although awareness of these factors is not critical to safe aircraft operation, they do play a vital role in the subconscious mind of the pilot during critical operational phases under high stress. Because of the unpredictable nature of man, it would be foolish to assume a zero-tolerance environment for potential errors like these, but further investigation into the design process, biomechanics, and control and display devices may yield greater insight as far as causal factors are concerned. Armed with this knowledge, engineers can set out to build aircraft not only to transport people and material, but also to save lives.
f:\12000 essays\sciences (985)\Physics\aerospace wind tunnel.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Wind Tunnel
In this report I will talk about the wind tunnel. I will describe what wind tunnels are used for and the different types of wind tunnels, from the slow-speed subsonic to the high-speed hypersonic tunnels. I will also give a few examples of the wind tunnels used today.
The wind tunnel is a device used by many people, from high school students to NASA engineers. It is used to test planes to see how well they will do under certain conditions. The plane may be as big as a full-size 747 or as small as a match. To understand how a wind tunnel is used to help in the design process, you have to know how a wind tunnel works.
How Wind Tunnels Work
A wind tunnel is a machine used to "fly" aircraft, missiles, engines, and rockets on the ground under pre-set conditions. With a wind tunnel you can choose the air speed, pressure, altitude and temperature, to name a few things. A wind tunnel usually has a tube-like appearance, in which wind is produced by a large fan to flow over what is being tested (a plane, missile, rocket, etc.) or a model of it. The test object is fixed in place in the test section of the tunnel, and instruments are placed on the model to record the aerodynamic forces acting on it.
Types of Wind Tunnels
There are four basic types of wind tunnels: low-speed subsonic, transonic, supersonic, and hypersonic. Wind tunnels are classified by the speed they can produce. The subsonic tunnel has a speed lower than the speed of sound. The transonic tunnel has a speed about equal to the speed of sound (Mach 1, 760 miles per hour at sea level). The supersonic tunnel (roughly Mach 2.75 to 4.96) runs at up to about five times the speed of sound, and the fastest of them all, the hypersonic tunnel (up to about Mach 39.5), has a speed of more than 30,000 miles per hour.
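As a rough sketch of how these categories relate to a test speed, the snippet below converts a speed to a Mach number using the 760 mph sea-level figure given above; the regime boundaries (Mach 0.8, 1.2, and 5) are conventional round values added for illustration rather than numbers from this report.

# Sort a test speed into the four tunnel categories described above.
# Assumes the sea-level speed of sound of 760 mph quoted in this report;
# the regime boundaries are conventional approximate values.
SPEED_OF_SOUND_MPH = 760.0

def tunnel_regime(speed_mph: float) -> str:
    mach = speed_mph / SPEED_OF_SOUND_MPH
    if mach < 0.8:
        return "subsonic"
    if mach < 1.2:
        return "transonic"
    if mach < 5.0:
        return "supersonic"
    return "hypersonic"

print(tunnel_regime(500))      # subsonic
print(tunnel_regime(30000))    # hypersonic (roughly Mach 39)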
Wind Tunnel Test
There are basically two types of wind tunnel tests, the static stability test and the pressure test. With these two tests you can determine the aerodynamic characteristics of the aircraft. The static stability test measures the forces and moments due to the external flow: axial, side and normal force, and rolling, pitching and yawing moment. These forces are found by using a strain gauge located on the external portion of the plane, which responds to the external flow field. A shadowgraph is then used to show the shock waves and flow fields at a certain speed or angle of attack. There is also the oil flow technique, which shows the surface flow pattern.
The pressure test is used to measure the pressures acting on the test object. This is done by placing taps over the surface. The taps are connected to transducers that read the local pressures. With this information they can balance out the plane. The static stability and pressure test data are then combined to find the distributed loads.
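A very simplified sketch of how tap readings might be turned into a load is to assign each tap a patch of surface area and add up pressure times area; the tap values below are invented for illustration, and every patch is treated as facing the same direction.

# Combine pressure-tap readings into an approximate load by assigning each tap
# a patch of surface and summing pressure x area. Values are illustrative only.
taps = [
    {"pressure_pa": 101_000.0, "area_m2": 0.02},
    {"pressure_pa":  98_500.0, "area_m2": 0.02},
    {"pressure_pa":  97_200.0, "area_m2": 0.03},
]

force_n = sum(tap["pressure_pa"] * tap["area_m2"] for tap in taps)
print(f"Approximate force on the instrumented surface: {force_n:.0f} N")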
Wind Tunnels Used Today
Wind tunnels vary in size from a few inches across to the 12 m by 24 m (40 ft by 80 ft) tunnel located at the Ames Research Center of the National Aeronautics and Space Administration (NASA) at Moffett Field, California. This wind tunnel at Ames can accommodate a full-size aircraft with a wingspan of 22 m (72 ft). Ames also has a hypervelocity tunnel that can create air velocities of up to 30,000 mph (48,000 km/h) for one second. This high speed is achieved by firing an explosive charge that drives a small model of the spacecraft into the tunnel in one direction, while another explosive charge simultaneously pushes gas into the tunnel from the other direction. There is also a wind tunnel at the Lewis Flight Propulsion Laboratory, also owned by NASA, in Cleveland, Ohio, which can test full-size jet engines at air velocities of up to 2,400 mph (3,860 km/h) and at altitudes of up to 100,000 ft (30,500 m).
Benefits of the Wind Tunnel
There are many benefits that one can gain from using a wind tunnel. Designing an airplane is a long, complicated and expensive process. With the wind tunnel you can build models and test them at a fraction of the price of making the real thing. When designing an airplane one has to take public safety into account and still make sure the design does what it is meant to do. With a wind tunnel you can design and test what you make before you build it.
With a wind tunnel you can also solve problems that already exist. One example of this came when the first jet-engine-propelled aircraft were produced in the 1940s. When the jet planes released the missiles carried on the external part of the plane, the missiles had a tendency to move up when released, causing a collision with the plane and resulting in the death of the pilot. With the wind tunnel they were able to solve this problem without the loss of any more lives.
Wind tunnels were so important that on February 1, 1956 the Army formed the ABMA at Redstone Arsenal in Huntsville, Alabama from Army missile program assets. This program was created to support ongoing research and development projects for the Army ballistic missile program, and as part of it a 14-inch wind tunnel was built to test the missiles.
Early tests were done to determine the aerodynamics of the Jupiter IRBM (Intermediate Range Ballistic Missile) and its nose cone. The Jupiter C missile was one of the first launch vehicles tested in the wind tunnel. The Jupiter C was a modified Redstone rocket made for nose cone re-entry testing. A modified Jupiter C, the Juno 1, launched America's first satellite, Explorer 1, into orbit. Soon after this the ABMA wind tunnel went to NASA. The wind tunnel played a vital role in the exploration of space, from the Saturn V, the first rocket to put a man on the moon (the Apollo missions), to the current Space Shuttle launch vehicle. The tunnel's mission changed from testing medium- to long-range missiles to supporting America's "Race Into Space". NASA increased the payload from the original 10 lb satellite (Explorer 1) to a man in a capsule (Project Mercury) and then to the Apollo Project. The Saturn family of launch vehicles spent hundreds of hours in the wind tunnel, where various configurations were tried to find the best result. At first they were going to make a fully reusable shuttle, but that idea cost too much and was ruled out because of the budget. With the budget in mind the current space shuttle started to take form, but it still took many years in a wind tunnel before the final designs of the Orbiter, External Tank and Solid Rocket Boosters took the shape we know today. Even after the space shuttle took flight, models were still being tested to increase performance, and tests were done to determine the cause of tile damage. As the shuttle program continued to progress at a rapid pace, it came to a standstill when the Challenger accident occurred. After the accident the 14-inch wind tunnel was immediately put into use to analyze what had occurred. These tests verified what happened with the SRB leak and the rupture of the aft external tank STA 2058 ring frame, and the data was used for trajectory and control reconstruction. With the information gained, they are trying to develop abort scenarios involving orbiter separation during transonic flight. All of these configurations were tested on a scale model that is 0.004 the size of the real shuttle.
These are just a few applications of the wind tunnel; there are many more things that it can do. With the invention of the wind tunnel the cost of designing and testing an aircraft has been reduced, and most important, lives have been saved. Without the wind tunnel there would be no way for us to know what will happen before it happens.
f:\12000 essays\sciences (985)\Physics\Albert Einstein 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ALBERT EINSTEIN
Albert Einstein was born in Germany on March 14, 1879. As a kid he had trouble learning to speak, and his parents thought that he might be mentally retarded. He did not do well in school. He suffered under the learning methods used in the schools of Germany at that time, so he was never able to finish his studies there. In 1894 his father's business failed and the family moved to Milan, Italy. Einstein, who had grown interested in science, went to Zurich, Switzerland, to enter a famous technical school. There his ability in mathematics and physics began to show.
When Einstein graduated in 1900 he was unable to get a teaching appointment at a university. Instead he got a clerical job in the patent office at Bern, Switzerland. It was not what he wanted, but it gave him leisure for studying and thinking, and while there he wrote scientific papers. Einstein submitted one of his scientific papers to the University of Zurich to obtain a Ph.D. degree in 1905. In 1908 he sent a second paper to the University of Bern and became a lecturer there. The next year Einstein received a regular appointment as associate professor of physics at the University of Zurich. By 1909, Einstein was recognized throughout Europe as a leading scientific thinker. The fame that resulted from his theories got Einstein a post at the University of Prague in 1911, and in 1913 he was appointed director of a new research institution opened in Berlin, the Kaiser Wilhelm Physics Institute.
In 1915, during World War I, Einstein published a paper that extended his theories. He put forth new views on the nature of gravitation; Newton's theories, he said, were not accurate enough. Einstein's theories explained the slow rotation of the entire orbit of the planet Mercury, which Newton's theories did not. Einstein's theories also predicted that light rays passing near the sun would be bent out of a straight line. When this was verified at the eclipse of 1919, Einstein was instantly accepted as the greatest scientific thinker since Newton.
By then Germany had fallen into the hands of Adolf Hitler and his Nazis. Albert Einstein was Jewish. In 1933, when the Nazis came to power, Einstein happened to be in California. He did not return to Germany; he went to Belgium instead. The Nazis confiscated his possessions, publicly burned his writings, and expelled him from all German scientific societies. Einstein then returned to the United States and became a citizen.
The atomic bomb is an explosive device that depends upon the release of energy in a nuclear reaction known as FISSION, which is the splitting of atomic nuclei. Einstein sent a letter to President Franklin D. Roosevelt, pointing out that atomic bombs were possible and that enemy nations must not be allowed to make them first.
Roosevelt agreed with Einstein and funded the
Manhattan Project.
On April 18, 1955, Albert Einstein died. To his dying
day, he urged the world to come to some agreement that
would make nuclear wars forever impossible.
SHAHIN TEHRANI
f:\12000 essays\sciences (985)\Physics\Albert Einstein.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Einstein, Albert (1879-1955), German-born American physicist and Nobel laureate, best known as the creator of the special and general theories of relativity and for his bold hypothesis concerning the particle nature of light. He is perhaps the most well-known scientist of the 20th century.
Einstein was born in Ulm on March 14, 1879, and spent his youth in Munich, where his family owned a small shop that manufactured electric machinery. He did not talk until the age of three, but even as a youth he showed a brilliant curiosity about nature and an ability to understand difficult mathematical concepts. At the age of 12 he taught himself Euclidean geometry.
Einstein hated the dull regimentation and unimaginative spirit of school in Munich. When repeated business failure led the family to leave Germany for Milan, Italy, Einstein, who was then 15 years old, used the opportunity to withdraw from the school. He spent a year with his parents in Milan, and when it became clear that he would have to make his own way in the world, he finished secondary school in Aarau, Switzerland, and entered the Swiss National Polytechnic in Zürich. Einstein did not enjoy the methods of instruction there. He often cut classes and used the time to study physics on his own or to play his beloved violin. He passed his examinations and graduated in 1900 by studying the notes of a classmate. His professors did not think highly of him and would not recommend him for a university position.
For two years Einstein worked as a tutor and substitute teacher. In 1902 he secured a position as an examiner in the Swiss patent office in Bern. In 1903 he married Mileva Marić, who had been his classmate at the polytechnic. They had two sons but eventually divorced. Einstein later remarried.
Early Scientific Publications
In 1905 Einstein received his doctorate from the University of Zürich for a theoretical dissertation on the dimensions of molecules, and he also published three theoretical papers of central importance to the development of 20th-century physics. In the first of these papers, on Brownian motion, he made significant predictions about the motion of particles that are randomly distributed in a fluid. These predictions were later confirmed by experiment.
The second paper, on the photoelectric effect, contained a revolutionary hypothesis concerning the nature of light. Einstein not only proposed that under certain circumstances light can be considered as consisting of particles, but he also hypothesized that the energy carried by any light particle, called a photon, is proportional to the frequency of the radiation. The formula for this is E = hν, where E is the energy of the radiation, h is a universal constant known as Planck's constant, and ν is the frequency of the radiation. This proposal-that the energy contained within a light beam is transferred in individual units, or quanta-contradicted a hundred-year-old tradition of considering light energy a manifestation of continuous processes. Virtually no one accepted Einstein's proposal. In fact, when the American physicist Robert Andrews Millikan experimentally confirmed the theory almost a decade later, he was surprised and somewhat disquieted by the outcome.
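As a quick numerical illustration of E = hν (added here for illustration, with an assumed frequency of about 5.5 x 10^14 Hz, roughly that of green light):

# Evaluate the photoelectric-effect relation E = h * nu for one assumed frequency.
PLANCK_CONSTANT_J_S = 6.626e-34   # Planck's constant in joule-seconds
frequency_hz = 5.5e14             # assumed frequency, roughly green light

photon_energy_j = PLANCK_CONSTANT_J_S * frequency_hz
print(f"Energy of one photon: {photon_energy_j:.2e} J")   # about 3.6e-19 J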
Einstein, whose prime concern was to understand the nature of electromagnetic radiation, subsequently urged the development of a theory that would be a fusion of the wave and particle models for light. Again, very few physicists understood or were sympathetic to these ideas.
Einstein's Special Theory of Relativity
Einstein's third major paper in 1905, "On the Electrodynamics of Moving Bodies," contained what became known as the special theory of relativity. Since the time of the English mathematician and physicist Sir Isaac Newton, natural philosophers (as physicists and chemists were known) had been trying to understand the nature of matter and radiation, and how they interacted in some unified world picture. The position that mechanical laws are fundamental has become known as the mechanical world view, and the position that electrical laws are fundamental has become known as the electromagnetic world view. Neither approach, however, is capable of providing a consistent explanation for the way radiation (light, for example) and matter interact when viewed from different inertial frames of reference, that is, an interaction viewed simultaneously by an observer at rest and an observer moving at uniform speed.
In the spring of 1905, after considering these problems for ten years, Einstein realized that the crux of the problem lay not in a theory of matter but in a theory of measurement. At the heart of his special theory of relativity was the realization that all measurements of time and space depend on judgments as to whether two distant events occur simultaneously. This led him to develop a theory based on two postulates: the principle of relativity, that physical laws are the same in all inertial reference systems, and the principle of the invariance of the speed of light, that the speed of light in a vacuum is a universal constant. He was thus able to provide a consistent and correct description of physical events in different inertial frames of reference without making special assumptions about the nature of matter or radiation, or how they interact. Virtually no one understood Einstein's argument.
Early Reactions to Einstein
The difficulty that others had with Einstein's work was not because it was too mathematically complex or technically obscure; the problem resulted, rather, from Einstein's beliefs about the nature of good theories and the relationship between experiment and theory. Although he maintained that the only source of knowledge is experience, he also believed that scientific theories are the free creations of a finely tuned physical intuition and that the premises on which theories are based cannot be connected logically to experiment. A good theory, therefore, is one in which a minimum number of postulates is required to account for the physical evidence. This sparseness of postulates, a feature of all Einstein's work, was what made his work so difficult for colleagues to comprehend, let alone support.
Einstein did have important supporters, however. His chief early patron was the German physicist Max Planck. Einstein remained at the patent office for four years after his star began to rise within the physics community. He then moved rapidly upward in the German-speaking academic world; his first academic appointment was in 1909 at the University of Zürich. In 1911 he moved to the German-speaking university at Prague, and in 1912 he returned to the Swiss National Polytechnic in Zürich. Finally, in 1913, he was appointed director of the Kaiser Wilhelm Institute for Physics in Berlin.
The General Theory of Relativity
Even before he left the patent office in 1907, Einstein began work on extending and generalizing the theory of relativity to all coordinate systems. He began by enunciating the principle of equivalence, a postulate that gravitational fields are equivalent to accelerations of the frame of reference. For example, people in a moving elevator cannot, in principle, decide whether the force that acts on them is caused by gravitation or by a constant acceleration of the elevator. The full general theory of relativity was not published until 1916. In this theory the interactions of bodies, which heretofore had been ascribed to gravitational forces, are explained as the influence of bodies on the geometry of space-time (four-dimensional space, a mathematical abstraction, having the three dimensions from Euclidean space and time as the fourth dimension).
On the basis of the general theory of relativity, Einstein accounted for the previously unexplained variations in the orbital motion of the planets and predicted the bending of starlight in the vicinity of a massive body such as the sun. The confirmation of this latter phenomenon during an eclipse of the sun in 1919 became a media event, and Einstein's fame spread worldwide.
For the rest of his life Einstein devoted considerable time to generalizing his theory even more. His last effort, the unified field theory, which was not entirely successful, was an attempt to understand all physical interactions-including electromagnetic interactions and weak and strong interactions-in terms of the modification of the geometry of space-time between interacting entities.
Most of Einstein's colleagues felt that these efforts were misguided. Between 1915 and 1930 the mainstream of physics was in developing a new conception of the fundamental character of matter, known as quantum theory. This theory contained the feature of wave-particle duality (light exhibits the properties of a particle, as well as of a wave) that Einstein had earlier urged as necessary, as well as the uncertainty principle, which states that precision in measuring processes is limited. Additionally, it contained a novel rejection, at a fundamental level, of the notion of strict causality. Einstein, however, would not accept such notions and remained a critic of these developments until the end of his life. "God," Einstein once said, "does not play dice with the world."
World Citizen
After 1919, Einstein became internationally renowned. He accrued honors and awards, including the Nobel Prize in physics in 1921, from various world scientific societies. His visit to any part of the world became a national event; photographers and reporters followed him everywhere. While regretting his loss of privacy, Einstein capitalized on his fame to further his own political and social views.
The two social movements that received his full support were pacifism and Zionism. During World War I he was one of a handful of German academics willing to publicly decry Germany's involvement in the war. After the war his continued public support of pacifist and Zionist goals made him the target of vicious attacks by anti-Semitic and right-wing elements in Germany. Even his scientific theories were publicly ridiculed, especially the theory of relativity.
When Hitler came to power, Einstein immediately decided to leave Germany for the United States. He took a position at the Institute for Advanced Study at Princeton, New Jersey. While continuing his efforts on behalf of world Zionism, Einstein renounced his former pacifist stand in the face of the awesome threat to humankind posed by the Nazi regime in Germany.
In 1939 Einstein collaborated with several other physicists in writing a letter to President Franklin D. Roosevelt, pointing out the possibility of making an atomic bomb and the likelihood that the German government was embarking on such a course. The letter, which bore only Einstein's signature, helped lend urgency to efforts in the U.S. to build the atomic bomb, but Einstein himself played no role in the work and knew nothing about it at the time.
After the war, Einstein was active in the cause of international disarmament and world government. He continued his active support of Zionism but declined the offer made by leaders of the state of Israel to become president of that country. In the U.S. during the late 1940s and early '50s he spoke out on the need for the nation's intellectuals to make any sacrifice necessary to preserve political freedom. Einstein died in Princeton on April 18, 1955.
Einstein's efforts in behalf of social causes have sometimes been viewed as unrealistic. In fact, his proposals were always carefully thought out. Like his scientific theories, they were motivated by sound intuition based on a shrewd and careful assessment of evidence and observation. Although Einstein gave much of himself to political and social causes, science always came first, because, he often said, only the discovery of the nature of the universe would have lasting meaning. His writings include Relativity: The Special and General Theory (1916); About Zionism (1931); Builders of the Universe (1932); Why War? (1933), with Sigmund Freud; The World as I See It (1934); The Evolution of Physics (1938), with the Polish physicist Leopold Infeld; and Out of My Later Years (1950). Einstein's collected papers are being published in a multivolume work, beginning in 1987.
f:\12000 essays\sciences (985)\Physics\Albert Einstien.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ALBERT EINSTEIN
Einstein was a German/American physicist who contributed more to the 20th century vision of physical reality than any other scientist. Einstein's theory of RELATIVITY seemed to a lot of people to be pure human thought, as did his other theories.
LIFE
Albert Einstein was born in Ulm, Germany, on March 14, 1879. Einstein's parents were nonobservant Jews who moved to Munich from Ulm when Einstein was an infant. The family business was manufacturing electrical equipment. When the business failed in 1894, the family moved to Milan, Italy, and Einstein decided to officially give up his German citizenship. Within a year, still not having completed secondary school, he failed an examination that would have allowed him to follow studies leading to a diploma in electrical engineering at the Swiss Federal Institute of Technology (the Zurich Polytechnic). He spent the following year in Aarau, where there were excellent teachers and an excellent physics facility. In 1896 he returned to the Zurich Polytechnic, where he graduated in 1900 as a secondary school teacher of math and physics.
Two years later, he acquired a post at the Swiss patent office in Bern. While he was employed there, from 1902 to 1909, he completed an extraordinary range of publications in theoretical physics, most of them written in his spare time. In 1905 he submitted one of his many scientific papers to the University of Zurich to obtain a Ph.D. degree. In 1908 he sent another scientific paper to the University of Bern and became a lecturer there.
In 1914 Einstein returned to Germany but did not reapply for citizenship. He was one of only a handful of German professors who opposed the use of force and did not support Germany's war aims. After the war, the Allies wanted German scientists removed from international meetings, but because Einstein was a Jew traveling with a Swiss passport, he remained an acceptable German delegate. Albert Einstein's political views as a pacifist and a Zionist put him at odds with conservatives in Germany, who labeled him a traitor and a defeatist.
With the rise of fascism in Germany, he moved to the United States in 1933 and abandoned his pacifism. He reluctantly agreed that the new danger (the Germans) had to be brought down by force of arms. In 1939 he sent a letter to President Franklin D. Roosevelt that urged America to develop an ATOMIC BOMB before the Germans did. This letter was one of many exchanges between the White House and Einstein, and it contributed to Roosevelt's decision to fund what became the MANHATTAN PROJECT.
Until the end of Einstein's life he searched for a Unified Field Theory, by which the phenomena of gravitation and electromagnetism could be derived from one set of equations. In 1955 Albert Einstein died in Princeton, New Jersey, where he held an analogous research position at the Institute for Advanced Study.
RELATIVITY
Einstein's theory of relativity caused a major revolution in 20th-century physics and astronomy. It introduced the concept of "relativity" to science, the idea that there is no absolute motion, only relative motion, and consequently it replaced Isaac Newton's 200-year-old theory of mechanics. "Einstein showed that we do not reside in the flat, Euclidean space and uniform, absolute time of everyday life, but in another environment: curved space-time." The theory played a part in advances in physics. It led to the nuclear era, with potential for benefit as well as devastation, and made possible an understanding of the microworld of elementary particles and their interactions.
f:\12000 essays\sciences (985)\Physics\Aspirin.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Andrew Donehoo
January 15, 1997
8-3
Aspirin
Aspirin is a white crystalline substance made of carbon, hydrogen, and oxygen. It is used in the treatment of rheumatic fever, headaches, neuralgia, colds, and arthritis, and to reduce temperature and pain. The formula for aspirin is CH3CO2C6H4CO2H. Aspirin's scientific name is acetylsalicylic acid (ASA). The main ingredient in ASA is salicylic acid, which is found in the roots, leaves, flowers and fruits of many plants.
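As a small arithmetic check on the formula above (added for illustration), CH3CO2C6H4CO2H works out to nine carbons, eight hydrogens, and four oxygens, and the snippet below totals the rounded standard atomic masses.

# Back-of-the-envelope molar mass of acetylsalicylic acid (C9H8O4),
# using rounded standard atomic masses.
atomic_mass_g_mol = {"C": 12.011, "H": 1.008, "O": 15.999}
aspirin_atoms = {"C": 9, "H": 8, "O": 4}

molar_mass = sum(atomic_mass_g_mol[el] * n for el, n in aspirin_atoms.items())
print(f"Molar mass of acetylsalicylic acid: {molar_mass:.2f} g/mol")  # about 180.16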
About 100 years ago, a German chemist, Felix Hoffmann, set out to find a drug that would ease his father's arthritis without causing severe stomach irritation that came from sodium salicylate, the standard anti-arthritis treatment of the time. Hoffmann figured that the acidity of the salicylate made it hard on the stomach's lining. He began looking for a less acidic formulation. His search led him to the synthesization of acetylsalicylic acid. The compound shared the therapeutic properties of other salicylates, but caused less stomach irritation. ASA reduced fever, relieved moderate pain, and, at higher doses, alleviated rheumatic fever and arthritic conditions.
Though Hoffmann was confident that ASA would prove more effective than other salicylates, his superiors incorrectly stated that ASA weakens the heart and that physicians would not prescribe it. Hoffmann's employer, Friedrich Bayer and Company, gave ASA its now famous name, aspirin.
It is not yet fully known how aspirin works, but most authorities agree that it achieves some of its effects by hindering the production of prostaglandins. Prostaglandins are hormone-like substances that influence the elasticity of blood vessels. John Vane, Ph.D., noted that many forms of tissue injury were followed by the release of prostaglandins. It was proved that prostaglandins caused redness and fever, common signs of inflammation. Vane's research showed that by blocking prostaglandins, aspirin prevented blood platelets from aggregating and forming blood clots.
Aspirin can be used for the temporary relief of headaches, painful discomfort and fever from colds, muscular aches and pains, and temporary relief to minor pains of arthritis, toothaches, and menstrual pain. Aspirin should not be used in patients who have an allergic reaction to aspirin and/or nonsteroidal anti-inflammatory agents.
The usual dosage for adults and children over the age of 12 is one or two tablets with water. This may be repeated every 4 hours as necessary, up to 12 tablets a day or as directed by your doctor. You should not give aspirin to children under the age of 12. An overdose of 200 to 500 mg/kg is in the fatal range. Early symptoms of overdose are vomiting, hyperpnea, hyperactivity, and convulsions. This progresses quickly to depression, coma, respiratory failure, and collapse. In case of an overdose, intensive supportive therapy should be instituted immediately. Plasma salicylate levels should be measured in order to determine the severity of the poisoning and to provide a guide for therapy. Emptying the stomach should be accomplished as soon as possible.
Children and teenagers should not use aspirin for chicken pox or flu symptoms before a doctor is consulted. You should not take this product if you are allergic to aspirin, or if you have asthma, recurring stomach problems, gastric ulcers, or bleeding problems, unless directed by a doctor. Aspirin should be kept out of reach of children. In case of an overdose, you should seek professional assistance or contact a poison control center immediately. If you are pregnant or nursing a baby, seek the advice of a health professional before taking aspirin.
Since the discovery of aspirin, it has been found to help protect against recurrent strokes, throat cancer, breast cancer, and colon cancer, and to reduce the effects of heart attacks and strokes. A heart attack occurs when there is a blockage of blood flow to the heart muscle. Without adequate blood supply, the affected area of muscle dies and the heart's pumping action is either impaired or stopped altogether. When aspirin is taken, it thins the blood, allowing it to pass through narrowed blood vessels. Studies show that people who take an aspirin on a daily basis have a reduced risk of heart attack or stroke.
Though aspirin is taken for granted, it is a product that, over many years, evolved from willow bark into the acetylsalicylic acid that we take for symptoms ranging from the common cold to heart attacks.
In the top diagram on the next page, the Kolbe Synthesis is shown. It shows how salicylic acid is produced. The middle diagram shows the process that turns salicylic acid into acetylsalicylic acid. In the 3-D model of aspirin, the gray atoms are carbon, the white atoms are hydrogen, and the red atoms are oxygen.
f:\12000 essays\sciences (985)\Physics\beam me up scotty.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Beam Me Up Scotty
Some people think that teleportation is not possible, while other people think that it is and claim to be doing it already.
The idea behind teleportation is that an object is equivalent to the information needed to construct it. The object can then be transported by transmitting that information in bits (1 bit = one yes-or-no answer; 8 bits = 1 byte) along a telecommunications channel; at the other end of the line a receiver reconstructs the object using the information it is given. It works just like a fax machine, except that an ordinary document fax takes up about 20 kilobytes (20,000 bytes), whereas a human "fax," or teleportation, would take about 10 gigabytes (10,000,000,000 bytes) for just one millimeter of human (A Fun Talk On Teleportation). But with a few technical breakthroughs, you might imagine, you'd be able to teleport over to a friend's house for dinner simply by stepping into a scanner that would record all the information about the atoms making you up. With all the data collected, the machine would then vaporize your body and relay the information to your friend's teleporter, which would rebuild you using basic molecular elements.
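A rough back-of-the-envelope sketch of that comparison is below; the 10-gigabytes-per-millimeter figure comes from the essay's source, while the person's height and the link speed are assumed illustrative values.

# Rough data-volume estimate for the "human fax" comparison above.
# 10 GB per millimeter is the essay's figure; the 1,700 mm height and the
# 1 Gbit/s link speed are assumed values used only for illustration.
FAX_PAGE_BYTES = 20_000            # ~20 kilobytes for an ordinary document fax
BYTES_PER_MM = 10_000_000_000      # ~10 gigabytes per millimeter of human
HEIGHT_MM = 1_700                  # assumed height of the person being "faxed"

total_bytes = BYTES_PER_MM * HEIGHT_MM
print(f"Document fax:    {FAX_PAGE_BYTES:,} bytes")
print(f"Whole-body scan: {total_bytes:,} bytes "
      f"(about {total_bytes / FAX_PAGE_BYTES:,.0f} fax pages)")

LINK_BITS_PER_SECOND = 1_000_000_000   # assumed 1 Gbit/s channel
seconds = total_bytes * 8 / LINK_BITS_PER_SECOND
print(f"Transmission time at 1 Gbit/s: about {seconds / 86_400:.1f} days")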
Some people don't try to think of a scientific answer to it; they simply believe that they can move something from point A to point B.
There are many claimed kinds of teleportation. One kind is transferring a picture of an image onto a piece of film in a special camera called a tele-camera: the teleporter presses the lens of the camera to his or her forehead and concentrates on the picture as hard as possible. Most of the time nothing shows up on the film, but on a few occasions the picture, usually of a building or historical marker, barely shows up.
Another claimed kind of teleportation is Water Witching, the act of bending small metal objects, such as a spoon or some keys, without touching them. In one famous instance, a well-known practitioner appeared on a popular television show and bent a fork and a spoon; several viewers called the station saying that, while the show was on, all of their own forks and spoons had bent as well.
Usually when somebody says teleportation, people think of Star Trek, but instead of stepping onto a scanner and moving your body, some people claim they can actually lift themselves into the air just by hypnotizing themselves. In a few years, with a little more technology, people might replace cars, buses, trains, and planes with a teleporter.
f:\12000 essays\sciences (985)\Physics\BEC the new phase of matter.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
B E C
The New Phase of Matter
A new phase of matter has been discovered seventy years after Albert Einstein predicted its existence. In this new state of matter, atoms do not move around as they would in an ordinary gas; instead they move in lock step with one another and have identical quantum properties. This will make it easier for physicists to research the mysterious properties of quantum mechanics. It was named "Molecule of the Year" because it was such a major discovery, even though it is not a molecule at all. The phase, called the Bose-Einstein condensate (BEC), follows the laws of quantum physics.
In early 1995, scientists at the National Institute of Standards and Technology and the University of Colorado were the first to produce a BEC. They magnetically trapped rubidium atoms and then supercooled them to almost absolute zero, slowing the atoms to very low velocities with laser beams. The graphic on the cover shows the Bose-Einstein condensation, where the atoms' velocities peak close to zero and the atoms slowly emerge from the condensate. The hardware needed to create the BEC is a bargain at $50,000 to $100,000, which makes it accessible to physics labs around the world.
The next step is to test the new phase of matter. We do not know yet whether it absorbs, reflects, or refracts light. BEC is related to superconductivity and may unlock some mysteries of why some materials are able to conduct electricity without resistance. The asymmetrical pattern of BEC is thought by some astrophysicists to explain the bumpy distribution of matter in the early universe, a distribution that eventually led to the formation of galaxies. Physicists are working on creating an atom laser using new technology derived from the BEC. The new lasers would be able to create etchings finer than those used to etch silicon chips today.
The discovery of BEC has prompted a great deal of research into the new phase. BEC is expected to yield benefits to industry and society. I expect that large businesses will take advantage of the new technology and start making products with it, probably in the form of the atom lasers BEC is expected to make possible. The lasers might be used for laser surgery or for any application where lasers are used today.
f:\12000 essays\sciences (985)\Physics\Black Holes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Every day we look out upon the night sky, wondering and dreaming of what lies beyond our planet. The universe that we live in is so diverse and unique, and it interests us to learn about all the variety that lies beyond our grasp. Within this marvel of wonders, our universe holds a mystery that is very difficult to understand because of the complications that arise when trying to examine and explore the principles of space. That mystery happens to be the ever-elusive black hole.
This essay will hopefully give you the knowledge and understanding of the concepts, properties, and processes involved with the space phenomenon of the black hole. It will describe how a black hole is generally formed, how it functions, and the effects it has on the universe.
By definition, a black hole is a region where matter collapses to infinite density, and where, as a result, the curvature of space-time is extreme. Moreover, the intense gravitational field of the black hole prevents any light or other electromagnetic radiation from escaping. But where lies the "point of no return" at which any matter or energy is doomed to disappear from the visible universe?
The black hole's surface is known as the event horizon. Behind this horizon, the inward pull of gravity is overwhelming and no information about the black hole's interior can escape to the outer universe. Applying the Einstein field equations to collapsing stars, Karl Schwarzschild discovered the critical radius for a given mass at which matter would collapse into an infinitely dense state known as a singularity.
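As a minimal numerical sketch of that critical radius, the standard formula r_s = 2GM/c^2 can be evaluated for a few sample masses; the example masses below are illustrative values, not figures from this essay.

# Schwarzschild radius r_s = 2 * G * M / c**2 for a few sample masses.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
SOLAR_MASS = 1.989e30    # kg

def schwarzschild_radius(mass_kg):
    """Radius (in metres) to which a mass must be compressed to become a black hole."""
    return 2 * G * mass_kg / C**2

for label, mass in [("the Sun", SOLAR_MASS),
                    ("a 10-solar-mass stellar core", 10 * SOLAR_MASS),
                    ("the Earth", 5.972e24)]:
    print(f"{label}: about {schwarzschild_radius(mass):,.3f} m")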
At the center of the black hole lies the singularity, where matter is crushed to infinite density, the pull of gravity is infinitely strong, and space-time has infinite curvature. Here it is no longer meaningful to speak of space and time, much less space-time. Jumbled up at the singularity, space and time as we know them cease to exist. At the singularity, the laws of physics, including Einstein's Theory of General Relativity, break down, and a theory of quantum gravity is needed. In this realm, space and time are broken apart and cause and effect cannot be unraveled. Even today, there is no satisfactory theory for what happens at and beyond the rim of the singularity.
A rotating black hole has an interesting feature, called a Cauchy horizon, contained in its interior. The Cauchy horizon is a light-like surface which is the boundary of the domain of validity of the Cauchy problem. What this means is that it is impossible to use the laws of physics to predict the structure of the region after the Cauchy horizon. This breakdown of predictability has led physicists to hypothesize that a singularity should form at the Cauchy horizon, forcing the evolution of the interior to stop at the Cauchy horizon, rendering the idea of a region after it meaningless.
Recently this hypothesis was tested in a simple black hole model. A spherically symmetric black hole with a point electric charge has the same essential features as a rotating black hole. It was shown in the spherical model that the Cauchy horizon does develop a scalar curvature singularity. It was also found that the mass of the black hole measured near the Cauchy horizon diverges exponentially as the Cauchy horizon is approached. This led to the phenomenon being dubbed "mass inflation."
In order to understand what exactly a black hole is, we must first take a look at what causes one. All black holes are formed from the gravitational collapse of a star, usually one with a great, massive core. A star is created when huge gas clouds bind together under their mutual attraction and form a hot core that combines the energy of the clouds. The energy released is so great that a nuclear reaction begins and the gases within the star start to burn continuously. Hydrogen is usually the first gas consumed in a star, and then other elements such as carbon, oxygen, and helium are consumed.
This chain reaction fuels the star for millions or billions of years, depending upon the amount of gas there is. The star manages to avoid collapsing during this period because it achieves an equilibrium: the inward gravitational pull of the core is balanced by the outward pressure of the burning gases. However, when this balance is broken, the star can go into several different stages.
Usually, if the star is small in mass, most of the gases will be consumed while some escape. This occurs because there is not a tremendous gravitational pull upon those gases, and therefore the star weakens and becomes smaller. It is then referred to as a white dwarf. A teaspoonful of white dwarf material would weigh five-and-a-half tons on Earth. Yet a white dwarf star can contract no further; its electrons resist further compression by exerting an outward pressure that counteracts gravity. If the star has a larger mass, then it might go supernova, such as SN 1987A, meaning that the nuclear fusion within the star simply goes out of control, causing the star to explode.
After exploding, a fraction of the star is usually left (if it has not turned into pure gas) and that fraction of the star is known as a neutron star. Neutron stars are so dense, a teaspoonful would weigh 100 million tons on Earth. As heavy as neutron stars are, they too can only contract so far. This is because, as crushed as they are, the neutrons also resist the inward pull of gravity, just as a white dwarf's electrons do.
A black hole is one of the last options that a star may take. If the core of the star is so massive (approximately 6-8 times the mass of the sun) then it is most likely that when the star's gases are almost consumed those gases will collapse inward, forced into the core by the gravitational force laid upon them. The core continues to collapse to a critical size or circumference, or "the point of no return."
After a black hole is created, its gravitational force continues to pull in space debris and other types of matter, adding to the mass of the core and making the hole stronger and more powerful.
The most defining quality of a black hole is a gravitational field so strong that it bends light toward it and prevents it from escaping. Black holes can also emit gravitational waves, which are disturbances in the curvature of space-time caused by the motions of matter. Propagating at (or near) the speed of light, gravitational waves do not travel through space-time as such -- the fabric of space-time itself is oscillating. Though gravitational waves pass straight through matter, their strength weakens as the distance from the original source increases.
Although many physicists doubted the existence of gravitational waves, physical evidence was presented when American researchers observed a binary pulsar system that was thought to consist of two neutron stars orbiting each other closely and rapidly. Radio pulses from one of the stars showed that its orbital period was decreasing. In other words, the stars were spiraling toward each other, and by the exact amount predicted if the system were losing energy by radiating gravity waves.
Most black holes tend to be in a constant spinning motion, having inherited the rotation of the stars that formed them. This motion drags in surrounding matter and spins it in a ring around the black hole. The matter spirals inward until it reaches the center, where it is concentrated within the core, adding to the mass. Such spinning black holes are known as Kerr black holes.
Time runs slower where gravity is stronger. If we look at something next to a black hole, it appears to be in slow motion, and it is. The further into the hole you get, the slower time is running. However, if you are inside, you think that you are moving normally, and everyone outside is moving very fast.
Some scientists think that if you enter a black hole the forces inside will transport you to another place in space and time. At the other end would be a white hole, which is theoretically a point in space that just expels matter and energy.
Most black holes also orbit around stars, partly because they were once stars themselves in binary systems. This may cause some problems for the neighboring stars, for if a black hole grows powerful enough it may actually pull a star into it and disrupt the orbits of many other stars. The black hole can then grow strong enough (from the star's mass) to possibly absorb another star.
When a black hole absorbs a star, the star is first pulled into the ergosphere, which sweeps the matter toward the event horizon, the surface beyond which nothing that happens can ever be observed from outside. When the star passes the event horizon, its light is bent so strongly that it can no longer be seen from space. At this point, high amounts of radiation are given off, which, with the proper equipment, can be detected and imaged as the signature of a black hole. Through this technique, astronomers believe they have found a black hole in the galaxy Centaurus A. The existence of a star apparently orbiting empty space led astronomers to suggest and confirm the existence of another black hole, Cygnus X-1.
By emitting gravitational waves, non-stationary black holes lose energy, eventually becoming stationary and ceasing to radiate in this manner. In other words, they decay and become stationary black holes, namely holes that are perfectly spherical or whose rotation is perfectly uniform. According to Einstein's Theory of General Relativity, such objects cannot emit gravitational waves.
Black hole electrodynamics is the theory of electrodynamics outside a black hole. This can be very trivial if you consider just a black hole described by the three usual parameters: mass, electric charge, and angular momentum. Initially simplifying the case by disregarding rotation, we simply get the well known solution of a point charge. This is not very physically interesting, since it seems highly unlikely that any black hole (or any celestial body) should not be rotating. Adding rotation, it seems that charge is present. A rotating, charged black hole creates a magnetic field around the hole because the inertial frame is dragged around the hole. Far from the black hole, at infinity, the black hole electric field is that of a point charge.
In practice, however, black holes are not expected to hold any net charge: a charged hole would attract opposite charges from its surroundings, and those charges would quickly neutralize the charge of the hole.
The domain of a black hole can be separated into three regions, the first being the rotating black hole and the area near it, the accretion disk (a region of force-free fields), and an acceleration region outside the plasma.
Disk accretion can occur onto supermassive black holes at the centers of galaxies and in binary systems between a black hole (not necessarily supermassive) and a massive star. The accretion disk of a rotating black hole is driven by the hole into the equatorial plane of its rotation. The force on the disk is gravitational.
Black holes are not really black, because they can radiate matter and energy. As they do this, they slowly lose mass, and thus are said to evaporate.
Black holes, it turns out, follow the basic laws of thermodynamics. The gravitational acceleration at the event horizon corresponds to the temperature term in thermodynamic equations, mass corresponds to energy, and the rotational energy of a spinning black hole is similar to the work term for ordinary matter, such as gas. Black holes have a finite temperature; this temperature is inversely proportional to the mass of the hole, hence smaller holes are hotter. The surface area of the event horizon also has significance because it is related to the entropy of the hole.
Entropy, for a black hole, can be said to be the logarithm of the number of ways it could have been made. The logarithm of the number of microscopic arrangements that could give rise to the observed macroscopic state is just the standard definition of entropy. The enormous entropy of a black hole results from the lost information concerning the structural and chemical properties before it collapsed. Only three properties can remain to be observed in the black hole: mass, spin, and charge.
Physicist Stephen Hawking realized that because a black hole has a finite entropy and temperature, it can be in thermal equilibrium with its surroundings, and therefore must be able to radiate. Hawking radiation, as it is known, is allowed by a quantum mechanism called virtual particles. As a consequence of the uncertainty principle, and the equivalence of matter and energy, a particle and its antiparticle can appear spontaneously, exist for a very short time, and then turn back into energy. This is happening all the time, all over the universe. It has been observed in the "Lamb shift" of the spectrum of the hydrogen atom. The spectrum of light is altered slightly because the tiny electric fields of these virtual pairs cause the atom's electron to shake in its orbit.
Now, if a virtual pair appears near a black hole, one particle might become caught up in the hole's gravity and be dragged in, leaving the other without its partner. Unable to annihilate and turn back into energy, the lone particle must become real, and can now escape the black hole. Therefore, mass and energy are lost; they must come from someplace, and the only source is the black hole itself. So the hole loses mass.
If the hole has a small mass, it will have a small radius. This makes it easier for the virtual particles to split up and one to escape from the gravitational pull, since they can only separate by about a wavelength. Therefore, hotter black holes (which are less massive) evaporate much more quickly than larger ones. The evaporation timescale can be derived by using the expression for temperature, which is inversely proportional to mass, the expression for area, which is proportional to mass squared, and the blackbody power law. The result is that the time required for the black hole to totally evaporate is proportional to the original mass cubed. As expected, smaller black holes evaporate more quickly than more massive ones.
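A minimal numerical sketch of that scaling, using the standard textbook expressions for the Hawking temperature and the evaporation time; the one-solar-mass example is a sample value, not a figure from this essay.

import math

# Hawking temperature and evaporation time for a black hole of mass M:
#   T      = hbar * c**3 / (8 * pi * G * M * k_B)       (temperature ~ 1/M)
#   t_evap = 5120 * pi * G**2 * M**3 / (hbar * c**4)    (lifetime ~ M**3)
G = 6.674e-11            # gravitational constant
C = 2.998e8              # speed of light
HBAR = 1.0546e-34        # reduced Planck constant
K_B = 1.381e-23          # Boltzmann constant
SOLAR_MASS = 1.989e30    # kg
SECONDS_PER_YEAR = 3.156e7

def hawking_temperature(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time_years(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / SECONDS_PER_YEAR

m = SOLAR_MASS   # sample: a one-solar-mass black hole
print(f"Hawking temperature: {hawking_temperature(m):.2e} K")
print(f"Evaporation time:    {evaporation_time_years(m):.2e} years")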
The lifetime for a black hole with twice the mass of the sun should be about 10^67 years, but if it were possible for black holes to exist with masses on the order of a mountain, these would be furiously evaporating today. Although only stars around the mass of two suns or greater can form black holes in the present universe, it is conceivable that in the extremely hot and dense very early universe, small lumps of overdense matter collapsed to form tiny primordial black holes. These would have shrunk to an even smaller size today and would be radiating intensely. Evaporating black holes will finally be reduced to a mass where they explode, converting the rest of the matter to energy instantly. Although there is no real evidence for the existence of primordial black holes, there may still be some of them, evaporating at this very moment.
The first scientists to really take an in-depth look at black holes and the collapse of stars were Professor Robert Oppenheimer and his student Hartland Snyder, in the late 1930s. They concluded, on the basis of Einstein's theory of relativity, that if the speed of light is the ultimate speed of any object, then nothing can escape a black hole once caught in its gravitational grip.
The name "black hole" was given because light cannot escape the gravitational pull of the core, making the object impossible for humans to see without technological aids such as radiation detectors. The word "hole" refers to the region into which everything is absorbed and where the central core, known as the singularity, resides. This core is the main part of the black hole where the mass is concentrated, and it appears purely black on all readings, even through the use of radiation detection devices.
Just recently a major discovery was made with the help of the Hubble Space Telescope, which found what many astronomers believe to be a black hole after focusing on a star orbiting apparently empty space. Several pictures were sent back to Earth from the telescope, showing computer-enhanced images of radiation fluctuations and other readings from the area where the black hole is suspected to be.
Several diagrams were made showing how, if you could somehow survive a trip through the center of the black hole, there might be enough gravitational force to possibly warp you to another part of the universe, or even to another universe. The creative ideas that can be hypothesized from this discovery are endless.
Although our universe is filled with many unexplained, glorious phenomena, it is our duty to continue exploring them and to continue learning, but in the process we must not take any of it for granted.
As you have read, black holes are a major topic within our universe, and they arouse so much curiosity that they could possibly hold unlimited uses. Black holes are a phenomenon that astronomers are still very puzzled by. It seems that as we get closer to understanding their existence and functions, we only end up with more and more questions.
Although these questions just lead us into more and more unanswered problems, we seek and find refuge in them, dreaming that maybe one far-off, distant day we will understand all these conceptions and will be able to use the universe to our advantage, going where only our dreams could take us.
Bibliography
1.) Parker, Barry. Colliding Galaxies.
2.) Hawking, Stephen. Black Holes and Baby Universes.
3.) Encyclopaedia Britannica. Volume II, Black Holes. (c) 1996
f:\12000 essays\sciences (985)\Physics\Carbon.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CARBON
Without the element of carbon, life as we know it would not exist.
Carbon provides the framework for all tissues of plants and animals. They
are built of elements grouped around chains or rings made of carbon atoms.
Carbon also provides common fuels--coal, oil, gasoline, and natural gas.
Sugar, starch, and paper are compounds of carbon with hydrogen and
oxygen. Proteins such as hair, meat, and silk contain these and other
elements such as nitrogen, phosphorus, and sulfur.
More than six and a half million compounds of the element carbon,
many times more than those of any other element, are known, and more are
discovered and synthesized each week. Hundreds of carbon compounds are
commercially important but the element itself in the forms of diamond,
graphite, charcoal, and carbon black is also used in a variety of manufactured
products.
Besides the wide occurrence of carbon in compounds, two forms of
the element--diamond and graphite, are deposited in widely scattered
locations around the Earth.
PROPERTIES OF CARBON
Symbol = C
Atomic Number = 6
Atomic Weight = 12.011
Density at 68 Degrees F = 1.88-3.53 grams per cubic centimeter
Boiling Point = 8,721 degrees F
Melting Point = 6,420 degrees F
f:\12000 essays\sciences (985)\Physics\Charles Babbage.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Charles Babbage may have spent his life in vain, trying to make a machine considered by most of his friends to be ridiculous. 150 years ago, Babbage drew hundreds of drawings projecting the fundamentals on which today's computers are founded. But the technology was not there to meet his dreams. He was born on December 26, 1791, in Totnes, Devonshire, England. As a child he was always interested in the mechanics of everything and in the supernatural. He reportedly once tried to prove the existence of the devil by making a circle in his own blood on the floor and reciting the Lord's Prayer backward. In college, he formed a ghost club dedicated to verifying the existence of the supernatural. When at Trinity College, Cambridge, Charles carried out childish pranks and rebelled because of the boredom he felt from knowing more than his instructors. Despite this, however, he was on his way to understanding the advanced theories of mathematics, and he even formed an Analytical Society to present and discuss original papers on mathematics and to interest people in translating the works of several foreign mathematicians into English. His studies also led him to a critical examination of logarithmic tables, and he was constantly reporting errors in them. During this analysis, it occurred to him that all these tables could be calculated by machinery. He was convinced that it was possible to construct a machine that would be able to compute by successive differences and even print out the results. (He conceived of this 50 years before type-setting machines or typewriters were invented.)
In 1814, at the age of 23, Charles married 22-year-old Georgina Whitmore. Georgina would have eight children in thirteen years, of which only three sons would survive to maturity. Babbage took little interest in raising his children; after Georgina died at the age of 35, his mother took over their upbringing. In 1816, Babbage had his first taste of failure when his application for the professorship of mathematics at East India College in Haileybury was rejected for political reasons, as was his application, three years later, for the chair of mathematics at the University of Edinburgh. Fortunately, his elder brother supported his family while Babbage continued his work on calculating machines.
At the age of 30, Babbage was ready to announce to the Royal Astronomical Society that he had embarked on the construction of a table-calculating machine. His paper, "Observations on the Application of Machinery to the Computation of Mathematical Tables" was widely acclaimed and consequently, Babbage was presented with the first gold medal awarded by the Astronomical Society. Babbage was also determined to impress the prestigious Royal Society and wrote a letter to its president, Sir Humphrey Davy, proposing and explaining his ideas behind constructing a calculating machine, or the Difference Engine, as he would call it. A 12-man committee considered Babbage's appeal for funds for his project and in May 1823, the Society agreed that the cause was worthy.
While constructing this machine, implementation problems arose as well as a misunderstanding with the British Government, both of whom regarded this machine as property of the other. This misunderstanding would cause problems for the next twenty years, and would result in delaying Babbage's work. Babbage also apparently miscalculated his task. The Engine would need about 50 times the amount of money he was given. In 1827, Babbage was overwhelmed by a number of personal tragedies: the deaths of his father, wife and two of his children. Consequently, Babbage took ill and his family advised him to travel abroad for a few months. Upon his return, he approached the Duke of Wellington, then prime minister, regarding the possibility of a further grant. In the duke, Babbage found a friend who could really understand the principles and capabilities of the Engine and the two would remain friends for the rest of the duke's life. Babbage was also granted more money. He continued work on the project for many years.
At the age of 71, Babbage agreed to have the completed section of his Difference Engine shown to the public for the first time. His many disappointments led him to say that he had never lived a happy day in his life. Babbage died in 1871, two months shy of his 80th birthday.
f:\12000 essays\sciences (985)\Physics\Chlorophyll.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
CHLOROPHYLL
NAME
Biology
November. 19
A. Chlorophyll belongs to the Plant Kingdom. Chlorophyll is not found in the Animal Kingdom. Chlorophyll is found inside of Chloroplasts, and Chloroplasts are found inside of plant cells.
B. Chlorophyll is a pigment that makes plants green. It is important because it absorbs sunlight, whose energy the plant uses to split water into hydrogen and oxygen.
C. Chlorophyll is found inside the chloroplast, which is located near the cell wall. It is located here because the sun's rays might not penetrate deep into the plant, and the plant needs the sun's rays to generate hydrogen and oxygen.
D. If a plant did not have chlorophyll, then the plant would be unable to get energy from the sun, and it would slowly die. There are no diseases or dysfunctions of chlorophyll; if there were, plants would have a serious problem.
E. I once had a friend named Bill,
and he was green with Chlorophyll,
He Didn't have to eat,
not a beet or any meat,
Instead of going to dine he would feast on sunshine
Bill went to the Land of the Midnight Sun
and there he was done.
f:\12000 essays\sciences (985)\Physics\Chromium.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Chromium is a very hard, brittle, gray metal, which is sometimes
referred to as Siberian red lead. It does not rust easily and becomes shiny and
bright when it is polished. The shiny trim on our automobile bumpers and
door handles is usually electroplated chromium.
Most chromium comes from something called chromite which is
a mixture of chromium , iron, and oxygen. Chromite is a common rather
ordinary black mineral that no one really noticed until more recent times.
Nearly all the world's supply of chromite comes from Zimbabwe, Russia,
Turkey, South Africa, Cuba, and the Philippines. The United States imports
almost all its chromite.
Chromium is added to other metals to make them harder and
more resistant to chemicals. Small quantities mixed with steel make the metal
stainless. Stainless steel is used to make cutlery and kitchen equipment
because it does not rust easily and takes a mirror-like polish. This steel
contains about 13 percent chromium and 0.3 percent carbon.
The hardness of steel can be increased by adding small
quantities of chromium to it. Chromium steel alloys (mixtures containing one
or more metals) are used to make armor plating for tanks and warships. They
are also used for ball bearings and the hard cutting edges of high-speed
machine tools.
Nickel is highly resistant to electric current and is often added to
chromium steels to make them easier to work. For example, stainless steel
sinks can be pressed out from flat sheets of steel that can contain 18 percent
chromium and 8 percent nickel.
When nickel is mixed with chromium, the resulting metal can
stand very high temperatures without corroding. For example, the heating
elements of toasters can be made from an alloy that is 80 percent nickel and
20 percent chromium. This metal operates at a temperature of about 1380
degrees Fahrenheit (750 degrees Celsius).
Chromium was discovered in 1798 by the French chemist
Nicolas Vauquelin. He chose the name chromium from the Greek word
chroma, which means color. Chromium was a good choice of name because many
chromium compounds are brightly colored. Rubies are red and emeralds are
green because they contain chromium compounds.
Some of the brightest colors in the artist's palette contain
chromium. Chrome yellow is made from a substance which contains
chromium, lead, and oxygen. Zinc yellow contains zinc, chromium and
oxygen. Chrome red is another chromium compound. Chrome green is used
in paints and in printing cotton fabrics.
Chromium salts are used in tanning leather. Leather tanned in
this way is very soft and flexible. It is used in the manufacture of soft gloves
and other luxury goods. Other chromium compounds are used to treat metal
and wood. This treatment helps to preserve objects from corrosion and rot.
Chromium is an element with the chemical symbol Cr and an atomic
weight of 51.996. Although it is twice as heavy as aluminum, it is lighter than
many other common metals. It melts at a temperature of 3434 degrees
Fahrenheit (1890 degrees Celsius).
f:\12000 essays\sciences (985)\Physics\Collisions.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Collisions
When two objects collide, their motions are changed as a result of the collision, as is shown when playing pool.
There are several laws governing collisions, the principal one being the law of conservation of linear momentum, which says that the total momentum of an isolated system stays constant. In pool, the isolated system is the table and the balls, and the law then implies that the total momentum of the balls just before they collide equals their total momentum just after the collision.
Therefore, if the masses of the two colliding objects are known, as well as the velocity of one and the velocity of the other before the collision, you can calculate the final velocities after they have collided. To obtain an exact answer, however, we must find out what type of collision takes place: whether it is elastic or not.
The type of collision is characterized by what is called the coefficient of restitution. This quantity is approximately constant for a collision between two given objects, and can be determined experimentally. If the relative velocities of the two objects are the same before and after impact, the coefficient is equal to 1, and the collision is elastic. In practice, however, such perfectly elastic collisions occur only on an atomic scale; most collisions are therefore not elastic, with a coefficient of restitution of less than 1.
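A minimal sketch of that calculation, combining conservation of momentum with the restitution relation for a head-on collision; the masses, speeds, and restitution coefficient below are assumed sample values.

def collide_1d(m1, u1, m2, u2, e):
    """Final velocities of two bodies after a head-on collision.

    m1, m2 : masses; u1, u2 : velocities before impact;
    e : coefficient of restitution (1 = perfectly elastic, 0 = perfectly plastic).
    Combines conservation of momentum with the restitution relation
    v2 - v1 = -e * (u2 - u1).
    """
    total = m1 + m2
    v1 = (m1 * u1 + m2 * u2 + m2 * e * (u2 - u1)) / total
    v2 = (m1 * u1 + m2 * u2 + m1 * e * (u1 - u2)) / total
    return v1, v2

# Sample values: a 0.17 kg cue ball at 2 m/s striking an identical, stationary
# object ball head-on, with an assumed restitution coefficient of 0.95.
print(collide_1d(0.17, 2.0, 0.17, 0.0, 0.95))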
f:\12000 essays\sciences (985)\Physics\Comets.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A comet is generally considered to consist of a small, sharp
nucleus embedded in a nebulous disk called the coma. American
astronomer Fred L. Whipple proposed in 1949 that the nucleus,
containing practically all the mass of the comet, is a "dirty snowball"
conglomerate of ices and dust. Major proofs of the snowball theory
rest on various data. For one, of the observed gases and meteoric
particles that are ejected to provide the coma and tails of comets,
most of the gases are fragmentary molecules, or radicals, of the most
common elements in space: hydrogen, carbon, nitrogen, and oxygen.
The radicals, for example, of CH, NH, and OH may be broken away
from the stable molecules CH4 (methane), NH3 (ammonia), and H2O
(water), which may exist as ices or more complex, very cold
compounds in the nucleus. Another fact in support of the snowball
theory is that the best-observed comets move in orbits that deviate
significantly from Newtonian gravitational motion. This provides
clear evidence that the escaping gases produce a jet action, propelling
the nucleus of a comet slightly away from its otherwise predictable
path. In addition, short-period comets, observed over many
revolutions, tend to fade very slowly with time, as would be expected
of the kind of structure proposed by Whipple. Finally, the existence of
comet groups shows that cometary nuclei are fairly solid units.
The head of a comet, including the hazy coma, may exceed the planet
Jupiter in size. The solid portion of most comets, however, is
equivalent to only a few cubic kilometers. The dust-blackened
nucleus of Halley's comet, for example, is about 15 by 4 km (about 9
by 2.5 mi) in size.
f:\12000 essays\sciences (985)\Physics\Creation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Cosmos
Where is the universe from? Where is it going? How is it put together? How did it get to be this way?
These are Big questions, very easy to ask but almost impossible to answer. We want answers for philosophical reasons having nothing to do with science; no one will get rich from discovering the structure of the universe unless they write a book about it.
The area of science dealing with Big questions is called cosmology. The reason for its study is found in the fact that:
The universe was born at a specific time in the past and has expanded ever since.
The Expansion of the Universe
Edwin Hubble established the existence of other galaxies. He noted that the light from these galaxies was shifted toward the red; that is, its wavelength was longer than that of the light emitted from the corresponding atoms in the lab. Furthermore, he found that the farther away a galaxy was, the more its light was shifted toward the red end of the spectrum. Hubble attributed this shift to the Doppler effect.
Hubble saw this and concluded that all galaxies are rushing away from us and that the universe is expanding as a whole. Modern equipment has verified that this so-called Hubble expansion exists throughout the observable universe.
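A minimal numerical sketch of the relationship Hubble found: for small redshifts the recession velocity is roughly v = c*z, and Hubble's law v = H0*d then turns that velocity into a distance. The value of the Hubble constant and the sample redshift below are assumed for illustration.

# Hubble's law sketch: distance from a measured redshift, valid for small z.
C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def recession_velocity(z):
    """Approximate recession velocity (km/s) for a small redshift z."""
    return C_KM_S * z

def distance_mpc(z):
    """Distance in megaparsecs from Hubble's law v = H0 * d."""
    return recession_velocity(z) / H0

z = 0.01  # sample redshift: wavelengths stretched by 1 percent
print(f"v = {recession_velocity(z):,.0f} km/s, d = {distance_mpc(z):,.1f} Mpc")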
This shows three important things. First, there is no significance to the fact that Earth seems to be the center of the universal expansion; in any galaxy it would look as if you were standing still and all the others were rushing away from you. Second, the expansion of the universe is not like an explosion: galaxies are not moving through the universe, they are carried along as the universe itself expands. Third, the galaxies themselves do not expand; only the space between them does.
Finally, if you ask where the expansion started, the only answer is everywhere. In the words of the fifteenth-century philosopher Nicholas of Cusa, "the universe has its center everywhere and its edge nowhere."
One consequence of this picture is inescapable: the universe was not always there but had a beginning. This has come to be known as the Big Bang theory.
Universal Freezing
When the universe was younger it was smaller, and when matter and energy are compacted the temperature inevitably rises. Thus when the universe was younger it was hotter. We can identify six crucial events, called "freezings," where the fabric of the universe changed in a fundamental way.
The most recent occurred when the universe was about 500,000 years old, about 14,999,500,000 years ago. After 500,000 years permanent atoms started to form. Before 500,000 years matter existed as loose electrons and nuclei in a state called plasma.
Moving back in time the next freezing occurred at about three minutes. This was when nuclei first started to form. Before this only elementary particles existed in the universe.
From about three minutes to ten-millionths of a second the universe was a seething mass of elementary particles- protons, neutrons, electrons and the rest of the particle zoo.
Today there are four distinct forces in the universe: the strong, the electromagnetic, the weak, and the gravitational. They must all have been one unified force before the first ten-millionth of a second. The timetable of these forces, as theorized today, is:
10^-10 second: the weak and electromagnetic forces merge into one, called the electroweak force.
10^-33 second: the strong force joins the electroweak force, leaving only gravity separate.
10^-43 second: known as the Planck time, after one of the founders of quantum mechanics. Before this time the universe and all its forces were completely unified and as aligned as is possible.
f:\12000 essays\sciences (985)\Physics\Cryogenics and the future.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Cryogenics and the Future
Cryogenics is a study that is of great importance to the human race and has been a major project for engineers for the last 100 years. Cryogenics, which is derived from the Greek word kryos meaning "icy cold," is the study of matter at low temperatures. However, low is not even the right word for the temperatures involved in cryogenics, seeing as the highest temperature dealt with in cryogenics is -100 °C (-148 °F) and the lowest temperature used is the unattainable temperature -273.15 °C (-459.67 °F). Also, when speaking of cryogenics, the terms Celsius and Fahrenheit are rarely used. Instead scientists use a different measurement called the kelvin (K). The Kelvin scale for cryogenics goes from 173 K down to a fraction of a kelvin above absolute zero. There are also two main sciences used in cryogenics, and they are superconductivity and superfluidity.
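A minimal sketch of how the three temperature scales line up, using the conversions K = °C + 273.15 and °F = K * 9/5 - 459.67; the liquid-nitrogen and liquid-helium sample points are typical values added for illustration.

# Temperature-scale conversions used throughout cryogenics.
# Kelvin is the absolute scale: 0 K = -273.15 °C = -459.67 °F.
def celsius_to_kelvin(c):
    return c + 273.15

def kelvin_to_fahrenheit(k):
    return k * 9.0 / 5.0 - 459.67

# Sample points: the upper cryogenic limit, liquid nitrogen, and liquid helium.
for celsius in (-100.0, -196.0, -269.0):
    k = celsius_to_kelvin(celsius)
    print(f"{celsius:7.1f} °C = {k:7.2f} K = {kelvin_to_fahrenheit(k):8.2f} °F")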
Cryogenics first came about in 1877, when a Swiss physicist named Raoul Pictet and a French engineer named Louis P. Cailletet liquefied oxygen for the first time. Cailletet created liquid oxygen in his lab using a process known as adiabatic expansion, which is a "thermodynamic process in which the temperature of a gas is expanded without adding or extracting heat from the gas or the surrounding system"(Vance 26). At the same time Pictet used the "Joule-Thompson Effect," a thermodynamic process that states that the "temperature of a fluid is reduced in a process involving expansion below a certain temperature and pressure"(McClintock 4). After Cailletet and Pictet, a third method, known as cascading, was developed by Karol S. Olszewski and Zygmunt von Wroblewski in Poland. At this point in history oxygen could now be liquefied at 90 K, and soon after liquid nitrogen was obtained at 77 K; because of these advancements, scientists all over the world began competing in a race to lower the temperature of matter to absolute zero (0 K) [Vance, 1-10].
Then in 1898, James Dewar made a major advance when he succeeded in liquefying hydrogen at 20 K. The reason this advance was so spectacular was that at 20 K hydrogen is also boiling, and this presented a very difficult handling and storage problem. Dewar solved this problem by inventing a double-walled storage container known as the Dewar flask, which could contain and hold the liquid hydrogen for a few days. However, at this time scientists realized that if they were going to make any more advances they would have to have better holding containers. So, scientists came up with insulation techniques that we still use today. These techniques include expanded foam materials and radiation shielding. [McClintock 43-55]
The last major advance in cryogenics finally came in 1908 when the Dutch physicist Heike Kamerlingh Onnes liquefied helium at 4.2 and then 3.2 K. The advances in cryogenics since then have been extremely small, since it is a fundamental thermodynamic law that you can approach but never actually reach absolute zero. Since 1908 our technology has greatly increased and we can now freeze sodium gas to within 40 millionths of a kelvin above absolute zero. However, in the back of every physicist's head is the desire to break that thermodynamic law and reach a temperature of absolute zero, where every proton, electron, and neutron in an atom is absolutely frozen.
There are also two subjects closely related to cryogenics, called superconductivity and superfluidity. Superconductivity is a low-temperature phenomenon in which a metal loses all electrical resistance below a certain temperature, called the critical temperature (Tc), and transfers to "...a state of zero resistance,..."(Tilley 11). This unusual behavior was also discovered by Heike Kamerlingh Onnes. It was discovered when Onnes and one of his graduate students realized that mercury loses all of its electrical resistance when it reaches a temperature of 4.15 K. However, almost all elements and compounds have Tc's between 1 K and 15 K (or -457.68 °F and -432.67 °F), so they would not be very useful to us on a day-to-day basis [McClintock 208-226].
Then in 1986, J. Georg Bednorz and K. Alex Muller discovered that an oxide of lanthanum, barium, and copper becomes superconductive at 30 K. This discovery shocked the world and stimulated scientists to find even more "high-temperature superconductors." After this discovery, in 1987, scientists at the University of Houston and the University of Alabama discovered YBCO, a compound with a Tc of 95 K. This discovery made superconductivity possible above the boiling point of liquid nitrogen, so the relatively cheap liquid nitrogen could replace the high-priced liquid helium required for cryogenic experiments. To date the highest reported Tc is 125 K, which belongs to a compound made of thallium, barium, calcium, copper, and oxygen. Now, with the availability of high-temperature superconductors, all the sciences, including cryogenics, have made extraordinary advances. Some applications are demonstrated by magnetically levitated trains, energy storage, motors, and zero-loss transmission lines. Also, superconducting electromagnets are used in particle accelerators, fusion energy plants, and magnetic resonance imaging devices (MRIs) in hospitals. Furthermore, high-speed cryogenic computer memories and communication devices are in various stages of research. As you can see, this field has grown immensely since 1986 and will probably keep growing.
The second subject related to cryogenics is superfluidity. Superfluidity is a strange state of matter that is most common in liquid helium when it is below a temperature of 2.17 K. Superfluidity means that the liquid "...discloses no viscosity when traveling through a capillary or narrow slit..."(Landau 195) and also flows "...through the slit disclosing no friction..."(Landau 195). What this means is that when helium reaches this state it can flow, without any friction, through the smallest holes and in between atoms in a compound. If the top is off the beaker, it is also possible for the liquid helium to flow up the side of the beaker and out of it until all the liquid helium is gone. It was then discovered that when any liquid approaches about .2 K it has almost exactly the same properties as superconducting metals, as far as specific heat, magnetic properties, and thermal conductivity are concerned. Even though both superconducting and superfluid materials have similar properties, the phenomenon of superfluidity is much more complex, and it is not completely understood by today's physicists. [McClintock 103-107]
Cryogenics also consists of many smaller sciences, including cryobiology, which is "the study of the effects of low-temperatures on materials of biological origin."(Vance 528) Developments in this field have led to modern methods of preserving blood, semen, tissue, and organs at the low temperatures obtained with liquid nitrogen. Cryobiology has also led to the development of the cryogenic scalpel, which can deaden or destroy tissue with a high degree of accuracy, making it possible to close off cuts as soon as they are made. So in theory you could one day have surgery without having to deal with any blood.
Another field is cryopumping. Cryopumping is the process "of condensing gas or vapor on a low-temperature surface."(Vance 339) This is done by extracting gas from a vacuum vessel by conventional methods and then freezing the remaining gas on low-temperature coils. This process has been useful when trying to simulate the effects that the vacuum of outer space will have on electronic circuitry.
Cryogenics has also been a part of many modern advances including:
The transportation of energy in the form of a liquefied gas.
Processing, handling, and providing food by cryogenic means has become a large business, providing both frozen and freeze-dried food.
Liquid Oxygen powers rockets and propulsion systems for space research.
Liquid Hydrogen is used in high-energy physics experiments.
Using cryogenic drill bits so drilling for oil and other gases is easier.
Chemical synthesis and catalysis.
Better fire fighting fluids.
Gas separation.
Metal Fabrication.
As you can see by now, cryogenics is still a very young science, but in the last ten years it has catapulted to being the backbone of almost every other form of science. However, its full potential will probably not be understood for quite a while. If we can grasp the concepts of cryogenics, we will have a tool that will allow us to do things ranging from making better drill bits to exploring the universe. The future of cryogenics can best be summed up by Krafft A. Ehricke, a rocket developer, who said, "Its central goal is the preservation of civilization."
References
Khalatnikov, I. M., An Introduction to the Theory of Superfluidity (New York: W.A. Benjamin Inc., 1965).
McClintock, Michael, Cryogenics (New York: Reinhold Publishing Corp., 1964)
Tilley, David R. and Tilley, John, Superfluidity and Superconductivity (New York: John Wiley and Sons, 1974)
Vance, Robert W., Cryogenic Technology (London: John Wiley & Sons, Inc., 1963)
f:\12000 essays\sciences (985)\Physics\differences in Radio AM and FM.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
RADIO
In modern society, radio is the most widely used medium of broadcasting and electronic communication: it plays a major role in many areas such as public safety, industrial manufacturing, processing, agriculture, transportation, entertainment, national defense, space travel, overseas communication, news reporting, and weather forecasting. Radio broadcasts use radio waves, which can be both microwaves and longer radio waves. These are transmitted in two ways: amplitude modulation (AM) and frequency modulation (FM). These two kinds of wave have many differences.
Radio waves are among the many types of electromagnetic waves that travel within the electromagnetic spectrum. Radio waves can be defined by their frequency (in hertz, after Heinrich Hertz, who first produced radio waves electronically), which is the number of complete cycles they pass through per second, or by their wavelength, which is the distance (in meters) from the crest of one wave to the crest of the next.
Radio frequencies are measured in units called kilohertz, megahertz, and gigahertz (1 kilohertz = 1000 hertz; 1 megahertz = 10^6 hertz; 1 gigahertz = 10^9 hertz). All radio waves fall within a frequency range of 3 kilohertz, or 3000 cycles per second, to 30 gigahertz. Within this range of frequencies, radio waves are further divided into bands such as very low frequency (VLF, 10-30 kHz), low frequency (LF, 30-300 kHz), medium frequency (MF, 300-3000 kHz), high frequency (HF, 3-30 MHz), and very high frequency (VHF, 30-300 MHz).
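A minimal sketch of the relationship between the two ways of describing a radio wave: wavelength (in meters) is the speed of light divided by frequency (in hertz). The station frequencies used below are sample values for illustration.

# Wavelength from frequency: wavelength (m) = speed of light / frequency (Hz).
SPEED_OF_LIGHT = 299_792_458  # m/s

def wavelength_m(frequency_hz):
    return SPEED_OF_LIGHT / frequency_hz

# Sample frequencies: a 1000 kHz AM station and a 100 MHz FM station.
for label, f in [("AM 1000 kHz", 1_000_000), ("FM 100 MHz", 100_000_000)]:
    print(f"{label}: about {wavelength_m(f):,.1f} m")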
Amplitude modulation is the oldest method of transmitting voice and music through the airwaves. It is accomplished by combining a sound wave from a microphone, tape, record, or CD with a "carrier" radio wave. The result is a wave that transmits voice or programming as its amplitude (intensity) increases and decreases. Amplitude modulation is used by stations broadcasting in the AM band and by most international shortwave stations.
Frequency modulation is another way to convey information, voice, and music on a radio wave: the frequency of the wave is slightly changed, or modulated. The main advantage of FM broadcasting is that it is largely free of static. The drawback to FM is that, since the frequency is varied, each station takes up more room on the band. Frequency modulation is, of course, used on the FM band, and it is also used for "action band" and ham transmissions in the VHF/UHF frequency range.
In amplitude modulation, what is modified is the amplitude of a carrier wave on one specific frequency. The antenna sends out two kinds of AM waves: ground waves and sky waves. Ground waves spread out horizontally from the antenna and travel through the air along the earth's surface. Sky waves spread up into the sky; when they reach the layer of atmosphere called the ionosphere, they may be reflected back to earth. This reflection enables AM radio waves to be received at great distances from the antenna.
Frequency modulation stations generally reach audiences from 15 to 65 miles (24-105 km) away. Because the frequency of the carrier wave is modulated, rather than its amplitude, background noise is reduced. In FM transmission, the frequency of the carrier wave varies according to the strength of the audio signal or program. Unlike AM, where the strength of the carrier wave varies, the strength of the carrier wave in FM remains the same while its frequency varies above or below a central value. FM broadcast waves (88-108 MHz) are shorter than AM broadcast waves (540-1600 kHz) and do not travel as far.
In AM transmission, the amplitude of the carrier waves varies to match changes in the electromagnetic waves coming from the radio studio. In FM transmission, the amplitude of the carrier waves remains constant. However, the frequency of the waves changes to match the electromagnetic waves sent from the studio.
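A minimal sketch of the two schemes just described, assuming NumPy is available; the sample rate, carrier frequency, tone frequency, and deviation below are illustrative values chosen so the example runs quickly, not broadcast-band figures.

import numpy as np

# The same audio tone either varies the carrier's amplitude (AM)
# or its instantaneous frequency (FM).
fs = 100_000                            # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)          # 10 ms of signal
audio = np.sin(2 * np.pi * 1_000 * t)   # 1 kHz programme tone

carrier_freq = 10_000       # carrier, Hz (scaled down for illustration)
mod_index_am = 0.5          # AM modulation depth
freq_deviation = 2_000      # FM peak frequency deviation, Hz

# AM: the carrier's amplitude follows the audio signal.
am = (1 + mod_index_am * audio) * np.cos(2 * np.pi * carrier_freq * t)

# FM: the carrier's frequency follows the audio signal, so the phase
# is the running integral of (carrier + deviation * audio).
phase = 2 * np.pi * np.cumsum(carrier_freq + freq_deviation * audio) / fs
fm = np.cos(phase)

print(am[:5], fm[:5])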
Two types of radio waves are broadcast by an AM transmitter: ground waves, which spread out horizontally and travel along the earth's surface, and sky waves, which travel up into the ionosphere and are reflected back, allowing AM transmissions to travel great distances. AM radio stations with powerful transmitters can reach listeners as far as 1000 miles (1600 km) away.
FM radio waves also travel horizontally and skyward. However, due to the higher frequency of the carrier waves, the waves that go skyward are not reflected; they pass through the atmosphere and into space. Although AM waves can be received at greater distances than FM waves, FM waves do have advantages: they are not affected by static as much as AM waves (static is caused by electricity in the atmosphere), and they also give a truer reproduction of sound than AM waves.
Furthermore, FM has much better sound than AM because AM uses a different frequency range and wavelength than FM. AM stations broadcast on frequencies between 535 and 1605 kilohertz, while the FM band extends from 88 to 108 megahertz, so listeners can compare the two bands on the radio.
f:\12000 essays\sciences (985)\Physics\Does life exist on other planets .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Recently it was claimed that single-cell life forms from Mars had been discovered in a meteorite that crashed to the earth 12 years ago. I have many doubts that this is the case. After all these years there is still no proof that the sightings of flying saucers moving across the sky at tremendous speeds are real in the first place. Many photos and videos are taken, but with the number that turn out to be forgeries, the possibility that any one of them is real is greatly reduced. It is not even worthwhile to screen the videos that get brought in, given how few of them cannot be disproved.
There must, however, be a reason why human beings got the idea that there were aliens from other planets in the first place. Just the fact that the idea exists suggests that somewhere along the years of human existence somebody must really have come into contact with an alien; otherwise the terms 'alien', 'UFO' and 'flying saucer' would never have existed.
I do not, however, believe that living cells of any kind have ever existed in this solar system apart from on earth. I do see the possibility that life may exist in other solar systems, which is an interesting thought.
Just because NASA has found a single-celled living creature in a meteorite does not prove anything in my mind, especially since the meteorite has been on earth for 12 years, and over this time the cell could have attached itself to it.
Nobody knows enough about the subject to start telling the public that this is true; they should never have released it unless it were definitely proven to be fact. NASA has a big name in the world, and they should be more careful next time.
I just hope that whatever NASA had to say to the president in private is not different from what they told the public. We often hear in the media from people who used to work for the US government and who seem to believe that things in the US Air Force to do with aliens and UFOs have been kept from the public for many years. It would be nice for them to start telling the truth for once.
f:\12000 essays\sciences (985)\Physics\Earthquakes Seeing Into The Furture.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Earthquakes have plagued our lives for as long as people have inhabited the earth. These dangerous acts of the earth have been the cause of many deaths in the past century. So what can be done about these violent eruptions that take place nearly without warning? Until now, predicting an earthquake has been almost technologically impossible. With improvements in technology, lives have been saved, and many more will be. All that remains is to research what takes place before, during, and after an earthquake. This has been done for years, to the point that a successful and accurate earthquake prediction has now been made. This paper will discuss a little about earthquakes in general and then about how predictions are made.
Earthquake, "vibrations produced in the earth's crust when rocks in which elastic strain has been building up suddenly rupture, and then rebound."(Associated Press 1993) The vibrations can range from barely noticeable to catastrophically destructive. Six kinds of shock waves are generated in the process. Two are classified as body waves-that is, they travel through the earth's interior-and the other four are surface waves. The waves are further differentiated by the kinds of motions they impart to rock particles. Primary or compressional waves (P waves) send particles oscillating back and forth in the same direction as the waves are traveling, whereas secondary or transverse shear waves (S waves) impart vibrations perpendicular to their direction of travel. P waves always travel at higher velocities than S waves, so whenever an earthquake occurs, P waves are the first to arrive and to be recorded at geophysical research stations worldwide.(Associated Press 1993)
Earthquake waves were observed in this and other ways for centuries, but more scientific theories as to the causes of quakes were not proposed until modern times. One such concept was advanced in 1859 by the Irish engineer Robert Mallet. Perhaps drawing on his knowledge of the strength and behavior of construction materials subjected to strain, Mallet proposed that earthquakes occurred "either by sudden flexure and constraint of the elastic materials forming a portion of the earth's crust or by their giving way and becoming fractured."(Butler 1995)
Later, in the 1870s, the English geologist John Milne devised a forerunner of today's earthquake-recording device, or seismograph. A simple pendulum and needle suspended above a smoked-glass plate, it was the first instrument to allow discrimination of primary and secondary earthquake waves. The modern seismograph was invented in the early 20th century by the Russian seismologist Prince Boris Golitzyn. "His device", using a magnetic pendulum suspended between the poles of an electromagnet, "ushered in the modern era of earthquake research." (Nagorka 1989)
"The ultimate cause of tectonic quakes is stresses set up by movements of the dozen or so major and minor plates that make up the earth's crust."(Monastersky Oct, 95) Most tectonic quakes occur at the boundaries of these plates, in zones where one plate slides past another-as at the San Andreas Fault in California, North America's most quake-prone area-or is subducted (slides beneath the other plate). "Subduction-zone quakes account for nearly half of the world's destructive seismic events and 75 percent of the earth's seismic energy. They are concentrated along the so-called Ring of Fire, a narrow band about 38,600 km (about 24,000 mi) long, that coincides with the margins of the Pacific Ocean. The points at which crustal rupture occurs in such quakes tend to be far below the earth's surface, at depths of up to 645 km (400 mi)." (Monastersky Dec, 95) Alaska's disastrous Good Friday earthquake of 1964 is an example of such an event.
Seismologists have devised two scales of measurement to enable them to describe earthquakes quantitatively. "One is the Richter scale-named after the American seismologist Charles Francis Richter-which measures the energy released at the focus of a quake. It is a logarithmic scale that runs from 1 to 9; a magnitude 7 quake is 10 times more powerful than a magnitude 6 quake, 100 times more powerful than a magnitude 5 quake, 1000 times more powerful than a magnitude 4 quake, and so on."(Associated Press 1992)
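Taking the quoted rule of thumb at face value (a factor of 10 per Richter unit), the small Python sketch below, added here only as an illustration, converts a difference in magnitude into a relative power.

def power_ratio(m1, m2):
    # Rule of thumb quoted above: each Richter unit is a factor of 10
    return 10 ** (m1 - m2)

print(power_ratio(7, 6))  # 10.0   (magnitude 7 vs magnitude 6)
print(power_ratio(7, 4))  # 1000.0 (magnitude 7 vs magnitude 4)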
The other scale, introduced at the turn of the 20th century by the Italian seismologist Giuseppe Mercalli, "measures the intensity of shaking with gradations from I to XII." (Associated Press 1992) Because seismic surface effects diminish with distance from the focus of the quake, the Mercalli rating assigned to the quake depends on the site of the measurement. "Intensity I on this scale is defined as an event felt by very few people, whereas intensity XII is assigned to a catastrophic event that causes total destruction. Events of intensities II to III are roughly equivalent to quakes of magnitude 3 to 4 on the Richter scale, and XI to XII on the Mercalli scale can be correlated with magnitudes 8 to 9 on the Richter scale."( Associated Press 1992)
Attempts at predicting when and where earthquakes will occur have met with some success in recent years. At present, China, Japan, Russia, and the U.S. are the countries most actively supporting such research. "In 1975 the Chinese predicted the magnitude 7.3 quake at Haicheng, evacuating 90,000 residents only two days before the quake destroyed or damaged 90 percent of the city's buildings. One of the clues that led to this prediction was a chain of low-magnitude tremors, called foreshocks, that had begun about five years earlier in the area." (Day 1988) Other potential clues being investigated are tilting or bulging of the land surface and changes in the earth's magnetic field, in the water levels of wells, and even in animal behavior. A new method under study in the U.S. involves measuring the buildup of stress in the crust of the earth. "On the basis of such measurements the U.S. Geological Survey, in April 1985, predicted that an earthquake of magnitude 5.5 to 6 would occur on the San Andreas fault, near Parkfield, California, sometime before 1993."(Day 1988) Many unofficial predictions of earthquakes have also been made. In 1990 a zoologist, Dr. Iben Browning, warned that a major quake would occur along the New Madrid fault before the end of the year. Like most predictions of this type, it proved to be wrong. "Groundwater has also played an important part in earthquake predictions. A peak in radon in the groundwater at Kobe, Japan 9 days before the 7.2 earthquake cause quite a stir. Radon levels peaked 9 days before the quake, then fell below the normal levels 5 days before it hit."(Monastersky July, 95)
In North America, the series of earthquakes that struck southeastern Missouri in 1811-12 were probably the most powerful experienced in the United States in historical time. The most famous U.S. earthquake, however, was the one that shook the San Francisco area in 1906, causing extensive damage and taking about 700 lives.(Nagorka 1989)
The whole idea behind earthquake prediction is to save lives. With the improvement in technology, lives have been saved. New ideas and equipment are starting to prove very helpful in predicting where and when an earthquake will strike. The time and research put into earthquake prediction have already started to pay off. It is only a matter of time before earthquakes will no longer be a threat to us.
Bibliography
Associated Press 1992, "The Big One: Recent Tremors May Be a 'Final Warning'"; SIRS 1993 Earth Science, Article 12, Aug. 30, 1992, pg. J1+.
Associated Press 1993, "Predicting the Effects of Large Earthquakes"; SIRS 1994 Applied Science, Article 17, Sept./Oct. 1993, pg. 7-17.
Butler, Steven 1995, "Killer Quake"; SIRS 1995 Earth Science, Article 47, Jan. 30, 1995, pg. 38-44.
Day, Lucille, 1988, "Predicting The Big One"; SIRS 1989 Earth Science, Article 5, Summer 1988, pg. 34-41.
Monastersky, R. 1995, "Electric Signals May Herald Earthquakes"; Science News, v. 148, Oct. 21 ,1995, pg. 260-1.
Monastersky, R. 1995, "Quiet Hints Preceded Kobe Earthquake"; Science News, v. 148, July 15, 1995, pg. 37.
Monastersky, R. 1995, "Radio Hints Precede a Small U.S. Quake"; Science News, v. 148, Dec. 23&30, 1995, pg. 431.
Nagorka, Jennifer 1989, "Earthquakes: Predicting Where Is Easy--It's When That's Tough"; SIRS 1990 Earth Science, Article 26, Oct. 29, 1989, pg. E1-2.
f:\12000 essays\sciences (985)\Physics\Einstein.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS:
Title page
Table of contents
Background, purpose, scope and method
Introduction to the main text - The Renaissance
Leonardo da Vinci's life
Leonardo da Vinci's inventions
Leonardo da Vinci's music and works of art
Leonardo da Vinci's "Mona Lisa"
Subjective summary
List of sources
BACKGROUND:
I have always been interested in history, and especially in learned men such as Leonardo da Vinci.
PURPOSE:
To investigate whether Leonardo da Vinci was typical of his time.
SCOPE:
I have chosen to begin with a short introduction on what a Renaissance man was. I will then describe Leonardo's upbringing and, finally, his inventions and his works of art.
METHOD:
My methods have been to borrow books from the library, to consult encyclopaedias and to search through periodicals.
INTRODUCTION TO THE MAIN TEXT - THE RENAISSANCE
The Renaissance began in the middle of the fourteenth century and ended around the beginning of the seventeenth. Rinascità (Italian) and renaissance (French) are the word in those languages. The word renaissance means rebirth and refers to the Italians' revival of the cultural heritage of antiquity after the long interruption of the Middle Ages.
The Renaissance was in effect a harvest time for thoughts and ideas that had grown up during the Middle Ages and that would transform Europe's cultural landscape between the fourteenth and seventeenth centuries. What was created was a new view of humanity, marked by natural science, the geographical discoveries, the interest in antiquity and languages, and the technical inventions.
What was typical of a true Renaissance man was that he (it was mostly men) should know a little of everything. He should know who the foremost artists, musicians, adventurers and so on were. Then there were those who were experts in every field, and Leonardo was of course the greatest of them; he was famous already in his own lifetime.
Another mark of the age was that great ships were fitted out to sail off and find new lands rich in gold and silver. Many of the ships never returned.
LEONARDO DA VINCI'S LIFE:
Leonardo was born on 15 April 1452 in the little town of Vinci in Italy. He worked as a painter, sculptor, fortification engineer, hydraulic engineer, mapmaker and technical designer, but at bottom he experimented only for his own purposes. In his own time he was known as the great source of inspiration and as the creator of two frescoes (wall paintings), "The Last Supper" and "The Battle of Anghiari", both of which are destroyed today.
The only person who stood close to Leonardo was his friend and pupil Melzi, who preserved Leonardo's drawings, studies, plans, inventions and thoughts for posterity.
Leonardo da Vinci (Leonardo from Vinci) took his surname from the town of his birth. Vinci means "pastures", and the town was named after the landscape with its vineyards and fields.
Leonardo's father, Ser Piero, came from what was at the time a well-known Florentine family. The family can be traced back to the thirteenth century, and descendants of Leonardo's brothers were still living well into the twentieth century. It was a tough, strong line. Its members were usually very big and strong, and so was Leonardo: it is said that he could bend a horseshoe with one hand without effort.
His father, Ser Piero, owned a small farm in Vinci, and the whole family was well off. Leonardo's mother was a peasant woman; only her first name, Catarina, is known. Catarina handed the boy over to his father's care and married a simple man from Vinci. Leonardo was born out of wedlock, but at that time it was no disgrace to be of illegitimate birth. Ser Piero married four times; only the third and fourth marriages produced children, eleven in all. Leonardo's uncle Francesco took care of the boy, for whom the extremely busy father hardly had time. It is known with considerable certainty, though, that Leonardo's childhood was a very happy one. He lived close to nature, where there was much to observe among animals, plants and strangely shaped stones.
His artistic gifts showed themselves early. He drew, tinkered and modelled. When he was fourteen his father apprenticed him to Verrocchio (a famous artist in Italy), after first showing the master some samples of Leonardo's skill and asking whether he thought the boy had any prospects as a painter. "Most certainly," the master is said to have replied. So Leonardo became an apprentice in Verrocchio's workshop. He helped with the making of gravestones and produced silver bowls, mosaic work, bronze statues and altar paintings. It was here that Leonardo laid the foundation of his technical skill. It took him six years to earn his master's certificate.
In Verrocchio's workshop there was also an interest in mathematics and in the newly discovered science of perspective as a means of rendering the forms of reality. Leonardo was influenced by the new theories and devoted himself to mathematics with passionate interest.
There is no reliable portrait of him as a child, but probably with some justification people have wanted to see his features in the head of the archangel Michael in Botticini's painting "Tobias and the Angel", and perhaps also in Verrocchio's "Young David".
Leonardo also devoted much time to music. He both sang and played the lute, and he was famous for his talents in company. Music was an art favoured by the court at that time, and the best teacher of music theory was Franchino Gafurio, whose work on the practice of music is among the earliest books printed in Milan. In one edition of Gafurio's work there is a drawing of an organ player that has been connected with Leonardo. Leonardo was very musical, but among his many notes there is no written music. His interest in music led him to sketch new, strangely shaped instruments with greater volume.
In his writings and drawings Leonardo repeatedly depicted visions of battles, human savagery, natural disasters and the end of the world. He drew many studies of the heads of young boys and of older men. "He does not produce many paintings," says one of his contemporaries, "for he is never satisfied with anything, however beautiful it may be." That is why only a few works by him survive. He acquired knowledge in every field, but he did so on his own.
Leonardo sometimes wandered around Rinardalen drawing landscapes. One of his greatest interests was, as I mentioned earlier, mathematical study. He kept company with learned men, among them a physician and magician named Paolo dal Pozzo Toscanelli, one of the first to believe in the possibility of reaching India by sea (he actually sent a map to Columbus). It was probably Toscanelli who awakened Leonardo's interest in geography, and in time Leonardo became, in this field too, a very skilful cartographer (mapmaker). He also built a water clock and other timepieces.
Leonardo could not bear to see animals in captivity. He would buy caged birds, open the cages and let them fly free. In those days many artists carried a sword, but Leonardo bore no weapon. Despite his powerful build he was quiet and almost withdrawn; there was something shy and secretive about his manner. He was also very afraid that someone would steal his ideas and inventions.
LEONARDO'S INVENTIONS:
His drawings of war machines, with their cleverly devised details, are often only symbolic expressions of Leonardo's ideas and feelings.
He made a drawing of an artillery piece in which two mortars (a kind of cannon) fired two leather containers filled with a mass of balls. The containers were to split apart as soon as they had been fired, and a cloud of projectiles would rain down over the enemy. Each ball was to carry an explosive charge that would burst in a cloud of small stars. This was not a project he actually carried out; it played out only in his imagination. Fitting the projectiles with timed fuses that would make them explode at the right moment was impossible with the technology of the day; such fuses were not invented until the nineteenth century.
Weapons technology in Leonardo's time was by no means undeveloped; the cannon foundries of Milan were very famous. Leonardo drew an enormous crane with a cannon barrel hanging in it. As the picture shows, the barrel is of a size that was impossible to produce at the time, and the workers busy around the giant cannon make powerful studies of movement.
Leonardo also designed a file-cutting machine that could strike the grooves of a file evenly into a smooth piece of metal, which could then be hardened with the methods known at the time. It worked so that when a weight had fallen, a hammer was lifted to separate the projecting edge from the cog; the weight was then raised again by a crank, and the stamping of the grooves on the surface of the file continued.
One of Leonardo's most famous drawings is that of an enormous ballista (catapult) with several advanced features. The drawing is so skilfully done that it has become a classic among graphic works on engineering. Leonardo renders the great bow in laminated sections to obtain the greatest possible flexibility. The bowstring is drawn back with the screw and gearwheel at the lower right of the picture. There are two firing levers (at the lower left): the upper one has a spring mechanism released by a blow with a mallet, and the lower one is released by a lever.
LEONARDO'S MUSIC AND WORKS OF ART:
The only certain work from Leonardo's early years is the painting of "The Adoration of the Magi" commissioned by the monks of San Donato a Scopeto. This painting shows that he was several years ahead of his time; it shows results that painting as a whole would not achieve until the turn of the century. A large number of studies show how carefully he prepared for the work. Leonardo shows here what he had learned. The painting was then forgotten, and thanks to that it has been preserved in unaltered condition.
Another work from his early years was "St Jerome with the Lion", which was never completed and which was rediscovered, in damaged condition, at the beginning of the nineteenth century.
What distinguishes Leonardo's drawings and studies from those of other engineers of his time is their modern and artistic character: he put a great deal of art into his drawings.
Leonardo also worked on paintings of the Madonna. There is a whole series of paintings of the Virgin Mary with the child that Leonardo made. One of them is the "Madonna Litta". She is harshly painted and the dress is unlovely, but the graceful inclination of the head leads the mind straight to Leonardo. One of his drawings shows a head that comes very close to the "Madonna Litta".
Leonardo eventually gained pupils and assistants in Milan, and a school of painting began to grow up around him. During his first Milan period he entered into partnership with the de Predis family. The four de Predis brothers were very skilled craftsmen: one was a coin engraver, one a woodcarver, one a miniature painter and one a portrait painter. Together with the portrait painter Ambrogio de Predis, Leonardo opened negotiations for an altarpiece in the church of San Francesco in Milan. For the central section of the altarpiece Leonardo painted "The Virgin of the Rocks". It exists in two versions, one in the Louvre in Paris and the other in London. The London version is probably a copy made by one of Leonardo's pupils under his supervision. In the Louvre version no haloes hover above the figures' heads; their locks of hair gleam, and their faces shine against the half-darkness of the grotto.
With his technical gifts and his talents as an organiser of festivities he was much appreciated at Moro's court. Leonardo wrote down many jokes, riddles and curiosities with which he could amuse a company. He retold anecdotes such as the one about the painter who is asked why he made his children so ugly but the figures in his paintings so beautiful, and who replied: "My pictures I make by day, but my children by night."
When Duke Gian Galeazzo was to be married, Leonardo designed the decorations. The great hall of the castle was transformed with green branches into a forest interior, and at midnight a pageant, "Paradise", written by the court poet Bellincioni, was performed.
LEONARDO'S "MONA LISA":
Mona Lisa is one of Leonardo's few completed works of art; it is in fact the only surviving painting that we know with complete certainty to be by his hand. Over the years the painting has been through many episodes; among other things it has been cut down by about ten centimetres on each side.
Mona Lisa was once stolen from the Louvre in Paris by a small, inconspicuous Italian in working clothes named Vinzenzo Peruggia, whose job was framing pictures and fitting them with glass. In August 1911 he walked brazenly into the Louvre, lifted the painting down and walked out again. For two years he kept it hidden in his attic room in Paris while the world press raged. In 1913 he took the painting, hidden in a box of clothes and tools, to Florence, where he was arrested when he tried to sell it to an art dealer. At his trial he explained that his only intention had been to bring the painting back to his fatherland. He got off with seven months in prison, and the painting was returned to the French government in a solemn ceremony.
Mona Lisa has become the most written-about, sung-about and commented-upon portrait in the history of art. She has given rise to short stories, novels, songs of praise and operas. Most discussed of all is her smile. Some have found it cruel; it has been read as "the merciless smile of a woman who has subjugated man". According to others it is endearing. According to Walter Pater it expresses "the modern soul with all its maladies".
There are also remarks that Leonardo himself wrote down in his "Treatise on Painting". He writes that by means of lute playing and reading aloud he called forth this expression on his model's face. Not much is known about the model beyond the fact that she was a very beautiful, poor young girl who married a rich man twenty years her senior from a well-regarded family. The couple had a daughter who died young. The model was about 25 years old when Leonardo began the painting, and it took him more than four years to finish it.
SUBJECTIVE SUMMARY
What I have come to realise during the time I have worked on this project is that all the negative things that happened during the Middle Ages suddenly turned around and became something positive. Everyone had had a very hard time during the Middle Ages; people had had so many prohibitions laid on them that they had never had the chance to do what they really wanted. In the end it became too much and the people "revolted" (if one can put it that way). They began to sail out into the world to discover new continents, they made technical advances, they painted pictures, they learned languages; the list could be made almost endlessly long.
And the one who distinguished himself most was of course Leonardo da Vinci. He became an expert in every single field he entered. Just look at his paintings and inventions, to mention only a couple of things; they are worth millions today.
This project is something I have truly enjoyed working on.
LIST OF SOURCES:
"Leonardo Uppfinnaren" (Leonardo the Inventor), written by Ludwig H.
f:\12000 essays\sciences (985)\Physics\Einsteins theory of relativity basic.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Table of contents:
Table of contents
Introduction
1. Purpose and approach
2. Limitations
3. Summary
Chap. 1, Relativistic effects, Light.
1.1 The beginning
1.2 The speed of light
1.3 The speed of light is constant
1.4 The theory
1.5 Consequence
Chap. 2, Relativistic effects, Time.
2.1 Time dilation
2.2 Proof
2.3 Interpretation
2.4 Experimental evidence for time dilation
Chap. 3, Relativistic effects, Paradoxes.
3.1 Apparent paradoxes
3.2 The twin paradox
3.3 The muons
3.4 The train experiment
Chap. 4, Relativistic effects, Mass & Energy.
4.1 The velocity dependence of mass
4.2 Kinetic energy
4.3 Mass and energy
Register.
1. Glossary
2. Formulas and constants
3. List of sources
4. Suggested reading / authors
Appendices
1. Table and diagram of the increase in kinetic energy at speeds close to the speed of light.
Introduction
1. Purpose and approach
The purpose of this report is to explain, in as simple and accessible a way as possible, and to awaken interest in, the most basic ideas and theories of the Special Theory of Relativity, within the limitations stated below. The material in the report is secondary data taken and adapted from the literature listed.
2. Limitations
The report is limited to the fundamental lines of thought in the special theory of relativity for which practical application can be found in daily life and in understanding the surrounding world. The topics covered are the constancy of the speed of light, time dilation, and the effects on mass and energy.
3. Summary
In 1905 Albert Einstein developed his special theory of relativity. It concerns objects moving at high speeds, speeds approaching that of light, and the phenomena that follow from such speeds.
The speed of light has, experimentally and theoretically, been found to be constant. This leads to the conclusion that notions such as time and space are relative, i.e. that impressions such as time and length are subjective and can vary from one observer to another depending on which frames of reference are used. It also leads to the conclusion that mass, too, is relative, and that mass and energy are fundamentally the same thing. This has far-reaching consequences for, e.g., space research, since mass increases with speed and approaches infinity near the speed of light. Travel at the speed of light is therefore, with today's knowledge, not possible: time does not pass at the speed of light, and the mass becomes infinite, so an acceleration to that speed cannot be achieved.
Chap. 1, Relativistic effects: Light.
1.1 The beginning
"The first stirrings of the theory came when, as a boy of fourteen, he wondered what the world would look like if he could ride on a beam of light." (Albert Einstein)
1.2 The speed of light for different observers
· A bullet fired forward from a moving car has a higher speed relative to the ground than if the car were standing still. In this case the speed of the bullet will equal the bullet's muzzle velocity plus the speed of the car. If the car travels at 180 km/h (50 m/s) and the bullet has a muzzle velocity of 200 m/s, the bullet's speed relative to the ground will be 50 + 200 = 250 m/s (900 km/h).
· Suppose that after a while dusk begins to fall and we switch on the headlights; can we measure the speed of the light in the same way? In this case we will not be able to measure any difference, because the difference between the speeds is so great, but if we consider another example the matter becomes clear.
· The earth orbits the sun at a speed of 30 km/s; how does that speed affect the value obtained when we measure the speed of the incoming light from a distant star? An observer measures the speed of the incoming light when the earth is at position A; half a year later he again measures the speed of the light from the same star. If the reasoning from the example with the car could be applied here, we would obtain a higher speed for the light at position B than at position A, but that is not the case.
1.3 The speed of light is constant!
"According to Einstein's theory of relativity, the speed of light is always the same relative to the observer - regardless of how they move relative to the light source." This has been verified in sophisticated measurements on very fast particles, where it has also been found that it does not matter where in the frame of reference one is: the speed of light in vacuum is always constant, 299 792 458 m/s. This may at first be hard to accept, but the difficulty is only that we are used to the conditions that prevail on earth and not to the extreme conditions that prevail near the speed of light.
1.4 The theory
"Einstein rejected Newton's idea of an 'absolute space'. One cannot speak of absolute motion, since everything in the universe is moving - the planets, the sun, the stars, the galaxies. When a scientist measures the speed of something, it must be done relative to something else."
Albert Einstein built up his theory starting from the principle of the constancy of the speed of light. In addition, he assumed that every event can be predicted and described by the same physical laws by observers in all frames of reference that move at constant velocity relative to one another. He "also showed that there is no absolute or 'true' time. Like speed, time must be related to something. To stop thinking in terms of absolute time was extremely difficult, Einstein later said."
1.5 Consequence
The consequence of the speed of light being constant in vacuum is that times, lengths and masses can be of different size for observers who move relative to one another. Another consequence is that two events can occur simultaneously for one observer but at different times for another.
Chap. 2, Relativistic effects: Time.
2.1 Time dilation
"One of the consequences of the law of the constancy of the speed of light is that time must pass at different rates for observers who move relative to one another.
· Suppose that a spaceship passes an observer at high speed and that at that moment a flash of light is sent from the spaceship's 'ceiling' towards its 'floor', where a mirror reflects it back towards the ceiling."
[Figure: fig. A shows the flash's path as seen by the outside observer, two slanted legs of length ct/2 while the spaceship moves the distance vt; fig. B shows the path as seen on board, a straight line of length ct0/2 between ceiling and floor.]
· If we apply the law that the speed of light is the same for both observers, the observer outside the spaceship, fig. A, must measure a longer time for the flash's journey than the observer inside the spaceship, fig. B. This is a logical consequence of Pythagoras' theorem and the constancy of the speed of light, since the path the flash travels as seen from outside (fig. A) is longer than the path measured by an observer inside the ship (fig. B).
2.2 Proof
Pythagoras' theorem: c² = a² + b²
Applied to the triangle formed by the flash's path: (ct/2)² = (ct0/2)² + (vt/2)², which gives t = t0 / √(1 - v²/c²)
2.3 Interpretation
An observer inside the spaceship measures the time t0 for the flash's journey down and back between the spaceship's floor and ceiling. An observer who sees the spaceship sweep past with speed v measures a longer time t for the same process.
The relation holds for all processes that take place on board the spaceship. The time t0 measured on board the spaceship is called the proper time and is the shortest time that can be measured for a process; the lengthening of the time measured by an outside observer is called time dilation and is caused by a relative velocity between the observer and the system in which the process takes place.
2.4 Experimental evidence for time dilation
At various heights in the atmosphere there are so-called muons, very small particles that move at 99.5 % of the speed of light. They are, however, not stable but have a half-life of 1.5 µs, meaning that after 1.5 µs only half of the original number remain. Suppose we measure the number of muons passing through a certain area at a height of 2000 m; how many of them reach sea level before they have decayed?
The particles' travel time is calculated as t = s/v = 2000 m / (3.0*10^8 m/s) = 6.7 µs.
During that time the particles are halved more than 4 times, so less than 1/16 of the muons should be found at sea level. This does not agree with observation; considerably more than half are found at sea level. "The error is that we are comparing times valid in different frames of reference. The time 6.7 µs is the muons' travel time in a frame of reference fixed to the earth, while 1.5 µs is the proper time of the particles' half-life," valid in the frame in which the particles are at rest. If we use the formula for time dilation we can see the difference in time between the two frames.
t = t0 / √(1 - v²/c²) = 1.5 µs / √(1 - 0.995²) = 15 µs
This is more than twice the travel time, and with that value for the half-life the observed number of muons agrees much better.
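As a small check of the calculation above, the following Python sketch (not part of the original report; the function name dilated_time is just an illustrative choice) evaluates the time-dilation formula for the muon example.

import math

C = 299_792_458.0   # speed of light in m/s

def dilated_time(t0, v):
    # Time measured by an outside observer for a process with proper time t0
    # in a system moving at speed v: t = t0 / sqrt(1 - v^2/c^2)
    return t0 / math.sqrt(1 - (v / C) ** 2)

t0 = 1.5e-6         # proper half-life of the muon, 1.5 microseconds
v = 0.995 * C
print(dilated_time(t0, v))   # about 1.5e-5 s, i.e. roughly 15 microseconds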
Chap. 3, Relativistic effects: Paradoxes.
3.1 Apparent paradoxes
There are many questions one can ask in connection with time dilation. If two spaceships pass each other and the crews compare the two ships' clocks, which ship measures t and which measures t0? If the muons pass through the atmosphere in only one tenth of the time measured on earth, must the atmosphere then not be only 1/10 as thick for them as it is for us on earth? Can two people really watch the same events independently of each other but with genuinely different perceptions of time?
3.2 The twin paradox
Imagine that one of two twins sets out, while still young, on a space voyage in a spaceship travelling at a speed close to that of light. Far out in space he turns the ship around and heads home again. The whole journey has, by the traveller's own reckoning, taken 5 years, but on earth 25 years have passed. When the twins meet again they differ by 20 years: one is a mature man while the other is still a youth. But, one might object, the two brothers have travelled at exactly the same speeds relative to each other, so why is the age difference not the other way round?
The situation is in fact not the same for the two brothers. The brother in the spaceship has, at take-off, landing and the turn-around in space, been subjected to forces that his earthbound brother has not, so an asymmetry has arisen between the brothers, and the objection thereby falls away.
3.3 The muons
If the muons pass through the earth's atmosphere in 1/10 of the time we measure on earth, is the atmosphere then not 1/10 of the thickness we measure on earth?
This is no paradox, for the answer to the question is yes. What is hard to accept is the fact that the same distance can have different lengths depending on where in the frame of reference one is.
3.4 The Train experiment
"In one of his famous 'thought experiments' Einstein considered what would be seen if lightning struck twice along the track of a moving train, one struck in front of the train the other an equal distance behind it. Suppose an observer, who is standing on the bank next to the track, sees these two lightnings strokes happen simultaneously; will a man in the train agree? Einstein's answer is no. The light from both flashes travels towards the man in the train at the same speed. It will take longer for the flash from behind to reach him, because the train is continuing to move forward while the light is travelling towards him. The opposite will be true for the flash seen ahead. So for the man on the train the two flashes are not simultaneous."
Chap. 4, Relativistic effects: Mass & Energy.
4.1 The velocity dependence of mass
One consequence of time dilation is that the mass of an object increases as the speed of the object increases. This can be derived from the constancy of the speed of light and has far-reaching practical consequences for future space travel. The mass formula shows clear similarities with the formula for time dilation, from which it can in fact be derived.
m = m0 / √(1 - v²/c²)
From the mass formula we see that m grows towards infinity as v approaches c. This means that an infinite amount of energy would be needed to accelerate an object to the speed of light. "That means that nothing can travel as fast as, or faster than, light." The speed of light appears to be an upper limit for the speeds an object can reach.
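To show how quickly the mass formula diverges, here is a small Python sketch (added for illustration; the 1 kg rest mass is an arbitrary example) that evaluates m = m0 / √(1 - v²/c²) for a few speeds.

import math

C = 299_792_458.0   # speed of light in m/s

def relativistic_mass(m0, v):
    # m = m0 / sqrt(1 - v^2/c^2)
    return m0 / math.sqrt(1 - (v / C) ** 2)

for fraction in (0.1, 0.5, 0.9, 0.99, 0.999):
    m = relativistic_mass(1.0, fraction * C)
    print(f"v = {fraction} c -> m = {m:.3f} kg")
# prints roughly 1.005, 1.155, 2.294, 7.089 and 22.366 kg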
4.2 Kinetic energy
Einstein's formula for kinetic energy looks somewhat different from the classical formula.
Classical formula: Wk = 1/2*m*v²
Einstein's formula: Wk = mc² - m0c²
This was confirmed in experiments in the USA in 1964, in which electrons were accelerated through a high voltage and their kinetic energy was then calculated from the formula Wk = eU. Einstein's formula does not, however, need to be used at ordinary speeds. (Appendix 1)
U (MV)    Wk (fJ)    v (Mm/s)
0.50        80         260
1.00       160         273
1.50       240         288
4.50       720         296
4.3 Mass and energy
When the kinetic energy Wk of an object is increased, its mass m also increases according to the relation Wk = mc² - m0c². The mass of an object is evidently a measure of its energy. "The most important practical consequence of the theory of relativity is without doubt the equivalence of mass and energy. The sum of mass and energy remains constant in a closed system." The mass m corresponds to an energy W, which is the product of the mass and the factor c².
W = mc²
Einstein called the expression m0c² the object's rest energy and mc² its total energy. The difference between them, Wk, is called the object's kinetic energy.
If mass and energy are fundamentally the same thing, every energy W should correspond to a certain mass m.
m = W/c²
If, for example, an object is heated, its mass should increase. This also agrees with known facts. A stone lifted to a greater height should likewise increase in mass as its potential energy increases; this has not been demonstrated, but it is a well-known fact among students that the weight of books increases with the number of flights of stairs they have to be carried up.
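The heating example above can be put in numbers with m = W/c². The Python sketch below is added only as an illustration; the specific heat value 4186 J/(kg*K) and the 100-degree temperature rise are assumptions, not figures from the report.

C = 299_792_458.0                    # speed of light in m/s

def mass_equivalent(energy_joule):
    # m = W / c^2
    return energy_joule / C ** 2

# Warming 1 kg of water by 100 K adds about 4186 J/(kg*K) * 100 K = 4.19e5 J
heat_added = 4186.0 * 100.0 * 1.0    # joules (assumed example)
print(mass_equivalent(heat_added))   # about 4.7e-12 kg - far too small to weigh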
Register
1. Glossary:
Term                Meaning                                              Reference
Absolute            Something fixed, unchanging                          1.4
Difference          Disparity                                            1.2
Effect              Result, consequence                                  Intro 2
Equivalence         Of equal value, a fully adequate substitute          4.3
Energy              Capacity for work, power                             Intro 2-3, App. 1, 4.1-3
Formula             Expression used to explain or determine something    2.4, 4.2, App. 1
Travel time         The time that elapses from one moment to another     2.4
Half-life           The fixed time in which a quantity is halved         2.4
Consequence         Result, logical outcome                              Intro 3, 1.5, 2.1, 4.1, 4.3
Constant            Unchanging                                           Intro 3, 1.3-5, 4.3
Infinite            The opposite of finite                               Intro 3, 4.1
Paradox             An apparently self-contradictory statement/phenomenon  Chap. 3
Frame of reference  The setting used as a reference                      Intro 3, 1.3-4, 2.4, 3.3
Relative            Changeable, in comparison with something             The whole report
Sense impression    Perceptions gathered through the senses              Intro 3
Subjective          A personal view                                      Intro 3
Time dilation       The stretching of time                               Intro 2, 2.1, 2.3-4, 3.1, 4.1
2. Formulas and constants:
Name                    Formula / symbol                  Unit                                    Reference
Work                    A = F*s                           Joule
Coulomb's law           F = k*(Q1*Q2)/r²                  F, the force between two point charges
Einstein's formula      Wk = mc² - m0c²                   Joule                                   4.2
Electric voltage        U                                 1 Volt = 1 Joule/Coulomb
Electric current        I = Q/t                           1 Ampere = 1 Coulomb/second
Elementary charge       e                                 1.6022*10^-19 Coulomb (Ampere*second)
Energy                  W = mc²                           Joule                                   4.3
Speed                   v = s/t                           metres/second                           Chap. 1
Joule                   J                                 1 Joule = 1 Newton metre (Nm)
Classical formula       Wk = 1/2*m*v²                     Joule                                   4.2
Coulomb's constant      k                                 8.988*10^9 Nm²/C²
Force                   F                                 N (Newton)
Charge                  Q                                 C (Coulomb)
Charge separation       r                                 metres
Speed of light          c = 299 792 458 m/s               metres/second                           Chap. 1
Mass                    m                                 kg (kilogram)
Mass formula            m = m0/√(1 - v²/c²)               kg                                      4.1
Pythagoras' theorem     c² = a² + b²                                                              2.1-2
Kinetic energy          Wk                                Joule                                   4.2-3, App. 1
Distance                s                                 metres
Time                    t                                 seconds
Time dilation           t = t0/√(1 - v²/c²)               seconds                                 Chap. 2
3. List of sources
· Almqvist & Wiksell/ Gebers Förlag AB, Focus-Materien, Stockholm, MCMLXV.
· Alphonce, Björkman, Gunnvald, Lindahl, Fysik för gymnasieskolan 3: Kapitel 26, Biblioteksförlaget, 1993.
· Hildingsson Kaj, Vetenskapens Profiler, Natur och Kultur, Stockholm, 1989.
· Lindberg Yngve, Gymnasieskolans Tabeller och formler, Liber Läromedel, 1986.
· They made our world, Broadside Books ltd., 1991.
4. Suggested reading / authors:
· Asimov Isac, Svarta hål och kosmiska ägg, Prisma Magnum,1983.
· Boschke Friedrich L., Det outforskade, Bernces.
· Ferris Timothy, Mot universums gränser, Forum, 1978.
· Hawking Stephen W. A Brief History Of Time; from the big bang to black holes, Guild Publishing London, 1990.
· Moore Patrick, Planeterna och universum, Generalstabens Litografiska Anstalts Förlag, 1978.
· Sagan Carl, Kosmos, Askild & Kärnekull, 1981.
· Semitiov Eugen
Appendices
1. Table and diagram of the increase in kinetic energy at speeds close to the speed of light.
Energy 1 is the kinetic energy calculated with the formula Wk = 1/2*m*v², which does not take the speed of light into account. Energy 2 is the kinetic energy calculated with the formula Wk = mc² - m0c², which does take the speed of light into account.
Energy 1   Mass   Speed   Energy 2   Mass0   Mass*   Speed
Mass* is calculated according to the mass formula.
Mass is given in kg.
Energy is given in joules.
Speed is given in m/s.
f:\12000 essays\sciences (985)\Physics\Fibre Optics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Assignment
Many modern medical materials and equipment work on a principle which is beyond the capacity of human transducers.
Comment and discuss the working principles of an endoscope, uteroscope or a rectoscope showing the illuminating path, the image path, transmission path and the liquid transfer or operating instrument ducts, showing the position of suitable valves.
This will therefore explain how light travels through an optical fibre and show how such fibres are used in medicinal equipment either to transmit light or to bring back images from within a patient.
Contents
Fibre Optics
Fibre-Optic Bundles
Coherent and Incoherent Bundles
Transmission efficiency and resolution
Types of Fibres: Single mode or Multimode ?
Fibre Properties
Fibre-Optic Endoscopy
Introduction
The Fibre-Optic Endoscope
Some Applications for Fibre-Optic Endoscopy
References
Fibre Optics
A relatively new technology with vast potential importance, fibre optics, is the
channelled transmission of light through hair-thin glass fibres.
The clear advantages of fibre optics are too often obscured by concerns that
may have been valid during the pioneering days of fibre, but that have since been
answered by technical advances.
Fibre is fragile
An optical fibre has greater tensile strength than copper or steel fibres of the same
diameter. It is flexible, bends easily, and resists most corrosive elements that attack
copper cable. Optical cables can withstand pulling forces of more than 150 pounds.
Fibre is hard to work with
This myth derives from the early days of fibre optic connectors. Early connectors
were difficult to apply; they came with many small parts that could tax even the
nimble fingered. They needed epoxy, curing, cleaving and polishing. On top of that,
the technologies of epoxy, curing, cleaving and polishing were still evolving.
Today, connectors have fewer parts, the procedures for termination are well
understood, and the craftsperson is aided by polishing machines and curing ovens to
make the job faster and easier.
Even better, epoxyless connectors eliminate the need for the messy and time-
consuming application of epoxy. Polishing is an increasingly simple, straightforward
process. Pre-terminated cable assemblies also speed installation and reduce a once
(but no longer) labour-intensive process.
Fibre Optic Bundles
If light enters the end of a solid glass rod so that the light transmitted into the
rod strikes the side of the rod at an angle O, exceeding the critical angle, then total
internal reflection occurs. The light continues to be internally reflected
back and forth in its passage along the rod, and it emerges from the other end
with very little loss of intensity.
This is the principle in fibre optics of which long glass fibres of very small
cross-sectional area transmit light from end to end, even when bent, without much
loss of light through their side walls. Such fibres can then be combined into 'bundles'
of dozens to thousands of fibres for the efficient conveyance of light from one (often
inaccessible) point to another.
If the glass fibre comes into contact with a substance of equal or higher
refractive index, such as an adjacent glass fibre, dirt or grease, then total internal
reflection does not occur and light is lost rapidly by transmission through the area of
contact. To avoid such 'leakage' and to protect the fibres, they are clad in 'glass
skins' of refractive index lower than that of the fibre core.
As the angle of incidence I increases, the angle of refraction R increases and O (= π/2 - R) decreases. Eventually O reaches C, the critical angle, and any further reduction in O results in transmission through the side wall.
The expression n0 sin Imax is called the numerical aperture of the fibre. A
typical value for this might be 0.55, making Imax about 33° in air. Sometimes Imax is
referred to as the half-angle of the fibre, since it describes half the field of view
acceptably transmitted. The numerical aperture (and hence Imax) can be increased by
using a core of high refractive index. However, these glasses have a lower efficiency
of transmission, especially at the blue end of the spectrum, and are not commonly
used.
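To make the relation n0 sin Imax = NA concrete, the small Python sketch below (added for illustration) computes the numerical aperture of a step-index fibre from assumed core and cladding indices, chosen so that the NA comes out close to the 0.55 quoted above.

import math

def numerical_aperture(n_core, n_clad):
    # NA of a step-index fibre: NA = sqrt(n_core^2 - n_clad^2)
    return math.sqrt(n_core ** 2 - n_clad ** 2)

def half_angle_deg(na, n_outside=1.0):
    # Acceptance half-angle Imax from n_outside * sin(Imax) = NA
    return math.degrees(math.asin(na / n_outside))

na = numerical_aperture(1.615, 1.52)                 # assumed example indices
print(round(na, 2), round(half_angle_deg(na), 1))    # about 0.55 and 33 degrees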
The above analysis applies only to a straight line fibre. If the fibre is curved, the angles of incidence vary as the light travels along the fibre and losses occur if the angles fall below the critical angle. In practice, a radius of curvature down to about twenty times the fibre diameter can be tolerated without significant losses.
Coherent and Incoherent Bundles
An ideal fibre transmits light independently of its neighbours, so if a bundle of
fibres is placed together in an orderly manner along its length, with the relative
positions remaining unchanged, actual images may be transmitted along the fibre.
Such an arrangement is called a coherent bundle, and consists of fibres of
very small diameter about 10 µm. The ends of the bundle are cut square and
polished smooth to prevent distortions. Each fibre transmits a small element of the
image which is seen at the other end of the coherent bundle as a mosaic. The eye has
to 'look through' the fragmented structure to appreciate a clear image.
The image to be transmitted is either in direct contact with the end of the
bundle or focused on to this surface. The image formed at the other end is viewed
using an eyepiece incorporating magnification. One novel method of magnification is
to make one end of the fibres smaller than the other. For example, if they have an
average diameter of 5µm at the image end, and 50µm at the viewing end, a
magnification of x10 is achieved.
In contrast, a bundle of fibres arranged at random is known as an incoherent bundle, (or sometimes simply a light guide) and is suitable only for the transport of light not of images. The fibres of such a bundle are relatively large having diameters of about 50-100µm.
The fibre, must be cabled - enclosed within a protective structure. This usually includes strength members and an outer buffer. The most common strength member is Kevlar aramid yarn, which adds mechanical strength. During and after installation, strength members provide crush resistance and handle the tensile stresses applied to the cable so that the fibre is not damaged. Steel and fibreglass rods are also used as strength members in multifibre bundles.
The concentric layers of an optical fibre include the light-carrying core, the cladding and the protective buffer.
Core : the inner light-carrying member.
Cladding : the middle layer, which serves to confine the light to the core.
Buffer : the outer layer which serves as a 'shock absorber' to protect the core and cladding from damage.
The buffer protects against abrasion, oil, solvents and other contaminates.
The buffer usually defines the cable's duty and flammability rating.
Transmission efficiency and resolution
Light injected into a fibre can adopt any of several zigzag paths, or modes. When a large number of modes are present they may overlap, for each mode has a different velocity along the fibre (modal dispersion). The glass fibres used in present-day fibre-optic systems are based on ultrapure fused silica. Fibre made from ordinary glass is so dirty that impurities reduce signal intensity by a factor of one million in only about 5 m (16 ft) of fibre. These impurities must be removed - often to the parts-per-billion level - before useful long-haul fibres can be drawn. But even perfectly pure glass is not perfectly transparent.
It attenuates, or weakens, light in two ways. One, occurring at shorter wavelengths, is a scattering caused by unavoidable density fluctuations within the fibre. The other is a longer wavelength absorption by atomic vibrations (phonons).
For silica, the minimum attenuation, or the maximum transparency, occurs in wavelengths in the near infrared, at about 1.5 micrometers.
In addition, there are 'end losses' which are light losses at the end faces due to partial reflection and incidence on the cladding material. Thus, light sources need to be very powerful, and even then problems can arise when viewing coloured images since different wavelengths have different transmission efficiencies.
The thinner and more numerous the fibres, the greater should be the resolution. However, when the core diameter falls below about 5µm diffraction starts to occur and transmission efficiency drops. Hence, although fibres with core diameters down to about 1µm have been used, typical diameters are nearer 10µm. A deterioration in image quality may occur for a number of reasons, for example defects in the end faces of the fibres, misalignment of fibres, broken fibres (causing black spots), or light leakage between adjacent fibres (producing 'cross-talk').
Types of fibres : Singlemode or Multimode ?
In the simplest optical fibre, the relatively large core has uniform optical properties. Termed a step-index multimode fibre, this fibre supports thousands of modes and offers the highest dispersion - and hence the lowest bandwidth.
By varying the optical properties of the core, the graded-index multimode fibre reduces dispersion and increases bandwidth. Grading makes light following longer paths travel slightly faster than light following a shorter path. Put another way, light travelling straight down the core without reflecting travels slowest.
The net result is that the light does not spread out nearly as much. Nearly all multimode fibres used in medical application have a graded index.
Fibre Properties
Numerical aperture (NA) of the fibre defines which light will be propagated and which will not. NA defines the light-gathering ability of the fibre. Imagine a cone coming from the core. Light entering the core from within this cone will be propagated by total internal reflection. Light entering from outside the cone will not be propagated.
NA has an important consequence. A large NA makes it easier to inject more light into a fibre, while a small NA tends to give the fibre a higher bandwidth. A large NA allows greater modal dispersion by allowing more modes in which light can travel. A smaller NA reduces dispersion by limiting the number of modes.
Fibre-optic endoscopy
Introduction
An endoscope is an instrument designed to provide a direct view of an internal part of the body, and possibly to perform tasks such as the removal of samples, injection of fluids and diathermy. Fibre optics has extended the scope of the instrument considerably by permitting the transmission of images from inaccessible areas such as the oesophagus, stomach, intestines, heart and lungs.
The fibre-optic endoscope
The long flexible shaft of the instrument is usually constructed of steel mesh, often with a crush-resistant covering of a bronze or steel spiral. It is then sheathed with a protective, low-friction covering of PVC or some other impervious material, which forms a hermetic seal around the instrument. The shaft is about 10 mm in diameter, about 0.6-1.8 m long (depending on the application), and has a short deflectable section about 50-85 mm long leading to its distal tip.
Within the shaft lie:
at least one non-coherent fibre-optic bundle to transmit light from the distant light source to the distal tip;
a coherent fibre-optic bundle transmitting the image from the objective lens at the distal tip;
an irrigation channel through which water can be pumped to wash the objective lens;
an operations channel for the performance of tasks;
control cables.
The viewing end of the endoscope contains:
an eyepiece, with focus controls and camera attachment;
distal tip deflection controls, giving polydirectional control up to about
200°, plus locking capability;
objective lens control which may be a push-pull wire effecting focusing;
valve controls for air aspiration, (suctioned withdrawal of body fluids through the operations channel) and lens washing and air insufflation (application of water or air jet through the irrigation channel);
operating channel valve, which controls the entry of catheters, electrodes, biopsy forceps and other flexible devices;
connection with the umbilical tube, providing light through a non-coherent fibre-optic bundle and water or air from the pump or aspirator system.
A typical Micro-video Endoscopy Unit would contain:
Optical catheter system as described above,
Colour video monitor
CO2 and fluid insufflation,
Instrumentation,
Disposables,
Miscellaneous accessories.
Some applications of Fibre-optic endoscopy
Endoscopic examination of the gastrointestinal tract has proved especially successful with the diagnosis and treatment of ulcers, cancers, constrictions, bleeding sites, and so on. The heart, respiratory system and pancreas have also been investigated.
Another application is the measurement, using an oximeter, of the proportion of haemoglobin in the blood that is combined with oxygen. Two incoherent bundles are introduced into the blood stream: one is used to illuminate a sample of blood and the other to assess the absorption of light by the blood.
An endoscope can also be equipped with a laser that can vaporize, coagulate, or cut structures, often with more ease and flexibility than a more rigid cutting knife. It is a less invasive method that causes less scarring and a quicker recovery time than other surgical techniques.
Common types of endoscopes are the cystoscope to view the bladder, the bronchoscope to view the lungs, the otoscope to view the ear, the arthroscope to view the knee and other joints, and the laparoscope to view the female reproductive structures. The most common surgery performed through endoscopy is biopsy, the removal of tissue for microscopic study to detect a malignancy. Diagnostic hysteroscopy with directed biopsy, dilatation and curettage, removal of polyps, removal of foreign objects and cystourethroscopy are other important procedures which endoscopy makes possible.
References:
Pope, Jean A.; Medical Physics, 2nd revised edition; Heinemann Educational; 1973.
Brown, B. H., Smallwood, R. H.; Medical Physics and Physiological Measurement; Blackwell Scientific Publications; 1981.
f:\12000 essays\sciences (985)\Physics\Fire Retardant Fabrics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Purpose: To find which fabric is most flammable. (Which fabric will catch fire fastest)
Materials:
Lighter
Safety goggles
Stopwatch
5cm x 5cm piece of Denim
5cm x 5cm piece of Nylon
5cm x 5cm piece of Polyester
5cm x 5cm piece of Cotton
5cm x 5cm piece of Rayon
5cm x 5cm piece of Acrylic
Procedure:
1) Place a piece of fabric under the flame. Make sure that each time you do this, the flame is held at an equal distance from the fabric.
2) As soon as the fabric is exposed to the flame, begin timing with the stopwatch.
3) Stop timing when the fabric ignites.
4) Put out the fire. Try not to destroy the fabric; it can be used for the presentation.
5) Record the time it took for the fabric to ignite.
6) Repeat steps 1 to 5 three times for each piece of fabric.
Hypothesis: We think that the Polyester will ignite the fastest, because it is very light and delicate. We think that the Nylon will take the longest to ignite because it is much like a plastic.
Results: In order to control all of the variables, we burned and timed each type of fabric three times. Below is a table showing our results in seconds; a short calculation sketch follows the table.
Fabric 1st Time 2nd Time 3rd Time Average
Denim 5.3 sec 4.8 sec 5 sec 5.03 sec
Nylon 4 sec 3.3 sec 4.1 sec 3.8 sec
Polyester 0.5 sec 0.7 sec 0.7 sec 0.63 sec
Cotton 4.6 sec 4.2 sec 5 sec 4.6 sec
Rayon 3.5 sec 3 sec 2.4 sec 2.97 sec
Acrylic 4.7 sec 5 sec 4.1 sec 4.6 sec
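A minimal Python sketch of how the averages in the table are obtained (the trial values are copied from the table above; only the rounding is added):

# Ignition times in seconds for the three trials of each fabric (values from the table above).
times = {
    "Denim":     [5.3, 4.8, 5.0],
    "Nylon":     [4.0, 3.3, 4.1],
    "Polyester": [0.5, 0.7, 0.7],
    "Cotton":    [4.6, 4.2, 5.0],
    "Rayon":     [3.5, 3.0, 2.4],
    "Acrylic":   [4.7, 5.0, 4.1],
}
for fabric, trials in times.items():
    average = sum(trials) / len(trials)
    print(f"{fabric:<9} average ignition time: {average:.2f} sec")

Run as written, this reproduces the averages in the last column of the table (5.03, 3.80, 0.63, 4.60, 2.97 and 4.60 seconds).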
Conclusion:
The Polyester ignited the fastest, which is what we hypothesized. The flame was nearly an inch away from the fabric when it ignited. It burned up in a flash: the majority of the fabric was reduced to ashes in a split second. The Denim took the longest to ignite. If we were testing which fabric is safest to wear near a flame, it would be Denim. The Polyester is extremely dangerous to wear near a fire.
Research:
Denim: Denim is a heavy cotton twilled fabric, usually colored; coarser weaves are used for overalls, etc., and finer ones for drapery and upholstery. The name comes from the French town of Nîmes (serge de Nîmes).
Nylon: Nylon is used for several purposes; clothing is just one. Nylon was invented in 1938 by a team of researchers led by the organic chemist Wallace H. Carothers. The production of common nylon (nylon-6,6) begins when the basic hydrocarbons, under pressure and heat, are synthesized into the chemicals adipic acid and hexamethylene diamine. (The production of other nylons may require slightly different acids and amines.) These are mixed to form a substance called nylon salt. This concentrated salt solution is heated in huge kettles, called autoclaves. Here the acid and amine molecules link up alternately to form a nylon superpolymer (long-chain molecule). The molten nylon then pours over a giant casting wheel. A swift spray of cold water turns the molten ribbon of nylon into a hard, translucent sheet, which is then chopped into small flakes called nylon chips.
If the nylon is intended for sheets, rods, bristles, coatings, or molds, it is sent to factories in the form of chips. The chips are melted and turned into final products. Nylon intended for yarn must undergo further treatment. In a process called melt spinning, the chips are melted, and the melt is pumped through a spinneret, a perforated plate with tiny holes equal in number to the filaments, or single threads, desired in the finished yarn. The filaments form as soon as they strike cool air outside the spinneret.
Polyester: Polyester is very similar to Nylon. In fact, when we looked polyester up in an encyclopedia, it said see nylon. Through our experiment we discovered that polyester is much more delicate than Nylon. Nylon fabric is more like a plastic, which is why Nylon clothing is water proof. On the other hand, Polyester is extremely delicate and light. Polyester is usually either transparent, or translucent.
Cotton: Cotton is used in some way every day of our lives. Cotton is used for both warm and cold clothing. Cotton is made from the cotton plant. The cotton plant is a warm-climate crop. To develop fully, the plant usually needs a growing season of 150 days free from frost. The cotton planting season ranges from about February 1 in the warmest southern areas to early June farther north. To get warmth from the sun, the seeds are planted shallowly, from one to two inches deep. Some farmers plant their seed in hills, some in furrows, and others in flat seed beds. Cotton was the leading industry in the USA during the 1800's and early 1900's, and it still is in some counties.
Rayon: Rayon or Chardonnet silk is a vegetable fiber (cellulose). Rayon is produced mechanically from wood pulp. The fabric is fairly light, and delicate. Rayon is used in many types of clothes.
Acrylic: Acrylic is a man-made fabric. The process for making acrylic is extremely complicated; the fibre is manufactured chemically from a synthetic polymer rather than spun from a natural material. Acrylic is very heavy and is much like wool. The sample of acrylic that we used came from a sweater.
f:\12000 essays\sciences (985)\Physics\GeigerMueller Tube.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the Geiger-Muller tube, particles ionize gas atoms. The tube contains a gas at low pressure. At one end of the tube is a very thin "window" through which charged particles or gamma rays pass. Inside the tube is a copper cylinder with a negative charge. A rigid wire with a positive charge runs down the center of this cylinder. The voltage across the wire and cylinder is kept just below the point at which a spontaneous discharge, or spark, occurs. When a charged particle or gamma ray enters the tube, it ionizes a gas atom between the copper cylinder and the wire. The positive ion produced is accelerated toward the copper cylinder by the potential difference. The electron is accelerated toward the positive wire. As these new particles move toward the electrodes, they strike other atoms and form even more ions in their path.
Thus an avalanche of charged particles is created and a pulse of current flows through the tube. The current causes a potential difference across a resistor in the circuit. The voltage is amplified and registers the arrival of a particle by advancing a counter or producing an audible signal, such as a click. The pulse also momentarily lowers the potential difference across the tube so that the current flow stops. Thus the tube is ready for the beginning of a new avalanche when another particle or gamma ray enters it.
f:\12000 essays\sciences (985)\Physics\Gun Physics.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How Guns Work
A gun is a weapon that uses the force of an explosive propellant to project a missile.
Guns or firearms are classified by the diameter of the barrel opening. This is known as the calibre of the gun. Anything with a calibre up to and including .60 calibre (0.6 inches) is known as a firearm.
The precise origin of the gun is unknown, although guns were in use by the early 14th century and were commonplace in Europe by mid-century. These early guns were nothing more than large calibre cylinders of wrought iron or cast bronze, closed at one end and loaded by placing gunpowder and projectile in the muzzle, or open end.
Nowadays firearms are a little more sophisticated.
However, the physics behind all guns remains the same. Weapons such as cannons, shotguns and rifles work on the basic ideas of conservation of momentum and the change in energy from potential to kinetic.
When the trigger is pulled, the hammer hits the firing pin. The firing pin then hits the primer, which causes the powder to burn and produce a large volume of gas. This causes the space behind the bullet to fill with extremely high-pressure gas. The gas pushes on every surface it encounters, including the bullet in front of it and the base of the gun barrel behind it. The pressure forces the bullet along the barrel, so the bullet leaves the muzzle at very high speed. Once the bullet is fired, it remains in motion because of its momentum. The momentum will carry the bullet until it strikes an object or gravity pulls the bullet towards the earth.
Firearms change potential chemical energy into kinetic energy in the actual firing of the gun. Many people do not realise that the force imparted by accelerating the bullet is not the only force acting on the gun, or the shooter. Grains of burned gun powder are sent out the muzzle at high velocity. When the trigger is pulled, the hammer strikes a small charge at the end of the shell, the ammunition. This charge ignites black gun powder packed behind the lead ball bearings. When the black gun powder burns, it produces gas that rapidly expands with the burning of more black gun powder. High pressure gases exert forces on the back of the bullet and on the gun. The only way for the gas to escape is to push the bullet out of its way through the end of the barrel. This is how a bullet is fired from a gun.
Conservation of momentum is the law that holds true when the gun is fired and a "kick" is felt. When a bullet is fired from a gun, the total momentum before firing is zero, since nothing is moving. After firing, the bullet has a momentum in the forward direction. The gun must therefore have the same magnitude of momentum but in the opposite direction, so that the two cancel each other out, leaving the total momentum still equal to zero. For this reason the gun must have a recoil velocity after the bullet is fired (i.e. the gun 'jumps' backwards and a 'kick' is felt).
As the bullet is propelled through the barrel, it gains momentum. In order for the entire system of the gun and the ammunition to keep a total momentum of zero, the gun must gain momentum in the opposite direction from the bullet. Momentum is a vector quantity, having both a magnitude and a direction. The faster an object is moving or the more mass it has, the more momentum it has in the direction of its motion (momentum = mass × velocity). Because momentum is a conserved quantity, it cannot be created or destroyed (momentum before = momentum after). It can only be transferred between objects. Momentum is conserved because of Newton's third law of motion.
When one object exerts a force on a second object for a certain amount of time, the second object exerts an equal but oppositely directed force on the first object for exactly the same amount of time. The momentum lost by the first object is exactly equal to the momentum gained by the second object. Momentum is transferred from the first object to the second object. In this case, if a gun exerts a force on a bullet when firing it forward, then the bullet exerts an equal force in the opposite direction on the gun, causing it to move backwards or recoil. Although the action and reaction forces are equal in size, the effects on the gun and the bullet are not the same, since the mass of the gun is far greater than the mass of the bullet. The acceleration of the bullet while moving along the gun barrel is therefore much greater than the acceleration of the gun (acceleration = force ÷ mass).
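To make the recoil argument concrete, here is a small Python sketch; the bullet mass, rifle mass and muzzle velocity are made-up illustrative numbers, not measurements from this essay.

# Conservation of momentum: total momentum is zero before and after firing,
# so m_bullet * v_bullet + m_gun * v_gun = 0.
m_bullet = 0.010   # kg (assumed 10 g bullet)
m_gun = 4.0        # kg (assumed rifle mass)
v_bullet = 800.0   # m/s (assumed muzzle velocity)

v_gun = -(m_bullet * v_bullet) / m_gun   # recoil velocity of the gun
print(f"Recoil velocity: {v_gun:.2f} m/s")   # about -2 m/s, i.e. backwards

Because the assumed gun is several hundred times heavier than the bullet, its recoil velocity is correspondingly smaller, which is exactly the point made above.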
The conservation of momentum is also demonstrated when the bullet hits an object. The object that it strikes absorbs the bullet's kinetic energy (energy of motion) and momentum. If the momentum transferred by the bullet is great enough to overcome the inertia of the object, the target will be moved in the same direction as the bullet was travelling.
To increase the accuracy of the flight of the bullet, a technique called rifling can be used. Rifling means cutting spiral grooves along the inside of the barrel; these grooves grip the bullet as it is fired and set it spinning. This spinning action stabilises the bullet gyroscopically, so it cuts through the air more efficiently and flies on a truer course, giving a more stable trajectory.
f:\12000 essays\sciences (985)\Physics\History of Space Shuttle Program.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The shuttle, a manned, multipurpose, orbital-launch space plane, was designed to carry payloads of up to about 30,000 kg (65,000 lb) and up to seven crew members and passengers. The upper part of the spacecraft, the orbiter stage, had a theoretical lifetime of perhaps 100 missions, and the winged orbiter could make unpowered landings on returning to earth. Because of the shuttle's designed flexibility and its planned use for satellite deployment and the rescue and repair of previously orbited satellites, its proponents saw it as a major advance in the practical exploitation of space. Others, however, worried that NASA was placing too much reliance on the shuttle, to the detriment of other, unmanned vehicles and missions.
The first space shuttle mission, piloted by John W. Young and Robert Crippen aboard the orbiter Columbia, was launched on April 12, 1981. It was a test flight flown without payload in the orbiter's cargo bay. The fifth space shuttle flight was the first operational mission; the astronauts in the Columbia deployed two commercial communications satellites from November 11 to 16, 1982. Later memorable flights included the seventh, whose crew included the first U.S. woman astronaut, Sally K. Ride; the ninth mission, November 28-December 8, 1983, which carried the first of the European Space Agency's Spacelabs; the 11th mission, April 7-13, 1984, during which a satellite was retrieved, repaired, and redeployed; and the 14th mission, November 8-14, 1984, when two expensive malfunctioning satellites were retrieved and returned to earth.
Despite such successes, the shuttle program was falling behind in its planned launch program, was increasingly being used for military tests, and was meeting stiff competition from the European Space Agency's unmanned Ariane program for the orbiting of satellites. Then, on January 28, 1986, the shuttle Challenger was destroyed about one minute after launch because of the failure of a sealant ring on one of its solid boosters. Flames escaping from the booster burned a hole in the main propellant tank of liquid hydrogen and oxygen and caused the booster to nose into and rupture the tank. This rupture caused a nearly explosive disruption of the whole system. Seven astronauts were killed in the disaster: commander Francis R. Scobee, pilot Michael J. Smith, mission specialists Judith A. Resnik, Ellison S. Onizuka, and Ronald E. McNair, and payload specialists Gregory B. Jarvis and Christa McAuliffe. McAuliffe had been selected the preceding year as the first "teacher in space," a civilian spokesperson for the shuttle program. The tragedy brought an immediate halt to shuttle flights until systems could be analyzed and redesigned. A presidential commission headed by former secretary of state William Rogers and former astronaut Neil Armstrong placed much of the blame on NASA's administrative system and its failure to maintain an efficient system of quality control.
In the aftermath of the Challenger disaster, the O-ring seals on the solid rocket booster (SRB) were redesigned to prevent recurrence of the January 28 failure. The shuttle launch program resumed on September 29, 1988, with the flight of Discovery and its crew of five astronauts. On this mission, a NASA communications satellite, TDRS-3, was placed in orbit and a variety of experiments were carried out. The success of this 26th mission encouraged the United States to resume an active launch schedule. One more flight was planned for 1988, and a total of 39 were scheduled through 1992. The long-delayed $1.5-billion Hubble Space Telescope was deployed by space shuttle in 1990 but, because of an optical defect, failed to provide the degree of resolution it was designed to have until it was repaired in December 1993. On February 2, 1995, Lieutenant Colonel Eileen M. Collins became the first woman to pilot the space shuttle. On March 18 the space shuttle Endeavour, piloted by Stephen Oswald, landed after a record 16 days, 15 hours in space.
f:\12000 essays\sciences (985)\Physics\Hologram Essay.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Holograms
Toss a pebble in a pond -see the ripples? Now drop two
pebbles close together. Look at what happens when the two sets
of waves combine -you get a new wave! When a crest and a trough
meet, they cancel out and the water goes flat. When two crests
meet, they produce one, bigger crest. When two troughs collide,
they make a single, deeper trough. Believe it or not, you've
just found a key to understanding how a hologram works. But what
do waves in a pond have to do with those amazing three-
dimensional pictures? How do waves make a hologram look like the
real thing?
It all starts with light. Without it, you can't see. And
much like the ripples in a pond, light travels in waves. When
you look at, say, an apple, what you really see are the waves of
light reflected from it. Your two eyes each see a slightly
different view of the apple. These different views tell you
about the apple's depth -its form and where it sits in relation
to other objects. Your brain processes this information so that
you see the apple, and the rest of the world, in 3-D. You can
look around objects, too -if the apple is blocking the view of
an orange behind it, you can just move your head to one side.
The apple seems to "move" out of the way so you can see the
orange or even the back of the apple. If that seems a bit
obvious, just try looking behind something in a regular
photograph! You can't, because the photograph can't reproduce
the infinitely complicated waves of light reflected by objects;
the lens of a camera can only focus those waves into a flat, 2-D
image. But a hologram can capture a 3-D image so lifelike that
you can look around the image of the apple to an orange in the
background -and it's all thanks to the special kind of light
waves produced by a laser.
"Normal" white light from the sun or a lightbulb is a
combination of every colour of light in the spectrum -a mush of
different waves that's useless for holograms. But a laser shines
light in a thin, intense beam that's just one colour. That means
laser light waves are uniform and in step. When two laser beams
intersect, like two sets of ripples meeting in a pond, they
produce a single new wave pattern: the hologram. Here's how it
happens: Light coming from a laser is split into two beams,
called the object beam and the reference beam. Spread by lenses
and bounced off a mirror, the object beam hits the apple. Light
waves reflect from the apple towards a photographic film. The
reference beam heads straight to the film without hitting the
apple. The two sets of waves meet and create a new wave pattern
that hits the film and exposes it. On the film all you can see
is a mass of dark and light swirls -it doesn't look like an
apple at all! But shine the laser reference beam through the
film once more and the pattern of swirls bends the light to re-
create the original reflection waves from the apple -exactly.
Not all holograms work this way -some use plastics instead
of photographic film, others are visible in normal light. But
all holograms are created with lasers -and new waves.
All Thought Up and No Place to Go
Holograms were invented in 1947 by Hungarian scientist
Dennis Gabor, but they were ignored for years. Why? Like many
great ideas, Gabor's theory about light waves was ahead of its
time. The lasers needed to produce clean waves -and thus clean
3-D images -weren't invented until 1960. Gabor coined the name
for his photographic technique from holos and gramma, Greek for
"the whole message. " But for more than a decade, Gabor had only
half the words. Gabor's contribution to science was recognized
at last in 1971 with a Nobel Prize. He's got a chance for a last
laugh, too. A perfect holographic portrait of the late scientist
looking up from his desk with a smile could go on fooling
viewers into saying hello forever. Actor Laurence Olivier has
also achieved that kind of immortality -a hologram of the 80
year-old can be seen these days on the stage in London, in a
musical called Time.
New Waves
When it comes to looking at the future uses of holography,
pictures are anything but the whole picture. Here are just a
couple of the more unusual possibilities. Consider this: you're
in a windowless room in the middle of an office tower, but
you're reading by the light of the noonday sun! How can this be?
A new invention that incorporates holograms into window glazings
makes it possible. Holograms can bend light to create complex 3-
D images, but they can also simply redirect light rays. The
window glaze holograms could focus sunlight coming through a
window into a narrow beam, funnel it into an air duct with
reflective walls above the ceiling and send it down the hall to
your windowless cubbyhole. That could cut lighting costs and
conserve energy. The holograms could even guide sunlight into
the gloomy gaps between city skyscrapers and since they can bend
light of different colors in different directions, they could be
used to filter out the hot infrared light rays that stream
through your car windows to bake you on summer days.
Or, how about holding an entire library in the palm of
your hand? Holography makes it theoretically possible. Words or
pictures could be translated into a code of alternating light
and dark spots and stored in an unbelievably tiny space. That's
because light waves are very, very skinny. You could lay about
1000 lightwaves side by side across the width of the period at
the end of this sentence. One calculation holds that by using
holograms, the U. S. Library of Congress could be stored in the
space of a sugar cube. For now, holographic data storage remains
little more than a fascinating idea because the materials needed
to do the job haven't been invented yet. But it's clear that
holograms, which author Isaac Asimov called "the greatest
advance in imaging since the eye" will continue to make waves in
the world of science.
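As a rough check of the "one thousand light waves across a period" claim above, here is a tiny Python sketch; the half-millimetre width of a printed period is an assumed, typical value, not a figure from the text.

period_width_m = 0.5e-3        # assumed width of a printed period: about 0.5 mm
waves_side_by_side = 1000      # figure quoted in the text

wavelength = period_width_m / waves_side_by_side
print(f"Implied wavelength: {wavelength * 1e9:.0f} nm")   # about 500 nm, mid-visible light

A wavelength of roughly 500 nm sits in the middle of the visible spectrum, so the claim is consistent with how skinny light waves really are.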
f:\12000 essays\sciences (985)\Physics\hot to make a rocket launcher.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to Make Rocket Launchers
Making a rocket launcher may not be easy but it is worth it. The first thing needed is the model rocket set. The set comes with the engine and all other parts to make the rocket. The instructions to make the rocket must be followed. After making the rocket, a three foot PVC tube and cap must be purchased at a piping store, such as Lowes. In the cap of the PVC tube a one-fourth inch hole must be drilled. Electrical wire, that can be found at any hardware store, must be purchased. A four inch wire must be inserted through the hole in the cap. An electrical igniter must be attached to the end of the four inches of wire. Instructions that came with the rocket set need to be followed to connect the igniter to the rocket engine. One pole of a nine-volt battery, which can be purchased at any Radio Shack, should be connected to one pole of any momentary switch. The two unconnected wires from the cap must be connected to the open poles of the switch and battery. It is ready to fire the rocket launcher. Making a rocket launcher is never easy but the show is worth it.
f:\12000 essays\sciences (985)\Physics\How the laws of balnce aplly to sports.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Billy Moore
Physics
Sports Page
In sports, balance and stability are used to increase the performance of the athlete or the athlete's equipment. In racecar driving, balance is used to stabilize the racecar. The wheels are wide and protrude from the base of the car. This gives the car a wider support base, which increases its stability. The race cars are flat and low to the ground. This moves the center of gravity lower, which also increases the stability of the car.
In football you need to keep your balance while you're running so that you can resist a tackle. Football players do this by crouching down and keeping their center of mass over their feet, which is their support base.
f:\12000 essays\sciences (985)\Physics\Impacto de la Fisica en el medio ambiente.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
THE IMPACT OF PHYSICS ON THE ENVIRONMENT
Physics, like many other sciences, is concerned with explaining how many of the things around us work or come about, above all the natural processes. These studies are useful because they allow human beings to reproduce certain phenomena and put them to work in other tasks for the benefit of the community. In this essay we will try to show some of the benefits and influences that physics has on nature in general.
Personally, it seems to me somewhat inappropriate to say that physics has a certain impact on the environment, since I believe that physics is mostly devoted to finding out why everything that happens in our surroundings happens, which is its primary aim. Once the environment has been studied, the work is then redirected toward using that new knowledge, most of the time, for the benefit of human beings, and it is there that a feedback effect on the environment appears.
Physical science in essence, as we have already said, is devoted to finding out the why and the how. Examples of this are all the laws that describe it, such as Newton's laws, or the various theorems used to model situations and describe the behaviour of different systems. It is thanks to all these studies that we know things such as why objects move, how we see colours, what magnetic effects convert energy, how heat is released and how it can be harnessed, among many other questions that can later be put to use in activities benefiting the human species.
The problems begin when these actions on behalf of humanity have side effects that cause damage which is often irreparable. As an example we can cite the use of petroleum. When the first results were obtained thanks to its heating capacity, it was a remarkable discovery that made countless tasks easier; but what happened when the polluting products of its combustion were discovered? Chaos began to spread, with pollution levels rising brutally, in large part because of this combustion, and since petroleum is a non-renewable resource there would come a time when it grew scarce.
From this came alternative forms of energy such as nuclear power, which, although in some respects less polluting and in no danger of running out, produced radioactive residues that would be difficult to dispose of in any environment. There is also a potential risk of an accident caused by a loss of control in the system, which could bring about a natural disaster. The discovery of this type of energy had other uses as well, for example the medical ones that arrived for the treatment of certain diseases, increasing the life expectancy of the population. But there were also other applications, such as the military ones, capable of wiping out thousands of square metres of land and generating consequences far too brutal for any environment.
Because of this, and thanks to certain other advances in physics, it has been possible to exploit other types of energy such as solar power: thanks to semiconductor materials, and by taking advantage of solid-state physics, cells have been created that can convert photons into electrons, or rather sunlight into electricity, thereby eliminating problems such as operational risks, polluting waste or the pollution caused by combustion products. It is worth mentioning that within solid-state physics the concept of superconductors has also been under development. These are materials with minimal losses in the conduction of electrons which, besides wasting less electricity, will make possible a much more efficient way of storing it than we have today, so that better use can be made of it, for example using the sun's energy during the night.
There are also other advances being pursued, such as the separation of hydrogen from water, which would provide an almost inexhaustible supply of a very clean fuel that could be used in many applications without the disadvantages of other fuels.
While writing this essay it occurs to me that physics has devoted itself mostly to the study of the environment and, on a considerably smaller scale, has used that information for the benefit of humanity. Unfortunately, in the course and reach of this benefit we have passed through different stages in which the environmental costs of certain gains for human beings became evident, and a method is then sought of achieving the same effect without having to pay that price. Because of this way of applying technology, physics, like all the other sciences, has found itself with certain priorities at the moment of application: first, to understand how the environment works; then, to apply that information for the benefit of humanity, with two important considerations: to create technologies that affect the environment less and less, through more strategic and conscious planning, and to develop new systems capable of correcting some of the damage that earlier technologies have caused, reducing on a small scale the negative effects and catastrophes they gave rise to.
In my view physics has taken on a crucial role in our survival in the environment. While it is true that, if the Earth had not been exposed to human beings and their changes against nature, it would be a self-sustaining organism without the considerable problems it now has, we must also accept the benefits that those impacts on the environment have brought, especially for our own species. The scientific community must therefore commit itself to finding ever better ways, more efficient, more economical and less polluting, of obtaining benefits for everyone, always planning all the possible effects carefully in order to minimize the losses. It must also try to publicize its discoveries as widely as possible, to prevent narrow local interests from being the cause of the continuing deterioration of the system, and in that way achieve a better world, for a longer time, which we can all enjoy.
f:\12000 essays\sciences (985)\Physics\Indirect Proofs.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Hypothesis - I think the die has 2 dots and, by indirect proof, I think we will be able to prove it.
Data:
Indirect Proof Work:
a) Total number of faces seen: 1 face × 180 trials = 180
b) Total number of dots seen: 145
c) Average number of dots per face: 145/180 ≈ 0.81
d) Estimated number of dots per cube: 0.81 × 6 ≈ 4.8 (about 5)
Actual number of dots = 5
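A short Python sketch of the arithmetic above, using the recorded data:

faces_seen = 180     # one face seen per trial, 180 trials
dots_seen = 145      # total number of dots counted

dots_per_face = dots_seen / faces_seen   # about 0.81
dots_per_cube = dots_per_face * 6        # a cube has 6 faces
print(f"Average dots per face: {dots_per_face:.2f}")
print(f"Estimated dots per cube: {dots_per_cube:.1f}, rounded to {round(dots_per_cube)}")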
f:\12000 essays\sciences (985)\Physics\Internal Combustion Engine.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An internal-combustion engine is a heat engine that burns fuel and air inside a combustion
chamber located within the engine proper. Simply stated, a heat engine is an engine that
converts heat energy to mechanical energy. The internal-combustion engine should be
distinguished from the external-combustion engine, for example, the steam engine and the
Stirling engine, which burns fuel outside the prime mover, that is, the device that actually produces mechanical motion. Both basic types produce hot, expanding gases, which may then be employed to move pistons, turn turbine rotors, or cause locomotion through the reaction principle as they escape through the nozzle.
Most people are familiar with the internal-combustion reciprocating engine, which is used to
power most automobiles, boats, lawn mowers, and home generators. Based on the means of
ignition, two types of internal-combustion reciprocating engines can be distinguished:
spark-ignition engines and compression-ignition engines. In the former, a spark ignites a
combustible mixture of air and fuel; in the latter, high compression raises the temperature of the
air in the chamber and ignites the injected fuel without a spark. The diesel engine is a
compression-ignition engine. This article emphasizes the spark-ignition engine.
The invention and early development of internal-combustion engines are usually credited to
three Germans. Nikolaus Otto patented and built (1876) the first such engine; Karl Benz built
the first automobile to be powered by such an engine (1885); and Gottlieb Daimler designed the
first high-speed internal-combustion engine (1885) and carburetor. Rudolf Diesel invented a
successful compression-ignition engine (the diesel engine) in 1892.
The operation of the internal-combustion reciprocating engine employs either a four-stroke
cycle or a two-stroke cycle. A stroke is one continuous movement of the piston within the
cylinder.
In the four-stroke cycle, also known as the Otto cycle, the downward movement of a piston
located within a cylinder creates a partial vacuum. Valves located inside the combustion
chamber are controlled by the motion of a camshaft connected to the crankshaft. The four
strokes are called, in order of sequence, intake, compression, power, and exhaust. On the first
stroke the intake valve is opened while the exhaust valve is closed; atmospheric pressure forces a
mixture of gas and air to fill the chamber. On the second stroke the intake and exhaust valves are
both closed as the piston starts upward. The mixture is compressed from normal atmospheric
pressure (1 kg/sq cm, or 14.7 lb/sq in) to between 4.9 and 8.8 kg/sq cm (70 and 125 lb/sq in).
During the third stroke the compressed mixture is ignited--either by compression ignition or by
spark ignition. The heat produced by the combustion causes the gases to expand within the
cylinder, thus forcing the piston downward. The piston's connecting rod transmits the power from
the piston to the crankshaft. This assembly changes reciprocating--in other words, up-and-down
or back-and-forth motion--to rotary motion. On the fourth stroke the exhaust valve is opened so
that the burned gases can escape as the piston moves upward; this prepares the cylinder for
another cycle. Internal-combustion spark-ignition engines having a two-stroke cycle combine intake and compression in a single first stroke and power and exhaust in a second stroke.
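As a quick check on the compression figures quoted above, here is a small Python sketch; the conversion factor 1 kg/sq cm ≈ 14.22 psi is the standard one, and the "pressure ratio" printed is simply the quoted compression pressure divided by atmospheric pressure.

KG_PER_SQ_CM_TO_PSI = 14.22   # standard conversion factor
p_atmospheric = 14.7          # atmospheric pressure, lb/sq in (as quoted above)

for p_psi in (70.0, 125.0):   # compression pressures quoted above, lb/sq in
    p_kg = p_psi / KG_PER_SQ_CM_TO_PSI
    ratio = p_psi / p_atmospheric
    print(f"{p_psi:.0f} lb/sq in = {p_kg:.1f} kg/sq cm, about {ratio:.1f} times atmospheric pressure")

This reproduces the 4.9 and 8.8 kg/sq cm figures given in the paragraph above.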
The internal-combustion reciprocating engine contains several subsystems: ignition, fuel,
cooling, and exhaust systems.
The ignition system of a spark-ignition engine consists of the sparking device (the spark plug);
the connecting wire from the plug to the distributor; and the distributor, which distributes the
spark to the proper cylinder at the proper time. The distributor receives a high-energy spark from
a coil, or magneto, that converts low-voltage energy to high-voltage energy. Some ignition systems employ transistorized circuitry, which is generally more efficient and less troublesome than the mechanical breaker-point system used in the past. Most ignition systems require an external electrical energy source in the form of a battery or a magneto.
Spark-ignition engines require a means for mixing fuel and air. This may be either a carburetor or fuel injection. A carburetor atomizes the fuel into the engine's incoming air supply. The mixture is then vaporized in the intake manifold on its way to the combustion chamber. Fuel injection sprays a controlled mist of fuel into the airstream, either in the intake manifold or just before the intake valve or valves of each cylinder. Both carburetors and fuel injectors maintain the correct fuel-to-air ratio, about one part fuel to fifteen parts air, over a wide range of air temperatures, engine speeds, and loads. Fuel injection can compensate for changes in altitude as well.
Internal-combustion engines require some type of starting system. Small engines are generally
started by pulling a starting rope or kicking a lever. Larger engines may use compressed air or
an electric starting system. The latter includes a starter--a high-torque electric motor--to turn the
crankshaft until the engine starts. Starting motors are extremely powerful for their size and are
designed to utilize high currents (200 to 300 amperes). The large starting currents can cause a
battery to drain rapidly; for this reason a heavy-duty battery is usually used. Interrupting this
connection is an electrical switch called a solenoid, which is activated by the low-voltage starting
switch. In this way the ignition switch can be located away from the starter and yet still turn the
starter on and off.
The cooling system is important because internal-combustion engines operate at high
temperatures of combustion--spark-ignition engines at approximately 2,760 degrees C (5,000
degrees F) and diesel engines at even higher temperatures. If it were not for the cooling system,
these high temperatures would damage and melt many parts of the engine. The cooling system
essentially dissipates the heat of combustion in metal, water, or air and automatically regulates
the temperature so that the engine can operate at its optimum temperature--about 93 degrees C
(200 degrees F).
Air-cooled engines, popularly used to power small lawn mowers, chain saws, power generators, and motorcycles, as well as small cars and airplanes, often require no moving parts, and therefore little or no maintenance, for the cooling system. The head, or uppermost part, of the cylinder and the cylinder block have fins cast into them; these fins increase the surface exposed to the surrounding air, allowing more heat to be radiated. Usually a cover or shroud channels the air
flow over the fins. A fan is sometimes included if the engine is located away from a stream of fast-moving air.
Water-cooled engines have water jackets built into the engine block. These jackets surround
the cylinders. Usually a centrifugal water pump is used to circulate the water continuously through the water jackets. In this way the high heat of combustion is drawn off the cylinder wall into the circulating water. The water must then be cooled in a radiator that transfers the heat energy of the water to the radiator's cooler surrounding fluid. The surrounding fluid can be air or water, depending on the application of the engine.
Internal-combustion engines include an exhaust system, which allows the hot exhaust gases to
escape efficiently from the engine. In some small engines the exhaust gases can exit directly into
the atmosphere. Larger engines are noisier and require some type of muffler or sound deadener,
usually a canister with an inner shell that breaks up the sound waves, dissipating their energy
within the muffler before the exhaust gases are permitted to escape.
The power capacity of an engine depends on a number of characteristics, including the volume
of the combustion chamber. The volume can be increased by increasing the size of the piston
and cylinder and by increasing the number of cylinders. The cylinder configuration, or
arrangement of cylinders, can be straight, or in-line (one cylinder located behind the other); radial
(cylinders located around a circle); in a V (cylinders located in a V configuration); or opposed
(cylinders located opposite each other). Another type of internal-combustion engine, the Wankel engine, has no cylinders; instead, it has a rotor that moves through a combustion chamber.
An internal-combustion engine must also have some kind of transmission system to control and direct the mechanical energy where it is needed; for example, in an automobile the energy
must be directed to the driving wheels. Since these engines are not able to start under a load, a
transmission system must be used to "disengage" the engine from the load during starting and
then to apply the load when the engine reaches its operating speed.
f:\12000 essays\sciences (985)\Physics\Lasers 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
We have all at some point in our lives used or seen someone
use a laser. They are used in compact disc players for stereos or
computers, laser surgery, laser printers, holography, cutting and
boring metals, communication, bar-code scanners, etc. Over the past
three decades, lasers have become a tool used daily by many people
and they have become very useful in scientific research. As you can
see lasers are a very useful and important tool which is why I have
chosen this topic to write about.
The term laser is an acronym. It stands for "light amplification by
stimulated emission of radiation". They produce a narrow, intense
beam of coherent light.
In a laser the atoms or molecules of a crystal, like ruby or
garnet-or of a gas, liquid, or other substance-are excited so that more
of them are at higher energy levels than are at lower energy levels. If a
photon whose frequency corresponds to the energy difference
between the excited and ground states strikes an excited atom, the
atom is stimulated, as it falls back to a lower energy state, to emit a
second photon of the same frequency, in phase with and in the same
direction as the bombarding photon. This process is called stimulated
emission. The bombarding photon or the emitted photon may then
strike other excited atoms, stimulating further emission of photons, all
of the same frequency and phase. This process produces a sudden
burst of coherent radiation as all the atoms discharge in a rapid chain
reaction. The light beam produced is usually pencil-thin and maintains
its size and direction over very long distances.
Lasers vary greatly in the way they look and what they are used
for. Some lasers are as large as buildings while others can be the size
of a grain of salt.
There are many parts to lasers. I will now explain what they are
and their uses.
1) Pumping systems:
The pumping system is used to transmit energy to the atoms or
molecules of the medium used in the laser.
a. optical pumping systems use photons provided by a source such
as a Xenon gas flash lamp or another laser to transfer energy to the
lasing material. The optical source must provide photons which
correspond to the allowed transition levels of the lasing material.
b. collision pumping relies on the transfer of energy to the lasing
material by collision with the atoms or molecules of the lasing
material. Again, energies which correspond to the allowed transition
must be provided. This is often done by electrical discharge in a pure
gas - or gas mixture - in a tube.
c. chemical pumping systems use the binding energy released in
chemical reactions to raise the lasing material to the metastable
state.
2) Optical Cavity:
An optical cavity is required to provide the amplification desired in
the laser and to select the photons which are traveling in the desired
direction. As the first atom or molecule in the metastable state of the
inverted population decays it triggers (by stimulated emission) the
decay of another atom or molecule in the metastable state.
3) Laser Media:
Lasers are usually classified by the lasing material used by the
laser. There are four types which are solid state, dye, gas and
semiconductor.
a. solid state lasers employ a lasing material distributed in a solid
matrix system. Accessory devices, which may be internal or external,
may be used to convert the output.
b. gas lasers use a gas or a mixture of gases within a tube. The most
common gas laser uses a mixture of helium and neon with a primary
output of 632.8 nm, which is a red visible colour (a short photon-energy calculation for this wavelength follows this list).
c. dye lasers use a laser medium that is usually a complex organic
dye in a liquid solution or suspension. The most striking feature of
these lasers is their "tunability". Proper choice of the dye and its
concentration allows production of laser light over a broad range of
wavelength in or near the visible spectrum.
d. semiconductor lasers are not to be confused with solid state lasers.
Semiconductor devices consist of two layers of semiconductor
material sandwiched together.
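To connect the 632.8 nm helium-neon line mentioned in (b) with the photon picture used earlier, here is a minimal Python sketch computing the energy of a single photon from E = hc/wavelength; the physical constants are standard values, not figures from this essay.

h = 6.626e-34             # Planck's constant, J*s
c = 2.998e8               # speed of light, m/s
wavelength = 632.8e-9     # red helium-neon line, in metres

energy_joules = h * c / wavelength
energy_ev = energy_joules / 1.602e-19    # convert joules to electron volts
print(f"Photon energy: {energy_joules:.3e} J = {energy_ev:.2f} eV")   # about 2 eV

This energy of roughly 2 eV is the size of the gap between the energy levels involved in the lasing transition.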
Laser Applications
Laser Surgery
The small, intense, bright beam of light can be focused with lenses to
provide a point of energy intense enough to burn through living flesh.
Laser Welding, Cutting & Blasting
Once again the laser's intense energy, when focused, makes it ideal for
providing concentrated welding and cutting.
Laser Shows
The intense color of laser light has opened up a whole new world for
laser artists to weave a new kind of art using different coloured lenses,
mirrors and crystals.
Power Generation
Laser-powered fusion holds hope of generating tremendous amounts
of electricity through the use of lasers.
Information Technology
Using fiber optic bundles to carry them, modulated laser beams can
transfer huge amounts of information (the Internet). Lasers in compact disc
players read tiny reflections on CDs and laser discs to play back
audio and video. Someday your house could be fitted with fiber optics
to carry cable TV and phone service.
f:\12000 essays\sciences (985)\Physics\Lasers.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Laser
Before we can learn about the laser we need to know a little bit about
light (since that is what a laser is made of). Light from our sun, or from an
electric bulb, is called white light. It is really a mixture of all the different
colours of light. The colours range from violet, indigo, and blue, to green,
yellow, orange, and red. These make up the visible part of the
electromagnetic spectrum. Light is made up of particles, called PHOTONS,
which travel in waves. The difference in the colour depends on the
wavelength of the light. Violet light has the shortest wavelength while red has
the longest. There are other parts of the electromagnetic spectrum which
include infra-red, radar, television, radio and microwaves (past red on the
spectrum), and on the other end of the spectrum are the other invisible
radiations, ultra-violet, X rays and gamma rays. The
wavelength of the light is important to the subject of the laser. A laser is made
up of COHERENT light, a special kind of light in which the wavelengths of
the light are all the same length, and the crests of these waves are all lined up,
or in PHASE. The word Laser is an acronym for Light Amplification by
Stimulated Emission of Radiation. What does that mean? Basically a laser is a
device which produces and then amplifies light waves and concentrates them
into an intense penetrating beam.
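Since the colour of light is set by its wavelength, a small Python sketch may help; the wavelengths used are typical textbook values for violet and red light, not figures from this essay, and the relation used is the standard c = frequency × wavelength.

c = 2.998e8   # speed of light, m/s

for colour, wavelength_nm in [("violet", 400), ("red", 700)]:
    frequency = c / (wavelength_nm * 1e-9)
    print(f"{colour}: wavelength {wavelength_nm} nm -> frequency {frequency:.2e} Hz")

The shorter violet wavelength gives the higher frequency, which is why the two ends of the visible spectrum behave so differently.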
The principles of the laser (and its cousin the maser) were established
long before these devices were successfully developed. In 1916 Albert
Einstein proposed stimulated emission, and other fundamental ideas were
discussed by V.A. Fabrikant in 1940. These ideas, followed by decades of
intensive development of microwave technology set the stage for the first
maser (a laser made up of micro-waves), and this in turn helped to produce
more advances in this area of science. These efforts culminated in July 1960
when Theodore H. Maiman announced the generation of a pulse of coherent
red light by means of a ruby crystal-- the first laser.
Laser light is produced by pumping some form of energy, such as light,
from a flash tube (see below) into a LASING material, also known as a
medium. Media can be liquids, solids, gases, or a mixture of gases, such as
the common helium-neon laser (see chart). Each medium produces a laser
with a different wavelength and therefore each medium produces different
coloured light. When the energy, in this case photons (light particles) enter
the medium they smash into the atoms of the medium. The atom then releases
another photon of a specific wavelength. When a loose photon hits an atom
that hasn't emitted its extra photon, both photons are released. That is called
stimulated emission of radiation. A single flash from a flash lamp emits
billions of pairs of photons into the medium. The photons are then released as
coherent light.
The first laser, a ruby laser, was made up of several main components.
It had a flash tube coiled around a central rod of synthetic pink ruby. In this
case the ruby is the medium. A quartz tube was located just underneath the
ruby rod. A trigger electrode was connected to the quartz tube. All of this was
enclosed in a polished aluminum casing. This was cooled by a forced air
supply.
This design was thought to be good enough but later an optical
resonator was added to redirect light in the right direction which increased
laser performance. The optical resonator was a mirror at one end of the laser
to redirect light back into the laser and a partially reflective mirror which lets
some coherent light through.
Today there are many types of lasers which include solid state lasers,
which have a solid media. The most common of this is a rod of ruby crystals
and neodymium-doped glasses and crystals. These offer the highest energy
output of all lasers. Another laser type is the gas laser, which can be made of
a pure gas or a mixture of gases or even a vaporized metal in a quartz tube.
The helium-neon laser has high frequency stability, and carbon dioxide
lasers are the most efficient and powerful continuous wave lasers. The most
compact type of laser is the semiconductor laser, made from layers of
semiconducting materials. Since these can run by direct application of electrical
current these have many uses, such as CD players and laser printers. Liquid
lasers are usually made with a synthetic dye. Their frequency can be adjusted
by a prism inside the laser cavity. An electron laser is a laser which uses free
electrons pumped to lasing capability by magnets. These are powerful
research instruments because they are tunable and a small number could
cover the entire spectrum from infrared to X rays. They should be able to
produce very high-power lasers that are now too expensive to produce.
As mentioned before, the laser has many applications in the scientific
community and in our daily lives. In medicine lasers can be used to painlessly
cut and reseal organs or remove tumours, as well as in cosmetic surgery.
Holography is a fun part of laser technology because lasers are what
create the holographic images. Microscopic objects can be made into 3D
images using x ray lasers. The information applications of the lasers are for
reading and writing data to CDs. They are used to make high-capacity audio
and video recording and playback (music CD's and laser disk players). The
militaries of the world use lasers for many things including tracking enemy
movements and as anti-satellite and ballistic missile defence weapons.
Some people might say that the laser is one of the most important
advances in human technology ever, and some might not, but it is definitely
one of the most important advances in the twentieth century.
f:\12000 essays\sciences (985)\Physics\Lenzs Law and Magnetic Flux.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. With this definition of the flux, Φ = BA cos θ, we can now return to Faraday's investigations. He found that the magnitude of the emf produced depends on the rate at which the magnetic flux changes. Faraday found that if the flux through N loops of wire changes by an amount ΔΦ during a time Δt, the average induced emf during this time is
emf = -N (ΔΦ / Δt).
This fundamental result is known as Faraday's law of induction.
The minus sign is placed there to remind us in which direction the induced emf acts. Experiment shows that an induced emf always gives rise to a current whose magnetic field opposes the original change in flux. This is known as Lenz's law. Let us apply it to the case of relative motion between a magnet and a coil. The changing flux induces an emf, which produces a current in the coil; and this induced current produces its own magnetic field. If the distance between the coil and the magnet decreases, the magnetic field, and therefore the flux, through the coil increases. The magnetic field of the magnet points upward. To oppose this upward increase, the field produced by the induced current must point downward. Thus Lenz's law tells us that the current must flow in the direction given by the right-hand rule. If the flux decreases, the induced current produces an upward magnetic field that is "trying" to maintain the status quo.
Let us consider what would happen if Lenz's law were just the reverse. The induced current would produce a flux in the same direction as the original change; this greater change in flux would produce an even larger current, followed by a still larger change in flux, and so on. The current would continue to grow indefinitely, producing power (P = I × emf) even after the original stimulus ended. This would violate the conservation of energy. Such "perpetual-motion" devices do not exist.
It is important to note, and I believe this was forgotten in the class lecture, that Faraday's investigations, as summarized in Faraday's law, say that an emf is induced whenever there is a change in flux. Thus an emf can be induced in two ways: (1) by changing the magnetic field B; or (2) by changing the area A of the loop or its orientation θ with respect to the field.
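A minimal numerical sketch in Python of Faraday's law as written above; the number of turns, coil area and field change are invented example values, not numbers from the text.

import math

# emf = -N * (delta_flux / delta_t), with flux = B * A * cos(theta)
N = 100                         # number of turns (assumed)
A = 0.01                        # coil area in m^2 (assumed 10 cm x 10 cm)
theta = 0.0                     # field perpendicular to the plane of the coil
B_initial, B_final = 0.0, 0.5   # magnetic field in tesla (assumed)
delta_t = 0.1                   # duration of the change in seconds (assumed)

delta_flux = (B_final - B_initial) * A * math.cos(theta)
emf = -N * delta_flux / delta_t
print(f"Average induced emf: {emf:.1f} V")   # -5.0 V; the sign is Lenz's law at work

Doubling the rate of change (halving delta_t) doubles the induced emf, which is the content of Faraday's law.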
A motor turns and produces mechanical energy when a current is made to flow in it. You might expect that the armature would accelerate indefinitely as a result of applied torque. However, as the armature of a motor turns, the magnetic flux through the coil changes and an emf is generated. This induced emf acts to oppose the motion (Lenz's law) and is called the back or counter emf. The greater the speed of the motor, the greater the back emf. Indeed, as the motor increases in speed, the back emf increases until a balance is reached where the speed remains constant. Thus the counter emf controls the speed of a motor.
For a given coil, the ratio of the electromotive force of induction to the rate of change of current in the coil is called the self-inductance of the coil. An alternative definition of self-inductance is the number of flux linkages per unit current. Flux linkage is the product of the flux and the number of turns in the coil. Self-inductance does not affect a circuit in which the current is unchanging; however, it is of great importance when there is a changing current, since there is an induced emf during the time that the change takes place.
The mutual inductance of two neighboring circuits is defined as the ratio of the emf induced in one circuit to the rate of change of current in the other circuit: M = emf2 / (ΔI1/Δt).
The SI unit of mutual inductance is the henry, the same as the unit of self-inductance. The same value is obtained for a pair of coils regardless of which coil is taken as the starting point, that is, M12 = M21.
f:\12000 essays\sciences (985)\Physics\leonardo da vinschi.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS:
Title page
Table of contents
Background, purpose, scope and method
Introduction to the main text - The Renaissance
Leonardo da Vinci's life
Leonardo da Vinci's inventions
Leonardo da Vinci's music and works of art
Leonardo da Vinci's "Mona Lisa"
Subjective summary
List of sources
BACKGROUND:
I have always been interested in history, and especially in learned men such as Leonardo da Vinci.
PURPOSE:
To investigate whether Leonardo da Vinci was typical of his time.
SCOPE:
I have chosen to begin with a short introduction about what a Renaissance man is. Then I intend to describe Leonardo's upbringing and, finally, his inventions and his works of art.
METHOD:
My methods have been to borrow books from the library, to consult reference works and to search through periodicals.
INTRODUCTION TO THE MAIN TEXT - THE RENAISSANCE
The Renaissance began in the middle of the 14th century and ended around the beginning of the 17th century.
Rinascità (Italian) and renaissance (French) are the word Renaissance in those languages.
The word Renaissance means rebirth and refers to the Italians' revival of the cultural heritage of antiquity after the long interruption of the Middle Ages.
The Renaissance was above all a harvest time for thoughts and ideas that had grown up during the Middle Ages and that would transform Europe's cultural landscape between the 14th and 17th centuries.
What was now created was a new view of humanity, characterized by natural science, the geographical discoveries, the interest in antiquity and languages, and the technical inventions.
What was typical of a true Renaissance man was that he (they were mostly men) should know a little of everything. He should know who the foremost artists, musicians, adventurers and so on were. Then there were those who were experts in all of these fields, and Leonardo was of course the greatest of them.
He was already famous in his own time.
Another characteristic of the period was that great ships were fitted out to sail away and find new lands rich in gold and silver.
Many of the ships never came back.
LEONARDO DA VINCI'S LIFE:
Leonardo was born on April 15, 1452 in the small town of Vinci in Italy. He worked as a painter, sculptor, fortification engineer, hydraulic engineer, mapmaker and technical designer, but at bottom he experimented only for his own purposes. In his own time he was known as a great source of inspiration and as the creator of two frescoes (wall paintings), "The Last Supper" and "The Battle of Anghiari", both of which are badly damaged or lost today.
The only person who was close to Leonardo was his friend and pupil Melzi, who preserved Leonardo's drawings, studies, plans, inventions and thoughts for posterity.
Leonardo da Vinci (Leonardo from Vinci) got his surname from his birthplace. Vinci means "pastures", and the town got its name from the landscape with its vineyards and fields.
Leonardo's father, Ser Piero, came from what was at the time a famous Florentine family. The family can be traced back to the 13th century, and descendants of Leonardo's brothers lived well into the 20th century. It was a tough and strong lineage.
The members of the family were usually very large and strong, and so was Leonardo. It is said that Leonardo could effortlessly bend a horseshoe with just one hand.
His father, Ser Piero, owned a small farm in Vinci, and the whole family was well off. Leonardo's mother was a peasant woman; only her first name is known, and it was Catarina. Catarina handed the son over into the father's care and married a simple man from Vinci. Leonardo was born out of wedlock, but at that time it was no disgrace to be of illegitimate birth.
Ser Piero married four times, but only in the third and fourth marriages were there children, eleven in all. Leonardo's uncle Francesco took care of the boy, whom the very busy father scarcely had time for. But it is known with great certainty that Leonardo's childhood was a very happy one. He lived close to nature, where there was much to observe among animals, plants and strangely shaped stones.
His artistic talent appeared early. He drew, tinkered and modelled. When he was fourteen his father placed him as an apprentice with Verrocchio (who was a famous artist in Italy), after showing the master some samples of Leonardo's skill and asking him whether he thought the boy had any prospects as a painter. "Quite certainly," the master is said to have answered. So Leonardo became an apprentice in Verrocchio's workshop. He got to help with the execution of tombstones and with making silver bowls, mosaic work, bronze statues and altar paintings. It was here that Leonardo laid the foundation of his technical skill. It took Leonardo six years to earn his master's certificate.
In Verrocchio's workshop there was also an interest in mathematics and the newly discovered theory of perspective as a means of reproducing the forms of reality. Leonardo was influenced by the new theories and devoted a passionate interest to mathematics.
There is no reliable portrait of him as a child, but people have, probably with some justification, wanted to see his features in the head of the archangel Michael in Botticini's painting "Tobias and the Angel", and perhaps also in Verrocchio's "Young David".
Leonardo also devoted himself a great deal to music. He both sang and played the lute and was famous for his social talents.
Music was an art form that at the time was favoured by the court, and the best teacher of music theory was Franchino Gafurio. His work on the practice of music is among the earliest books printed in Milan. In the edition of Gafurio's work there is a drawing of an organ player that has been associated with Leonardo. Leonardo was very musical, but among his many notes there is no written music. His interest in music led him to sketch new, strangely shaped instruments with greater volume of tone.
In his writings and drawings Leonardo repeatedly depicted visions of battles, human savagery, natural disasters and the end of the world. He drew many studies of the heads of young boys and old men.
"He does not produce many paintings," says one of his contemporaries, "for he is never satisfied with anything, however beautiful it may be." For that reason only a small number of works by him exist.
He acquired knowledge in every field, but he did so on his own.
Leonardo sometimes roamed the Arno valley and drew landscapes.
One of his greatest interests was, as I mentioned before, mathematical studies. He associated with learned men, among them a physician and magician named Paolo dal Pozzo Toscanelli, who was one of the first to believe in the possibility of reaching India by sea (he actually sent a map to Columbus). It was probably he who awakened Leonardo's interest in geography. In time Leonardo became (in this field too) a very skilled cartographer (mapmaker). He also constructed a water clock and other timekeeping devices.
Leonardo could not bear to see animals in captivity. He bought caged birds, then opened the cages and let them fly free.
At that time many artists carried a sword, but Leonardo bore no weapon. Despite his powerful build he was quiet and almost withdrawn. There was something shy and secretive in his manner. He was also very afraid that someone would steal his ideas and inventions.
LEONARDO'S INVENTIONS:
His drawings of war machines, with their cleverly conceived details, are often only symbolic expressions of Leonardo's ideas and feelings. He made a drawing of an artillery piece in which two mortars (a kind of cannon) fired two leather containers filled with balls. These containers were to split apart as soon as they had been fired, and a cloud of balls would rain down over the enemy. Each ball was to be fitted with an explosive charge that would burst in a cloud of small stars. This project was nothing he actually built; it played out only in his imagination. Fitting the projectiles with timed fuses that would make them explode at the right moment was an impossibility given the state of technology at the time; such fuses were not invented until the 19th century.
Weapons technology in Leonardo's day was by no means undeveloped. The cannon foundries in Milan were very famous. Leonardo drew an enormous crane with a cannon barrel hanging in it. As the picture shows, the barrel is of a size that was impossible to manufacture at that time. The workers busy around the giant cannon form magnificent studies of movement.
Leonardo also designed a file-cutting machine that could strike the grooves of a file evenly onto a smooth piece of metal, which could then be hardened with the methods known at the time. It worked so that when a weight had fallen, a hammer was lifted to separate the projecting edge and the cog; the weight was then raised again by means of a crank, and the stamping of the grooves on the surface of the file continued.
One of Leonardo's most famous drawings is that of an enormous ballista (a kind of catapult) with some advanced features. On the next page there is a small picture of it. The drawing is so skilfully made that it has become a classic among graphic works on engineering. Leonardo shows the great bow built up of laminated sections to obtain the greatest possible flexibility. The bowstring is drawn back with the screw and gearwheel in the lower right-hand corner of the picture. There are two firing levers (at the lower left of the picture); the upper one has a spring mechanism that is released by a blow with a mallet, and the lower one is released by a lever.
LEONARDO'S MUSIC AND ARTWORKS:
The only certain work from Leonardo's early years is the painting commissioned by the monks of San Donato a Scopeto, depicting "The Adoration of the Magi". This painting shows that he was several years ahead of his time; it shows results that painting would not achieve until the turn of the century. A large number of studies show how carefully he prepared for the work. Here Leonardo shows what he has learned. But the painting was forgotten, and thanks to that it has been preserved unchanged.
Another work from his early years was "Saint Jerome with the Lion", which was never completed and which was rediscovered, in damaged condition, at the beginning of the 19th century.
What distinguishes Leonardo's drawings and studies from those of other engineers of his time is their modern and artistic draughtsmanship. He put a great deal of art into his drawings.
Leonardo also worked on painting pictures of the Madonna. There is a whole series of paintings of the Virgin Mary with the Child that Leonardo painted. One of these is the "Madonna Litta". She is harshly painted and the dress is unlovely, but the graceful inclination of the head brings Leonardo directly to mind. One of his drawings shows a head that comes very close to the "Madonna Litta".
Leonardo gradually acquired pupils and assistants in Milan, and a school of painting began to grow up. During his first period in Milan he entered into partnership with the de Predis family. The four de Predis brothers were very skilled craftsmen: one was an engraver of coins, one a woodcarver, one a miniature painter and one a portrait painter. Together with the portrait painter Ambrogio de Predis, Leonardo began negotiations about the execution of an altarpiece in the church of San Francesco in Milan. For the central panel of the altarpiece Leonardo painted "The Virgin of the Rocks". It exists in two versions: one is in the Louvre in Paris and the other is in London. The one in London is probably a copy made by one of Leonardo's pupils under his supervision. In the version in the Louvre no halos hover above the figures' heads; their locks of hair gleam and their faces shine against the background of the half-light of the grotto.
With his technical gifts and his talents as an organizer of festivities he was much appreciated at the court of il Moro. Leonardo wrote down many jokes, riddles and curiosities with which he could amuse a company. He retold anecdotes such as the one about the painter who is asked why he has made his children so ugly but the figures in his paintings so beautiful, to which the painter replies:
- My pictures I make by day, but my children by night.
When Duke Gian Galeazzo was to be married, Leonardo designed the decorations. The great hall of the castle was transformed with green branches into a forest interior. At midnight a pageant, "Paradise", written by the court poet Bellincioni, was performed.
LEONARDO'S "MONA LISA":
The Mona Lisa is one of Leonardo's few completed works of art; in fact it is the only surviving painting that we know with one hundred percent certainty that he made. Over the years the painting has gone through many episodes; among other things it has been trimmed by about ten centimetres on each side.
The Mona Lisa was once stolen from the Louvre in Paris by a small, inconspicuous Italian in work clothes named Vincenzo Peruggia, who worked framing and glazing pictures. Vincenzo quite brazenly walked into the Louvre in August 1911, lifted the painting down and walked out again. For two years he kept the painting hidden in his attic room in Paris while the world press raged. In 1913 he took the painting on to Florence, hidden in a box with clothes and tools. He was arrested when he tried to sell it to an art dealer. At the trial he explained that his only intention had been to bring the painting back to its homeland. He got off with seven months in prison, and the painting was returned to the French government at a solemn ceremony.
The Mona Lisa has become the most written-about, sung-about and commented-upon portrait in the history of art. She has given rise to short stories, novels, songs of praise and operas. The most discussed thing about her is her smile. Some have found it cruel; it has been perceived as the "merciless smile of a woman who has subdued man". According to others it is lovable. According to Walter Pater it is an expression of "the modern soul with all its symptoms of sickness".
There are also statements that Leonardo himself wrote down in his "Treatise on Painting". He writes that by means of lute playing and reading aloud he brought out this facial expression in his model.
Not much is known about this model except that she was a very beautiful and poor young girl who married a rich man twenty years her senior from a well-regarded family. The couple had a daughter who died at an early age. The model was about 25 years old when Leonardo began the painting, and it took a good four years to finish it.
SUBJECTIVE SUMMARY
What I have concluded during the time I have worked on this project is that all the negative things that happened during the Middle Ages suddenly turned around and became something positive. Everyone had had a very hard time during the Middle Ages. People had had a mass of prohibitions hanging over them, so they had never had the chance to really do what they wanted. But then it became too much and the people "revolted" (if one can put it that way). People began to sail out into the world to discover new continents, they made technical advances, they painted pictures, they learned languages; the list can be made almost endlessly long.
And the one who distinguished himself most was of course Leonardo da Vinci. He became an expert in every single field he entered. Just look at his paintings and inventions, to mention something; they are worth millions today.
This project is something I have really enjoyed working on.
LIST OF SOURCES:
"Leonardo Uppfinnaren" ("Leonardo the Inventor"), written by Ludwig H.
f:\12000 essays\sciences (985)\Physics\Life of Georg Simon Ohm.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Georg Simon Ohm
At the time Georg Simon Ohm was born, not much was known about electricity; he was out to change this. Georg grew up in Bavaria, which is why most information about Georg is in German. There is even a college named after him: Georg-Simon-Ohm Fachhochschule Nuernberg. Much to my dismay, not a whole lot has been written about him. Usually you will find only a paragraph summarizing his life. I hope to change this flaw in the history books by telling you as much as I could find on his life.
When Georg was growing up his dad, owner of a prosperous locksmith business, wanted young Georg to study mathematics before joining the family business. Georg attended a Gymnasium, similar to a college, in Erlangen, Bavaria (now Germany). During his time at this Gymnasium a professor noticed how he excelled in math. This professor's name was Karl Christian von Langsdorf, and Georg owes this man much credit for his recommendations to others.
After he graduated he took a job teaching mathematics at Erlangen University in 1805. He spent the next years looking for a better teaching position, and found what he was looking for in 1817 when a job was made available to him at the Cologne Gymnasium. He now turned to research on electrical current. In 1827 he published Die galvanische Kette, mathematisch bearbeitet (The Galvanic Circuit, Mathematically Treated). This was a mathematical description of conduction in circuits modeled after Fourier's study of heat conduction. It contains what is now known as Ohm's Law.
Ohm's Law, which is Georg's greatest accomplishment, started as an experiment. The experiment's purpose was to find the relationship between current and the length of the wire carrying it. Ohm's results showed that as the length of the wire increased, the current decreased.
Ohm came up with a formula to state these findings. It is V=IR, where V=Voltage, I=Current, and R=Resistance. Ohm came up with a statement for this: current is equal to the tension (potential difference) divided by the overall resistance. Units of resistance, or ohms, are named after Georg Ohm. The inverse of resistance is conductance, and its unit is the mho, Ohm's name spelled backwards. This is expressed as G=1/R, so that I=GV; that is, conductance is the reciprocal of resistance.
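A quick numerical check of these relations (my own illustrative numbers, not figures from Ohm's paper):

def current(voltage, resistance):
    # Ohm's law, I = V / R.
    return voltage / resistance

def conductance(resistance):
    # G = 1 / R, measured in mhos (today called siemens).
    return 1.0 / resistance

V, R = 12.0, 4.0        # a hypothetical 12 V source across a 4 ohm resistor
I = current(V, R)       # 3.0 A
G = conductance(R)      # 0.25 mho
print(I, G, G * V)      # G * V gives back the same 3.0 A, i.e. I = G V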
Georg's work was under constant ridicule because it was said to be experiment only and irrelevant to a true understanding of nature, so he felt compelled to resign his job at Cologne. He continued his research after this time, and after six years he got another teaching job at Nuremberg. He was recognized by the Royal Society of London for his work in the 1840s: he was awarded the Copley Medal in 1841, and Charles Wheatstone publicly credited Ohm's findings in his own work. He became a foreign member of the Royal Society in 1842. In 1849 Ohm was given his dream job when he became a professor at Munich. He died 5 years later after accomplishing his dream.
Georg Simon Ohm is not a famous man by any means, but his research on electricity is still in use today. Electricity is very important, so this makes Ohm an important man even if he is in the shadows. Although Georg was the talk of the town in physics, he has somewhat faded into an unknown. I hope I have enlightened you with a few words of wisdom about Georg Simon Ohm.
Bibliography
Periodicals:
1. G. Baker, Georg Simon Ohm, Short Wave Magazine 52 (1953), 41
Books:
1. E. Deuerlein, Georg Simon Ohm, 1789-1854 (Erlangen, 1939)
2. C. Jungnickel and R. McCormmach, Intellectual Mastery of Nature, (Chicago, 1986)
3. H.S. Suttman Co., INC. , The Illustrated Science and Invention Encyclopedia, (New York, 1974)
Internet Sources:
1. http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Ohm.html
2. http://spider.ace.sait.ab.ca/~blanchar/www/ohm/ohm.htm
f:\12000 essays\sciences (985)\Physics\Longwood.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In modern engineering, a systematic approach is used in the design, operation, and construction of an object to reach a desired goal. The first step of the process employs what is commonly known as the scientific method. The next step involves forming an interdisciplinary team of specialists from not only the various engineering disciplines, but from other fields whose knowledge may be useful or even necessary to completing the project. This step doesn't apply to our project, due to the confined nature of the class. Finally, considerations must be taken into account to ensure that the project is efficient as well as cost effective.
The goal of the MOBOT Project was to design and build a programmable robot. The robot had to complete a series of four movements in four given directions over a distance of at least 6 inches. Power and weight restrictions were applied to ensure the safety of the students and, more importantly, the teacher. As the goals of the project were made clearer, our group began discussing possible ideas for the design. There were some disagreements about whether we should take the electromechanical route or the purely electrical one. After some deep thought, we all agreed that the mechanical way would be the simplest to build and the most merciful on our pocketbooks. Even though we were coming up with some good ideas, each design seemed to contain some major problems. One of the recurring problems dealt with the synchronization of the drive motor and the steering system. Finally the team came up with a design that allowed the drive and steering controls to be independent of one another, while still allowing each one to be linked in time. This design has now become what is known as LONGWOOD.
The Longwood is divided into two main parts: 1) the motion system and 2) the logic board. As the engineer, I was responsible for the motion design. Therefore, that will be the focus for the remainder of this section.
The main components of the motion system consist of a platform, three wheels, a wheel frame, two motors, and two contact switches. Two of the wheels were connected to a motor and attached at the front end of the platform. These wheels were only allowed to move simultaneously in either a forward or reverse direction. The third wheel was hooked up to the wheel frame and free to rotate approximately 45 degrees in either direction. Figure 1.1 shows an illustration of how the wheel frame works. The wheel frame and third wheel were then attached to the platform completing the basic assembly. The second motor was put near the end of the platform and is used solely to pull the logic board through a series of contact points. The final step involved setting up a canopy containing the contact switches across the platform where the switches were free to strike the logic board.
The fact that the wheel base can be controlled separately from the forward and reverse motor yielded some advantages that we thought were rather interesting. One of them is that the robot is able to make a turn while driving in reverse, instead of just forward. Another feature is that the car is capable of turning and then translating in one command. Even though this was one of the original parameters which was eliminated because it complicated matters, we felt that it couldn't hurt to have it anyway. The theory of the motion design was finished. The only obstacles that remained were the testing and fine-tuning of Longwood, a machine that was destined for success.
f:\12000 essays\sciences (985)\Physics\Luminescence of Black Light.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Luminescence of Black Light
Black Light. What is it? It is a portion of the Ultra-Violet Spectrum that is invisible to our eyes. We can
not distinguish it. However, when this radiation impinges on certain materials visible light is emitted and this is
known as "fluorescence." Fluorescence is visible to the human eye, in that it makes an object appear to "glow in
the dark."
There are several sources of ultra-violet light. These sources are: the sun, carbon arcs, mercury arcs, and black
lights. In most cases, the production of ultra-violet light creates a reasonable amount of heat.
Many materials exhibit the peculiar characteristic of giving off light or radiant energy when ultra-violet light is
allowed to fall upon them. This is called luminescence. In most cases, the wave length of the light radiated is longer
than that of the ultra-violet excitation but a few exceptions have been found.
The quantum theory attempts to explain this property by contending that a certain outside excitation
causes an electron to jump from one orbit to another. It is then in an unstable environment causing it to fall back into
its original orbit. This process releases energy, and if it is in the visible part of the spectrum, we have a transient
light phenomenon. Ultra-violet light is an exciting agent which causes luminescence to occur.
There are many materials which exhibit fluorescent characteristics. Many of which are even organic. Teeth,
eyes, some portions of the skin, and even blood exhibit fluorescent qualities. Naturally occurring minerals such as:
agate, calcite, chalcedony, curtisite, fluorite, gypsum, hackmanite, halite, opal scheelite, and willemite, also have
similar characteristics. These materials can be used in industries.
The wavelength of ultraviolet light is measured in units called angstroms. The visible fluorescence it excites is brightest between about 5000 and 6000 angstroms, the range between the green and yellow hues.
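For a sense of scale (my own added arithmetic, not part of the original discussion): an angstrom is 1e-10 m, so the 5000 to 6000 angstrom range is 500 to 600 nm, and the photon energy E = hc/lambda comes out to roughly 2.1 to 2.5 electron volts.

h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

for angstroms in (5000, 6000):
    wavelength_m = angstroms * 1e-10
    print(angstroms, "angstroms =", angstroms / 10, "nm,",
          round(h * c / wavelength_m / eV, 2), "eV")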
Ultraviolet light is not readily visible on its own; we only notice it when certain materials respond to it. Ultraviolet light is made visible by the fact that it causes a reaction at the atomic level. When it strikes the atom, some of the electrons are sent into other orbits. This creates an unstable situation which causes each electron to fall back into its place. The process releases energy, and this is what is seen; this discharge of energy is what creates the "glow" we observe. I had no idea that light could cause such a strong reaction in something. That something being an atom is even more profound. Ultraviolet light causes an electron in the atom to jump to a higher orbit and then fall back, giving off energy in the form of visible light. This is just amazing.
f:\12000 essays\sciences (985)\Physics\Magnatism.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Magnetism & the Things We THINK We Know About It!
By Austin D. Ritchie
Magnetism is a wondrous natural phenomenon. Since the days before scientific discoveries were even written down, the world has been playing with the theories of magnetism. In these three labs we dealt with some of the same ideas that have been pondered over since long before any of us were around. In these conclusions we will take a look at these ideas and find out what exactly we have learned.
To understand the results of the lab we must first go over the facts about magnetism on the atomic level that we have discovered. The way magnetism works is this: magnetism is based on the simple principle of electrons and their behavior. Electrons move around the atom in a specific path, and as they do this they are also rotating on their own axes. This movement causes an attraction or repulsion from the electrons that are unpaired. They are moving in two directions, though, causing a negative and a positive charge. In the case of magnetism we find that the magnetic elements have a lot of unpaired electrons; in the case of iron, Fe, there are four. What happens in a natural magnet is that the unpaired electrons line up throughout the magnet in a specific manner. That is, all the atoms with unpaired electrons moving in the direction which causes a certain charge are lined up on one side, and all the atoms with the opposite charge move to the other side. The atoms then start to cancel each other out as they approach the center of the magnet. This all happens above the Curie point, where the atoms are free to move; when the metal cools and becomes solid the atoms can barely move any longer, leaving a "permanent" magnet (as in the diagram on the next page). The same principle can be applied to a piece of metal that has been sitting next to a magnetized piece of metal: over the long time they are together, the very slowly moving atoms in the metal arrange themselves in the same fashion, also creating a magnet. Now that we know the basics, let us begin with the experiments.
Part one of the lab started us on our journey. In this part we took an apparatus with wire wrapped around it and put a compass in the middle of the wire wraps. The setup was arranged so that the wraps were running parallel with the magnetic field of the earth, that is, north-south. With this setup we were able to force a current through the coils of the apparatus by means of a 6 V battery, and this created a magnetic field. This is because the movement of electrons (which is what electricity is) causes the presence of a magnetic field. Knowing that we now had a magnetic field running around the compass, we began the experiment. What we did was observe the effect of the magnetic field of the coils, beginning with one coil and continuing until we had five. What we learned from this is that with every extra coil we placed around the compass, the deflection caused by the interaction of the two magnetic fields (the earth's and the coil's) increased. What this means is that not only does electricity create a magnetic field, but there is also a direct relationship between the amount of current and the strength of the magnetic field it creates. This leads us to the relationship Bc is proportional to I, and then by figuring in the constant we can derive our first equation, Bc = k I. This is also supported by the data we collected in the lab: as the measured currents went up, the amount of deflection went up, which mathematically indicates that the magnetic field strength went up.
But we don't only find this equation; we also find that as the current (or rather the magnetic field it creates) acts upon the existing magnetic field of the earth, we get the motion of the compass. This leads us to the first part of our left-hand rule. The left-hand rule for a straight conductor says that the lines of flux that are created push the north end of the compass in a certain direction (depending on which way the charge is moving). This is also borne out by our experiment's data in part one, because as we introduced the current into the earth's magnetic field we found that it created the motion of the compass. This all agrees with the left-hand rule.
Lastly, we found in this part of the lab that the magnetic field, represented by B, is a vector. We can say this because we know that a vector is anything that has both a magnitude and a direction. Now we need to show that B has these features. This can be done by looking back on our lab and remembering that the value we found for B was the strength of the magnetic field. Strength indicates that the field has a magnitude, giving us the first part of a vector. To complete the argument we look back at the lab and find that as we reversed the flow of the electrons in the coils, the motion of the compass changed as well. What this tells us is that the magnetic field of the current passing through the wire has a direction to it too. Knowing this we can deduce that B is in fact a vector. A second, less definite, way to see that B is a vector is to recall that in the equation B = k I we have one definite vector in I (from earlier labs), and since we know that you must have a vector on each side of an equation in order for it to balance, and we know that k is a constant (therefore not a vector), the only possibility is that B is in fact a vector.
In addition to these "required" conclusions we also found, as stated earlier, that when you have a current you also have a magnetic field. This is important because it gives us another means of creating magnetic fields other than the use of "natural" magnets. To put this theory into mathematical form we can use the formula Fb = B I L and say that, since we know it takes two magnetic fields to cause motion (represented in this equation by F) and we know that B is in itself a magnetic field, we can deduce that the value of "I L" is in effect equivalent to a second magnetic field.
The next lab we conducted used a factory-made coil, an ammeter to measure the current we were creating, and a bar magnet to act as a magnetic field. What we did was thrust the bar magnet, N end first, through one of the sides of the coil, and we found that this created a current. This happened because what we were actually doing was taking one magnetic field and putting it in motion, thus creating another magnetic field, which in this case showed up as an electrical current. This experiment once again deals with, obeys and exemplifies the left-hand rule, but this time for a solenoid. What that means is that as we thrust the magnet's N end into the coil, we induced a positive amount of current, simply because of the direction in which the LHR tells us the current should go. The converse is also true in this case: when you either pull the N end of the magnet out of the coil or thrust the S end into it, a negative amount of current is induced.
Our next conclusion deals with a combination of theories, namely Lenz's law and induction. We know from above that as we thrust the N end of the magnet into the coil we got a positive current and with the S end a negative current; what this shows us is that there is conservation of energy here. Conservation of energy is a main part of Lenz's law. The reason we can say this is conservation of energy is that when a current was induced, it opposed (pos/neg) the change that induced it. We can take Lenz's law further by remembering that the faster we thrust the magnet into the coil, the more current was produced. This also shows the principle of conservation of energy, because the more energy we put into the system, the more current we got back out. This can be summed up by saying that only when you have perpendicular motion of a magnetic field can a current be produced. All these currents and fields are created by what is called induction. What this means is that we are not actually touching the physical objects together (contact) but instead just placing them near each other so that their magnetic fields are "touching" and the motion or force can result.
That moves us on to the last part of the lab, where we used the same coil from part two and hooked it up in a system (pictured on the next page) in which we could measure the current strength and have our teeter-totter, with an electric current running through it, within the lines of the magnetic field of the coil. What we were able to do with this setup is run a current through the system, creating a pair of magnetic fields on the coil and the loop (on the end of the teeter-totter). The diagram below shows the setup that was used, along with a vector diagram. What this tells us is that the force, Fb or magnetic force, on the end of the TT that is inside the coil is in fact a vector. Once again that means it has both magnitude and direction. We learned last term that force is always a vector and can therefore assume that this one is too, but there is even more evidence to support this. The end of the TT that is outside the coil is being acted on by the force of gravity. This gravitational force, Fg on the diagram, has the value g * m, where "m" is the mass of the object sitting on the end of the TT and g is the acceleration due to gravity. Since we know that gravitational force is a vector and we see that the TT is balanced, we know that the forces acting on both sides of the TT must be equal, otherwise one side would be lowered, as in the next diagram (b). In diagram b we see the TT before the current, and therefore before the magnetic fields acting on each other to cause the magnetic force, has been introduced to the system; as we see, the TT is then unbalanced. Now look back at the first diagram and notice that the vector Fb and the value of Fg = g * m are equal. Since we massed the "weight" we used for uniformity and we know that the acceleration due to gravity is 9.8 m/s2, we then know the value of Fb, as well as the fact that Fb is indeed a vector that is offsetting the gravitational force vector. We know this because if Fb were not a vector the TT would never balance. We also notice that there is a mathematical relationship: the units of Fb are kg*m/s2, which is the newton, the unit of force, so Fb is the same kind of quantity, and the same kind of vector, as the gravitational force.
This leads us to the first of three very important equations. This equation,
Fb = Bc * I * Lloop
then gives us the experimental value for Bc, which is important because Bc could not be measured directly in our lab. We find this value very useful because it does not depend on any of the factory specifications for the coil, which we later show to be inaccurate. This is the most important equation in this section of the lab for that very reason: now that we know the experimental value of Bc without using the factory specs, we can use that value in the next two equations to find experimental values for the factory constants and therefore test those stated values.
The next equation,
Bc = k * Ic * Iloop * Lloop
now serves two purposes. First, it allows us to calculate a "factory" value for the magnetic field, knowing the length of the loop (L), the currents through the loop and coil (I), and the constant (k) from the factory. We do this so that we can compare this value to our experimental value for Bc and see how close they are. Second, we can plug in the experimental value for Bc, the two I's and the L, and find a value for "k" based on our data. We then compared the two numbers for each and found that in actuality the factory and the experiment disagree, but only slightly. This could be due to error either on our part or on the factory's, but at least it lets us know that we are relatively close.
Lastly, we look at the equation,
Bc = u * N * I / L
(u being the permeability constant), which does the same basic thing as the previous one, except that in this one we can plug in all the numbers but the number of turns (N) and then solve for the experimental number of turns. Or we can plug in the factory number of turns and everything else except Bc and solve for that, leaving us with another factory value for Bc. Once again we compare these numbers to the numbers we had previously, and this time we find that the experimental number of turns on the coil is lower by a large margin and that Bc from this equation is extremely different from the ones solved for above. What this told us was that while the factory value for "k" was relatively close, the factory's stated number of turns is actually way off.
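As a quick sanity check on these formulas (with invented numbers, since the lab's actual measurements are not reproduced here), the following sketch computes Bc two ways: "experimentally" from Fb = Bc * I * L, and from the solenoid formula Bc = u0 * N * I / L.

import math

mu0 = 4 * math.pi * 1e-7     # permeability of free space, T*m/A

def B_from_force(F_b, I_loop, L_loop):
    # Coil field inferred from the measured force: Bc = Fb / (I * Lloop).
    return F_b / (I_loop * L_loop)

def B_solenoid(N, I_coil, length):
    # Coil field from the solenoid formula Bc = u0 * N * I / L.
    return mu0 * N * I_coil / length

# Hypothetical values: a 0.002 N force on a 5 cm loop carrying 2 A,
# versus a 1000-turn coil 0.1 m long carrying 0.5 A.
print(B_from_force(0.002, 2.0, 0.05))   # 0.02 T
print(B_solenoid(1000, 0.5, 0.1))       # about 0.0063 T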
All this leads us to the way that the Earth's magnetic field works. We have used this field in the lab but not defined it, and through our experiment we can make some conclusions. What we learned, combined with the diagrams and researched data that we acquired, shows us that the earth does not have a bar magnet in the middle of it making it attract and repel things like compasses; rather, something else is going on. After searching and thinking hard we found that the earth actually has no permanent magnet at its center, and that the magnetic pull we feel comes instead from the friction (friction can induce a current, as in earlier labs) between the outer layer of molten earth and the bottom of the crust, with that current then creating a magnetic field, as we know occurs. We can say that there is no permanent magnet in the middle because we know that the center of the earth is extremely hot, so it must be above the Curie point, the temperature below which a magnet's electrons settle into place and, when cooled, create a magnet. What this means is that it is simply too hot for a magnet to exist at that temperature. We also know that there is no magnet there because, on the atomic level, a magnet cannot exist in a liquid: a strong magnet requires uniformity, and the molecules in a liquid are too "loose", that is, too free to move. Since the center of the earth is molten, a liquid, a magnet cannot exist there. But this does not explain all of what we have learned. We also see that the magnetic "poles" of the earth are not quite what we think of them as. As the next diagram shows, the earth's poles are actually made up of a magnetic north and south pole and a geographic north and south pole, and these poles differ: the magnetic poles are slightly off center from the geographic poles. Along with this we can say that, because of the scientists of the past, we actually call the magnetic south pole the north pole and vice versa. This is not due to some phenomenon but rather to the fact that when we think of the north pole we think of the earth's pole that the north end of a compass (or any magnet) is attracted to. This is actually the south end of the earth's magnetic field, which explains the confusion.
All of this was learned on our very difficult trip through the world of the magnet, and now that we have conducted these experiments, done the research, and made these conclusions, we know that much more about the voodoo world of the magnet!
f:\12000 essays\sciences (985)\Physics\Memory.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Memory
Memory is a vital tool in learning and thinking. We all use memory in our everyday lives. Think about the first time you ever tied your shoe laces or rode a bike; those are all forms of memory, long term or short. If you did not remember anything from the past, you would never learn and thus would be unable to progress. Without memory you would simply be exposed to new and unfamiliar things. Life would be bare of the richness of its happiness or sorrow. Many scientists are still unsure of exactly what happens and how memory works. They are certain, though, that it involves chemical changes in the brain which alter its physical structure (Loftus p. 392). It has been found after much research that new memory is stored in a section of the brain called the hippocampus (Loftus p. 392). Memory is acquired by a series of solidifying events, but more research is still needed to discover and fully understand the process (Loftus p. 392).
Memory is broken down into three systems or categories. These different systems are sensory memory, short-term memory, and long-term memory. Sensory memory is the shortest and least extensive of the three. It can hold a memory for only an instant (Memory p. 32). Suppose you see a tree; the image of the tree is briefly held by the sensory memory and quickly disappears unless you transfer it to your short-term memory (Rhodes p. 130). The next level is called short-term memory. The image or fact can be held as long as the brain is actively thinking about it (Loftus p. 392). For example, if you look up a number in the phone book and repeat it to yourself until you dial it, that is a form of short-term memory. Short-term memory lasts roughly half a minute unless it is transferred to long-term memory. Long-term memory is the last and final stage of memory. It is so large and limitless it can hold nearly anything (Loftus p. 392). Long-term memory can hold something that is only a few moments old or something from many, many years ago.
Memory can be measured in three ways. These techniques include recall, recognition, and relearning (Loftus p. 393). Suppose someone asks you who was at a party. When you try to list everyone you saw, that is known as recall. The other form is recognition. For example, the person asking might show you a list of names; the list contains names of people who were at the party and names of those who were not. "In relearning you would memorize the guest list after apparently forgetting it" (Loftus p. 393).
There are many questions about why people forget. Scientists still do not know exactly how people forget. Not surprisingly, people forget more and more as time progresses. The chief explanations for forgetting include interference, retrieval failure, motivated forgetting, and constructive processes (Loftus p. 393). "Interference occurs when the remembering of certain learned material blocks the memory of other learned material" (Loftus p. 393). Retrieval failure is the inability to recall material or data that has been stored (Loftus p. 393). An example of this is when you try to think of a certain date or number but fail to remember it; later it will come naturally without any effort. The third reason is a loss of memory caused by conscious or unconscious desires, called motivated forgetting (Stevenson p. 393). Scientists believe that many of us forget on purpose because we choose to. Motivated forgetting is closely related to a process motivated by the needs and wishes of the individual called repression (Memory p. 33). A very good example is when people gamble: they choose to remember all the times that they have won, and not the times that they lose. The last explanation of forgetting is constructive processes. This involves the unconscious invention of false memories. Memories become systematically distorted or changed over a long period of time (Memory p. 33). When people try to remember a certain fact that occurred a long time ago, they tend to fill in the gaps with information that is not true.
There are many ways to improve memory. Not surprisingly, practice makes perfect, and the devices people use include rhymes, clues, mental pictures, and other methods (Rhodes p. 130). One method provides clues by means of an acronym, a word formed from the first letters or syllables of the words to be remembered (Rhodes p. 132). A mental picture can be provided by the key-word method, which is particularly useful in learning foreign words (Rhodes p. 135). Mental pictures can also be used to remember names: when you meet a person for the first time, pick out a physical feature of the individual and relate it to his or her name. Mnemonic devices like these can be used at any time you wish. A good way to ensure remembering a certain piece of information is to study it over and over until you know it perfectly. The more thoroughly you study something, the chances are, the more lasting it will be.
There are times when uncommon memory conditions occur. Sometimes you hear of people having a photographic memory. No one really has a photographic memory, but there are many people who have eidetic memory (Loftus p. 394). Eidetic memory is a picture that remains in a person's mind for a few seconds after the picture itself has disappeared (Loftus p. 394). People who have this imagery can look at a scene and describe it, though not exactly accurately. It is rare to have this way of remembering a picture; scientists say that only about 5 to 10 percent of children have it (Loftus p. 394). Even the children who do have it lose it as they grow up. A more serious condition is called amnesia. This can result from disease, injury, or emotional shock (Loftus p. 394). Many cases of amnesia, even more severe ones, are usually temporary and do not last very long. The more severe the injury, the greater the loss of memory. Football players and other athletes have the greatest chance of being affected. Someone who suffers brain damage from a car accident might lose months or years of memory. In general, memories are less clear and detailed than perceptions, but occasionally a remembered image is complete in every detail.
f:\12000 essays\sciences (985)\Physics\Newtons First Law of Motion.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Newton's First Law of Motion
Sir Isaac Newton was, in my mind, one of the greatest people who ever lived. He was born in 1642 and died in 1727. He formulated three laws of motion that help explain some very important principles of physics. Some of Newton's laws could only be tested under certain conditions; actual observations and experiments confirmed that they are true. Newton's laws tell us how objects move by describing the relationship between force and motion. I am going to try to explain his first law in simpler terms.
Newton's first law of motion states: a body continues in its state of rest or uniform motion unless an unbalanced force acts on it. The tendency of a body to stay at rest or in uniform motion is called inertia.
Let's say that someone parks a car on a flat road and forgets to put the vehicle into park. The car should stay in that spot; this is inertia. All of a sudden the wind picks up, or some kid crashes into the car with a bike. Both the wind and the kid's bike crashing into the car are unbalanced forces. The car should start to move, and it might accelerate to two miles per hour. Now we would all assume that the car would come to a stop sometime. We assume this because it is true. It is true because there is friction between the tires and the road. The car now has inertia in uniform motion, but since there is friction, the car cannot keep moving forever, because friction is an unbalanced force acting upon the tires.
What if there were not any friction? The car would keep going forever, that is, if there were no wind or hill or any other unbalanced force acting upon the car. This is rather weird just to think about, because it usually would not happen in our everyday world; you just would not see a car go on forever.
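As a rough illustration of the role friction plays here (my own numbers; the rolling-resistance coefficient is an assumption, not something from the essay), a short calculation of how far the two-mile-per-hour car would coast before friction stops it:

mu_rolling = 0.015          # assumed rolling-resistance coefficient
g = 9.8                     # acceleration due to gravity, m/s^2
v0 = 2 * 0.447              # 2 mph converted to m/s

a = mu_rolling * g          # deceleration, since the friction force is mu * m * g
distance = v0**2 / (2 * a)  # from v^2 = 2 * a * d
print(round(distance, 1), "metres")   # roughly 2.7 m of coasting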
An easy experiment to demonstrate this law is to take a glass jar and put an index or a heavier than paper card over the top of the glass jar. Next, place a coin on the index card. Be sure that the index card is strong enough to support the penny without bending itself. Now place your finger about three centimeters away from the card and flick the card out from underneath the coin. The coin should fall into the glass jar. The inertia of the coin keeps it in place even when the card is moving underneath it.
f:\12000 essays\sciences (985)\Physics\Newtons Law of Universal Gravitation.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Newton's Law of Universal Gravitation
Gravity is one of the four fundamental forces in the universe, though the fundamental principles of it eluded scientists until Sir Isaac Newton was able to mathematically describe it in 1687 (Eddington 93). Gravity plays a serious part in everyday actions as it keeps everything on the ground; without gravity everything would be immobile unless a force was applied (and then it would move indefinitely because there would be no force to stop it).
Perhaps, the best place to start then would be with such a simple item as an apple (after all it is what "sparked" Newton's creativity). The apple is one of the two curiosities (the other being the moon) that led Newton to discover The Law of Universal Gravitation in 1666 (Eddington 93). As Newton later wrote, it is the story of the sight of an apple falling to the ground (he was resting at Woolsthorpe because of the plague at Cambridge) that caused Newton to wonder if this same force was what held the moon in place (Gamow 41).
Newton knew that an object fell to the earth at a rate of about 9.8 meters (32 feet) per second per second, as pointed out by Galileo. Thus "the apple that fell from the tree" fell to Earth at about this rate. For the first basic explanation of this we will assume a linear plane, one in which all forces act in only one direction. Therefore when the apple fell it went straight towards the center of the earth (accelerating at about 9.8 meters per second per second). Newton then figured that the same force that pulled the apple to Earth also pulls the moon to the earth. But what force keeps the moon from flying into the earth, or the earth from flying into the sun (Edwards 493)?
To better understand this, one other aspect must first be understood. Galileo showed that all objects fall to the earth at the same rate (the classic cannonball and feather demonstrated this). But why? If a piano and a saxophone were both dropped from the top of the Empire State Building, they would both slam into the ground at the same time. Newton realized that the moon and the apple were both being pulled towards Earth in the same way, yet the moon was the one that resisted simply falling in and stayed in its elliptical orbit (Eddington 94).
Newton's Third Law of Motion says that every force exerted by one object on another is matched by an equal force, opposite in direction, exerted by the second object on the first (every action has an equal but opposite reaction). So the force of the earth pulling the apple to the ground is proportionally the same as the force the apple exerts back on the earth.
Now Johannes Kepler lived some forty-five years before Isaac Newton. And he showed that the orbits of the planets in our solar system were elliptical. When the time of Newton came around he mathematically proved that, if Kepler's First Law was true, then the force on a planet varied inversely with the square of the distance between the planet and the sun. He did this using Kepler's Third Law (Zitzewitz 160). The distance in this formula is from the center of the masses and is the average distance over their entire period. It is also important to note that the force acted in the direction of this line (an important factor when dealing with vectors) (Zitzewitz 160).
Figure 1
Newton, confident in his idea that all objects exert a force back on Earth, devised a formula for Universal Gravitation. It is important to note that Newton was not the first to think of Universal Gravitation; he was just the first one to make considerable and remarkable proofs for it based on mathematical explanations. He said that if force is related to the mass of an object and its acceleration, then the force between two objects must involve both masses. Thus he came up with the first part of the equation. Also, as he had proved earlier using Kepler's Third Law, the force between two objects is inversely proportional to the square of the distance between them (an inverse square law), so that must also be part of the Universal Gravitation equation. Thus we know that the two masses and the distance are related to the force; and because the distance is inversely proportional, the product of the masses divided by the distance between their centers squared must be proportional to the force between the two objects (Zitzewitz 161).
Now earlier, Newton had proved that the force on an object was proportional to an object's mass and its acceleration. And the equation that he had formulated so far did not include anything that would resemble the acceleration. Thus he knew that a gravitational constant must be present and that it should be the same throughout all of the universe. However, due to scientific limitations he was never able to figure out the exact value of this constant (Zitzewitz 161).
Figure 2
About one hundred years later, the English scientist Henry Cavendish devised a complex apparatus that was able to measure this gravitational constant. Basically, by using very sensitive telescopes and known angles he was able to determine how far one ball moved another ball. This is often known as "weighing the earth" (Zitzewitz 162-163).
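To make the law concrete (a worked example of my own, using the modern value of the constant that Cavendish's experiment made it possible to determine), here is the force the law predicts between the Earth and a 1 kg object at the Earth's surface:

G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_earth = 5.972e24   # mass of the Earth, kg
r_earth = 6.371e6    # mean radius of the Earth, m

def gravitational_force(m1, m2, r):
    # Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
    return G * m1 * m2 / r**2

# For a 1 kg mass at the surface this gives about 9.8 N, which matches
# the 9.8 m/s^2 acceleration quoted earlier in the essay.
print(gravitational_force(M_earth, 1.0, r_earth))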
The effects of Newton's Law of Universal Gravitation were varied, but the most celebrated use of his law was the prediction of planets beyond Saturn. By the 1830s, it appeared that Newton's Law of Universal Gravitation might not be correct, because the orbit of Uranus did not follow his law. Some astronomers thought that the pull of an undiscovered planet might be changing its course, and in 1846 astronomers at the Berlin Observatory began searching for this hidden planet. It did not take very long: the massive planet now known as Neptune was found on the first night of searching (Zitzewitz 164).
Perhaps one of the key things about any theory of gravity prior to Einstein was the fact that none of them proposed the origin of gravity. Newton's law always proved to be true in the common world but did not explain the source of the force (Eddington 95). Albert Einstein proposed his theory of gravity in his General Theory of Relativity. In it he said that space was like a three-dimensional surface and that masses curved this surface in one way or another (Eddington 95). Thus a massive object would cause a large "hole" and smaller objects would "orbit" it. It is interesting to note that in either case, Newton's or Einstein's, both prove to be true in the common world. Massive universal objects, such as black holes, are an exception, but that is another story in itself (Edwards 498).
Works Cited
Zitzewitz, Paul W., Robert F. Neff, and Mark Davids. (1992). Physics: Principles and Problems.
Peoria, Illinois: Glencoe.
Gamow, George. (1962). Gravity: Classic and Modern Views. Garden City, New York:
Anchor Books.
Eddington, Sir Arthur. (1987). Space, Time, & Gravitation: An Outline of the General
Relativity Theory. Cambridge: Cambridge University Press.
Edwards, Paul. (Ed.) (1967). The Encyclopedia of Philosophy. New York, New York:
MacMillan.
f:\12000 essays\sciences (985)\Physics\Newtons Method A Computer Project.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Newton's Method: A Computer Project
Newton's Method is used to find a root of an equation, that is, a value of x at which the function f[x] is equal to zero. Newton's Method is a procedure created before the days of calculators and was used to find approximate roots. The roots of the function are where the function crosses the x axis. The basic principle behind Newton's Method is that a better estimate of the root can be found by subtracting the function divided by its derivative from the initial guess of the root, and then repeating this step.
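A minimal sketch of the iteration just described (my own illustration in Python rather than the Mathematica program used for the project), applied to f(x) = x^2 - 2:

def newton(f, df, x0, tol=1e-10, max_iter=50):
    # Newton's Method: repeatedly replace the guess x with x - f(x)/df(x)
    # until f(x) is close enough to zero.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)   # the core Newton step
    return x                 # best estimate after max_iter iterations

# Example: a root of f(x) = x^2 - 2, starting from the initial guess 1.0.
print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))   # about 1.41421356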
Newton's Method worked well because an initial guess was given to put into the equation. This is important because a wrong initial guess may give you the wrong root for the function.
With Mathematica, a program for Newton's Method can be produced and a graph of the function can be made. From the graph, a good initial guess can be made.
Although Newton's Method works to find roots for many functions, it does have its disadvantages. The root sometimes cannot be found by using Newton's Method. The reason it sometimes cannot be found is that when the derivative is equal to zero, the tangent line is horizontal and never crosses the x axis, so the next estimate cannot be computed.
As seen in our experiments, it is important to select an initial guess close to the root because some functions have multiple roots. Failure to choose an initial value that is close to the root could result in finding the wrong root or wasting a lot of time doing multiple iterations while slowly closing in on the actual root.
On some occasions, the program cannot find a root from the initial guess that is supplied. In some instances Mathematica could not find the root of the function, for example a parabola whose vertex lies exactly on the y-axis with its roots an equal distance away in both directions. In a case like this, the computer cannot decide which root to work towards, so it gives an indeterminate answer.
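The indeterminate case described above can be reproduced with the sketch shown earlier (assuming the newtons_method function defined there): for f(x) = x^2 - 4 the roots sit an equal distance from the y-axis, and a guess placed exactly at the vertex gives a zero derivative, so no next estimate exists.

# f(x) = x^2 - 4 has roots at -2 and +2, an equal distance from the y-axis.
f  = lambda x: x**2 - 4
df = lambda x: 2*x

try:
    newtons_method(f, df, x0=0.0)     # the guess sits exactly on the vertex
except ZeroDivisionError as err:
    print("No root found:", err)      # the tangent at x = 0 is horizontal

print(newtons_method(f, df, x0=1.0))  # a guess nearer one root converges to it: 2.0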
Although Newton's Method does have its disadvantages, it is very effective for finding the roots of most equations. The advantages definitely outweigh the slight disadvantages, and that is
why it is still used to this day.
f:\12000 essays\sciences (985)\Physics\Nikola Tesla.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Few people recognize his name today, and even among those who do, the words Nikola Tesla are likely to summon up the image of a crackpot rather than an authentic scientist. Nikola Tesla was possibly the greatest inventor the world has ever known. He was, without doubt, a genius who is credited not only with many devices we use today, but also with astonishing, sometimes world-transforming devices that remain amazing even by today's scientific standards.
Tesla was born at precisely midnight between July 9th and 10th, 1856, in a small village in what is now Croatia. His father was a priest, and his mother an unschooled but extremely intelligent woman. Training for an engineering career, he attended the Technical University of Graz, Austria, and was soon employed in a government telegraph engineering office in Budapest, where he made his first invention, a telephone repeater. Tesla sailed to America in 1884, arriving in New York City with four cents in his pocket and many great ideas in his head. He first found employment with Thomas Edison, but the two inventors were far apart in background and methods, and because of their differences Tesla soon left Edison's employment. After a difficult period, during which Tesla invented but lost his rights to many inventions, he established his own laboratory in New York City in 1887, where his inventive mind could be free. In 1888, George Westinghouse, head of the Westinghouse Electric Company in Pittsburgh, bought the patent rights to many of Tesla's inventions. In 1895, after hours upon hours of experimentation, Tesla produced X-ray images. His countless experiments included work on different power sources and various types of lighting. The Tesla coil, which he invented in 1891, is widely used today in radio and television sets and other electronic equipment for wireless communication. That year also marked the date of Tesla's United States citizenship. Brilliant and eccentric, Tesla was then at the peak of his inventive powers. He produced new forms of generators and transformers, invented the fluorescent light, and became deeply involved with the wireless transmission of power.
During the 1880s and 1890s Tesla and Edison became rivals, each racing to develop his inventions as quickly as possible. In 1915 Tesla was severely disappointed when a report that he and Edison were to share the Nobel Prize proved unfounded. Earlier, Edison had gone back on a promise to pay him a sum of money for a particular invention, and Tesla had broken off relations at once and gone into the inventing business for himself. The greatest rivalry with Edison was over Tesla's development of alternating current, which conflicted directly with Edison's chosen form of electricity, direct current. This great power struggle practically ended when Tesla's alternating current won out, becoming the favored and more practical choice. Tesla's alternating current was used to light the Chicago World's Fair. His success was a factor in winning him the contract to install the first power machinery at Niagara Falls, which bore Tesla's name and patent numbers. The project carried power to Buffalo by 1896. In 1898 Tesla announced his invention of a teleautomatic boat guided by remote control. When skepticism was voiced, Tesla proved his claims for it before a crowd in Madison Square Garden.
The biggest controversy in Tesla's career is what most popularizes his name today: the fact that Tesla made hundreds of inventions and discoveries that were simply amazing. Many people have called Tesla "a man out of his time" because of his astonishing experiments. In Colorado Springs, where he stayed from May 1899 until early 1900, Tesla made what he regarded as his most important discovery: terrestrial stationary waves. By this discovery he proved that the earth could be used as a conductor and would be as responsive as a tuning fork to electrical vibrations of a certain pitch. He also lighted 200 lamps without wires from a distance of 25 miles and created man-made lightning, producing flashes measuring 135 feet. He was fond of creating neighborhood-threatening electrical storms in his apartment laboratory and once nearly knocked down a tall building by attaching a mysterious "black box" to its side. He claimed he could have destroyed the entire planet with a similar device. Caustic criticism greeted his speculations concerning communication with other planets, his assertion that he could split the earth like an apple, and his claim to have invented a death ray capable of destroying 10,000 airplanes at a distance of 250 miles. Because of a lack of funds, his ideas remained in his notebooks, which are still examined by engineers for unexplored clues. Many of these were eventually inherited by Tesla's nephew and later housed in the Nikola Tesla Museum in Belgrade, Yugoslavia. However, a major portion of his notes was impounded by the US government, and very few of those have surfaced since. Because he kept so few notes, to this day we can only guess at the details of many of the fantastic scientific projects that occupied him. Many questions have been raised concerning his confiscated notes, although the government maintains that some never existed and has declared others "lost." Was he working on particle weapons and cloaking devices for the United States government when he died? Was Reagan's Strategic Defense program, known as "Star Wars," the result of secret research based on Tesla's discoveries half a century before?
Nikola Tesla allowed himself only a few close friends. Among them were the writers Robert Underwood Johnson, Mark Twain, and Francis Marion Crawford. In his later years, Tesla was alone with only his inventions and calculations, although he did breed pigeons later in life, giving them all the affection he was unable to give to human beings. Tesla's name is attached to over 700 patents. Tesla died privately and peacefully at the age of 86, on January 7, 1943, in a New York hotel room, of no apparent cause in particular. Hundreds filed into New York City's Cathedral of St. John for his funeral services, and a flood of messages acknowledged the loss of a great genius. Three Nobel Prize winners in physics (Millikan, Compton, and W. H. Barton) addressed their tributes. Nikola Tesla was one of the outstanding intellects of the world, one who paved the way for many of the technological developments of modern times.
f:\12000 essays\sciences (985)\Physics\Nuclear Energy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
During the twentieth century scientists discovered how to unleash the most powerful energy of all: nuclear energy. The study of nuclear energy began for the same reason that most scientific studies are begun: to understand more about the universe and the laws by which it works. The more knowledge we have about the universe, the more we can control the world in which we live.
Nuclear energy is contained in the center, or nucleus, of an atom. This energy is also known as atomic energy because it is obtained from atoms; unfortunately this is not a good choice of words, because many other forms of energy are also obtained from atoms. An atomic bomb explosion shows just how powerful nuclear energy really is, such as the underwater explosion of an atomic bomb at Bikini in 1946. This powerful type of energy comes from atoms and subatomic particles. An atom is a tiny bit of matter that has very little weight. Atoms are much too light to be weighed directly, but scientists have developed methods of determining these tiny weights using special laboratory instruments. Hydrogen is the lightest of all atoms, and carbon atoms weigh twelve times more than hydrogen atoms. Atoms that make up one element are not like atoms that make up another element.
These atoms are not simple particles; their structure is very complex. They are, in fact, made up of smaller bits of matter called subatomic particles. An atom has two parts: 1) at the center, a nucleus, a densely packed core composed of two kinds of particles, protons and neutrons, and 2) electrons. The charge in the nucleus of an atom is carried by particles called protons, and the number of protons in an atom's nucleus is called the atomic number of the atom. Atomic numbers are always whole numbers, such as 92, and each chemical element has its own atomic number. Protons have a positive electrical charge while electrons have a negative charge, and since opposite charges attract, this keeps the electrons in their orbits around the nucleus. Neutrons are neutral and weigh slightly more than protons. The breaking apart or joining together of atomic nuclei is called a nuclear reaction. A tremendous amount of energy may be released by a nuclear reaction. Nuclear energy is harnessed in a nuclear reactor. There are many types of nuclear reactors, such as the pressurized water reactor.
There is no specific place where nuclear energy may be found, nor is there a special geographical place where a nuclear power plant must be located. There are many hundreds of nuclear power plants throughout the world, not to mention many military and research reactors. In the United States alone there are over a hundred plants; about 15% of the nation's electricity is produced at these plants. France is the leading user of nuclear energy in the world, with 65% of its electricity produced in nuclear power plants. Other leaders include Belgium at 60%, Sweden at 50%, Switzerland at 40%, Finland at 38%, West Germany at 30%, Japan at 22%, Spain and Britain at 20%, and the Soviet Union at 14%. There have been several accidents at nuclear power plants, despite a generally good safety record. Two of the most serious accidents took place in 1) the United States, when the Unit 2 reactor at Three Mile Island, southeast of Harrisburg, Pennsylvania, suffered a partial meltdown in 1979, and 2) the U.S.S.R., where the worst accident involving nuclear reactors occurred on April 26, 1986 at the Chernobyl reactor No. 4, located in a reactor complex near Kiev, Soviet Ukraine.
There are advantages and disadvantages of nuclear power. Nuclear weapons are the most powerful and fearsome weapons ever introduced, not only because of the enormous amount of physical damage that results from their detonation but also because of the radiation they release. Radiation can have long-term effects on humans and our environment. These long-term effects include the following:
1) Radiation disrupts processes that take place in the nuclei of living organisms' cells, decreasing resistance to disease and increasing the risk of developing cancer.
2) Abnormalities such as Down syndrome can be passed to offspring. The damage done to our environment is irreparable. Furthermore, it is unknown whether the disposal of nuclear waste will have any effects on our environment, and we will not know for many years to come. The advantage of nuclear energy is that it is a great source of power, used to produce electricity for things such as light.
Many people throughout the world use nuclear energy in their everyday lives. It is used to produce electricity for running our homes, schools, and businesses, as well as for lighting, heating, and cooling.
Nuclear energy is better than any other type of energy because it is the most powerful type of energy that can be created.
It is better than the following types of energy, for example: 1) hydro energy, because creating it requires blocking rivers with dams, which most likely floods lands where certain animals make their homes, destroying those homes, perhaps killing the animals, and in some cases possibly causing extinction; 2) heat energy created by the burning of wood, because it is very destructive to our forests, and trees provide necessary oxygen, shelter, and food to animals and even humans; and finally 3) solar energy, because with nuclear energy you don't have to depend on the sun to shine.
In conclusion, due to the potential for accidents or sabotage at nuclear power plants, they are not as common as other types of power plants. Because of the problems associated with the radioactive waste they produce, these plants have long been a subject of controversy. The development of nuclear energy, for both peaceful and wartime uses, is much more than a scientific issue; it is also a prominent public issue among people in all nations. Because of disasters at nuclear power plants such as the one at Chernobyl, people everywhere refuse to accept nuclear power plants anywhere near their homes. They also refuse to have nuclear waste disposal sites near their homes, no matter how safe the government and companies may say they are.
f:\12000 essays\sciences (985)\Physics\Nuclear Fission.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nuclear energy is the energy that binds together the components of an atomic nucleus. It is released through the process of nuclear fission, which occurs when an atomic nucleus is split. Nuclear power is produced in a nuclear reactor, which is usually located in a nuclear power plant. In fission, a heavy nucleus splits into two smaller nuclei, and the power produced depends on the rate at which nuclei are split, which in turn determines how much electricity can be generated.
Most of the energy released by nuclear fission appears as the kinetic energy of the fission fragments, and those fragments are radioactive. Radioactive nuclei can be contained and used as fuel for the power; most of this power is fueled by uranium isotopes, which are highly radioactive. The fuel absorbs fast-moving neutrons created by the splitting atoms; as the neutrons strike other nuclei, those nuclei split again and again, and the reactor captures the resulting heat so it can be converted into electrical power.
This process produces heat, or thermal energy, which must be removed by some kind of coolant. Most power plants use water or another liquid; these coolants are always basic, never acidic. A few plants use gas coolants in their reactors; these are known as gas-cooled thermal reactor power plants. Another reactor type runs on uranium oxide, an oxide form of solid uranium. The fuels that produce the radioactive particles are usually highly radioactive themselves, so all power plants maintain high safety standards and use special shields to prevent leakage, since leakage can cause nuclear contamination.
After nuclear fission has occurred, many of the neutrons are moving at thermal velocities, which makes them harder to absorb, so plants rely on constructional details for shielding. Thick metals such as lead or tungsten are often used, although the barrier is now usually made of concrete. The average shield at a power plant is twelve to fourteen feet in diameter and fifteen to twenty feet high.
This leaves the problem of gamma rays leaking out into the surrounding environment. This would usually only happen in a time of crisis, which is why shields are so badly needed. Because of this, there are secondary shields that are used only in cases of extreme emergency. Such an event triggers the fast-paced emergency reactions. The secondary emergency system reacts by enclosing the reactor in a gastight chamber. This chamber has airlocks that are double sealed and usually two sided. The shield covers the entire reactor and the primary coolant system, and all the coolant vents are automatically shut off. This safely contains the fission products inside the shell. Another safeguard is the negative temperature coefficient system, which seals off the reactor and pumps in gases with sub-zero temperatures; this slows the thermal neutrons, slows the fission, and finally stops the radiation particles. These procedures are highly effective in preventing contamination of the local community.
Because of all the possible damage, nuclear power plants are designed and operated in a manner that emphasizes the prevention of any accidental release of radioactivity into the environment. There has never been a death caused by a commercial nuclear power plant located in the United States of America. Still, the potential for cancer and genetic damage as a result of an accidental release of radioactivity has led to increased public concern about the safe operation of reactors.
Although the direct health effects of a release of radioactivity into the environment are still being investigated, the psychological effects of an accident can damage the nuclear power industry's credibility. International concern over the issue of reactor safety was renewed following an accident at a facility in the Soviet Union in April 1986. The Chernobyl nuclear power plant, located about 80 miles northwest of Kiev in Ukraine, suffered a catastrophic meltdown of its nuclear fuel. A radioactive cloud spread from the plant over most of Europe, contaminating a very large amount of crops and livestock; lesser amounts of this radiation showed up even farther away.
These are some of the reasons why people and communities are very cautious about nuclear power. I hope that this report can better inform people on this issue; even though nuclear energy is the cleanest and supposedly the safest form of energy, I remain undecided.
f:\12000 essays\sciences (985)\Physics\Nuclear Power 2.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Nuclear Power
Producing energy from a nuclear power plant is very complicated. The process of nuclear energy involves the fission of atoms, the release of energy from fission as heat, and the conversion of that heat to electricity in power plants.
The process of splitting the atom is called nuclear fission. Fission can take place in many different kinds of atoms. This explanation uses uranium-235, the atom most commonly used in nuclear reactors. The uranium atom has many protons, which makes it unstable. Since the nucleus of the atom is so unstable, it wants to split apart, causing a spontaneous fission. When the nucleus of a uranium atom splits, it splits into two atoms. Commonly the nucleus splits into barium and krypton; however, it can split into any two atoms as long as the number of protons equals the original number of protons found in the uranium. In addition, a massive amount of energy is released along with two or three neutrons. It is these neutrons that can begin a chain reaction: each neutron that is given off can collide with another uranium atom, splitting it apart. Each of these fissioning atoms releases a very large amount of energy, and some more neutrons.
This process continues, causing a chain reaction without any outside assistance, and the uranium has "gone critical" (Martindale, 794-795). This chain reaction is the basis for how nuclear power is made.
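As a toy illustration of why an unchecked chain reaction grows so quickly, the short sketch below counts fissions generation by generation, assuming each fission triggers on average k new fissions; the value of k is an assumption for illustration, not a reactor model. Holding k near one is exactly what the control rods described later are for.

# Toy illustration of a chain reaction (not a physical reactor model).
# k is the average number of new fissions each fission triggers; its value is assumed.
def fissions_per_generation(k, generations):
    """Number of fissions in each generation, starting from a single fission."""
    counts = [1]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print(fissions_per_generation(k=2, generations=10))   # uncontrolled: 1, 2, 4, ... 1024
print(fissions_per_generation(k=1, generations=10))   # held "critical": steady at 1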
The amount of the energy that is given off in nuclear fission is astronomical. To equal the amount of energy given off when splitting some uranium the size of a golf ball, one would have to burn approximately twenty-five train cars full of coal. Presently, the planet contains twenty-five times more nuclear fuel compared to fossil fuel. On average, an atomic power plant can produce half a million kilowatts of power. As a comparison, a hair dryer takes about one kilowatt (Jenny, 1-2).
Producing energy from nuclear fission is very similar to using a common fossil fuel boiler. The difference lies in the reactor, where the heat is generated by fissioning material. The most common reactor is the pressurized water reactor; however, there are many other types.
The pressurized water reactor is the most common reactor in the United States. The reactor of a nuclear power plant is where the fissioning takes place. The uranium is contained in fuel rods, each sealed so that no contamination occurs. Many of these rods are contained in a fuel assembly, and the fuel assemblies are separated by control rods. The control rods limit the amount of fission taking place by the use of boron, an element that absorbs neutrons. If a control rod is inserted, it collects neutrons from the fissioning atoms, which slows down or stops the fission taking place in the reactor. There are commonly 300 to 600 fuel assemblies in one reactor (Michio, 31). Surrounding all of the fuel assemblies is a moderator, water in most cases. The moderator is a substance used to slow down the neutrons; the slower the neutrons travel, the more likely they are to strike the nucleus of an atom. The process begins when a spontaneous fission takes place and starts the chain reaction. The control rods are then inserted to keep the rate of fission constant; this is called "going critical". As fission takes place in the fuel assemblies, the kinetic energy (heat) given off is absorbed by the water. The water is under pressure so it will never boil. The water becomes superheated, sometimes above 300 °C, and is then pumped into a heat exchanger. The heat exchanger runs water at normal pressure through pipes in the superheated water, boiling the normal-pressure water vigorously. That boiling water quickly turns to steam, which is then used to turn massive generators. The generators then turn the kinetic energy into electricity (Weiss, 26). The steam is then cooled down and returned to the heat exchanger so it will boil again. If there is no need to use the water again, it is pumped into a nearby lake or river; in turn, if more water is needed, it is pumped from a nearby lake or river. If the water in the reactor becomes too hot, it is vented into a cooling tower, where some of it evaporates as steam and is released into the air, and cool water from a lake or river is pumped into the reactor to cool it down.
There are many other ways of utilizing nuclear fission for energy. Of these, the more popular types are the heavy water reactors, the gas-cooled reactors, the graphite moderated water-cooled reactors, and the fast breeder reactors.
The heavy water reactor is mainly found in Canadian reactors, where heavy water is abundant. The heavy water reactor is almost the same as the pressurized water reactor: the pressurized water is heated and then pumped into a heat exchanger, where it is used to boil ordinary water into steam, and the steam is then used to turn a generator. The advantage is that heavy water is used as the moderator. Heavy water contains a special isotope of hydrogen. Since heavy water is a heavier molecule, it slows down the neutrons even more, and less energy is wasted. Another benefit of this reactor is that it does not have to be shut down to refuel (Martindale, 797).
The gas-cooled reactor uses graphite as the moderator and carbon dioxide or helium as the coolant. The gas is heated and then passed to a heat exchanger, where steam is produced to turn the generators. Gas-cooled reactors are most commonly found in Europe (Martindale, 797).
Another kind of reactor is the graphite moderated water cooled reactor. This reactor is almost exclusively used in the Soviet Union. This form of reactor is a hybrid of the pressurized water reactors and the graphite moderated reactors. The advantage of using this form of reactor is that it does not have to be shut down for refueling. However, because of the poor engineering, this reactor is commonly known for uncontrollable chain reactions leading to meltdowns. This is the type of reactor that melted down in Chernobyl. For this reason no other country is willing to take on the risks connected with this reactor (Foreman, 38).
The last type of reactor, the fast breeder reactor, is of a unique design. The core of the reactor still contains uranium-235; however, lining the walls of the reactor is uranium-238. When there is little or no moderator to slow the neutrons, some of the stray neutrons strike the uranium-238 on the walls, producing plutonium-239. This by-product is then used as fuel in the reactor (Martindale, 797). The obvious advantage of this form of reactor is that it takes less of our world's nuclear fuel to run the plant (Martindale, 797).
Nuclear reactors are a complicated form of energy source. Although similar in overall form, nuclear power plants can produce much more energy than conventional fossil fuel power plants.
Sources
Kaku, Michio, and Jennifer Trainer, et al. Nuclear Power: Both Sides. Toronto: George J. McLeod Limited, 1982.
Foreman, Harry. Nuclear Power and the Public. Minneapolis: University of Minnesota Press, 1970.
Martindale, David, et al. Heath Physics. Lexington: D.C. Heath and Company, 1992.
Weiss, Ann. The Nuclear Question. New York: Harcourt Brace Jovanovich, 1981.
Jenny and Mike. Atomic Energy. Internet: http://web66.coled.umn.edu/hillside/franklin/atomic/project.html
f:\12000 essays\sciences (985)\Physics\Nuclear Power.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Table of Contents
Introduction
What is nuclear power?
Fission
Fusion
A brief history of nuclear power
Nuclear power plants
How a nuclear power plant works
Different types of nuclear reactors
The breeder reactor
Interview with Sten-Ove Beck
Nuclear waste
Radioactivity
The 1980 referendum
Nuclear weapons and atomic bombs
Glossary
Bibliography
Introduction
I have chosen to write about nuclear power because it is a subject that affects us all, and because it is exciting. In Sweden, nuclear power accounts for a large share of the energy supply, but what is nuclear power? Is it dangerous? Is there anything better than nuclear power? What are nuclear weapons? These are questions that many people have surely asked themselves without ever really getting an answer. I will try to answer them in this paper, with the help of information from books, newspaper articles, and material from the Ringhals nuclear power plant. I hope this paper will help you answer some of the questions you may have asked yourself, or perhaps help you take a position for or against nuclear power in Sweden. I intend to write about nuclear power both for civilian purposes (nuclear power plants) and for military purposes (atomic bombs and nuclear weapons), and a little about the 1980 referendum on nuclear power in Sweden.
What is nuclear power?
Nuclear power is also often called atomic energy or nuclear energy, which is exactly the same thing. Atomic energy is a conversion of matter into energy and can be achieved by two different methods, fission and fusion. Fusion is the joining of light atomic nuclei into heavier nuclei, and fission is the splitting of heavy atomic nuclei. In both cases, energy is released and can be captured.
Fission
Fission is the method that today's nuclear power plants use. A fission reaction uses the element uranium. There are three different kinds of uranium atoms, which differ from one another in the number of particles in the nucleus. Such atoms are called isotopes. The rarest uranium isotope consists of 234 particles and is therefore called uranium-234. The most common uranium isotope has 238 particles and is therefore called uranium-238. The third has 235 particles and is the most important fuel source for nuclear power. If a sufficiently large number of these atoms are brought together, the "critical mass" is reached, at which point a nuclear reaction, fission, starts by itself.
Fission (nuclear splitting) is the method our nuclear power plants use. In essence, uranium nuclei are struck by free neutrons and divided. When a nucleus is struck by a free neutron, the uranium nucleus becomes two other nuclei and a few free neutrons. The two nuclei that are formed are usually called fission products; they are not uranium but other radioactive substances. At the same time, a great deal of energy is released.
Fusion
As mentioned, today's nuclear power uses a fission reaction as its power source, which means splitting heavy atomic nuclei into lighter ones. But energy can also be extracted by joining light atomic nuclei together into heavier nuclei. This is called fusion. The sun and the stars have used this method for billions of years, but here on earth we have only managed to use it in so-called hydrogen bombs, the most powerful bombs on earth. If, on the other hand, we succeeded in using fusion for peaceful purposes, we would have energy for centuries to come.
In the fusion process, deuterium and tritium play the most important roles. They too are isotopes, consisting of one proton plus one and two neutrons in the nucleus, respectively. If we could use the deuterium found in one cubic meter of sea water for fusion, it would provide as much energy as 200 tons of oil. Unlike the fission process, fusion gives off relatively harmless substances. A fusion reaction requires a temperature of 100 million degrees. That is roughly six to seven times hotter than the interior of the sun; that fusion can nevertheless occur inside the sun is due to its incredibly high gravity. At this high temperature, the deuterium and tritium atoms consist of a "porridge" of particles; this is called plasma. If two of these nuclei collide in the plasma, they fuse and form a heavier nucleus (helium) and a free neutron, since the deuterium and tritium atoms together have one neutron more than a helium nucleus. The "extra" neutron is released together with a great deal of energy. Experiments with fusion power have been carried out in the USA, the Soviet Union, Europe, and Japan, but so far they have required more energy than the fusion has produced.
A brief history of nuclear power
The word "atom" means indivisible in Greek. It was first believed that atoms were indivisible. Later it was found that the electrons could be freed, but the nucleus was still thought to be indivisible. Eventually it was discovered that even the nucleus could be split, and that it consisted of smaller particles (protons and neutrons) held together by strong forces. In 1905 Albert Einstein produced a formula for calculating the energy that can be released by splitting the nucleus.
During the Second World War it was said in the USA that Hitler was developing an atomic bomb. The USA was therefore very anxious to develop an atomic bomb before Hitler succeeded. The USA managed to build one, and on August 6, 1945 at 08:15 the Americans dropped their atomic bomb over the city of Hiroshima in Japan. Eighty to ninety thousand people died instantly, and many more have died since from the dangerous radiation. Three days later the Americans dropped their second atomic bomb, this time over the city of Nagasaki in Japan. Some claim that the Nagasaki explosion took place because the Americans wanted to "test" their plutonium bomb.
It was soon understood that the enormous forces of atomic energy could be used for peaceful purposes. Many countries tried to develop a nuclear power plant, and in 1953 the first reactor was put into operation in Great Britain. The first American reactor was completed only a few years later. Sweden did not want to fall behind and therefore founded the company AB Atomenergi in 1947, which was to conduct research and development of nuclear power for civilian purposes. In 1954 a research reactor was built at the Royal Institute of Technology in Stockholm, but it was not until 1972 that the first commercial reactor, Oskarshamn I, went into operation. Sweden now has four nuclear power stations in operation: Oskarshamn, Barsebäck, Forsmark, and Ringhals. In a government inquiry into nuclear power in Sweden from the 1950s, one can read the following:
"The radioactive fission products are today regarded as a troublesome waste problem. At the same time, however, they constitute a new source of radiation of a strength we have not previously had access to, and promising work is under way to find uses for them, for example for the preservation of food and for carrying out chemical processes."
This may have led many members of parliament to believe that we would be able to use all or large parts of the waste. In 1980 Sweden held a referendum on nuclear power; more about that later in this paper.
Nuclear power plants
How a nuclear power plant works
A nuclear power plant is based on a fission reaction. When uranium atoms are split, neutrons are flung out at very high speed. In the power plant the neutrons are slowed down by a so-called moderator, which can be graphite, heavy water, or ordinary water, so that they can then split other nuclei. The heat energy that is produced is used to heat water into steam; this steam drives a turbine, which makes a generator produce electricity. The process differs slightly between different types of reactors. The fuel for nuclear power plants is held in so-called fuel assemblies, which are usually filled with uranium oxide. These fuel assemblies are replaced every year or every other year. To keep the heat output equally large both when the assemblies are new and toward the end of their life, so-called control rods are used. They sit in the reactor between the fuel assemblies. The control rods are made of a material that attracts neutrons: the more control rods are inserted, the more the fission is slowed. A reactor has enough control rods to stop the fission completely. The spent fuel assemblies are highly radioactive and must be stored safely; this is the much-debated nuclear waste. More about this later in the paper.
Different types of nuclear reactors
There are several different types of nuclear reactors. The most common reactor in Sweden is the boiling water reactor: the heat from the fission process boils water into steam, which generates electricity. The pressurized water reactor occurs in Sweden only at Ringhals but is common throughout the Western world. In it, water under high pressure is used both as coolant and as moderator; the moderator's task is to slow the neutrons. Another reactor type is the gas-cooled reactor, which is the oldest. This type is found only in a few places in Great Britain. It uses carbon dioxide as coolant and graphite as moderator. The last reactor type is found only in Canada, India, and Argentina. It is called the CANDU reactor and uses heavy water both as moderator and as coolant. Heavy water looks exactly like ordinary water but is more effective as a moderator. Heavy water consists of one oxygen atom and two deuterium atoms.
The breeder reactor
There is also another kind of reactor that uses the waste from ordinary nuclear power plants as fuel. These reactors are called breeder reactors. They use plutonium and uranium-238 as fuel, both of which are waste products from ordinary nuclear power plants. The breeder reactor differs from ordinary reactors on several important points; among other things it has no moderator and is therefore sometimes called a fast reactor. The temperature is much higher than in ordinary reactors, so sodium is used instead of water. Small reactors of this type have been running in France and Great Britain since the 1970s, but many argue that they are unsafe.
Interview with Sten-Ove Beck
I called the Ringhals nuclear power plant, which is owned by Vattenfall, to interview Sten-Ove Beck, who works in Ringhals' information department. I asked him general questions about nuclear power. When asked whether he thinks we should phase out nuclear power in Sweden, he answers that he thinks we should phase it out if there is another alternative, and according to him there is none today. Another question I asked, which has become topical in recent years, concerned the unsafe nuclear power plants of developing countries and the Soviet Union, such as Chernobyl. Sten-Ove answers that he thinks we should help them as much as we can. He adds that they really need their energy and that we can help them financially instead of spending extra money on our own nuclear power plants, which are already among the safest in the world. He also mentioned some drawbacks of nuclear power, including the lack of knowledge about radiation and the problem of nuclear waste. He says further that Sweden has one of the world's best methods of storing nuclear waste, namely in rock chambers deep in the bedrock. I share his views both on the nuclear power plants of developing countries and on the phase-out of nuclear power.
In the picture below we see the enormous Ringhals plant with its three pressurized water reactors and its boiling water reactor. The pressurized water reactors are the black-and-white round buildings, and the boiling water reactor is the large square grey-white building with the tall chimney.
Nuclear waste
Nuclear waste is usually divided into the following three groups:
High-level waste
This is the spent fuel from the nuclear power plants. It takes thousands of years for this waste to become harmless to humans. It has been calculated that there will be a total of 7,800 tons of this waste by the year 2010 from the reactors that are in operation today.
Low- and intermediate-level waste
This is waste from nuclear power plants such as filters, protective clothing, scrapped tools, and so on. This waste only needs to be isolated for a few hundred years before it becomes harmless to humans.
Decommissioning waste
This is the radioactive waste produced when nuclear power plants are dismantled. It is treated in the same way as low- and intermediate-level waste.
Low- and intermediate-level waste from nuclear power plants, health care, industry, and research is handled by SFR (the final repository for radioactive waste). The waste is placed fifty meters below sea level, where it remains for a few hundred years until it becomes harmless to humans. The high-level waste is transported to CLAB (the central interim storage facility for spent fuel), where it is stored for 40 years. The storage takes place in water-filled pools 25 meters below ground level; the water acts both as coolant and as shielding. After 40 years of interim storage it is to be placed in a final repository. (Final disposal has not yet begun; it is expected to start in 2010.) There are several ideas for how final disposal should be carried out; the most common is that the waste is encapsulated in copper and buried 500 meters down in the bedrock. There is even a proposal to send it into space.
The organization responsible for all nuclear waste in Sweden is SKB (Svensk Kärnbränslehantering AB). It has a specially built ship (M/S Sigyn) for transporting nuclear waste, and it also owns special vehicles for transport to the ship and containers for the waste. A small calculation from SKB shows that if an average Swedish family of four uses 8,000 kWh per year (not counting home heating) and gets all its electricity from a nuclear power plant, that family accounts for about one liter of nuclear waste per year.
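For reference, the SKB example quoted above works out to the following per-kilowatt-hour and per-person figures; the sketch only reworks the numbers given in the text and does not verify them.

# Reworking the SKB example quoted above (figures taken from the text, not verified here).
annual_use_kwh = 8000        # electricity use of a family of four, excluding heating
annual_waste_litres = 1.0    # nuclear waste attributed to that family per year

litres_per_kwh = annual_waste_litres / annual_use_kwh
print(f"{litres_per_kwh:.6f} litres of waste per kWh")               # 0.000125 L/kWh
print(f"{annual_waste_litres / 4:.2f} litres per person per year")   # 0.25 L/person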
Radioactivity
We are exposed to radioactive radiation every day, from space, the sun, our own bodies, and the ground. Many people also receive radiation from X-rays and from living in so-called radon houses. If radioactive waste were to reach humans and animals in larger quantities, it could cause cancer. The waste from nuclear power must not only shield people from direct radiation but also prevent contamination of, for example, the groundwater. To prevent this, nuclear waste is always handled in solid form.
The 1980 referendum
One Sunday in 1980, 75% of Sweden's eligible voters went to the polls to vote for or against nuclear power. There were three alternatives: Line 1, Line 2, and Line 3. It was of course also possible to cast a blank vote. Line 1 stood for building six more reactors and then a slow phase-out of nuclear power in Sweden. Line 2 stood for roughly the same thing but was more detailed about how it would happen. Line 3 was entirely against nuclear power and called for a phase-out within ten years. The result of the referendum was as follows:

                      Votes        Percent
Line 1                904,968      18.9%
Line 2                1,869,344    39.1%
Line 3                1,846,911    38.7%
Blank votes           157,103      3.3%
Votes cast            4,781,479    75.6%
Eligible voters       6,321,165

Turnout was, as can be seen, low. Many believed that Line 3 would win, largely because of opinion polls beforehand and the Harrisburg accident a few years earlier, but the winner, by a narrow margin, was Line 2.
Nuclear weapons and atomic bombs
Nuclear weapons and atomic bombs belong to a group of weapons called ABC weapons, meaning atomic, biological, and chemical weapons. Nuclear weapons include uranium, plutonium, and hydrogen weapons. In uranium and plutonium weapons, a nuclear splitting (fission) of uranium or plutonium takes place. Hydrogen weapons work by joining light atomic nuclei into heavier ones (fusion). Strategic nuclear weapons are weapons aimed at population and industrial centers or at military air, naval, or missile bases. Smaller nuclear weapons are called tactical nuclear weapons and are used against tactical targets such as aircraft, ships, and military units. The explosive yield of nuclear weapons is usually measured in how many tons of conventional explosive the blast corresponds to:
1 kiloton (kT) = 1,000 tons (1,000,000 kg) of conventional explosive; 1 megaton (MT) = 1,000,000 tons.
The bombs dropped on Japan in 1945 had a yield of about 20 kT, which corresponds to roughly 20,000 tons of conventional explosive.
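Using the standard definition applied in the correction above (one kiloton equals one thousand tons of TNT equivalent), the yield of the 1945 bombs converts as follows; this is only an arithmetic check, not a figure from the original sources.

# Yield of the 1945 bombs expressed in conventional explosive, using the standard
# definition 1 kiloton = 1,000 tons = 1,000,000 kg of TNT equivalent.
yield_kilotons = 20
kg_per_kiloton = 1_000_000

print(f"{yield_kilotons} kT = {yield_kilotons * 1000:,} tons "
      f"= {yield_kilotons * kg_per_kiloton:,} kg of conventional explosive")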
There are three different kinds of atomic bombs: hydrogen, uranium, and plutonium bombs. The hydrogen bomb, as mentioned, uses a fusion reaction and is thousands of times more powerful than uranium and plutonium bombs. The atomic bomb dropped on August 6, 1945 over the city of Hiroshima in Japan was a uranium bomb of about 20 kT. Besides the 80,000-90,000 people who died, and just as many who were injured, in the explosion itself, many have died long afterwards because of the radioactive radiation from the bomb. To achieve the effect of that 7-ton bomb, with roughly 40 kg of uranium fuel, would at the time have required 1,000 aircraft over Europe.
Glossary
Atom: the smallest part into which a substance can be divided. The atom consists of a nucleus around which electrons orbit.
Atomic nucleus: the core of the atom. It consists of protons and neutrons held together by strong forces.
Deuterium: a hydrogen isotope consisting of one proton and one neutron. Used in fusion reactions.
Power: energy per unit of time. Examples: horsepower, watt.
Electron: a negatively charged particle that orbits the atomic nucleus.
Fission: the splitting of heavy nuclei, releasing energy.
Fusion: the joining of light atomic nuclei into heavier nuclei, releasing energy.
Gravitation: the force that acts between all objects. The greater the mass and the shorter the distance, the greater the gravitational force.
Core: the part of a nuclear reactor where the fuel is placed.
Isotope: a variant of an element with the same properties but a different number of neutrons in the nucleus.
Critical mass: the smallest amount of nuclear fuel needed to start a chain reaction.
kWh (kilowatt-hour): a unit of energy consumption.
Nuclear reactor: a device in which a nuclear reaction takes place.
Moderator: a substance that slows down the neutrons so that fission can occur. Used in nuclear power plants.
Neutron: a nuclear particle with no electric charge.
Plasma: a gas-like state in which the atoms have no electrons.
Plutonium: a fissile element. It does not occur in nature but can be produced from uranium in a nuclear reactor.
Proton: a nuclear particle with a positive charge.
Tritium: a hydrogen isotope whose nucleus consists of one proton and two neutrons.
Uranium: a naturally occurring radioactive metal, used as fuel in nuclear reactors. Uranium is an element with the chemical symbol U.
Watt (W): a unit of power. Watt = joule/second.
Bibliography
Material from Vattenfall - Ringhals
Material from SKB (Svensk Kärnbränslehantering AB)
Bra Böckers Lexikon
Svenska Dagbladet
Media Familjelexikon - Bonniers
Kärnkraften by Per Kågeson and Kerstin Ahlgren - Prisma
Fängslad vid kärnkraften? by Per Kågeson and Björn Kjellström - Liber
Kärnavfallet by Anna Schytt - Sveriges Radios Förlag
Upptäck energi by Frank Frazer - Bonnier Fakta
f:\12000 essays\sciences (985)\Physics\Nuclear PowerOur Future.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Thousands of years ago human beings learned to make fire. By collecting and burning wood they were able to warm themselves, cook food, and manufacture primitive tools. Later, the Egyptians discovered the principle of the sail. Even more recent was the invention of the water wheel. All of these activities utilize various forms of energy: biological, chemical, solar, and hydraulic.
Energy, the ability to do work, is essential for meeting basic human needs, extending the life expectancy, and providing a rising living standard.
This is where the need for nuclear power comes in. Uranium fission is about a million times more efficient than the common practice of burning coal or oil. For comparison, coal combustion produces about 20-30 MJ/kg of heat energy while uranium, in a fast breeder reactor, produces more than 24,000,000 MJ/kg (Energy 27). Those numbers alone are astounding.
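The "about a million times" figure can be checked directly from the numbers quoted above; the sketch below simply divides the quoted energy densities and is not an independent measurement.

# Ratio check using the figures quoted above (from the essay's source, not re-verified).
coal_mj_per_kg = 25             # midpoint of the quoted 20-30 MJ/kg for coal
uranium_mj_per_kg = 24_000_000  # quoted figure for uranium in a fast breeder reactor

ratio = uranium_mj_per_kg / coal_mj_per_kg
print(f"Uranium releases roughly {ratio:,.0f} times more heat per kilogram than coal")
# -> roughly 960,000x, i.e. "about a million times more efficient"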
Uranium is also abundant, thanks to recent discoveries of large reserves. At present, uranium is only being mined and separated from ore. However, a huge untapped source is our oceans. Sea water contains 3.3x10^(-9) (3.3 parts per billion) uranium by mass, so the 1.4x10^18 tons of sea water contain 4.6x10^9 tons of uranium. All the world's electricity usage, 650 GWe, could therefore be supplied by the uranium in sea water for 7 million years (Energy 25). This is only a theoretical number, because it is not possible to extract all of the uranium from our vast oceans. Also, it does not account for the fact that over that many years, half of the uranium will no longer exist due to radioactive decay. So, at worst, we would get about 2 million years of power from it. Thorium is another element that can be used in nuclear reactors. Thorium is approximately four times more abundant than uranium. It is obvious that we are in no danger of exhausting these sources of energy. We need to exploit these resources and use them to our advantage. God has given us the knowledge to use uranium for power, so why shouldn't we use it? There are many benefits to using nuclear generated power over our other common sources.
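The sea-water figure quoted above follows from the stated concentration and ocean mass; the short sketch below redoes that multiplication using only the numbers given in the paragraph.

# Sea-water uranium arithmetic, using the figures quoted above.
seawater_tons = 1.4e18       # total mass of the oceans, in tons (quoted figure)
uranium_fraction = 3.3e-9    # 3.3 parts per billion of uranium by mass (quoted figure)

uranium_tons = seawater_tons * uranium_fraction
print(f"Uranium dissolved in the oceans: {uranium_tons:.1e} tons")   # ~4.6e9 tons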
A big advantage of nuclear power plants is that they do not burn anything, they are non-polluting, and they are kind to the environment. Unlike coal-, gas-, and oil-fired power plants, nuclear power plants do not emit carbon dioxide and other harmful greenhouse gases into the atmosphere.
This is not to say that no waste is produced in a nuclear reaction. An average size nuclear reactor produces 1000 MWe and leaves behind about 25 tons of spent fuel. This product is highly radioactive and gives off a great deal of heat. However, it can be reprocessed so that 97% can be recycled. The remaining 3%, about 700kg, is high-level radioactive waste that needs to be isolated from the environment for many years (Gale 22). This small quantity makes the task readily manageable. Even if the fuel is not reprocessed, the yearly amount of 25 tons is modest compared with the quantities of waste from similar sized coal-fired power plants. And, the spent fuel could be stored and then reprocessed many years later if the need arose.
For comparison, a 1000 MWe coal-fired power station produces about seven million tons of carbon dioxide each year, plus perhaps 200,000 tons of sulfur dioxide, which remains a major source of atmospheric pollution. Approximately 200,000 tons of other wastes are also produced, including toxic metals, arsenic, cadmium, mercury, organic carcinogens (which cause cancer and genetic mutations) and, surprisingly to most people, naturally occurring radioactive substances (Jones 13).
The nuclear industry is unique in that it is the only energy producing industry that has taken full responsibility for the disposal of all its waste and pays the full cost of doing so.
By the laws of supply and demand, the large supply of uranium keeps the price down, unlike the situation with crude oil. From the outset, the basic attraction of nuclear energy has been its low fuel costs compared with coal, oil, and gas fired plants. Uranium, however, has to be processed, enriched, and fabricated into fuel elements; about one third of the fuel cost is due to enrichment. Allowances must then be made for the management of radioactive spent fuel and the ultimate disposal of this fuel or the wastes arising from it. Nonetheless, with these costs included, the total fuel costs of a nuclear power plant are typically about one third of those of a coal-fired plant and about one fifth of those of a gas combined cycle plant (Economics 35).
Uranium has the advantage of being a highly concentrated source of energy, which makes it easily and cheaply transportable. One kilogram of natural uranium will yield about twenty thousand times as much energy as the same amount of coal (Hawley 7). It has the intrinsic property of being a very portable and tradable commodity. In addition, because the fuel cost contribution to the overall cost of electricity produced is relatively small, even a large price increase will have relatively little effect (Hawley 8).
Nuclear energy also gives the nation a diversity of fuel sources for meeting its electricity needs. No country would want to be too dependent on a single source of energy. By not putting all of our energy eggs in one basket, America can keep a reliable supply of electricity flowing to our homes and businesses despite interruptions in fuel supplies caused by weather conditions and natural disasters, or by international events and economic fluctuations. Since the Arab oil embargo of 1973, nuclear energy has displaced the need for more than 2.5 billion barrels of oil at an estimated cost of $66 billion, giving the United States greater energy security and economic strength (Keepin 12). Not only does nuclear energy keep American dollars at home, it keeps Americans at work, with an estimated 400,000 people employed in nuclear-related jobs. There should not be a question of coal or nuclear power. What we need is a balance between the two, with as much help as possible from hydro power and other renewable sources.
By mid 1996, there were 32 countries of varying size, political persuasion, and degree of industrial development, which included nuclear power in their energy mix and were operating nuclear reactors. About 17% of the world's electricity is being produced by some 440 reactors, with 30 more under construction. Belgium, China, France, Hungary, India, Japan, Switzerland, UK, USA, and Russia are just some of the countries with major nuclear energy programs (Blinkin 17). We need to continue to expand our use of nuclear energy to its full potential.
Through continuing research and innovation, America's technological leadership in the nuclear industry is unmatched anywhere in the world today (Clarke 35). Maintaining that leadership position not only benefits our society today, but also will create opportunities for our children. By providing our society with a reliable, economical, and clean supply of electricity, nuclear energy offers America the opportunity for sustainable growth for future generations.
f:\12000 essays\sciences (985)\Physics\Nuklear Power Our miusunderstood Freind.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At first nuclear power was only seen as a means of destruction, but after World War II a major effort was made to apply nuclear energy to peacetime uses. Nuclear power is produced when the nucleus of an atom is split to release a powerful burst of energy. Through technological advancements, nuclear power now supplies us with new medical aids, a new power source, and new ways to do scientific research.
New medical advancements are being produced rapidly due to nuclear power. Nuclear material is now being used to treat diseases. Patients suffering from cancer can be exposed to the healing effects of the radiation under controlled conditions. The radiation from nuclear material can also help in medical tests. Radioactive phosphorus is an important diagnostic aid: injected into a patient's veins, it concentrates in the cells of certain brain tumors. The thyroid gland strongly attracts iodine, so radioactive iodine is used both in diagnosing and in treating diseases of the thyroid. Nuclear power is changing the face of medicine with new cures and tests that will cure millions.
Nuclear power can be converted into strong and efficient nuclear energy and be used for many purposes. Nuclear power reactors generate heat that is converted into steam. The steam can be used directly for energy, and this energy is used in transportation; most military submarines are now run on nuclear energy. The most common use of nuclear energy, however, is to generate electric power, for example in a commercial nuclear power plant. Another way to produce nuclear energy is with gas-cooled reactors that use either carbon dioxide or helium as the coolant instead of water. This method is used mainly in commercial nuclear plants in the United Kingdom and France due to the lack of fresh water. With growing popularity, nuclear energy will definitely be part of the future, with new ways to use this energy in a positive manner.
Scientists can now use nuclear power for biological research to help them better understand life. Radioactive isotopes have been described as the most useful research tool since the invention of the microscope. Physiologists use them to learn where and at what speed physical and chemical processes occur in the human body. Isotopes are also used in agriculture: biologists use radioactive isotopes to see how plants absorb chemicals as they grow. With radioactive cobalt, botanists can produce new types of plants. Structural variations that normally take years of selective breeding to develop can be made to occur in a few months.
Many believe that nuclear power is too destructive and as such should be destroyed. Although it does have its negative aspects, nuclear power is not evil in any way. Nuclear power is an inanimate thing; it does not live, nor does it have a mind of its own. It is the human race that decided that the best way to use this power was as an instrument of war. Nuclear power should be seen as a positive, and humanity can blame no one for its destructive use but itself. It was our decision to use it for death; it is now our responsibility to use it for life.
f:\12000 essays\sciences (985)\Physics\Physics Lab.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Purpose:
The purpose of this experiment was to measure the velocity of various objects in freefall. Upon completion of this lab one will be able to calculate the values for acceleration and the effect of air resistance.
Apparatus:
2m clear plastic tube
Acculab sonic ranger (speed of sound = 343 m/s; period = 0.03 seconds)
SensorNet software
Macintosh computer with system 6.07 or later
Blank 3.5 HD Macintosh formatted disk
Electronic mass scale, calipers
Various objects to drop
Procedure
For the first part of Experiment 1 we dropped three objects of the same shape, but of varying mass. To promote accuracy, we performed three trials for each object. The first object we dropped was a 301 gram ball of radius 1.85 cm, followed by a 225 gram ball with radius 1.82 cm, and finally a 20 gram ball of radius 1.81 cm. From this information we were able to derive the mass densities of the objects.
mass density = mass / ((4/3)πR³)
We recorded the relevant data and evaluated the average acceleration by means of the slope of the velocity graph. From the following equation we determined the magnitude of the net force acting on the object:
Force net = mass x acceleration = Force gravity + Force air resistance
Next, we calculated the errors and uncertainties:
mean = 1/N [t₁ + t₂ + ... + t_N] ≡ (1/N) Σ tᵢ
s = [(1/N) Σ (mean − tᵢ)²]^(1/2)
For the next part of Experiment 1, we simply conducted three trials for a ping pong ball of mass 20 grams and radius 1.81 cm. We followed the same basic procedure as above. Upon completion of three trials, we compared the data collected for the light object with that of a heavy object.
The method used in Experiment 2 is much like that used in the previous experiment, only now we are concerned with the velocity and acceleration during a bounce. It took us several attempts to finally achieve three data samples in which the ball did not hit the wall of the tube. We evaluated the acceleration at several different key points and recorded our data. Then we calculated the errors using the equations stated above.
Relevant Data
On the following page are three representative samples of raw data from one trial for each of the experiments we conducted. The slope of the velocity graph, which can be calculated by the computer, is equivalent to the average acceleration. The mass density was calculated using the equation stated in the Procedure. The mean values of the slope for the three combined trials are calculated using the mean formula. The standard deviation is found using the standard deviation formula.
Sample Calculations (using Mass = 301 g; Radius = 0.0185 m)
Mass Density
mass density = mass / ((4/3)πR³)
mass density = 301 g / ((4/3)π(0.0185 m)³)
mass density = 1.13 × 10⁷ g/m³
Mean Average Acceleration
mean = 1/N [t₁ + t₂ + ... + t_N] ≡ (1/N) Σ tᵢ
mean = 1/3 [14.605 + 6.394 + 5.634] = 8.877
Standard Deviation of the Mean
s = [1/3 Σ (8.877 − tᵢ)²]^(1/2) / √3, with tᵢ = 14.605, 6.394, 5.634
s = 2.344
Average Acceleration
average acceleration = mean average acceleration ± standard deviation
average acceleration = 8.877 ± 2.344 m/s²
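For reference, the sample calculations above can be reproduced with a short script. This is a minimal sketch (units assumed to be grams, metres, and seconds); note that getting close to the quoted 2.344 m/s² requires dividing the standard deviation of the three slopes by the square root of N.

import math

# Sketch reproducing the Sample Calculations above (assumed units: g, m, s).
mass = 301.0        # g
radius = 0.0185     # m
slopes = [14.605, 6.394, 5.634]   # slope of the velocity graph for each trial, m/s^2

# mass density = mass / ((4/3) * pi * R^3)
density = mass / ((4.0 / 3.0) * math.pi * radius ** 3)

# mean of the three slopes
mean = sum(slopes) / len(slopes)

# population standard deviation of the slopes, then standard deviation of the mean
# (the division by sqrt(N) is what brings the result close to the reported 2.344)
sigma = math.sqrt(sum((mean - t) ** 2 for t in slopes) / len(slopes))
sigma_mean = sigma / math.sqrt(len(slopes))

print(f"mass density = {density:.4g} g/m^3")        # about 1.13e7
print(f"mean acceleration = {mean:.3f} m/s^2")      # about 8.877
print(f"std of the mean = {sigma_mean:.3f} m/s^2")  # about 2.345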
Experiment 1.1
Mass = 301 g
Radius = .0185 m
Mass Density = 1.13 × 10⁷ g/m³
Mean of the Acceleration = 8.877
s = 2.344
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
1.32 0.562 2.922 2.922
1.35 0.663 3.368 14.887 3.368
1.38 0.773 3.647 9.304 3.647
1.41 0.889 3.889 8.064 3.889
1.44 1.014 4.168 9.304 4.168
1.47 1.144 4.317 4.962 4.317
1.5 1.285 4.708 13.026 4.708
1.53 1.433 4.913 6.823 4.913
1.56 1.595 5.415 16.748 5.415
1.59 1.767 5.732 10.545 5.732
1.62 1.95 6.104 12.406 6.104
Mass = 225 g
Radius = .0182 m
Mass Density = 8.91 × 10⁶ g/m³
Mean of the Acceleration = 12.617
s = 1.392
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
0.69 0.585 2.922 2.922
0.72 0.686 3.35 14.267 3.35
0.75 0.8 3.815 15.507 3.815
0.78 0.909 3.647 -5.583 3.647
0.81 1.033 4.131 16.128 4.131
0.84 1.168 4.503 12.406 4.503
0.87 1.307 4.615 3.722 4.615
0.9 1.458 5.024 13.647 5.024
0.93 1.62 5.415 13.026 5.415
0.96 1.791 5.713 9.925 5.713
0.99 1.97 5.936 7.444 5.936
Mass = 20 g
Radius = .0181 m
Mass Density = 8.05 × 10⁵ g/m³
Mean of the Acceleration = 13.672
s = 1.593
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
0.57 0.555 2.903 2.903
0.60 0.650 3.182 9.304 3.182
0.63 0.755 3.480 9.925 3.480
0.66 0.868 3.759 9.304 3.759
0.69 0.988 4.020 8.684 4.020
0.72 1.069 2.680 -44.661 2.680
0.75 1.066 -0.093 -92.424 -0.093
0.78 1.100 1.154 41.560 1.154
0.81 1.108 0.242 -30.395 0.242
0.84 1.728 20.675 681.087 20.675
0.87 1.903 5.825 -494.998 5.825
Experiment 1.2
Lead Ball
Mass = 225 g
Radius = .0182 m
Mass Density = 8.91 × 10⁶ g/m³
Mean of the Acceleration = 12.617
s = 1.392
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
0.69 0.585 2.922 2.922
0.72 0.686 3.35 14.267 3.35
0.75 0.8 3.815 15.507 3.815
0.78 0.909 3.647 -5.583 3.647
0.81 1.033 4.131 16.128 4.131
0.84 1.168 4.503 12.406 4.503
0.87 1.307 4.615 3.722 4.615
0.9 1.458 5.024 13.647 5.024
0.93 1.62 5.415 13.026 5.415
0.96 1.791 5.713 9.925 5.713
0.99 1.97 5.936 7.444 5.936
Ping Pong Ball
Mass = 2 g
Radius = .0180 m
Mass Density = 8.18 × 10⁴ g/m³
Mean of the Acceleration = 9.862
s = 2.955
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
0.81 0.511 2.456 2.456
0.84 0.600 2.977 17.368 2.977
0.87 0.694 3.145 5.583 3.145
0.90 0.799 3.480 11.165 3.480
0.93 0.908 3.629 4.962 3.629
0.96 1.024 3.889 8.684 3.889
0.99 1.151 4.206 10.545 4.206
1.02 1.280 4.317 3.722 4.317
1.05 1.416 4.522 6.823 4.522
1.08 1.559 4.764 8.064 4.764
1.11 1.709 5.006 8.064 5.006
1.14 1.859 5.006 0.000 5.006
1.17 1.985 4.206 -26.673 4.206
Experiment 2
Tennis Ball on way down first time
Average Acceleration = 9.367 m/s²
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
0.84 0.575 2.810 2.810
0.87 0.678 3.424 20.470 3.424
0.90 0.779 3.387 -1.241 3.387
0.93 0.893 3.796 13.647 3.796
0.96 1.014 4.020 7.444 4.020
0.99 1.147 4.429 13.647 4.429
1.02 1.283 4.541 3.722 4.541
1.05 1.426 4.764 7.444 4.764
1.08 1.579 5.099 11.165 5.099
1.11 1.742 5.452 11.786 5.452
1.14 1.911 5.620 5.583 5.620
Tennis Ball just after hitting the floor
Average Acceleration = -0.620 m/s²
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
1.20 1.911 -2.401 -2.401
1.23 1.796 -3.833 -47.763 -3.833
1.26 1.694 -3.387 14.887 -3.387
1.29 1.600 -3.145 8.064 -3.145
1.32 1.518 -2.717 14.267 -2.717
1.35 1.444 -2.494 7.444 -2.494
Tennis Ball near the top of its trajectory
Average Acceleration = 14.577 m/s²
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
1.56 1.176 -0.577 -0.577
1.59 1.173 -0.093 16.128 -0.093
1.62 1.182 0.298 13.026 0.298
Tennis Ball on way down second time
Average Acceleration = 9.361 m/s²
Time (s)  Distance (m)  Velocity from distance (m/s)  Acceleration (m/s²)  Velocity (m/s)
1.65 1.197 0.484 0.484
1.68 1.220 0.782 9.925 0.782
1.71 1.253 1.098 10.545 1.098
1.74 1.291 1.265 5.583 1.265
1.77 1.340 1.638 12.406 1.638
1.80 1.398 1.917 9.304 1.917
1.83 1.464 2.196 9.304 2.196
1.86 1.539 2.512 10.545 2.512
1.89 1.623 2.791 9.304 2.791
1.92 1.718 3.164 12.406 3.164
1.95 1.818 3.331 5.583 3.331
1.98 1.925 3.573 8.064 3.573
Analysis
After analyzing our data we discovered some information concerning the relationships between different characteristics and properties of freefall. The graph of Acceleration vs. Mass, depicted on the following page, reveals no relationship between mass and acceleration. After completion of Experiment 1.1, our data showed that objects of lighter mass fall with a greater acceleration than those of greater mass. However, when we performed Experiment 1.2, our data for the ping pong ball did not follow the same trend as our data from Experiment 1.1. Upon careful evaluation, we discovered the relationship between mass density and acceleration. Objects of great density, such as the 301 gram ball, fall with less acceleration than objects of little density such as the ping pong ball.
Experiment 2 revealed many things about acceleration. The acceleration downward is very close to the value of 'g', approximately 9.8 m/s². This value of the average downward acceleration is constant for both the first time down and the second time down. We found the value of the average acceleration during the first descent to be 9.367 m/s². This value is very close to the value we found for the second descent, 9.361 m/s². The difference between these values and 9.8 m/s² can be attributed to air resistance. We also found the value for the acceleration just after hitting the floor to be -0.620 m/s². The value of the average acceleration near the top of the trajectory was found to be 14.577 m/s².
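As a quick check of the first-descent figure, the average acceleration in the table above can be recovered by averaging the successive changes in velocity over the 0.03 s sampling interval. This is a minimal sketch using the velocities from the "Tennis Ball on way down first time" table.

times = [0.84, 0.87, 0.90, 0.93, 0.96, 0.99, 1.02, 1.05, 1.08, 1.11, 1.14]
vels = [2.810, 3.424, 3.387, 3.796, 4.020, 4.429, 4.541, 4.764, 5.099, 5.452, 5.620]

# Average of the successive (delta v / delta t) values; with equal time steps this is
# the same as (v_last - v_first) / (t_last - t_first).
accels = [(v2 - v1) / (t2 - t1)
          for (t1, v1), (t2, v2) in zip(zip(times, vels), zip(times[1:], vels[1:]))]
avg_accel = sum(accels) / len(accels)
print(f"average downward acceleration = {avg_accel:.3f} m/s^2")   # about 9.367, close to g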
Discussion and Conclusion
From this experiment, one learns how to measure the velocity of various objects in freefall. From this data, one can then calculate the values for acceleration and air resistance. This experiment relates directly to everyday situations. For example, when one dribbles a basketball, the acceleration of the object becomes negative: acceleration that at one time was directed towards the earth is now directed in the opposite direction. From our calculations we concluded that as mass density increases, the average acceleration decreases and approaches 'g.' We also concluded that the downward acceleration is not related to air resistance. However, we did find a relationship between velocity and the force of resistance. As the velocity of the object increases, the force of resistance decreases. We also found that the mass of an object has no effect on the downward acceleration of the object in freefall. From this experiment one discovers the relationships between different properties of an object.
f:\12000 essays\sciences (985)\Physics\Practical investigation on teminal velocity of a sphere in oi.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Physics CAT One
Extended Practical Investigation
Report
Student Number:
Purpose
The Purpose of this investigation is to explore how the terminal velocity of a sphere falling through glycerol varies with the temperature of the glycerol and the size of the sphere.
Introduction
In the early stages of the project it was intended to investigate how the speed of a sphere falling through glycerol varies with the size of the sphere. However, after analysis it was decided that the investigation would be more challenging if a second variable was incorporated. There are many constants that could have been manipulated, such as the amount of glycerol used, the distance over which times were taken, the distance the sphere was allowed to fall before timing began, and the temperature of the glycerol. After much consultation it was decided that the temperature of the glycerol should be varied. Once this had been incorporated into the investigation, some scientific concepts related to the viscosity of a liquid had to be understood. (Refer to article.)
In conducting the experiments an attempt was made to obtain results that could produce graphs showing the terminal velocity of a sphere related to the temperature of the glycerol and the terminal velocity of a sphere related to its size.
Apparatus used
• 600 ml of glycerol (density 1.26 g/ml; assay 98.0-101.0%)
• Small ball bearings of radius: 3.175mm
3.960mm
5.000mm
6.000mm
7.000mm
• 900 ml measuring cylinder
• Stop watch
• Thermometer
• Some type of heating and cooling device to vary the temperature of the glycerol
• Tweezers
Variables and Constants
The variables that have been used in this investigation are the size of the ball bearings and the temperature of the glycerol. The constants are the amount of glycerol used, the size of the measuring cylinder, the intervals at which times were taken, the distance the sphere was allowed to drop before times were taken, and the number of tests taken.
Method
To begin experimentation, the distance over which the sphere accelerates to reach terminal velocity had to be determined. This was done by systematically varying the distance over which the sphere was allowed to fall and then finding the point at which the sphere's acceleration is zero. It was found that for the sphere to reach terminal velocity it had to be allowed to fall 6-7 centimeters before an accurate, constant reading could be taken. It was also found that the distance needed for a sphere to reach terminal velocity changes only slightly when the temperature of the glycerol is varied (+/- 0.2 cm).
To confirm that the sphere had reached terminal velocity, the distance that the sphere fell before timing began was varied from 2 cm to 10 cm. Starting at 2 cm, the measuring cylinder was marked at 2 cm intervals and times were taken for each interval. The times taken were analysed to determine whether the rate of descent of the sphere was constant for each reading. To ensure that the sphere had reached terminal velocity, a full 10 cm of descent was allowed.
Using Stokes' law for the terminal velocity of a sphere falling under gravity and the relationship mg = U + F at terminal velocity, the above result is confirmed. These calculations can be seen in the results section.
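The form of that calculation can be sketched briefly. At terminal velocity the weight balances the upthrust U plus the Stokes drag F = 6πηrv, which rearranges to v = 2r²g(ρ_sphere − ρ_fluid)/(9η). The steel density, the handbook viscosity, and the choice of radius below are assumptions added for illustration, not values from this report, so the printed figure is only indicative.

g = 9.8               # m/s^2
r = 3.175e-3 / 2      # m, treating the quoted 3.175 mm as a diameter (assumption)
rho_sphere = 7800.0   # kg/m^3, assumed steel ball bearing
rho_fluid = 1260.0    # kg/m^3, glycerol
eta = 1.49            # Pa.s, about 1490 cP at 20 degrees C (Appendix Two)

# mg = U + F with U = (4/3)*pi*r^3*rho_fluid*g and F = 6*pi*eta*r*v
v_terminal = 2 * r ** 2 * g * (rho_sphere - rho_fluid) / (9 * eta)
print(f"estimated terminal velocity = {v_terminal * 100:.2f} cm/s")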
For all experiments room temperature was recorded at 20 °C.
The first part of the experiment was to vary the size of the ball bearing but not the temperature. A sphere of 3.175 mm in diameter was dropped from just above the surface of the glycerol and allowed to fall 6 cm before timing began. Once the sphere had fallen the initial 6 cm, timings were taken at intervals along the measuring cylinder every 200 ml (10 cm). This experiment was repeated 4 times and an average was taken.
The experiment was then repeated using ball bearings of sizes 3.960mm, 5.00mm, 6.00mm, 7.00mm, 9.00mm. Each individual experiment was repeated 4 times and an average was taken. All results are shown in the results section.
The second part of the experiment was to vary the temperature of the glycerol but not the ball bearing size. A sphere of 3.175 mm was chosen for all of these experiments due to its extremely slow descent rate. The same procedure as above was used, except that five glycerol temperatures of 7 °C, 12 °C, 15 °C, 17 °C and 20 °C were used.
Results
The averaged results obtained from the experiment are presented in the following tables and graphs. (For full documentation of all the results obtained refer to appendix 1.)
Size of Sphere | Timing Interval No. | Averaged Results (s) | Averaged Velocity
3.175 mm | 1, 2, 3 | 1.437, 1.600, 1.637 | 6.178 cm/s
3.960 mm | 1, 2, 3 | 0.750, 0.982, 0.970 | 10.240 cm/s
5.000 mm | 1, 2, 3 | 0.500, 0.690, 0.680 | 14.598 cm/s
6.000 mm | 1, 2, 3 | 0.785, 0.655, 0.627 | 15.600 cm/s
7.000 mm | 1, 2, 3 | 0.340, 0.360, 0.360 | 27.777 cm/s

Temperature | Timing Interval No. | Averaged Results (s) | Averaged Velocity
7 °C | 1, 2, 3 | 5.575, 8.115, 8.095 | 1.24 cm/s
12 °C | 1, 2, 3 | 2.650, 4.475, 4.400 | 2.24 cm/s
15 °C | 1, 2, 3 | 2.375, 3.657, 3.755 | 2.68 cm/s
17 °C | 1, 2, 3 | 2.125, 2.935, 2.977 | 3.36 cm/s
20 °C | 1, 2, 3 | 1.480, 1.602, 1.627 | 6.17 cm/s
Chart One
Note that there is a reaction-time (reflex) error of +/- 0.1 seconds in all the recordings. Also, the first timing interval cannot be used for any calculations, as the sphere has not yet reached terminal velocity.
This is a graph representing how the velocity of a 3.175mm sphere varies with the temperature of the glycerol.
This is a graph representing how the velocity of a sphere varies with the diameter of that sphere.
Analysis of Results
Chart One demonstrates that, as expected, the terminal velocity of the sphere increases as the temperature of the glycerol and the size of the ball bearings increase. Graphs one and two visually illustrate this point, as can be seen from the positive gradients shown. It is interesting to note that the change of velocity with temperature is significantly greater as the temperature becomes higher (15 °C to 20.5 °C). The reason for this is directly related to the change in viscosity as the temperature is varied. As the temperature increases the viscosity becomes less, and so the sphere is able to move more freely through this less viscous liquid, thus having a greater terminal velocity. A chart of temperatures and their relative viscosities for glycerol is shown in Appendix Two. A hypothetical relationship can be developed between velocity and temperature. The shape of the graph, although not smooth, is a curve, and therefore it is reasonable to suggest that the relationship would involve T raised to some power, i.e. v = kT^n (where k is a constant). Thus log10(v) = log10(k) + n log10(T), where n is the gradient. If a graph of log10(v) vs. log10(T) is plotted it may be possible to form a relationship (Graph 3).
A line of best fit for the above graph gives a gradient of 2.69. Therefore a hypothesis for the relationship between velocity and temperature is v = kT^2.69. Of course, for the results to be most accurate the sphere would ideally have reached terminal velocity when the times in Graph Three were taken. An attempt has been made to calculate the terminal velocity at 20 °C using Stokes' law and the relationship mg = U + F at terminal velocity, so that it can be compared to the velocity found at this temperature.
THIS GRAPH SHOWS HOW VELOCITY VARIES WITH TIME
Referring to the graph, the velocities of the ball bearings for each temperature are shown. These results can be proven using Stokes' law (for a detailed description of Stokes' law and other related physics concepts refer to the article), but due to word limit restrictions these calculations have been removed.
From Chart One a relationship between the size of a ball bearing and its velocity can also be formed. Studying Graph Two it can be seen that there is a gradual curve, which indicates that it is reasonable to suggest that the relationship would once again involve the diameter raised to some power. Therefore a relationship could be formed using a log-log graph, shown below.
Using a line of best fit the gradient can be found to be 0.638. Therefore the relationship between velocity and diameter is V = kD^0.638. All discrepancies in the calculations for Graph Five are the same as for Graph Three.
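As a rough cross-check of those hand-fitted gradients, the sketch below performs the same log-log fits by least squares on the averaged velocities from Chart One. Because the report's gradients came from hand-drawn lines of best fit on separate graphs, the values printed here may differ from 2.69 and 0.638.

import math

def loglog_gradient(xs, ys):
    # Least-squares slope of log10(y) against log10(x): if v = k * x**n, n is this slope.
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)

temps_c = [7, 12, 15, 17, 20]                       # glycerol temperature, degrees C
v_temp = [1.24, 2.24, 2.68, 3.36, 6.17]             # cm/s, 3.175 mm sphere
diam_mm = [3.175, 3.960, 5.000, 6.000, 7.000]       # sphere size, mm
v_diam = [6.178, 10.240, 14.598, 15.600, 27.777]    # cm/s at room temperature

print(f"n in v = k*T^n : {loglog_gradient(temps_c, v_temp):.2f}")
print(f"n in v = k*D^n : {loglog_gradient(diam_mm, v_diam):.2f}")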
Difficulties
Difficulties encountered during this investigation were:
· Trying to establish whether the sphere had reached terminal velocity before timing began.
· Trying to maintain the temperature attained once the glycerol has been heated or cooled.
· Human errors when timing.
· Human errors in general.
· Transferring the glycerol from the measuring cylinder to bottles without losing any.
· Trying to hold the ball bearings just above the glycerol without dropping them in.
· Trying to perform as many tests as possible (in an effort to get a more accurate average) within the time allocated in class.
Although every difficulty was hard to work around, trying to establish whether the sphere had reached terminal velocity before timing began was the main difficulty encountered.
Errors
% error in distance = (0.15 cm / 10 cm) × 100 = 1.5%
% error in time = (0.36 s / 8.1 s) × 100 = 4.4%. This is in regard to human error in responding with the stopwatch.
% error in velocity = 8%
% error in temperature = (7 °C / 20 °C) × 100 = 35%. This allows for a possible increase or decrease in temperature whilst the experiment was taking place, or for the chance that the thermometer wasn't calibrated correctly.
% error in radius = 1%. This accounts for human error in reading the measurements, or the possibility that the radii of the spheres used were not uniform.
% error in velocity calculations using Stokes' law and mg = U + F = 1%
Success of The Investigation
The aim of this investigation was to show that the terminal velocity of a sphere falling through glycerol varies with the temperature and the size of the sphere. From the results shown I believe that the investigation was a success.
Conclusions
As a result of this investigation it can clearly be concluded that as the temperature of glycerol increases, viscosity decreases and therefore any sphere falling through the glycerol will experience an increase in terminal velocity. Also the rate of increase in velocity is greater as the temperature rises. This is because the less viscous the state of the glycerol, the more freely the sphere is able to fall. It can also be concluded that as the diameter of the sphere increases the weight of the sphere increases and therefore its terminal velocity increases.
Bibliography
De Jong, Physics Two: Heinemann Physics in Context, Australia, 1994
McGraw-Hill Encyclopedia of Physics, 2nd edition, 1993
Appendix One
Size of Sphere | Interval | Test 1 | Test 2 | Test 3 | Test 4 | Average
3.175 mm | 1, 2, 3 | 1.580, 1.950, 1.940 | 1.280, 1.410, 1.570 | 1.550, 1.540, 1.410 | 1.340, 1.500, 1.630 | 1.437, 1.600, 1.637
3.960 mm | 1, 2, 3 | 0.750, 1.040, 1.050 | 0.750, 0.910, 0.910 | 0.720, 0.970, 0.950 | 0.780, 1.010, 0.990 | 0.750, 0.982, 0.970
5.000 mm | 1, 2, 3 | 0.530, 0.630, 0.670 | 0.440, 0.480, 0.470 | 0.530, 0.740, 0.610 | 0.480, 0.510, 0.590 | 0.500, 0.590, 0.590
6.000 mm | 1, 2, 3 | 0.740, 0.640, 0.580 | 0.660, 0.650, 0.670 | 0.960, 0.660, 0.660 | 0.780, 0.670, 0.600 | 0.785, 0.655, 0.627
7.000 mm | 1, 2, 3 | 0.310, 0.360, 0.340 | 0.360, 0.350, 0.370 | 0.330, 0.360, 0.350 | 0.350, 0.370, 0.380 | 0.340, 0.360, 0.360

Temperature | Interval | Test 1 | Test 2 | Test 3 | Test 4 | Average
7 °C | 1, 2, 3 | 5.425, 8.050, 8.060 | 5.900, 8.250, 8.150 | 5.300, 8.100, 8.050 | 5.600, 8.060, 8.050 | 5.500, 8.050, 8.060
12 °C | 1, 2, 3 | 2.700, 4.540, 4.420 | 2.800, 4.600, 4.700 | 2.600, 4.500, 4.450 | 2.500, 4.300, 4.400 | 2.700, 4.500, 4.400
15 °C | 1, 2, 3 | 2.300, 3.630, 3.920 | 2.300, 3.600, 3.800 | 2.400, 3.700, 3.700 | 2.500, 3.800, 3.600 | 2.300, 3.530, 3.920
17 °C | 1, 2, 3 | 2.040, 2.890, 3.360 | 2.000, 2.900, 3.000 | 2.200, 2.950, 2.950 | 2.300, 3.000, 2.900 | 2.000, 2.890, 3.060
20 °C | 1, 2, 3 | 1.440, 1.600, 1.640 | 1.500, 1.600, 1.650 | 1.450, 1.610, 1.630 | 1.530, 1.600, 1.590 | 1.440, 1.600, 1.640
Appendix Two
This chart demonstrates that as temperature increases there is a significant decrease in viscosity.
Temp. (°C)   Viscosity (cP)
-42     6.71 × 10⁶
-36     2.05 × 10⁶
-25     2.62 × 10⁵
-20     1.34 × 10⁵
-15.4   6.65 × 10⁴
-10.8   3.55 × 10⁴
-4.2    1.49 × 10⁴
0 12,100
6 6,260
15 2,330
20 1,490
25 954
30 629
f:\12000 essays\sciences (985)\Physics\Preeclampsia and Eclampsia.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tuesday, December 17, 1996
physics
Pre-eclampsia and eclampsia are disorders of pregnant women. Pre-eclampsia is hypertension of pregnancy, and eclampsia is the worsening of pre-eclampsia to the point where the woman experiences convulsions or goes into a coma. The complication of eclampsia in a pregnant woman can put her and her unborn child at risk, a risk that may be fatal. This is only a brief definition of the disorders.
Furthermore, I predict that women who have suffered from eclampsia do need future medical help due to the permanent damage caused in the physiological make up of the body. I will prove this by means of statistics, nationwide studies, and explaining the damage to the body.
To give a complete definition of eclampsia we must first define pre-eclampsia. A woman with pre-eclampsia does not have chronic hypertension but becomes hypertensive in late pregnancy. With pre-eclampsia a woman doesn't experience a coma or convulsions, and her blood pressure returns to normal after delivery. Although the majority of women who experience pre-eclampsia never get eclampsia, if the blood pressure suddenly gets out of hand the disease may progress to eclampsia. Eclampsia is pre-eclampsia that has progressed to the point of convulsions and possible coma, which can result in retardation for the child because of the lack of oxygen and protein reaching the fetus.
A term that must be understood is chronic hypertension: increased pressure in the arteries, often associated with atherosclerosis (collections of fatty substances on the inside wall of the arteries). It is not caused by pregnancy, but may cause problems if a woman with chronic hypertension becomes pregnant. It has an unknown cause. About 15% of the time chronic hypertension is secondary to a primary problem; that is, a renal disorder, heart disease, endocrine disorder, or some other condition is the cause of the hypertensive disease. Women with chronic hypertension who become pregnant are at high risk. Because of arterial narrowing, the blood supply to the uterus is compromised and the growth and oxygenation of the fetus are jeopardized. Pre-eclampsia and eclampsia are also likely to develop, with characteristic tissue swelling and proteinuria. In extreme cases full-blown eclampsia (convulsions or coma) may occur. Women with chronic hypertension are at higher risk for fetal growth retardation and stillbirth, and at 4 to 5 times greater risk for placental abruption. About 15% of women with chronic hypertension will experience pre-eclampsia on top of their usual chronic hypertension.
To show how eclampsia is related to physics we must look at the fact that eclampsia primarily comes about from hypertension. The swelling occurs when there is a high quantity of sodium: H2O is attracted into the veins. The walls of the veins are permeable to H2O at this point, starving the rest of the cells of the body of water, which later leads to seizures by weakening the brain cells (just one example). When there are weak cells the functions of the body begin to break down, affecting not only the mother but the fetus as well.
Hypertension forms like this: there is a direct proportion between pressure and volume; when there is a large volume there will be high pressure. In relation to physics we have to speak of fluids in motion. We must picture fluid in a tube: when a certain amount of volume is going through a tube it moves at a constant rate, but if the volume increases the flow will be more rapid where the diameter is smaller, according to Bernoulli's principle. Going back to eclampsia or pre-eclampsia, we can see this when the volume of the blood increases because of sodium and the attraction of water, and so does the pressure. The speed of the blood decreases, and that is when the body loses oxygen and cells die, because the supplies do not arrive as needed, causing the systems to break down. There is a cycle when the pressure in the body is not normal: the heart does not work hard enough and the brain begins to die (another subject). In a Venturi meter we can calculate the speed of a fluid in the horizontal tube from the difference in pressure in the vertical tubes. Where the speed of the fluid is lower, the pressure is higher; where the speed of the fluid is higher, the pressure is lower. Kinetic energy plays a role: where the speed of the fluid increases, so does its kinetic energy.
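To make the Venturi-meter idea concrete, the sketch below uses Bernoulli's principle together with the continuity equation to find the flow speeds from a measured pressure difference. Every number in it is an illustrative assumption, not a physiological measurement.

import math

rho = 1060.0       # kg/m^3, approximate density of blood (assumed)
a_wide = 1.0e-4    # m^2, cross-sectional area of the wide section (assumed)
a_narrow = 0.5e-4  # m^2, cross-sectional area of the narrow section (assumed)
delta_p = 200.0    # Pa, pressure drop from wide to narrow section (assumed)

# Bernoulli + continuity for a horizontal tube:
# p1 + (1/2)*rho*v1^2 = p2 + (1/2)*rho*v2^2, with v2 = v1 * a_wide / a_narrow
v_wide = math.sqrt(2 * delta_p / (rho * ((a_wide / a_narrow) ** 2 - 1)))
v_narrow = v_wide * a_wide / a_narrow   # faster where the tube is narrower, pressure lower
print(f"v_wide = {v_wide:.3f} m/s, v_narrow = {v_narrow:.3f} m/s")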
Many women don't realize that pre-eclampsia can also begin during labor or after delivery (one third of pre-eclampsia cases are manifested before labor, one third occur during labor, and another one third take place after delivery). After the physiological theories behind high blood pressure, we come to the symptoms. For pre-eclampsia the symptoms are high blood pressure or swelling with rapid weight gain, headaches, nervousness, intermittent blurred vision, and undue fatigue. This is why blood pressure, weight, and a urinalysis are checked at each prenatal visit: to make sure pre-eclampsia is not developing. Many of the symptoms are normal during pregnancy; the real tests are blood pressure and the absence or presence of protein in the urine.
In eclampsia it is more severe, ranging from convulsions to coma. There can be blindness, brain hemorrhaging, renal failure, hypertension, and arrhythmia; the damage is permanent and leaves the mother having to change her lifestyle after the delivery of her child. As with pre-eclampsia, eclampsia can affect every organ and body system, causing either permanent damage or the death of the mother and baby if not vigorously managed.
Preventive measures start with exercise, diet, and frequent check-ups if the woman is not hospitalized. In a recent study a suggested preventive measure for pre-eclampsia was immunological: increasing the duration of sexual cohabitation with the partner before the first pregnancy, since it has been observed that repeated exposure to male ejaculation may prevent pre-eclampsia. In that study, the 83 pre-eclamptics had an average of 59.4 physiological exposures to semen, while the non-pre-eclamptic control group of 55 had 191.6 exposures. A permanent cure is delivering the child and following up on both mother and baby. Some medications given for convulsions are magnesium sulphate, diazepam, and phenytoin (magnesium sulphate being superior), all given intravenously. Magnesium sulphate diminishes the risk of further nonfatal morbidity more than the other agents and is far better than phenytoin in preventing convulsions in hypertensive pregnant women, according to The Must-Read Trial.
Eclampsia is a particular problem in undeveloped countries. It is relatively uncommon in developed countries, where it complicates about one in every 2,000 deliveries. Eclampsia can be 20 times more common in developing countries, and it probably accounts for more than 50,000 maternal deaths worldwide each year. Here in the United States prenatal care has been used to prevent pre-eclampsia since 1961.
To close my paper I must point out that the damage left behind by the disorder of eclampsia is dramatic and almost always permanent. It is a disorder for which check-ups and prenatal care are critical and must be kept up with in order to prevent it. Although this disorder rarely gets by any nurse or doctor here in the U.S., it is a problem in other countries. My prediction is borne out by the numbers of women dying every year from eclampsia; most of these women don't get to live with the side effects of eclampsia because they die. Hypertension alone is a problem in 80% of the world population. Eclampsia is a disorder better prevented than cured.
f:\12000 essays\sciences (985)\Physics\Radios and how they work.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Idoh Gersten
Mr. Zambizi
Physics
March 12, 1995
Radio is a form of communication in which intelligence is transmitted without wires from one point to another by means of electromagnetic waves. Early forms of communication over great distances were the telephone and the telegraph. They required wires between the sender and receiver. Radio, on the other hand, requires no such physical connection. It relies on the radiation of energy from a transmitting antenna in the form of radio waves. These radio waves, traveling at the speed of light (300,000 km/sec; 186,000 mi/sec), carry the information. When the waves arrive at a receiving antenna, a small electrical voltage is produced. After this voltage has been suitably amplified, the original information contained in the radio waves is retrieved and presented in an understandable form. This form may be sound from a loudspeaker, a picture on a television, or a printed page from a teletype machine.
HISTORY
Early Experimenters
The principles of radio had been demonstrated in the early 1800s by such scientists as Michael Faraday and Joseph Henry. They had individually developed the theory that a current flowing in one wire could induce (produce) a current in another wire that was not physically connected to the first.
Hans Christian Oersted had shown in 1820 that a current flowing in a wire sets up a magnetic field around the wire. If the current is made to change and, in particular, made to alternate (flow back and forth), the building up and collapsing of the associated magnetic field induces a current in another conductor placed in this changing magnetic field. This principle of electromagnetic induction is well known in the application of transformers, where an iron core is used to link the magnetic field of the first wire or coil with a secondary coil. By this means voltages can be stepped up or down in value. This process is usually carried out at low frequencies of 50 or 60 Hz (Hertz, or cycles per second). Radio waves, on the other hand, consist of frequencies between 30 kHz and 300 GHz.
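As a small illustration of that stepping up and down, an ideal transformer changes voltage in proportion to the turns ratio of its two coils. The numbers below are purely illustrative.

# Ideal-transformer relation: V_secondary / V_primary = N_secondary / N_primary
def secondary_voltage(v_primary, n_primary, n_secondary):
    return v_primary * n_secondary / n_primary

print(secondary_voltage(120.0, 500, 25))   # 120 V stepped down to 6.0 V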
In 1864, James Clerk Maxwell published his first paper that showed by theoretical reasoning that an electrical disturbance that results from a change in an electrical quantity such as voltage or current should propagate (travel) through space at the speed of light. He postulated that light waves were electromagnetic waves consisting of electric and magnetic fields. In fact, scientists now know that visible light is just a small portion of what is called the electromagnetic spectrum, which includes radio waves, X rays, and gamma rays (see electromagnetic radiation).
Heinrich Hertz, in the late 1880s, actually produced electromagnetic waves. He used oscillating circuits (combinations of capacitors and inductors) to transmit and receive radio waves. By measuring the wavelength of the waves and knowing the frequency of oscillation, he was able to calculate the velocity of the waves. He thus verified Maxwell's theoretical prediction that electromagnetic waves travel at the speed of light.
Marconi's Contribution
It apparently did not occur to Hertz, however, to use electromagnetic waves for long-distance communication. This application was pursued by Guglielmo Marconi; in 1895, he produced the first practical wireless telegraph system. In 1896 he received from the British government the first wireless patent. In part, it was based on the theory that the communication range increases substantially as the height of the aerial (antenna) is increased.
The first wireless telegraph message across the English Channel was sent by Marconi in March 1899. The use of radio for emergencies at sea was demonstrated soon after by Marconi's wireless company. (Wireless sets had been installed in lighthouses along the English coast, permitting communication with radios aboard nearby ships.) The first transatlantic communication, the Morse-code signal for the letter S, was sent on Dec. 12, 1901, from Cornwall, England, to Saint John's, Newfoundland, where Marconi had set up receiving equipment.
The Electron Tube
Further advancement of radio was made possible by the development of the electron tube. The diode, or valve, produced by Sir Ambrose Fleming in 1905, permitted the detection of high-frequency radio waves. In 1907, Lee De Forest invented the audion, or triode, which was able to amplify radio and sound waves.
Radiotelephone and Radiotelegraph
Up through this time, radio communication was in the form of radio telegraphy; that is, individual letters in a message were sent by a dash-dot system called Morse Code. (The International Morse Code is still used to send messages by shortwave radio.) Communication of human speech first took place in 1906. Reginald Aubrey Fessenden, a physicist, spoke by radio from Brant Rock, Mass., to ships in the Atlantic Ocean.
Armstrong's Contributions
Much of the improvement of radio receivers is the result of work done by the American inventor Edwin Armstrong. In 1918 he developed the superheterodyne circuit. Prior to this time, each stage of amplification in the receiver had to be adjusted to the frequency of the desired broadcast station. This was an awkward operation, and it was difficult to achieve perfect tuning over a wide range of frequencies. Using the heterodyne principle, the incoming signal is mixed with a frequency that varies in such a way that a fixed frequency is always produced when the two signals are mixed. This fixed frequency contains the information of the particular station to which the receiver is tuned and is amplified hundreds of times before being heard at the loudspeaker. This type of receiver is much more stable than its predecessor, the tuned-radio-frequency (TRF) receiver.
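The idea can be sketched in a few lines: whatever station is tuned, the local oscillator is offset so that the difference between the two frequencies is always the same fixed intermediate frequency, which the later stages are built to amplify. The 455 kHz figure below is a commonly used AM intermediate frequency, assumed here only for illustration.

IF = 455_000   # Hz, assumed intermediate frequency

def local_oscillator(f_station):
    # Tune the local oscillator above the station by exactly the IF.
    return f_station + IF

for f_station in (540_000, 1_000_000, 1_600_000):
    f_lo = local_oscillator(f_station)
    # The difference frequency out of the mixer is always the fixed IF.
    print(f"station {f_station} Hz -> mixer difference {f_lo - f_station} Hz")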
In order to transmit speech the radio waves had to be modulated by audio sound waves. Prior to 1937 this modulation was done by changing the amplitude, or magnitude, of the radio waves, a process known as amplitude modulation (AM). In 1933, Armstrong discovered how to convey the sound on the radio waves by changing or modulating the frequency of the carrier radio waves, a process known as frequency modulation (FM). This system reduces the effects of artificial noise and natural interference caused by atmospheric disturbances such as lightning.
Radiobroadcasting
The first regular commercial radio broadcasts began in 1920, but the golden age of broadcasting is generally considered to be from 1925 to 1950. NBC was the first permanent national network; it was set up by the Radio Corporation of America (RCA). Radio was also being used in the 1930s by airplane pilots, police, and military personnel.
Significant changes in radio occurred in the 1950s. Television displaced the dramas and variety shows on radio; they were replaced on radio by music, talk shows, and all-news stations. The development of the transistor increased the availability of portable radios, and the number of car radios soared. Stereophonic FM broadcasts were initiated in the early 1960s, and large numbers of stereo FM receivers were sold in the 1970s. A recent development is stereo AM, which may lead to a similar boom for this type of receiver in the 1980s.
OPERATION
Frequency Allocations
In the United States the Federal Communications Commission (FCC) allocates the frequencies of the radio spectrum that may be used by various segments of society. Although each user is assigned a specific frequency in any particular area, general categories are identified. Some representative allocations are indicated in the table that follows the article.
The Transmitter
The heart of every transmitter is an oscillator. The oscillator is used to produce an electrical signal having a frequency equal to that assigned to the user. In many cases the frequency of oscillation is accurately controlled by a quartz crystal, which is a crystalline substance that vibrates at a natural resonant frequency when it is supplied with energy. This resonant frequency depends on its thickness and the manner in which it is cut. By means of the piezoelectric effect, the vibrations are transformed into a small alternating voltage having the same frequency. After being amplified several thousand times, this voltage becomes the radio-frequency carrier. The manner in which this carrier is used depends upon the type of transmitter.
Continuous Wave. If applied directly to the antenna, the energy of the carrier is radiated in the form of radio waves. In early radiotelegraph communications the transmitter was keyed on and off in a coded fashion using a telegraph key or switch. The intelligence was transmitted by short and long bursts of radio waves that represented letters of the alphabet by the Morse code's dots and dashes. This system, also known as interrupted continuous wave (ICW) or, simply, continuous wave (CW), is used today by amateur radio operators, by beacon buoys in harbors, and by airport beacons.
Amplitude Modulation. In radio-telephone communication or standard broadcast transmissions the speech and music are used to modulate the carrier. This process means that the intelligence to be transmitted is used to vary some property of the carrier. One method is to superimpose the intelligence on the carrier by varying the amplitude of the carrier, hence the term amplitude modulation (AM). The modulating audio signal (speech or music) is applied to a microphone. This produces electrical signals that alternate, positively and negatively. After amplification, these signals are applied to a modulator. When the audio signals go positive, they increase the amplitude of the carrier; when they go negative, they decrease the amplitude of the carrier. The amplitude of the carrier now has superimposed on it the variation of the audio signal, with peaks and valleys dependent on the volume of the audio input to the microphone. The carrier has been modulated and, after further amplification, is sent by means of a transmission line to the transmitting antenna.
The maximum modulating frequency permitted by AM broadcast stations is 5 kHz at carrier frequencies between 535 and 1,605 kHz. The strongest AM stations have a power output of 50,000 watts.
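As a rough illustrative sketch (not from the article), a few lines of Python can generate an amplitude-modulated waveform from a carrier and an audio tone; the 100 kHz carrier, 1 kHz tone, and 0.8 modulation index below are arbitrary values chosen only for the example.

import numpy as np

fs = 1_000_000          # sample rate in Hz (assumed for the sketch)
t = np.arange(0, 0.01, 1 / fs)

f_carrier = 100_000     # carrier frequency, Hz (hypothetical)
f_audio = 1_000         # modulating audio tone, Hz (hypothetical)
m = 0.8                 # modulation index (fraction of full modulation)

carrier = np.cos(2 * np.pi * f_carrier * t)
audio = np.cos(2 * np.pi * f_audio * t)

# AM: the audio varies the amplitude (envelope) of the carrier.
am_signal = (1 + m * audio) * carrier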
Frequency Modulation. Another method of modulating the carrier is to vary its frequency. In frequency modulation (FM), on the positive half-cycle of the audio signal the frequency of the carrier gradually increases; on the negative half-cycle it decreases. The louder the sound being used for modulation, the greater the change in frequency. A maximum deviation of 75 kHz above and below the carrier frequency is permitted at maximum volume in FM broadcasts. The rate at which the carrier frequency is varied is determined by the frequency of the audio signal. The maximum modulating frequency permitted by FM broadcast stations is 15 kHz at carrier frequencies between 88 and 108 MHz. This wider modulating-frequency range (15 kHz for FM as opposed to 5 kHz for standard AM broadcasts) accounts for the high fidelity of FM receivers. FM stations range in power from 100 watts to 100,000 watts. They cover distances of 24-105 km (15-65 mi) because government frequency allocations for commercial FM are in the VHF range, unlike commercial AM. Television transmitters use AM for picture signals and FM for sound.
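Continuing the same illustrative (non-source) Python sketch, frequency modulation can be produced by letting the audio signal steer the instantaneous phase of the carrier; the 75 kHz deviation matches the broadcast limit quoted above, while the other values remain hypothetical.

import numpy as np

fs = 1_000_000                       # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)

f_carrier = 100_000                  # carrier frequency, Hz (hypothetical)
f_audio = 1_000                      # audio tone, Hz (hypothetical)
deviation = 75_000                   # peak deviation, Hz (FM broadcast limit)

audio = np.cos(2 * np.pi * f_audio * t)

# FM: integrate the audio to get the phase contribution, then add it to the
# carrier phase; louder audio produces a larger frequency swing.
phase = 2 * np.pi * np.cumsum(deviation * audio) / fs
fm_signal = np.cos(2 * np.pi * f_carrier * t + phase)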
The CW system described earlier is used in a modified FM form known as frequency shift keying (FSK) by high-speed teletype, facsimile, missile-guidance telemetry, and satellite communication. The carrier is shifted by amounts between 400 and 2,000 Hz. The shifts are made in a coded fashion and are decoded in the receiver. This keeps the receiver quiet between the dots and dashes and produces an audible sound in the receiver corresponding to the coded information.
The Antenna
An ANTENNA is a wire or metal conductor used either to radiate energy from a transmitter or to pick up energy at a receiver. It is insulated from the ground and may be situated vertically or horizontally.
The radio waves emitted from an antenna consist of electric and magnetic fields, mutually perpendicular to one another and to the direction of propagation. A vertical antenna is said to be vertically polarized because its electric field has a vertical orientation. An AM broadcast antenna is vertically polarized, requiring the receiving antenna to be oriented vertically also, as in an automobile installation. Television and FM broadcast transmitters use horizontally polarized antennas.
For efficient radiation the required length of a transmitting (and receiving) dipole antenna must be half a wavelength or some multiple of a half-wavelength. Thus an FM station that broadcasts at 100 MHz, which has a wavelength of 3 m (9 ft 10 in), should have a horizontally polarized antenna 1.5 m (4 ft 11 in) in length. Receiving antennas (sometimes in the form of "rabbit ears") should be approximately the same length and placed horizontally.
For an AM station broadcasting at 1,000 kHz, the length should be 150 m (492 ft). This is an impractical length, especially when it must be mounted vertically. In this case, a quarter-wavelength Marconi antenna is often used, with the ground (earth), serving as the other quarter wavelength.
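The antenna lengths quoted above follow directly from wavelength = c / f; the short Python check below (my own, not part of the article) reproduces the 1.5 m FM dipole and the impractical 150 m AM half-wave figure.

C = 3.0e8  # speed of light in m/s (approximate)

def half_wave_length_m(frequency_hz):
    """Length of a half-wave dipole for a given carrier frequency."""
    return (C / frequency_hz) / 2

print(half_wave_length_m(100e6))   # FM at 100 MHz  -> 1.5 m
print(half_wave_length_m(1000e3))  # AM at 1,000 kHz -> 150.0 m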
The Receiver
When the modulated carrier reaches the receiving antenna, a small voltage is induced. This may be as small as 0.1 microvolt in some commercial communication receivers but is typically 50 microvolts in a standard AM broadcast receiver. This voltage is coupled to a tunable circuit, which consists of a coil and a variable capacitor. The capacitor has a set of fixed metal plates and a set of movable plates. When one set of plates is moved with respect to the other, the capacitance is changed, making the circuit sensitive to a different, narrow frequency range. The listener thus selects which transmitted signal the receiver should reproduce.
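The article does not give the tuning formula, but for a simple coil-and-capacitor circuit the selected frequency is commonly taken as f = 1 / (2*pi*sqrt(LC)). The component values in this sketch are only illustrative assumptions.

import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC tuning circuit."""
    return 1 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values: a 240 microhenry coil with the variable capacitor
# swung from 40 pF to 365 pF sweeps roughly across the AM broadcast band.
for c in (40e-12, 365e-12):
    print(round(resonant_frequency_hz(240e-6, c) / 1000), "kHz")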
The Crystal Receiver. An early method of detecting radio waves was the crystal receiver. A crystal of galena or carborundum along with a movable pointed wire called a cat whisker provides a simple rectifier. This component lets current flow in one direction only, so that only the upper half of the modulated wave can pass. A capacitor is then used to filter out the unwanted high-frequency carrier, leaving the audio to operate the earphones. No external power or amplifiers are used, so the only source of power in the earphones is the signal. Only strong signals are audible, but with a long antenna and a good ground, reception of a signal from 1,600 km (1,000 mi) away is sometimes possible.
The TRF Receiver. Following the development of the triode, it became possible to increase selectivity, sensitivity, and audio output power in tuned-radio-frequency (TRF) receivers. This involved a number of stages of radio-frequency amplification prior to the detection stage. In early receivers each of these stages had to be separately tuned to the incoming frequency--a difficult task. Even after single-dial tuning was achieved by ganging together the stages, the TRF was susceptible to breaking into oscillation and was not suitable for tuning over a wide range of frequencies. The principle is still used, however, in some modern shipboard emergency receivers and fixed-frequency microwave receivers.
The Superheterodyne Receiver. Practically all modern radio receivers use the heterodyne principle. The incoming modulated signal is combined with the output of a tunable local oscillator whose frequency is always a fixed amount above the incoming signal. This process, called frequency conversion or heterodyning, takes place in a mixer circuit. The output of the mixer is a radio frequency that contains the original information at the antenna. This frequency, called the intermediate frequency (IF), is typically 455 kHz in AM broadcast receivers. No matter what the frequency that the receiver is tuned to, the intermediate frequency is always the same; it contains the information of the desired station. As a result, all further stages of radio-frequency amplification can be designed to operate at this fixed intermediate frequency.
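As an illustrative sketch (not from the article), the frequency bookkeeping of a broadcast superheterodyne can be written out in a few lines of Python: whatever station is tuned, the local oscillator sits 455 kHz above it, so the mixer's difference frequency is always the same 455 kHz IF.

IF_KHZ = 455  # standard intermediate frequency in AM broadcast receivers

def local_oscillator_khz(station_khz):
    """Local-oscillator frequency for a high-side-injection superheterodyne."""
    return station_khz + IF_KHZ

def mixer_difference_khz(station_khz):
    """Difference frequency out of the mixer -- always equal to the IF."""
    return local_oscillator_khz(station_khz) - station_khz

for station in (540, 1010, 1600):          # example AM stations, kHz
    print(station, local_oscillator_khz(station), mixer_difference_khz(station))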
After detection, audio amplifiers boost the signal to a level capable of driving a loudspeaker.
Comparison of AM and FM
Although the method of detection differs in AM and FM receivers, the same heterodyne principle is used in each. An FM receiver, however, generally includes automatic frequency control (AFC). If the frequency of the local oscillator drifts from its correct value the station will fade. To avoid this problem, a DC voltage is developed at the detector and fed back to the local oscillator. This voltage is used to change automatically the frequency output of the local oscillator to maintain the proper intermediate frequency. Both AM and FM receivers incorporate automatic gain control (AGC), sometimes called automatic volume control (AVC). If a strong station is tuned in, the volume of the sound would tend to be overwhelming if the volume control had previously been set for a weak station. This drawback is overcome by the use of negative feedback--a DC voltage is developed at the detector and used to reduce automatically the gain, or amplification, of the IF amplifiers.
The prime advantage of FM, in addition to its fidelity, is its immunity to electrical noise. Lightning storms superimpose noise on an AM signal by increasing the amplitude of the signal. This effect shows up in a receiver as a crackling noise. An FM receiver, because it decodes only the frequency variations, has a limiter circuit that restricts any amplitude variations that may result from added noise.
Single Sideband Systems
When an audio signal of 5 kHz is used to amplitude-modulate a carrier, the output of the transmitter contains sideband frequencies in addition to the carrier frequency. The upper sideband frequencies extend to 5 kHz higher than the carrier, and the lower sideband frequencies extend to 5 kHz lower than the carrier. In normal AM broadcasts both sidebands are transmitted, requiring a bandwidth in the frequency spectrum of 10 kHz, centered on the carrier frequency. The audio signal, however, is contained in and may be retrieved from either the upper or lower sideband. Furthermore, the carrier itself contains no useful information. Therefore, the only part that needs to be transmitted is one of the sidebands. A system designed to do this is called a single sideband suppressed carrier (abbreviated SSBSC, or SSB for short). This is an important system because it requires only half of the bandwidth needed for ordinary AM, thus allowing more channels to be assigned in any given portion of the frequency spectrum. Also, because of the reduced power requirements, a 110-watt SSB transmitter may have a range as great as that of a 1,000-watt conventional AM transmitter. Almost all ham radios, commercial radiotelephones, and marine-band radios, as well as citizens band radios, use SSB systems. Receivers for such systems are more complex, however, than those for other systems. The receiver must reinsert the nontransmitted carrier before successful heterodyning can take place.
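A tiny sketch (my own, using the 5 kHz audio limit quoted above) shows why SSB halves the occupied bandwidth: conventional AM must carry both sidebands around the carrier, while SSB keeps only one of them.

def am_band_edges_khz(carrier_khz, max_audio_khz=5):
    """Occupied band for double-sideband AM: carrier +/- highest audio frequency."""
    return carrier_khz - max_audio_khz, carrier_khz + max_audio_khz

def ssb_band_edges_khz(carrier_khz, max_audio_khz=5, upper=True):
    """Occupied band for single sideband (upper or lower), carrier suppressed."""
    if upper:
        return carrier_khz, carrier_khz + max_audio_khz
    return carrier_khz - max_audio_khz, carrier_khz

lo, hi = am_band_edges_khz(1000)
print("AM:", hi - lo, "kHz wide")      # 10 kHz
lo, hi = ssb_band_edges_khz(1000)
print("SSB:", hi - lo, "kHz wide")     # 5 kHz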
Radio has become a sophisticated and complex area of electrical engineering, especially when compared to its elementary origin. Every day new radio applications are being found, ranging from digital radio-controlled garage-door openers to weather satellites and from tracking systems for polar bear migrations to radio telescope investigations of the universe. This multiplicity of uses demonstrates the important part radio plays in the world today.
f:\12000 essays\sciences (985)\Physics\research plan.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Science Fair Research Plan
Problem:
Which material is the best insulator against the electricity that a six-volt battery produces?
Hypothesis:
We believe that the plastic coating will insulate against electricity the most effectively.
Detailed Description of the Procedure:
First we will gather all materials necessary for experimentation: a twelve-inch strand of bare copper wire, an Eveready six-volt battery, a twelve-inch copper wire coated with plastic, a twelve-inch copper wire covered with cloth, a twelve-inch copper wire coated with college-ruled notebook paper, a magnetic compass 1, and a college-ruled notebook that we will use to record our data.
Next we will take a plain copper wire and strategically attach the copper wire to the six volt lantern battery made by the Eveready corporation so that each strand of the forked tongue is attached to either the positive or negative electricity ports of the battery.
1 A magnetic compass is a device that humans use to tell in which direction they are walking or going.
Next we remove our magnetic compass from the pile of supplies that we have previously mentioned. We will then place the magnetic compass on solid asphalt in an attempt to rule out any unnecessary electromagnetism.
We will then twist the copper wire so that the negative and positive electricity that is passing through the wire will be equally distributed upon the magnetic compass. We will then place this newly twisted copper wire exactly one inch from the magnetic compass which has previously been laid on the solid asphalt.
For the next step we will record the exact number of degrees that the compass needle, representing North, moves.
We will then record all information including the number of degrees that the compass point turned or pivoted and the trial that this motion took place in and the type of insulator being tested ( also other observations ).
Next we will take a plastic insulated copper wire and strategically attach the copper wire to the six volt lantern battery made by the Eveready corporation so that each strand of the forked tongue is attached to either the positive or negative electricity ports of the battery.
We will then twist the copper wire covered with plastic so that the negative and positive electricity that is passing through the wire will be equally distributed upon the magnetic compass. We will then place this newly twisted copper wire insulated with plastic exactly one inch from the magnetic compass which has previously been laid on the solid asphalt.
For the next step we will record the exact number of degrees that the compass needle, representing North, moves.
We will then record all information including the number of degrees that the compass point turned or pivoted and the trial that this motion took place in and the type of insulator being tested ( also other observations ).
Next we will take a cloth insulated copper wire and strategically attach the copper wire to the six volt lantern battery made by the Eveready corporation so that each strand of the forked tongue is attached to either the positive or negative electricity ports of the battery.
We will then twist the copper wire covered with cloth so that the negative and positive electricity that is passing through the wire will be equally distributed upon the magnetic compass. We will then place this newly twisted copper wire insulated with cloth exactly one inch from the magnetic compass which has previously been laid on the solid asphalt.
For the next step we will record the exact number of degrees that the compass needle, representing North, moves.
We will then record all information including the number of degrees that the compass point turned or pivoted and the trial that this motion took place in and the type of insulator being tested ( also other observations ).
Next we will take a paper insulated copper wire and strategically attach the copper wire to the six volt lantern battery made by the Eveready corporation so that each strand of the forked tongue is attached to either the positive or negative electricity ports of the battery.
We will then twist the copper wire covered with paper so that the negative and positive electricity that is passing through the wire will be equally distributed upon the magnetic compass. We will then place this newly twisted copper wire insulated with paper exactly one inch from the magnetic compass which has previously been laid on the solid asphalt.
For the next step we will record the exact number of degrees that the compass needle, representing North, moves.
We will then record all information including the number of degrees that the compass point turned or pivoted and the trial that this motion took place in and the type of insulator being tested ( also other observations ).
We will then analyze our data, draw conclusions, and record any observations in our data book. We will also compose a brief but detailed summary or conclusion answering the problem or purpose of Samuel Rovine and Gordie Stewart's Science Fair Experimentation Project ( S. F. E. P. ).
The following is the data chart in which we will record all our data (the number of degrees the compass point turns in each trial) for this science fair project.

Trial Number    Plastic Insulator    Cloth Insulator    Paper Insulator    No Insulator (exposed wire)
This research plan was developed, written, and typed by Gordie Stewart and Samuel Rovine. We hope that you now have a good understanding of the workings of our Science Fair Experimentation Project ( S. F. E. P. ).
Sincerely,
Gordie Stewart, Sam Rovine
Bibliography
1. Encarta 1997 Deluxe Edition (compact disc)
2. Grolier's 1995 Edition, vol. 16 (book version)
3. Compton's 1996 Edition (compact disc)
f:\12000 essays\sciences (985)\Physics\Resonance.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Resonance
RESONANCE: " The property whereby any vibratory system responds with
maximum amplitude to an applied force having the a frequency equal to its
own."
In english, this means that any solid object that is struck with a sound
wave of equal sound wave vibrations will amplitude the given tone. This
would explain the reason why some singers are able to break wine glasses
with their voice. The vibrations build up enough to shatter the glass. This is
called RESONANCE.
Resonance can be observed in a tube with one end open. Musical tones can be produced by vibrating columns of air. When air is blown across the top of the open end of a tube, a compression wave passes along the tube. When it reaches the closed end, it is reflected. The molecules of reflected air meet the molecules of oncoming air, forming a node at the closed end. When the wave reaches the open end, the reflected compression wave becomes a rarefaction. It bounces back through the tube to the closed end, where it is reflected. The wave has now completed a single cycle. It has passed through the tube four times, which makes the closed tube one fourth the length of a sound wave. With a continuous sound of the right frequency, standing waves are produced in the tube. This creates a pure tone.
We can use this knowledge of the one-fourth wavelength to create our own demonstration. It does not have to be done using wind; it can also be demonstrated using tuning forks. If the frequency of the tuning fork is known, then v = f x wavelength gives the length of the air column. Using a tuning fork of frequency 512 c/s, with the speed of sound taken as 332 + 0.6T m/s at a temperature of 22 degrees, substitute into the formula.
Calculate the 1/4 wavelength:
v = f x wavelength
wavelength = v / f
           = 345.2 (m/s) / 512 (c/s)
           = 0.674 m
1/4 wavelength = 0.674 m / 4
               = 0.169 m
Therefore the resonant air column for a tuning fork of frequency 512 c/s at a temperature of 22 degrees would be about 16.9 cm long, and the pure tone produced is C. If this were done with other tuning forks with frequencies of 480, 426.7, 384, 341.3, 320, 288, and 256 c/s, then a scale in the key of C would be produced.
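A short Python sketch (mine, not part of the original report) repeats the same quarter-wavelength calculation for every fork in the list, using the same 332 + 0.6T approximation for the speed of sound.

def quarter_wavelength_m(frequency_hz, temperature_c=22):
    """Resonant closed-tube air column length (one quarter wavelength)."""
    speed_of_sound = 332 + 0.6 * temperature_c   # m/s, linear approximation
    wavelength = speed_of_sound / frequency_hz
    return wavelength / 4

# Tuning-fork frequencies (c/s) from the scale described above.
for f in (512, 480, 426.7, 384, 341.3, 320, 288, 256):
    print(f, round(quarter_wavelength_m(f) * 100, 1), "cm")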
There are many applications of this in nature. One example would be the human voice. Our vocal cords create sound waves with a given frequency, just like the tuning fork.
One of the first applications of the wind instrument was in ancient Greece, where the pipes of Pan were created. Pipes of hollow reeds were bound together, all of different lengths. When Pan, the god of fields, blew across his pipes, the tones of a musical scale were heard. Later reproductions of the same type were created, and musical instruments are heard all over the world thanks to the law of resonance.
Bibliography
Granet, Charles; Sound and Hearing; Abelard-Schuman, Toronto; 1965
Freeman, Ira M.; Sound and Ultrasonics; Random House, New York; 1968
Freeman, Ira M.; Physics Made Simple; Doubleday, New York; 1965
Jones, G.R.; Acoustics; English Univ. Press, London; 1967
White, Harvey E.; Physics and Music; Saunders College, Philadelphia; 1980
Funk and Wagnalls; Standard Desk Dictionary; Harper & Row, USA; 1985
f:\12000 essays\sciences (985)\Physics\Scientific Report On Heat Transfer.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Heat Transfer
Aim: Our Aim Is To Record The Temperature Of The Water After We Have
Left The Nut In There For A Designated Period Of Time.
Hypothesis: If We Put The Heated Nut Into The Water Then The Water
Temperature Will Rise.
Apparatus: For This Experiment We Need: Container, 200ml Of Water,
Thermometer, Bunsen Burner, Stop Watch, Matches And Metal Tongs
Method:
1. Light The Bunsen Burner
2. Change The Bunsen Burner's Flame To Blue
3. Combine The Metal Tongs With The Nut And Wire
4. Hold It Over The Bunsen Burner For The Designated Amount Of
Time
5. Take It Off The Bunsen Burner And Place It Straight Into The
Water
6. Leave It In For The Designated Amount Of Time
7. Place The Thermometer In And Let It Sit There For About 30
Seconds
8. Take The Thermometer Reading And Put It In The Results Section
Diagram: Heat Transfer apparatus (drawing not reproduced)
Results:
Time In Bunsen Burner    Water Temperature
30 Seconds               3.1 Degrees
1 Minute                 3.3 Degrees
2 Minutes                3.7 Degrees
Conclusion: Our Results Prove Our Hypothesis Is True: "If We Put The Heated Nut Into The Water Then The Water Temperature Will Rise". Also We Proved That Water Is A Very Poor Conductor Of Heat.
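The report does not give a formula, but the underlying physics is the standard calorimetry heat balance, Q = m * c * delta-T. The Python sketch below is my own illustration of it, with hypothetical masses, temperatures, and a specific heat assumed for a steel nut; it is not the experiment's actual data.

def equilibrium_temperature_c(m_water_g, t_water_c, m_nut_g, t_nut_c,
                              c_water=4.18, c_steel=0.45):
    """Mixing temperature assuming heat lost by the nut equals heat gained
    by the water (Q = m * c * delta_T), with no losses to the container.
    Specific heats are in J/(g*K); the steel value is an assumption."""
    heat_capacity_water = m_water_g * c_water
    heat_capacity_nut = m_nut_g * c_steel
    return ((heat_capacity_water * t_water_c + heat_capacity_nut * t_nut_c)
            / (heat_capacity_water + heat_capacity_nut))

# Hypothetical numbers: 200 g (200 ml) of water at 20 C, a 10 g nut at 300 C.
print(round(equilibrium_temperature_c(200, 20, 10, 300), 1))  # about 21.5 C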
f:\12000 essays\sciences (985)\Physics\simple machines.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Definitions:
Machine- A device that makes work easier by changing the speed, direction, or amount of a force.
Simple Machine- A device that performs work with only one movement. Simple machines include the lever, pulley, wheel and axle, inclined plane, screw, and wedge.
Ideal Mechanical Advantage (IMA)- The mechanical advantage of an ideal machine, one in which work in equals work out; such a machine would be frictionless and 100% efficient.
IMA= De/Dr
Actual Mechanical Advantage (AMA)- The mechanical advantage a real machine actually provides; unlike the ideal case, it accounts for friction and is less than 100% efficient.
AMA= Fr/Fe
Efficiency- The ratio of useful work put out by a machine to the work put into it; always between 0% and 100%.
Friction- The force that resists motion between two surfaces that are touching each other.
What do we use machines for?
Machines are used for many things. Machines are used in everyday life just to make things easier. You use many machines in a day that you might take for granted. For example, a simple ordinary broom is a machine; it is a form of a lever. Our country and world would never have evolved this far if it weren't for machines. Almost everything we do has a machine involved. We use machines to manufacture goods, for transportation, etc.
In the W=F*d equation, the trade-off between force and distance is that as you use a machine the force goes down and the distance goes up. If there were no friction, the work in and the work out would be equal.
There are six simple machines. They are the lever, pulley, inclined plane, wheel and axle, screw, and wedge. The lever is used very often; an example of a lever is a broom. Your hand is the fulcrum, and when you sweep, it acts as a lever. A lever consists of a fulcrum, effort, and resistance. A pulley is used to lift or pull objects with an advantage. The advantage depends on how many lines are going to the load; for example, if there are 3 lines to the load, it is a 3/1 advantage. An inclined plane is used to lift an object more easily but with more distance: instead of lifting it straight up you push it a greater distance but with less force. A screw is an inclined plane wrapped around a cylindrical post; it's like a ramp around the screw.
A wedge is an inclined plane with one or two sloping sides. Chisels, knives, and ax blades are examples of wedges.
IMA is ideal mechanical advantage, meaning a frictionless machine with 100% efficiency; work in and work out are exactly the same. AMA is actual mechanical advantage, meaning there is friction in the machine and the efficiency can range from 0% to 99%. The difference between the two is that one assumes no friction and perfect efficiency while the other accounts for them.
Machines can never be 100% efficient because of friction. In an ideal world there is no friction, but we don't live in an ideal world. We have friction, so efficiency may be high or low, but it is never 100%.
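As an illustrative sketch (not part of the original text), the definitions above can be turned into a few lines of Python for an inclined plane: IMA from the distance ratio, AMA from the force ratio, and efficiency as work out over work in. The ramp dimensions and forces are made-up values.

def ima(effort_distance, resistance_distance):
    """Ideal mechanical advantage: IMA = De / Dr."""
    return effort_distance / resistance_distance

def ama(resistance_force, effort_force):
    """Actual mechanical advantage: AMA = Fr / Fe."""
    return resistance_force / effort_force

def efficiency(resistance_force, resistance_distance, effort_force, effort_distance):
    """Efficiency = useful work out / work in, always below 100% with friction."""
    return (resistance_force * resistance_distance) / (effort_force * effort_distance)

# Hypothetical inclined plane: a 4 m ramp lifting a load 1 m, where a 300 N
# load actually needs a 90 N push because of friction.
print(ima(4, 1))                       # 4.0
print(ama(300, 90))                    # about 3.33
print(efficiency(300, 1, 90, 4))       # about 0.83 (83%)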
Added discussion
Believe it or not, your arm is a simple machine. Your arm is a form of lever. The muscles and bones work together as levers. The two bones in your forearm act as the bar of the lever. Your elbow is the fulcrum of your lever. When you contract your biceps, this muscle exerts an effort force on the forearm bones to pull upward. The weight of the entire forearm acts as the resistance force.
f:\12000 essays\sciences (985)\Physics\SolarSystem.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Solar cells today are mostly made of silicon, one of the most common elements on Earth. The crystalline silicon solar cell was one of the first types to be developed and it is still the most common type in use today. They do not pollute the atmosphere and they leave behind no harmful waste products. Photovoltaic cells work effectively even in cloudy weather and, unlike solar heaters, are more efficient at low temperatures. They do their job silently and there are no moving parts to wear out. It is no wonder that one marvels at how such a device functions.
To understand how a solar cell works, it is necessary to go back to some basic atomic concepts. In the simplest model of the atom, electrons orbit a central nucleus, composed of protons and neutrons. Each electron carries one negative charge and each proton one positive charge. Neutrons carry no charge. Every atom has the same number of electrons as protons, so, on the whole, it is electrically neutral. The electrons have discrete kinetic energy levels, which increase with the orbital radius. When atoms bond together to form a solid, the electron energy levels merge into bands. In electrical conductors, these bands are continuous, but in insulators and semiconductors there is an "energy gap", in which no electron orbits can exist, between the inner valence band and the outer conduction band [Book 1]. Valence electrons help to bind together the atoms in a solid by orbiting two adjacent nuclei, while conduction electrons, being less closely bound to the nuclei, are free to move in response to an applied voltage or electric field. The fewer conduction electrons there are, the higher the electrical resistivity of the material.
In semiconductors, the materials from which solar cells are made, the energy gap Eg is fairly small. Because of this, electrons in the valence band can easily be made to jump to the conduction band by the injection of energy, either in the form of heat or light [Book 4]. This explains why the high resistivity of semiconductors decreases as the temperature is raised or the material is illuminated. The excitation of valence electrons to the conduction band is best accomplished when the semiconductor is in the crystalline state, i.e. when the atoms are arranged in a precise geometrical formation or "lattice".
At room temperature and low illumination, pure or so-called "intrinsic" semiconductors have a high resistivity. But the resistivity can be greatly reduced by "doping", i.e. introducing a very small amount of impurity, of the order of one in a million atoms. There are two kinds of dopant. Those which have more valence electrons than the semiconductor itself are called "donors" and those which have fewer are termed "acceptors" [Book 2].
In a silicon crystal, each atom has 4 valence electrons, which are shared with a neighbouring atom to form a stable tetrahedral structure. Phosphorus, which has 5 valence electrons, is a donor and causes extra electrons to appear in the conduction band. Silicon so doped is called "n-type" [Book 5]. On the other hand, boron, with a valence of 3, is an acceptor, leaving so-called "holes" in the lattice, which act like positive charges and render the silicon "p-type" [Book 5]. The drawings in Figure 1.2 are 2-dimensional representations of n- and p-type silicon crystals, in which the atomic nuclei in the lattice are indicated by circles and the bonding valence electrons are shown as lines between the atoms. Holes, like electrons, will move under the influence of an applied voltage but, as the mechanism of their movement is valence electron substitution from atom to atom, they are less mobile than the free conduction electrons [Book 2].
In an n-on-p crystalline silicon solar cell, a shallow junction is formed by diffusing phosphorus into a boron-doped base. At the junction, conduction electrons from donor atoms in the n-region diffuse into the p-region and combine with holes in acceptor atoms, producing a layer of negatively-charged impurity atoms. The opposite action also takes place, holes from acceptor atoms in the p-region crossing into the n-region, combining with electrons and producing positively-charged impurity atoms [Book 4]. The net result of these movements is the disappearance of conduction electrons and holes from the vicinity of the junction and the establishment there of a reverse electric field, which is positive on the n-side and negative on the p-side. This reverse field plays a vital part in the functioning of the device. The area in which it is set up is called the "depletion area" or "barrier layer" [Book 4].
When light falls on the front surface, photons with energy in excess of the energy gap (1.1 eV in crystalline silicon) interact with valence electrons and lift them to the conduction band. This movement leaves behind holes, so each photon is said to generate an "electron-hole pair" [Book 2]. In crystalline silicon, electron-hole generation takes place throughout the thickness of the cell, in concentrations depending on the irradiance and the spectral composition of the light. Photon energy is inversely proportional to wavelength. The highly energetic photons in the ultraviolet and blue part of the spectrum are absorbed very near the surface, while the less energetic longer-wave photons in the red and infrared are absorbed deeper in the crystal and further from the junction [Book 4]. Most are absorbed within a thickness of 100 um.
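As an illustrative sketch (not from the essay), the relation E = hc / wavelength can be used to check which wavelengths carry more than the 1.1 eV silicon energy gap; the cutoff works out to roughly 1.1 micrometres, so visible light qualifies while far-infrared photons do not. The wavelengths below are arbitrary test values.

H = 6.626e-34      # Planck's constant, J*s
C = 3.0e8          # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Photon energy in eV; inversely proportional to wavelength."""
    return H * C / wavelength_m / EV

band_gap_ev = 1.1  # crystalline silicon

for wavelength_nm in (400, 700, 1100, 1500):   # blue, red, near-IR, IR
    e = photon_energy_ev(wavelength_nm * 1e-9)
    print(wavelength_nm, "nm:", round(e, 2), "eV,",
          "absorbed" if e > band_gap_ev else "not absorbed")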
The electrons and holes diffuse through the crystal in an effort to produce an even distribution. Some recombine after a lifetime of the order of one millisecond, neutralizing their charges and giving up energy in the form of heat. Others reach the junction before their lifetime has expired. There they are separated by the reverse field, the electrons being accelerated towards the negative contact and the holes towards the positive [Book 5]. If the cell is connected to a load, electrons will be pushed from the negative contact through the load to the positive contact, where they will recombine with holes. This constitutes an electric current. In crystalline silicon cells, the current generated by radiation of a particular spectral composition is directly proportional to the irradiance [Book 2]. Some types of solar cell, however, do not exhibit this linear relationship.
The silicon solar cell has many advantages: it is highly reliable, photovoltaic power plants can be put up easily and quickly, and such plants are quite modular and can respond to the sudden changes in solar input which occur when clouds pass by. However, there are still some major problems. Solar cells still cost too much for mass use and are relatively inefficient, with conversion efficiencies of 20% to 30%. With time, both of these problems will be solved through mass production and new technological advances in semiconductors.
Bibliography
1) Green, Martin. Solar Cells: Operating Principles, Technology and System Applications. New Jersey, Prentice-Hall, 1989. pg 104-106
2) Hovel, Howard. Solar Cells, Semiconductors and Semimetals. New York, Academic Press, 1990. pg 334-339
3) Newham, Michael. "Photovoltaics, The Sunrise Industry", Solar Energy, October 1, 1989, pp 253-256
4) Pulfrey, Donald. Photovoltaic Power Generation. Oxford, Van Nostrand Co., 1988. pg 56-61
5) Treble, Fredrick. Generating Electricity from the Sun. New York, Pergamon Press, 1991. pg 192-195
f:\12000 essays\sciences (985)\Physics\statics and strength of materials.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF CONTENTS
INTRODUCTION
Chapter
I. General Principles
II. Systems of Force
III. Stress
IV. Properties of Material
V. Bolted and Welded Joints
VI. Beams -- A Practical Application
VII. Beam Design
VIII. Torsional Loading: Shafts, Couplings, and Keys
IX. Conclusion
BIBLIOGRAPHY
INTRODUCTION
Mechanics is the physical science concerned with the dynamic behavior of bodies that are acted on by mechanical disturbances. Since such behavior is involved in virtually all the situations that confront an engineer, mechanics lies at the core of much engineering analysis. In fact, no physical science plays a greater role in engineering than does mechanics, and it is the oldest of all physical sciences. The writings of Archimedes covering buoyancy and the lever were recorded before 200 B.C. Our modern knowledge of gravity and motion was established by Isaac Newton (1642-1727).
Mechanics can be divided into two parts: (1) statics, which relates to bodies at rest, and (2) dynamics, which deals with bodies in motion. In this paper we will explore the static dimension of mechanics and discuss the various types of force on an object and the strengths of different materials.
The term strength of materials refers to the ability of the individual parts of a machine or structure to resist loads. It also permits the selection of materials and the determination of dimensions to ensure the sufficient strength of the various parts.
General Principles
Before venturing to explain statics, one must have a firm grasp of classical mechanics. This is the study of Newton's laws and their extensions. Newton's three laws were originally stated as follows:
1. Every body continues in its state of rest, or of uniform motion in a straight line, unless it is compelled to change that state by forces impressed on it.
2. The change of motion is proportional to the motive force impressed and is made in the direction in which that force is impressed.
3. To every action there is always opposed an equal reaction; or the mutual actions of two bodies on each other are equal and direct to contrary parts.
Newton's law of gravitational attraction pertains to celestial bodies or any objects on which gravity acts, and states: "Two particles will be attracted toward each other along their connecting line with a force whose magnitude is directly proportional to the product of the masses and inversely proportional to the distance squared between the particles."
When one of the two objects is the earth and the other object is near the surface of the earth (where r is about 6400 km), the quantity Gm(earth)/r^2 is essentially constant, and the attraction law becomes f = mg.
Another essential law to consider is the Parallelogram Law. Stevinus (1548-1620) was the first to demonstrate that forces could be combined by representing them by arrows to some suitable scale, and then forming a parallelogram in which the diagonal represents the sum of the two forces. All vectors must combine in this manner.
When solving statics problems represented as a triangle of forces, three common theorems are useful:
1. Pythagorean theorem. In any right triangle, the square of the hypotenuse is equal to the sum of the squares of the two legs:
c^2 = a^2 + b^2
2. Law of sines. In any triangle, the sides are to each other as the sines of the opposite angles:
a / sin A = b / sin B = c / sin C
3. Law of cosines. In any triangle, the square of any side is equal to the sum of the squares of the other two sides minus twice the product of the sides and the cosine of their included angle:
c^2 = a^2 + b^2 - 2ab cos C
With an understanding of Newton's laws, these three triangle theorems, and vector algebra, you can solve most engineering statics problems.
Systems of Force
Systems of force acting on objects in equilibrium can be classified as either concurrent or nonconcurrent and as either coplanar or noncoplanar. This gives us four general categories of systems.
The first category, concurrent-coplanar forces, occurs when the lines of action of all forces lie in the same plane and pass through a common point. Figure 1 illustrates a concurrent-coplanar force system in which F1, F2, and W all lie in the same plane (the paper) and all their lines of action have point O in common. To determine the resultant of concurrent force systems, you can use the Pythagorean theorem, the law of sines, or the law of cosines as outlined in the previous chapter.
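As an illustrative sketch (mine, with made-up force values), the resultant of two concurrent coplanar forces can be found either from rectangular components or from the law of cosines quoted in the previous chapter; both give the same magnitude.

import math

def resultant_from_components(f1, a1_deg, f2, a2_deg):
    """Resultant magnitude of two concurrent coplanar forces, each given as
    a magnitude and a direction angle in degrees."""
    rx = f1 * math.cos(math.radians(a1_deg)) + f2 * math.cos(math.radians(a2_deg))
    ry = f1 * math.sin(math.radians(a1_deg)) + f2 * math.sin(math.radians(a2_deg))
    return math.hypot(rx, ry)

def resultant_from_cosine_law(f1, f2, angle_between_deg):
    """Same result via R^2 = F1^2 + F2^2 + 2*F1*F2*cos(angle between the forces)."""
    a = math.radians(angle_between_deg)
    return math.sqrt(f1**2 + f2**2 + 2 * f1 * f2 * math.cos(a))

# Hypothetical forces: 100 N along 0 degrees and 60 N along 45 degrees.
print(round(resultant_from_components(100, 0, 60, 45), 1))   # about 148.6 N
print(round(resultant_from_cosine_law(100, 60, 45), 1))      # about 148.6 N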
A nonconcurrent-coplanar force system is one in which the lines of action of all forces lie in the same plane but do not pass through a common point, as illustrated in figure 2. The magnitude and direction of the resultant force can be determined by the rectangular component method using the first two equations in figure 2, and the perpendicular distance of the line of action of R from the axis of rotation of the body can be found using the third equation in figure 2.
Concurrent-noncoplanar force systems are those in which the lines of action of all forces pass through a common point but do not all lie in the same plane. To find the resultant of these forces it is best to resolve each force into components along three axes that make angles of 90 degrees with each other.
Nonconcurrent-noncoplanar force systems are those in which the lines of action of all forces do not pass through a common point and the forces do not all lie in the same plane.
Stress
When a restrained body is subjected to external forces, there is a tendency for its shape to be deformed or changed. Since materials are not perfectly rigid, the applied forces will cause the body to deform. The internal resistance of the fibers of a body to deformation is called stress. Stress can be classified as either simple stress, sometimes referred to as direct stress, or indirect stress.
The various types of direct stress are tension, compression, shear, and bearing. The various types of indirect stress are bending and torsion. A third variety of stress is categorized as any combination of direct and indirect stress.
Simple stress is developed under direct loading conditions. That is, simple tension and simple compression occur when the applied force is in line with the axis of the member, and simple shear occurs when equal, parallel, and opposite forces tend to cause a surface to slide relative to the adjacent surface. When any type of simple stress develops we can calculate the magnitude of the stress by the formula s = F / A, where:
· s = average unit stress;
· F = external force causing stress to develop;
· A = area over which stress develops.
Indirect stress, or stress due to bending, should properly be classified under statics of rigid bodies and not under strength of materials. The bending moment in a beam depends only on the loads on the beam and on its consequent support reactions. Torsion occurs when a shaft is acted upon by two equal and opposite twisting moments in parallel planes. Torsion can be either stationary or rotating uniformly. Indirect stress will be discussed in detail in later sections.
Properties of Material
In order for the engineer to effectively design any item, whether it is a frame which holds an object or a complicated piece of automated machinery, it is very important to have a strong knowledge of the mechanical and physical properties of metals, wood, concrete, plastics and composites, and any other material an engineer is considering using to construct an object. The rest of this paper will deal with strength of materials and how to best choose a material and construction technique to effectively accomplish what was set out without "over-engineering."
Strength of materials deals with the relationship between the external forces applied to elastic bodies and the resulting deformations and stresses. In the design of structures and machines, the application of the principles of strength of materials is necessary if satisfactory materials are to be utilized and adequate proportions obtained to resist functional forces.
In today's global economy it is crucial for success to be able to build the "biggest and best" while spending the least. To do that successfully it is imperative to have a firm understanding of different materials and their correct uses. The load per unit area, called stress, and the deformation per unit length, called strain, must be understood. The formula for stress is:
stress = F / A (load divided by cross-sectional area)
The formula for strain is:
strain = delta L / L (change in length divided by original length)
The amount of stress and strain a material can endure before deformation occurs is known as the proportional limit. Up to this point, any stress or strain induced into the material will allow the material to return to its original shape. When stress and strain exceed the proportional limit of the material and a permanent deformation, or set, occurs, the object is said to have reached its elastic limit. Modulus of elasticity, also called Young's modulus, is the ratio of unit stress to unit strain within the proportional limit of a material in tension or compression; a short calculation sketch follows the list below. Some representative values of Young's modulus (in 10^6 psi) are as follows:
· Aluminum, cast, pure...............................................9
· Aluminum, wrought, 2014-T6............................10.6
· Beryllium copper...................................................19
· Brass, naval............................................................15
· Titanium, alloy, 5 Al, 2.5 Sn.................................17
· Steel for buildings and bridges, ASTM A7-61T...29
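A minimal sketch (my own illustration, with hypothetical rod dimensions) showing how the stress, strain, and Young's modulus definitions above fit together for a steel rod in tension; the result lands near the 29 x 10^6 psi figure listed for structural steel.

def stress_psi(force_lb, area_in2):
    """Stress = load per unit area."""
    return force_lb / area_in2

def strain(delta_length_in, original_length_in):
    """Strain = deformation per unit length (dimensionless)."""
    return delta_length_in / original_length_in

def youngs_modulus_psi(stress_value, strain_value):
    """Young's modulus = unit stress / unit strain within the proportional limit."""
    return stress_value / strain_value

# Hypothetical steel rod: 0.5 in^2 cross-section, 20 in long, stretched
# 0.0138 in under a 10,000 lb pull.
s = stress_psi(10_000, 0.5)            # 20,000 psi
e = strain(0.0138, 20)                 # 0.00069
print(round(youngs_modulus_psi(s, e) / 1e6, 1), "x 10^6 psi")  # about 29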
Once the elastic limit of a material is reached, the material will elongate rather easily without a significant increase in the load. This is known as the yield point of the material. Not all materials have a yield point. Some repre
f:\12000 essays\sciences (985)\Physics\Steam Turbines.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Steam Turbines
The invention of the water turbine was so successful that eventually, the idea came about
for extracting power from steam. Steam has one great advantage over water-it expands in
volume with tremendous velocity. To be the most effective, a steam turbine must run at a very
high speed. No wheel made can revolve at any speed approaching the velocity that a steam
turbine can. By utilizing the kinetic energy of steam flow, the turbine could achieve a higher
efficiency. As a result, the steam turbine has supplanted the reciprocating engine as a prime
mover in large electricity-generating plants and is also used as a means of jet propulsion.
The action of the steam turbine is based on the thermodynamic principle that when a vapor
is allowed to expand, its temperature drops. In turn, its internal energy is decreased. This
reduction in internal energy is transformed into mechanical energy in the form of an acceleration
of the particles of vapor. The transformation that occurs provides a large amount of available
work energy.
The essential parts of all steam turbines consist of nozzles or jets through which the steam
can flow and expand. Thus, the temperature drops, and kinetic energy is gained. In addition,
there are blades, on which high pressure steam is exerted. Stationary blades shift the steam onto
rotating blades, which provide power. Also, turbines are equipped with wheels or drums where
the blades are mounted. A shaft for these wheels or drums is also a basic component, as well as
an outer casing that confines the steam to the area of the turbine proper. In order to efficiently
use this contraption, it is necessary to have a number of stages. In each of these stages, a small
amount of thermal energy is converted to kinetic energy. If the entire conversion of energy took
place at once, the rotative speed of the turbine wheel would be excessively high.
Steam turbines are really quite simple machines that have only one major moving part, the
rotor. However, auxiliary equipment is necessary for their operation. Journal bearings support
the shaft, and an oil system provides lubrication to these bearings. A special seal system prevents
steam from leaking out or outside air from leaking in. A modern multistage steam turbine is
inherently high in expansion efficiency, because of the ability to recover losses of one stage
downstream. This is done through the process of reheating.
Steam turbines are still in heavy use today, providing power to ships as well as many other
things. They are used in the generation of nuclear power and they can operate with fuel-fired
boilers for power generation. In factories, industrial units are used to power machines, pumps,
compressors, and electrical generators.
f:\12000 essays\sciences (985)\Physics\Stratospheric Observatory For Infrared Astronomy.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Stratospheric Observatory For Infrared Astronomy
The Stratospheric Observatory For Infrared Astronomy (SOFIA) will be a 2.5 meter, optical/infrared/sub-millimeter telescope mounted in a Boeing 747, to be used for many advanced astronomical observations performed at stratospheric altitudes. The Observatory will accommodate the installation of different focal-plane instruments, with in-flight accessibility, provided by investigators selected from the international science community. The objective is for the Observatory to have an operational lifetime in excess of 20 years.
The SOFIA project is in the early full-scale stage. The start of detailed system design is anticipated in the Fall of 1996. The German Space Agency (DARA) is a partner with NASA in the SOFIA project. DARA will provide the telescope and NASA will provide the rest of the facility including the 747 aircraft, aircraft modifications, on-board mission control system, ground
facilities and support equipment, overall management, system integration and operations.
The SOFIA project is currently moving forward with evaluation of proposals for prime contracts for the U.S. and German portions of the program. Final approval for program implementation has been received from the U.S. Congress and NASA management. The observatory will begin flight operations by the year 2001.
f:\12000 essays\sciences (985)\Physics\Stress.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Everyone has stress, and we all have different stressors. Each person has their own way of
coping with stress. Some ignore their problems while others face them head on. There are four
types of stressors and we all experience them at some point in our lives.
One of these stressors is hassles. Hassles are a part of everyday life, but if they aren't
coped with, they can cause major problems. One hassle in my life is being sick all of the time. Lately, I have had a lot of colds and flus. Coughing, sneezing, and missing school can get really old. It is a hassle to blow my nose and take my pills all of the time. My being sick is a big hassle, but it is not really a serious stressor. Hassles can cause quite a bit of stress, but
they are nothing compared to a catastrophe.
Catastrophes are unpredictable events that can change your life permanently. The biggest
catastrophe in my life was when my best friend, Dre, died. It was hard for me because I knew
what was happening to him but there was nothing I could do about it. My parents didn't know
about him so I couldn't turn to them. I couldn't turn to my boyfriend because he wouldn't
understand or care. Dre was the one person I could always turn to, and when I lost him my life
changed forever. The death of a loved one is usually considered a life change, but in my case it
was much more drastic than that.
My life change that has caused me a lot of stress would be my problems with my parents.
As I've gotten older we've been arguing about who I am and how I am supposed to act. It puts a
lot of stress on me because I want their respect but I still want to be my own person. This stress is
pretty hard on me because I see my parents a lot and the stressor is always around. It doesn't
worry me too bad because I know I'll be out of the house in a year. The final type of stress is a
societal stressor. This is a stress that society puts on you. My societal stressor is my looks. Society
puts a lot of pressure on teenage girls to be slender and fit. I'm overweight and it really bothers me.
Comments from my family and peers make it hard for me to be comfortable with how I look. I
don't like being out of shape and I can't stand to be seen in shorts or a swimming suit. It bothers
me a lot because it affects how I feel about myself. I think about it all of the time, so right now, it
is my biggest stressor.
How bad stressors affect you depends on how you interpret them. On the "Type A or B"
questionnaire I turned out to be a type A personality, someone who is hostile and easily stressed. The results were right because I am easily stressed. Still, the questions were vague and the test was inconclusive. Take the question, "Do you try to find more efficient ways of doing things?" If you answered yes, it was supposed to mean that you were a type A personality. Wanting things to work better does not mean you are stressed; it means you are intelligent and responsible. The question, "Do late people make you mad?" was also off. People being late makes everyone upset, whether they are easily stressed or not.
The Locus of Control test was also inconclusive. It was vague and stereotyping. The test
asked if you find it useless to try to get your own way at home. Your answer could be based on
your parents and not whether or not you believe in fate. I find it useless to try to get my way at
home because my parents don't listen to me or respect me, not because I believe in fate. The test
said I thought my life was run by others and fate. That is wrong because I believe that my actions,
the actions of others and fate all determine my life.
I don't handle my stress very constructively. I waver between ignoring it or dwelling on it.
When something bad happens, like a fight with my boyfriend, I usually ignore it until it starts to
bother me too much and then I dwell on it until it becomes some major issue. It's a problem I am
trying to solve because I know it contributes to my health and mental problems.
Your resources are the ways you handle your stress. I handle my societal stress with many
different resources.
The first way I handle my weight is by using my health rider exercise machine. I try to ride
it whenever I can find the time. I'm also trying to eat less by skipping snacks and cutting my
portions. I am also trying to eat healthier foods. The advantages of exercising are that I have
more energy, I burn calories and I tone my muscles, but I don't do it enough because I never seem
to have enough time. Eating less deprives me of the vitamins and nutrients I need but it also
eliminates calories. Eating healthier makes me healthier and have more energy, but I find it hard
when the only thing available is junk food.
I have other resources available to handle the stress of my weight that I am not currently
using. I could go on Jenny Craig, get liposuction, or have some kind of surgery. All of these
options have immediate effects but they are expensive and can be dangerous and the effects don't
last long.
A good way to handle stress is to develop a stress management plan. I developed a stress
management plan to help me handle my stress of being overweight. I made a plan of what and
how much I would eat every day and made sure that it provided enough nutrients and vitamins that
I need. I also set aside time to exercise everyday. One way to handle your stress is to change your
stressor. I could do this by avoiding thinking about it, not letting people's comments bother me,
and I could also see a counselor. You can change how you interpret your stressor you can make
better use of your recourses.
Stress is a normal part of life. Everyone experiences stress and everyone has their own way of
handling it. Talking and learning about stress teach you how to manage and cope with it. Stress
can either destroy you or make you who you are. I chose to let stress make me into a stronger
person. As long as I keep managing my stress, I should be okay.
f:\12000 essays\sciences (985)\Physics\Summary of Chapters 15 & 16.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
15.1 Electric Charge and the Electrical Structure of Matter
Fundamental Law of Electric Charge:
Opposite electric charges attract each other. Similar electric charges repel each other.
A Simple Model of the Structure of Matter
1. All matter is composed of sub-microscopic particles, called atoms.
2. Electric charges are carried by particles within each atom that are called electrons and protons.
3. Protons are found in a small central region of the atom, called the nucleus. They are small, heavy particles, and each one carries a positive electric charge of a specific magnitude, called the elementary charge.
4. Electrons move about in the vast space around this central nucleus. They are small, very light particles, yet each of them carries a negative electric charge equal in magnitude to that of the proton.
5. Atoms are normally electrically neutral, because the number of positive protons in the nucleus is equal to the number of negative electrons moving around the nucleus.
6. Neutrons are small, heavy particles found in the nucleus, and they carry no electric charge.
7. An atom may gain electrons. Then it is no longer neutral, but instead has an excess of electrons and, therefore, a net negative charge. Such an atom is called a negative ion.
8. An atom may also lose electrons. As a result, it will have a deficit of electrons and, therefore, a net positive charge. Such an atom is called a positive ion.
All electric charges in solids result from either an excess or a deficit of electrons.
Conductor- solids in which electrons are able to move easily from one atom to another.
Insulator- solids in which the electrons are not free to move about easily from one atom to another.
Gases and liquids can be either conductors or insulators.
15.2 Transfer of Electric Charge
Induced Charge Separation
The positive charges on a solid conductor are fixed and cannot move. Some negative electrons are quite free to move about from atom to atom.
Charging by Contact
An object charged by contact has the same sign as the charging rod.
Charging by Induction
An object that is charged by induction has a charge opposite to that of the charging rod.
15.3 Electric Forces - Coulomb's Law
The magnitude of the electric force between two charged objects is directly proportional to the product of their charges and inversely proportional to the square of the distance between their centers.
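A quick numerical check of Coulomb's law, F = kq1q2/r^2, in Python; the charge values and separation below are made up for illustration:
    # Coulomb's law: F = k * q1 * q2 / r^2
    k = 8.99e9          # Coulomb constant, N*m^2/C^2
    q1 = 2.0e-6         # first charge, C (illustrative value)
    q2 = -3.0e-6        # second charge, C (illustrative value)
    r = 0.05            # separation of the centers, m
    F = k * abs(q1 * q2) / r**2
    print(F)            # about 21.6 N, attractive because the signs differ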
15.4 Electric Fields
Every charged object creates an electric field of force in the space around it. Any other charged object in that space will experience a force of attraction or repulsion from the presence of the electric field
15.5 Electric Potential
1 V is the electric potential at a point in an electric field if 1 J of work is required to move 1 C of charge from infinity to that point.
15.6 The Millikan Experiment - Determination of the Elementary Charge
e = 1.602 x 10^-19 C     q = Ne
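A minimal sketch of how q = Ne is used: any measured charge should be a whole-number multiple of the elementary charge (the drop charge below is an assumed value for illustration):
    e = 1.602e-19             # elementary charge, C
    q_drop = 8.0e-19          # measured charge on an oil drop, C (assumed)
    N = round(q_drop / e)     # number of elementary charges on the drop
    print(N, N * e)           # 5 excess charges, about 8.01e-19 C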
16.1 Natural Magnetism
Magnets
Law of Magnetic Poles:
Opposite magnetic poles attract. Similar magnetic poles repel.
Induced Magnetism: Temporary and Permanent Magnets
The domain model provides a simple explanation for many common properties of induced magnets:
1. A needle can be magnetized by stroking it in one direction with a strong permanent magnet, thereby aligning its domains.
2. When a bar magnet is broken in two, rather than producing separate north and south poles, two smaller magnets are produced, each with its own north and south pole.
3. Some induced magnets made of soft iron demagnetize instantaneously, while others made of hard steel or alloys remain magnetized indefinitely. Impurities in the alloys seem to "lock" the aligned domains in place and prevent them from relaxing to their random orientation.
4. Heating or dropping a magnet can cause it to lose its magnetization, jostling the domains sufficiently to allow them to move and resume their random orientation.
5. A strong magnetic field can reverse the magnetism in bar magnets so that the pole marked S points north. This occurs when the domains reverse their direction of orientation by 180° due to the influence of the strong external field in the opposite direction.
6. Ships' hulls, columns and beams in buildings, and many other steel structures are often found to be magnetized by the combined effects of the Earth's magnetic field and the vibrations created during construction. The effect is similar to stroking a needle with a strong magnet, in that the domains within the metals are caused to line up with the Earth's magnetic field.
16.2 Electromagnetism
Moving electric charges produce a magnetic field.
Magnetic Field of a Straight Conductor
If a conductor is grasped in the right hand, with the thumb pointing in the direction of the current, the curled fingers will point in the direction of the magnetic field lines.
Magnetic Field of a Coil or Solenoid
If a solenoid is grasped in the right hand, with the fingers curled in the direction of the electric current, the thumb will point in the direction of the magnetic field lines in its core, and, hence, towards its north pole.
Using Electromagnets and Solenoids
In addition to their use in the well-known lifting electromagnets found in scrap-metal yards, electromagnets and solenoids are used in bells, switches, relays, and magnetic speakers.
16.3 Magnetic Forces on Conductors and Moving Charges
If the right thumb points in the direction of the current, and the extended fingers point in the direction of the magnetic field, the force will be in the direction in which the right palm would push.
1 T is the magnetic field strength when a conductor with a current of 1 A, and a length of 1 m at an angle of 90° to the magnetic field, experiences a force of 1 N.
F = BIL sin θ
Force on a Moving Charge
F = Bqv sin θ
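A brief numerical check of both force formulas in Python; the field, current, length, and speed values are illustrative, not taken from the chapter:
    import math

    B = 0.50                      # magnetic field strength, T (illustrative)
    I = 2.0                       # current, A
    L = 0.30                      # conductor length, m
    theta = math.radians(90)      # angle between conductor (or velocity) and field

    F_wire = B * I * L * math.sin(theta)     # F = BIL sin(theta)
    print(F_wire)                            # 0.30 N

    q = 1.602e-19                 # charge of a proton, C
    v = 3.0e6                     # speed, m/s (illustrative)
    F_charge = B * q * v * math.sin(theta)   # F = Bqv sin(theta)
    print(F_charge)                          # about 2.4e-13 N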
16.4 Ampere's Law
Along any closed path through a magnetic field, the sum of the products of the component of B parallel to the path segment with the length of the segment is directly proportional to the net electric current passing through the area enclosed by the path.
The Ampere As a Unit of Electric Current
1 A is the current flowing through each of two long, straight, parallel conductors 1 m apart in a vacuum, when the magnetic force between them is 2 x 10^-7 N per metre of length.
1 C is the charge transported by a current of 1 A in a time of 1 s.
1 C = 1A * s
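The 2 x 10^-7 N per metre figure can be checked with the standard parallel-wire result F/L = (mu0 I1 I2)/(2 pi d), which is not quoted in this summary but follows from Ampere's law; a short Python sketch:
    import math

    mu0 = 4 * math.pi * 1e-7      # permeability of free space, T*m/A
    I1 = I2 = 1.0                 # 1 A in each conductor
    d = 1.0                       # separation, m

    force_per_metre = mu0 * I1 * I2 / (2 * math.pi * d)
    print(force_per_metre)        # 2e-07 N/m, as in the definition of the ampere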
16.5 The Mass of the Electron and the Proton
e/m = 1.76 x 10^11 C/kg
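Combining e/m with the elementary charge from 15.6 gives the electron mass; a one-step check in Python:
    e = 1.602e-19                 # elementary charge, C
    e_over_m = 1.76e11            # charge-to-mass ratio of the electron, C/kg
    m_electron = e / e_over_m
    print(m_electron)             # about 9.1e-31 kg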
16.6 The Oscilloscope
A device commonly used in the laboratory to analyze and measure electrical signals is the oscilloscope, whose major component is the cathode ray tube (CRT).
16.8 Electromagnetic Waves
Maxwell's Equations of Electromagnetism:
1. The distribution of an electric charge, in space, is related to the electric field it produces.
2. Magnetic field lines are continuous, and do not have a beginning or an end, whereas electric field lines
begin and end on electric charges.
3. An electric current produces a magnetic field, and by symmetry a changing electric field should also
produce a magnetic field.
4. A changing magnetic field can produce a changing electric field, and hence an induced current and
potential difference in a conductor in the changing field.
Properties and Characteristics of Electromagnetic Waves:
1. Electromagnetic waves are produced whenever electric charges are accelerated. The accelerated charge loses energy that is carried away in the electromagnetic wave.
2. If the electric charge is accelerated in periodic motion, the frequency of the electromagnetic waves produced is exactly equal to the frequency of oscillation of the charge.
3. All electromagnetic waves travel through a vacuum at a common speed c, calculated at 3.0 x 10^8 m/s, and obey the wave equation c = fλ (a short worked example follows this list).
4. Electromagnetic waves consist of oscillating electric and magnetic fields in a constant phase relation, perpendicular to each other, and both at 90° to the direction of propagation of the wave.
5. Electromagnetic waves exhibit the properties of interference, diffraction, polarization, and refraction, and can carry linear and angular momentum. Their intensity is proportional to the square of the magnitude of the electric field amplitude, and to the square of the frequency.
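As noted in point 3, a quick use of c = fλ in Python (the FM radio frequency is just an example):
    c = 3.0e8                     # speed of light, m/s
    f = 100e6                     # an FM radio frequency, Hz (illustrative)
    wavelength = c / f            # c = f * lambda, so lambda = c / f
    print(wavelength)             # 3.0 m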
The Electromagnetic Spectrum
There is a broad range of frequencies of electromagnetic waves, called the "electromagnetic spectrum", all having the same basic characteristics that Maxwell had predicted.
16.9 Applications of Electromagnetic Waves
There are many applications of electromagnetic waves in the 20th century; some of the most significant ones are described below.
A. Radio and Television Communications
Marconi first recognized the potential for transmitting information over long distances using electromagnetic waves without any direct connection by wires.
Transmission
Sound waves are detected by a microphone and converted into a weak audio signal, containing frequencies in the range 20 Hz to 20 000 Hz. This signal is strengthened by an AF amplifier, then passes through a modulator, where it is combined with an RF carrier. This modulated signal, of either type, is then further amplified by an RF amplifier and supplied to an antenna, where the complex mixture of frequencies and amplitudes is sent out in the form of electromagnetic waves.
Reception
The receiver must first select the carrier frequency corresponding to the desired station. This can be accomplished by means of a resonant tuning circuit that contains an inductance (L) and a capacitance (C). The selected RF signal is then amplified and sent into a demodulator. The AF signal is then separated, amplified again, and sent to the speaker for conversion into sound waves.
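The station selection relies on the resonant frequency of the LC circuit, f = 1/(2 pi sqrt(LC)); this formula is not given in the summary, and the component values below are assumed for illustration:
    import math

    L_henry = 200e-6              # inductance, H (assumed value)
    C_farad = 100e-12             # capacitance, F (assumed value)

    f_resonant = 1 / (2 * math.pi * math.sqrt(L_henry * C_farad))
    print(f_resonant)             # about 1.1e6 Hz, i.e. in the AM broadcast band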
C. Blackbody Radiation
As the temperature of a solid increases, the radiation it emits is of higher and higher frequency.
Several key points about black body radiation:
· At a given temperature, a spectrum of different wavelengths is emitted, of varying intensity, but there is a definite intensity maximum at one particular wavelength
· As the temperature increases, the intensity maximum shifts to a shorter wavelength.
D. Gamma Radiation
There are three distinct types of radiation emitted by radioactive materials:
1) a type of particle similar to high-speed helium nuclei, with poor penetrating power, called alpha particles.
2) a type of particle similar to high speed electrons, with the ability to penetrate up to several millimeters of aluminum, called beta particles.
3) a form of radiation, similar to very high energy X-rays, with the ability to penetrate even several centimetres of lead, called gamma rays.
f:\12000 essays\sciences (985)\Physics\Superconductivity.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTRODUCTION
We've all heard about superconductivity, but do we all know what it
is, how it works, and what its uses are? To start talking about
superconductivity, we must first understand how "normal" conductivity
works. This will make it much easier to understand how the "super" part
functions. In the following paragraphs, I will explain how superconductivity
works, some of the current problems and some examples of its uses.
CONDUCTIVITY
Conductivity is the ability of a substance to carry electricity. Some
substances like copper, aluminium, silver and gold do it very well. They are
called conductors. Others conduct electricity partially and they are called
semi-conductors. The concept of electric transmission is very simple to
understand. The wire that conducts the electric current is made of atoms
which have equal numbers of protons and electrons making the atoms
electrically neutral. If this balance is disturbed by gain or loss of electrons,
the atoms will become electrically charged and are called ions. Electrons
occupy energy states. Each level requires a certain amount of energy. For an
electron to move to a higher level, it will require the right amount of energy.
Electrons can move between different levels and between different materials
but to do that, they require the right amount of energy and an "empty" slot in
the band they enter. The metallic conductors have a lot of these slots and
this is where the free electrons will head when voltage (energy) is applied. A
simpler way to look at this is to think of atoms aligned in a straight line (wire).
If we add an electron to the first atom of the line, that atom would have an
excess of electrons, so it releases another electron which will go to the
second atom and the process repeats again and again until an electron pops
out from the end of the wire. We can then say that conduction of an electrical
current is simply electrons moving from one empty slot to another in the
atoms' outer shells.
The problem with these conductors is the fact that they do not let all
the current get through. Whenever an electric current flows, it encounters
some resistance, which changes the electrical energy into heat. This is what
causes the wires to heat up. The conductors themselves become like a
resistance, but an unwanted one. This explains why only 95% of the power
generated by an AC generator reaches consumers. The rest is converted
into useless heat along the way. The atoms of the conducting wire form a vibrating
framework called the lattice. The higher the temperature, the more the lattice shakes,
making it harder for the electrons to travel through that wire. It becomes like
a jungle full of obstacles. Some of the electrons will bump into the vibrating
atoms and impurities, fly off in all directions, and lose energy in the form of
heat. This is known as friction. This is where superconductivity comes into
work. Inside a superconductor, the lattice and the impurities are still there,
but their state is much different from that of an ordinary conductor.
SUPERCONDUCTIVITY (Theory / history)
Superconductivity was discovered in 1911 by Heike Kamerlingh
Onnes, a Dutch physicist. It is the ability to conduct electricity without
resistance and without loss. At that time, it took liquid helium to get extremely
low temperatures to make a substance superconduct, around 4 kelvins. That
wasn't very far from absolute zero (the theoretical temperature at which the
atoms and molecules of a substance lose all of their frantic heat-dependent
energy and at which all resistance stops short). Kelvin believed that electrons
travelling in a conductor would come to a complete stop as the temperature
got close to absolute zero. But others were not so sure. Kelvin was wrong.
The colder it gets, the less the lattice shakes, making it easier for electrons
to get through. There's one theory that explains best what happens in a
superconducting wire: When a conductor is cooled to super low
temperatures, the electrons travelling inside it would join up in some way and
move as a team. The problem with this notion was that electrons carry
negative charges and like charges repel. This repulsion would prevent the
electrons from forming their team. The answer to that was phonons. It is
believed that packets of sound waves (phonons) that are emitted by the
vibrating lattice overcome the electrons' natural repulsion, making it possible
for them to travel as a team. It's as if they were all holding hands. If
one of them falls in a hole or bumps into something, the preceding electron
would pull it and the following one would push. There was no chance of
getting lost. Since the lattice was cooled, there was less vibration making it
easier for the paired electrons to go through.
NEW MATERIAL
That theory worked well for the conventional, metallic, low-temperature
superconducting materials. But later on, new materials were discovered that
superconducted at temperatures never before dreamed possible: ceramics.
What was believed to be an insulator became a
superconductor. The latest ceramic material discovered superconducts at
125 Kelvin. This is still far away from room temperature but now, liquid
nitrogen could be used. It is much cheaper than the rare, expensive liquid
Helium. Scientists still don't know how the new superconductivity works.
Some scientists have suggested that the new ceramics are new kinds of
metals that carry electrical charges, not via electrons, but through other
charged particles.
PROBLEMS / SOLUTIONS
Over time, scientists have succeeded in increasing the
transition temperature, which is the temperature required by a material to
superconduct. Although they have reached temperatures much higher than
4 K, it is still difficult to use superconductors in industry because these
temperatures are still well below room temperature. Another problem is the fact that the new ceramic
conductors are too fragile. They cannot be bent, twisted, stretched and
machined. This makes them really useless. Scientists are attempting to find
a solution to that by trying to develop composite wires. This means that the
superconducting material would be covered by a coating of copper. If the
ceramic loses its superconductivity, the copper would take over until the
superconductor bounced back. The old superconductors have no problem
with being flexible, but the very low temperatures they require remain a
problem. One good thing about ceramics is the fact that they generate
extremely high magnetic fields. The old superconductors used to fail under low
magnetic fields, but the new ones seem to do well even with extremely high
magnetic fields applied to them.
POSSIBLE USES
The characteristics of a superconductor (low resistance and strong
magnetic fields) seemed to have many uses. Highly efficient power
generators; superpowerful magnets; computers that process data in a flash;
supersensitive electronic devices for geophysical exploration and military
surveillance; economic energy-storage units; memory devices like
centimetre-long video tapes with super conducting memory loops; high
definition satellite television; highly accurate medical diagnostic equipment;
smaller electric motors for ship propulsion; magnetically levitated trains; more
efficient particle accelerators; fusion reactors that would generate cheap,
clean power; and even electromagnetic launch vehicles and magnetic tunnels
that could accelerate spacecraft to escape velocity.
THE MAGNETICALLY LEVITATED TRAIN
In my research, I had the chance to learn how two of these applications
work: the magnetically levitated train and magnetically propelled ships.
First, the magnetically levitated train, a fairly simple but brilliant
concept. That train can reach great speeds since it has no friction with its
track. The guideway has thousands of electromagnets for levitation set in the
floor along the way. More electromagnets for propulsion are set on the sides
of the U-shaped track. The superconducting magnets on the train have the
same polarity as the electromagnets on the track, so they push against each
other and make the train float about 4 inches above ground. The interesting
concept comes with propulsion. The operator sends an AC current through
the electromagnets on the sides and can control the speed of the train by
changing the frequency of the pulses. Suppose that the positive peak
reaches the first electromagnet on the side of the track. That magnet will
push the magnet on the train, making the train move forward. When the negative peak
reaches that same magnet, the magnet on the train would have moved
forward so it will be pushed by that same magnet on the track and pulled by
the following electromagnet on the track, which now has the positive voltage
across it. So the first would be pushing and the second would be pulling. It
takes some time to clearly understand what is going on but it becomes so
obvious afterwards. It's as if the train was "surfing" on waves of voltage.
THE MAGSHIP
Another interesting application is what is referred to as the magship.
This ship has no engine, no propellers and no rudder. It has a unique power
source, which is electromagnetism. The generator on the boat creates a
current which travels between electrodes that sit underwater on
each side of the ship. This makes the water electrically charged. This only
works in salt water because pure water would not conduct the current. The
magnets which are located on the bottom of the ship would produce a
magnetic field which will push the water away making the ship move forward.
There are a lot of problems related to that. The magnetic field could attract
metallic objects and even other ships causing many accidents.
CONCLUSION
As time goes by, the transition temperature, the critical field (maximum
magnetic field intensity that a superconductor can support before failing),
and the current capacity are all improving slowly. But at least
these improvements show that we are moving in the right direction. A lot of people are getting
interested in that field since it promises a lot for the future.
f:\12000 essays\sciences (985)\Physics\Surface Tension.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Surface Tension
My problem was to find out how to test or measure surface tension. I think
some of the force in surface tension comes from cohesion and gravity. Surface tension is
the condition existing at the free surface of a liquid, resembling the properties of an
elastic skin under tension. The tension is the result of intermolecular forces exerting an
unbalanced inward pull on the individual surface molecules; this is reflected in the
considerable curvature at those edges where the liquid is in contact with the wall of a
vessel. Because of this property, certain insects can stand on the surface of water. A razor
blade can also be supported by the surface tension of water. The razor blade is not
floating: if pushed through the surface, it sinks through the water. More specifically, the
tension is the force per unit length of any straight line on the liquid surface that the
surface layers on the opposite sides of the line exert upon each other. The tendency of
any liquid surface is to become as small as possible as a result of this tension, as in the
case of mercury, which forms an almost round ball when a small quantity is placed on a
horizontal surface. The near-perfect spherical shape of a soap bubble, which is the result
of the distribution of tension on the thin film of soap, is another example of this force;
surface tension alone can support a needle placed horizontally on a water surface.
Surface tension depends mainly upon the forces of attraction between the particles
within the given liquid and also upon the gas, solid, or liquid in contact with it.
The molecules in a drop of water, for example, attract each other weakly. Water
molecules well inside the drop may be thought of as being attracted equally in all
directions by the surrounding molecules. However, if surface molecules could be
displaced slightly outward from the surface, they would be attracted back by the nearby
molecules. The energy responsible for the phenomenon of surface tension may be
thought of as approximately equivalent to the work or energy required to remove the
surface layer of molecules in a unit area. In comparison, organic liquids, such as benzene
and alcohols, have lower surface tensions, whereas mercury has a higher surface
tension. An increase in temperature lowers the net force of attraction among molecules
and hence decreases surface tension.
Surface tension is also viewed as the result of forces acting in the plane of the
surface and tending to minimize its area. On this basis, surface tension is often expressed
as the amount of force exerted in the surface perpendicular to a line of unit length. The unit
then is newtons per metre, which is equivalent to joules per square metre.
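Since surface tension is a force per unit length, the pull of a water surface on a straight wire can be estimated directly; the wire length below is assumed, and the value of gamma for water at room temperature is the standard one:
    gamma = 0.073                 # surface tension of water at room temperature, N/m
    L = 0.05                      # length of the wire, m (assumed)

    # the surface film pulls on both sides of the wire, hence the factor of 2
    F = 2 * gamma * L
    print(F)                      # about 0.0073 N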
Surface tension is important at zero gravity, as in space flight: Liquids cannot be
stored in open containers because they run up the vessel walls.
Cohesion is the phenomenon of intermolecular forces holding particles of a substance
together. Cohesion in liquids is reflected in the surface tension caused by the unbalanced
inward pull on the surface molecules, and also in the transformation of a liquid into a
solid state when the molecules are brought sufficiently close together. Cohesion in solids
depends on the pattern of distribution of atoms, molecules, and ions, which in turn
depends on the state of equilibrium (or lack of it) of the atomic particles. In many organic
compounds, which form molecular crystals, for example, the atoms are bound strongly
into molecules, but the molecules are bound weakly to each other.
f:\12000 essays\sciences (985)\Physics\Telephones.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The telephone itself is a rather simple appliance. A microphone, called the transmitter, and an earphone, called the receiver, are contained in the handset. The microphone converts speech into its direct electrical analog, which is transmitted as an electrical signal; the earphone converts received electrical signals back to sound. The switch hook determines whether current flows to the telephone, thereby signaling the central office that the telephone is in use. The ringer responds to a signal sent by the central office that causes the telephone to ring. As simple a device as the telephone is, it had a mighty big impact on society during the 30's, because it was during the 30's that telephone service became economically feasible and reliable.
Men and women alike were captivated by the intrigue and fascination of talking to relatives and friends, miles and miles away. Not only did the telephone cater to individual needs, but it provided a very useful industrial service. It allowed commercial companies to expand their horizons far more easily than ever before. It became possible to set up meetings and discuss business matters with partners thousands of miles away. Companies that possessed a telephone had an enormous advantage over the rest. And in a time as economically troubled as the 30's depression, everyone was looking for a competitive edge.
The telephone wasn't invented in the thirties, nor was the first transatlantic line built then, but the thirties represent a time in history when the world was changing incredibly fast, and much of that change was made possible by the telephone. Without the telephone, progress would have been much slower and people might not have been so receptive to change. We owe a great deal to Alexander Graham Bell, the inventor of the telephone, for his invention has served mankind well and will continue to offer society a valuable service for years to come.
f:\12000 essays\sciences (985)\Physics\THE BIG BANG THEORY!.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It has always been a mystery how the universe began, and
whether and when it will end. Astronomers construct hypotheses called
cosmological models that try to find the answer. There are two
types of models: Big Bang and Steady State. However, through
many observational evidences, the Big Bang theory can best
explain the creation of the universe.
The Big Bang model postulates that about 15 to 20 billion
years ago, the universe violently exploded into being, in an
event called the Big Bang. Before the Big Bang, all of the
matter and radiation of our present universe were packed together
in the primeval fireball--an extremely hot dense state from which
the universe rapidly expanded.1 The Big Bang was the start of
time and space. The matter and radiation of that early stage
rapidly expanded and cooled. Several million years later, it
condensed into galaxies. The universe has continued to expand,
and the galaxies have continued moving away from each other ever
since. Today the universe is still expanding, as astronomers
have observed.
The Steady State model says that the universe does not
evolve or change in time. There was no beginning in the past,
nor will there be change in the future. This model assumes the
perfect cosmological principle. This principle says that the
universe is the same everywhere on the large scale, at all
times.2 It maintains the same average density of matter forever.
There are observational evidences found that can prove the
Big Bang model is more reasonable than the Steady State model.
First, there are the redshifts of distant galaxies. Redshift is a Doppler
effect: if a galaxy is moving away, its observed spectral
lines will be shifted toward the red end of the spectrum.
The faster the galaxy moves, the more shift it has. If the
galaxy is moving closer, the spectral line will show a blue
shift. If the galaxy is not moving, there is no shift at all.
However, as astronomers observed, the more distant a galaxy is
from Earth, the more redshift it shows on the spectrum.
This means the further a galaxy is, the faster it moves.
Therefore, the universe is expanding, and the Big Bang model
seems more reasonable than the Steady State model.
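A minimal sketch of the redshift argument in Python: for small redshifts the recession velocity is roughly v = cz, and Hubble's law (not stated in the essay; the value of the Hubble constant below is an assumed, typical figure) relates that velocity to distance:
    c = 3.0e5                     # speed of light, km/s
    H0 = 70.0                     # Hubble constant, km/s per Mpc (assumed value)

    z = 0.01                      # observed redshift of a galaxy (illustrative)
    v = c * z                     # recession velocity for small z, km/s
    d = v / H0                    # distance implied by Hubble's law, Mpc

    print(v, d)                   # 3000 km/s, roughly 43 Mpc away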
The second observational evidence is the radiation produced
by the Big Bang. The Big Bang model predicts that the universe
should still be filled with a small remnant of radiation left
over from the original violent explosion of the primeval fireball
in the past. The primeval fireball would have sent strong
shortwave radiation in all directions into space. In time, that
radiation would spread out, cool, and fill the expanding universe
uniformly. By now it would strike Earth as microwave radiation.
In 1965 physicists Arno Penzias and Robert Wilson detected
microwave radiation coming equally from all directions in the
sky, day and night, all year.3 And so it appears that
astronomers have detected the fireball radiation that was
produced by the Big Bang. This casts serious doubt on the Steady
State model. The Steady State could not explain the existence of
this radiation, so the model cannot best explain the beginning of
the universe.
Since the Big Bang model is the better model, the existence
and the future of the universe can also be explained. Around 15
to 20 billion years ago, time began. The points that were to
become the universe exploded in the primeval fireball called the
Big Bang. The exact nature of this explosion may never be known.
However, recent theoretical breakthroughs, based on the
principles of quantum theory, have suggested that space, and the
matter within it, masks an infinitesimal realm of utter chaos,
where events happen randomly, in a state called quantum
weirdness.4
Before the universe began, this chaos was all there was. At
some time, a portion of this randomness happened to form a
bubble, with a temperature in excess of 10 to the power of 34
degrees Kelvin. Being that hot, naturally it expanded. For an
extremely brief period, billionths of billionths of a
second, it inflated. At the end of the period of inflation, the
universe may have had a diameter of a few centimetres. The
temperature had cooled enough for particles of matter and
antimatter to form, and they instantly destroyed each other,
producing fire and a thin haze of matter, apparently because
slightly more matter than antimatter was formed.5 The fireball,
and the smoke of its burning, was the universe at an age of
a trillionth of a second.
The temperature of the expanding fireball dropped rapidly,
cooling to a few billion degrees in a few minutes. Matter
continued to condense out of energy, first protons and neutrons,
then electrons, and finally neutrinos. After about an hour, the
temperature had dropped below a billion degrees, and protons and
neutrons combined and formed hydrogen, deuterium, and helium. In a
billion years, this cloud of energy, atoms, and neutrinos had
cooled enough for galaxies to form. The expanding cloud cooled
still further until today, its temperature is a couple of degrees
above absolute zero.
In the future, the universe may end up in two possible
situations. From the initial Big Bang, the universe attained a
speed of expansion. If that speed is greater than the universe's
own escape velocity, then the universe will not stop its
expansion. Such a universe is said to be open. If the velocity
of expansion is slower than the escape velocity, the universe
will eventually reach the limit of its outward thrust, just like
a ball thrown in the air comes to the top of its arc, slows,
stops, and starts to fall. The crash of the long fall may be the
Big Bang to the beginning of another universe, as the fireball
formed at the end of the contraction leaps outward in another
great expansion.6 Such a universe is said to be closed, and
pulsating.
If the universe has achieved escape velocity, it will
continue to expand forever. The stars will redden and die, the
universe will be like a limitless empty haze, expanding
infinitely into the darkness. This space will become even
emptier, as the fundamental particles of matter age, and decay
through time. As the years stretch on into infinity, nothing
will remain. A few remaining particles, such as positrons and
electrons, will be orbiting each other at distances of hundreds of
astronomical units.7 These particles will spiral slowly toward
each other until touching, and they will vanish in the last flash
of light. After all, the Big Bang model is only an assumption.
No one knows for sure exactly how the universe began or how
it will end. However, the Big Bang model is the most logical and
reasonable theory to explain the universe in modern science.
ENDNOTES
1. Dinah L. Mache, Astronomy, New York: John Wiley & Sons,
Inc., 1987. p. 128.
2. Ibid., p. 130.
3. Joseph Silk, The Big Bang, New York: W.H. Freeman and
Company, 1989. p. 60.
4. Terry Holt, The Universe Next Door, New York: Charles
Scribner's Sons, 1985. p. 326.
5. Ibid., p. 327.
6. Charles J. Caes, Cosmology, The Search For The Order Of
The Universe, USA: Tab Books Inc., 1986. p. 72.
7. John Gribbin, In Search Of The Big Bang, New York: Bantam
Books, 1986. p. 273.
BIBLIOGRAPHY
Boslough, John. Stephen Hawking's Universe. New York: Cambridge
University Press, 1980.
Caes, J. Charles. Cosmology, The Search For The Order Of The
Universe. USA: Tab Books Inc., 1986.
Gribbin, John. In Search Of The Big Bang. New York: Bantam
Books, 1986.
Holt, Terry. The Universe Next Door. New York: Charles
Scribner's Sons, 1985.
Kaufmann, J. William III. Astronomy: The Structure Of The
Universe. New York: Macmillan Publishing Co., Inc., 1977.
Mache, L. Dinah. Astronomy. New York: John Wiley & Sons, Inc.,
1987.
Silk, Joseph. The Big Bang. New York: W.H. Freeman and Company,
1989.
f:\12000 essays\sciences (985)\Physics\The Chaos Theory.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Where Chaos begins, classical science ends. For as long as physicists have inquired into the laws of nature, they have left unexplored the irregular side of nature, the erratic and discontinuous side that has always puzzled scientists. They did not attempt to understand disorder in the atmosphere, the turbulent sea, the oscillations of the heart and brain, and the fluctuations of wildlife populations. All of these things were taken for granted until, in the 1970's, some American and European scientists began to investigate the randomness of nature.
They were physicists, biologists, chemists and mathematicians but they were all seeking one thing: connections between different kinds of irregularity. "Physiologists found a surprising order in the chaos that develops in the human heart, the prime cause of a sudden, unexplained death. Ecologists explored the rise and fall of gypsy moth populations. Economists dug out old stock price data and tried a new kind of analysis. The insights that emerged led directly into the natural world- the shapes of clouds, the paths of lightning, the microscopic intertwining of blood vessels, the galactic clustering of stars." (Gleick, 1987)
The man most responsible for coming up with the Chaos theory was Mitchell Feigenbaum, who was one of a handful of scientists at Los Alamos, New Mexico when he first started thinking about Chaos. Feigenbaum was a little known scientist from New York, with only one published work to his name. He was working on nothing very important, such as quasi-periodicity, during which he, and only he, had 26-hour days instead of the usual 24. He gave that up because he could not bear to wake up to the setting sun, which happened periodically. He spent most of his time watching clouds from the hiking trails above the laboratory. To him, clouds represented a side of nature that the mainstream of physics had passed by, a side that was fuzzy and detailed, and structured yet unpredictable. He thought about these things quietly, without producing any work.
After he started looking, chaos seemed to be everywhere. A flag snaps back and forth in the wind. A dripping faucet changes from a steady pattern to a random one. A rising column of smoke disappears into random swirls. "Chaos breaks across the lines that separate scientific disciplines. Because it is a science of the global nature of systems, it has brought together thinkers from fields that have been widely separated...Chaos poses problems that defy accepted ways of working in science. It makes strong claims about the universal behavior of complexity. The first Chaos theorists, the scientists who set the discipline in motion, shared certain sensibilities. They had an eye for pattern, especially pattern that appeared on different scales at the same time. They had a taste for randomness and complexity, for jagged edges and sudden leaps. Believers in chaos-- and they sometimes call themselves believers, or converts, or evangelists--speculate about determinism and free will, about evolution, about the nature of conscious intelligence. They feel that they are turning back a trend in science towards reductionism, the analysis of systems in terms of their constituent parts: quarks, chromosomes, or neurons. They believe that they are looking for the whole."(Gleick, 1987)
The Chaos Theory is also called Nonlinear Dynamics, or the Complexity theory. They all mean the same thing, though: a scientific discipline which is based on the study of nonlinear systems. To understand the Complexity theory, people must understand the two words, nonlinear and system, to appreciate the nature of the science. A system can best be defined as the understanding of the relationship between things which interact. For example, a pile of stones is a system which interacts based upon how they are piled. If they are piled out of balance, the interaction results in their movement until they find a condition under which they are in balance. A group of stones which do not touch one another are not a system, because there is no interaction. A system can be modeled, which means another system that supposedly replicates the behavior of the original system can be created. Theoretically, one can take a second group of stones which are the same weight, shape, and density of the first group, pile them in the same way as the first group, and predict that they will fall into a new configuration that is the same as the first group. Or a mathematical representation can be made of the stones through application of Newton's law of gravity, to predict how future piles of the same type - and of different types of stones - will interact. Mathematical modeling is the key, but not the only modeling process used for systems.
The word nonlinear has to do with understanding mathematical models used to describe systems. Before the growth of interest in nonlinear systems, most models were analyzed as though they were linear systems meaning that when the mathematical formulas representing the behavior of the systems were put into a graph form, the results looked like a straight line. Newton used calculus as a mathematical method for showing change in systems within the context of straight lines. And statistics is a process of converting what is usually nonlinear data into a linear format for analysis.
Linear systems are the classic scientific system and have been used for hundreds of years; they are not complex, and they are easy to work with because they are very predictable. For example, you would consider a factory a linear system. If more inventory is added to the factory, or more employees are hired, it would stand to reason that more pieces would be produced by the factory. By changing what goes into a system we should be able to tell what comes out of it. But as any factory manager knows, factories don't actually work that way. If the amount of people, the inventory, or whatever other variable is changed in the factory, you would get widely differing results on a day to day basis from what was predicted. That is because a factory is a complex nonlinear system, like most systems found in nature.
When most natural systems are modeled, their mathematical representations do not produce straight lines on graphs, and the system outputs are extremely difficult to predict. Before the chaos theory was developed, most scientists studied nature and other random things using linear systems. Starting with the work of Sir Isaac Newton, physics has provided a process for modeling nature, and the mathematical equations associated with it have all been linear. When a study resulted in strange answers, when a prediction usually came true but not this one time, the failure was blamed on experimental error or noise.
Now, with the advent of the Chaos theory and research into complex systems theory, we know that the "noise" actually was important information about the experiment. When noise is added to the graph results, the results are no longer a straight line, and are not predictable. This noise is what was originally referred to as the chaos in the experiment. Since studying this noise, this chaos, was one of the first concerns of those studying complex systems theory, Gleick originally named the discipline Chaos Theory.
Another word that is vital to understanding the Complexity theory is complex. What makes us determine which system is more complex than another? There are many discussions of this question. In Exploring Complexity, Nobel Laureate Ilya Prigogine explains that the complexity of the system is defined by the complexity of the model necessary to effectively predict the behavior of the system. The more the model must look like the actual system to predict system results, the more complex the system is considered to be. The most complex system example is the weather, which, as demonstrated by Edward Lorenz, can only be effectively modeled with an exact duplicate of itself. One example of a simple system to model is to calculate the time it takes for a train to go from city A to city B if it travels at a given speed. To predict the time we need only to know the speed that the train is traveling (in mph) and the distance (in miles). The simple formula would be time = distance / speed (miles divided by miles per hour), which is a simple system.
But the pile of stones, which appears to be a simple system, is actually very complex. If we want to predict which stone will end up at which place in the pile, then we would have to know very detailed information about the stones, including the weight, shape, and starting location of each stone, to make an accurate prediction. If there is a minor difference between the shape of one stone in the model and the shape of the original stone, the modeled results will be very different. The system is very complex, thus making prediction very difficult.
The generator of unpredictability in complex systems is what Lorenz calls "sensitivity to initial conditions" or "the butterfly effect." The concept means that with a complex, nonlinear system, a tiny difference in starting position can lead to greatly varied results. For example, in a difficult pool shot, a tiny error in aim causes a slight change in the ball's path. However, with each ball it collides with, the ball strays farther and farther from the intended path. Lorenz once said that "if a butterfly is flapping its wings in Argentina and we cannot take that action into account in our weather prediction, then we will fail to predict a thunderstorm over our home town two weeks from now because of this dynamic."(Lorenz, 1987)
The general rule for complex systems is that one cannot create a model that will accurately predict outcomes, but one can create models that simulate the processes that the system will go through to create the models. This realization is impacting many activities in business and other industries. For instance, it raises considerable questions relating to the real value of creating organizational visions and mission statements as currently practiced.
Like physics, the Chaos theory provides a foundation for the study of all other scientific disciplines. It is a variety of methods for incorporating nonlinear dynamics into the study of science. Attempts to change the discipline and make it a separate form of science have been strongly resisted. The work represents a reunification of the sciences for many in the scientific community.
One of Lorenz's best accomplishments supporting the Chaos Theory was the Lorenz Attractor. The Lorenz Attractor is based on three differential equations, three constants, and three initial conditions. The attractor represents the behavior of gas at any given time, and its condition at any given time depends upon its condition at a previous time. If the initial conditions are changed by even a tiny amount, checking the attractor at a later time will show numbers totally different. This is because small differences will reproduce themselves recursively until numbers are entirely unlike the original system with the original initial conditions. But, the plot of the attractor, or the overall behavior of the system will be the same.
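A minimal Python sketch of the Lorenz system, using the standard textbook parameter values; the step size, run length, and starting points are assumed for illustration, and the only point is to show how two nearly identical starting points drift apart:
    # Lorenz equations integrated with a simple Euler step
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    dt = 0.001

    def step(x, y, z):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    a = (1.0, 1.0, 1.0)           # first trajectory
    b = (1.0, 1.0, 1.000001)      # second trajectory, differing by one millionth

    for _ in range(30000):        # integrate for 30 time units
        a = step(*a)
        b = step(*b)

    print(a)
    print(b)                      # the two states no longer resemble each other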
A very small cause which escapes our notice determines a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. If we knew exactly the laws of nature and the situation of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment. But even if it were the case that the natural laws had any secret for us, we could still know the situation approximately. If that enabled us to predict the succeeding situation with the same approximation, that is all we require, and we should say that the phenomenon has been predicted, that it is governed by the laws. But it is not always so; it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible..." (Poincare, 1973)
The Complexity theory has developed from mathematics, biology, and chemistry, but mostly from physics and particularly thermodynamics, the study of turbulence leading to the understanding of self-organizing systems and system states (equilibrium, near equilibrium, the edge of chaos, and chaos). "The concept of entropy is actually the physicists application of the concept of evolution to physical systems. The greater the entropy of a system, the more highly evolved is the system."( Prigogine, 1974) The Complexity theory is also having a major impact on quantum physics and attempts to reconcile the chaos of quantum physics with the predictability of Newton's universe.
With complexity theory, the distinctions between the different disciplines of sciences are disappearing. For example, fractal research is now used for biological studies. But there is a question as to whether the current research and academic funding will support this move to interdisciplinary research.
Complexity is already affecting many aspects of our lives and has a great impact on all sciences. It is answering previously unsolvable problems in cosmology and quantum mechanics. The understanding of heart arrhythmias and brain functioning has been revolutionized by complexity research. There have been a number of other things developed from complexity research, such as SimLife, SimAnt, etc., which are a series of computer programs. Fractal mathematics is critical to improved information compression and encryption schemes needed for computer networking and telecommunications. Genetic algorithms are being applied to economic research and stock predictions. Engineering applications range from factory scheduling to product design, with pioneering work being done at places like DuPont and Deere & Co.
Another element of nonlinear dynamics, fractals, has appeared everywhere, most recently in graphic applications like the successful Fractal Design Painter series of products. Fractal image compression techniques are still being researched, but promise such amazing results as 600:1 graphic compression ratios. The movie special effects industry would have much less realistic clouds, rocks, and shadows without fractal graphic technology.
Though it is one of the youngest sciences, the Chaos Theory holds great promise in the fields of meteorology, physics, mathematics, and just about anything else you can think of.
f:\12000 essays\sciences (985)\Physics\The Nuclear Power Debate.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
27/6 THE NUCLEAR POWER DEBATE
In 1953, nuclear energy was introduced into America as a cheap and efficient energy source, favoured in place of increasingly scarce fossil fuels which caused air pollution. Its initial use was welcomed by the general public, as it was hoped to lower the price of electricity, and utilise nuclear power for its potential as a resource, not a weapon. However, as people became aware of the long term dangers involved in storing nuclear waste, its use was criticised. Two accidents, at Three Mile Island and Chernobyl, demonstrated to the world the enormous risks involved in producing nuclear power.
Nuclear power provides 17% of the world's electricity but coal is the main source, making up 39%. However, fossil fuels, such as coal, require greater quantities to produce the equivalent amount of electricity produced from uranium. The use of nuclear power as opposed to burning fossil fuels has reduced carbon dioxide emissions by 2 billion tonnes per year, minimising the global warming effect on the atmosphere. Carbon dioxide is responsible for half of the man-made gases contributing to the Greenhouse Effect, and has sparked action from the UN Intergovernmental Panel on Climate Change. Their consensus is a concern for the environment in the next century if fossil fuels continue to be used, even at present global levels. The Panel claims that for carbon dioxide to be stabilised to safe levels, a 50-80% reduction in all emissions would be required.
The United Nations has predicted a world population growth from 5.5 billion to 8.5 billion by the year 2025, meaning demand for energy will increase. Nuclear power is the only practical source, in consideration for the environment, cost and efficiency. Coal-fired generation of electricity would increase carbon dioxide emissions, and renewable sources such as solar and hydro, are not suitable for large scale power generation.
Nuclear power is not without its own implications. The process includes disposing of radioactive waste, which poses a threat to the environment and the world if not contained properly and temporarily disposed of with maximum security. In the
thesis, "Nuclear power: an energy future we can't afford", by Peter Kelly from Hamilton College, he wrote,
"...we'd still have to worry about terrorists making bombs out of nuclear waste. Just five pounds of plutonium, a component of nuclear waste, is enough to make a nuclear bomb. Such a bomb could topple the World Trade Centre and kill hundreds of thousands of people...Terrorists may be able to recruit disgruntled scientists..."
Disposing of nuclear waste is extremely controversial, because it takes thousands of years to decompose, and the radiation remains active.
Other than the environmental effects of disposing nuclear waste, the potential of radioactive fallout from a faulty reactor is a dangerous possibility, and the events following the accident at Chernobyl demonstrated the long term destructiveness radiation is capable of. In 1986 at Chernobyl, an unauthorised experiment conducted with the cooling system turned off led to the explosion of one of the reactors. The radioactive fallout spread through the atmosphere, reaching into northern Europe and Great Britain. The Soviets claim 31 people died directly from the accident, while deaths due to radiation are yet to be determined. Radiation sometimes causes genetic mutations in children whose parents were exposed to radiation. A few years ago on the television program '60 Minutes', they presented a story on the after effects of the Chernobyl accident. They revealed horrific shots of mutated embryos preserved in jars, the most disturbing, an embryo named 'Cyclops', because it only had one eye.
While nuclear power is more efficient and environmentally safer in terms of global warming than fossil fuels, it has a destructive potential that cannot be ignored. Electricity is generated from the nuclear fission of uranium-235 or plutonium-239, both elements which are used in nuclear weapons. Radiation, either from waste or fallout from a reactor explosion, can cause detrimental effects, both long and short term, to the environment and society. Precautions must be taken in security, disposal, and generation of nuclear power and its waste, in order for it to be a successful resource and temporary
alternative. At present, renewable energy sources are too expensive and are not suitable for large scale power generation. However, advancing technology may improve on current systems, making them more efficient and suitable for major electricity generation. Peter Kelly concluded his thesis, "...nuclear power should be seen as a way to tide us over to an age of conservation and renewables. Barring an unexpected breakthrough in fusion, the age of nuclear power will end in the foreseeable future."
BIBLIOGRAPHY
1. Microsoft Encarta '95 Microsoft Corporation
1994-95
2. Nuclear power: an energy future we can't afford Peter Kelly
3. World Energy Needs and Nuclear Power Unknown
Nuclear Issues Briefing Paper 11 June 1996
f:\12000 essays\sciences (985)\Physics\The Physics of Scuba Diving.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Physics
Of
Scuba Diving
Swimming with the Fish....
Have you ever wondered what it would be like to swim with the fish and explore the underwater jungle that covers two-thirds of the earth's surface? I have always been interested in water activities: swimming, diving and skiing, and I felt that scuba was for me. My first dive took place while on a family vacation. I came across a dive shop offering introductory dives, which immediately caught my interest. After much convincing of my parents, and with my solemn assurance that I would be careful, I was allowed to participate in a dive. I was ready, or so I thought. The bare basics, such as breathing, were explained and I was literally tossed in. Sounds easy enough, right? Well, WRONG! From the moment I hit the water, my experience was much less than fun. I quickly sank to the bottom into a new world, with unfamiliar dangers. I really wasn't ready for this experience. I was disorientated, causing me to panic, which shortened the length of my dive, not to mention my air supply. Let's just say I would not do that again.
To start exploring the underwater world, one must first master a few skills. Certification is the first step of learning to dive. From qualified professionals one must learn how to use the equipment, safety precautions, and the best places to dive. This paper is designed to help give a general understanding of the sport and the importance that physics plays in it.
Self-contained Underwater Breathing Apparatus, or SCUBA for short, is a hell of a lot of fun. However, there is considerably more to Diving than just putting on a wetsuit and strapping some compressed air onto ones back. As I quickly learned, diving safely requires quite a bit more in terms of time, effort, and preparation. When one goes underwater, a diver is introduced to a new and unfamiliar world, where many dangers exist, but can be avoided with proper lessons and understanding. With this knowledge the water is ours to discover.
The Evolution of Scuba Diving
Divers have penetrated the oceans through the centuries for the purpose of acquiring food, searching for treasure, carrying out military operations, performing scientific research and exploration, and enjoying the aquatic environment. Bachrach (1982) identified five principal periods in the history of diving, all of which are still in use: free (or breath-hold) diving, bell diving, surface-supported or helmet (hard hat) diving, scuba diving, and saturation or atmospheric diving (Ketels, 4).
SCUBA DIVING
The development of self-contained underwater breathing apparatus provided the free moving diver with a portable air supply which, although finite in comparison with the unlimited air supply available to the helmet diver, allowed for mobility. Scuba diving is the most frequently used mode in recreational diving and, in various forms, is also widely used to perform underwater work for military, scientific, and commercial purposes.
There were many steps in the development of a successful self-contained underwater system. In 1808, Friedrich von Drieberg invented a bellows-in-a-box device that was worn on the diver's back and delivered compressed air from the surface. This device, named Triton, did not actually work but served to suggest that compressed air could be used in diving, an idea initially conceived of by Halley in 1716. (Ketels, 9)
In 1865, two French inventors, Rouquayrol and Denayrouse, developed a suit that
they described as "self-contained." In fact, their suit was not self-contained but consisted of a helmet-based, surface-supported system that had an air reservoir carried on the diver's back, sufficient to provide one breathing cycle on demand. The demand valve regulator was used with surface supply largely because tanks of adequate strength were not yet available to handle air at high pressure. This system's demand valve, which was automatically controlled, represented a major breakthrough because it permitted the diver to have a breath of air whenever needed.
The Rouquayrol and Denayrouse apparatus was described with remarkable accuracy in Jules Verne's classic, Twenty Thousand Leagues Under The Sea, which was written in 1869, only 4 years after the inventors had made their device public (Ketels, 10).
Semi-Self-Contained Diving Suit
The demand valve played a critical part in the later development of one form of scuba apparatus. In the 1920's, a French naval officer, Captain Yves Le Prieur, began work on a self-contained air diving apparatus that resulted in 1926 in the award of a patent, shared with his countryman Fernez. This device was a steel cylinder containing compressed air that was worn on the diver's back and had an air hose connected to a mouthpiece. The diver wore a nose clip and air-tight goggles that undoubtedly were protective and an aid to vision but did not permit pressure equalization.
The major problem with Le Prieur's apparatus was the lack of a demand valve, which necessitated a continuous flow (and thus waste) of gas. In 1943, almost 20 years after Fernez and Le Prieur patented their apparatus, two other French inventors, Emile Gagnan and Captain Jacques-Yves Cousteau, demonstrated their "Aqua Lung."
This apparatus used a demand intake valve drawing from two or three cylinders, each containing over 2500 psig. Thus it was that the demand regulator, invented over 70 years earlier by Rouquayrol and Denayrouse and extensively used in aviation, came into use in a self-contained breathing apparatus which did not emit a wasteful flow of air during inhalation (although it continued to lose exhaled gas into the water). This application made possible the development of modern open-circuit air scuba gear (Ketels,11).
In 1939, Dr. Christian Lambertsen began the development of a series of three patented forms of oxygen rebreathing equipment for neutral buoyancy underwater swimming. This became the first self-contained underwater breathing apparatus successfully used by a large number of divers. The Lambertsen Amphibious Respiratory Unit (LARU) formed the basis for the establishment of U.S. military self-contained diving. This apparatus was designated scuba (for self-contained underwater breathing apparatus) by its users. Equivalent self-contained apparatus was used by the military forces of Italy, the United States, and Great Britain during World War II and continues in active use today. (Ketels, 12).
A major development in regard to mobility in diving occurred in France during the 1930's: Commander de Carlieu developed a set of swim fins, the first to be produced since Borelli designed a pair of claw-like fins in 1680. When used with Le Prieur's tanks, goggles, and nose clip, de Carlieu's fins enabled divers to move horizontally through the water like true swimmers, instead of being lowered vertically in a diving bell or in hard-hat gear. The later use of a single-lens face mask, which allowed better visibility as well as pressure equalization, also increased the comfort and depth range of diving equipment (Tillman, 27).
Thus the development of scuba added a major working tool to the systems available to divers. The new mode allowed divers greater freedom of movement and access to greater depths for extended times and required much less burdensome support equipment. Scuba also enriched the world of sport diving by permitting recreational divers to go beyond goggles and breath-hold diving to more extended dives at greater depths.
The Physics of Scuba Diving
Upon entering the underwater world, one notices new and different sensations as one ventures into a realm where everything looks, sounds and feels different than it does above the water. These sensations are part of what makes diving so special.
Understanding why the underwater world is different helps you adapt and become accustomed to the changes. In the following pages I will attempt to explain two factors that greatly affect a diver under water: buoyancy and pressure.
Have you ever wondered why a large steel ocean liner floats, but a small steel nail sinks? The answer is surprisingly simple. The steel hull of the ship is formed in a shape that displaces much water. If the steel used to manufacture the ocean liner were placed in the sea without being shaped into a large hull, it would sink like the nail. The ocean liner demonstrates that whether an object floats depends not only on its weight, but on how much water it displaces (Ascher, 51).
The principle of buoyancy can be simplified this way: an object placed in water is buoyed up by a force equal to the weight of the water it displaces. If an object displaces an amount of water weighing more than its own weight, it will float. If an object displaces an amount of water weighing less than its own weight, it will sink. If an object displaces an amount of water weighing exactly its own weight, it will neither float nor sink, but remain suspended. An object that floats is said to be positively buoyant; one that sinks is negatively buoyant; and one that neither floats nor sinks is neutrally buoyant (Koelzer, 16).
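The arithmetic behind this principle can be illustrated with a short Python sketch. It is not drawn from the cited sources; the densities and the example diver below are assumed, illustrative values:

    # Archimedes' principle: compare an object's weight with the weight
    # of the water it displaces to decide whether it floats or sinks.
    FRESH_WATER_DENSITY = 1000.0   # kg per cubic metre
    SALT_WATER_DENSITY = 1025.0    # kg per cubic metre (typical seawater, assumed)

    def buoyancy_state(object_mass_kg, displaced_volume_m3,
                       water_density=SALT_WATER_DENSITY):
        """Return whether the object floats, sinks, or hovers."""
        displaced_water_mass = water_density * displaced_volume_m3
        if displaced_water_mass > object_mass_kg:
            return "positively buoyant (floats)"
        if displaced_water_mass < object_mass_kg:
            return "negatively buoyant (sinks)"
        return "neutrally buoyant (remains suspended)"

    # An 85 kg diver (with gear) displacing about 0.085 cubic metres of seawater:
    print(buoyancy_state(85.0, 0.085))   # positively buoyant (floats)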
It is important for a diver to learn to use these principles of buoyancy so that he or she can effortlessly maintain position in the water. One must control buoyancy carefully. At the surface, you will want to be positively buoyant so that you can conserve energy while resting or swimming. Under water, you will want to be neutrally buoyant so that you are weightless and can stay off the bottom and avoid crushing or damaging delicate corals and other aquatic life. Neutral buoyancy permits a diver to move freely in all directions (Koelzer, 17).
Buoyancy control is one of the most important skills a diver can master, but it is also one of the easiest. A diver controls his or her buoyancy using lead weight and a buoyancy control device (BCD). The lead weight, which is incorporated into a weight system such as a weight belt, is negatively buoyant. The BCD is a device that can be partially inflated or deflated to control buoyancy (Koelzer, 19).
Another factor that affects the buoyancy of an object is the density of the water. The denser the water, the greater the buoyancy. Salt water (due to its dissolved salts) is denser than fresh water, so you'll be more buoyant in salt water than in fresh water; in fact, when floating motionless at the surface, most divers need to exhale air from their lungs to sink. By exhaling, the volume of the lungs is decreased, less water is displaced, and buoyancy is reduced (Koelzer, 19).
Thus we can see that changing the volume of an object changes its buoyancy. Divers primarily control buoyancy by changing the volume of air in their BCDs.
Body air spaces and water pressure
Although usually not noticeable, air is constantly exerting pressure on us. A simple example is walking against a strong wind: what is actually felt is the force of the air pushing against the body. This demonstrates that air has weight and can exert pressure. One doesn't usually feel the air's pressure because the body is primarily liquid, which distributes the pressure equally throughout. The few air spaces in the body, in the ears, sinuses and lungs, are filled with air equal in pressure to the external air. However, when the surrounding air pressure changes, such as when changing altitude by flying or driving through mountains, some of us can feel the change as a popping sensation in our ears (Tillman, 40).
Just as air exerts pressure on us at the surface, water exerts pressure when a person is submerged. Because water is much denser than air, pressure changes under water occur more rapidly, making one more aware of them.
The weight of the water above a person greatly increases the pressure that the body and its air spaces (ears, sinuses, and lungs) are under. While it takes the entire height of the atmosphere to produce 1 atmosphere (1 ATM) of pressure (the pressure one is used to while walking around daily), it takes only 33 ft. of sea water to add another ATM of pressure. Of course, the air is still there too, so at a depth of 33 feet a diver is subjected to two atmospheres of pressure, fully twice the pressure at the surface (Resneck, 53).
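A small worked example in Python, assuming the 33-foot rule quoted above (an illustrative sketch, not taken from the cited sources):

    # One atmosphere of pressure from the air above, plus one more for
    # every 33 ft of sea water.
    FEET_PER_ATMOSPHERE = 33.0

    def absolute_pressure_atm(depth_ft):
        return 1.0 + depth_ft / FEET_PER_ATMOSPHERE

    for depth in (0, 33, 66, 99):
        print(f"{depth:3d} ft -> {absolute_pressure_atm(depth):.1f} ATM")
    # prints 1.0, 2.0, 3.0 and 4.0 ATM respectively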
A diver would have to go very deep before being in any danger of actually being crushed by pressure. It is what the pressure does to the gases in the body that can be dangerous. Physics teaches us Boyle's law, which states that, at constant temperature, the volume of a gas is inversely proportional to its absolute pressure. Thus, if one goes to a depth of, say, 33 feet (1 extra ATM), fills one's lungs with a breath of air from a tank, and then ascends to the surface without exhaling, the air in the lungs would expand to twice its volume, causing massive trauma to the lungs. Other more subtle problems occur with gas under pressure, such as the accumulation of residual nitrogen in the body's tissues, which can result in decompression sickness (DCS), commonly known as the bends (Tillman, 44).
As with air pressure, one doesn't feel water pressure on most of one's body, but we can feel it in our body's air spaces. When water pressure changes with a change in depth, it creates a pressure sensation one can feel. Through training and experience a diver learns to avoid the problems associated with water pressure and the air spaces in our bodies.
As previously mentioned, pressure increases at a rate of one atmosphere (ATM) for each additional 33 feet of depth underwater. The total pressure is twice as great at 33 feet as at the surface, three times as great at 66 feet, and so on. This pressure pushes in on flexible air spaces, compressing them and reducing their volume. The volume of an air space is reduced in proportion to the pressure placed upon it.
When the total pressure doubles, the air volume is halved. When the pressure triples, the volume is reduced to one third, and so on (Tillman, 40).
The density of air in the air spaces is also affected by pressure. As the volume of the air spaces is reduced due to compression, the density of the air increases as it is squeezed into a smaller place. No air is lost; it is simply compressed. Air density is also proportional to pressure, so that when the total pressure is doubled, the air density is doubled. When the pressure is tripled the air density triples and so on.
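These proportions follow directly from Boyle's law. The short Python sketch below (illustrative only, with an assumed 6-litre surface lung volume) shows how the volume and density of a flexible air space scale with absolute pressure:

    # Boyle's law at constant temperature: P1 * V1 = P2 * V2, with 1 ATM at the surface.
    def volume_at_pressure(surface_volume_litres, pressure_atm):
        return surface_volume_litres / pressure_atm

    def density_factor(pressure_atm):
        # Air density grows in direct proportion to absolute pressure.
        return pressure_atm

    for pressure in (1.0, 2.0, 3.0, 4.0):
        vol = volume_at_pressure(6.0, pressure)   # 6 litres of air at the surface
        print(f"{pressure:.0f} ATM: volume {vol:.1f} L, density x{density_factor(pressure):.0f}")
    # 1 ATM: 6.0 L, x1   2 ATM: 3.0 L, x2   3 ATM: 2.0 L, x3   4 ATM: 1.5 L, x4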
To maintain an air space at its original volume when pressure is increased, more air must be added to the space. This is the concept of pressure equalization, and the amount of air that must be added is proportional to the pressure increase.
Air within an airspace expands as pressure is reduced. If no air has been added to the air space, the air will simply expand to fill the original volume of the air space upon reaching the surface (Ketels, 76).
If air has been added to an air space to equalize the pressure, this air will expand as pressure is reduced during ascent. The amount of expansion is again proportional to the pressure. In an open container, such as an inverted bucket, the expanding air will simply bubble out of the opening, and the air space maintains its original volume during ascent. In a closed flexible container, however, the volume will increase as the pressure is reduced. If the volume exceeds the capacity of the container, the container may be ruptured by the expanding air (Cramer, 51).
Now let's take a look at how the relationships among pressure, volume and density affect a diver while diving. It has already been mentioned that air spaces are affected by changes in pressure. The air spaces that concern a diver are both the natural ones in the body and those artificially created by wearing diving equipment.
The air spaces within a diver's body that are most obviously affected by increasing pressure are found in the ears and sinuses. The artificial air space most affected by increasing pressure is the one created by a diver's mask.
During descent, water pressure increases and pushes in on the body's air spaces, compressing them. If the pressure within these air spaces is not kept in balance with the increasing water pressure, the sensation of pressure builds, becoming uncomfortable and possibly even painful as the diver continues to descend. This sensation is the result of a squeeze on the air spaces. A squeeze is not only a scuba phenomenon; it may also be experienced in a swimmer's ears when diving to the bottom of a swimming pool. A squeeze, then, is a pressure imbalance resulting in pain or discomfort in one of the body's air spaces. In this situation, the imbalance is such that the pressure outside the air space is greater than the pressure inside (Ketels, 76-77).
Squeezes are possible in several places: the ears, sinuses, teeth, lungs and one's mask. Fortunately, divers can easily avoid all of these squeezes.
To avoid discomfort, pressure inside an air space must always equal the water pressure outside the air spaces. This is accomplished by adding air to the air spaces during descent, before discomfort occurs. This is called equalization.
Compared to the ear and sinus air spaces, the lungs are large and flexible. As a scuba diver, one automatically equalizes the pressure in the lungs by continuously breathing from the scuba equipment. When skin diving while holding one's breath, the lungs can be compressed with no consequence as long as they are filled with air when one begins to descend. The lungs are reduced in volume during descent and re-expand during ascent to nearly their original volume by the time one reaches the surface (some of the air from the lungs is used to equalize the other body air spaces) (Ketels, 78).
In a healthy diver, blocking the nose and attempting to gently blow through it with the mouth closed will direct air into the ear and sinus air spaces. Swallowing and wiggling the jaw from side to side may be an effective equalization technique. Some divers even attempt a combination of the previous two methods.
As mentioned previously in the discussion of squeezes, the lungs experience no harmful effects from the changes in pressure when one holds one's breath while skin diving. At the start of the skin dive, one takes a breath and descends; the increasing water pressure compresses the air in the lungs. During ascent, the air re-expands, so that on reaching the surface the lungs return to their original volume (Ketels, 78).
When scuba diving, however, the situation is different. Scuba equipment allows one to breathe under water by automatically delivering the air at a pressure equal to the surrounding water pressure. This means the lungs will be at their normal volume while at depth, full of air that will expand on ascent (Cramer, 51).
If a diver breathes normally, keeping the airway to the lungs open, the expanding air escapes during ascent and the lungs remain at their normal volume. But holding one's breath, and thereby blocking the airway while ascending, would cause the lungs to over-expand, much like a sealed bag. Expanding air can cause lung over-pressurization (lung rupture), the most serious injury that can occur to a diver. The most important rule in scuba diving is therefore to breathe continuously and never hold your breath; lung rupture can occur unless pressure is continuously equalized by breathing normally at all times (Cramer, 52).
Other Physical Phenomena
As air-breathing creatures, we have evolved to live on land. Above the water, we see, hear and move about in a familiar and comfortable manner that seems normal because we have adapted to an air environment.
Under water, though, one enters a new world, where seeing, hearing, staying warm and moving are different. This is because water is 800 times more dense than air, affecting light, sound and heat in ways that we aren't used to.
Sight-seeing is a big part of what diving is all about. One dives for numerous reasons, and a primary purpose is to see new environments, aquatic life and natural phenomena. Since underwater sight-seeing is so important, one must learn how, just as one would with a new camera. Therefore, when diving, one must know how the liquid environment affects vision.
To see clearly under water, a mask is needed, because the human eye cannot focus properly unless there is an air space in front of it. A mask provides that air space. Without a mask, you can see large objects, but they will be blurred and indistinct because your eyes cannot bring the rays of light into sharp focus. Only by wearing a mask can you see sharply (Ascher, 9).
Light travels at a different speed in water than in air. When light passes from the water into the air in your mask, the change in speed causes its angle of travel to shift slightly. This produces a magnification effect that makes objects under water appear about 25% larger and closer (Ascher, 52).
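The effect can be approximated with a short Python sketch, assuming the usual refractive index of water of about 1.33 (an illustrative calculation, not taken from Ascher): looking through a flat mask, an object's apparent distance is roughly its true distance divided by that ratio, so it seems closer, and therefore larger, than it really is.

    REFRACTIVE_INDEX_WATER = 1.33   # assumed standard value

    def apparent_distance_m(true_distance_m):
        # Through a flat mask, objects appear at roughly true distance / 1.33.
        return true_distance_m / REFRACTIVE_INDEX_WATER

    print(apparent_distance_m(4.0))   # a fish 4 m away appears to be about 3 m away (~25% closer)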
Water has other effects on light. As you descend, there is less light. This is due to several facts: some light reflects off the water's surface, some is scattered by particles in the water, and some is absorbed by the water itself. However, water does not absorb light uniformly.
White light, such as sunlight, is actually composed of various colors mixed together. The colors are absorbed one by one as depth increases: first red, followed by orange and yellow. Since each color is part of the total light entering the water, less light remains as depth increases and each color is absorbed. For these reasons, deeper water is darker and less colorful. To see true colors, divers sometimes carry underwater lights with them (Resneck, 151).
Underwater Hearing
The underwater world is not a silent world. One can hear many new and interesting sounds, like snapping shrimp, grunting fish, and boat engines passing in the distance. Since sound travels farther in water than in air, one is able to hear things over much longer distances.
Sound also travels about four times faster in water than in air and because of this, one may have trouble determining the direction a sound is coming from (Cramer, 95).
Speech is virtually impossible under water because one's vocal cords do not work in a liquid environment, not to mention the regulator mouthpiece in one's mouth. Communication by sound is usually limited to attracting the attention of another diver by rapping on the tank with a solid object, such as a knife. The diver will hear the rapping, but may not be able to tell where the sound is coming from.
Heat loss in water.
Diving stops being enjoyable when the diver gets cold. In fact, even a small loss of body heat has the potential to be a serious health threat. For these reasons, understanding about heat loss is important.
In air, body heat is lost as it rises from the skin into the air, as it is carried away by air currents, or as perspiration cools the skin through evaporation. Water conducts heat away from your body twenty times faster than air does, meaning that for a given temperature, water has a far greater cooling effect. Even seemingly warm 86F water can become chilly after a while (Cramer, 91).
The loss of body heat in water can quickly lead to a serious condition unless you use insulation to reduce the heat loss. Insulation through the use of exposure suits is recommended for diving in water 75F or colder. Just as one dresses according to the temperature and conditions to go outdoors, one must dress appropriately for diving.
Motion in water
One of the best aspects of diving is that it can be so relaxing. There's little reason for hurrying. By learning how to move without breathlessness, cramping or fatigue, you learn to relax during a dive.
Due to the greater density of water, resistance to movement in water is much greater than in air. If you've ever tried to run in waist-deep water, you've experienced this. The best way to conserve energy while overcoming this increased resistance is to move slowly and steadily, avoiding rapid and jerky movements that waste energy. Simply take your time. After all, this is a sport to enjoy.
Conclusion
Several months after my vacation, I decided to give scuba diving a second chance. However, this time I decided to do it right. I signed up for a certification course with P.A.D.I., one of the many internationally recognized scuba associations. It was here, in a properly structured course consisting of both theoretical and practical (in-water) sessions, that I was properly re-introduced to the sport.
Since my introductory dive from hell, I have had the chance to become quite the scuba enthusiast. Having taken part in numerous dives, not only in warmer climates (preferably) but in the colder Montreal waters as well, I have made scuba diving part of my lifestyle. I take and enjoy every opportunity to re-visit the underwater world that once scared me away.
In this paper, I included some history of the evolution of the sport in order to point out that there is more to this particular sport than jumping into the water. Scuba is a complex sport and cannot be enjoyed without some scientific knowledge. Scuba diving did not simply evolve; it is the result of numerous inventions and physical principles. One can only imagine the difficulty that those historic divers and scientists had in creating this sport.
My objective in writing this paper was not to deter people from the sport, but to stress the importance of the knowledge that is required to partake in it properly and safely. Like everything else in life, one must work towards a goal, and this is no different. One will quickly see that the payoff is far greater than almost anything else ever experienced. Recreational scuba is meant to be a very enjoyable and relaxing sport. The scenery is magnificent and the sensations are truly indescribable.
Today, scuba diving is quickly becoming one of the fastest-growing activities. Whether for military, research, business, or recreational purposes, hundreds of thousands of people are heading for the depths to experience the unknown. My advice for a new diver is to do it right: get the proper certification and make each dive a safe one.
When a diver is fully trained, and in good mental and physical condition, safe diving can be one of the most enjoyable of experiences. The true beauty of the underwater world, coupled with the marvelous almost-weightlessness of floating with neutral buoyancy is an indescribable experience.
Bibliography/Further Reading
Ascher, Scott M. Scuba Handbook for Humans. Iowa : Kendall/Hunt Publishing Company. 1975.
Cramer, John L. Ph.D. Skin and Scuba Diving: Scientific Principles and Techniques. N.Y.: Bergwall Productions, Inc. 1975.
Ketels, Henry & McDowell, Jack. Safe Skin and Scuba Diving, adventure in the underwater world. Canada : Little, Brown and Company (Canada) Ltd. 1975.
Koelzer, William. Scuba Diving, How to get started. Pennsylvania :Chilton Book Company. 1976.
Resneck, John Jr. Scuba, Safe and Simple. New Jersey : Prentice-Hall, Inc. 1975.
Tillman, Albert A. Skin and Scuba Diving. Iowa : Wm. C. Brown Company Publishers. 1966.
f:\12000 essays\sciences (985)\Physics\The Threat of Nuclear Smuggling.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Real Threat of Nuclear Smuggling
This reading was based on the controversy over the
threat that nuclear smuggling poses. It begins by going
over the view of each side in a brief manner. It states
that some analysts dismiss it as a minor nuisance while
others find the danger to be very real and probable.
This reading argues mainly that nuclear smuggling is a
real danger. The analysts who consider the issue a
problem say that nuclear smuggling presents a grave and
serious threat because, even though this type of
smuggling occurs far less often than drug smuggling,
for example, law-enforcement officials are also far
less experienced at stopping shipments of an item such
as uranium than they are at seizing marijuana or hashish.
These same analysts have also found that even a small
leakage rate of any type of nuclear material can have
extremely serious consequences. They say that
although secrecy rules make precise numbers impossible to
get, Thomas B. Cochran of the Natural Resources Defense
Council in Washington, D.C., estimates that a bomb
requires between three and 25 kilograms of enriched
uranium or between one and eight kilograms of plutonium.
A kilogram of plutonium occupies about 50.4 cubic
centimeters, or one seventh the volume of a standard
aluminum soft-drink can.
In addition to this, analysts have found that
security is much too lax even in the supposedly "most
protected" locations. For example, the Russian stores in
particular suffer from sloppy security, poor inventory
management and inadequate measurements. Then there is the
virtually nonexistent security at nuclear installations
that compounds the problem. The main reason for this
lack of security is that pay and conditions have worsened
and disaffection has become widespread. So with an
alienated workforce suffering from low and often late
wages, the incentives for nuclear theft have become far
greater at the very time that restrictions and controls
have deteriorated.
Against this background, it is hardly surprising
that the number of nuclear-smuggling incidents, both real
and fake, has increased during the past few years. German
authorities, for example, reported 41 in 1991, 158 in
1992, 241 in 1993 and 267 in 1994. Although most of these
cases did not involve material suitable for bombs, as the
number of incidents increases so does the likelihood that
at least a few will include weapons-grade alloys.
In March 1993, according to a report from Istanbul,
six kilograms of enriched uranium entered Turkey through
the Aralik border gate in Kars Province. Although
confirmation of neither the incident nor the degree of
the uranium's enrichment was forthcoming, it raised fears
that Chechen "Mafia" groups had obtained access to
enriched uranium in Kazakhstan.
So what should we do about this? Some suggest that
systematic multinational measures be taken as soon as
possible to inhibit theft at the source, to disrupt
trafficking, and to deter buyers. The U.S., Germany,
Russia and other nations with an interest in the nuclear
problem should set up a "flying squad" with an
investigative arm, facilities for counterterrorist and
counterextortion actions and a disaster management team.
Even though such an idea may seem extremely far-fetched
at the moment because of a continuing reluctance to
recognize the severity of the threat, it is nonetheless
widely agreed that it would be a horrible tragedy if
governments were to accept the need for a more
substantive program only after a nuclear catastrophe.
f:\12000 essays\sciences (985)\Physics\The Truth About Physics and Poetry.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The Truth About Physics and Religion
Many people believe that physics and religion are separate entities. They
claim that physics deals only with the objective, material world, while religion
deals only with the world of values. It is obvious, from these, and from many
other comparisons, that conflicts have arisen between physics and religion.
Many are convinced that the two fields completely oppose each other, and are
not related in any way. Many people who follow a particular religion feel
offended by the claims that physicists have made, while physicists believe that
religion has no basis in reality. I will show, however, that these conflicts are
founded on a misunderstanding, and that there is no division between physics
and religion. I will also prove that the misunderstanding lies in the parables of
religion and in the statements made by physicists. Furthermore, I will show that
only physicists can really know the truth of physics, and only religious followers
can know the truth of that religion; everyone else has to take it on faith.
Many people believe that physics and religion are entirely separate. They
claim that physics is only concerned with discovering what is true or false, while
religion is concerned with what is good or evil. Scientists appear to agree that
"physics is the manner in which we argue about the objective side of reality."
Religious followers, on the other hand, agree that "religion is the way we
express the subjective decisions that help us choose the standards by which we
live." Although these definitions seem to be contrasting, an important element
remains absent, an element that must first be considered before religion and
physics can be compared.
Those who think that religion has no basis in reality also believe that
there is an "obvious" separation between the two fields. They think that
religion is a jumble of false assertions, with no basis in reality. Paul Dirac, a
physicist, once said:
The very idea of God is a product of the human
imagination. It is quite understandable why primitive
people, who were so much more exposed to the
overpowering forces of nature than we are today,
should have personified these forces in fear and
trembling. But nowadays, when we understand so
many natural processes, we have no need for such
solutions.
Dirac, and those who think the same way, however, fail to consider the
essential element that has caused many to misunderstand the relationship
between physics and religion. What they fail to realize is that religion uses
language in quite a different way from science. The language of religion is
more closely related to the language of poetry than to the language of science.
The fact that religions have, throughout the ages, spoken in parables and
images, simply means that there is no other way of understanding the reality to
which they refer. But I strongly believe, however, that religion is a genuine
reality. Neils Bohr once said:
The relationship between critical thought about the
spiritual content of a given religion and action based
on the deliberate acceptance of that content is
complementary. And such acceptance fills the
individual with strength of purpose, helps him to
overcome doubts and, if he has to suffer, provides him
with the kind of solace that only a sense of being
sheltered under an all-embracing roof can grant.
In this sense, religion helps to make social life more harmonious; its most
important task is to remind us, in the language of parables and images, of the
wider picture within which we live our lives.
Dirac, like many others who share his thoughts, thinks that religion is
entirely based on faith. But, because of his ignorance of the meaning of the
word "faith", he has developed many incorrect beliefs and assumptions. Faith is
defined as "the belief in something, with strong conviction and confidence."
What many fail to realize, however, is that faith is just as essential an element
of physics as it is of religion. The reason why many fail to realize this, is
because of the common misconception that physics is a self-regulating
machine which automatically produces information when the crank of scientific
method is turned. Very little faith would be required, of course, for the
operation of such a machine. But physics, as many of us have experienced
through experiments, is not at all like that. The experimenter usually finds
nothing resembling the smooth, ordered, lawful behavior depicted by the
textbooks. What he finds instead are error-filled and highly questionable
results. William Pollard, a physicist, once wrote:
Scientific research is a tough and unrelenting business.
Only those who enjoy a firm and unshakable faith that
the universal principles will always hold true can
become successful. Without such an abiding faith, it is
simply not possible to become a part of the physics
community.
Consider, for example, this common claim: "anyone can demonstrate the truths
of physics for himself, but the tenets of religion have to be accepted blindly on
faith." How many people, for example, can demonstrate to their own
satisfaction that the mass of the earth is 5.98 x 10^24 kilograms, or that the
charge on a proton is +1.60 x 10^-19 coulombs? A long, hard educational
process is required during which a person must freely submit himself to a
rigorous discipline, and strongly desire and believe in its outcome.
Consequently, the truth follows that only by becoming a physicist can he
possess the capacity to demonstrate the truths of physics to his own
satisfaction. Likewise, only those who become serious followers of a religion
can know the truths of that religion. In both cases, everyone else must take it
all on faith.
Another way in which science and religion are frequently contrasted is in
terms of the personal and impersonal. This contrast is based on the belief that
science is a dispassionate, completely detached activity in which the process of
knowing is independent of the involvement or participation of the knower. In
contrast to this, religious knowledge is thought to be deeply personal, since it
comes only through the passionate involvement and commitment of the
believer in that which he knows. Many believe that religion affects both our
actions and our emotions, as opposed to physics, which does not. The fact is,
none of these statements can be validated unless the person saying it has
endured and committed himself to both physics and religion. A sincere and
hard-working physicist will feel the personal effects of physics on him, whereas
others will not. Similarly, a dedicated and determined follower of a religion will
feel the personal effects of that religion on him. Others, again, will not.
A number of the contrasts which are frequently made between physics and
religion are seen to be either wrong or irrelevant through careful analysis.
Einstein, himself, believed that God was somehow involved in the immutable
laws of nature, and that there is no split between physics and religion. What is
and always has been our mainspring is faith. To have faith always means: "I
decide to do it, I stake my existence on it." When Columbus started on his first
voyage into the West, he believed that the earth was round and small enough to
be circumnavigated. He did not merely think this was right in theory-he staked
his whole existence on it. There's an old saying: "I believe in order that I may
act; I act in order that I may understand." This saying is relevant not only to the
concepts of physics and religion, but also to the entire life we live.
f:\12000 essays\sciences (985)\Physics\Theories of Evolution.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Theories of Evolution
Evolution is the process by which living organisms originated on earth and have changed their forms to adapt to the changing environment. The earliest known fossil organisms are the single-celled forms resembling modern bacteria; they date from about 3.4 billion years ago. Evolution has resulted in successive radiations of new types of organisms, many of which have become extinct, but some of which have developed into the present fauna and flora of the world (Wilson 17).
Evolution has been studied for nearly two centuries. One of the earliest evolutionists was Jean Baptiste de Lamarck, who argued that the patterns of resemblance found in various creatures arose through evolutionary modifications of a common lineage. Naturalists had already established that different animals are adapted to different modes of life and environmental conditions; Lamarck believed that environmental changes evoked in individual animals direct adaptive responses that could be passed on to their offspring as inheritable traits. This generalized hypothesis of evolution by acquired characteristics was not tested scientifically during Lamarck's lifetime.
A successful explanation of evolutionary processes was proposed by Charles Darwin. His most famous book, On the Origin of Species by Means of Natural Selection (1859), is a landmark in human understanding of nature. Pointing to variability within species, Darwin observed that while offspring inherit a resemblance to their parents, they are not identical to them. He further noted that some of the differences between offspring and parents were not due solely to the environment but were themselves often inheritable. Animal breeders were often able to change the characteristics of domestic animals by selecting for reproduction those individuals with the most desirable qualities. Darwin reasoned that, in nature, individuals with qualities that made them better adjusted to their environments or gave them higher reproductive capacities would tend to leave more offspring; such individuals were said to have higher fitness. Because more individuals are born than survive to breed, constant winnowing of the less fit, a natural selection, should occur, leading to a population that is well adapted to the environment it inhabits. When environmental conditions change, populations require new properties to maintain their fitness. Either the survival of a sufficient number of individuals with suitable traits leads to an eventual adaptation of the population as a whole, or the population becomes extinct. Evolution proceeds by the natural selection of well-adapted individuals over a span of many generations, according to Darwin's theory (Microsoft 96).
The parts of Darwin's theory that were the hardest to test scientifically were the inferences about the heritability of traits, because heredity was not understood at that time. The basic rules of inheritance became known to science around the turn of the century, when the earlier genetic work of Gregor Mendel came to light. Mendel had discovered that characteristics are transmitted across generations in discrete units, known as genes, that are inherited in a statistically predictable fashion. The discovery was then made that inheritable changes in genes could occur spontaneously and randomly, without regard to the environment. Since mutations were seen to be the only source of genetic novelty, many geneticists believed that evolution was driven onward by the random accumulation of favorable mutational changes. Natural selection was reduced to a minor role by mutationists such as de Vries, Morgan, and Bateson.
While mutationism was displacing Darwinism as the leading evolutionary theory, the science of population genetics was being founded by Sewall Wright, J.B.S. Haldane, and several other geneticists, all working independently. They developed arguments to show that even when a mutation that is immediately favored appears, its subsequent spread within a population depends on such variables as the following:
the size of the population
the length of generations
the degree to which the mutation is favorable
the rate at which the same mutation reappears in descendants
Furthermore, a given gene is favorable only under certain environmental conditions. If conditions change in space, then the gene may be favored only in a localized part of the population; if conditions change over time, the gene may become generally unfavorable. Because different individuals usually have different assortments of genes, the total number of genes available for inheritance by the next generation can be large, forming a vast store of genetic variability. This is called the gene pool. Sexual reproduction ensures that the genes are rearranged in each generation, a process called recombination. Mutations provide the gene pool with a continuous supply of new genes; through the process of natural selection the gene frequencies change so that advantageous genes occur in greater proportions (Ardrey 24).
As the new evolutionary theory became enriched from such diverse sources, it became known as the synthetic theory. Three American scientists made contributions that were especially important. The German-born Ernst Mayr, a zoologist, showed that new species usually arise in geographic isolation, often following a genetic turn that quickly changes the contents of their gene pools. George Simpson, a paleontologist, showed from the fossil record that rates and modes of evolution are correlated. G. Ledyard Stebbins, a botanist, showed that plants display evolutionary patterns similar to those of animals, and especially that plant evolution has demonstrated diverse adaptive responses to environmental pressures and opportunities. In addition, these biologists reviewed a broad range of genetic, ecological, and systematic evidence to show that the synthetic theory was strongly supported by observation and experiment.
During the establishment of the synthetic theory of evolution, the science of heredity underwent another drastic change in 1953, when James Watson and Francis Crick demonstrated the structure of the genetic material, deoxyribonucleic acid (DNA), one of the two nucleic acids, the other being ribonucleic acid (RNA). Nucleic acid molecules contain genetic codes that dictate the manufacture of proteins, and the latter direct the biochemical pathways of development and metabolism in an organism. Natural selection can then operate to favor or suppress a particular gene according to how strongly its protein product contributes to the reproductive success of the organism.
Life originated more than 3.4 billion years ago, when the earth's environment was much different than that of today. Especially important was the lack of significant amounts of free oxygen in the atmosphere. Experiments have shown that rather complicated organic molecules, including amino acids, can arise spontaneously under conditions that are believed to simulate the earth's primitive environment.
The earliest organisms were cells resembling modern bacteria. These simple unicellular forms (procaryotes) were at first anaerobic, but they diversified into an array of adaptive types, including the aerobic photosynthesizers from which blue-green algae descended. Advanced cells (eucaryotes) may have evolved through the amalgamation of a number of distinct simple cell types. A large ingesting cell may have incorporated as symbionts some small blue-green algal cells that evolved into chloroplasts and some tiny aerobic bacteria that evolved into mitochondria (Reader 45).
In order for complex animal communities to develop, plants must first become established to support herbivore populations, which in turn may support predators and scavengers. Land plants appeared about 400 million years ago, spreading from lowland swamps as expanding greenbelts(Gribbon 208).
Dinosaurs and mammals shared the terrestrial environment for 135 million years. Dinosaurs may well have been more active, and certainly were larger, than their mammalian contemporaries, which were small and possibly nocturnal. The mammals, however, survived a wave of extinction that eliminated the dinosaurs about 65 million years ago, and subsequently diversified into many of the habitats and modes of life that formerly had been dinosaurian (Gribbon 211).
Humans belong to an order of mammals, the primates, which existed before the dinosaurs became extinct. Early primates seem to have been tree dwelling and may have resembled squirrels in their habits. Many of the primate attributes, the short face, overlapping visual fields, grasping hands, large brains, and even alertness and curiosity, must have been acquired as arboreal adaptations. Descent from tree habitats to forest floors and eventually to more open country, however, was associated with the development of many of the unique features of the human primate, including erect posture and reduced canine teeth, which suggest new habits of feeding (Schwartz 78).
The history of life as inferred from the fossil record displays a wide variety of trends and patterns. Lineages may evolve slowly at one time and rapidly at another, they may follow one pathway of change for some time only to switch to another pathway, and they may diversify rapidly at one time and then shrink under widespread extinctions.
The key to many of these patterns is the rate and nature of environmental change. Species become adapted to the environmental conditions that exist at a given time, and when change leads to new conditions, they must evolve new adaptations or become extinct. When the environment undergoes a particularly rapid or extensive change, waves of extinction occur. These are followed by waves of development of new species. The times of mass extinction are not yet well understood. Although the most famous one is that of the dinosaurs, about 65 million years ago, such events appear in the fossil record as far back as Precambrian time, when life first arose. In fact, five mass extinctions on the scale of that at the end of the age of dinosaurs are known over the past 600 million years. Some scientists also claim to have demonstrated a definite periodicity to smaller periods of mass extinction, and in particular a 26-million-year cycle of eight extinctions over the past 250 million years(Wilson 34).
Controversy has arisen over the proposal made by some geologists that mass extinctions are related to periodic catastrophes such as the striking of the earth's surface by a large asteroid or comet. Many paleontologists and evolutionary theorists reject such hypotheses as unjustified. They feel that periods of mass extinction can be accounted for by less spectacular evolutionary processes and by more earthbound events such as cycles of climatic change and volcanic activity. Whatever proposals may eventually prove true, however, it seems fairly certain that periodic waves of mass extinction do occur.
Species adapted to live in environments that are changeable in the short term have broad tolerances, which may better enable them to survive extensive changes. Human beings are uniquely adapted in that they make and use tools and devices and invent and propagate procedures that give them extended control over their environments. Humans are significantly changing the environment itself. The effects are most complex and cannot be predicted, and yet the likelihood is that evolutionary patterns in the future will reflect the influence of the human species (Microsoft 96).
Works Cited
Ardrey, Robert. The Hunting Hypothesis: A Personal Conclusion Concerning the Evolutionary Nature of Man. New York: Antheneum, 1976.
Encarta 96. Computer Software. Microsoft, 1995.
Gribbon, John and Cherfas, Jeremy. The Monkey Puzzle: Reshaping the Evolutionary Tree. Philly: Pantheon, 1982.
Reader, John. Missing Links: The Hunt for Earliest Man. Boston: Little, 1981.
Schwartz, Jeffery H. The Red Ape: Orang-Utans and Human Origins.
San Francisco: Houghton, 1987.
Wilson, Peter J. The Domestication of the Human Species. Oxford:
Yale, 1991.
f:\12000 essays\sciences (985)\Physics\VoltageCurrent relationship in a DC circuit.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ABSTRACT
Ohm's Law and Kirchhoff's rules are fundamental to the understanding of DC circuits. This experiment shows how these rules can be applied to simple DC circuits.
INTRODUCTION
According to Ohm's Law, voltage is proportional to current, as expressed by the relation V = RI. In this relation, V represents the voltage, which is the potential difference across the two ends of an electrical conductor through which an electric current, I, flows. The constant, R, is called the conductor's resistance. Thus, by Ohm's Law, one can determine the resistance R in a DC circuit without measuring it directly, provided that the remaining variables V and I are known.
A resistor is a piece of electrical conductor which obeys Ohm's Law and has been designed to have a specific value for its resistance. As an extension of Ohm's Law, two more relationships can be drawn for electric circuits containing resistors connected in series and/or in parallel. For resistors connected in series, the total resistance is R_total = R1 + R2 + ... + Rn. For resistors connected in parallel, 1/R_total = 1/R1 + 1/R2 + ... + 1/Rn. A complex DC circuit involving a combination of parallel and series resistors can be analyzed to find the current and voltage at each point of the circuit using two basic rules formulated by Kirchhoff: 1) the algebraic sum of the currents at any branch point in a circuit is zero; 2) the algebraic sum of the potential differences, V, around any closed loop in a circuit is zero. These rules and the equations provided by Ohm's law and Kirchhoff's rules can be experimentally tested with the apparatus available in the lab.
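As an illustration of these rules (not part of the original experiment), the series and parallel formulas can be written as a short Python sketch and checked against the nominal resistor values used later in this report:

    def series(*resistances):
        # R_total = R1 + R2 + ... + Rn for resistors in series.
        return sum(resistances)

    def parallel(*resistances):
        # 1/R_total = 1/R1 + 1/R2 + ... + 1/Rn for resistors in parallel.
        return 1.0 / sum(1.0 / r for r in resistances)

    R1, R2, R3 = 270.0, 690.0, 2400.0   # ohms, nominal values

    print(series(R1, R2))                # 960.0 ohms in series
    print(parallel(R1, R2))              # about 194 ohms in parallel
    print(series(R1, parallel(R2, R3)))  # a simple series/parallel combination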
EXPERIMENTAL METHOD
The apparatus used in the experiment includes a voltmeter, an ammeter, some connecting wires, a series of resistors of various resistances, and a light bulb. The experiment was divided into 5 sections, and the measured values of voltage and current were noted in all sections for further calculation. In the first section, in order to evaluate the reliability of Ohm's law, a DC circuit was constructed as in FIG 2 (on p.4) using a resistor with an expected resistance of 2400 Ω ± 120 Ω. In the second section, we were instructed to determine the internal resistance of the voltmeter. Two DC circuits were constructed as in FIG 1 and FIG 2, using a resistor with an expected resistance of 820000 Ω ± 41000 Ω. In the third section, we were asked to judge whether the filament of a light bulb obeys Ohm's law; this was done by constructing a DC circuit as in FIG 1 with a light bulb in place of a resistor. In the fourth section of the experiment, we explored the ability of the multimeter to measure resistance directly and observed the difference in total resistance when two resistors of 270 Ω ± 14 Ω and 690 Ω ± 35 Ω were connected in parallel or in series. Finally, in the last section of this experiment, we were instructed to construct a circuit like the one shown in FIG 3 and to test Kirchhoff's rules, where R1, R2 and R3 are 270 Ω ± 14 Ω, 690 Ω ± 35 Ω and 2400 Ω ± 120 Ω respectively. The voltage across and the current through each resistor were measured.
RESULTS AND DISCUSSION
As shown in Graph 1, the calculated resistance in section 1 was constant at 2448 Ω ± 147 Ω, which was within the experimental error of the actual resistance of the resistor and so confirmed the accuracy of Ohm's law. Graphs 2 and 3 summarize the differences in total resistance that led to the determination of the voltmeter's internal resistance in section 2. The calculated total resistance, R1_total, from the circuit constructed as in FIG 1 was R_resistor, the resistance of the resistor alone; on the other hand, the calculated total resistance, R2_total, from the circuit constructed as in FIG 2 satisfied 1/R2_total = 1/R_resistor + 1/R_internal, a combination of the resistance of the resistor and the internal resistance of the voltmeter. Through a series of calculations, the internal resistance can be solved for. Our calculated internal resistance was 18.21 MΩ ± 0.02 MΩ, which was much greater than the expected value of 10 MΩ. This error is most likely due to 1) the inaccuracy of the quoted internal resistance, since it is unlikely that all voltmeters have exactly the same internal resistance, and 2) instability of the power supply causing reading errors. Graph 4 showed that a glowing light bulb did not obey Ohm's law: its resistance increased as it became brighter. The fact that the resistance of a metal increases with temperature is largely because the heat, or kinetic vibration, built up in the metal interferes with the flow of electrons. In the fourth section of the experiment, the resistances measured in parallel and in series were 191 Ω ± 1 Ω and 950 Ω ± 5 Ω, very similar to the calculated resistances of 194 Ω ± 13 Ω and 960 Ω ± 37 Ω respectively. And in our last section, to verify Kirchhoff's rules, I2 + I3 = 3.70 mA ± 0.04 mA, which is approximately equal to I1 = 3.79 mA ± 0.03 mA. Also, the algebraic sums of the potential differences around both closed loops, V_battery + V1 + V2 and V_battery + V1 + V3, were equal to 0 V.
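The internal-resistance step described above can be summarized as a short Python sketch (an illustration with assumed example numbers, not the experiment's raw data): the second circuit places the voltmeter in parallel with the resistor, so the meter's internal resistance follows from the two total resistances.

    def internal_resistance(r_resistor_ohm, r_parallel_total_ohm):
        # Solve 1/R_total = 1/R_resistor + 1/R_internal for R_internal.
        return 1.0 / (1.0 / r_parallel_total_ohm - 1.0 / r_resistor_ohm)

    # e.g. an 820 kilo-ohm resistor whose apparent resistance drops to about
    # 785 kilo-ohms once the voltmeter is connected across it:
    print(internal_resistance(820e3, 785e3) / 1e6)   # roughly 18 mega-ohms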
CONCLUSION
This experiment shows that most DC circuit problems involving voltage, current and resistance can be solved using Ohm's law and Kirchhoff's rules.
REFERENCES
M.M.Sternheim, J.W.Kane. General Physics 2nd edition John Wiley & Sons, Inc. 1991. Canada. p.434-435
F.Hynds. First Year Physics Laboratory Manual 1995-1996 University of Toronto
Bookstores. 1995. Toronto, Canada. p.74-76
f:\12000 essays\sciences (985)\Physics\What is Physics .TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Physics, a branch of science, is traditionally defined as the study of
matter, energy, and the relation between them. The interaction between matter
and energy is found everywhere. In order for matter to move, it requires some
form of energy.
Sports show many good examples of the relationship between matter and energy.
For instance, a pitcher requires energy to throw a baseball at the incredible speed
and accuracy that is needed to keep the batter from using his energy to try and hit
the ball. The batter exhibits the need for a certain trajectory because he/she needs
to hit the ball hard enough and keep it high enough to sail over the outfield wall.
On the other hand, the batter must be certain to keep the trajectory low enough so
that the ball will still carry to the fence. Trajectory is also seen in basketball, where
players must shoot the ball with enough arc to get over the front of the rim and go
through the hoop. The energy required to do this comes from not only the arms, but
the legs as well.
The medical field has seen enormous breakthroughs because of principles of physics.
Doctors are now able to use lasers for surgery. Lasers are based on the physical principle
of light, and are devices for the creation and amplification of a narrow, intense beam of
coherent light. New laser microsurgery can actually alter the shape of the cornea in the
eye so the patient's eyesight can return to normal, and he/she will no longer need those
bothersome glasses. Ultrasound is used in the medical field for destroying various unwanted
substances in the body such as kidney stones. Ultrasound uses sound waves to dissolve these
foreign bodies. If not for physics, ultrasounds would never have been discovered and utilized.
MRI scans, another new discovery, are able to show a complete three dimensional picture of the
interior structure of the body, and are extremely valuable in hospitals. These scans are based
on the principles of electromagnetism, and the phenomenon that nuclei of some atoms line up in
the presence of an electromagnetic field.
Understanding the dark matter of the universe, which has remained a mystery for quite some time,
is based primarily on theories of physics. We have yet to see a black hole, but physics has
explained what one is, and why we cannot see it. Otherwise we would have never known that it is
an extremely small region of space-time with a gravitational field so intense that nothing can
escape, not even light. Physics helps us understand the dark matter of the universe because it
applies theories to what dark matter might be. We are also able to look at distant spots in the
universe with new telescopes because of the principles of magnification and amplification of light.
Not only can physics better your baseball game and explain the dark matter of the universe,
but it can save lives. It remains a very important part of us and our world.
f:\12000 essays\sciences (985)\Physics\wind chimes.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Wind chimes produce clear, pure tones when struck by a mallet or suspended clapper. A wind chime usually consists of a set of individual alloy rods, tuned by length to a series of intervals considered pleasant. These are suspended from a devised frame in such a way that a centrally suspended clapper can reach and impact all the rods. When the wind blows, the clapper is set in motion and randomly strikes one or more of the suspended rods-- causing the rod to vibrate and emit a tone.
The pitch of said tone is governed by the length of the rod, but the perceived loudness is affected by many determinants: the force of the clapper's impact, the alloy's density and structure, and the speed and direction of the wind (to name a few). Also affecting the loudness is the lack of a resonating chamber or of a hard connection between the rods and the frame. The chime would certainly be louder, for instance, if the rods were built with small chambers containing a volume of air whose fundamental harmonic was the same as that of the rod; when struck, the rod would transfer vibration to the enclosed air as well as directly to the atmosphere, resulting in a louder tone. A hard connection between the rods and the frame would also accomplish this result somewhat; the vibrations of each separate rod would be transmitted to the others, resulting in more vibrating surface area (and hence more volume).
The transmission of the chime's sound, without the above-mentioned alterations, is quite simple: each rod releases longitudinal waves radially from its longest axis (except for deviations caused by deformation or impurity of the metal), and these waves travel until they are absorbed or reflected by an independent surface. The waves travel at a speed governed by the temperature of the atmosphere; the warmer the air, the faster the sound travels.
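As a rough illustration of that temperature dependence, the short Python sketch below uses the standard ideal-gas approximation for the speed of sound in dry air; the exact figure shifts slightly with humidity and pressure, so these numbers should be read as approximate.

# Speed of sound in dry air: v ~ 331.3 * sqrt(1 + T_celsius / 273.15) m/s,
# roughly 331.3 + 0.6 * T for everyday temperatures.
def speed_of_sound(t_celsius):
    return 331.3 * (1 + t_celsius / 273.15) ** 0.5

for t in (-10, 0, 20, 35):
    print(f"{t:>4} C -> {speed_of_sound(t):.1f} m/s")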
The waves that are not absorbed can be perceived by the human ear. Of equal importance to the directly intercepted waves are those reflected before interception, as these allow an animal or a person to identify its physical relationship to the source of the sound. These intercepted waves (reflected or not) are processed by the ear in a remarkable way.
Sound waves vibrate the eardrum, causing minute movements of three tiny bones (the hammer, then the anvil, then the stirrup) in the middle ear. The bone chain, having converted airborne vibration into mechanical vibration, systematically disturbs the fluid (perilymph) in the inner ear (the cochlea). Hair cells along the basilar membrane, which runs the length of the cochlea, sense these disturbances and convert them into nerve signals that are transmitted to the nervous system. With pure tones such as those created by a wind chime, certain groups of hair cells are agitated more than others, and the position of that group along the basilar membrane corresponds directly to the pitch of the tone.
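One way to make that place-to-pitch mapping concrete is the Greenwood function, a widely cited empirical fit for the human cochlea. The short Python sketch below uses the commonly quoted human parameters; the numbers it prints should be taken as approximate, and the sampled positions are arbitrary illustration points.

def greenwood_frequency(x):
    """Approximate characteristic frequency (Hz) at fractional position x along the
    human basilar membrane, from the apex (x = 0) to the base (x = 1), using the
    commonly quoted Greenwood-function parameters for humans."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} -> ~{greenwood_frequency(x):,.0f} Hz")

Low frequencies excite hair cells near the apex (roughly 20 Hz) and high frequencies excite those near the base (roughly 20,000 Hz), which is how the position of the most-agitated group encodes the pitch of the chime's tone.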
f:\12000 essays\sciences (985)\Physics\Yucca Mountain as a Permanent Nuclear Waste Site.TXT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Yucca Mountain-Right or Wrong?
As the United States' buildup of nuclear waste grows larger, the need for a permanent storage facility becomes more urgent. One proposed site is Yucca Mountain in Nevada. This makes many Nevadans uneasy, as visions of three-legged babies and phosphorescent people come to mind. But as the reasons below show, this worry is largely unfounded. In fact, Yucca Mountain provides an ideal site for a permanent underground nuclear waste facility in the U.S.
While Yucca Mountain is the best site found so far, the project will cost a huge amount of taxpayer dollars. The Department of Energy (DOE) estimates the total cost of its high-level waste management program at $25-35 billion. Completing the scientific investigation and licensing of the Yucca Mountain site alone is expected to cost $6-7 billion. By the end of 1993, total nuclear waste fund expenditures were nearly $3.7 billion. Very little of this money comes from private investors. If a retrievable facility (one from which the casks of spent fuel can be recovered later) is built, the cost will be a good deal higher. Other disposal methods, such as sub-seabed and space disposal, may prove cheaper at a later time.
This cost is a cause for concern, but there are stronger reasons to continue and eventually finish the Yucca Mountain Project. One is the desert climate of the western United States. The weather is dry and warm, and there are very few natural disasters, such as earthquakes. Also, this part of the nation has a deeper water table than the rest of the country, which reduces the risk of water contamination in case of a breach.
This is only one safety cushion that the proposed site provides; there are several more. All of these factors add up to a relatively stable environment. But will it be stable enough? If a permanent site is constructed, it will have to remain stable for 10,000 years, a very long time considering that the United States has existed for only a little over 200. If a breach occurs during this period, the western United States' water supply could become contaminated, and it would cost the federal government even more to clean up. The question is whether the United States wants to spend the money now or later. The safe handling of highly dangerous materials is a matter of national security. If a breach were to occur and contaminate the western section of America, it would be more devastating than a nuclear bomb. That is why Yucca Mountain is being considered so carefully for this purpose. Throughout the United States, no better area has been found.
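To put the 10,000-year requirement in perspective, a simple half-life calculation helps. The Python sketch below uses standard published half-lives; the isotopes chosen are only a sample of what spent fuel contains, picked here for illustration.

# Fraction of a radioactive isotope remaining after t years: 0.5 ** (t / half_life)
# Half-lives are standard published values, in years.
HALF_LIVES = {
    "strontium-90": 28.8,
    "cesium-137": 30.2,
    "plutonium-239": 24_100,
    "technetium-99": 211_000,
}

t = 10_000  # the design lifetime of the repository, in years
for isotope, half_life in HALF_LIVES.items():
    remaining = 0.5 ** (t / half_life)
    print(f"{isotope:>14}: {remaining:.1%} remaining after {t:,} years")

The shorter-lived fission products are essentially gone within a few centuries, but roughly three quarters of the plutonium-239 and nearly all of the technetium-99 would still be present after 10,000 years, which is why the site must stay stable for so long.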
Safety of this hazardous material is not only crucial at its final resting place; security en route to the site is also of utmost importance. If this site is chosen, a safe transportation method will be needed to move the radioactive materials to Yucca Mountain. Vehicles, which will be used only once, will have to be custom built for safety and security, as will the containers for the spent fuel rods. Such a convoy would also be, however unlikely, a prime target for a terrorist attack. There would be no way to hide a convoy of radioactive material, so extra security measures must be taken. All of these measures add up to extra costs, obviously, and as the nation waits, the costs multiply.
But expenses are second only to the safety of the facility and the speed with which it is constructed. At the present moment, all of the United States' nuclear waste is held in above-ground pools and airtight casks inside the country's many commercial power plants. This is all right for now, but how much longer will there be enough space to hold thousands of metric tons of radioactive materials? And the longer these materials sit above ground, the greater the odds of a catastrophe. These hazardous materials must soon be placed and stored in a stable environment, where the risk is significantly lower.
While other disposal methods may prove better and cheaper in the future, we need a place now to put the huge accumulated amount of spent fuel rods and radioactive materials, and subterranean storage is the most viable method current technology allows. Yucca Mountain provides all the desirable features for this method. The mountain was formed by volcanic eruptions, and the rock surrounding the site is a type called volcanic tuff. It is a very stable kind of rock and often encases salt beds, which are ideal for nuclear containment. These beds are virtually waterproof, so water will not seep down into the groundwater residing beneath the storage structure. Also, fractures in the salt are self-sealing, which will keep radioactive material from simply migrating up to the surface through pores, cracks, or faults in the rock. This type of host rock (the rock that surrounds the site) will give the site both man-made and natural protection.
But perhaps the most beneficial protection is the remoteness of the site's location. Because the site is in Nevada, which has a very low population density (only 0-2 people per square mile*), the risk of humans accidentally tampering with the repository is very low. It should also be noted that there will be no construction or utility digging nearby; Nevadans will see to it that the site stays untouched.
All points considered, Yucca Mountain is currently the best spot to store the country's ever-growing buildup of nuclear waste. Due to its remote location, secure land formations, and low water table, the area provides an ideal and secure spot for this huge amount of potentially harmful material. The U.S. is in dire need of a permanent nuclear waste disposal site, and this is the best option right now. The federal government's usual dawdling will only work against us in this matter.